threads (list, lengths 1 to 2.99k)
[
{
"msg_contents": "tableam: basic documentation.\n\nThis adds documentation about the user oriented parts of table access\nmethods (i.e. the default_table_access_method GUC and the USING clause\nfor CREATE TABLE etc), adds a basic chapter about the table access\nmethod interface, and adds a note to storage.sgml that it's contents\ndon't necessarily apply for non-builtin AMs.\n\nAuthor: Haribabu Kommi and Andres Freund\nDiscussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/b73c3a11963c8bb783993cfffabb09f558f86e37\n\nModified Files\n--------------\ndoc/src/sgml/catalogs.sgml | 9 +-\ndoc/src/sgml/config.sgml | 17 ++++\ndoc/src/sgml/filelist.sgml | 1 +\ndoc/src/sgml/indexam.sgml | 12 ++-\ndoc/src/sgml/postgres.sgml | 1 +\ndoc/src/sgml/ref/create_access_method.sgml | 14 ++--\ndoc/src/sgml/ref/create_materialized_view.sgml | 16 ++++\ndoc/src/sgml/ref/create_table.sgml | 19 ++++-\ndoc/src/sgml/ref/create_table_as.sgml | 15 ++++\ndoc/src/sgml/ref/select_into.sgml | 10 +++\ndoc/src/sgml/storage.sgml | 17 +++-\ndoc/src/sgml/tableam.sgml | 110 +++++++++++++++++++++++++\nsrc/include/access/tableam.h | 3 +\n13 files changed, 228 insertions(+), 16 deletions(-)\n\n",
"msg_date": "Thu, 04 Apr 2019 00:42:06 +0000",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "pgsql: tableam: basic documentation."
},
{
"msg_contents": "Hi Andres,\n\nOn Thu, Apr 04, 2019 at 12:42:06AM +0000, Andres Freund wrote:\n> tableam: basic documentation.\n> \n> This adds documentation about the user oriented parts of table access\n> methods (i.e. the default_table_access_method GUC and the USING clause\n> for CREATE TABLE etc), adds a basic chapter about the table access\n> method interface, and adds a note to storage.sgml that it's contents\n> don't necessarily apply for non-builtin AMs.\n> \n> Author: Haribabu Kommi and Andres Freund\n> Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de\n\n+ Any developer of a new <literal>table access method</literal> can refer to\n+ the existing <literal>heap</literal> implementation present in\n+ <filename>src/backend/heap/heapam_handler.c</filename> for more details of\n+ how it is implemented.\n\nThis path is incorrect, it should be that instead (missing \"access\"):\nsrc/backend/access/heap/heapam_handler.c \n--\nMichael",
"msg_date": "Wed, 10 Apr 2019 11:55:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: tableam: basic documentation."
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 12:56 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> Hi Andres,\n>\n> On Thu, Apr 04, 2019 at 12:42:06AM +0000, Andres Freund wrote:\n> > tableam: basic documentation.\n> >\n> > This adds documentation about the user oriented parts of table access\n> > methods (i.e. the default_table_access_method GUC and the USING clause\n> > for CREATE TABLE etc), adds a basic chapter about the table access\n> > method interface, and adds a note to storage.sgml that it's contents\n> > don't necessarily apply for non-builtin AMs.\n> >\n> > Author: Haribabu Kommi and Andres Freund\n> > Discussion:\n> https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de\n>\n> + Any developer of a new <literal>table access method</literal> can refer\n> to\n> + the existing <literal>heap</literal> implementation present in\n> + <filename>src/backend/heap/heapam_handler.c</filename> for more details\n> of\n> + how it is implemented.\n>\n> This path is incorrect, it should be that instead (missing \"access\"):\n> src/backend/access/heap/heapam_handler.c\n>\n\nThanks for the review, Yes I missed it when I added the path.\nPatch attached.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Wed, 10 Apr 2019 13:19:04 +1000",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: tableam: basic documentation."
},
{
"msg_contents": "Hi,\n\nOn 2019-04-10 11:55:31 +0900, Michael Paquier wrote:\n> Hi Andres,\n> \n> On Thu, Apr 04, 2019 at 12:42:06AM +0000, Andres Freund wrote:\n> > tableam: basic documentation.\n> > \n> > This adds documentation about the user oriented parts of table access\n> > methods (i.e. the default_table_access_method GUC and the USING clause\n> > for CREATE TABLE etc), adds a basic chapter about the table access\n> > method interface, and adds a note to storage.sgml that it's contents\n> > don't necessarily apply for non-builtin AMs.\n> > \n> > Author: Haribabu Kommi and Andres Freund\n> > Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de\n> \n> + Any developer of a new <literal>table access method</literal> can refer to\n> + the existing <literal>heap</literal> implementation present in\n> + <filename>src/backend/heap/heapam_handler.c</filename> for more details of\n> + how it is implemented.\n> \n> This path is incorrect, it should be that instead (missing \"access\"):\n> src/backend/access/heap/heapam_handler.c \n\nThanks, fix pushed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Apr 2019 17:36:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: tableam: basic documentation."
}
]
[
{
"msg_contents": "Hello.\n\nWhen I tried to start server just putting recovery.signal, I got\nthe following message.\n\n> FATAL: must specify restore_command when standby mode is not enabled\n\nI got a bit confused to see the message. Formerly this message\nwas shown when recovery.conf that is setting standby_mode to yes\ndoesn't have restore_command setting. But now we can put\nrecovery.signal separately from standby.signal. I think this\nmessage ought not to mention standby mode.\n\nFATAL: must specify restore_command to start in targeted recovery mode\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 04 Apr 2019 13:01:40 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "message when starting standby without setting restore_command"
}
]
[
{
"msg_contents": "Hello,\n\nI propose that in v13 we redefine \"MD\" (from md.c) to mean \"main data\"\n(instead of \"magnetic disk\"). That's the standard storage layout for\ntypical table and index AMs. As opposed to the proposed undo and SLRU\nSMGRs that provide layouts specialised for different life cycles.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 4 Apr 2019 17:49:36 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Retronym: s/magnetic disk/main data/"
},
{
"msg_contents": "On Thu, Apr 04, 2019 at 05:49:36PM +1300, Thomas Munro wrote:\n> I propose that in v13 we redefine \"MD\" (from md.c) to mean \"main data\"\n> (instead of \"magnetic disk\"). That's the standard storage layout for\n> typical table and index AMs. As opposed to the proposed undo and SLRU\n> SMGRs that provide layouts specialised for different life cycles.\n\nIf there is a much better name, I would have no objections with just\nrenaming the file md.c instead into something that has a better\nmeaning than \"main data\", which does not seem to be a name going to\nthe actual point...\n--\nMichael",
"msg_date": "Thu, 4 Apr 2019 15:09:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Retronym: s/magnetic disk/main data/"
},
{
"msg_contents": "On Thu, Apr 04, 2019 at 05:49:36PM +1300, Thomas Munro wrote:\n> Hello,\n> \n> I propose that in v13 we redefine \"MD\" (from md.c) to mean \"main data\"\n> (instead of \"magnetic disk\"). That's the standard storage layout for\n> typical table and index AMs. As opposed to the proposed undo and SLRU\n> SMGRs that provide layouts specialised for different life cycles.\n\nMaybe we could use dd for \"diamagnetic data\" ;)\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Thu, 4 Apr 2019 15:41:17 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Retronym: s/magnetic disk/main data/"
},
{
"msg_contents": "On Thu, Apr 04, 2019 at 03:09:15PM +0900, Michael Paquier wrote:\n> On Thu, Apr 04, 2019 at 05:49:36PM +1300, Thomas Munro wrote:\n> > I propose that in v13 we redefine \"MD\" (from md.c) to mean \"main data\"\n> > (instead of \"magnetic disk\"). That's the standard storage layout for\n> > typical table and index AMs. As opposed to the proposed undo and SLRU\n> > SMGRs that provide layouts specialised for different life cycles.\n> \n> If there is a much better name, I would have no objections with just\n> renaming the file md.c instead into something that has a better\n> meaning than \"main data\", which does not seem to be a name going to\n> the actual point...\n\n+1 on changing this to be something other than magnetic disk. I would \nvote for renaming it to relation to be in parity with SMgrRelation. \nRelation storage manager also captures the responsibilities accurately \nand fits with future undo and 'slru' (?) storage managers for example.\n\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n\n",
"msg_date": "Thu, 4 Apr 2019 11:52:52 -0700",
"msg_from": "Shawn Debnath <sdn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Retronym: s/magnetic disk/main data/"
},
{
"msg_contents": "On Thu, Apr 4, 2019 at 05:49:36PM +1300, Thomas Munro wrote:\n> Hello,\n> \n> I propose that in v13 we redefine \"MD\" (from md.c) to mean \"main data\"\n> (instead of \"magnetic disk\"). That's the standard storage layout for\n> typical table and index AMs. As opposed to the proposed undo and SLRU\n> SMGRs that provide layouts specialised for different life cycles.\n\nA bigger issue is that our documention often refers to \"disk\" as storage\nwhen including SSD storage, which clearly have no disks. They are\n\"solid state drives\", not disks.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 11 Apr 2019 14:05:41 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Retronym: s/magnetic disk/main data/"
}
]
[
{
"msg_contents": "Hi All,\n\nI'm getting a server crash in *_bt_check_unique*() when running the\nfollowing test-case.\n\n*Steps to reproduce the crash:*\n*Step1:* Create a test table with primary key and insert some data in it.\n\ncreate table t1 (a integer primary key, b text);\n\ninsert into t1 values (1, 'text1');\ninsert into t1 values (2, 'text2');\ninsert into t1 values (3, 'text3');\ninsert into t1 values (4, 'text4');\ninsert into t1 values (5, 'text5');\n\n*Step2:* Start 3 backend sessions and run the following anonymous block in\neach of them in parallel:\n*Session 1:*\ndo $$\ndeclare\nbegin\n insert into t1 values (6, 'text6');\n update t1 set b = 'text66' where a=6;\n perform pg_sleep(7);\n delete from t1 where a=6;\nend $$;\n\n*Session 2:*\ndo $$\nbegin\n insert into t1 values (6, 'text6');\n perform pg_sleep('7');\n delete from t1 where a=6;\nend $$;\n\n*Session 3:*\ndo $$\nbegin\n insert into t1 values (6, 'text6');\n delete from t1 where a=6;\nend $$;\n\nHere is the backtrace for the crash:\n\n#0 0x00007f096019f277 in raise () from /lib64/libc.so.6\n#1 0x00007f09601a0968 in abort () from /lib64/libc.so.6\n#2 0x0000000000a54296 in ExceptionalCondition (conditionName=0xafdbf8\n\"!(!_bt_isequal(itupdesc, itup_key, page, offset))\",\n errorType=0xafd75a \"FailedAssertion\", fileName=0xafd81f \"nbtinsert.c\",\nlineNumber=386) at assert.c:54\n#3 0x0000000000509b0a in _bt_check_unique (rel=0x7f096101a030,\ninsertstate=0x7ffcbd5db9d0, heapRel=0x7f0961017c00,\ncheckUnique=UNIQUE_CHECK_YES,\n is_unique=0x7ffcbd5dba01, speculativeToken=0x7ffcbd5db9c8) at\nnbtinsert.c:386\n#4 0x00000000005096ab in _bt_doinsert (rel=0x7f096101a030, itup=0x27bedc0,\ncheckUnique=UNIQUE_CHECK_YES, heapRel=0x7f0961017c00) at nbtinsert.c:232\n#5 0x0000000000514bb8 in btinsert (rel=0x7f096101a030,\nvalues=0x7ffcbd5dbb50, isnull=0x7ffcbd5dbb30, ht_ctid=0x27be708,\nheapRel=0x7f0961017c00,\n checkUnique=UNIQUE_CHECK_YES, indexInfo=0x27be048) at nbtree.c:205\n#6 0x000000000050752a in 
index_insert (indexRelation=0x7f096101a030,\nvalues=0x7ffcbd5dbb50, isnull=0x7ffcbd5dbb30, heap_t_ctid=0x27be708,\n heapRelation=0x7f0961017c00, checkUnique=UNIQUE_CHECK_YES,\nindexInfo=0x27be048) at indexam.c:212\n#7 0x00000000006e5d70 in ExecInsertIndexTuples (slot=0x27be6d8,\nestate=0x27bd7a8, noDupErr=false, specConflict=0x0, arbiterIndexes=0x0)\n at execIndexing.c:390\n#8 0x000000000071f11f in ExecInsert (mtstate=0x27bdb60, slot=0x27be6d8,\nplanSlot=0x27be6d8, estate=0x27bd7a8, canSetTag=true) at\nnodeModifyTable.c:587\n#9 0x0000000000721696 in ExecModifyTable (pstate=0x27bdb60) at\nnodeModifyTable.c:2175\n.......\n\nThe following Assert statement in *_bt_check_unique* fails\n\n >│386 *Assert(!_bt_isequal(itupdesc,\nitup_key, page, offset));*\n\n\nUpon quick look, it seems like the following git-commit has added above\nAssert statement:\n\ncommit e5adcb789d80ba565ccacb1ed4341a7c29085238\nAuthor: Peter Geoghegan <pg@bowt.ie>\nDate: Wed Mar 20 09:30:57 2019 -0700\n\n Refactor nbtree insertion scankeys.\n\n Use dedicated struct to represent nbtree insertion scan keys. Having a\n dedicated struct makes the difference between search type scankeys and\n insertion scankeys a lot clearer, and simplifies the signature of\n several related functions. 
This is based on a suggestion by Andrey\n Lepikhov.\n\n....\n\nIncluding Peter and Hekki in the CC as they are the main author of above\ngit-commit as per the commit message.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:*http://www.enterprisedb.com <http://www.enterprisedb.com/>*\n\nHi All,I'm getting a server crash in _bt_check_unique() when running the following test-case.Steps to reproduce the crash:Step1: Create a test table with primary key and insert some data in it.create table t1 (a integer primary key, b text);insert into t1 values (1, 'text1');insert into t1 values (2, 'text2');insert into t1 values (3, 'text3');insert into t1 values (4, 'text4');insert into t1 values (5, 'text5');Step2: Start 3 backend sessions and run the following anonymous block in each of them in parallel:Session 1:do $$declarebegin insert into t1 values (6, 'text6'); update t1 set b = 'text66' where a=6; perform pg_sleep(7); delete from t1 where a=6;end $$;Session 2:do $$begin insert into t1 values (6, 'text6'); perform pg_sleep('7'); delete from t1 where a=6;end $$;Session 3:do $$begin insert into t1 values (6, 'text6'); delete from t1 where a=6;end $$;Here is the backtrace for the crash:#0 0x00007f096019f277 in raise () from /lib64/libc.so.6#1 0x00007f09601a0968 in abort () from /lib64/libc.so.6#2 0x0000000000a54296 in ExceptionalCondition (conditionName=0xafdbf8 \"!(!_bt_isequal(itupdesc, itup_key, page, offset))\", errorType=0xafd75a \"FailedAssertion\", fileName=0xafd81f \"nbtinsert.c\", lineNumber=386) at assert.c:54#3 0x0000000000509b0a in _bt_check_unique (rel=0x7f096101a030, insertstate=0x7ffcbd5db9d0, heapRel=0x7f0961017c00, checkUnique=UNIQUE_CHECK_YES, is_unique=0x7ffcbd5dba01, speculativeToken=0x7ffcbd5db9c8) at nbtinsert.c:386#4 0x00000000005096ab in _bt_doinsert (rel=0x7f096101a030, itup=0x27bedc0, checkUnique=UNIQUE_CHECK_YES, heapRel=0x7f0961017c00) at nbtinsert.c:232#5 0x0000000000514bb8 in btinsert (rel=0x7f096101a030, values=0x7ffcbd5dbb50, 
isnull=0x7ffcbd5dbb30, ht_ctid=0x27be708, heapRel=0x7f0961017c00, checkUnique=UNIQUE_CHECK_YES, indexInfo=0x27be048) at nbtree.c:205#6 0x000000000050752a in index_insert (indexRelation=0x7f096101a030, values=0x7ffcbd5dbb50, isnull=0x7ffcbd5dbb30, heap_t_ctid=0x27be708, heapRelation=0x7f0961017c00, checkUnique=UNIQUE_CHECK_YES, indexInfo=0x27be048) at indexam.c:212#7 0x00000000006e5d70 in ExecInsertIndexTuples (slot=0x27be6d8, estate=0x27bd7a8, noDupErr=false, specConflict=0x0, arbiterIndexes=0x0) at execIndexing.c:390#8 0x000000000071f11f in ExecInsert (mtstate=0x27bdb60, slot=0x27be6d8, planSlot=0x27be6d8, estate=0x27bd7a8, canSetTag=true) at nodeModifyTable.c:587#9 0x0000000000721696 in ExecModifyTable (pstate=0x27bdb60) at nodeModifyTable.c:2175.......The following Assert statement in _bt_check_unique fails >│386 Assert(!_bt_isequal(itupdesc, itup_key, page, offset)); Upon quick look, it seems like the following git-commit has added above Assert statement:commit e5adcb789d80ba565ccacb1ed4341a7c29085238Author: Peter Geoghegan <pg@bowt.ie>Date: Wed Mar 20 09:30:57 2019 -0700 Refactor nbtree insertion scankeys. Use dedicated struct to represent nbtree insertion scan keys. Having a dedicated struct makes the difference between search type scankeys and insertion scankeys a lot clearer, and simplifies the signature of several related functions. This is based on a suggestion by Andrey Lepikhov.....Including Peter and Hekki in the CC as they are the main author of above git-commit as per the commit message.-- With Regards,Ashutosh SharmaEnterpriseDB:http://www.enterprisedb.com",
"msg_date": "Thu, 4 Apr 2019 12:40:55 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Server Crash due to assertion failure in _bt_check_unique()"
},
{
"msg_contents": "I spent some time investigating on why this assertion failure is happening\nand found the reason for it. It's basically happening because when we have\nmultiple transactions running in parallel and they all are trying to check\nfor the uniqueness of a tuple to be inserted, it's obvious that one\ntransaction will be waiting for other transaction to complete. But when we\ncome across such situation, in the current code, we are not invalidating\nthe search bounds in _bt_check_unique() that we saved during the previous\ncall to _bt_binsrch_insert(). I think, when a transaction has to wait for\nsome other transaction to complete, it should invalidate the binary search\nbounds saved in the previous call to _bt_binsrch_insert() and do the fresh\nbinary search as the transaction on which it waited for might have inserted\na new tuple.\n\nHere is the diff showing what I'm trying to say,\n\n@@ -468,8 +468,19 @@ _bt_check_unique(Relation rel, BTInsertState\ninsertstate, Relation heapRel,\n {\n if (nbuf != InvalidBuffer)\n _bt_relbuf(rel,\nnbuf);\n- /* Tell _bt_doinsert to\nwait... */\n+ /*\n+ * Tell _bt_doinsert to\nwait...\n+ *\n+ * Also, invalidate the\nsearch bounds saved in\n+ * insertstate during the\nprevious call to\n+ * _bt_binsrch_insert(). We\nwill do the fresh binary\n+ * search as the\ntransaction on which we waited for\n+ * might have inserted a\nnew tuple.\n+ */\n *speculativeToken =\nSnapshotDirty.speculativeToken;\n+\n+ insertstate->bounds_valid =\nfalse;\n+\n return xwait;\n\nAttached is the patch with above changes. Please let me know if my\nunderstanding is wrong. 
Thanks.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Thu, Apr 4, 2019 at 12:40 PM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\n> Hi All,\n>\n> I'm getting a server crash in *_bt_check_unique*() when running the\n> following test-case.\n>\n> *Steps to reproduce the crash:*\n> *Step1:* Create a test table with primary key and insert some data in it.\n>\n> create table t1 (a integer primary key, b text);\n>\n> insert into t1 values (1, 'text1');\n> insert into t1 values (2, 'text2');\n> insert into t1 values (3, 'text3');\n> insert into t1 values (4, 'text4');\n> insert into t1 values (5, 'text5');\n>\n> *Step2:* Start 3 backend sessions and run the following anonymous block\n> in each of them in parallel:\n> *Session 1:*\n> do $$\n> declare\n> begin\n> insert into t1 values (6, 'text6');\n> update t1 set b = 'text66' where a=6;\n> perform pg_sleep(7);\n> delete from t1 where a=6;\n> end $$;\n>\n> *Session 2:*\n> do $$\n> begin\n> insert into t1 values (6, 'text6');\n> perform pg_sleep('7');\n> delete from t1 where a=6;\n> end $$;\n>\n> *Session 3:*\n> do $$\n> begin\n> insert into t1 values (6, 'text6');\n> delete from t1 where a=6;\n> end $$;\n>\n> Here is the backtrace for the crash:\n>\n> #0 0x00007f096019f277 in raise () from /lib64/libc.so.6\n> #1 0x00007f09601a0968 in abort () from /lib64/libc.so.6\n> #2 0x0000000000a54296 in ExceptionalCondition (conditionName=0xafdbf8\n> \"!(!_bt_isequal(itupdesc, itup_key, page, offset))\",\n> errorType=0xafd75a \"FailedAssertion\", fileName=0xafd81f \"nbtinsert.c\",\n> lineNumber=386) at assert.c:54\n> #3 0x0000000000509b0a in _bt_check_unique (rel=0x7f096101a030,\n> insertstate=0x7ffcbd5db9d0, heapRel=0x7f0961017c00,\n> checkUnique=UNIQUE_CHECK_YES,\n> is_unique=0x7ffcbd5dba01, speculativeToken=0x7ffcbd5db9c8) at\n> nbtinsert.c:386\n> #4 0x00000000005096ab in _bt_doinsert (rel=0x7f096101a030,\n> itup=0x27bedc0, checkUnique=UNIQUE_CHECK_YES, heapRel=0x7f0961017c00) at\n> 
nbtinsert.c:232\n> #5 0x0000000000514bb8 in btinsert (rel=0x7f096101a030,\n> values=0x7ffcbd5dbb50, isnull=0x7ffcbd5dbb30, ht_ctid=0x27be708,\n> heapRel=0x7f0961017c00,\n> checkUnique=UNIQUE_CHECK_YES, indexInfo=0x27be048) at nbtree.c:205\n> #6 0x000000000050752a in index_insert (indexRelation=0x7f096101a030,\n> values=0x7ffcbd5dbb50, isnull=0x7ffcbd5dbb30, heap_t_ctid=0x27be708,\n> heapRelation=0x7f0961017c00, checkUnique=UNIQUE_CHECK_YES,\n> indexInfo=0x27be048) at indexam.c:212\n> #7 0x00000000006e5d70 in ExecInsertIndexTuples (slot=0x27be6d8,\n> estate=0x27bd7a8, noDupErr=false, specConflict=0x0, arbiterIndexes=0x0)\n> at execIndexing.c:390\n> #8 0x000000000071f11f in ExecInsert (mtstate=0x27bdb60, slot=0x27be6d8,\n> planSlot=0x27be6d8, estate=0x27bd7a8, canSetTag=true) at\n> nodeModifyTable.c:587\n> #9 0x0000000000721696 in ExecModifyTable (pstate=0x27bdb60) at\n> nodeModifyTable.c:2175\n> .......\n>\n> The following Assert statement in *_bt_check_unique* fails\n>\n> >│386 *Assert(!_bt_isequal(itupdesc,\n> itup_key, page, offset));*\n>\n>\n> Upon quick look, it seems like the following git-commit has added above\n> Assert statement:\n>\n> commit e5adcb789d80ba565ccacb1ed4341a7c29085238\n> Author: Peter Geoghegan <pg@bowt.ie>\n> Date: Wed Mar 20 09:30:57 2019 -0700\n>\n> Refactor nbtree insertion scankeys.\n>\n> Use dedicated struct to represent nbtree insertion scan keys. Having a\n> dedicated struct makes the difference between search type scankeys and\n> insertion scankeys a lot clearer, and simplifies the signature of\n> several related functions. This is based on a suggestion by Andrey\n> Lepikhov.\n>\n> ....\n>\n> Including Peter and Hekki in the CC as they are the main author of above\n> git-commit as per the commit message.\n>\n> --\n> With Regards,\n> Ashutosh Sharma\n> EnterpriseDB:*http://www.enterprisedb.com <http://www.enterprisedb.com/>*\n>",
"msg_date": "Thu, 4 Apr 2019 16:36:41 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Server Crash due to assertion failure in _bt_check_unique()"
},
{
"msg_contents": "On Thu, Apr 4, 2019 at 4:06 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> Attached is the patch with above changes. Please let me know if my understanding is wrong. Thanks.\n\nYou have it right. This bug slipped in towards the end of development,\nwhen the insertstate struct was introduced.\n\nI have pushed something very close to the patch you posted. (I added\nan additional assertion, and tweaked the comments.)\n\nThanks for the report and patch!\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 4 Apr 2019 09:42:23 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Server Crash due to assertion failure in _bt_check_unique()"
},
{
"msg_contents": "On Thu, Apr 4, 2019 at 10:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Thu, Apr 4, 2019 at 4:06 AM Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n> > Attached is the patch with above changes. Please let me know if my\n> understanding is wrong. Thanks.\n>\n> You have it right. This bug slipped in towards the end of development,\n> when the insertstate struct was introduced.\n>\n> I have pushed something very close to the patch you posted. (I added\n> an additional assertion, and tweaked the comments.)\n>\n>\nThanks Peter :)\n\n\n> Thanks for the report and patch!\n> --\n> Peter Geoghegan\n>\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Thu, Apr 4, 2019 at 10:12 PM Peter Geoghegan <pg@bowt.ie> wrote:On Thu, Apr 4, 2019 at 4:06 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> Attached is the patch with above changes. Please let me know if my understanding is wrong. Thanks.\n\nYou have it right. This bug slipped in towards the end of development,\nwhen the insertstate struct was introduced.\n\nI have pushed something very close to the patch you posted. (I added\nan additional assertion, and tweaked the comments.)\nThanks Peter :) \nThanks for the report and patch!\n-- \nPeter Geoghegan-- With Regards,Ashutosh SharmaEnterpriseDB:http://www.enterprisedb.com",
"msg_date": "Fri, 5 Apr 2019 07:00:52 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Server Crash due to assertion failure in _bt_check_unique()"
}
]
[
{
"msg_contents": "Hi hackers,\n\nRight now isolation test alter-table-4.spec fails if prepared statement \nis used:\nWhats going on:\n- There are two inherited tables \"p\" and \"c1\".\n- Session 1 starts transactions and drops inheritance\n- Session 2 prepares and executes statement which selects data from \n\"p\". It is blocked because table is locked by transaction in session 1.\n- Session 1 commits transaction.\n- Session 2 receives invalidation message.\n- Session 2 completes query execution and shows result which assumes \ninheritance between two tables (according to expected result \nalter-table-4.out it is assumed to be correct).\n- Session 2 repeat execution of query. It returns the SAME result. And \nit is not correct because now tables are not inherited.\n\nThe problem is that backend handles invalidated message in the context \nwhere schema changes are not yet visible. So statement is prepared for \nthe state of database\npreceding schema changes. And since invalidation message is already \nreceived and handled, this prepared statement will never be invalidated.\n\nIs it considered as expected and acceptable behavior?\n\nWhat seems to be suspicious to me is that schema changes are treated in \ndifferent ways.\nIf you perform select from some table using the same scenario and \nconcurrently alter this table by adding some extra columns, then result \nof the query includes this new columns. I.e. statement is compiled and \nexecuted according to the new schema.\nBut if we alter inheritance, then statement is compiled and executed as \nif inheritance didn't change (old schema is used).\nSuch behavior seems to be contradictory and error prone.\n\nPatch for alter-table-4 test is attached to this mail.\nAnd difference between expected and actual output of the test is the \nfollowing:\n\n! starting permutation: s1b s1delc1 s2sel s1c s2sel\n step s1b: BEGIN;\n step s1delc1: ALTER TABLE c1 NO INHERIT p;\n! 
step s2sel: SELECT SUM(a) FROM p; <waiting ...>\n step s1c: COMMIT;\n step s2sel: <... completed>\n sum\n\n 11\n! step s2sel: SELECT SUM(a) FROM p;\n sum\n\n! 1\n\n--- 1,31 ----\n! starting permutation: s1b s1delc1 s2prep s2sel s1c s2sel\n step s1b: BEGIN;\n step s1delc1: ALTER TABLE c1 NO INHERIT p;\n! step s2prep: PREPARE summa as SELECT SUM(a) FROM p;\n! step s2sel: EXECUTE summa; <waiting ...>\n step s1c: COMMIT;\n step s2sel: <... completed>\n sum\n\n 11\n! step s2sel: EXECUTE summa;\n sum\n\n! 11\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 4 Apr 2019 11:30:12 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Inheritance, invalidations and prepared statements."
}
]
[
{
"msg_contents": "Hi hackers,\n\n I run the following sql on the latest master branch (head commit:\n6f0e19005) of Postgres:\n\n ```sql\n\n gpadmin=# create table t(c int);\nCREATE TABLE\ngpadmin=# create rule myrule as on insert to t do instead select * from t\nfor update;\nCREATE RULE\ngpadmin=# insert into t values (1);\npsql: ERROR: no relation entry for relid 1\n\n ```\n\n It throws an error.\n\n After some investigation, I found that:\n\n 1. in the function `transformRuleStmt`, it creates a new ParseState\n`sub_pstate` to transform\nactions. And in this `sub_pstate`, it is initially contains two\nrangetblentry, \"old\" and \"new\".\n\n 2. in the function `transformSelectStmt`, it will invoke\n`transformLockingClause` to handle `for update`. And it loops all the\nentries in rtables.\n\n I think for a CreateRuleStmt, its command part if is a select-for-update\nstatement, the for-update clause should skip the two \"new\", \"old\"\nRangeTblEntry.\n\nHow to fix this:\n1. forbid the syntax: rule's command cannot be a select-for-update\n2. skip new and old: I have a patch to show this idea, please see the\nattachment.\n\nAny thoughts? Thanks!\n\n\nBest Regards,\nZhenghua Lyu",
"msg_date": "Thu, 4 Apr 2019 16:33:35 +0800",
"msg_from": "Zhenghua Lyu <zlv@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Fix rules's command contains for-update"
}
]
[
{
"msg_contents": "Announcing Release 10 of the PostgreSQL Buildfarm client\n\nPrincipal feature: support for non-standard repositories:\n\n. support multi-element branch names, such as “dev/featurename” or\n“bug/ticket_number/branchname”\n. provide a get_branches() method in SCM module\n. support regular expression branches of interest. This is matched\nagainst the list of available branches\n. prune branches when doing git fetch.\n\nThis feature and some server side changes will be explored in detail\nin my presentation at pgCon in Ottawa next month. The feature doesn’t\naffect owners of animals in our normal public Build Farm. However, the\nitems below are of use to them.\n\nOther features/ behaviour changes:\n\n. support for testing cross version upgrade extended back to 9.2\n. support for core Postgres changes:\n . extended support for USE_MODULE_DB\n . new extra_float_digits regime\n . removal of user table oid support\n . removal of abstime and friends\n . changed log file locations\n. don’t search for valgrind messages unless valgrind is configured.\n. make detection of when NO_TEMP_INSTALL is allowed more bulletproof\n\nThere are also various minor bug fixes and code improvements.\n\nThe release can be downloaded from\nhttps://github.com/PGBuildFarm/client-code/archive/REL_10.tar.gz or\nhttps://buildfarm.postgresql.org/downloads/latest-client.tgz\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 4 Apr 2019 17:54:34 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "PostgreSQL Buildfarm Client Release 10"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> The release can be downloaded from\n> https://github.com/PGBuildFarm/client-code/archive/REL_10.tar.gz or\n> https://buildfarm.postgresql.org/downloads/latest-client.tgz\n\nI don't actually see it on the buildfarm.postgresql.org server?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Apr 2019 18:18:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Buildfarm Client Release 10"
},
{
"msg_contents": "On Thu, Apr 4, 2019 at 6:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> > The release can be downloaded from\n> > https://github.com/PGBuildFarm/client-code/archive/REL_10.tar.gz or\n> > https://buildfarm.postgresql.org/downloads/latest-client.tgz\n>\n> I don't actually see it on the buildfarm.postgresql.org server?\n>\n>\n\nIt's there. I checked the link with wget. But I think the web cache\nmight be having an issue.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 4 Apr 2019 18:54:39 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL Buildfarm Client Release 10"
}
] |
[
{
"msg_contents": "Or rather, why does the reloption allow values below the compile-time constant?\n\nIt looks like we currently allow values as low as 128. The problem\nthere is that heap_update() and heap_prepare_insert() both only bother\ncalling toast_insert_or_update() when the tuple's length is above\nTOAST_TUPLE_TARGET, so it seems to have no effect when set to a lower\nvalue.\n\nI don't think we can change heap_update() and heap_prepare_insert() to\ndo \"tup->t_len > RelationGetToastTupleTarget(relation,\nTOAST_TUPLE_TARGET)\" instead as such a table might not even have a\nTOAST relation since needs_toast_table() will return false if it\nthinks the tuple length can't be above TOAST_TUPLE_TARGET.\n\nIt does not seem possible to add/remove the toast table when the\nreloption is changed either as we're only obtaining a\nShareUpdateExclusiveLock to set it. We'd likely need to upgrade that\nto an AccessExclusiveLock to do that.\n\nI understand from reading [1] that Simon was mostly interested in\nkeeping values inline for longer. I saw no mention of moving them out\nof line sooner.\n\n[1] https://postgr.es/m/CANP8+jKsVmw6CX6YP9z7zqkTzcKV1+Uzr3XjKcZW=2Ya00OyQQ@mail.gmail.com\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 5 Apr 2019 18:09:35 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Why does \"toast_tuple_target\" allow values below TOAST_TUPLE_TARGET?"
},
{
"msg_contents": "On 2019/04/05 14:09, David Rowley wrote:\n> Or rather, why does the reloption allow values below the compile-time constant?\n\nMaybe there is already a discussion in progress on the topic?\n\n* Caveats from reloption toast_tuple_target *\nhttps://www.postgresql.org/message-id/flat/CABOikdMt%3DmOtzW_ax_8pa9syEPo5Lji%3DLJrN2dunht8K-SLWzg%40mail.gmail.com\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Fri, 5 Apr 2019 14:25:36 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Why does \"toast_tuple_target\" allow values below\n TOAST_TUPLE_TARGET?"
},
{
"msg_contents": "On Fri, 5 Apr 2019 at 18:25, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>\n> On 2019/04/05 14:09, David Rowley wrote:\n> > Or rather, why does the reloption allow values below the compile-time constant?\n>\n> Maybe there is already a discussion in progress on the topic?\n>\n> * Caveats from reloption toast_tuple_target *\n> https://www.postgresql.org/message-id/flat/CABOikdMt%3DmOtzW_ax_8pa9syEPo5Lji%3DLJrN2dunht8K-SLWzg%40mail.gmail.com\n\nDoh. Okay, thanks. I'll drop this thread then. Seems what I wanted to\nask and say has been asked and said already.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 5 Apr 2019 18:29:26 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Why does \"toast_tuple_target\" allow values below\n TOAST_TUPLE_TARGET?"
}
] |
[
{
"msg_contents": "Hi,\n\nThis is a strange failure:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=loach&dt=2019-04-05%2005%3A15%3A00\n\ntest _int ... FAILED 649 ms\n\n================= pgsql.build/contrib/intarray/regression.diffs\n===================\ndiff -U3 /usr/home/pgbf/buildroot/HEAD/pgsql.build/contrib/intarray/expected/_int.out\n/usr/home/pgbf/buildroot/HEAD/pgsql.build/contrib/intarray/results/_int.out\n--- /usr/home/pgbf/buildroot/HEAD/pgsql.build/contrib/intarray/expected/_int.out\n2019-03-21 12:16:30.514677000 +0100\n+++ /usr/home/pgbf/buildroot/HEAD/pgsql.build/contrib/intarray/results/_int.out\n2019-04-05 07:23:10.005914000 +0200\n@@ -453,13 +453,13 @@\n SELECT count(*) from test__int WHERE a && '{23,50}';\n count\n -------\n- 403\n+ 402\n (1 row)\n\n SELECT count(*) from test__int WHERE a @@ '23|50';\n count\n -------\n- 403\n+ 402\n (1 row)\n\nThose two queries are run immediately after:\n\nCREATE INDEX text_idx on test__int using gist ( a gist__int_ops );\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Apr 2019 19:01:13 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Failure in contrib test _int on loach"
},
{
"msg_contents": "On Fri, Apr 5, 2019 at 2:02 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> Hi,\n>\n> This is a strange failure:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=loach&dt=2019-04-05%2005%3A15%3A00\n>\n> test _int ... FAILED 649 ms\n>\n> ================= pgsql.build/contrib/intarray/regression.diffs\n> ===================\n> diff -U3 /usr/home/pgbf/buildroot/HEAD/pgsql.build/contrib/intarray/expected/_int.out\n> /usr/home/pgbf/buildroot/HEAD/pgsql.build/contrib/intarray/results/_int.out\n> --- /usr/home/pgbf/buildroot/HEAD/pgsql.build/contrib/intarray/expected/_int.out\n> 2019-03-21 12:16:30.514677000 +0100\n> +++ /usr/home/pgbf/buildroot/HEAD/pgsql.build/contrib/intarray/results/_int.out\n> 2019-04-05 07:23:10.005914000 +0200\n> @@ -453,13 +453,13 @@\n> SELECT count(*) from test__int WHERE a && '{23,50}';\n> count\n> -------\n> - 403\n> + 402\n> (1 row)\n>\n> SELECT count(*) from test__int WHERE a @@ '23|50';\n> count\n> -------\n> - 403\n> + 402\n> (1 row)\n>\n> Those two queries are run immediately after:\n>\n> CREATE INDEX text_idx on test__int using gist ( a gist__int_ops );\n>\n\n\n\nThere are a couple of other recent instances of this failure, on\nfrancolin and whelk.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 5 Apr 2019 10:01:25 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Failure in contrib test _int on loach"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On Fri, Apr 5, 2019 at 2:02 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> This is a strange failure:\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=loach&dt=2019-04-05%2005%3A15%3A00\n>> [ wrong answers from queries using a GIST index ]\n\n> There are a couple of other recent instances of this failure, on\n> francolin and whelk.\n\nYeah. Given three failures in a couple of days, we can reasonably\nguess that the problem was introduced within a day or two prior to\nthe first one. Looking at what's touched GIST in that time frame,\nsuspicion has to fall heavily on 9155580fd5fc2a0cbb23376dfca7cd21f59c2c7b.\n\nIf I had to bet, I'd bet that there's something wrong with the\nmachinations described in the commit message:\n \n For GiST, the LSN-NSN interlock makes this a little tricky. All pages must\n be marked with a valid (i.e. non-zero) LSN, so that the parent-child\n LSN-NSN interlock works correctly. We now use magic value 1 for that during\n index build. Change the fake LSN counter to begin from 1000, so that 1 is\n safely smaller than any real or fake LSN. 2 would've been enough for our\n purposes, but let's reserve a bigger range, in case we need more special\n values in the future.\n\nI'll go add this as an open issue.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Apr 2019 11:01:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Failure in contrib test _int on loach"
},
{
"msg_contents": "05.04.2019 18:01, Tom Lane writes:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On Fri, Apr 5, 2019 at 2:02 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>>> This is a strange failure:\n>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=loach&dt=2019-04-05%2005%3A15%3A00\n>>> [ wrong answers from queries using a GIST index ]\n>> There are a couple of other recent instances of this failure, on\n>> francolin and whelk.\n> Yeah. Given three failures in a couple of days, we can reasonably\n> guess that the problem was introduced within a day or two prior to\n> the first one. Looking at what's touched GIST in that time frame,\n> suspicion has to fall heavily on 9155580fd5fc2a0cbb23376dfca7cd21f59c2c7b.\n>\n> If I had to bet, I'd bet that there's something wrong with the\n> machinations described in the commit message:\n> \n> For GiST, the LSN-NSN interlock makes this a little tricky. All pages must\n> be marked with a valid (i.e. non-zero) LSN, so that the parent-child\n> LSN-NSN interlock works correctly. We now use magic value 1 for that during\n> index build. Change the fake LSN counter to begin from 1000, so that 1 is\n> safely smaller than any real or fake LSN. 2 would've been enough for our\n> purposes, but let's reserve a bigger range, in case we need more special\n> values in the future.\n>\n> I'll go add this as an open issue.\n>\n> \t\t\tregards, tom lane\n>\n\nHi,\nI've already noticed the same failure in our company buildfarm and \nstarted the research.\n\nYou are right, it's the \" Generate less WAL during GiST, GIN and SP-GiST \nindex build. \" patch to blame.\nBecause of using the GistBuildLSN some pages are not linked correctly, \nso index scan cannot find some entries, while seqscan finds them.\n\nIn attachment, you can find patch with a test that allows to reproduce \nthe bug not randomly, but on every run.\nNow I'm trying to find a way to fix the issue.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 5 Apr 2019 19:41:19 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Failure in contrib test _int on loach"
},
{
"msg_contents": "05.04.2019 19:41, Anastasia Lubennikova writes:\n>\n> 05.04.2019 18:01, Tom Lane writes:\n>> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>>> On Fri, Apr 5, 2019 at 2:02 AM Thomas Munro <thomas.munro@gmail.com> \n>>> wrote:\n>>>> This is a strange failure:\n>>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=loach&dt=2019-04-05%2005%3A15%3A00 \n>>>>\n>>>> [ wrong answers from queries using a GIST index ]\n>>> There are a couple of other recent instances of this failure, on\n>>> francolin and whelk.\n>> Yeah. Given three failures in a couple of days, we can reasonably\n>> guess that the problem was introduced within a day or two prior to\n>> the first one. Looking at what's touched GIST in that time frame,\n>> suspicion has to fall heavily on \n>> 9155580fd5fc2a0cbb23376dfca7cd21f59c2c7b.\n>>\n>> If I had to bet, I'd bet that there's something wrong with the\n>> machinations described in the commit message:\n>> For GiST, the LSN-NSN interlock makes this a little tricky. \n>> All pages must\n>> be marked with a valid (i.e. non-zero) LSN, so that the \n>> parent-child\n>> LSN-NSN interlock works correctly. We now use magic value 1 for \n>> that during\n>> index build. Change the fake LSN counter to begin from 1000, so \n>> that 1 is\n>> safely smaller than any real or fake LSN. 2 would've been enough \n>> for our\n>> purposes, but let's reserve a bigger range, in case we need more \n>> special\n>> values in the future.\n>>\n>> I'll go add this as an open issue.\n>>\n>> regards, tom lane\n>>\n>\n> Hi,\n> I've already noticed the same failure in our company buildfarm and \n> started the research.\n>\n> You are right, it's the \" Generate less WAL during GiST, GIN and \n> SP-GiST index build. \" patch to blame.\n> Because of using the GistBuildLSN some pages are not linked correctly, \n> so index scan cannot find some entries, while seqscan finds them.\n>\n> In attachment, you can find patch with a test that allows to reproduce \n> the bug not randomly, but on every run.\n> Now I'm trying to find a way to fix the issue.\n\nThe problem was caused by incorrect detection of the page to insert new \ntuple after split.\nIf gistinserttuple() of the tuple formed by gistgetadjusted() had to \nsplit the page, we must to go back to the parent and\ndescend back to the child that's a better fit for the new tuple.\n\nPreviously this was handled by the code block with the following comment:\n\n* Concurrent split detected. There's no guarantee that the\n* downlink for this page is consistent with the tuple we're\n* inserting anymore, so go back to parent and rechoose the best\n* child.\n\nAfter introducing GistBuildNSN this code path became unreachable.\nTo fix it, I added new flag to detect such splits during indexbuild.\n\nThe patches with the test and fix are attached.\n\nMany thanks to Teodor Sigaev, who helped to find the bug.\n\n-- \n\nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 9 Apr 2019 19:11:06 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Failure in contrib test _int on loach"
},
{
"msg_contents": "On 09/04/2019 19:11, Anastasia Lubennikova wrote:\n> 05.04.2019 19:41, Anastasia Lubennikova writes:\n>> In attachment, you can find patch with a test that allows to reproduce\n>> the bug not randomly, but on every run.\n>> Now I'm trying to find a way to fix the issue.\n> \n> The problem was caused by incorrect detection of the page to insert new\n> tuple after split.\n> If gistinserttuple() of the tuple formed by gistgetadjusted() had to\n> split the page, we must to go back to the parent and\n> descend back to the child that's a better fit for the new tuple.\n> \n> Previously this was handled by the code block with the following comment:\n> \n> * Concurrent split detected. There's no guarantee that the\n> * downlink for this page is consistent with the tuple we're\n> * inserting anymore, so go back to parent and rechoose the best\n> * child.\n> \n> After introducing GistBuildNSN this code path became unreachable.\n> To fix it, I added new flag to detect such splits during indexbuild.\n\nIsn't it possible that the grandparent page is also split, so that we'd \nneed to climb further up?\n\n- Heikki\n\n\n",
"msg_date": "Wed, 10 Apr 2019 18:25:58 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Failure in contrib test _int on loach"
},
{
"msg_contents": "On 10/04/2019 20:25, Heikki Linnakangas wrote:\n> On 09/04/2019 19:11, Anastasia Lubennikova wrote:\n>> 05.04.2019 19:41, Anastasia Lubennikova writes:\n>>> In attachment, you can find patch with a test that allows to reproduce\n>>> the bug not randomly, but on every run.\n>>> Now I'm trying to find a way to fix the issue.\n>>\n>> The problem was caused by incorrect detection of the page to insert new\n>> tuple after split.\n>> If gistinserttuple() of the tuple formed by gistgetadjusted() had to\n>> split the page, we must to go back to the parent and\n>> descend back to the child that's a better fit for the new tuple.\n>>\n>> Previously this was handled by the code block with the following comment:\n>>\n>> * Concurrent split detected. There's no guarantee that the\n>> * downlink for this page is consistent with the tuple we're\n>> * inserting anymore, so go back to parent and rechoose the best\n>> * child.\n>>\n>> After introducing GistBuildNSN this code path became unreachable.\n>> To fix it, I added new flag to detect such splits during indexbuild.\n> \n> Isn't it possible that the grandparent page is also split, so that we'd \n> need to climb further up?\nBased on Anastasia's idea i prepare alternative solution to fix the bug \n(see attachment).\nIt utilizes the idea of linear increment of LSN/NSN. WAL write process \nis used for change NSN value to 1 for each block of index relation.\nI hope this can be a fairly clear and safe solution.\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 11 Apr 2019 11:10:27 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Failure in contrib test _int on loach"
},
{
"msg_contents": "On 11/04/2019 09:10, Andrey Lepikhov wrote:\n> On 10/04/2019 20:25, Heikki Linnakangas wrote:\n>> On 09/04/2019 19:11, Anastasia Lubennikova wrote:\n>>> After introducing GistBuildNSN this code path became unreachable.\n>>> To fix it, I added new flag to detect such splits during indexbuild.\n>>\n>> Isn't it possible that the grandparent page is also split, so that we'd\n>> need to climb further up?\n>\n> Based on Anastasia's idea i prepare alternative solution to fix the bug\n> (see attachment).\n> It utilizes the idea of linear increment of LSN/NSN. WAL write process\n> is used for change NSN value to 1 for each block of index relation.\n> I hope this can be a fairly clear and safe solution.\n\nThat's basically the same idea as always using the \"fake LSN\" during \nindex build, like the original version of this patch did. It's got the \nproblem that I mentioned at \nhttps://www.postgresql.org/message-id/090fb3cb-1ca4-e173-ecf7-47d41ebac620@iki.fi:\n\n> * Using \"fake\" unlogged LSNs for GiST index build seemed fishy. I could \n> not convince myself that it was safe in all corner cases. In a recently \n> initdb'd cluster, it's theoretically possible that the fake LSN counter \n> overtakes the real LSN value, and that could lead to strange behavior. \n> For example, how would the buffer manager behave, if there was a dirty \n> page in the buffer cache with an LSN value that's greater than the \n> current WAL flush pointer? I think you'd get \"ERROR: xlog flush request \n> %X/%X is not satisfied --- flushed only to %X/%X\".\n\nPerhaps the risk is theoretical; the real WAL begins at XLOG_SEG_SIZE, \nso with the default WAL segment size, the index build would have to do \nabout 16 million page splits. The index would have to be at least 150 GB \nfor that. But it seems possible, and with non-default segment and page \nsize settings more so.\n\nPerhaps we could start at 1, but instead of using a global counter, \nwhenever a page is split, we take the parent's LSN value and increment \nit by one. So different branches of the tree could use the same values, \nwhich would reduce the consumption of the counter values.\n\nYet another idea would be to start the counter at 1, but check that it \ndoesn't overtake the WAL insert pointer. If it's about to overtake it, \njust generate some dummy WAL.\n\nBut it seems best to deal with this in gistdoinsert(). I think \nAnastasia's approach of adding a flag to GISTInsertStack can be made to \nwork, if we set the flag somewhere in gistinserttuples() or \ngistplacetopage(), whenever a page is split. That way, if it needs to \nsplit multiple levels, the flag is set on all of the corresponding \nGISTInsertStack entries.\n\nYet another trivial fix would be to just always start the tree descent from \nthe root in gistdoinsert(), if a page is split. Not as efficient, but \nprobably negligible in practice.\n\n- Heikki\n\n\n",
"msg_date": "Thu, 11 Apr 2019 11:14:11 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Failure in contrib test _int on loach"
},
{
"msg_contents": "\n\nOn 11/04/2019 13:14, Heikki Linnakangas wrote:\n> On 11/04/2019 09:10, Andrey Lepikhov wrote:\n>> On 10/04/2019 20:25, Heikki Linnakangas wrote:\n>>> On 09/04/2019 19:11, Anastasia Lubennikova wrote:\n>>>> After introducing GistBuildNSN this code path became unreachable.\n>>>> To fix it, I added new flag to detect such splits during indexbuild.\n>>>\n>>> Isn't it possible that the grandparent page is also split, so that we'd\n>>> need to climb further up?\n>>\n>> Based on Anastasia's idea i prepare alternative solution to fix the bug\n>> (see attachment).\n>> It utilizes the idea of linear increment of LSN/NSN. WAL write process\n>> is used for change NSN value to 1 for each block of index relation.\n>> I hope this can be a fairly clear and safe solution.\n> \n> That's basically the same idea as always using the \"fake LSN\" during \n> index build, like the original version of this patch did. It's got the \n> problem that I mentioned at \n> https://www.postgresql.org/message-id/090fb3cb-1ca4-e173-ecf7-47d41ebac620@iki.fi: \n> \n> \n>> * Using \"fake\" unlogged LSNs for GiST index build seemed fishy. I \n>> could not convince myself that it was safe in all corner cases. In a \n>> recently initdb'd cluster, it's theoretically possible that the fake \n>> LSN counter overtakes the real LSN value, and that could lead to \n>> strange behavior. For example, how would the buffer manager behave, if \n>> there was a dirty page in the buffer cache with an LSN value that's \n>> greater than the current WAL flush pointer? I think you'd get \"ERROR: \n>> xlog flush request %X/%X is not satisfied --- flushed only to %X/%X\".\n> \n> Perhaps the risk is theoretical; the real WAL begins at XLOG_SEG_SIZE, \n> so with the default WAL segment size, the index build would have to do \n> about 16 million page splits. The index would have to be at least 150 GB \n> for that. But it seems possible, and with non-default segment and page \n> size settings more so.\nAs I see in bufmgr.c, XLogFlush() can't be called during index build. In \nthe log_newpage_range() call we can use a mask to set the value of NSN (and \nLSN) to 1.\n> \n> Perhaps we could start at 1, but instead of using a global counter, \n> whenever a page is split, we take the parent's LSN value and increment \n> it by one. So different branches of the tree could use the same values, \n> which would reduce the consumption of the counter values.\n> \n> Yet another idea would be to start the counter at 1, but check that it \n> doesn't overtake the WAL insert pointer. If it's about to overtake it, \n> just generate some dummy WAL.\n> \n> But it seems best to deal with this in gistdoinsert(). I think \n> Anastasia's approach of adding a flag to GISTInsertStack can be made to \n> work, if we set the flag somewhere in gistinserttuples() or \n> gistplacetopage(), whenever a page is split. That way, if it needs to \n> split multiple levels, the flag is set on all of the corresponding \n> GISTInsertStack entries.\n> \n> Yet another trivial fix would be to just always start the tree descent from \n> the root in gistdoinsert(), if a page is split. Not as efficient, but \n> probably negligible in practice.\nAgree\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Thu, 11 Apr 2019 14:29:03 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Failure in contrib test _int on loach"
},
{
"msg_contents": "10.04.2019 18:25, Heikki Linnakangas writes:\n> On 09/04/2019 19:11, Anastasia Lubennikova wrote:\n>> 05.04.2019 19:41, Anastasia Lubennikova writes:\n>>> In attachment, you can find patch with a test that allows to reproduce\n>>> the bug not randomly, but on every run.\n>>> Now I'm trying to find a way to fix the issue.\n>>\n>> The problem was caused by incorrect detection of the page to insert new\n>> tuple after split.\n>> If gistinserttuple() of the tuple formed by gistgetadjusted() had to\n>> split the page, we must to go back to the parent and\n>> descend back to the child that's a better fit for the new tuple.\n>>\n>> Previously this was handled by the code block with the following \n>> comment:\n>>\n>> * Concurrent split detected. There's no guarantee that the\n>> * downlink for this page is consistent with the tuple we're\n>> * inserting anymore, so go back to parent and rechoose the best\n>> * child.\n>>\n>> After introducing GistBuildNSN this code path became unreachable.\n>> To fix it, I added new flag to detect such splits during indexbuild.\n>\n> Isn't it possible that the grandparent page is also split, so that \n> we'd need to climb further up?\n>\n From what I understand,\nthe only reason for grandparent's split during gistbuild is the \ninsertion of the newtup returned by gistgetadjusted().\n\nAfter we stepped up the stack, we will do gistchoose() to choose new \ncorrect child,\nadjust the downlink key and insert it into grandparent. If this \ninsertion caused split, we will recursively follow the same codepath\nand set stack->retry_from_parent again.\n\nSo it is possible, but it doesn't require any extra algorithm changes.\nI didn't manage to generate dataset to reproduce grandparent split.\nThough, I do agree that it's worth checking out. Do you have any ideas?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Thu, 11 Apr 2019 14:30:20 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Failure in contrib test _int on loach"
},
{
"msg_contents": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru> writes:\n> So it is possible, but it doesn't require any extra algorithm changes.\n> I didn't manage to generate dataset to reproduce grandparent split.\n> Though, I do agree that it's worth checking out. Do you have any ideas?\n\nPing? This thread has gone cold, but the bug is still there, and\nIMV it's a beta blocker.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 27 Apr 2019 15:05:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Failure in contrib test _int on loach"
},
{
"msg_contents": "27.04.2019 22:05, Tom Lane wrote:\n> Anastasia Lubennikova <a.lubennikova@postgrespro.ru> writes:\n>> So it is possible, but it doesn't require any extra algorithm changes.\n>> I didn't manage to generate dataset to reproduce grandparent split.\n>> Though, I do agree that it's worth checking out. Do you have any ideas?\n> Ping? This thread has gone cold, but the bug is still there, and\n> IMV it's a beta blocker.\n\nHi,\n\nThank you for the reminder.\nIn a nutshell, this fix is ready for committer.\n\nIn previous emails, I have sent two patches with test and bugfix (see \nattached).\nAfter Heikki shared his concerns, I've rechecked the algorithm and \nhaven't found any potential error.\nSo, if other hackers are agreed with my reasoning, the suggested fix is \nsufficient and can be committed.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 29 Apr 2019 16:16:22 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Failure in contrib test _int on loach"
},
{
"msg_contents": "Hi!\n\n> So, if other hackers are agreed with my reasoning, the suggested fix is \n> sufficient and can be committed.\n> \n\nPatch looks right, but I think that comment should be improved in the following piece:\n\n if (stack->blkno != GIST_ROOT_BLKNO &&\n- stack->parent->lsn < GistPageGetNSN(stack->page))\n+ ((stack->parent->lsn < GistPageGetNSN(stack->page)) ||\n+ stack->retry_from_parent == true))\n {\n /*\n * Concurrent split detected. There's no guarantee that the\n\t\t....\nNot only a concurrent split could be detected here, and it was missed long ago. But \nthis patch seems a good chance to change this comment.\n\n-- \nTeodor Sigaev E-mail: teodor@sigaev.ru\n WWW: http://www.sigaev.ru/\n\n\n",
"msg_date": "Tue, 30 Apr 2019 19:32:35 +0300",
"msg_from": "Teodor Sigaev <teodor@sigaev.ru>",
"msg_from_op": false,
"msg_subject": "Re: Failure in contrib test _int on loach"
},
{
"msg_contents": "(resending, previous attempt didn't make it to pgsql-hackers)\n\nOn 29/04/2019 16:16, Anastasia Lubennikova wrote:\n> In previous emails, I have sent two patches with test and bugfix (see\n> attached).\n> After Heikki shared his concerns, I've rechecked the algorithm and\n> haven't found any potential error.\n> So, if other hackers are agreed with my reasoning, the suggested fix is\n> sufficient and can be committed.\n\nI still believe there is a problem with grandparent splits with this. \nI'll try to construct a test case later this week, unless you manage to \ncreate one before that.\n\n- Heikki\n\n\n",
"msg_date": "Thu, 2 May 2019 10:37:29 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Failure in contrib test _int on loach"
},
{
"msg_contents": "On 02/05/2019 10:37, Heikki Linnakangas wrote:\n> (resending, previous attempt didn't make it to pgsql-hackers)\n> \n> On 29/04/2019 16:16, Anastasia Lubennikova wrote:\n>> In previous emails, I have sent two patches with test and bugfix (see\n>> attached).\n>> After Heikki shared his concerns, I've rechecked the algorithm and\n>> haven't found any potential error.\n>> So, if other hackers are agreed with my reasoning, the suggested fix is\n>> sufficient and can be committed.\n> \n> I still believe there is a problem with grandparent splits with this.\n> I'll try to construct a test case later this week, unless you manage to\n> create one before that.\n\nHere you go. If you apply the two patches from \nhttps://www.postgresql.org/message-id/5d48ce28-34cf-9b03-5d42-dbd5457926bf%40postgrespro.ru, \nand run the attached script, it will print out something like this:\n\npostgres=# \\i grandparent.sql\nDROP TABLE\nCREATE TABLE\nINSERT 0 150000\nCREATE INDEX\npsql:grandparent.sql:27: NOTICE: working on 10000\npsql:grandparent.sql:27: NOTICE: working on 20000\npsql:grandparent.sql:27: NOTICE: working on 30000\npsql:grandparent.sql:27: NOTICE: working on 40000\npsql:grandparent.sql:27: NOTICE: working on 50000\npsql:grandparent.sql:27: NOTICE: working on 60000\npsql:grandparent.sql:27: NOTICE: working on 70000\npsql:grandparent.sql:27: NOTICE: working on 80000\npsql:grandparent.sql:27: NOTICE: working on 90000\npsql:grandparent.sql:27: NOTICE: working on 100000\npsql:grandparent.sql:27: NOTICE: working on 110000\npsql:grandparent.sql:27: NOTICE: failed for 114034\npsql:grandparent.sql:27: NOTICE: working on 120000\nDO\n\nThat \"failed for 114034\" should not happen.\n\nFortunately, that's not too hard to fix. We just need to arrange things \nso that the \"retry_from_parent\" flag also gets set for the grandparent, \nwhen the grandparent is split. Like in the attached patch.\n\n- Heikki",
"msg_date": "Wed, 8 May 2019 01:31:55 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Failure in contrib test _int on loach"
},
{
"msg_contents": "On 08/05/2019 01:31, Heikki Linnakangas wrote:\n> On 02/05/2019 10:37, Heikki Linnakangas wrote:\n>> On 29/04/2019 16:16, Anastasia Lubennikova wrote:\n>>> In previous emails, I have sent two patches with test and bugfix (see\n>>> attached).\n>>> After Heikki shared his concerns, I've rechecked the algorithm and\n>>> haven't found any potential error.\n>>> So, if other hackers are agreed with my reasoning, the suggested fix is\n>>> sufficient and can be committed.\n>>\n>> I still believe there is a problem with grandparent splits with this.\n>> I'll try to construct a test case later this week, unless you manage to\n>> create one before that.\n> \n> Here you go. If you apply the two patches from\n> https://www.postgresql.org/message-id/5d48ce28-34cf-9b03-5d42-dbd5457926bf%40postgrespro.ru,\n> and run the attached script, it will print out something like this:\n> \n> postgres=# \\i grandparent.sql\n> DROP TABLE\n> CREATE TABLE\n> INSERT 0 150000\n> CREATE INDEX\n> psql:grandparent.sql:27: NOTICE: working on 10000\n> psql:grandparent.sql:27: NOTICE: working on 20000\n> psql:grandparent.sql:27: NOTICE: working on 30000\n> psql:grandparent.sql:27: NOTICE: working on 40000\n> psql:grandparent.sql:27: NOTICE: working on 50000\n> psql:grandparent.sql:27: NOTICE: working on 60000\n> psql:grandparent.sql:27: NOTICE: working on 70000\n> psql:grandparent.sql:27: NOTICE: working on 80000\n> psql:grandparent.sql:27: NOTICE: working on 90000\n> psql:grandparent.sql:27: NOTICE: working on 100000\n> psql:grandparent.sql:27: NOTICE: working on 110000\n> psql:grandparent.sql:27: NOTICE: failed for 114034\n> psql:grandparent.sql:27: NOTICE: working on 120000\n> DO\n> \n> That \"failed for 114034\" should not happen.\n> \n> Fortunately, that's not too hard to fix. We just need to arrange things\n> so that the \"retry_from_parent\" flag also gets set for the grandparent,\n> when the grandparent is split. Like in the attached patch.\n\nI hear no objections, so pushed that. But if you have a chance to review \nthis later, just to double-check, I'd still appreciate that.\n\n- Heikki\n\n\n",
"msg_date": "Tue, 14 May 2019 13:38:01 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Failure in contrib test _int on loach"
}
] |
[
{
"msg_contents": "Given some of the recent hubbub and analysis of CVE entries, one part of\nthe documentation[1] that could be further clarified is what initdb does\nby default, i.e. creates a cluster where users can connect with trust\nauthentication. While this may be great for people who are hacking or\nrunning PostgreSQL in a trusted local environment, this may not make\nsense for many (most?) other systems.\n\nThe attached patch clarifies this fact and adds a \"warning\" box just\nbelow the initdb examples that provides recommendations to create a more\nsecure environment. It also removes the section that discusses this\nbelow the part that discusses securing the directory, as really this\nexplanation should go right after the \"initdb\" call.\n\n(There could be an additional discussion about whether or not we want to\nchange the default behavior for initdb, but I would suggest that a safe\nstarting point would be to ensure we call this out)\n\nCredits to Magnus for pointing this out, and Tom + Andrew D. for review\nbefore posting to list.\n\nJonathan\n\n[1] https://www.postgresql.org/docs/current/creating-cluster.html",
"msg_date": "Fri, 5 Apr 2019 12:11:31 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "initdb recommendations"
},
{
"msg_contents": "On 2019-04-05 18:11, Jonathan S. Katz wrote:\n> (There could be an additional discussion about whether or not we want to\n> change the default behavior for initdb, but I would suggest that a safe\n> starting point would be to ensure we call this out)\n\nI think we should just change the defaults. There is a risk of warning\nfatigue. initdb does warn about this, so anyone who cared could have\ngotten the information.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 5 Apr 2019 22:58:11 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 4/5/19 4:58 PM, Peter Eisentraut wrote:\n> On 2019-04-05 18:11, Jonathan S. Katz wrote:\n>> (There could be an additional discussion about whether or not we want to\n>> change the default behavior for initdb, but I would suggest that a safe\n>> starting point would be to ensure we call this out)\n> \n> I think we should just change the defaults. There is a risk of warning\n> fatigue. initdb does warn about this, so anyone who cared could have\n> gotten the information.\n\nIt might actually be a combination of both updating the defaults and\nmodifying the documentation.\n\nIf we introduce better defaults, we'll need an explanation of what the\ndefaults are and why they are as such.\n\nIf we don't, we certainly need to warn the user what's happening. The\nway it's currently written, it's very easy to miss.\n\nI also don't see how it's warning fatigue when it's both a) a feature\nthat could put your system into a vulnerable state if you're not careful\nand b) the only warning on that page.\n\nJonathan",
"msg_date": "Fri, 5 Apr 2019 17:19:28 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On Fri, Apr 5, 2019 at 10:58 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-04-05 18:11, Jonathan S. Katz wrote:\n> > (There could be an additional discussion about whether or not we want to\n> > change the default behavior for initdb, but I would suggest that a safe\n> > starting point would be to ensure we call this out)\n>\n> I think we should just change the defaults. There is a risk of warning\n> fatigue. initdb does warn about this, so anyone who cared could have\n> gotten the information.\n>\n\nI've been suggesting that for years, so definite strong +1 for doing that.\n\nIf it's something that annoys backend developers who initdb very often, I\nsuggest we add an environment variable to override it. But I'm not sure\nthat's really necessary -- creating a shell alias or similar is easy to do,\nand most have probably already done so for other reasons.\n\nThat said, I think it would make sense to *also* have a warning. And in\nparticular, we should strongly consider backpatching a warning.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, Apr 5, 2019 at 10:58 PM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:On 2019-04-05 18:11, Jonathan S. Katz wrote:\n> (There could be an additional discussion about whether or not we want to\n> change the default behavior for initdb, but I would suggest that a safe\n> starting point would be to ensure we call this out)\n\nI think we should just change the defaults. There is a risk of warning\nfatigue. initdb does warn about this, so anyone who cared could have\ngotten the information.I've been suggesting that for years, so definite strong +1 for doing that.If it's something that annoys backend developers who initdb very often, I suggest we add an environment variable to override it. 
But I'm not sure that's really necessary -- creating a shell alias or similar is easy to do, and most have probably already done so for other reasons.That said, I think it would make sense to *also* have a warning. And in particular, we should strongly consider backpatching a warning.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Sat, 6 Apr 2019 11:35:44 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On Sat, Apr 06, 2019 at 11:35:44AM +0200, Magnus Hagander wrote:\n> On Fri, Apr 5, 2019 at 10:58 PM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> > On 2019-04-05 18:11, Jonathan S. Katz wrote:\n> > > (There could be an additional discussion about whether or not we want to\n> > > change the default behavior for initdb, but I would suggest that a safe\n> > > starting point would be to ensure we call this out)\n> >\n> > I think we should just change the defaults. There is a risk of warning\n> > fatigue. initdb does warn about this, so anyone who cared could have\n> > gotten the information.\n> >\n> \n> I've been suggesting that for years, so definite strong +1 for doing that.\n\n+1\n\n> If it's something that annoys backend developers who initdb very often, I\n> suggest we add an environment variable to override it. But I'm not sure\n> that's really necessary -- creating a shell alias or similar is easy to do,\n> and most have probably already done so for other reasons.\n\nI, for one, do most initdb runs via a script and wouldn't use such an\nenvironment variable.\n\n\n",
"msg_date": "Sat, 6 Apr 2019 11:08:39 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 2019-04-05 18:11, Jonathan S. Katz wrote:\n> + <para>\n> + We recommend using the <option>-W</option>, <option>--pwprompt</option>,\n> + or <option>--pwfile</option> flags to assign a password to the database\n> + superuser, and to override the <filename>pg_hba.conf</filename> default\n> + generation using <option>-auth-local peer</option> for local connections,\n> + and <option>-auth-host scram-sha-256</option> for remote connections. See\n> + <xref linkend=\"client-authentication\"/> for more information on client\n> + authentication methods.\n> + </para>\n\nAs discussed on hackers, we are not ready to support scram-sha-256 out\nof the box. So this advice, or any similar advice elsewhere, would need\nto recommend \"md5\" as the setting --- which would probably be embarrassing.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Apr 2019 14:25:07 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 4/8/19 8:25 AM, Peter Eisentraut wrote:\n> On 2019-04-05 18:11, Jonathan S. Katz wrote:\n>> + <para>\n>> + We recommend using the <option>-W</option>, <option>--pwprompt</option>,\n>> + or <option>--pwfile</option> flags to assign a password to the database\n>> + superuser, and to override the <filename>pg_hba.conf</filename> default\n>> + generation using <option>-auth-local peer</option> for local connections,\n>> + and <option>-auth-host scram-sha-256</option> for remote connections. See\n>> + <xref linkend=\"client-authentication\"/> for more information on client\n>> + authentication methods.\n>> + </para>\n> \n> As discussed on hackers, we are not ready to support scram-sha-256 out\n> of the box. So this advice, or any similar advice elsewhere, would need\n> to recommend \"md5\" as the setting --- which would probably be embarrassing.\n\nWell, it's less embarrassing than trust, and we currently state:\n\n\"Also, specify -A md5 or -A password so that the default trust\nauthentication mode is not used\"[1]\n\nWe could also modify it to say :\n\n\"and <option>-auth-host scram-sha-256</option> for remote connections if\n your client supports it, otherwise <option>-auth-host md5</option>\"\n\nJonathan\n\n[1] https://www.postgresql.org/docs/current/creating-cluster.html",
"msg_date": "Mon, 8 Apr 2019 08:41:04 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On Mon, Apr 8, 2019 at 2:41 PM Jonathan S. Katz <jkatz@postgresql.org>\nwrote:\n\n> On 4/8/19 8:25 AM, Peter Eisentraut wrote:\n> > On 2019-04-05 18:11, Jonathan S. Katz wrote:\n> >> + <para>\n> >> + We recommend using the <option>-W</option>,\n> <option>--pwprompt</option>,\n> >> + or <option>--pwfile</option> flags to assign a password to the\n> database\n> >> + superuser, and to override the <filename>pg_hba.conf</filename>\n> default\n> >> + generation using <option>-auth-local peer</option> for local\n> connections,\n> >> + and <option>-auth-host scram-sha-256</option> for remote\n> connections. See\n> >> + <xref linkend=\"client-authentication\"/> for more information on\n> client\n> >> + authentication methods.\n> >> + </para>\n> >\n> > As discussed on hackers, we are not ready to support scram-sha-256 out\n> > of the box. So this advice, or any similar advice elsewhere, would need\n> > to recommend \"md5\" as the setting --- which would probably be\n> embarrassing.\n>\n> Well, it's less embarrassing than trust, and we currently state:\n>\n\nYes. Much less.\n\n\n\"Also, specify -A md5 or -A password so that the default trust\n> authentication mode is not used\"[1]\n>\n> We could also modify it to say :\n>\n> \"and <option>-auth-host scram-sha-256</option> for remote connections if\n> your client supports it, otherwise <option>-auth-host md5</option>\"\n>\n\nThat would be the best from a correctness, but if of course also makes\nthings sound more complicated. I'm not sure where the right balance is\nthere.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Mon, 8 Apr 2019 14:44:03 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 4/8/19 8:44 AM, Magnus Hagander wrote:\n> On Mon, Apr 8, 2019 at 2:41 PM Jonathan S. Katz <jkatz@postgresql.org\n> <mailto:jkatz@postgresql.org>> wrote:\n> \n> On 4/8/19 8:25 AM, Peter Eisentraut wrote:\n> > On 2019-04-05 18:11, Jonathan S. Katz wrote:\n> >> + <para>\n> >> + We recommend using the <option>-W</option>,\n> <option>--pwprompt</option>,\n> >> + or <option>--pwfile</option> flags to assign a password to\n> the database\n> >> + superuser, and to override the\n> <filename>pg_hba.conf</filename> default\n> >> + generation using <option>-auth-local peer</option> for\n> local connections,\n> >> + and <option>-auth-host scram-sha-256</option> for remote\n> connections. See\n> >> + <xref linkend=\"client-authentication\"/> for more\n> information on client\n> >> + authentication methods.\n> >> + </para>\n> >\n> > As discussed on hackers, we are not ready to support scram-sha-256 out\n> > of the box. So this advice, or any similar advice elsewhere,\n> would need\n> > to recommend \"md5\" as the setting --- which would probably be\n> embarrassing.\n> \n> Well, it's less embarrassing than trust, and we currently state:\n> \n> \n> Yes. Much less.\n> \n> \n> \"Also, specify -A md5 or -A password so that the default trust\n> authentication mode is not used\"[1]\n> \n> We could also modify it to say :\n> \n> \"and <option>-auth-host scram-sha-256</option> for remote connections if\n> your client supports it, otherwise <option>-auth-host md5</option>\"\n> \n> \n> That would be the best from a correctness, but if of course also makes\n> things sound more complicated. I'm not sure where the right balance is\n> there.\n\nWe could link here[1] from the docs on the line for \"client supports it\"\n\nJonathan\n\n[1] https://wiki.postgresql.org/wiki/List_of_drivers",
"msg_date": "Mon, 8 Apr 2019 08:46:00 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 2019-04-06 20:08, Noah Misch wrote:\n>>> I think we should just change the defaults. There is a risk of warning\n>>> fatigue. initdb does warn about this, so anyone who cared could have\n>>> gotten the information.\n>>>\n>>\n>> I've been suggesting that for years, so definite strong +1 for doing that.\n> \n> +1\n\nTo recap, the idea here was to change the default authentication methods\nthat initdb sets up, in place of \"trust\".\n\nI think the ideal scenario would be to use \"peer\" for local and some\nappropriate password method (being discussed elsewhere) for host.\n\nLooking through the buildfarm, I gather that the only platforms that\ndon't support peer are Windows, AIX, and HP-UX. I think we can probably\nfigure out some fallback or alternative default for the latter two\nplatforms without anyone noticing. But what should the defaults be on\nWindows? It doesn't have local sockets, so the lack of peer wouldn't\nmatter. But is it OK to default to a password method, or would that\nupset people particularly?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 23 May 2019 18:54:27 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On Thu, May 23, 2019, 18:54 Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-04-06 20:08, Noah Misch wrote:\n> >>> I think we should just change the defaults. There is a risk of warning\n> >>> fatigue. initdb does warn about this, so anyone who cared could have\n> >>> gotten the information.\n> >>>\n> >>\n> >> I've been suggesting that for years, so definite strong +1 for doing\n> that.\n> >\n> > +1\n>\n> To recap, the idea here was to change the default authentication methods\n> that initdb sets up, in place of \"trust\".\n>\n> I think the ideal scenario would be to use \"peer\" for local and some\n> appropriate password method (being discussed elsewhere) for host.\n>\n> Looking through the buildfarm, I gather that the only platforms that\n> don't support peer are Windows, AIX, and HP-UX. I think we can probably\n> figure out some fallback or alternative default for the latter two\n> platforms without anyone noticing. But what should the defaults be on\n> Windows? It doesn't have local sockets, so the lack of peer wouldn't\n> matter. But is it OK to default to a password method, or would that\n> upset people particularly?\n>\n\n\nI'm sure password would be fine there. It's what \"everybody else\" does\n(well sqlserver also cord integrated security, but people are used to it).\n\n/Magnus",
"msg_date": "Thu, 23 May 2019 18:56:49 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 5/23/19 12:54 PM, Peter Eisentraut wrote:\n> On 2019-04-06 20:08, Noah Misch wrote:\n>>>> I think we should just change the defaults. There is a risk of warning\n>>>> fatigue. initdb does warn about this, so anyone who cared could have\n>>>> gotten the information.\n>>>>\n>>>\n>>> I've been suggesting that for years, so definite strong +1 for doing that.\n>>\n>> +1\n> \n> To recap, the idea here was to change the default authentication methods\n> that initdb sets up, in place of \"trust\".\n> \n> I think the ideal scenario would be to use \"peer\" for local and some\n> appropriate password method (being discussed elsewhere) for host.\n\n+1.\n\n> Looking through the buildfarm, I gather that the only platforms that\n> don't support peer are Windows, AIX, and HP-UX. I think we can probably\n> figure out some fallback or alternative default for the latter two\n> platforms without anyone noticing. But what should the defaults be on\n> Windows? It doesn't have local sockets, so the lack of peer wouldn't\n> matter. But is it OK to default to a password method, or would that\n> upset people particularly?\n\n+1 for password method. Definitely better than trust :)\n\nJonathan",
"msg_date": "Thu, 23 May 2019 18:47:04 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 5/23/19 6:47 PM, Jonathan S. Katz wrote:\n> On 5/23/19 12:54 PM, Peter Eisentraut wrote:\n>> On 2019-04-06 20:08, Noah Misch wrote:\n>>>>> I think we should just change the defaults. There is a risk of warning\n>>>>> fatigue. initdb does warn about this, so anyone who cared could have\n>>>>> gotten the information.\n>>>>>\n>>>>\n>>>> I've been suggesting that for years, so definite strong +1 for doing that.\n>>>\n>>> +1\n>>\n>> To recap, the idea here was to change the default authentication methods\n>> that initdb sets up, in place of \"trust\".\n>>\n>> I think the ideal scenario would be to use \"peer\" for local and some\n>> appropriate password method (being discussed elsewhere) for host.\n> \n> +1.\n> \n>> Looking through the buildfarm, I gather that the only platforms that\n>> don't support peer are Windows, AIX, and HP-UX. I think we can probably\n>> figure out some fallback or alternative default for the latter two\n>> platforms without anyone noticing. But what should the defaults be on\n>> Windows? It doesn't have local sockets, so the lack of peer wouldn't\n>> matter. But is it OK to default to a password method, or would that\n>> upset people particularly?\n> \n> +1 for password method. Definitely better than trust :)\n\nAttached is v2 of the patch.\n\nFor now I have left in the password based method to be scram-sha-256 as\nI am optimistic about the support across client drivers[1] (and FWIW I\nhave an implementation for crystal-pg ~60% done).\n\nHowever, this probably means we would need to set the default password\nencryption guc to \"scram-sha-256\" which we're not ready to do yet, so it\nmay be moot to leave it in.\n\nSo, thinking out loud about that, we should probably use \"md5\" and once\nwe decide to make the encryption method \"scram-sha-256\" by default, then\nwe update the recommendation?\n\nThanks,\n\nJonathan",
"msg_date": "Thu, 23 May 2019 20:13:54 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> For now I have left in the password based method to be scram-sha-256 as\n> I am optimistic about the support across client drivers[1] (and FWIW I\n> have an implementation for crystal-pg ~60% done).\n\n> However, this probably means we would need to set the default password\n> encryption guc to \"scram-sha-256\" which we're not ready to do yet, so it\n> may be moot to leave it in.\n\n> So, thinking out loud about that, we should probably use \"md5\" and once\n> we decide to make the encryption method \"scram-sha-256\" by default, then\n> we update the recommendation?\n\nMeh. If we're going to break things, let's break them. Set it to\nscram by default and let people who need to cope with old clients\nchange the default. I'm tired of explaining that MD5 isn't actually\ninsecure in our usage ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 22:28:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> > For now I have left in the password based method to be scram-sha-256 as\n> > I am optimistic about the support across client drivers[1] (and FWIW I\n> > have an implementation for crystal-pg ~60% done).\n> \n> > However, this probably means we would need to set the default password\n> > encryption guc to \"scram-sha-256\" which we're not ready to do yet, so it\n> > may be moot to leave it in.\n> \n> > So, thinking out loud about that, we should probably use \"md5\" and once\n> > we decide to make the encryption method \"scram-sha-256\" by default, then\n> > we update the recommendation?\n> \n> Meh. If we're going to break things, let's break them. Set it to\n> scram by default and let people who need to cope with old clients\n> change the default. I'm tired of explaining that MD5 isn't actually\n> insecure in our usage ...\n\n+many.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 23 May 2019 22:30:09 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 5/23/19 10:30 PM, Stephen Frost wrote:\n> Greetings,\n> \n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> > For now I have left in the password based method to be scram-sha-256 as\n>> > I am optimistic about the support across client drivers[1] (and FWIW I\n>> > have an implementation for crystal-pg ~60% done).\n>> \n>> > However, this probably means we would need to set the default password\n>> > encryption guc to \"scram-sha-256\" which we're not ready to do yet, so it\n>> > may be moot to leave it in.\n>> \n>> > So, thinking out loud about that, we should probably use \"md5\" and once\n>> > we decide to make the encryption method \"scram-sha-256\" by default, then\n>> > we update the recommendation?\n>> \n>> Meh. If we're going to break things, let's break them. Set it to\n>> scram by default and let people who need to cope with old clients\n>> change the default. I'm tired of explaining that MD5 isn't actually\n>> insecure in our usage ...\n> \n> +many.\n\nmany++\n\nAre we doing this for pg12? In any case, I would think we better loudly\npoint out this change somewhere.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Fri, 24 May 2019 07:48:11 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On Fri, 24 May 2019 at 07:48, Joe Conway <mail@joeconway.com> wrote:\n\n> On 5/23/19 10:30 PM, Stephen Frost wrote:\n> > Greetings,\n> >\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> >> > For now I have left in the password based method to be scram-sha-256\n> as\n> >> > I am optimistic about the support across client drivers[1] (and FWIW I\n> >> > have an implementation for crystal-pg ~60% done).\n> >>\n> >> > However, this probably means we would need to set the default password\n> >> > encryption guc to \"scram-sha-256\" which we're not ready to do yet, so\n> it\n> >> > may be moot to leave it in.\n> >>\n> >> > So, thinking out loud about that, we should probably use \"md5\" and\n> once\n> >> > we decide to make the encryption method \"scram-sha-256\" by default,\n> then\n> >> > we update the recommendation?\n> >>\n> >> Meh. If we're going to break things, let's break them. Set it to\n> >> scram by default and let people who need to cope with old clients\n> >> change the default. I'm tired of explaining that MD5 isn't actually\n> >> insecure in our usage ...\n> >\n> > +many.\n>\n> many++\n>\n> Are we doing this for pg12? In any case, I would think we better loudly\n> point out this change somewhere.\n>\n>\n+many as well given the presumption that we are going to break existing\nbehaviour\n\nDave",
"msg_date": "Fri, 24 May 2019 08:01:13 -0400",
"msg_from": "Dave Cramer <pg@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 2019-05-24 13:48, Joe Conway wrote:\n> Are we doing this for pg12?\n\nno\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 24 May 2019 14:04:05 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "Greetings,\n\n* Joe Conway (mail@joeconway.com) wrote:\n> On 5/23/19 10:30 PM, Stephen Frost wrote:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> >> > For now I have left in the password based method to be scram-sha-256 as\n> >> > I am optimistic about the support across client drivers[1] (and FWIW I\n> >> > have an implementation for crystal-pg ~60% done).\n> >> \n> >> > However, this probably means we would need to set the default password\n> >> > encryption guc to \"scram-sha-256\" which we're not ready to do yet, so it\n> >> > may be moot to leave it in.\n> >> \n> >> > So, thinking out loud about that, we should probably use \"md5\" and once\n> >> > we decide to make the encryption method \"scram-sha-256\" by default, then\n> >> > we update the recommendation?\n> >> \n> >> Meh. If we're going to break things, let's break them. Set it to\n> >> scram by default and let people who need to cope with old clients\n> >> change the default. I'm tired of explaining that MD5 isn't actually\n> >> insecure in our usage ...\n> > \n> > +many.\n> \n> many++\n> \n> Are we doing this for pg12? In any case, I would think we better loudly\n> point out this change somewhere.\n\nSure, we should point it out, but I don't know that it needs to be\nscreamed from the rooftops considering the packagers have already been\nlargely ignoring our defaults here anyway...\n\nThanks,\n\nStephen",
"msg_date": "Fri, 24 May 2019 08:13:06 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 5/24/19 8:13 AM, Stephen Frost wrote:\n> Greetings,\n> \n> * Joe Conway (mail@joeconway.com) wrote:\n>> On 5/23/19 10:30 PM, Stephen Frost wrote:\n>> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> >> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> >> > For now I have left in the password based method to be scram-sha-256 as\n>> >> > I am optimistic about the support across client drivers[1] (and FWIW I\n>> >> > have an implementation for crystal-pg ~60% done).\n>> >> \n>> >> > However, this probably means we would need to set the default password\n>> >> > encryption guc to \"scram-sha-256\" which we're not ready to do yet, so it\n>> >> > may be moot to leave it in.\n>> >> \n>> >> > So, thinking out loud about that, we should probably use \"md5\" and once\n>> >> > we decide to make the encryption method \"scram-sha-256\" by default, then\n>> >> > we update the recommendation?\n>> >> \n>> >> Meh. If we're going to break things, let's break them. Set it to\n>> >> scram by default and let people who need to cope with old clients\n>> >> change the default. I'm tired of explaining that MD5 isn't actually\n>> >> insecure in our usage ...\n>> > \n>> > +many.\n>> \n>> many++\n>> \n>> Are we doing this for pg12? In any case, I would think we better loudly\n>> point out this change somewhere.\n> \n> Sure, we should point it out, but I don't know that it needs to be\n> screamed from the rooftops considering the packagers have already been\n> largely ignoring our defaults here anyway...\n\nYeah, I thought about that, but anyone not using those packages will be\nin for a big surprise. Don't get me wrong, I wholeheartedly endorse the\nchange, but I predict many related questions on the lists, and anything\nwe can do to mitigate that should be done.\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Fri, 24 May 2019 08:15:49 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "Greetings,\n\n* Joe Conway (mail@joeconway.com) wrote:\n> On 5/24/19 8:13 AM, Stephen Frost wrote:\n> > * Joe Conway (mail@joeconway.com) wrote:\n> >> On 5/23/19 10:30 PM, Stephen Frost wrote:\n> >> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> >> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> >> >> > For now I have left in the password based method to be scram-sha-256 as\n> >> >> > I am optimistic about the support across client drivers[1] (and FWIW I\n> >> >> > have an implementation for crystal-pg ~60% done).\n> >> >> \n> >> >> > However, this probably means we would need to set the default password\n> >> >> > encryption guc to \"scram-sha-256\" which we're not ready to do yet, so it\n> >> >> > may be moot to leave it in.\n> >> >> \n> >> >> > So, thinking out loud about that, we should probably use \"md5\" and once\n> >> >> > we decide to make the encryption method \"scram-sha-256\" by default, then\n> >> >> > we update the recommendation?\n> >> >> \n> >> >> Meh. If we're going to break things, let's break them. Set it to\n> >> >> scram by default and let people who need to cope with old clients\n> >> >> change the default. I'm tired of explaining that MD5 isn't actually\n> >> >> insecure in our usage ...\n> >> > \n> >> > +many.\n> >> \n> >> many++\n> >> \n> >> Are we doing this for pg12? In any case, I would think we better loudly\n> >> point out this change somewhere.\n> > \n> > Sure, we should point it out, but I don't know that it needs to be\n> > screamed from the rooftops considering the packagers have already been\n> > largely ignoring our defaults here anyway...\n> \n> Yeah, I thought about that, but anyone not using those packages will be\n> in for a big surprise. Don't get me wrong, I wholeheartedly endorse the\n> change, but I predict many related questions on the lists, and anything\n> we can do to mitigate that should be done.\n\nYou think there's someone who builds from the source and just trusts\nwhat we have put in for the defaults in pg_hba.conf..?\n\nI've got a really hard time with that idea...\n\nI'm all for making people aware of it, but I don't think it justifies\nbeing the top item of the release notes or some such. Frankly, anything\nthat starts with \"If you build from source, then...\" is already going to\nbe pretty low impact and therefore low on the list of things we need to\ncover in the release notes, et al.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 24 May 2019 08:19:04 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On Fri, May 24, 2019 at 2:19 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Joe Conway (mail@joeconway.com) wrote:\n> > On 5/24/19 8:13 AM, Stephen Frost wrote:\n> > > * Joe Conway (mail@joeconway.com) wrote:\n> > >> On 5/23/19 10:30 PM, Stephen Frost wrote:\n> > >> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > >> >> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> > >> >> > For now I have left in the password based method to be\n> scram-sha-256 as\n> > >> >> > I am optimistic about the support across client drivers[1] (and\n> FWIW I\n> > >> >> > have an implementation for crystal-pg ~60% done).\n> > >> >>\n> > >> >> > However, this probably means we would need to set the default\n> password\n> > >> >> > encryption guc to \"scram-sha-256\" which we're not ready to do\n> yet, so it\n> > >> >> > may be moot to leave it in.\n> > >> >>\n> > >> >> > So, thinking out loud about that, we should probably use \"md5\"\n> and once\n> > >> >> > we decide to make the encryption method \"scram-sha-256\" by\n> default, then\n> > >> >> > we update the recommendation?\n> > >> >>\n> > >> >> Meh. If we're going to break things, let's break them. Set it to\n> > >> >> scram by default and let people who need to cope with old clients\n> > >> >> change the default. I'm tired of explaining that MD5 isn't\n> actually\n> > >> >> insecure in our usage ...\n> > >> >\n> > >> > +many.\n> > >>\n> > >> many++\n> > >>\n> > >> Are we doing this for pg12? In any case, I would think we better\n> loudly\n> > >> point out this change somewhere.\n> > >\n> > > Sure, we should point it out, but I don't know that it needs to be\n> > > screamed from the rooftops considering the packagers have already been\n> > > largely ignoring our defaults here anyway...\n> >\n> > Yeah, I thought about that, but anyone not using those packages will be\n> > in for a big surprise. Don't get me wrong, I wholeheartedly endorse the\n> > change, but I predict many related questions on the lists, and anything\n> > we can do to mitigate that should be done.\n>\n> You think there's someone who builds from the source and just trusts\n> what we have put in for the defaults in pg_hba.conf..?\n>\n> I've got a really hard time with that idea...\n>\n> I'm all for making people aware of it, but I don't think it justifies\n> being the top item of the release notes or some such. Frankly, anything\n> that starts with \"If you build from source, then...\" is already going to\n> be pretty low impact and therefore low on the list of things we need to\n> cover in the release notes, et al.\n>\n\nI think changing away from \"trust\" is going to be a much smaller change\nthan people seem to worry about.\n\nIt will hit people *in the developer community*.\n\nThe thing that will potentially hit *end users* is when the RPMs, DEBs or\nWindows Installers switch to SCRAM (because of clients with older drivers).\nBut they have *already* stopped using trust many many years ago.\n\nMaking the default change away from trust in the source distro will affect\nfew people.\n\nMaking the default change of password_encryption -> scram will affect a\n*lot* of people. That one needs to be more carefully coordinated.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, May 24, 2019 at 2:19 PM Stephen Frost <sfrost@snowman.net> wrote:Greetings,\n\n* Joe Conway (mail@joeconway.com) wrote:\n> On 5/24/19 8:13 AM, Stephen Frost wrote:\n> > * Joe Conway (mail@joeconway.com) wrote:\n> >> On 5/23/19 10:30 PM, Stephen Frost wrote:\n> >> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> >> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> >> >> > For now I have left in the password based method to be scram-sha-256 as\n> >> >> > I am optimistic about the support across client drivers[1] (and FWIW I\n> >> >> > have an implementation for crystal-pg ~60% done).\n> >> >> \n> >> >> > However, this probably means we would need to set the default password\n> >> >> > encryption guc to \"scram-sha-256\" which we're not ready to do yet, so it\n> >> >> > may be moot to leave it in.\n> >> >> \n> >> >> > So, thinking out loud about that, we should probably use \"md5\" and once\n> >> >> > we decide to make the encryption method \"scram-sha-256\" by default, then\n> >> >> > we update the recommendation?\n> >> >> \n> >> >> Meh. If we're going to break things, let's break them. Set it to\n> >> >> scram by default and let people who need to cope with old clients\n> >> >> change the default. I'm tired of explaining that MD5 isn't actually\n> >> >> insecure in our usage ...\n> >> > \n> >> > +many.\n> >> \n> >> many++\n> >> \n> >> Are we doing this for pg12? In any case, I would think we better loudly\n> >> point out this change somewhere.\n> > \n> > Sure, we should point it out, but I don't know that it needs to be\n> > screamed from the rooftops considering the packagers have already been\n> > largely ignoring our defaults here anyway...\n> \n> Yeah, I thought about that, but anyone not using those packages will be\n> in for a big surprise. Don't get me wrong, I wholeheartedly endorse the\n> change, but I predict many related questions on the lists, and anything\n> we can do to mitigate that should be done.\n\nYou think there's someone who builds from the source and just trusts\nwhat we have put in for the defaults in pg_hba.conf..?\n\nI've got a really hard time with that idea...\n\nI'm all for making people aware of it, but I don't think it justifies\nbeing the top item of the release notes or some such. Frankly, anything\nthat starts with \"If you build from source, then...\" is already going to\nbe pretty low impact and therefore low on the list of things we need to\ncover in the release notes, et al.I think changing away from \"trust\" is going to be a much smaller change than people seem to worry about.It will hit people *in the developer community*.The thing that will potentially hit *end users* is when the RPMs, DEBs or Windows Installers switch to SCRAM (because of clients with older drivers). But they have *already* stopped using trust many many years ago. Making the default change away from trust in the source distro will affect few people.Making the default change of password_encryption -> scram will affect a *lot* of people. That one needs to be more carefully coordinated.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Fri, 24 May 2019 14:29:12 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> The thing that will potentially hit *end users* is when the RPMs, DEBs or\n> Windows Installers switch to SCRAM (because of clients with older drivers).\n\nAgreed. I'm not sure that our change to SCRAM as default would actually\nmake them change... It might, but I'm not sure and it's really a bit of\na different discussion in any case because we need to provide info about\nhow to go about making the migration.\n\n> Making the default change away from trust in the source distro will affect\n> few people.\n\nAgreed.\n\n> Making the default change of password_encryption -> scram will affect a\n> *lot* of people. That one needs to be more carefully coordinated.\n\nWe need to provide better documentation about how to get from md5 to\nSCRAM, in my view. I'm not sure where that should live, exactly.\nI really wish we had put more effort into making the migration easy to\ndo over a period of time, and we might actually have to do that before\nthe packagers would be willing to make that change.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 24 May 2019 08:33:17 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 5/24/19 8:33 AM, Stephen Frost wrote:\n> Greetings,\n> \n> * Magnus Hagander (magnus@hagander.net) wrote:\n>> The thing that will potentially hit *end users* is when the RPMs, DEBs or\n>> Windows Installers switch to SCRAM (because of clients with older drivers).\n> \n> Agreed. I'm not sure that our change to SCRAM as default would actually\n> make them change... It might, but I'm not sure and it's really a bit of\n> a different discussion in any case because we need to provide info about\n> how to go about making the migration.\n\nYeah, that's the key piece. Even with (almost) all the drivers now\nsupporting SCRAM, the re-hashing from md5 => scram-sha-256 does not come\nautomatically.\n\n>> Making the default change away from trust in the source distro will affect\n>> few people.\n> \n> Agreed.\n\n+1\n\n>> Making the default change of password_encryption -> scram will affect a\n>> *lot* of people. That one needs to be more carefully coordinated.\n\nPer some of the upthread comments though, if we go down this path we\nshould at least make the packagers abundantly aware if we do change the\ndefault. I think some of the work they do could help ease the upgrade pain.\n\n> We need to provide better documentation about how to get from md5 to\n> SCRAM, in my view. I'm not sure where that should live, exactly.\n> I really wish we had put more effort into making the migration easy to\n> do over a period of time, and we might actually have to do that before\n> the packagers would be willing to make that change.\n\n+100...I think we should do this regardless, and I was already thinking\nof writing something up around it. I would even suggest that we have\nsaid password upgrade documentation backpatched to 10.\n\nJonathan",
"msg_date": "Fri, 24 May 2019 08:56:05 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "Greetings,\n\n* Jonathan S. Katz (jkatz@postgresql.org) wrote:\n> On 5/24/19 8:33 AM, Stephen Frost wrote:\n> > We need to provide better documentation about how to get from md5 to\n> > SCRAM, in my view. I'm not sure where that should live, exactly.\n> > I really wish we had put more effort into making the migration easy to\n> > do over a period of time, and we might actually have to do that before\n> > the packagers would be willing to make that change.\n> \n> +100...I think we should do this regardless, and I was already thinking\n> of writing something up around it. I would even suggest that we have\n> said password upgrade documentation backpatched to 10.\n\nNot sure that backpatching is necessary, but I'm not actively against\nit.\n\nWhat I was really getting at though was the ability to have multiple\nauthenticator tokens active concurrently (eg: md5 AND SCRAM), with an\nability to use either one (idk, md5_or_scram auth method?), and then\nautomatically set both on password change until everything is using\nSCRAM and then remove all MD5 stuff.\n\nOr something along those lines. In other words, I'm talking about new\ndevelopment work to ease the migration (while also providing some oft\nasked about features, like the ability to do rolling passwords...).\n\nThanks,\n\nStephen",
"msg_date": "Fri, 24 May 2019 09:01:23 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 5/24/19 8:56 AM, Jonathan S. Katz wrote:\n> On 5/24/19 8:33 AM, Stephen Frost wrote:\n>> * Magnus Hagander (magnus@hagander.net) wrote:\n>>> Making the default change away from trust in the source distro will affect\n>>> few people.\n>> \n>> Agreed.\n> \n> +1\n\nFewer people, but likely disproportionately high representation on pgsql\nlists. Anyway, nuff said -- I guess the future will tell one way or the\nother.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Fri, 24 May 2019 09:01:35 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 5/24/19 9:01 AM, Stephen Frost wrote:\n> Greetings,\n> \n> * Jonathan S. Katz (jkatz@postgresql.org) wrote:\n>> On 5/24/19 8:33 AM, Stephen Frost wrote:\n>>> We need to provide better documentation about how to get from md5 to\n>>> SCRAM, in my view. I'm not sure where that should live, exactly.\n>>> I really wish we had put more effort into making the migration easy to\n>>> do over a period of time, and we might actually have to do that before\n>>> the packagers would be willing to make that change.\n>>\n>> +100...I think we should do this regardless, and I was already thinking\n>> of writing something up around it. I would even suggest that we have\n>> said password upgrade documentation backpatched to 10.\n> \n> Not sure that backpatching is necessary, but I'm not actively against\n> it.\n\nWell, for someone who wants to cut over and has to manually guide the\nprocess, a guide will help in absence of new development.\n\n> \n> What I was really getting at though was the ability to have multiple\n> authenticator tokens active concurrently (eg: md5 AND SCRAM), with an\n> ability to use either one (idk, md5_or_scram auth method?), and then\n> automatically set both on password change until everything is using\n> SCRAM and then remove all MD5 stuff.\n> \n> Or something along those lines. In other words, I'm talking about new\n> development work to ease the migration (while also providing some oft\n> asked about features, like the ability to do rolling passwords...).\n\nCool, I have been thinking about a similar feature as well to help ease\nthe transition (and fwiw was going to suggest it in my previous email).\n\nI think an interim step at least is to document how we can at least help\nease the transition.\n\nThanks,\n\nJonathan",
"msg_date": "Fri, 24 May 2019 09:13:42 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 24/05/2019 16:01, Stephen Frost wrote:\n> What I was really getting at though was the ability to have multiple\n> authenticator tokens active concurrently (eg: md5 AND SCRAM), with an\n> ability to use either one (idk, md5_or_scram auth method?), and then\n> automatically set both on password change until everything is using\n> SCRAM and then remove all MD5 stuff.\n\nUmm, that's what \"md5\" already does. Per documentation \n(https://www.postgresql.org/docs/current/auth-password.html):\n\n > To ease transition from the md5 method to the newer SCRAM method, if\n > md5 is specified as a method in pg_hba.conf but the user's password on\n > the server is encrypted for SCRAM (see below), then SCRAM-based\n > authentication will automatically be chosen instead.\n\nThe migration path is:\n\n1. Use \"md5\" in pg_hba.conf, and put password_encryption='scram-sha-256' \nin postgresql.conf.\n\n2. Wait until all users have reset their passwords, so that all users \nhave a SCRAM-SHA-256 verifier.\n\n3. Replace \"md5\" with \"scram-sha-256\" in pg_hba.conf.\n\nStep 3 is kind of optional; once all users have a SCRAM verifier instead \nof an MD5 hash, they will all use SCRAM even without changing \npg_hba.conf. It just prevents MD5 authentication in case a user forces a \nnew MD5 hash into the system e.g. by changing password_encryption, or by \nsetting an MD5 password explicitly with ALTER USER.\n\n- Heikki\n\n\n",
"msg_date": "Fri, 24 May 2019 16:49:30 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "Greetings,\n\n* Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n> On 24/05/2019 16:01, Stephen Frost wrote:\n> >What I was really getting at though was the ability to have multiple\n> >authenticator tokens active concurrently (eg: md5 AND SCRAM), with an\n> >ability to use either one (idk, md5_or_scram auth method?), and then\n> >automatically set both on password change until everything is using\n> >SCRAM and then remove all MD5 stuff.\n> \n> Umm, that's what \"md5\" already does. Per documentation\n> (https://www.postgresql.org/docs/current/auth-password.html):\n\nI remembered that we did something here but hadn't gone and looked at\nit recently, so sorry for misremembering. Perhaps all the more reason\nfor detailed migration documentation.\n\n> > To ease transition from the md5 method to the newer SCRAM method, if\n> > md5 is specified as a method in pg_hba.conf but the user's password on\n> > the server is encrypted for SCRAM (see below), then SCRAM-based\n> > authentication will automatically be chosen instead.\n> \n> The migration path is:\n> \n> 1. Use \"md5\" in pg_hba.conf, and put password_encryption='scram-sha-256' in\n> postgresql.conf.\n> \n> 2. Wait until all users have reset their passwords, so that all users have a\n> SCRAM-SHA-256 verifier.\n\nWait though- once a password is changed then they *have* to use SCRAM\nfor auth from that point on, right? That's great if you can be sure\nthat everything you're connecting from supports it, but that isn't going\nto necessairly be the case. I think this is what I recall being unhappy\nabout and what I was trying to remember about what we did.\n\nWe also haven't got a way to tell very easily when a given md5 (or\nscram, for that matter...) authenticator was last used, making it hard\nto see if it's still actually being used or not. Nor is there a very\nnice way to see when all users have reset their passwords to scram\nwithout inspecting the password hash itself...\n\n> 3. Replace \"md5\" with \"scram-sha-256\" in pg_hba.conf.\n> \n> Step 3 is kind of optional; once all users have a SCRAM verifier instead of\n> an MD5 hash, they will all use SCRAM even without changing pg_hba.conf. It\n> just prevents MD5 authentication in case a user forces a new MD5 hash into\n> the system e.g. by changing password_encryption, or by setting an MD5\n> password explicitly with ALTER USER.\n\nYes, which you'd certainly want to do, so I don't consider it to be\noptional. Further, we should really have a way for an admin to say\n\"never allow storing an md5 password again\" which I don't think we do.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 24 May 2019 10:00:02 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 5/24/19 9:49 AM, Heikki Linnakangas wrote:\n> On 24/05/2019 16:01, Stephen Frost wrote:\n>> What I was really getting at though was the ability to have multiple\n>> authenticator tokens active concurrently (eg: md5 AND SCRAM), with an\n>> ability to use either one (idk, md5_or_scram auth method?), and then\n>> automatically set both on password change until everything is using\n>> SCRAM and then remove all MD5 stuff.\n> \n> Umm, that's what \"md5\" already does. Per documentation\n> (https://www.postgresql.org/docs/current/auth-password.html):\n\nTested manually and verified in code, it does do that check:\n\n/*\n * If 'md5' authentication is allowed, decide whether to perform 'md5' or\n * 'scram-sha-256' authentication based on the type of password the user\n * has. If it's an MD5 hash, we must do MD5 authentication, and if it's a\n * SCRAM verifier, we must do SCRAM authentication.\n *\n * If MD5 authentication is not allowed, always use SCRAM. If the user\n * had an MD5 password, CheckSCRAMAuth() will fail.\n */\nif (port->hba->auth_method == uaMD5 && pwtype == PASSWORD_TYPE_MD5)\n auth_result = CheckMD5Auth(port, shadow_pass, logdetail);\nelse\n auth_result = CheckSCRAMAuth(port, shadow_pass, logdetail);\n\n\n>> To ease transition from the md5 method to the newer SCRAM method, if\n>> md5 is specified as a method in pg_hba.conf but the user's password on\n>> the server is encrypted for SCRAM (see below), then SCRAM-based\n>> authentication will automatically be chosen instead.\n> \n> The migration path is:\n> \n> 1. Use \"md5\" in pg_hba.conf, and put password_encryption='scram-sha-256'\n> in postgresql.conf.\n> \n> 2. Wait until all users have reset their passwords, so that all users\n> have a SCRAM-SHA-256 verifier.\n\nAnd \"a superuser can verify this has occurred by inspecting the\npg_authid table (appropriate SQL)\"\n\n> \n> 3. Replace \"md5\" with \"scram-sha-256\" in pg_hba.conf.\n> \n> Step 3 is kind of optional; once all users have a SCRAM verifier instead\n> of an MD5 hash, they will all use SCRAM even without changing\n> pg_hba.conf.\n\nVerified this is true.\n\n> It just prevents MD5 authentication in case a user forces a\n> new MD5 hash into the system e.g. by changing password_encryption, or by\n> setting an MD5 password explicitly with ALTER USER.\n\nCool. Thanks for the explanation.\n\nI do think we should document said upgrade path, my best guess being\naround here[1].\n\nJonathan\n\n[1] https://www.postgresql.org/docs/current/auth-password.html",
"msg_date": "Fri, 24 May 2019 10:02:53 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 24/05/2019 17:02, Jonathan S. Katz wrote:\n> On 5/24/19 9:49 AM, Heikki Linnakangas wrote:\n>> It just prevents MD5 authentication in case a user forces a\n>> new MD5 hash into the system e.g. by changing password_encryption, or by\n>> setting an MD5 password explicitly with ALTER USER.\n> \n> Cool. Thanks for the explanation.\n> \n> I do think we should document said upgrade path, my best guess being\n> around here[1].\n> \n> [1] https://www.postgresql.org/docs/current/auth-password.html\n\nYou mean, like this? From the bottom of that page :-)\n\n > To upgrade an existing installation from md5 to scram-sha-256, after\n > having ensured that all client libraries in use are new enough to\n > support SCRAM, set password_encryption = 'scram-sha-256' in\n > postgresql.conf, make all users set new passwords, and change the\n > authentication method specifications in pg_hba.conf to scram-sha-256.\n\nIt would be nice to expand that a little bit, though:\n\n* How do you verify if all client libraries support SCRAM? Would be good \nto mention the minimum libpq version here, at least. Can we give more \nexplicit instructions? It would be nice if there was a way to write an \nentry to the log, whenever an older client connects. Not sure how you'd \ndo that..\n\n* How does one \"make all users to set new passwords\"? Related to that, \nhow do you check if all users have reset their password to SCRAM? Give \nthe exact SQL needed to check that.\n\n- Heikki\n\n\n",
"msg_date": "Fri, 24 May 2019 17:26:01 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 5/24/19 10:26 AM, Heikki Linnakangas wrote:\n> On 24/05/2019 17:02, Jonathan S. Katz wrote:\n>> On 5/24/19 9:49 AM, Heikki Linnakangas wrote:\n>>> It just prevents MD5 authentication in case a user forces a\n>>> new MD5 hash into the system e.g. by changing password_encryption, or by\n>>> setting an MD5 password explicitly with ALTER USER.\n>>\n>> Cool. Thanks for the explanation.\n>>\n>> I do think we should document said upgrade path, my best guess being\n>> around here[1].\n>>\n>> [1] https://www.postgresql.org/docs/current/auth-password.html\n> \n> You mean, like this? From the bottom of that page :-)\n\n...yes ;) I think what I'm saying is that it should be its own section.\n\n>> To upgrade an existing installation from md5 to scram-sha-256, after\n>> having ensured that all client libraries in use are new enough to\n>> support SCRAM, set password_encryption = 'scram-sha-256' in\n>> postgresql.conf, make all users set new passwords, and change the\n>> authentication method specifications in pg_hba.conf to scram-sha-256.\n> \n> It would be nice to expand that a little bit, though:\n> \n> * How do you verify if all client libraries support SCRAM? Would be good\n> to mention the minimum libpq version here, at least. Can we give more\n> explicit instructions? It would be nice if there was a way to write an\n> entry to the log, whenever an older client connects. Not sure how you'd\n> do that..\n\nYeah, this one is hard, because a lot of that depends on how the client\ndeals with not supporting SCRAM. Typically the server sends over\nAuthenticationSASL and the client raises an error. All the server will\nsee is the connection closed, but it could be for any reason.\n\nFor example, I tested this with an unpatched asyncpg and noted similar\nbehavior. I'm not sure there's anything we can do given we don't know\nthat the client does not support SCRAM ahead of time.\n\nI think the best we can do is mention minimums and, if we're ok with it,\nlink to the drivers wiki page so people can see which min. versions of\ntheir preferred connection library support it.\n\n> * How does one \"make all users to set new passwords\"? Related to that,\n> how do you check if all users have reset their password to SCRAM? Give\n> the exact SQL needed to check that.\n\nYeah this is a big one. I already hinted at the latter point, but also\nexplaining how to change passwords is helpful too (and I feel can also\ncause quite a debate as well. Within psql it's a straightforward choice.\nOutside of it, to do it safely you have to do a bit of extra work).\n\nJonathan",
"msg_date": "Fri, 24 May 2019 10:54:24 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On Thu, May 23, 2019 at 06:56:49PM +0200, Magnus Hagander wrote:\n> On Thu, May 23, 2019, 18:54 Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> > To recap, the idea here was to change the default authentication methods\n> > that initdb sets up, in place of \"trust\".\n> >\n> > I think the ideal scenario would be to use \"peer\" for local and some\n> > appropriate password method (being discussed elsewhere) for host.\n> >\n> > Looking through the buildfarm, I gather that the only platforms that\n> > don't support peer are Windows, AIX, and HP-UX. I think we can probably\n> > figure out some fallback or alternative default for the latter two\n> > platforms without anyone noticing. But what should the defaults be on\n> > Windows? It doesn't have local sockets, so the lack of peer wouldn't\n> > matter. But is it OK to default to a password method, or would that\n> > upset people particularly?\n> \n> I'm sure password would be fine there. It's what \"everybody else\" does\n> (well sqlserver also cord integrated security, but people are used to it).\n\nOur sspi auth is a more-general version of peer auth, and it works over TCP.\nIt would be a simple matter of programming to support \"peer\" on Windows,\nconsisting of sspi auth with an implicit pg_ident map. Nonetheless, I agree\npassword would be fine.\n\n\n",
"msg_date": "Fri, 24 May 2019 08:23:57 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On Fri, May 24, 2019 at 08:23:57AM -0700, Noah Misch wrote:\n> Our sspi auth is a more-general version of peer auth, and it works over TCP.\n> It would be a simple matter of programming to support \"peer\" on Windows,\n> consisting of sspi auth with an implicit pg_ident map.\n\nI am not sure that it is much worth complicating the HBA rules with an\nextra alias knowing that it is possible to map pg_ident to use a regex\nmatching pattern.\n\n> Nonetheless, I agree password would be fine.\n\nFine for me.\n--\nMichael",
"msg_date": "Mon, 27 May 2019 11:19:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On Fri, May 24, 2019 at 11:24 AM Noah Misch <noah@leadboat.com> wrote:\n\n> On Thu, May 23, 2019 at 06:56:49PM +0200, Magnus Hagander wrote:\n> > On Thu, May 23, 2019, 18:54 Peter Eisentraut <\n> peter.eisentraut@2ndquadrant.com> wrote:\n> > > To recap, the idea here was to change the default authentication\n> methods\n> > > that initdb sets up, in place of \"trust\".\n> > >\n> > > I think the ideal scenario would be to use \"peer\" for local and some\n> > > appropriate password method (being discussed elsewhere) for host.\n> > >\n> > > Looking through the buildfarm, I gather that the only platforms that\n> > > don't support peer are Windows, AIX, and HP-UX. I think we can\n> probably\n> > > figure out some fallback or alternative default for the latter two\n> > > platforms without anyone noticing. But what should the defaults be on\n> > > Windows? It doesn't have local sockets, so the lack of peer wouldn't\n> > > matter. But is it OK to default to a password method, or would that\n> > > upset people particularly?\n> >\n> > I'm sure password would be fine there. It's what \"everybody else\" does\n> > (well sqlserver also cord integrated security, but people are used to\n> it).\n>\n> Our sspi auth is a more-general version of peer auth, and it works over\n> TCP.\n> It would be a simple matter of programming to support \"peer\" on Windows,\n> consisting of sspi auth with an implicit pg_ident map. Nonetheless, I\n> agree\n> password would be fine.\n>\n\nI hope oyu don't mean \"make peer use sspi on windows\". I think that's a\nreally bad idea from a confusion perspective.\n\nHowever, what we could do there is have the defaut pg_hba.conf file contain\na \"reasonable setup using sspi\" that's a different story.\n\nBut I wonder if that isn't better implemented at the installer level. 
I\nthink we're better off doing something like scram as the config when you\nbuild from source, and then encourage installers to do other things based\non the fact that they know more information about the setup (such as\nusernames actually used).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Tue, 28 May 2019 12:15:35 -0400",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On Tue, May 28, 2019 at 12:15:35PM -0400, Magnus Hagander wrote:\n> On Fri, May 24, 2019 at 11:24 AM Noah Misch <noah@leadboat.com> wrote:\n> > On Thu, May 23, 2019 at 06:56:49PM +0200, Magnus Hagander wrote:\n> > > On Thu, May 23, 2019, 18:54 Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> > > > To recap, the idea here was to change the default authentication methods\n> > > > that initdb sets up, in place of \"trust\".\n> > > >\n> > > > I think the ideal scenario would be to use \"peer\" for local and some\n> > > > appropriate password method (being discussed elsewhere) for host.\n> > > >\n> > > > Looking through the buildfarm, I gather that the only platforms that\n> > > > don't support peer are Windows, AIX, and HP-UX. I think we can probably\n> > > > figure out some fallback or alternative default for the latter two\n> > > > platforms without anyone noticing. But what should the defaults be on\n> > > > Windows? It doesn't have local sockets, so the lack of peer wouldn't\n> > > > matter. But is it OK to default to a password method, or would that\n> > > > upset people particularly?\n> > >\n> > > I'm sure password would be fine there. It's what \"everybody else\" does\n> > > (well sqlserver also cord integrated security, but people are used to it).\n> > \n> > Our sspi auth is a more-general version of peer auth, and it works over TCP.\n> > It would be a simple matter of programming to support \"peer\" on Windows,\n> > consisting of sspi auth with an implicit pg_ident map. Nonetheless, I agree\n> > password would be fine.\n>\n> I hope you don't mean \"make peer use sspi on windows\". I think that's a\n> really bad idea from a confusion perspective.\n\nI don't mean \"make peer an alias for SSPI\", but I do mean \"implement peer on\nWindows as a special case of sspi, using the same Windows APIs\". To the\nclient, \"peer\" would look like \"sspi\". 
If that's confusion-prone, what's\nconfusing about it?\n\n> However, what we could do there is have the defaut pg_hba.conf file contain\n> a \"reasonable setup using sspi\" that's a different story.\n\nThat's another way to do it. Currently, to behave like \"peer\" behaves, one\nhard-codes the machine's SSPI realm into pg_ident.conf. If we introduced\npg_ident.conf syntax to remove that need (e.g. %MACHINE_REALM%), that approach\nwould work.\n\n> But I wonder if that isn't better implemented at the installer level. I\n> think we're better off doing something like scram as the config when you\n> build from source ,and then encourage installers to do other things based on\n> the fact that they know more information about the setup (such as usernames\n> actually used).\n\nIf initdb has the information needed to configure the recommended\nauthentication, that's the best place to do it, since there's one initdb and\nmany installers. So far, I haven't seen a default auth configuration proposal\ninvolving knowledge of OS usernames or other information initdb lacks.\n\n\n",
"msg_date": "Sun, 2 Jun 2019 20:55:39 -0400",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
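Noah's hypothetical `%MACHINE_REALM%` token could behave roughly like the sketch below, with `%%` standing for a literal percent sign. The token syntax, the `expand_ident_tokens` helper, and the realm value are all assumptions for illustration; nothing like this exists in pg_ident.conf today.

```python
def expand_ident_tokens(field, values):
    """Expand a hypothetical %NAME% token in a pg_ident.conf field,
    treating %% as an escaped literal percent sign. Both the token
    syntax and the MACHINE_REALM name are proposals from this thread,
    not implemented pg_ident.conf behavior.
    """
    out, i = [], 0
    while i < len(field):
        if field.startswith("%%", i):   # escaped literal percent
            out.append("%")
            i += 2
        elif field[i] == "%":           # %NAME% token: substitute its value
            end = field.index("%", i + 1)
            out.append(values[field[i + 1:end]])
            i = end + 1
        else:
            out.append(field[i])
            i += 1
    return "".join(out)

# A peer-like regex map field with the machine realm filled in at load time.
print(expand_ident_tokens(r"/^(.*)@%MACHINE_REALM%$",
                          {"MACHINE_REALM": "EXAMPLE"}))
# -> /^(.*)@EXAMPLE$
```

With such a token, the realm would no longer need to be hard-coded into pg_ident.conf by hand.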
{
"msg_contents": "Greetings,\n\n* Noah Misch (noah@leadboat.com) wrote:\n> On Tue, May 28, 2019 at 12:15:35PM -0400, Magnus Hagander wrote:\n> > On Fri, May 24, 2019 at 11:24 AM Noah Misch <noah@leadboat.com> wrote:\n> > > On Thu, May 23, 2019 at 06:56:49PM +0200, Magnus Hagander wrote:\n> > > > On Thu, May 23, 2019, 18:54 Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> > > > > To recap, the idea here was to change the default authentication methods\n> > > > > that initdb sets up, in place of \"trust\".\n> > > > >\n> > > > > I think the ideal scenario would be to use \"peer\" for local and some\n> > > > > appropriate password method (being discussed elsewhere) for host.\n> > > > >\n> > > > > Looking through the buildfarm, I gather that the only platforms that\n> > > > > don't support peer are Windows, AIX, and HP-UX. I think we can probably\n> > > > > figure out some fallback or alternative default for the latter two\n> > > > > platforms without anyone noticing. But what should the defaults be on\n> > > > > Windows? It doesn't have local sockets, so the lack of peer wouldn't\n> > > > > matter. But is it OK to default to a password method, or would that\n> > > > > upset people particularly?\n> > > >\n> > > > I'm sure password would be fine there. It's what \"everybody else\" does\n> > > > (well sqlserver also cord integrated security, but people are used to it).\n> > > \n> > > Our sspi auth is a more-general version of peer auth, and it works over TCP.\n> > > It would be a simple matter of programming to support \"peer\" on Windows,\n> > > consisting of sspi auth with an implicit pg_ident map. Nonetheless, I agree\n> > > password would be fine.\n> >\n> > I hope oyu don't mean \"make peer use sspi on windows\". I think that's a\n> > really bad idea from a confusion perspective.\n> \n> I don't mean \"make peer an alias for SSPI\", but I do mean \"implement peer on\n> Windows as a special case of sspi, using the same Windows APIs\". 
To the\n> client, \"peer\" would look like \"sspi\". If that's confusion-prone, what's\n> confusing about it?\n\nI tend to agree with Magnus here. It's confusing because 'peer' in our\nexisting parlance discusses connections over a unix socket, which\ncertainly isn't what's happening on Windows. I do agree with the\ngeneral idea of making SSPI work by default on Windows.\n\n> > However, what we could do there is have the defaut pg_hba.conf file contain\n> > a \"reasonable setup using sspi\" that's a different story.\n> \n> That's another way to do it. Currently, to behave like \"peer\" behaves, one\n> hard-codes the machine's SSPI realm into pg_ident.conf. If we introduced\n> pg_ident.conf syntax to remove that need (e.g. %MACHINE_REALM%), that approach\n> would work.\n\nI would be in favor of something like this, provided the variables are\ndefined in such a way that we could avoid conflicting with real values\n(and remember that you'd need a regexp in pg_ident.conf for this to\nwork...). %xyz%, while supporting %% to mean a literal percent, seems\nlikely to work. Not sure if that's what you were thinking though.\n\n> > But I wonder if that isn't better implemented at the installer level. I\n> > think we're better off doing something like scram as the config when you\n> > build from source ,and then encourage installers to do other things based on\n> > the fact that they know more information about the setup (such as usernames\n> > actually used).\n> \n> If initdb has the information needed to configure the recommended\n> authentication, that's the best place to do it, since there's one initdb and\n> many installers. 
So far, I haven't seen a default auth configuration proposal\n> involving knowledge of OS usernames or other information initdb lacks.\n\nI agree with doing it at initdb time.\n\nNote that the current default auth configuration (to some extent) does\ndepend on the OS username, but that's also something that initdb knows,\nand therefore it isn't an issue here. I don't see a reason that we\nwouldn't be able to have initdb handle this.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 3 Jun 2019 17:20:42 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 2019-05-23 18:54, Peter Eisentraut wrote:\n> To recap, the idea here was to change the default authentication methods\n> that initdb sets up, in place of \"trust\".\n> \n> I think the ideal scenario would be to use \"peer\" for local and some\n> appropriate password method (being discussed elsewhere) for host.\n\nPatch for that attached.\n\n> Looking through the buildfarm, I gather that the only platforms that\n> don't support peer are Windows, AIX, and HP-UX.\n\nNote that with this change, running initdb without arguments will now\nerror on those platforms: You need to supply either a password or select\na different default authentication method.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 18 Jun 2019 22:33:38 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On Tue, Jun 18, 2019 at 10:33 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-05-23 18:54, Peter Eisentraut wrote:\n> > To recap, the idea here was to change the default authentication methods\n> > that initdb sets up, in place of \"trust\".\n> >\n> > I think the ideal scenario would be to use \"peer\" for local and some\n> > appropriate password method (being discussed elsewhere) for host.\n\nI'm also personally all for that change.\n\n> Patch for that attached.\n\nPatch applies and compiles cleanly, same for documentation. The\nchange works as intended, so I don't have much to say.\n\n> Note that with this change, running initdb without arguments will now\n> error on those platforms: You need to supply either a password or select\n> a different default authentication method.\n\nShould we make this explicitly stated in the documentation? As a\nreference, it's saying:\n\nThe default client authentication setup is such that users can connect\nover the Unix-domain socket to the same database user name as their\noperating system user names (on operating systems that support this,\nwhich are most modern Unix-like systems, but not Windows) and\notherwise with a password. To assign a password to the initial\ndatabase superuser, use one of initdb's -W, --pwprompt or --pwfile\noptions.\n\n\n",
"msg_date": "Thu, 11 Jul 2019 21:34:25 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On Thu, Jul 11, 2019 at 09:34:25PM +0200, Julien Rouhaud wrote:\n> On Tue, Jun 18, 2019 at 10:33 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> >\n> > On 2019-05-23 18:54, Peter Eisentraut wrote:\n> > > To recap, the idea here was to change the default authentication methods\n> > > that initdb sets up, in place of \"trust\".\n> > >\n> > > I think the ideal scenario would be to use \"peer\" for local and some\n> > > appropriate password method (being discussed elsewhere) for host.\n> \n> I'm also personally all for that change.\n> \n> > Patch for that attached.\n> \n> Patch applies and compiles cleanly, same for documentation. The\n> change works as intended, so I don't have much to say.\n> \n> > Note that with this change, running initdb without arguments will now\n> > error on those platforms: You need to supply either a password or select\n> > a different default authentication method.\n> \n> Should we make this explicitly stated in the documentation? As a\n> reference, it's saying:\n> \n> The default client authentication setup is such that users can connect\n> over the Unix-domain socket to the same database user name as their\n> operating system user names (on operating systems that support this,\n> which are most modern Unix-like systems, but not Windows)\n\nIt turns out that really recent versions of Windows do have it.\n\nhttps://bsmadhu.wordpress.com/2018/08/22/unix-domain-socket-support-in-windows/\n\nNot that this is relevant, or will be, for another couple of years...\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Thu, 11 Jul 2019 22:48:09 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 2019-07-11 21:34, Julien Rouhaud wrote:\n>> Note that with this change, running initdb without arguments will now\n>> error on those platforms: You need to supply either a password or select\n>> a different default authentication method.\n> Should we make this explicitly stated in the documentation? As a\n> reference, it's saying:\n> \n> The default client authentication setup is such that users can connect\n> over the Unix-domain socket to the same database user name as their\n> operating system user names (on operating systems that support this,\n> which are most modern Unix-like systems, but not Windows) and\n> otherwise with a password. To assign a password to the initial\n> database superuser, use one of initdb's -W, --pwprompt or -- pwfile\n> options.\n\nDo you have a suggestion for where to put this and exactly how to phrase\nthis?\n\nI think the initdb reference page would be more appropriate than\nruntime.sgml.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 13 Jul 2019 14:44:13 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On Sat, Jul 13, 2019 at 2:44 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-07-11 21:34, Julien Rouhaud wrote:\n> >> Note that with this change, running initdb without arguments will now\n> >> error on those platforms: You need to supply either a password or select\n> >> a different default authentication method.\n> > Should we make this explicitly stated in the documentation? As a\n> > reference, it's saying:\n> >\n> > The default client authentication setup is such that users can connect\n> > over the Unix-domain socket to the same database user name as their\n> > operating system user names (on operating systems that support this,\n> > which are most modern Unix-like systems, but not Windows) and\n> > otherwise with a password. To assign a password to the initial\n> > database superuser, use one of initdb's -W, --pwprompt or -- pwfile\n> > options.\n>\n> Do you have a suggestion for where to put this and exactly how to phrase\n> this?\n>\n> I think the initdb reference page would be more appropriate than\n> runtime.sgml.\n\nYes initdb.sgml seems more suitable. I was thinking something very\nsimilar to your note, maybe like (also attached if my MUA ruins it):\n\ndiff --git a/doc/src/sgml/ref/initdb.sgml b/doc/src/sgml/ref/initdb.sgml\nindex c47b9139eb..764cf737c7 100644\n--- a/doc/src/sgml/ref/initdb.sgml\n+++ b/doc/src/sgml/ref/initdb.sgml\n@@ -143,6 +143,15 @@ PostgreSQL documentation\n connections.\n </para>\n\n+ <note>\n+ <para>\n+ Running initdb without arguments on platforms lacking\n+ <literal>peer</literal> or Unix-domain socket connections will exit\n+ with an error. On such environments, you need to either provide a\n+ password or choose a different authentication method.\n+ </para>\n+ </note>\n+\n <para>\n Do not use",
"msg_date": "Sat, 13 Jul 2019 18:58:30 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 2019-07-13 18:58, Julien Rouhaud wrote:\n>>> The default client authentication setup is such that users can connect\n>>> over the Unix-domain socket to the same database user name as their\n>>> operating system user names (on operating systems that support this,\n>>> which are most modern Unix-like systems, but not Windows) and\n>>> otherwise with a password. To assign a password to the initial\n>>> database superuser, use one of initdb's -W, --pwprompt or -- pwfile\n>>> options.\n>>\n>> Do you have a suggestion for where to put this and exactly how to phrase\n>> this?\n>>\n>> I think the initdb reference page would be more appropriate than\n>> runtime.sgml.\n> \n> Yes initdb.sgml seems more suitable. I was thinking something very\n> similar to your note, maybe like (also attached if my MUA ruins it):\n\nPushed with that note. Thanks.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 22 Jul 2019 15:21:02 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Pushed with that note. Thanks.\n\nThis has completely broken the buildfarm.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jul 2019 10:11:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "I wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> Pushed with that note. Thanks.\n\n> This has completely broken the buildfarm.\n\nOn inspection, it seems the reason for that is that the buildfarm\nscript runs initdb with '-U buildfarm', so that peer-auth connections\nwill only work if the buildfarm is being run by an OS user named\nexactly \"buildfarm\". That happens to be true on my macOS animals,\nwhich is why they're not broken ... but apparently, nobody else\ndoes it that way.\n\nI'm afraid we're going to have to revert this, at least till\nsuch time as a fixed buildfarm client is in universal use.\n\nAs for the nature of that fix, I don't quite understand why\nthe forced -U is there --- maybe we could just remove it?\nBut there are multiple places in the buildfarm client that\nhave hard-wired references to \"buildfarm\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jul 2019 12:25:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "I wrote:\n> I'm afraid we're going to have to revert this, at least till\n> such time as a fixed buildfarm client is in universal use.\n\n> As for the nature of that fix, I don't quite understand why\n> the forced -U is there --- maybe we could just remove it?\n> But there are multiple places in the buildfarm client that\n> have hard-wired references to \"buildfarm\".\n\nBTW, it looks like the Windows buildfarm critters have a\nseparate problem: they're failing with\n\ninitdb: error: must specify a password for the superuser to enable md5 authentication\n\nOne would imagine that even if we'd given a password to initdb,\nsubsequent connection attempts would fail for lack of a password.\nThere might not be any practical fix except forcing trust auth\nfor the Windows critters.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jul 2019 12:39:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "\nOn 7/22/19 12:25 PM, Tom Lane wrote:\n> I wrote:\n>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>> Pushed with that note. Thanks.\n>> This has completely broken the buildfarm.\n> On inspection, it seems the reason for that is that the buildfarm\n> script runs initdb with '-U buildfarm', so that peer-auth connections\n> will only work if the buildfarm is being run by an OS user named\n> exactly \"buildfarm\". That happens to be true on my macOS animals,\n> which is why they're not broken ... but apparently, nobody else\n> does it that way.\n>\n> I'm afraid we're going to have to revert this, at least till\n> such time as a fixed buildfarm client is in universal use.\n>\n> As for the nature of that fix, I don't quite understand why\n> the forced -U is there --- maybe we could just remove it?\n> But there are multiple places in the buildfarm client that\n> have hard-wired references to \"buildfarm\".\n\n\n\nThis goes back quite a way:\n\n\n commit 7528701abb88ab84f6775448c59b392ca7f33a07\n Author: Andrew Dunstan <andrew@dunslane.net>\n Date: Tue Nov 27 13:47:38 2012 -0500\n\n Run everything as buildfarm rather than local user name.\n \n This will help if we ever want to do things like comparing dump\n diffs.\n Done by setting PGUSER and using initdb's -U option.\n\n\nThe pg_upgrade test (not the cross-version one) doesn't use this - it\nexplicitly unsets PGUSER.\n\nThere are a few things we could do. We could force trust auth, or we\ncould add an ident map that allowed $USER to log in as buildfarm. Finding\nall the places we would need to fix that could be a fun project ...\n\nWe could also maybe teach initdb to honor an environment setting\nINITDB_DEFAULT_AUTH or some such.\n\n\nI agree this should be reverted for now until we work out what we want\nto do.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n\n",
"msg_date": "Mon, 22 Jul 2019 13:02:13 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-22 13:02:13 -0400, Andrew Dunstan wrote:\n> There are a few things we could do. We could force trust auth, or we\n> could add an ident map that allowed $USER to login as buildfarm. Finding\n> all the places we would need to fix that could be a fun project ...\n\nPerhaps we could actually do so automatically when the initdb invoking\nuser isn't the same as the OS user? Imo that'd be generally quite\nuseful, and not just for the regression tets.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Jul 2019 10:40:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "I wrote:\n> BTW, it looks like the Windows buildfarm critters have a\n> separate problem: they're failing with\n> initdb: error: must specify a password for the superuser to enable md5 authentication\n\nI tried doing a run on gaur (old HPUX, so no \"peer\" auth) before the\nrevert happened. It got as far as initdb-check [1], which failed quite\nthoroughly with lots of the same error as above. Depressingly, a lot of\nthe test cases that expected some type of error \"succeeded\", indicating\nthey're not actually checking to see which error they got. Boo hiss.\n\nPresumably Noah's AIX menagerie would have failed in about the\nsame way if it had run.\n\nSo we've got a *lot* of buildfarm work to do before we can think about\nchanging this.\n\nFrankly, this episode makes me wonder whether changing the default is\neven a good idea at this point. People who care about security have\nalready set up their processes to select a useful-to-them auth option,\nwhile people who do not care are unlikely to be happy about having\nsecurity rammed down their throats, especially if it results in the\nsort of push-ups we're looking at having to do in the buildfarm.\nI think this has effectively destroyed the argument that only\ntrivial adjustments will be required.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gaur&dt=2019-07-22%2017%3A08%3A27\n\n\n",
"msg_date": "Mon, 22 Jul 2019 15:15:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "\nOn 7/22/19 12:39 PM, Tom Lane wrote:\n> I wrote:\n>> I'm afraid we're going to have to revert this, at least till\n>> such time as a fixed buildfarm client is in universal use.\n>> As for the nature of that fix, I don't quite understand why\n>> the forced -U is there --- maybe we could just remove it?\n>> But there are multiple places in the buildfarm client that\n>> have hard-wired references to \"buildfarm\".\n> BTW, it looks like the Windows buildfarm critters have a\n> separate problem: they're failing with\n>\n> initdb: error: must specify a password for the superuser to enable md5 authentication\n>\n> One would imagine that even if we'd given a password to initdb,\n> subsequent connection attempts would fail for lack of a password.\n> There might not be any practical fix except forcing trust auth\n> for the Windows critters.\n>\n> \t\t\t\n\n\n\nYeah.\n\n\nModulo this issue, experimentation shows that adding '-A trust' to the\nline in run_build.pl where initdb is called fixes the issue. If we're\ngoing to rely on a buildfarm client fix, that one seems simplest. There\nare a couple of not very widely used modules that need similar treatment\n- TestSepgsql and TestUpgradeXVersion.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Mon, 22 Jul 2019 15:16:32 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "\nOn 7/22/19 3:15 PM, Tom Lane wrote:\n> I wrote:\n>> BTW, it looks like the Windows buildfarm critters have a\n>> separate problem: they're failing with\n>> initdb: error: must specify a password for the superuser to enable md5 authentication\n> I tried doing a run on gaur (old HPUX, so no \"peer\" auth) before the\n> revert happened. It got as far as initdb-check [1], which failed quite\n> thoroughly with lots of the same error as above. Depressingly, a lot of\n> the test cases that expected some type of error \"succeeded\", indicating\n> they're not actually checking to see which error they got. Boo hiss.\n>\n> Presumably Noah's AIX menagerie would have failed in about the\n> same way if it had run.\n>\n> So we've got a *lot* of buildfarm work to do before we can think about\n> changing this.\n\n\n\nOuch. I'll test more on Windows.\n\n\n\n>\n> Frankly, this episode makes me wonder whether changing the default is\n> even a good idea at this point. People who care about security have\n> already set up their processes to select a useful-to-them auth option,\n> while people who do not care are unlikely to be happy about having\n> security rammed down their throats, especially if it results in the\n> sort of push-ups we're looking at having to do in the buildfarm.\n> I think this has effectively destroyed the argument that only\n> trivial adjustments will be required.\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gaur&dt=2019-07-22%2017%3A08%3A27\n>\n\n\nThere's a strong tendency these days to be secure by default, so I\nunderstand the motivation.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Mon, 22 Jul 2019 15:20:50 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "I wrote:\n> I tried doing a run on gaur (old HPUX, so no \"peer\" auth) before the\n> revert happened. It got as far as initdb-check [1], which failed quite\n> thoroughly with lots of the same error as above.\n> ...\n> Presumably Noah's AIX menagerie would have failed in about the\n> same way if it had run.\n\nOh --- actually, Noah's machines *did* report in on that commit,\nand they got past initdb-check, only to fail at install-check-C\nmuch the same as most of the rest of the world.\n\nStudying their configure output, the reason is that they have\ngetpeereid(), so that AIX *does* support peer auth. At least\non that version of AIX. That makes it only HPUX and Windows\nthat can't do it.\n\nBTW, after looking at the patch a bit more, I'm pretty distressed\nby this:\n\n--- a/src/include/port.h\n+++ b/src/include/port.h\n@@ -361,6 +361,11 @@ extern int fls(int mask);\n extern int getpeereid(int sock, uid_t *uid, gid_t *gid);\n #endif\n \n+/* must match src/port/getpeereid.c */\n+#if defined(HAVE_GETPEEREID) || defined(SO_PEERCRED) || defined(LOCAL_PEERCRED) || defined(HAVE_GETPEERUCRED)\n+#define HAVE_AUTH_PEER 1\n+#endif\n+\n #ifndef HAVE_ISINF\n extern int isinf(double x);\n #else\n\nI seriously doubt that port.h includes, or should be made to include,\nwhatever headers provide SO_PEERCRED and/or LOCAL_PEERCRED. That means\nthat the result of this test is going to be different in different .c\nfiles depending on what was or wasn't included. 
It could also get\nsilently broken on specific platforms by an ill-advised #include removal\n(and, once we fix the buildfarm script to not fail on PEER-less platforms,\nthe buildfarm wouldn't detect the breakage either).\n\nAnother objection to this is that it's entirely unclear from the\nbuildfarm logs whether HAVE_AUTH_PEER got set on a particular system.\n\nI think that when/if we try again, configure itself ought to be\nresponsible for setting HAVE_AUTH_PEER after probing for these\nvarious antecedent symbols.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jul 2019 18:08:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 7/22/19 3:20 PM, Andrew Dunstan wrote:\n> \n> On 7/22/19 3:15 PM, Tom Lane wrote:\n>>\n>> Frankly, this episode makes me wonder whether changing the default is\n>> even a good idea at this point. People who care about security have\n>> already set up their processes to select a useful-to-them auth option,\n>> while people who do not care are unlikely to be happy about having\n>> security rammed down their throats, especially if it results in the\n>> sort of push-ups we're looking at having to do in the buildfarm.\n>> I think this has effectively destroyed the argument that only\n>> trivial adjustments will be required.\n> \n> There's a strong tendency these days to be secure by default, so I\n> understand the motivation.\n\nSo perhaps to bring back the idea that spawned this thread[1], as an\ninterim step, we provide some documented recommendations on how to set\nthings up. The original patch has a warning box (and arguably defaulting\nto \"trust\" deserves a warning) but could be revised to be inline with\nthe text.\n\nJonathan\n\n[1]\nhttps://www.postgresql.org/message-id/bec17f0a-ddb1-8b95-5e69-368d9d0a3390%40postgresql.org",
"msg_date": "Mon, 22 Jul 2019 18:48:11 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 2019-07-22 21:16, Andrew Dunstan wrote:\n> Modulo this issue, experimentation shows that adding '-A trust' to the\n> line in run_build.pl where initdb is called fixes the issue. If we're\n> going to rely on a buildfarm client fix that one seems simplest.\n\nYes, that is the right fix. It's what the in-tree test drivers\n(pg_regress, PostgresNode.pm) do.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 23 Jul 2019 08:12:12 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "\nOn 7/22/19 1:40 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2019-07-22 13:02:13 -0400, Andrew Dunstan wrote:\n>> There are a few things we could do. We could force trust auth, or we\n>> could add an ident map that allowed $USER to login as buildfarm. Finding\n>> all the places we would need to fix that could be a fun project ...\n> Perhaps we could actually do so automatically when the initdb invoking\n> user isn't the same as the OS user? Imo that'd be generally quite\n> useful, and not just for the regression tets.\n>\n\nyeah, although I think that's a separate exercise.\n\n\nSo we'd have something like\n\n\nin pg_hba.conf\n\n\n local all all peer map=datadir_owner\n\n\nand in pg_ident.conf\n\n\n datadir_owner $USER $superuser\n\n\ncheers\n\n\nandrew\n\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 24 Jul 2019 09:55:05 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "\nOn 7/23/19 2:12 AM, Peter Eisentraut wrote:\n> On 2019-07-22 21:16, Andrew Dunstan wrote:\n>> Modulo this issue, experimentation shows that adding '-A trust' to the\n>> line in run_build.pl where initdb is called fixes the issue. If we're\n>> going to rely on a buildfarm client fix that one seems simplest.\n> Yes, that is the right fix. It's what the in-tree test drivers\n> (pg_regress, PostgresNode.pm) do.\n>\n\n\nI have done that, I will put out a new release probably right after the\nCF closes.\n\n\nI think we also need to change vcregress.pl to use trust explicitly for\nupgrade checks, just like the Unix upgrade test script does. That should\nhelp to future-proof us a bit.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 24 Jul 2019 10:00:34 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 2019-07-24 16:00, Andrew Dunstan wrote:\n> I think we also need to change vcregress.pl to use trust explicitly for\n> upgrade checks, just like the Unix upgrade test script does. That should\n> help to future-proof us a bit.\n\nRight, I'll add that to my patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 24 Jul 2019 21:59:36 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 7/24/19 10:00 AM, Andrew Dunstan wrote:\n> On 7/23/19 2:12 AM, Peter Eisentraut wrote:\n>> On 2019-07-22 21:16, Andrew Dunstan wrote:\n>>> Modulo this issue, experimentation shows that adding '-A trust' to the\n>>> line in run_build.pl where initdb is called fixes the issue. If we're\n>>> going to rely on a buildfarm client fix that one seems simplest.\n>> Yes, that is the right fix. It's what the in-tree test drivers\n>> (pg_regress, PostgresNode.pm) do.\n>>\n>\n> I have done that, I will put out a new release probably right after the\n> CF closes.\n>\n>\n> I think we also need to change vcregress.pl to use trust explicitly for\n> upgrade checks, just like the Unix upgrade test script does. That should\n> help to future-proof us a bit.\n>\n>\n\nHere's a patch along those lines that pretty much syncs up\nvcregress.pl's initdb with pg_upgrade's test.sh.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 24 Jul 2019 16:02:41 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 2019-07-22 19:40, Andres Freund wrote:\n> On 2019-07-22 13:02:13 -0400, Andrew Dunstan wrote:\n>> There are a few things we could do. We could force trust auth, or we\n>> could add an ident map that allowed $USER to login as buildfarm. Finding\n>> all the places we would need to fix that could be a fun project ...\n> \n> Perhaps we could actually do so automatically when the initdb invoking\n> user isn't the same as the OS user? Imo that'd be generally quite\n> useful, and not just for the regression tets.\n\nIt seems to me that there is something missing in our client\nauthentication system here.\n\nIf I'm logged in as the OS user that owns the data directory, I should\nbe able to log in to the database system via local socket as any user.\nBecause why stop me? I can just change pg_hba.conf to let me in.\n\nThat would also address this problem that when you use the initdb -U\noption, the proposed default \"peer\" setting doesn't help you much.\nMaking a pg_ident.conf map automatically helps for that particular user\ncombination, but then not for other users. (There is no \"sameuser plus\nthese additional mappings\".)\n\nI think we could just define that if geteuid == getpeereid, then\nauthentication succeeds. Possibly make that a setting if someone wants\nto turn it off.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 24 Jul 2019 22:08:34 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> If I'm logged in as the OS user that owns the data directory, I should\n> be able to log in to the database system via local socket as any user.\n> Because why stop me? I can just change pg_hba.conf to let me in.\n\nHmm ... there's probably some minor loss of safety there, but not\nmuch, as you say.\n\n> I think we could just define that if geteuid == getpeereid, then\n> authentication succeeds. Possibly make that a setting if someone wants\n> to turn it off.\n\nWe would still need to make the proposed buildfarm changes, though,\nbecause Windows. (And HPUX, though if it were the only holdout\nmaybe we could consider blowing it off.)\n\nI'm not that excited about weakening our authentication rules\njust to make things easier for the buildfarm.\n\nIt's possible that what you suggest is a good idea anyway to reduce\nthe user impact of switching from trust to peer as default auth.\nHowever, I'm a little worried that we'll start getting a lot of \"it\nworks in psql but I can't connect via JDBC-or-whatever\" complaints.\nSo I dunno if it will really make things easier for users.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Jul 2019 16:18:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
},
{
"msg_contents": "On 2019-07-24 22:18, Tom Lane wrote:\n>> I think we could just define that if geteuid == getpeereid, then\n>> authentication succeeds. Possibly make that a setting if someone wants\n>> to turn it off.\n> \n> We would still need to make the proposed buildfarm changes, though,\n> because Windows. (And HPUX, though if it were the only holdout\n> maybe we could consider blowing it off.)\n> \n> I'm not that excited about weakening our authentication rules\n> just to make things easier for the buildfarm.\n\nYes, this idea is separate from those buildfarm changes.\n\n> It's possible that what you suggest is a good idea anyway to reduce\n> the user impact of switching from trust to peer as default auth.\n> However, I'm a little worried that we'll start getting a lot of \"it\n> works in psql but I can't connect via JDBC-or-whatever\" complaints.\n\nWell, the existence of \"local\" vs. \"host\" already has that effect anyway.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 24 Jul 2019 22:34:00 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb recommendations"
}
] |
[
{
"msg_contents": "Hello,\n\nI noticed a small typo (word 'the' is repeated twice) in the recent tableam doc.\n\nThe patch remove the typo in the doc ('/doc/src/sgml/tableam.sgml').\n\nRegard,\nAlexis",
"msg_date": "Fri, 5 Apr 2019 16:26:16 +0000",
"msg_from": "Alexis Andrieu <andrieu.alexis@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Small typo fix on tableam documentation"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-05 16:26:16 +0000, Alexis Andrieu wrote:\n> I noticed a small typo (word 'the' is repeated twice) in the recent tableam doc.\n> \n> The patch remove the typo in the doc ('/doc/src/sgml/tableam.sgml').\n\nThanks for noticing and reporting! It was already reported by Justin\nPryzby. I'd have given you co-credit if I had seen this email beforehand.\n\ncommit 86cc06d1cf9c30be3b79207242e6746f0f0b681c (HEAD -> master, upstream/master)\nAuthor: Andres Freund <andres@anarazel.de>\nDate: 2019-04-05 09:45:59 -0700\n\n table: docs: fix typos and grammar.\n \n Author: Justin Pryzby\n Discussion: https://postgr.es/m/20190404055138.GA24864@telsasoft.com\n\n\n- Andres\n\n\n",
"msg_date": "Fri, 5 Apr 2019 10:18:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Small typo fix on tableam documentation"
}
] |
[
{
"msg_contents": "Hi,\n\nIn this email I want to give a brief status update of the table access\nmethod work - I assume that most of you sensibly haven't followed it\ninto all nooks and crannies.\n\nI want to thank Haribabu, Alvaro, Alexander, David, Dmitry and all the\nothers that collaborated on making tableam happen. It was/is a huge\nproject.\n\nI think what's in v12 - I don't know of any non-cleanup / bugfix work\npending for 12 - is a pretty reasonable initial set of features. It\nallows to reimplement a heap like storage without any core modifications\n(except WAL logging, see below); it is not sufficient to implement a\ngood index oriented table AM. It does not allow to store the catalog in\na non heap table.\n\n\nThe tableam interface itself doesn't care that much about the AM\ninternally stores data. Most of the API (sequential scans, index\nlookups, insert/update/delete) don't know about blocks, and only\nindirectly & optionally about buffers (via BulkInsertState). There's a\nfew callbacks / functions that do care about blocks, because it's not\nclear, or would have been too much work, to remove the dependency. This\ncurrently is:\n\n- ANALYZE integration - currently the sampling logic is tied to blocks.\n- index build range scans - the range is defined as blocks\n- planner relation size estimate - but that could trivially just be\n filled with size-in-bytes / BLCKSZin the callback.\n- the (optional) bitmap heap scan API - that's fairly intrinsically\n block based. An AM could just internally subdivide TIDs in a different\n way, but I don't think a bitmap scan like we have would e.g. 
make a\n lot of sense for an index oriented table without any sort of stable\n tid.\n- the sample scan API - tsmapi.h is block based, so the tableam.h API is\n as well.\n\nI think none of these are limiting in a particularly bad way.\n\n\nThe most constraining factor for storage, I think, is that currently the\nAPI relies on ItemPointerData style TIDs in a number of places (i.e. a 6\nbyte tuple identifier). One can implement scans, and inserts into\nindex-less tables without providing that, but no updates, deletes etc.\nOne reason for that is that it'd just have required more changes to\nexecutor etc to allow for wider identifiers, but the primary reason is\nthat indexes currently simply don't support anything else.\n\nI think this is, by far, the biggest limitation of the API. If one\ne.g. wanted to implement a practical index-organized-table, the 6 byte\nlimitation obviously would become a limitation very quickly. I suspect\nthat we're going to want to get rid of that limitation in indexes before\nlong for other reasons too, to allow global indexes (which'd need to\nencode the partition somewhere).\n\n\nWith regards to storing the rows themselves, the second biggest\nlimitation is a limitation that is not actually a part of tableam\nitself: WAL. Many tableam's would want to use WAL, but we only have\nextensible WAL as part of generic_xlog.h. While that's useful to allow\nprototyping etc, it's imo not efficient enough to build a competitive\nstorage engine for OLTP (OLAP probably much less of a problem). I don't\nknow what the best approach here is - allowing \"well known\" extensions\nto register rmgr entries would be the easiest solution, but it's\ncertainly a bit crummy.\n\n\nCurrently there's some, fairly minor, requirement that TIDs are actually\nunique when not using a snapshot qualifier. That's currently only\nrelevant for GetTupleForTrigger(), AfterTriggerSaveEvent() and\nEvalPlanQualFetchRowMarks(), which use SnapshotAny. 
That prevents AMs\nfrom implementing in-place updates (thus a problem e.g. for zheap).\nI've a patch that fixes that, but it's too hacky for v12 - there's not\nalways a convenient snapshot to fetch a row (e.g. in\nGetTupleForTrigger() after EPQ the row isn't visible to\nes_snapshot).\n\n\nA second set of limitations is around making more of tableam\noptional. Right now it e.g. is not possible to have an AM that doesn't\nimplement insert/update/delete. Obviously an AM can just throw an error\nin the relevant callbacks, but I think it'd be better if we made those\ncallbacks optional, and threw errors at parse-analysis time (both to\nmake the errors consistent, and to ensure it's consistently thrown,\nrather than only when e.g. an UPDATE actually finds a row to update).\n\n\nCurrently foreign keys are allowed between tables of different types of\nAM. I am wondering whether we ought to allow AMs to forbid being\nreferenced. If e.g. an AM has lower consistency guarantees than the AM\nof the table referencing it, it might be preferrable to forbid\nthat. OTOH, I guess such an AM could just require UNLOGGED to be used.\n\n\nAnother restriction is actually related to UNLOGGED - currently the\nUNLOGGED processing after crashes works by recognizing init forks by\nfile name. But what if e.g. the storage isn't inside postgres files? Not\nsure if we actually can do anything good about that.\n\n\nThe last issue I know about is that nodeBitmapHeapscan.c and\nnodeIndexOnlyscan.c currently directly accesses the visibilitymap. Which\nmeans if an AM doesn't use the VM, they're never going to use the\noptimized path. And conversely if the AM uses the VM, it needs to\ninternally map tids in way compatible with heap. I strongly suspect\nthat we're going to have to fix this quite soon.\n\n\nIt'd be a pretty significant amount of work to allow storing catalogs in\na non-heap table. 
One difficulty is that there's just a lot of direct\naccesses to catalog via heapam.h APIs - while a significant amount of\nwork to \"fix\" that, it's probably not very hard for each individual\nsite. There's a few places that rely on heap internals (checking xmin\nfor invalidation and the like). I think the biggest issue however would\nbe the catalog bootstrapping - to be able to read pg_am, we obviously\nneed to go through relcache.c's bootstrapping, and that only works\nbecause we hardcode how those tables look like. I personally don't\nthink it's particularly important issue to work on, nor am I convinced\nthat there'd be buy-in to make the necessary extensive changes.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 5 Apr 2019 13:25:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Status of the table access method work"
},
{
"msg_contents": "On 05/04/2019 23:25, Andres Freund wrote:\n> I think what's in v12 - I don't know of any non-cleanup / bugfix work\n> pending for 12 - is a pretty reasonable initial set of features.\n\nHooray!\n\n> - the (optional) bitmap heap scan API - that's fairly intrinsically\n> block based. An AM could just internally subdivide TIDs in a different\n> way, but I don't think a bitmap scan like we have would e.g. make a\n> lot of sense for an index oriented table without any sort of stable\n> tid.\n\nIf an AM doesn't implement the bitmap heap scan API, what happens? \nBitmap scans are disabled?\n\nEven if an AM isn't block-oriented, the bitmap heap scan API still makes \nsense as long as there's some correlation between TIDs and physical \nlocation. The only really broken thing about that currently is the \nprefetching: nodeBitmapHeapScan.c calls PrefetchBuffer() directly with \nthe TID's block numbers. It would be pretty straightforward to wrap that \nin a callback, so that the AM could do something different.\n\nOr move even more of the logic to the AM, so that the AM would get the \nwhole TIDBitmap in table_beginscan_bm(). It could then implement the \nfetching and prefetching as it sees fit.\n\nI don't think it's urgent, though. We can cross that bridge when we get \nthere, with the first AM that needs that flexibility.\n\n> The most constraining factor for storage, I think, is that currently the\n> API relies on ItemPointerData style TIDs in a number of places (i.e. a 6\n> byte tuple identifier).\n\nI think 48 bits would be just about enough, but it's even more limited \nthan you might at the moment. There are a few places that assume that \nthe offsetnumber <= MaxHeapTuplesPerPage. See ginpostinglist.c, and \nMAX_TUPLES_PER_PAGE in tidbitmap.c. Also, offsetnumber can't be 0, \nbecause that makes the ItemPointer invalid, which is inconvenient if you \ntried to use ItemPointer as just an arbitrary 48-bit integer.\n\n- Heikki\n\n\n",
"msg_date": "Mon, 8 Apr 2019 14:53:53 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Status of the table access method work"
},
{
"msg_contents": "Hi\n\nOn 2019-04-08 14:53:53 +0300, Heikki Linnakangas wrote:\n> On 05/04/2019 23:25, Andres Freund wrote:\n> > - the (optional) bitmap heap scan API - that's fairly intrinsically\n> > block based. An AM could just internally subdivide TIDs in a different\n> > way, but I don't think a bitmap scan like we have would e.g. make a\n> > lot of sense for an index oriented table without any sort of stable\n> > tid.\n> \n> If an AM doesn't implement the bitmap heap scan API, what happens? Bitmap\n> scans are disabled?\n\nYea, the planner doesn't consider them. It just masks the index's\namhasgetbitmap. Seems to be the most reasonable thing to do?\n\n\n> Even if an AM isn't block-oriented, the bitmap heap scan API still makes\n> sense as long as there's some correlation between TIDs and physical\n> location.\n\nYea, it could be a non-linear mapping. But I'm honestly not sure how\nmany non-block oriented AMs with such a correlation there are - I mean\nyou're not going to have that in say an IOT. And it'd be trivial to just\n\"fake\" a block mapping for an in-memory AM.\n\n\n> The only really broken thing about that currently is the\n> prefetching: nodeBitmapHeapScan.c calls PrefetchBuffer() directly with the\n> TID's block numbers. It would be pretty straightforward to wrap that in a\n> callback, so that the AM could do something different.\n\nThat, and the VM_ALL_VISIBLE() checks both in nodeBitmapHeapscan.c and\nnodeIndexonlyscan.c.\n\n\n> Or move even more of the logic to the AM, so that the AM would get the whole\n> TIDBitmap in table_beginscan_bm(). It could then implement the fetching and\n> prefetching as it sees fit.\n> \n> I don't think it's urgent, though. 
We can cross that bridge when we get\n> there, with the first AM that needs that flexibility.\n\nYea, it seemed nontrivial (not in really hard, just not obvious), and\nthe implicated code duplication scared me away.\n\n\n> > The most constraining factor for storage, I think, is that currently the\n> > API relies on ItemPointerData style TIDs in a number of places (i.e. a 6\n> > byte tuple identifier).\n> \n> I think 48 bits would be just about enough\n\nI don't think that's really true. Consider e.g. implementing an index\noriented table - there's no way you can efficiently implement one with\nthat small a key. You basically need a helper index just to have\nefficient and small enough tids. And given that we're also going to\nneed wider tids for global indexes, I suspect we're just going to have\nto bite into the sour apple and make tids variable width.\n\n\n> , but it's even more limited than\n> you might at the moment. There are a few places that assume that the\n> offsetnumber <= MaxHeapTuplesPerPage. See ginpostinglist.c, and\n> MAX_TUPLES_PER_PAGE in tidbitmap.c. Also, offsetnumber can't be 0, because\n> that makes the ItemPointer invalid, which is inconvenient if you tried to\n> use ItemPointer as just an arbitrary 48-bit integer.\n\nGood point.\n\nThanks for looking (and playing, in the other thread)!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 8 Apr 2019 09:38:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Status of the table access method work"
},
{
"msg_contents": "On Sat, Apr 6, 2019 at 7:25 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> In this email I want to give a brief status update of the table access\n> method work - I assume that most of you sensibly haven't followed it\n> into all nooks and crannies.\n>\n> I want to thank Haribabu, Alvaro, Alexander, David, Dmitry and all the\n> others that collaborated on making tableam happen. It was/is a huge\n> project.\n\n\nA big thank you Andres for your enormous efforts in this patch.\nWithout your involvement, this patch couldn't have been made into v12.\n\n\nWith regards to storing the rows themselves, the second biggest\n> limitation is a limitation that is not actually a part of tableam\n> itself: WAL. Many tableam's would want to use WAL, but we only have\n> extensible WAL as part of generic_xlog.h. While that's useful to allow\n> prototyping etc, it's imo not efficient enough to build a competitive\n> storage engine for OLTP (OLAP probably much less of a problem). I don't\n> know what the best approach here is - allowing \"well known\" extensions\n> to register rmgr entries would be the easiest solution, but it's\n> certainly a bit crummy.\n>\n\nI got the same doubt when i looked into some of the UNDO patches\nwhere it tries to modify the core code to add UNDO specific WAL types.\nDifferent AM's may need different set of operations to be WAL logged,\nso it may be better for the AM's to register their own types?\n\nRegards,\nHaribabu Kommi\nFujitsu Australia\n\nOn Sat, Apr 6, 2019 at 7:25 AM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nIn this email I want to give a brief status update of the table access\nmethod work - I assume that most of you sensibly haven't followed it\ninto all nooks and crannies.\n\nI want to thank Haribabu, Alvaro, Alexander, David, Dmitry and all the\nothers that collaborated on making tableam happen. 
It was/is a huge\nproject.A big thank you Andres for your enormous efforts in this patch.Without your involvement, this patch couldn't have been made into v12. \nWith regards to storing the rows themselves, the second biggest\nlimitation is a limitation that is not actually a part of tableam\nitself: WAL. Many tableam's would want to use WAL, but we only have\nextensible WAL as part of generic_xlog.h. While that's useful to allow\nprototyping etc, it's imo not efficient enough to build a competitive\nstorage engine for OLTP (OLAP probably much less of a problem). I don't\nknow what the best approach here is - allowing \"well known\" extensions\nto register rmgr entries would be the easiest solution, but it's\ncertainly a bit crummy.I got the same doubt when i looked into some of the UNDO patcheswhere it tries to modify the core code to add UNDO specific WAL types.Different AM's may need different set of operations to be WAL logged,so it may be better for the AM's to register their own types?Regards,Haribabu KommiFujitsu Australia",
"msg_date": "Tue, 9 Apr 2019 12:12:23 +1000",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Status of the table access method work"
},
{
"msg_contents": "On 08/04/2019 19:38, Andres Freund wrote:\n> On 2019-04-08 14:53:53 +0300, Heikki Linnakangas wrote:\n>> On 05/04/2019 23:25, Andres Freund wrote:\n>>> - the (optional) bitmap heap scan API - that's fairly intrinsically\n>>> block based. An AM could just internally subdivide TIDs in a different\n>>> way, but I don't think a bitmap scan like we have would e.g. make a\n>>> lot of sense for an index oriented table without any sort of stable\n>>> tid.\n>>\n>> If an AM doesn't implement the bitmap heap scan API, what happens? Bitmap\n>> scans are disabled?\n> \n> Yea, the planner doesn't consider them. It just masks the index's\n> amhasgetbitmap. Seems to be the most reasonable thing to do?\n\nYep.\n\n>> Even if an AM isn't block-oriented, the bitmap heap scan API still makes\n>> sense as long as there's some correlation between TIDs and physical\n>> location.\n> \n> Yea, it could be a non-linear mapping. But I'm honestly not sure how\n> many non-block oriented AMs with such a correlation there are - I mean\n> you're not going to have that in say an IOT. And it'd be trivial to just\n> \"fake\" a block mapping for an in-memory AM.\n\nNow that Ashwin conveniently posted the ZedStore prototype we started to \nhack on [1], I'll point to that as an example :-). It stores data in a \nB-tree (or rather, multiple B-trees) on TIDs. So there's very high \ncorrelation between TIDs and physical locality, but it's not block oriented.\n\nAnother example would be the \"LZ4 Compressed Storage Manager\" that \nNikolai envisioned recently [2]. Before we came up with the idea of \nusing b-trees in ZedStore, we were actually thinking of something very \nsimilar to that. 
Although that one perhaps still counts as \n\"block-oriented\" as far as the bitmap heap scan API is concerned, as it \nstill deals with blocks, they're just mapped to different physical \nlocations.\n\nI'm not sure how an Index-Organized-Table would work, but I think it \nwould want to just get the whole bitmap, and figure out the best order \nto fetch the rows by itself.\n\nPS. Seems that having a table AM API has opened the floodgates for \nstorage ideas. Nice!\n\n- Heikki\n\n[1] \nhttps://www.postgresql.org/message-id/CALfoeiuF-m5jg51mJUPm5GN8u396o5sA2AF5N97vTRAEDYac7w@mail.gmail.com\n[2] \nhttps://www.postgresql.org/message-id/flat/11996861554042351%40iva4-dd95b404a60b.qloud-c.yandex.net\n\n\n",
"msg_date": "Tue, 9 Apr 2019 09:32:17 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Status of the table access method work"
},
{
"msg_contents": "> On Tue, Apr 9, 2019 at 4:12 AM Haribabu Kommi <kommi.haribabu@gmail.com> wrote:\n>\n> On Sat, Apr 6, 2019 at 7:25 AM Andres Freund <andres@anarazel.de> wrote:\n>>\n>> With regards to storing the rows themselves, the second biggest\n>> limitation is a limitation that is not actually a part of tableam\n>> itself: WAL. Many tableam's would want to use WAL, but we only have\n>> extensible WAL as part of generic_xlog.h. While that's useful to allow\n>> prototyping etc, it's imo not efficient enough to build a competitive\n>> storage engine for OLTP (OLAP probably much less of a problem). I don't\n>> know what the best approach here is - allowing \"well known\" extensions\n>> to register rmgr entries would be the easiest solution, but it's\n>> certainly a bit crummy.\n>\n>\n> I got the same doubt when i looked into some of the UNDO patches\n> where it tries to modify the core code to add UNDO specific WAL types.\n> Different AM's may need different set of operations to be WAL logged,\n> so it may be better for the AM's to register their own types?\n\nI'm also curious about that. As far as I can see the main objection against\nthat was that in this case the recovery process will depend on an extension,\nwhich could violate reliability. But I wonder if this argument is still valid\nfor AM's, since the whole data is kind of depends on it, not only the recovery.\n\nBtw, can someone elaborate, why exactly generic_xlog is not efficient enough?\nI've went through the corresponding thread, looks like generic WAL records are\nbigger than normal one - is it the only reason?\n\n\n",
"msg_date": "Tue, 9 Apr 2019 11:17:29 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Status of the table access method work"
},
{
"msg_contents": "On Tue, Apr 9, 2019 at 2:12 PM Haribabu Kommi <kommi.haribabu@gmail.com> wrote:\n> I got the same doubt when i looked into some of the UNDO patches\n> where it tries to modify the core code to add UNDO specific WAL types.\n> Different AM's may need different set of operations to be WAL logged,\n> so it may be better for the AM's to register their own types?\n\nIn the current undo proposal, the undo subsystem itself needs an rmgr\nID for WAL-logging of some low level undo log management records (ie\nits own record types), but then any undo-aware AM would also need to\nhave its own rmgr ID for its own universe of WAL records (its own\ntypes, meaningful to it alone), and that same rmgr ID is used also for\nits undo records (which themselves have specific types). That is, in\nrmgrlist.h, an undo-aware AM would register not only its redo function\n(called for each WAL record in recovery) but also its undo function\n(called when transaction roll back, if your transaction generated any\nundo records). Which raises the question of how a hypothetical\nundo-aware AM could deal with undo records if it's using generic WAL\nrecords. I haven't thought about that. A couple of ideas we've\nbounced around to allow extensions to work with specific WAL records:\n(1) a community-wide registry of rmgr IDs (basically, just allocate\nthe IDs for all known extensions in a header in the tree, like IANA),\nor (2) a per-cluster registry scheme where an extension, identified by\nits name, would be permanently allocated an rmgr number and library +\ncallback functions for the lifetime of that cluster. Or something\nlike that.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Apr 2019 21:41:43 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Status of the table access method work"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-09 11:17:29 +0200, Dmitry Dolgov wrote:\n> I'm also curious about that. As far as I can see the main objection against\n> that was that in this case the recovery process will depend on an extension,\n> which could violate reliability.\n\nI don't think that's a primary concern - although it is one. The mapping\nfrom types of records to the handler function needs to be accessible at\na very early state, when the cluster isn't yet in a consistent state. So\nwe can't just go an look into pg_am, and look up a handler function, etc\n- crash recovery happens much earlier than that is possible. Nor do we\nwant the mapping of 'rmgr id' -> 'extension' to be defined in the config\nfile, that's way too likely to be wrong. So there needs to be a\ndifferent type of mapping, accessible outside the catalog. I supect we'd\nhave to end up with something very roughly like the relmapper\ninfrastructure. A tertiary problem is then how to identify extensions\nin that mapping - although I suspect just using any library name that\ncan be passed to load_library() will be OK.\n\n\n> But I wonder if this argument is still valid for AM's, since the whole\n> data is kind of depends on it, not only the recovery.\n\nI don't buy that argument. If you have an AM that registers, using a new\nfacility, replay routines, and then it errors out / crashes during\nthose, there's no way to get the cluster back into a consistent\nstate. So it's not just the one table in that AM that's gone, it's the\nentire cluster that's impacted.\n\n\n> Btw, can someone elaborate, why exactly generic_xlog is not efficient enough?\n> I've went through the corresponding thread, looks like generic WAL records are\n> bigger than normal one - is it the only reason?\n\nThat's one big reason. But also, you just can't do much more than \"write\nthis block into that file\" during recovery with. 
A lot of our replay\nroutines intentionally do more complicated tasks.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 9 Apr 2019 05:32:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Status of the table access method work"
},
{
"msg_contents": "On Fri, Apr 5, 2019 at 11:25 PM Andres Freund <andres@anarazel.de> wrote:\n> I want to thank Haribabu, Alvaro, Alexander, David, Dmitry and all the\n> others that collaborated on making tableam happen. It was/is a huge\n> project.\n\nThank you so much for bringing this project to commit! Excellent work!\n\nYour explanation of existing limitations looks very good and\nconvincing. But I think there is one you didn't mention. We require\nnew table AMs to basically save old \"contract\" between heap and\nindexes. We have \"all or nothing\" choice during updates. So, storage\ncan either insert to none of indexes or insert to all of them\n(HOT-like update). I think any storage, which is going to address\n\"write amplification\" point raised by Uber, needs to break this\n\"contract\".\n\nFor example, zheap is promised to implement delete-marking indexes.\nBut it's not yet published. And for me it's not clear that this\napproach is better among the alternatives. With delete-marking\napproach you need to update index tuples corresponding to old values\nof updated fields. But additionally to that it's not trivial to\ndelete index tuple. In order to do that, you need to both locate this\nindex tuple and know that this index value isn't present in undo\nchain. So, it's likely required another index lookup during purging\nof undo chain. Thus, we basically need to random lookup index twice\nfor every deleted index tuple. Also, it becomes more complex to\nlookup appropriate heap tuple during index scan. Then you need to\ncheck not only visibility, but also matching index value (here we need\nto adjust index_fetch_tuple interface). Because it might happen that\nvisible to you version have different index value. That may lead to\nO(N^2) performance while accessing single row with N versions (MySQL\nInnoDB has this problem).\n\nAlternative idea is to have MVCC-aware indexes. This approach looks\nmore attractive for me. 
In this approach you basically need xmin,\nxmax fields in index tuples. On insertion of index tuple you fill\nit's xmin. On update, previous index tuple is marked with xmax.\nAfter that outdated index tuples might be deleted in the lazy manner\nwhen page space is required. So, only one random access is required\nfor deleted index tuple. With this approach fetching single row is\nO(N). Also, index-only scan becomes very easy and doesn't even need a\nvisibility map. The only problem here is extra space requirements for\nindex tuples. But I think, this is well-isolated problem, which is\neasy to attack. For instance, some visibility information could be\nevicted to undo chain (like zheap does for its tuples). Also, we can\nhave special bit for \"all visible\" index tuples. With \"all visible\"\nbit set this tuple can get rid of visibility fields. We can do this\nfor index tuples, because if index tuple requires extra space we can\nsplit the page, in spite of heap where tuples are fixed in pages and\nxmax needs to be updated in-place.\n\nI understand that delete-marking indexes have some advantages, and\nsome people find them more appealing. But my point is that we\nshouldn't builtin one of this approaches into API unless we have\nconcrete proof that this approach is strongly overcomes another. It\nwould be better to have our table-AM API flexible enough to implement\nboth. I can imagine we have proper encapsulation here bringing more\ninteraction with indexes to the table AM side.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 10 Apr 2019 20:14:17 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Status of the table access method work"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-10 20:14:17 +0300, Alexander Korotkov wrote:\n> Your explanation of existing limitations looks very good and\n> convincing. But I think there is one you didn't mention. We require\n> new table AMs to basically save old \"contract\" between heap and\n> indexes. We have \"all or nothing\" choice during updates. So, storage\n> can either insert to none of indexes or insert to all of them\n> (HOT-like update).\n\nI think that's a problem, and yea, I should have mentioned it. I'd\nearlier thought about it and then forgot.\n\nI however don't think we should design the interface for this before we\nhave at least one AM that's actually in decent-ish shape that needs\nit. I seriously doubt we'll get the interface right enough.\n\nNote: I'm *extremely* *extremely* doubtful that moving the full executor\ninvocations for expression indices etc into the tableam is a sane thing\nto do. It's possible to convince me there's no alternative, but it'll be\nreally hard.\n\nI suspect the right direction will be more going in a direction of\ncomputing new index tuples for expression indexes before tableam gets\ninvolved. If we do that right, we can also implement the stuff that\n1c53c4dec3985512f7f2f53c9d76a5295cd0a2dd reverted in a proper way.\n\n\n> I think any storage, which is going to address \"write amplification\"\n> point raised by Uber, needs to break this \"contract\".\n\nFWIW, I don't think it makes much point in using Uber as a justification\nfor anything here. Their analysis was so deeply flawed and motivated by\nnon-technical reasons that it should just be ignored.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 10 Apr 2019 10:32:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Status of the table access method work"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 8:32 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-04-10 20:14:17 +0300, Alexander Korotkov wrote:\n> > Your explanation of existing limitations looks very good and\n> > convincing. But I think there is one you didn't mention. We require\n> > new table AMs to basically save old \"contract\" between heap and\n> > indexes. We have \"all or nothing\" choice during updates. So, storage\n> > can either insert to none of indexes or insert to all of them\n> > (HOT-like update).\n>\n> I think that's a problem, and yea, I should have mentioned it. I'd\n> earlier thought about it and then forgot.\n>\n> I however don't think we should design the interface for this before we\n> have at least one AM that's actually in decent-ish shape that needs\n> it. I seriously doubt we'll get the interface right enough.\n>\n> Note: I'm *extremely* *extremely* doubtful that moving the full executor\n> invocations for expression indices etc into the tableam is a sane thing\n> to do. It's possible to convince me there's no alternative, but it'll be\n> really hard.\n>\n> I suspect the right direction will be more going in a direction of\n> computing new index tuples for expression indexes before tableam gets\n> involved. If we do that right, we can also implement the stuff that\n> 1c53c4dec3985512f7f2f53c9d76a5295cd0a2dd reverted in a proper way.\n\nProbably we can invent few modes table AM might work: calculation of\nall new index tuples, calculation of new and old index tuples for\nupdated fields, calculation of all new and old index tuples and so on.\nAnd them index tuples would be calculated either in advance or by\ncallback.\n\n> > I think any storage, which is going to address \"write amplification\"\n> > point raised by Uber, needs to break this \"contract\".\n>\n> FWIW, I don't think it makes much point in using Uber as a justification\n> for anything here. 
Their analysis was so deeply flawed and motivated by\n> non-technical reasons that it should just be ignored.\n\nYeah, Uber is just a buzz word here. But problem that update of\nsingle indexed field leads to insertions to every index is well-known\namong the PostgreSQL users.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 10 Apr 2019 20:41:17 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Status of the table access method work"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 8:32 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-04-10 20:14:17 +0300, Alexander Korotkov wrote:\n> > Your explanation of existing limitations looks very good and\n> > convincing. But I think there is one you didn't mention. We require\n> > new table AMs to basically save old \"contract\" between heap and\n> > indexes. We have \"all or nothing\" choice during updates. So, storage\n> > can either insert to none of indexes or insert to all of them\n> > (HOT-like update).\n>\n> I think that's a problem, and yea, I should have mentioned it. I'd\n> earlier thought about it and then forgot.\n>\n> I however don't think we should design the interface for this before we\n> have at least one AM that's actually in decent-ish shape that needs\n> it. I seriously doubt we'll get the interface right enough.\n\nSure.\n\nMy point is that once we get first table AM which needs this, say\nzheap, we shouldn't make it like this\n\nTM_Result (*tuple_update) (Relation rel, ... bool *update_indexes,\nbool *delete_marking);\n\nbut rather try to design proper encapsulation of logic inside of table AM.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 15 Apr 2019 05:47:55 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Status of the table access method work"
},
{
"msg_contents": "> On Fri, Apr 5, 2019 at 10:25 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> A second set of limitations is around making more of tableam\n> optional. Right now it e.g. is not possible to have an AM that doesn't\n> implement insert/update/delete. Obviously an AM can just throw an error\n> in the relevant callbacks, but I think it'd be better if we made those\n> callbacks optional, and threw errors at parse-analysis time (both to\n> make the errors consistent, and to ensure it's consistently thrown,\n> rather than only when e.g. an UPDATE actually finds a row to update).\n\nAgree, but I guess some of tableam still should be mandatory, and then I wonder\nwhere to put the live between those that are optional and those that are not.\nE.g. looks like it can be relatively straightforward (ignoring `create table as`\nand some other stuff) to make insert/update/delete optional with messages at\nanalysis time, but for others like parallel scan related it's probably not.",
"msg_date": "Wed, 17 Apr 2019 22:02:24 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Status of the table access method work"
},
{
"msg_contents": "Hi!\n\nOn 2019-04-17 22:02:24 +0200, Dmitry Dolgov wrote:\n> > On Fri, Apr 5, 2019 at 10:25 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > A second set of limitations is around making more of tableam\n> > optional. Right now it e.g. is not possible to have an AM that doesn't\n> > implement insert/update/delete. Obviously an AM can just throw an error\n> > in the relevant callbacks, but I think it'd be better if we made those\n> > callbacks optional, and threw errors at parse-analysis time (both to\n> > make the errors consistent, and to ensure it's consistently thrown,\n> > rather than only when e.g. an UPDATE actually finds a row to update).\n> \n> Agree, but I guess some of tableam still should be mandatory, and then I wonder\n> where to put the live between those that are optional and those that are not.\n> E.g. looks like it can be relatively straightforward (ignoring `create table as`\n> and some other stuff) to make insert/update/delete optional with messages at\n> analysis time, but for others like parallel scan related it's probably not.\n\nThanks for the patch! I assume you're aware, but it's probably not going\nto be applied for 12...\n\nI think most of the read-only stuff just needs to be non-optional, and\nmost of the DML stuff needs to be optional.\n\nOn the executor side it'd probably be good to make the sample scan\noptional too, but then we also need to check for that during\nparse-analysis. 
In contast to bitmap scans there's no alternative way to\nexecute them.\n\n\n> \tAssert(routine->relation_set_new_filenode != NULL);\n> diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c\n> index c39218f8db..36e2dbf1b8 100644\n> --- a/src/backend/commands/copy.c\n> +++ b/src/backend/commands/copy.c\n> @@ -41,6 +41,7 @@\n> #include \"miscadmin.h\"\n> #include \"optimizer/optimizer.h\"\n> #include \"nodes/makefuncs.h\"\n> +#include \"nodes/nodeFuncs.h\"\n> #include \"parser/parse_coerce.h\"\n> #include \"parser/parse_collate.h\"\n> #include \"parser/parse_expr.h\"\n> @@ -901,6 +902,13 @@ DoCopy(ParseState *pstate, const CopyStmt *stmt,\n> \t\t\t\t\t\t\t\t\t\t\tNULL, false, false);\n> \t\trte->requiredPerms = (is_from ? ACL_INSERT : ACL_SELECT);\n> \n> +\t\tif (is_from && !table_support_multi_insert(rel))\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> +\t\t\t\t\t\t errmsg(\"Table access method doesn't support the operation\"),\n> +\t\t\t\t\t\t parser_errposition(pstate,\n> +\t\t\t\t\t\t\t\t\t\t\texprLocation((Node *) stmt))));\n\nProbably should fall-back to plain inserts if multi-insert isn't\nsupported.\n\nAnd if insert isn't supported either, we should probably talk about\nthat specifically? I.e. 
a message like\n\"access method \\\"%s\\\" of table \\\"%s\\\" does not support %s\"\n?\n\nWithout knowing at least thatmuch operation it might sometimes be very\nhard to figure out what's not supported.\n\n\n\n> +static inline bool\n> +table_support_speculative(Relation rel)\n> +{\n> +\treturn rel->rd_tableam == NULL ||\n> +\t\t (rel->rd_tableam->tuple_insert_speculative != NULL &&\n> +\t\t\trel->rd_tableam->tuple_complete_speculative != NULL);\n> +}\n\nIn GetTableAmRoutine() I'd assert that either both or none are defined.\n\n\n> +static inline bool\n> +table_support_multi_insert(Relation rel)\n> +{\n> +\treturn rel->rd_tableam == NULL ||\n> +\t\t (rel->rd_tableam->multi_insert != NULL &&\n> +\t\t\trel->rd_tableam->finish_bulk_insert != NULL);\n> +}\n\nbulk insert already is optional...\n\n\nI think there's more places that need checks like these - consider\ne.g. replication and such that don't go through the full blown executor.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Apr 2019 13:24:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Status of the table access method work"
},
{
"msg_contents": "> On Wed, Apr 17, 2019 at 10:25 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> I assume you're aware, but it's probably not going to be applied for 12...\n\nSure, the patch was mostly to express more clearly what I was thinking about :)\n\n> I think most of the read-only stuff just needs to be non-optional, and most\n> of the DML stuff needs to be optional.\n\n> On the executor side it'd probably be good to make the sample scan optional\n> too, but then we also need to check for that during parse-analysis. In\n> contast to bitmap scans there's no alternative way to execute them.\n\nYeah, makes sense.\n\n> bulk insert already is optional...\n\nOh, haven't noticed.\n\n\n",
"msg_date": "Wed, 17 Apr 2019 22:37:15 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Status of the table access method work"
},
{
"msg_contents": "On Wed, 10 Apr 2019 at 18:14, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n\n> Alternative idea is to have MVCC-aware indexes. This approach looks\n> more attractive for me. In this approach you basically need xmin,\n> xmax fields in index tuples. On insertion of index tuple you fill\n> it's xmin. On update, previous index tuple is marked with xmax.\n>\n\n+1\n\nxmax can be provided through to index by indexam when 1) we mark killed\ntuples, 2) when we do indexscan of index entry without xmax set.\nxmax can be set as a hint on normal scans, or set as part of an update, as\nthe index chooses\n\nAfter that outdated index tuples might be deleted in the lazy manner\n> when page space is required.\n\n\nThat is already done, so hardly any change there.\n\nAlso, we can\n\nhave special bit for \"all visible\" index tuples. With \"all visible\"\nbit set this tuple can get rid of visibility fields. We can do this\nfor index tuples, because if index tuple requires extra space we can\nsplit the page, in spite of heap where tuples are fixed in pages and\nxmax needs to be updated in-place.\n\nKeeping the xmin/xmax would also be useful for historical indexes, i.e.\nindexes that can be used to search for data with historic snapshots.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nOn Wed, 10 Apr 2019 at 18:14, Alexander Korotkov <aekorotkov@gmail.com> wrote: Alternative idea is to have MVCC-aware indexes. This approach looks\nmore attractive for me. In this approach you basically need xmin,\nxmax fields in index tuples. On insertion of index tuple you fill\nit's xmin. On update, previous index tuple is marked with xmax. 
+1xmax can be provided through to index by indexam when 1) we mark killed tuples, 2) when we do indexscan of index entry without xmax set.xmax can be set as a hint on normal scans, or set as part of an update, as the index choosesAfter that outdated index tuples might be deleted in the lazy mannerwhen page space is required. That is already done, so hardly any change there.Also, we canhave special bit for \"all visible\" index tuples. With \"all visible\"bit set this tuple can get rid of visibility fields. We can do thisfor index tuples, because if index tuple requires extra space we cansplit the page, in spite of heap where tuples are fixed in pages andxmax needs to be updated in-place.Keeping the xmin/xmax would also be useful for historical indexes, i.e. indexes that can be used to search for data with historic snapshots.-- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 18 Apr 2019 07:27:06 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Status of the table access method work"
},
{
"msg_contents": "On 2019-Apr-10, Alexander Korotkov wrote:\n\n> Alternative idea is to have MVCC-aware indexes. This approach looks\n> more attractive for me. In this approach you basically need xmin,\n> xmax fields in index tuples.\n\n\"We liked freezing xmin so much that we had to introduce freezing for\nxmax\" -- rhaas dixit. And now we want to introduce freezing for\nindexes?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 11 Jun 2019 11:47:11 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Status of the table access method work"
},
{
"msg_contents": "On Tue, Jun 11, 2019 at 11:47 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> On 2019-Apr-10, Alexander Korotkov wrote:\n> > Alternative idea is to have MVCC-aware indexes. This approach looks\n> > more attractive for me. In this approach you basically need xmin,\n> > xmax fields in index tuples.\n>\n> \"We liked freezing xmin so much that we had to introduce freezing for\n> xmax\" -- rhaas dixit. And now we want to introduce freezing for\n> indexes?\n\nPlus it would add 8 bytes to the size of every index tuple. if you\nare indexing long-ish strings it may not hurt too much, but if your\nprimary key is an integer, I think your index is going to get a lot\nbigger.\n\nThe problem with freezing is perhaps avoidable if you store an epoch\nin the page special space as part of all this. But I don't see any\nway to avoid having the tuples get wider. Possibly you could include\nxmin and xmax only when needed, removing xmin once the tuples are\nall-visible and splitting pages if you need to make room to add an\nxmax. I'm not sure how much that helps, though, because if you do a\nbulk insert, you're going to have to leave room for all of the xmins\ninitially, and removing them later will produce free space for which\nyou may have little use.\n\nI don't think that including visibility information in indexes is a\nbad idea; we've thought about making zheap do this someday. But I\nthink that we need to use some more sophisticated approach involving,\nmaybe, undo pointers, or some other kind of magic, rather than just\nwidening the tuples. I expect that just widening the tuples would be\ngood enough to win for some use cases, but I think there would be\nothers that lose heavily.\n\nIn any case, I agree that we do not want to create more things that\nneed freezing.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 11 Jun 2019 11:59:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Status of the table access method work"
},
{
"msg_contents": "On Tue, Jun 11, 2019 at 8:59 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I don't think that including visibility information in indexes is a\n> bad idea; we've thought about making zheap do this someday. But I\n> think that we need to use some more sophisticated approach involving,\n> maybe, undo pointers, or some other kind of magic, rather than just\n> widening the tuples. I expect that just widening the tuples would be\n> good enough to win for some use cases, but I think there would be\n> others that lose heavily.\n\n+1. Limited visibility information would make sense (e.g. maybe a per\ntuple all-visible bit), which would have to be backed by something\nlike UNDO, but storing XIDs in tuples seems like a very bad idea. The\nidea that something like this would have to be usable by any possible\ntable access method seems unworkable in general.\n\nSometimes it seems like the table access method work could use some\nspecific non-goals. Perhaps I just haven't being paying enough\nattention to have noticed them.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 11 Jun 2019 09:14:46 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Status of the table access method work"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-11 11:59:36 -0400, Robert Haas wrote:\n> On Tue, Jun 11, 2019 at 11:47 AM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> > On 2019-Apr-10, Alexander Korotkov wrote:\n> > > Alternative idea is to have MVCC-aware indexes. This approach looks\n> > > more attractive for me. In this approach you basically need xmin,\n> > > xmax fields in index tuples.\n> >\n> > \"We liked freezing xmin so much that we had to introduce freezing for\n> > xmax\" -- rhaas dixit. And now we want to introduce freezing for\n> > indexes?\n> \n> Plus it would add 8 bytes to the size of every index tuple. if you\n> are indexing long-ish strings it may not hurt too much, but if your\n> primary key is an integer, I think your index is going to get a lot\n> bigger.\n> \n> The problem with freezing is perhaps avoidable if you store an epoch\n> in the page special space as part of all this. But I don't see any\n> way to avoid having the tuples get wider. Possibly you could include\n> xmin and xmax only when needed, removing xmin once the tuples are\n> all-visible and splitting pages if you need to make room to add an\n> xmax. I'm not sure how much that helps, though, because if you do a\n> bulk insert, you're going to have to leave room for all of the xmins\n> initially, and removing them later will produce free space for which\n> you may have little use.\n> \n> I don't think that including visibility information in indexes is a\n> bad idea; we've thought about making zheap do this someday. But I\n> think that we need to use some more sophisticated approach involving,\n> maybe, undo pointers, or some other kind of magic, rather than just\n> widening the tuples. I expect that just widening the tuples would be\n> good enough to win for some use cases, but I think there would be\n> others that lose heavily.\n\nYea, I think there's plenty reasons to want to do something different\nthan what we're doing. 
But I'd like to see a concrete proposal before\nbuilding API for it...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Jun 2019 09:32:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Status of the table access method work"
},
{
"msg_contents": "On Tue, Jun 11, 2019 at 12:32 PM Andres Freund <andres@anarazel.de> wrote:\n> Yea, I think there's plenty reasons to want to do something different\n> than what we're doing. But I'd like to see a concrete proposal before\n> building API for it...\n\nI wasn't intending to propose that you should. We're just in the\nbrainstorming stage here I think.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 11 Jun 2019 14:20:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Status of the table access method work"
}
]
[
{
"msg_contents": "Hi,\n\nJust now, and also once 5-and-a-bit days ago, flaviventris failed like\nthis, as did filefish 41 days ago[1] (there may be more, I just\nchecked a random sample of InstallCheck-C failures accessible via the\nweb interface):\n\n WHERE relname like 'trunc_stats_test%' order by relname;\n relname | n_tup_ins | n_tup_upd | n_tup_del | n_live_tup |\nn_dead_tup\n -------------------+-----------+-----------+-----------+------------+------------\n- trunc_stats_test | 3 | 0 | 0 | 0 |\n 0\n- trunc_stats_test1 | 4 | 2 | 1 | 1 |\n 0\n- trunc_stats_test2 | 1 | 0 | 0 | 1 |\n 0\n- trunc_stats_test3 | 4 | 0 | 0 | 2 |\n 2\n- trunc_stats_test4 | 2 | 0 | 0 | 0 |\n 2\n+ trunc_stats_test | 0 | 0 | 0 | 0 |\n 0\n+ trunc_stats_test1 | 0 | 0 | 0 | 0 |\n 0\n+ trunc_stats_test2 | 0 | 0 | 0 | 0 |\n 0\n+ trunc_stats_test3 | 0 | 0 | 0 | 0 |\n 0\n+ trunc_stats_test4 | 0 | 0 | 0 | 0 |\n 0\n (5 rows)\n\n SELECT st.seq_scan >= pr.seq_scan + 1,\n@@ -180,7 +180,7 @@\n WHERE st.relname='tenk2' AND cl.relname='tenk2';\n ?column? | ?column? | ?column? | ?column?\n ----------+----------+----------+----------\n- t | t | t | t\n+ f | f | f | f\n (1 row)\n\n SELECT st.heap_blks_read + st.heap_blks_hit >= pr.heap_blks + cl.relpages,\n@@ -189,7 +189,7 @@\n WHERE st.relname='tenk2' AND cl.relname='tenk2';\n ?column? | ?column?\n ----------+----------\n- t | t\n+ t | f\n (1 row)\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=filefish&dt=2019-02-23%2009%3A53%3A11\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sat, 6 Apr 2019 11:04:53 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Intermittent failure in InstallCheck-C \"stat\" test"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Just now, and also once 5-and-a-bit days ago, flaviventris failed like\n> this, as did filefish 41 days ago[1] (there may be more, I just\n> checked a random sample of InstallCheck-C failures accessible via the\n> web interface):\n\nThis sort of thing has pretty much always happened. I believe it is\njust down to the designed-in unreliability of the current stats collection\nmechanism. We might be able to get rid of it if we go over to\nshared-memory stats, but I've yet to look at that patch :-(. In the\nmeantime I don't see any reason to think that anything's worse here\nthan it has been for many years.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Apr 2019 18:19:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent failure in InstallCheck-C \"stat\" test"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-05 18:19:17 -0400, Tom Lane wrote:\n> We might be able to get rid of it if we go over to shared-memory\n> stats, but I've yet to look at that patch :-(.\n\nI did a few review cycles on it, and while I believe the concept is\nsound, I think it needs a good bit more time to mature. Not\nrealistically doable for v12.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 5 Apr 2019 15:21:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent failure in InstallCheck-C \"stat\" test"
},
{
"msg_contents": "On Sat, Apr 6, 2019 at 11:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Just now, and also once 5-and-a-bit days ago, flaviventris failed like\n> > this, as did filefish 41 days ago[1] (there may be more, I just\n> > checked a random sample of InstallCheck-C failures accessible via the\n> > web interface):\n>\n> This sort of thing has pretty much always happened. I believe it is\n> just down to the designed-in unreliability of the current stats collection\n> mechanism. We might be able to get rid of it if we go over to\n> shared-memory stats, but I've yet to look at that patch :-(. In the\n> meantime I don't see any reason to think that anything's worse here\n> than it has been for many years.\n\nDoes it imply that the kernel dropped a UDP packet to localhost?\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sat, 6 Apr 2019 12:44:40 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent failure in InstallCheck-C \"stat\" test"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sat, Apr 6, 2019 at 11:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This sort of thing has pretty much always happened. I believe it is\n>> just down to the designed-in unreliability of the current stats collection\n>> mechanism. We might be able to get rid of it if we go over to\n>> shared-memory stats, but I've yet to look at that patch :-(. In the\n>> meantime I don't see any reason to think that anything's worse here\n>> than it has been for many years.\n\n> Does it imply that the kernel dropped a UDP packet to localhost?\n\nThat's a possible explanation, anyway. The problem shows up seldom enough\nthat it's hard to say that conclusively. So *maybe* there's a bug here\nwe could actually fix, but again, without any way to repro it, it's hard\nto say much.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Apr 2019 20:01:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent failure in InstallCheck-C \"stat\" test"
}
]
[
{
"msg_contents": "Hi\n\nWhen using a functional index on a table, we realized that the permission \ncheck done in pg_stats was incorrect and thus preventing valid access to the \nstatistics from users.\n\nHow to reproduce:\n\ncreate table tbl1 (a integer, b integer);\ninsert into tbl1 select x, x % 50 from generate_series(1, 200000) x;\ncreate index on tbl1 using btree ((a % (b + 1)));\nanalyze ;\n\ncreate user demo_priv encrypted password 'demo';\nrevoke ALL on SCHEMA public from PUBLIC ;\ngrant select on tbl1 to demo_priv;\ngrant usage on schema public to demo_priv;\n\nAnd as demo_priv user:\n\nselect tablename, attname from pg_stats where tablename like 'tbl1%';\n\nReturns:\n tablename | attname \n-----------+---------\n tbl1 | a\n tbl1 | b\n(2 rows)\n\n\nExpected:\n tablename | attname \n---------------+---------\n tbl1 | a\n tbl1 | b\n tbl1_expr_idx | expr\n(3 rows)\n\n\nThe attached patch fixes this by introducing a second path in privilege check \nin pg_stats view.\nI have not written a regression test yet, mainly because I'm not 100% certain \nwhere to write it. Given some hints, I would happily add it to this patch.\n\nRegards\n\n Pierre Ducroquet",
"msg_date": "Sat, 06 Apr 2019 13:40:27 +0200",
"msg_from": "Pierre Ducroquet <p.psql@pinaraf.info>",
"msg_from_op": true,
"msg_subject": "[Patch] Invalid permission check in pg_stats for functional indexes"
},
{
"msg_contents": "Hello Pierre,\n\n> When using a functional index on a table, we realized that the permission\n> check done in pg_stats was incorrect and thus preventing valid access to the\n> statistics from users.\n>\n> The attached patch fixes this by introducing a second path in privilege check\n> in pg_stats view.\nThe patch doesn't apply on the latest HEAD [1].\nIIUC, the patch introduces an additional privilege check for the\nunderlying objects involved in the expression/functional index. If the\nuser has 'select' privileges on all of the columns/objects included in\nthe expression/functional index, then it should be visible in pg_stats\nview. I've applied the patch manually and tested the feature. It works\nas expected.\n\n> I have not written a regression test yet, mainly because I'm not 100% certain\n> where to write it. Given some hints, I would happily add it to this patch.\n>\nYeah, it'll be good to have some regression tests for the same. I'm\nalso not sure which regression file best suits these tests.\n\n[1] http://cfbot.cputube.org/patch_24_2274.log\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 3 Sep 2019 16:09:51 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Invalid permission check in pg_stats for functional\n indexes"
},
{
"msg_contents": "On Tuesday, September 3, 2019 12:39:51 PM CEST Kuntal Ghosh wrote:\n> Hello Pierre,\n\nHello Kuntal\n> \n> > When using a functional index on a table, we realized that the permission\n> > check done in pg_stats was incorrect and thus preventing valid access to\n> > the statistics from users.\n> > \n> > The attached patch fixes this by introducing a second path in privilege\n> > check in pg_stats view.\n> \n> The patch doesn't apply on the latest HEAD [1].\n\nAll my apologies for that. I submitted this patch some time ago but forgot to \nadd it to the commit fest. Attached to this mail is a rebased version.\n\n> IIUC, the patch introduces an additional privilege check for the\n> underlying objects involved in the expression/functional index. If the\n> user has 'select' privileges on all of the columns/objects included in\n> the expression/functional index, then it should be visible in pg_stats\n> view. I've applied the patch manually and tested the feature. It works\n> as expected.\n\nIndeed, you understood correctly. I have not dug around to find out the \norigin of the current situation, but it does not look like an intentional \nbehaviour, more like a small oversight.\n\n> > I have not written a regression test yet, mainly because I'm not 100%\n> > certain where to write it. Given some hints, I would happily add it to\n> > this patch.\n> Yeah, it'll be good to have some regression tests for the same. I'm\n> also not sure which regression file best suites for these tests.\n\n\n\nThank you very much for your review\n\n Pierre",
"msg_date": "Tue, 03 Sep 2019 20:53:19 +0200",
"msg_from": "Pierre Ducroquet <p.psql@pinaraf.info>",
"msg_from_op": true,
"msg_subject": "Re: [Patch] Invalid permission check in pg_stats for functional\n indexes"
},
{
"msg_contents": "On Wed, Sep 4, 2019 at 12:23 AM Pierre Ducroquet <p.psql@pinaraf.info> wrote:\n>\n> All my apologies for that. I submitted this patch some time ago but forgot to\n> add it to the commit fest. Attached to this mail is a rebased version.\n>\nThank you for the new version of the patch. It looks good to me.\nMoving the status to ready for committer.\n\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 4 Sep 2019 11:02:31 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Invalid permission check in pg_stats for functional\n indexes"
},
{
"msg_contents": "On 2019-Sep-03, Pierre Ducroquet wrote:\n\n> > IIUC, the patch introduces an additional privilege check for the\n> > underlying objects involved in the expression/functional index. If the\n> > user has 'select' privileges on all of the columns/objects included in\n> > the expression/functional index, then it should be visible in pg_stats\n> > view. I've applied the patch manually and tested the feature. It works\n> > as expected.\n> \n> Indeed, you understood correctly. I have not digged around to find out the \n> origin of the current situation, but it does not look like an intentional \n> behaviour, more like a small oversight.\n\nHmm. This seems to create a large performance drop. I created your\nview as pg_stats2 alongside pg_stats, and ran EXPLAIN on both for the\nquery you posted. Look at the plan with the original query:\n\n55432 13devel 10881=# explain select tablename, attname from pg_stats where tablename like 'tbl1%';\n QUERY PLAN \n───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n Subquery Scan on pg_stats (cost=129.79..156.46 rows=1 width=128)\n Filter: (pg_stats.tablename ~~ 'tbl1%'::text)\n -> Hash Join (cost=129.79..156.39 rows=5 width=401)\n Hash Cond: ((s.starelid = c.oid) AND (s.staattnum = a.attnum))\n -> Index Only Scan using pg_statistic_relid_att_inh_index on pg_statistic s (cost=0.27..22.60 rows=422 width=6)\n -> Hash (cost=114.88..114.88 rows=976 width=138)\n -> Hash Join (cost=22.90..114.88 rows=976 width=138)\n Hash Cond: (a.attrelid = c.oid)\n Join Filter: has_column_privilege(c.oid, a.attnum, 'select'::text)\n -> Seq Scan on pg_attribute a (cost=0.00..84.27 rows=2927 width=70)\n Filter: (NOT attisdropped)\n -> Hash (cost=17.95..17.95 rows=396 width=72)\n -> Seq Scan on pg_class c (cost=0.00..17.95 rows=396 width=72)\n Filter: ((NOT relrowsecurity) OR (NOT row_security_active(oid)))\n(14 filas)\n\nand here's the plan with your modified 
view:\n\n55432 13devel 10881=# explain select tablename, attname from pg_stats2 where tablename like 'tbl1%';\n QUERY PLAN \n──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n Subquery Scan on pg_stats2 (cost=128.72..6861.85 rows=1 width=128)\n Filter: (pg_stats2.tablename ~~ 'tbl1%'::text)\n -> Nested Loop (cost=128.72..6861.80 rows=4 width=401)\n Join Filter: (s.starelid = c.oid)\n -> Hash Join (cost=128.45..152.99 rows=16 width=74)\n Hash Cond: ((s.starelid = a.attrelid) AND (s.staattnum = a.attnum))\n -> Index Only Scan using pg_statistic_relid_att_inh_index on pg_statistic s (cost=0.27..22.60 rows=422 width=6)\n -> Hash (cost=84.27..84.27 rows=2927 width=70)\n -> Seq Scan on pg_attribute a (cost=0.00..84.27 rows=2927 width=70)\n Filter: (NOT attisdropped)\n -> Index Scan using pg_class_oid_index on pg_class c (cost=0.27..419.29 rows=1 width=73)\n Index Cond: (oid = a.attrelid)\n Filter: (((NOT relrowsecurity) OR (NOT row_security_active(oid))) AND ((relkind = 'r'::\"char\") OR ((relkind = 'i'::\"char\") AND (NOT (alternatives: SubPlan 1 or hashed SubPlan 2)))) AND (((relkind = 'r'::\"char\") AND has_column_privilege(oid, a.attnum, 'select'::text)) OR ((relkind = 'i'::\"char\") AND (NOT (alternatives: SubPlan 1 or hashed SubPlan 2)))))\n SubPlan 1\n -> Seq Scan on pg_depend (cost=0.00..209.48 rows=1 width=0)\n Filter: ((refobjsubid > 0) AND (objid = c.oid) AND (NOT has_column_privilege(refobjid, (refobjsubid)::smallint, 'select'::text)))\n SubPlan 2\n -> Seq Scan on pg_depend pg_depend_1 (cost=0.00..190.42 rows=176 width=4)\n Filter: ((refobjsubid > 0) AND (NOT has_column_privilege(refobjid, (refobjsubid)::smallint, 'select'::text)))\n(19 
filas)\n\nYou forgot to add a condition `pg_depend.classid =\n'pg_catalog.pg_class'::pg_catalog.regclass` in your subquery (fixing\nthat probably improves the plan a lot); but more generally I'm not sure\nthat querying pg_depend is an acceptable way to go about this. I have\nto admit I don't see any other way to get a list of columns involved in\nan expression, though. Maybe we need to introduce a function that\nreturns the set of columns involved in an index (which should include\nthe column in a WHERE clause if any, I suppose.)\n\nWhat about relkind='m'?\n\nI'm not sure about a writing a test for this. Do we have any tests for\nprivileges here?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Sep 2019 15:37:36 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Invalid permission check in pg_stats for functional\n indexes"
},
{
"msg_contents": "BTW you labelled this in the CF app as targeting \"stable\", but I don't\nthink this is backpatchable. I think we should fix it in master and\ncall it a day. Changing system view definitions in stable versions is\ntough.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Sep 2019 15:42:12 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Invalid permission check in pg_stats for functional\n indexes"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Hmm. This seems to create a large performance drop.\n\nYeah. It's also flat out wrong for indexes that depend on whole-row\nvariables. For that case, we really need to insist that the user\nhave select privilege on all the table columns, but this won't\naccomplish that. It ignores pg_depend entries with refobjsubid = 0,\nand there's no good way to expand those even if it didn't.\n\n> You forgot to add a condition `pg_depend.classid =\n> 'pg_catalog.pg_class'::pg_catalog.regclass` in your subquery (fixing\n> that probably improves the plan a lot); but more generally I'm not sure\n> that querying pg_depend is an acceptable way to go about this.\n\npg_depend isn't ideal for this, I agree. It serves other masters.\n\n> I have\n> to admit I don't see any other way to get a list of columns involved in\n> an expression, though. Maybe we need to introduce a function that\n> returns the set of columns involved in an index (which should include\n> the column in a WHERE clause if any, I suppose.)\n\nI agree that some C function that inspects the index definition is\nprobably needed here. Not sure exactly what it should look like.\n\nWe might be well advised to bury the whole business in a function\nlike \"has_index_column_privilege(index_oid, col, priv_type)\" rather\nthan implementing that partly in SQL and partly in C. The performance\nwould be better, and we'd have more flexibility to fix issues without\nforcing new initdb's.\n\nOn the other hand, a SQL function that just parses the index definition\nand returns relevant column number(s) might be useful for other\npurposes, so maybe we should write that alongside this.\n\n> What about relkind='m'?\n\nAs coded, this certainly breaks pg_stat for those, and for foreign tables\nas well. Likely better to write something like\n\"case when relkind = 'i' then do-something-for-indexes else old-code end\".\n\nActually ... 
maybe we don't need to change the view definition at all,\nbut instead just make has_column_privilege() do something different\nfor indexes than it does for other relation types. It's dubious that\napplying that function to an index yields anything meaningful today,\nso we could redefine what it returns without (probably) breaking\nanything. That would at least give us an option to back-patch, too,\nthough the end result might be complex enough that we don't care to\nrisk it.\n\nI wonder which of the other has_xxx_privilege tests are likewise\nin need of rethink for indexes.\n\n> I'm not sure about a writing a test for this. Do we have any tests for\n> privileges here?\n\nI don't think we have any meaningful tests for the info-schema views\nas such. However, if we redefine the problem as \"has_column_privilege\non an index does the wrong thing\", then privileges.sql is a natural\nhome for testing that because it already has test cases for that\nfunction.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 05 Sep 2019 16:56:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Invalid permission check in pg_stats for functional\n indexes"
},
{
"msg_contents": "On Thu, Sep 05, 2019 at 04:56:40PM -0400, Tom Lane wrote:\n> As coded, this certainly breaks pg_stat for those, and for foreign tables\n> as well. Likely better to write something like\n> \"case when relkind = 'i' then do-something-for-indexes else old-code end\".\n\nPierre, as the author of this patch, which has been waiting on author for a\ncouple of months now, are you planning to work more on it and\naddress the comments provided?\n--\nMichael",
"msg_date": "Thu, 28 Nov 2019 11:28:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Invalid permission check in pg_stats for functional\n indexes"
},
{
"msg_contents": "Awhile back I wrote:\n> Actually ... maybe we don't need to change the view definition at all,\n> but instead just make has_column_privilege() do something different\n> for indexes than it does for other relation types. It's dubious that\n> applying that function to an index yields anything meaningful today,\n> so we could redefine what it returns without (probably) breaking\n> anything. That would at least give us an option to back-patch, too,\n> though the end result might be complex enough that we don't care to\n> risk it.\n\nIn hopes of resurrecting this thread, here's a draft patch that does\nit like that (and also fixes row_security_active(), as otherwise this\nprobably creates a security hole in pg_stats).\n\nIt's definitely not commit quality as-is, for several reasons:\n\n* No regression tests.\n\n* I didn't bother to flesh out logic for looking at the individual\ncolumn privileges. I'm not sure if that's worth doing. If it is,\nwe should also look at BuildIndexValueDescription() which is worrying\nabout largely the same thing, and likewise is punting on the hardest\ncases; and selfuncs.c's examine_variable, ditto; and maybe other places.\nThey should all be able to share one implementation of a check for\nwhether the user can read all the columns the index depends on.\n\n* There's still the issue of whether any of the other nearby privilege\nchecking functions need to be synchronized with this, for consistency's\nsake. The pg_stats view doesn't care about the others, but I think\nit's a bit weird if has_column_privilege works like this but, say,\nhas_table_privilege doesn't.\n\nMore generally, does anyone want to object to the whole concept of\nredefining the has_xxx_privilege functions' behavior when applied\nto indexes?\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 26 Dec 2019 18:46:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Invalid permission check in pg_stats for functional\n indexes"
},
{
"msg_contents": "Hi Pierre,\n\nOn 12/26/19 6:46 PM, Tom Lane wrote:\n> Awhile back I wrote:\n>> Actually ... maybe we don't need to change the view definition at all,\n>> but instead just make has_column_privilege() do something different\n>> for indexes than it does for other relation types. It's dubious that\n>> applying that function to an index yields anything meaningful today,\n>> so we could redefine what it returns without (probably) breaking\n>> anything. That would at least give us an option to back-patch, too,\n>> though the end result might be complex enough that we don't care to\n>> risk it.\n> \n> In hopes of resurrecting this thread, here's a draft patch that does\n> it like that (and also fixes row_security_active(), as otherwise this\n> probably creates a security hole in pg_stats).\n\nDo you know when you will have a chance to look at this patch?\n\nTom made a suggestion up-thread about where the regression tests could go.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 25 Mar 2020 10:52:42 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Invalid permission check in pg_stats for functional\n indexes"
},
{
"msg_contents": "> On 25 Mar 2020, at 15:52, David Steele <david@pgmasters.net> wrote:\n\n> On 12/26/19 6:46 PM, Tom Lane wrote:\n>> Awhile back I wrote:\n>>> Actually ... maybe we don't need to change the view definition at all,\n>>> but instead just make has_column_privilege() do something different\n>>> for indexes than it does for other relation types. It's dubious that\n>>> applying that function to an index yields anything meaningful today,\n>>> so we could redefine what it returns without (probably) breaking\n>>> anything. That would at least give us an option to back-patch, too,\n>>> though the end result might be complex enough that we don't care to\n>>> risk it.\n>> In hopes of resurrecting this thread, here's a draft patch that does\n>> it like that (and also fixes row_security_active(), as otherwise this\n>> probably creates a security hole in pg_stats).\n> \n> Do you know when you will have a chance to look at this patch?\n> \n> Tom made a suggestion up-thread about where the regression tests could go.\n\nThis patch still hasn't progressed since Tom's draft patch, is anyone still\ninterested in pursuing it or should we close it for now?\n\ncheers ./daniel\n\n\n",
"msg_date": "Sun, 5 Jul 2020 13:44:22 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Invalid permission check in pg_stats for functional\n indexes"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> This patch still hasn't progressed since Tom's draft patch, is anyone still\n> interested in pursuing it or should we close it for now?\n\nIt seems clearly reasonable to me to close the CF item as RWF,\nexpecting that a new one can be made whenever somebody re-tackles\nthis problem. The CF list is not a to-do list.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 05 Jul 2020 10:16:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Invalid permission check in pg_stats for functional\n indexes"
},
{
"msg_contents": "> On 5 Jul 2020, at 16:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> This patch still hasn't progressed since Tom's draft patch, is anyone still\n>> interested in pursuing it or should we close it for now?\n> \n> It seems clearly reasonable to me to close the CF item as RWF,\n> expecting that a new one can be made whenever somebody re-tackles\n> this problem.\n\nThanks for confirmation, done.\n\n> The CF list is not a to-do list.\n\nYes. Multiple +1's.\n\ncheers ./daniel\n\n",
"msg_date": "Sun, 5 Jul 2020 22:32:08 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Invalid permission check in pg_stats for functional\n indexes"
}
] |
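Stripped of catalog details, the rule the pg_stats thread above converges on is simple: statistics for an expression index should be visible only when the caller holds SELECT on every table column the indexed expression references, and a whole-row reference (Tom's point) must require SELECT on all columns. A small Python model of that rule — the function and parameter names are invented for illustration and are not part of any patch:

```python
def index_stats_visible(referenced_cols, uses_whole_row, granted_cols, all_cols):
    """Model of the privilege rule discussed above: expression-index stats
    are visible only if SELECT is held on every column the expression
    depends on; a whole-row reference requires SELECT on every column."""
    needed = set(all_cols) if uses_whole_row else set(referenced_cols)
    return needed <= set(granted_cols)

# The reproduction case indexes (a % (b + 1)), so it depends on columns a and b.
assert index_stats_visible({"a", "b"}, False, {"a", "b"}, {"a", "b"})
# With SELECT granted only on a, the index stats must stay hidden.
assert not index_stats_visible({"a", "b"}, False, {"a"}, {"a", "b"})
# A whole-row reference is not satisfied by a partial grant.
assert not index_stats_visible(set(), True, {"a", "b"}, {"a", "b", "c"})
```

Pierre's view-level patch approximated the referenced-column set via pg_depend, which (as Tom notes) cannot expand whole-row entries; Tom's counter-proposal pushes the same check down into has_column_privilege(), where the index expression can be inspected directly.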
[
{
"msg_contents": "Hackers,\n\n While working on an application, the need arose to be able to \nefficiently differentiate v4/v5 UUIDs (for use in partial indexes, among \nothers)\n\n... so please find attached a trivial patch which adds the \nfunctionality. The \"uuid_version_bits()\" function (from the test suite?) \nseems quite a bit hackish, apart from being inefficient :(\n\n\n I'm not sure whether this actually would justify a version bump for \nthe OSSP-UUID extension ---a misnomer, BTW, since at least in all the \nsystems I have access to, the extension is actually linked against \nlibuuid from e2fsutils, but I digress --- or not, given that it doesn't \nchange exposed functionality.\n\n\n Another matter, which I'd like to propose in a later thread, is \nwhether it'd be interesting to include the main UUID functionality \ndirectly in core, with the remaining functions in ossp-uuid (just like \nit is now, for backwards compatibility): Most current patterns for \ndistributed/sharded databases are based on using UUIDs for many PKs.\n\n\nThanks,\n\n J.L.",
"msg_date": "Sat, 6 Apr 2019 13:57:22 +0200",
"msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>",
"msg_from_op": true,
"msg_subject": "[PATCH] Implement uuid_version()"
},
{
"msg_contents": "Jose Luis Tallon <jltallon@adv-solutions.net> writes:\n> While working on an application, the need arose to be able \n> efficiently differentiate v4/v5 UUIDs (for use in partial indexes, among \n> others)\n> ... so please find attached a trivial patch which adds the \n> functionality.\n\nNo particular objection...\n\n> I'm not sure whether this actually would justify a version bump for \n> the OSSP-UUID extension\n\nYes. Basically, once we've shipped a given version of an extension's\nSQL script, that version is *frozen*. Anything at all that you want\nto do to it has to be done in an extension update script, because\notherwise there's no clean migration path for users.\n\nSo basically, leave uuid-ossp--1.1.sql as it stands, and put the\nnew CREATE FUNCTION in a new uuid-ossp--1.1--1.2.sql script.\nSee any recent patch that updated an extension for an example, eg\ncommit eb6f29141bed9dc95cb473614c30f470ef980705.\n\n(We do allow exceptions when somebody's already updated the extension\nin the current devel cycle, but that doesn't apply here.)\n\n> Another matter, which I'd like to propose in a later thread, is \n> whether it'd be interesting to include the main UUID functionality \n> directly in core\n\nWe've rejected that before, and I don't see any reason to think\nthe situation has changed since prior discussions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Apr 2019 12:35:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 6/4/19 18:35, Tom Lane wrote:\n> Jose Luis Tallon <jltallon@adv-solutions.net> writes:\n>> While working on an application, the need arose to be able\n>> efficiently differentiate v4/v5 UUIDs (for use in partial indexes, among\n>> others)\n>> ... so please find attached a trivial patch which adds the\n>> functionality.\n> No particular objection...\n>\n>> I'm not sure whether this actually would justify a version bump for\n>> the OSSP-UUID extension\n> Yes. Basically, once we've shipped a given version of an extension's\n> SQL script, that version is *frozen*. Anything at all that you want\n> to do to it has to be done in an extension update script, because\n> otherwise there's no clean migration path for users.\n\nGot it, and done. Please find attached a v2 patch with the upgrade \nscript included.\n\n\nThank you for taking a look. Your time is much appreciated :)\n\n\n J.L.",
"msg_date": "Sun, 7 Apr 2019 15:38:44 +0200",
"msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On Sat, Apr 06, 2019 at 12:35:47PM -0400, Tom Lane wrote:\n> Jose Luis Tallon <jltallon@adv-solutions.net> writes:\n> > While working on an application, the need arose to be able \n> > efficiently differentiate v4/v5 UUIDs (for use in partial indexes, among \n> > others)\n> > ... so please find attached a trivial patch which adds the \n> > functionality.\n> \n> No particular objection...\n> \n> > I'm not sure whether this actually would justify a version bump for \n> > the OSSP-UUID extension\n> \n> Yes. Basically, once we've shipped a given version of an extension's\n> SQL script, that version is *frozen*. Anything at all that you want\n> to do to it has to be done in an extension update script, because\n> otherwise there's no clean migration path for users.\n> \n> So basically, leave uuid-ossp--1.1.sql as it stands, and put the\n> new CREATE FUNCTION in a new uuid-ossp--1.1--1.2.sql script.\n> See any recent patch that updated an extension for an example, eg\n> commit eb6f29141bed9dc95cb473614c30f470ef980705.\n> \n> (We do allow exceptions when somebody's already updated the extension\n> in the current devel cycle, but that doesn't apply here.)\n> \n> > Another matter, which I'd like to propose in a later thread, is \n> > whether it'd be interesting to include the main UUID functionality \n> > directly in core\n> \n> We've rejected that before, and I don't see any reason to think\n> the situation has changed since prior discussions.\n\nI see some.\n\nUUIDs turn out to be super useful in distributed systems to give good\nguarantees of uniqueness without coordinating with a particular node.\nSuch systems have become a good bit more common since the most recent\ntime this was discussed.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sun, 7 Apr 2019 16:15:01 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On Sun, Apr 7, 2019 at 10:15 AM David Fetter <david@fetter.org> wrote:\n> I see some.\n>\n> UUIDs turn out to be super useful in distributed systems to give good\n> guarantees of uniqueness without coordinating with a particular node.\n> Such systems have become a good bit more common since the most recent\n> time this was discussed.\n\nThat's not really a compelling reason, though, because anybody who\nneeds UUIDs can always install the extension. And on the other hand,\nif we moved UUID support into core, then we'd be adding a hard compile\ndependency on one of the UUID facilities, which might annoy some\ndevelopers. We could possibly work around that by implementing our\nown UUID facilities in core, but I'm not volunteering to do the work,\nand I'm not sure that the work has enough benefit to justify the\nlabor.\n\nMy biggest gripe about uuid-ossp is that the name is stupid. I wish\nwe could see our way clear to renaming that extension to just 'uuid',\nbecause as J.L. says, virtually nobody's actually compiling against\nthe OSSP library any more. The trick there is how to do that without\nannoying existing users. Maybe we could leave behind an \"upgrade\"\nscript for the uuid-ossp extension that does CREATE EXTENSION uuid,\nthen alters all objects owned by the current extension to be owned by\nthe new extension, and maybe even drops itself.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 8 Apr 2019 11:06:57 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> My biggest gripe about uuid-ossp is that the name is stupid. I wish\n> we could see our way clear to renaming that extension to just 'uuid',\n> because as J.L. says, virtually nobody's actually compiling against\n> the OSSP library any more.\n\n+1\n\nThere's no ALTER EXTENSION RENAME, and I suppose there can't be because\nit would require editing/rewriting on-disk files that the server might\nnot even have write permissions for. But your idea of an \"update\"\nscript that effectively moves everything over into a new extension\n(that's physically installed but not present in current database)\nmight work.\n\nAnother way to approach it would be to have a script that belongs\nto the new extension and what you do is\n\tCREATE EXTENSION uuid FROM \"uuid_ossp\";\nto perform the migration of the SQL objects.\n\nEither way, we'd be looking at providing two .so's for some period\nof time, but fortunately they're small.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Apr 2019 11:34:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 8/4/19 17:06, Robert Haas wrote:\n> On Sun, Apr 7, 2019 at 10:15 AM David Fetter <david@fetter.org> wrote:\n>> I see some.\n>>\n>> UUIDs turn out to be super useful in distributed systems to give good\n>> guarantees of uniqueness without coordinating with a particular node.\n>> Such systems have become a good bit more common since the most recent\n>> time this was discussed.\n> That's not really a compelling reason, though, because anybody who\n> needs UUIDs can always install the extension. And on the other hand,\n> if we moved UUID support into core, then we'd be adding a hard compile\n> dependency on one of the UUID facilities, which might annoy some\n> developers. We could possibly work around that by implementing our\n> own UUID facilities in core,\n\nYup. My proposal basically revolves around implementing v3 / v4 / v5 \n(most used/useful versions for the aforementioned use cases) in core, \nusing the already existing md5 and sha1 facilities (which are already \nbeing linked from the current uuid-ossp extension as fallback with \ncertain configurations) ... and leaving the remaining functionality in \nthe extension, just as it is now.\n\nThis way, we guarantee backwards compatibility: Those already using the \nextension wouldn't have to change anything, and new users won't need to \nload any extension to benefit from this (base) functionality.\n\n> but I'm not volunteering to do the work,\nOf course, I'd take care of that :)\n> and I'm not sure that the work has enough benefit to justify the\n> labor.\n\nWith this \"encouragement\", I'll write the code and submit the patches to \na future commitfest. Then the normal procedure will take care of judging \nwhether it's worth being included or not :$\n\n> My biggest gripe about uuid-ossp is that the name is stupid. I wish\n> we could see our way clear to renaming that extension to just 'uuid',\n> because as J.L. says, virtually nobody's actually compiling against\n> the OSSP library any more. 
The trick there is how to do that without\n> annoying exiting users. Maybe we could leave behind an \"upgrade\"\n> script for the uuid-ossp extension that does CREATE EXTENSION uuid,\n> then alters all objects owned by the current extension to be owned by\n> the new extension, and maybe even drops itself.\n\nI believe my proposal above mostly solves the issue: new users with \n\"standard\" needs won't need to load any extension (better than current), \nold users will get the same functionality as they have today (only part \nin core and part in the extension)...\n\n ...and a relatively simple \"alias\" (think Linux kernel modules) \nfacility would make the transition fully transparent: rename extension \nto \"uuid\" ---possibly dropping the dependency on uuid-ossp in the \nprocess--- and expose an \"uuid-ossp\" alias for backwards compatibility.\n\n\nThanks,\n\n J.L.\n\n\n\n\n",
"msg_date": "Mon, 8 Apr 2019 22:56:00 +0200",
"msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-08 11:06:57 -0400, Robert Haas wrote:\n> That's not really a compelling reason, though, because anybody who\n> needs UUIDs can always install the extension. And on the other hand,\n> if we moved UUID support into core, then we'd be adding a hard compile\n> dependency on one of the UUID facilities, which might annoy some\n> developers. We could possibly work around that by implementing our\n> own UUID facilities in core, but I'm not volunteering to do the work,\n> and I'm not sure that the work has enough benefit to justify the\n> labor.\n\nThe randomness based UUID generators don't really have dependencies, now\nthat we have a dependency on strong randomness. I kinda think the\ndependency argument actually works *against* uuid-ossp - precisely\nbecause of its dependencies (which also vary by OS) it's not a proper\nreplacement for a type of facility a very sizable fraction of our users\nneed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 8 Apr 2019 14:06:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 2019-04-08 23:06, Andres Freund wrote:\n> The randomness based UUID generators don't really have dependencies, now\n> that we have a dependency on strong randomness. I kinda thing the\n> dependency argument actually works *against* uuid-ossp - precisely\n> because of its dependencies (which also vary by OS) it's not a proper\n> replacement for a type of facility a very sizable fraction of our users\n> need.\n\nYeah, I think implementing a v4 generator in core would be trivial and\naddress almost everyone's requirements.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 9 Apr 2019 08:04:07 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 2019-04-09 08:04, Peter Eisentraut wrote:\n> On 2019-04-08 23:06, Andres Freund wrote:\n>> The randomness based UUID generators don't really have dependencies, now\n>> that we have a dependency on strong randomness. I kinda thing the\n>> dependency argument actually works *against* uuid-ossp - precisely\n>> because of its dependencies (which also vary by OS) it's not a proper\n>> replacement for a type of facility a very sizable fraction of our users\n>> need.\n> \n> Yeah, I think implementing a v4 generator in core would be trivial and\n> address almost everyone's requirements.\n\nHere is a proposed patch for this. I did a fair bit of looking around\nin other systems for a naming pattern but didn't find anything\nconsistent. So I ended up just taking the function name and code from\npgcrypto.\n\nAs you can see, the code is trivial and has no external dependencies. I\nthink this would significantly upgrade the usability of the uuid type.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 11 Jun 2019 10:49:17 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 11/6/19 10:49, Peter Eisentraut wrote:\n> On 2019-04-09 08:04, Peter Eisentraut wrote:\n>> On 2019-04-08 23:06, Andres Freund wrote:\n>>> The randomness based UUID generators don't really have dependencies, now\n>>> that we have a dependency on strong randomness. I kinda thing the\n>>> dependency argument actually works *against* uuid-ossp - precisely\n>>> because of its dependencies (which also vary by OS) it's not a proper\n>>> replacement for a type of facility a very sizable fraction of our users\n>>> need.\n>> Yeah, I think implementing a v4 generator in core would be trivial and\n>> address almost everyone's requirements.\n> Here is a proposed patch for this. I did a fair bit of looking around\n> in other systems for a naming pattern but didn't find anything\n> consistent. So I ended up just taking the function name and code from\n> pgcrypto.\n>\n> As you can see, the code is trivial and has no external dependencies. I\n> think this would significantly upgrade the usability of the uuid type.\n\nYes, indeed. Thanks!\n\nThis is definitively a good step towards removing external dependencies \nfor general usage of UUIDs. As recently commented, enabling extensions \nat some MSPs/Cloud providers can be a bit challenging.\n\n\nI wonder whether re-implementing some more of the extension's (ie. UUID \nv5) in terms of PgCrypto and in-core makes sense / would actually be \naccepted into core?\n\nI assume that Peter would like to commit that potential patch series?\n\n\nThanks,\n\n / J.L.\n\n\n\n\n",
"msg_date": "Tue, 11 Jun 2019 12:31:29 +0200",
"msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 2019-06-11 12:31, Jose Luis Tallon wrote:\n> I wonder whether re-implementing some more of the extension's (ie. UUID \n> v5) in terms of PgCrypto and in-core makes sense / would actually be \n> accepted into core?\n\nThose other versions are significantly more complicated to implement,\nand I don't think many people really need them, so I'm not currently\ninterested.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 11 Jun 2019 13:11:51 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 11/6/19 13:11, Peter Eisentraut wrote:\n> On 2019-06-11 12:31, Jose Luis Tallon wrote:\n>> I wonder whether re-implementing some more of the extension's (ie. UUID\n>> v5) in terms of PgCrypto and in-core makes sense / would actually be\n>> accepted into core?\n> Those other versions are significantly more complicated to implement,\n> and I don't think many people really need them, so I'm not currently\n> interested.\n\nFor the record: I was volunteering to implement that functionality. I'd \nonly need some committer to take a look and erm... commit it :)\n\nThank you, in any case; The patch you have provided will be very useful.\n\n\n / J.L.\n\n\n\n\n\n",
"msg_date": "Tue, 11 Jun 2019 16:47:28 +0200",
"msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On Mon, Apr 8, 2019 at 11:04 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> Yeah, I think implementing a v4 generator in core would be trivial and\n> address almost everyone's requirements.\n\nFWIW, I think that we could do better with nbtree page splits given\nsequential UUIDs of one form or another [1]. We could teach\nnbtsplitloc.c to pack leaf pages full of UUIDs in the event of the\nuser using sequential UUIDs. With a circular UUID prefix, I think\nyou'll run into an issue similar to the issue that was addressed by\nthe \"split after new tuple\" optimization -- most leaf pages end up 50%\nfull. I've not verified this, but I can't see why it would be any\ndifferent to other multimodal key space with sequential insertions\nthat are grouped. Detecting this in UUIDs may or may not require\nopclass infrastructure. Either way, I'm not likely to work on it until\nthere is a clear target, such as a core or contrib sequential UUID\ngenerator. Though I am looking at various ways to improve\nnbtsplitloc.c for Postgres 13 -- I suspect that additional wins are\npossible.\n\nAny sequential UUID scheme will already have far fewer problems with\nindexing today, since random UUIDs are *dreadful*, but I can imagine\ndoing quite a lot better still. Application developers love UUIDs. We\nshould try to meet them where they are.\n\n[1] https://www.2ndquadrant.com/en/blog/sequential-uuid-generators/\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 28 Jun 2019 15:24:03 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "\nHello Peter,\n\n>> Yeah, I think implementing a v4 generator in core would be trivial and\n>> address almost everyone's requirements.\n>\n> Here is a proposed patch for this. I did a fair bit of looking around\n> in other systems for a naming pattern but didn't find anything\n> consistent. So I ended up just taking the function name and code from\n> pgcrypto.\n>\n> As you can see, the code is trivial and has no external dependencies. I\n> think this would significantly upgrade the usability of the uuid type.\n\nPatch applies cleanly.\n\nHowever it does not compile, it fails on: \"Duplicate OIDs detected: 3429\".\n\nSomeone inserted a new entry since it was produced.\n\nI'm wondering whether pg_random_uuid() should be taken out of pgcrypto if \nit is available in core?\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 30 Jun 2019 15:50:15 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On Fri, Jun 28, 2019 at 03:24:03PM -0700, Peter Geoghegan wrote:\n>On Mon, Apr 8, 2019 at 11:04 PM Peter Eisentraut\n><peter.eisentraut@2ndquadrant.com> wrote:\n>> Yeah, I think implementing a v4 generator in core would be trivial and\n>> address almost everyone's requirements.\n>\n>FWIW, I think that we could do better with nbtree page splits given\n>sequential UUIDs of one form or another [1]. We could teach\n>nbtsplitloc.c to pack leaf pages full of UUIDs in the event of the\n>user using sequential UUIDs. With a circular UUID prefix, I think\n>you'll run into an issue similar to the issue that was addressed by\n>the \"split after new tuple\" optimization -- most leaf pages end up 50%\n>full. I've not verified this, but I can't see why it would be any\n>different to other multimodal key space with sequential insertions\n>that are grouped.\n\nI think the state with pages being only 50% full is only temporary,\nbecause thanks to the prefix being circular we'll get back to the page\neventually and add more tuples to it.\n\nIt's not quite why I made the prefix circular (in my extension) - that was\nto allow reuse of space after deleting rows. But I think it should help\nwith this too.\n\n\n> Detecting this in UUIDs may or may not require\n>opclass infrastructure. Either way, I'm not likely to work on it until\n>there is a clear target, such as a core or contrib sequential UUID\n>generator. Though I am looking at various ways to improve nbtsplitloc.c\n>for Postgres 13 -- I suspect that additional wins are possible.\n>\n\nI'm not against improving this, although I don't have a very clear idea\nhow it should work in the end. But UUIDs are used pretty commonly so it's\na worthwhile optimization area.\n\n>Any sequential UUID scheme will already have far fewer problems with\n>indexing today, since random UUIDs are *dreadful*, but I can imagine\n>doing quite a lot better still. Application developers love UUIDs. 
We\n>should try to meet them where they are.\n>\n\nI agree.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sun, 30 Jun 2019 20:26:40 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 2019-06-30 14:50, Fabien COELHO wrote:\n> I'm wondering whether pg_random_uuid() should be taken out of pgcrypto if \n> it is available in core?\n\nThat would probably require an extension version update dance in\npgcrypto. I'm not sure if it's worth that. Thoughts?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 2 Jul 2019 08:26:16 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 2/7/19 9:26, Peter Eisentraut wrote:\n> On 2019-06-30 14:50, Fabien COELHO wrote:\n>> I'm wondering whether pg_random_uuid() should be taken out of pgcrypto if\n>> it is available in core?\n> That would probably require an extension version update dance in\n> pgcrypto. I'm not sure if it's worth that. Thoughts?\n\nWhat I have devised for my upcoming patch series is to use a \ncompatibility \"shim\" that calls the corresponding core code when the \nexpected usage does not match the new names/signatures...\n\nThis way we wouldn't even need to version bump pgcrypto (full backwards \ncompatibility -> no bump needed). Another matter is whether this should \nraise some \"deprecation warning\" or the like; I don't think we have any \nsuch mechanisms available yet.\n\n\nFWIW, I'm implementing an \"alias\" functionality for extensions, too, in \norder to achieve transparent (for the user) extension renames.\n\nHTH\n\n\nThanks,\n\n / J.L.\n\n\n\n\n",
"msg_date": "Tue, 2 Jul 2019 10:35:28 +0200",
"msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-06-30 14:50, Fabien COELHO wrote:\n>> I'm wondering whether pg_random_uuid() should be taken out of pgcrypto if \n>> it is available in core?\n\n> That would probably require an extension version update dance in\n> pgcrypto. I'm not sure if it's worth that. Thoughts?\n\nWe have some previous experience with this type of thing when we migrated\ncontrib/tsearch2 stuff into core. I'm too caffeine-deprived to remember\nexactly what we did or how well it worked. But it seems advisable to go\nstudy that history, because we could easily make things a mess for users\nif we fail to consider their upgrade experience.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Jul 2019 11:09:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 2019-07-02 17:09, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> On 2019-06-30 14:50, Fabien COELHO wrote:\n>>> I'm wondering whether pg_random_uuid() should be taken out of pgcrypto if \n>>> it is available in core?\n> \n>> That would probably require an extension version update dance in\n>> pgcrypto. I'm not sure if it's worth that. Thoughts?\n> \n> We have some previous experience with this type of thing when we migrated\n> contrib/tsearch2 stuff into core. I'm too caffeine-deprived to remember\n> exactly what we did or how well it worked. But it seems advisable to go\n> study that history, because we could easily make things a mess for users\n> if we fail to consider their upgrade experience.\n\nI think in that case we wanted users of the extension to transparently\nend up using the in-core code. This is not the case here: Both the\nextension and the proposed in-core code do the same thing and there is\nvery little code duplication, so having them coexist would be fine in\nprinciple.\n\nI think the alternatives are:\n\n1. We keep the code in both places. This is fine. There is no problem\nwith having the same C function or the same SQL function name in both\nplaces.\n\n2. We remove the C function from pgcrypto and make an extension version\nbump. This will create breakage for (some) current users of the\nfunction from pgcrypto.\n\nSo option 2 would ironically punish the very users we are trying to\nhelp. So I think just doing nothing is the best option.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 4 Jul 2019 17:12:26 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I think the alternatives are:\n\n> 1. We keep the code in both places. This is fine. There is no problem\n> with having the same C function or the same SQL function name in both\n> places.\n\n> 2. We remove the C function from pgcrypto and make an extension version\n> bump. This will create breakage for (some) current users of the\n> function from pgcrypto.\n\n> So option 2 would ironically punish the very users we are trying to\n> help. So I think just doing nothing is the best option.\n\nHm. Option 1 means that it's a bit unclear which function you are\nactually calling. As long as the implementations behave identically,\nthat seems okay, but I wonder if that's a constraint we want for the\nlong term.\n\nA possible option 3 is to keep the function in pgcrypto but change\nits C code to call the core code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Jul 2019 11:17:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 2019-Jul-04, Tom Lane wrote:\n\n> A possible option 3 is to keep the function in pgcrypto but change\n> its C code to call the core code.\n\nThis seems most reasonable, and is what Jos� Luis proposed upthread. We\ndon't have to bump the pgcrypto extension version, as nothing changes\nfor pgcrypto externally.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 4 Jul 2019 11:30:36 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 4/7/19 17:30, Alvaro Herrera wrote:\n> On 2019-Jul-04, Tom Lane wrote:\n>\n>> A possible option 3 is to keep the function in pgcrypto but change\n>> its C code to call the core code.\n> This seems most reasonable, and is what José Luis proposed upthread. We\n> don't have to bump the pgcrypto extension version, as nothing changes\n> for pgcrypto externally.\n\nYes, indeed.\n\n...which means I get another todo item if nobody else volunteers :)\n\n\nThanks!\n\n / J.L.\n\n\n\n\n",
"msg_date": "Fri, 5 Jul 2019 00:08:16 +0200",
"msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 2019-07-05 00:08, Jose Luis Tallon wrote:\n> On 4/7/19 17:30, Alvaro Herrera wrote:\n>> On 2019-Jul-04, Tom Lane wrote:\n>>\n>>> A possible option 3 is to keep the function in pgcrypto but change\n>>> its C code to call the core code.\n\nUpdated patch with this change included.\n\n(There is also precedent for redirecting the extension function to the\ninternal one by changing the SQL-level function definition using CREATE\nOR REPLACE FUNCTION ... LANGUAGE INTERNAL. But that seems more\ncomplicated and would require a new extension version. It could maybe\nbe included if the extension version is changed for other reasons.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 5 Jul 2019 11:00:48 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 5/7/19 11:00, Peter Eisentraut wrote:\n> On 2019-07-05 00:08, Jose Luis Tallon wrote:\n>> On 4/7/19 17:30, Alvaro Herrera wrote:\n>>> On 2019-Jul-04, Tom Lane wrote:\n>>>\n>>>> A possible option 3 is to keep the function in pgcrypto but change\n>>>> its C code to call the core code.\n> Updated patch with this change included.\nGreat, thanks!\n\n\n",
"msg_date": "Fri, 5 Jul 2019 13:35:58 +0200",
"msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 2019-Jul-05, Peter Eisentraut wrote:\n\n> (There is also precedent for redirecting the extension function to the\n> internal one by changing the SQL-level function definition using CREATE\n> OR REPLACE FUNCTION ... LANGUAGE INTERNAL. But that seems more\n> complicated and would require a new extension version.\n\nOne issue with this approach is that it forces the internal function to\nremain unchanged forever. That seems OK in this particular case.\n\n> It could maybe be included if the extension version is changed for\n> other reasons.)\n\nMaybe add a comment in the control file (?) so that we remember to do it\nthen.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 5 Jul 2019 10:21:26 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jul-05, Peter Eisentraut wrote:\n>> (There is also precedent for redirecting the extension function to the\n>> internal one by changing the SQL-level function definition using CREATE\n>> OR REPLACE FUNCTION ... LANGUAGE INTERNAL. But that seems more\n>> complicated and would require a new extension version.\n\n> One issue with this approach is that it forces the internal function to\n> remain unchanged forever. That seems OK in this particular case.\n\nNo, what it's establishing is that the extension and core functions\nwill do the same thing forevermore. Seems to me that's what we want\nhere.\n\n>> It could maybe be included if the extension version is changed for\n>> other reasons.)\n\n> Maybe add a comment in the control file (?) so that we remember to do it\n> then.\n\nI'm not terribly excited about that --- we'd still need to keep the\nC function redirection in place in the .so file, for benefit of\npeople who hadn't done ALTER EXTENSION UPGRADE.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Jul 2019 10:27:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "\nHello Peter,\n\n>>>> A possible option 3 is to keep the function in pgcrypto but change\n>>>> its C code to call the core code.\n>\n> Updated patch with this change included.\n\nPatch applies cleanly, compiles (both pg and pgcrypto). make check (global \nand pgcrypto) ok. Doc generation ok. Minor comments:\n\nAbout doc: I'd consider \"generation\" instead of \"generating\" as a \nsecondary index term.\n\n> (There is also precedent for redirecting the extension function to the\n> internal one by changing the SQL-level function definition using CREATE\n> OR REPLACE FUNCTION ... LANGUAGE INTERNAL. But that seems more\n> complicated and would require a new extension version. It could maybe\n> be included if the extension version is changed for other reasons.)\n\nWhat about avoiding a redirection with something like:\n\nDatum (* const pg_random_uuid)(PG_FUNCTION_ARGS) = gen_random_uuid;\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 13 Jul 2019 08:08:35 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "\nHello Jose,\n\n> Got it, and done. Please find attached a v2 patch with the upgrade script \n> included.\n\nPatch v2 applies cleanly. Compiles cleanly (once running configure \n--with-uuid=...). Local make check ok. Doc build ok.\n\nThere are no tests, I'd suggest to add some under sql & change expected if \npossible which would return all possible values, including with calling \npg_random_uuid() which should return 4.\n\nDocumentation describes uuid_version(), should it not describe \nuuid_version(namespace uuid)?\n\nI'd suggest to add an example.\n\nThe extension update script seems ok, but ISTM that \"uuid-ossp-1.1.sql\" \nshould be replaced with an updated \"uuid-ossp-1.2.sql\".\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 13 Jul 2019 08:31:58 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 13/7/19 8:31, Fabien COELHO wrote:\n>\n> Hello Jose,\n\nHello, Fabien\n\nThanks for taking a look\n\n>\n>> Got it, and done. Please find attached a v2 patch with the upgrade \n>> script included.\n>\n> Patch v2 applies cleanly. Compiles cleanly (once running configure \n> --with-uuid=...). Local make check ok. Doc build ok.\n>\n> There are no tests, I'd suggest to add some under sql & change \n> expected if possible which would return all possible values, including \n> with calling pg_random_uuid() which should return 4.\n>\n> Documentation describes uuid_version(), should it not describe \n> uuid_version(namespace uuid)?\n>\n> I'd suggest to add an example.\n>\n> The extension update script seems ok, but ISTM that \n> \"uuid-ossp-1.1.sql\" should be replaced with an updated \n> \"uuid-ossp-1.2.sql\".\n>\nThis was a quite naïf approach to the issue on my part, more a \"scratch \nmy own itch\" than anything else.... but definitively sparked some \ninterest. Thanks to all involved.\n\nConsidering the later arguments on-list, I plan on submitting a more \nelaborate patchset integrating the feedback received so far, and along \nthe following lines:\n\n- uuid type, v4 generation and uuid_version() in core\n\n- Provide a means to rename/supercede extensions keeping backwards \ncompatibility (i.e. uuid-ossp -> uuid, keep old code working)\n\n- Miscellaneous other functionality\n\n- Drop \"dead\" code\n\n ...but I've tried to keep quiet so as not to disturb too much around \nrelease time.\n\n\nI intend to continue working on this in late July, aiming for the \nfollowing commitfest (once more \"urgent\" patches will have landed)\n\n\nThanks again.\n\n J.L.\n\n\n\n\n",
"msg_date": "Sat, 13 Jul 2019 12:00:48 +0200",
"msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 2019-07-13 08:08, Fabien COELHO wrote:\n> About doc: I'd consider \"generation\" instead of \"generating\" as a \n> secondary index term.\n\nWe do use the \"-ing\" form for other secondary index terms. It's useful\nbecause the concatenation of primary and secondary term should usually\nmake a phrase of some sort. The alternative would be \"generation of\",\nbut that doesn't seem clearly better.\n\n>> (There is also precedent for redirecting the extension function to the\n>> internal one by changing the SQL-level function definition using CREATE\n>> OR REPLACE FUNCTION ... LANGUAGE INTERNAL. But that seems more\n>> complicated and would require a new extension version. It could maybe\n>> be included if the extension version is changed for other reasons.)\n> \n> What about avoiding a redirection with something like:\n> \n> Datum (* const pg_random_uuid)(PG_FUNCTION_ARGS) = gen_random_uuid;\n\nThat seems very confusing.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 13 Jul 2019 14:36:54 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "\nHello Peter,\n\n>> About doc: I'd consider \"generation\" instead of \"generating\" as a\n>> secondary index term.\n>\n> We do use the \"-ing\" form for other secondary index terms. It's useful\n> because the concatenation of primary and secondary term should usually\n> make a phrase of some sort. The alternative would be \"generation of\",\n> but that doesn't seem clearly better.\n\nOk, fine. I looked but did not find other instances of \"generating\".\n\n>> What about avoiding a redirection with something like:\n>>\n>> Datum (* const pg_random_uuid)(PG_FUNCTION_ARGS) = gen_random_uuid;\n>\n> That seems very confusing.\n\nDunno. Possibly. The user does not have to look at the implementation, and \nprobably such code would deserve a comment.\n\nThe point is to avoid one call so as to perform the same (otherwise the \npg_random_uuid would be slightly slower), and to ensure that it behaves \nthe same, as it would be the very same function by construction.\n\nI've switched the patch to ready anyway.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 13 Jul 2019 17:13:36 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "Hello Jose,\n\n> Considering the later arguments on-list, I plan on submitting a more \n> elaborate patchset integrating the feedback received so far, and along the \n> following lines:\n>\n> - uuid type, v4 generation and uuid_version() in core\n>\n> - Provide a means to rename/supercede extensions keeping backwards \n> compatibility (i.e. uuid-ossp -> uuid, keep old code working)\n>\n> - Miscellaneous other functionality\n>\n> - Drop \"dead\" code\n>\n> ...but I've tried to keep quiet so as not to disturb too much around \n> release time.\n>\n> I intend to continue working on this in late July, aiming for the following \n> commitfest (once more \"urgent\" patches will have landed)\n\nOk.\n\nI've changed the patch status for this CF to \"moved do next CF\", and to \n\"Waiting on author\" there.\n\nThe idea is to go on in the same thread when you are ready.\n\n-- \nFabien.",
"msg_date": "Sat, 13 Jul 2019 17:19:41 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 2019-07-13 17:13, Fabien COELHO wrote:\n>>> What about avoiding a redirection with something like:\n>>>\n>>> Datum (* const pg_random_uuid)(PG_FUNCTION_ARGS) = gen_random_uuid;\n>>\n>> That seems very confusing.\n> \n> Dunno. Possibly. The user does not have to look at the implementation, and \n> probably such code would deserve a comment.\n> \n> The point is to avoid one call so as to perform the same (otherwise the \n> pg_random_uuid would be slightly slower), and to ensure that it behaves \n> the same, as it would be the very same function by construction.\n> \n> I've switched the patch to ready anyway.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 14 Jul 2019 14:40:39 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 7/14/19 9:40 PM, Peter Eisentraut wrote:\n> On 2019-07-13 17:13, Fabien COELHO wrote:\n>>>> What about avoiding a redirection with something like:\n>>>>\n>>>> Datum (* const pg_random_uuid)(PG_FUNCTION_ARGS) = gen_random_uuid;\n>>>\n>>> That seems very confusing.\n>>\n>> Dunno. Possibly. The user does not have to look at the implementation, and\n>> probably such code would deserve a comment.\n>>\n>> The point is to avoid one call so as to perform the same (otherwise the\n>> pg_random_uuid would be slightly slower), and to ensure that it behaves\n>> the same, as it would be the very same function by construction.\n>>\n>> I've switched the patch to ready anyway.\n> \n> committed\n\nSmall doc tweak suggestion - the pgcrypto docs [1] now say about gen_random_uuid():\n\n Returns a version 4 (random) UUID. (Obsolete, this function is now also\n included in core PostgreSQL.)\n\nwhich gives the impression the code contains two versions of this function, the core\none and an obsolete one in pgcrypto. Per the commit message the situation is actually:\n\n The pgcrypto implementation now internally redirects to the built-in one.\n\nSuggested wording improvement in the attached patch.\n\n[1] https://www.postgresql.org/docs/devel/pgcrypto.html#id-1.11.7.34.9\n\n\nRegards\n\nIan Barwick\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 15 Jul 2019 11:37:35 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
},
{
"msg_contents": "On 2019-Jul-13, Jose Luis Tallon wrote:\n\n> Considering the later arguments on-list, I plan on submitting a more\n> elaborate patchset integrating the feedback received so far, and along the\n> following lines:\n> \n> - uuid type, v4 generation and uuid_version() in core\n> \n> - Provide a means to rename/supercede extensions keeping backwards\n> compatibility (i.e. uuid-ossp -> uuid, keep old code working)\n\nIt is wholly unclear what this commitfest entry is all about; in the\nthread there's a mixture about a new uuid_version(), some new v4 stuff\nmigrating from pgcrypto (which apparently was done), plus some kind of\nmechanism to allow upgrading extension names; all this stemming from\nfeedback from the patch submitted in April. But there hasn't been a new\npatch in a long time, and there won't be a new patch version during the\ncurrent commitfest. Therefore, I'm closing this entry as Returned with\nFeedback. The author(s) can create a new entry in a future commitfest\nonce they have a new patch.\n\nI do suggest to keep such a patch restricted in scope.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 2 Sep 2019 12:00:25 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement uuid_version()"
}
] |
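The thread above concerns `uuid_version()` and moving `gen_random_uuid()` into core: a version 4 (random) UUID carries its version number inside the value itself, which is all such a helper has to inspect. A minimal Python sketch of the same idea (the helper name mirrors the proposed SQL function and is purely illustrative, not PostgreSQL code):

```python
import uuid

def uuid_version(u: uuid.UUID) -> int:
    # The version lives in the four most significant bits of byte 6
    # (time_hi_and_version); Python's uuid module exposes it directly.
    return u.version

# uuid4() generates a random UUID, analogous to PostgreSQL's gen_random_uuid()
print(uuid_version(uuid.uuid4()))  # → 4
```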
[
{
"msg_contents": "Hello devs,\n\nThe attached patch adds minimal stats during the initialization phase. \nSuch a feature was already submitted by Doug Rady two years ago,\n\n \thttps://commitfest.postgresql.org/15/1308/\n\nbut it needed to be adapted to the -I custom initialization approach \ndeveloped in the same CF, and it ended 'returned with feedback'.\n\n sh> ./pgbench -i -s 3\n dropping old tables...\n creating tables...\n generating data...\n 100000 of 300000 tuples (33%) done (elapsed 0.09 s, remaining 0.18 s)\n 200000 of 300000 tuples (66%) done (elapsed 0.20 s, remaining 0.10 s)\n 300000 of 300000 tuples (100%) done (elapsed 0.32 s, remaining 0.00 s)\n vacuuming...\n creating primary keys...\n done in 0.68 s (drop 0.06 s, create table 0.02 s, generate 0.34 s, vacuum 0.13 s, primary keys 0.13 s).\n\nSee the durations on the last line.\n\nThe intent is to test the initialization phase more precisely, and \npossibly accelerate it. For instance, is it better to do vacuum before or \nafter primary keys?\n\n-- \nFabien.",
"msg_date": "Sat, 6 Apr 2019 18:26:30 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "pgbench - add minimal stats on initialization"
},
{
"msg_contents": "> done in 0.68 s (drop 0.06 s, create table 0.02 s, generate 0.34 s, vacuum \n> 0.13 s, primary keys 0.13 s).\n>\n> See the durations on the last line.\n\nIt's even better with working TAP tests.\n\n-- \nFabien.",
"msg_date": "Sun, 7 Apr 2019 18:22:46 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - add minimal stats on initialization"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, failed\nSpec compliant: tested, failed\nDocumentation: not tested\n\nPatch works perfectly and the code is well-written. I have one minor observation that in case of initDropTables you log \"drop\" and in case of initCreateTables you log \"create table\". I think you need to be consistent. And why not \"drop tables\" and \"create tables\"\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Wed, 10 Apr 2019 09:15:47 +0000",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add minimal stats on initialization"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nPlease ignore the last email.\n\nPatch works perfectly and the code is well-written. I have one minor observation that in case of initDropTables you log \"drop\" and in case of initCreateTables you log \"create table\". I think you need to be consistent. And why not \"drop tables\" and \"create tables\"\n\nThe new status of this patch is: Waiting on Author",
"msg_date": "Wed, 10 Apr 2019 09:17:59 +0000",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add minimal stats on initialization"
},
{
"msg_contents": "Hi Fabien,\n\nI have one minor observation that in case of initDropTables you log\n'drop' and in case of initCreateTables you log 'create table'. We need\nto be consistent. The \"drop tables\" and \"create tables\" are the best\nfit here. Otherwise, the patch is good.\n\nOn Wed, Apr 10, 2019 at 2:18 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: not tested\n>\n> Please ignore the last email.\n>\n> Patch works perfectly and the code is well-written. I have one minor observation that in case of initDropTables you log \"drop\" and in case of initCreateTables you log \"create table\". I think you need to be consistent. And why not \"drop tables\" and \"create tables\"\n>\n> The new status of this patch is: Waiting on Author\n\n\n\n-- \nIbrar Ahmed\n\n\n",
"msg_date": "Wed, 10 Apr 2019 17:58:11 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add minimal stats on initialization"
},
{
"msg_contents": "Hello,\n\nThanks for the feedback.\n\n> I have one minor observation that in case of initDropTables you log\n> 'drop' and in case of initCreateTables you log 'create table'. We need\n> to be consistent. The \"drop tables\" and \"create tables\" are the best\n> fit here.\n\nOk.\n\nAttached version does that, plus avoids re-assigning \"first\" on each loop, \nplus checks that --no-vacuum indeed removes all vacuums in the TAP test.\n\n-- \nFabien.",
"msg_date": "Wed, 10 Apr 2019 19:39:00 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - add minimal stats on initialization"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nPatch works fine on my machine.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Thu, 11 Apr 2019 12:43:09 +0000",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add minimal stats on initialization"
},
{
"msg_contents": "On Fri, Apr 12, 2019 at 12:44 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: not tested\n>\n> Patch works fine on my machine.\n>\n> The new status of this patch is: Ready for Committer\n\nI spotted one typo, a comma where a semi-colon was wanted:\n\n+ op = \"generate\",\n initGenerateData(con);\n break;\n\nI fixed that, ran it through pgindent and committed. Thanks for the\npatch and review!\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Jul 2019 11:43:32 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add minimal stats on initialization"
}
] |
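The `elapsed`/`remaining` figures in the progress lines quoted at the top of this thread follow from simple proportional extrapolation over the tuples generated so far. A small Python sketch of that arithmetic (illustrative only, not pgbench's actual C code):

```python
def remaining_estimate(done: int, total: int, elapsed: float) -> float:
    """Extrapolate remaining time assuming a constant per-tuple rate."""
    if done <= 0:
        return float("inf")  # no rate information yet
    return elapsed * (total - done) / done

# Reproduces the sample log line: 100000 of 300000 tuples after 0.09 s
print(round(remaining_estimate(100000, 300000, 0.09), 2))  # → 0.18
```

which matches the `remaining 0.18 s` shown for the first progress line above.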
[
{
"msg_contents": "Hello devs,\n\nthe attached patch adds some more control over the initialization phase.\nIn particular, ( and ) allow to begin/commit explicitly, and G generates \nthe data server-side instead of client side, which might be a good idea \ndepending on the available bandwidth.\n\nTogether with the previously submitted patch about getting stats on the \ninitialization phase, the idea is to possibly improve this phase, or use \nit as a benchmark tool in itself.\n\n-- \nFabien.",
"msg_date": "Sat, 6 Apr 2019 18:31:11 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "pgbench - extend initialization phase control"
},
{
"msg_contents": "Does both client/server side data generation in a single command make sense?",
"msg_date": "Mon, 10 Jun 2019 14:55:33 +0000",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "\nHello Ibrar,\n\n> Does both client/server side data generation in a single command make \n> sense?\n\nI think yes, especially with the other patch which adds timing measures to \nthe initialization phases. It really depends what you want to test.\n\nWith client-side generation you test the libpq COPY interface and network \nprotocol for bulk loading.\n\nWith server-side generation you get the final result faster when \nnetwork bandwidth is low, and somehow you are testing a different kind of \nsmall query which generates a lot of data.\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 11 Jun 2019 06:43:13 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nOther than that, the patch looks good to me.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Tue, 16 Jul 2019 06:39:10 +0000",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "Hello Ibrar,\n\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: not tested\n>\n> Other than that, the patch looks good to me.\n>\n> The new status of this patch is: Ready for Committer\n\nThanks for the review.\n\nAttached v2 is a rebase after ce8f9467.\n\n-- \nFabien.",
"msg_date": "Tue, 16 Jul 2019 07:58:46 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "Hello Fabien,\n\n> ---------- Forwarded message ---------\n> From: Fabien COELHO <coelho@cri.ensmp.fr>\n> Date: Tue, Jul 16, 2019 at 4:58 PM\n> Subject: Re: pgbench - extend initialization phase control\n> To: Ibrar Ahmed <ibrar.ahmad@gmail.com>\n> Cc: PostgreSQL Developers <pgsql-hackers@lists.postgresql.org>\n> \n> \n> \n> Hello Ibrar,\n> \n>> The following review has been posted through the commitfest \n>> application:\n>> make installcheck-world: tested, passed\n>> Implements feature: tested, passed\n>> Spec compliant: tested, passed\n>> Documentation: not tested\n>> \n>> Other than that, the patch looks good to me.\n>> \n>> The new status of this patch is: Ready for Committer\n> \n> Thanks for the review.\n> \n> Attached v2 is a rebase after ce8f9467.\n\nThanks for your new patch.\n\nBut I failed to apply it. Please rebase it against HEAD.\n\nRegards,\n\n---------\nAnna\n\n\n",
"msg_date": "Thu, 10 Oct 2019 13:30:54 +0900",
"msg_from": "btendouan <btendouan@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": ">> Attached v2 is a rebase after ce8f9467.\n\nHere is rebase v3.\n\n-- \nFabien.",
"msg_date": "Thu, 10 Oct 2019 10:46:50 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "\n> \n> Here is rebase v3.\n\nHi,\n\nThanks for your new patch.\n\nFailed regression test.\nIt's necessary to change the first a in “allowed step characters are” to \nuppercase A in the regression test of 002_pgbench_no_server.pl.\n\nThe behavior of \"g\" is different between v12 and the patch, and \nbackward compatibility is lost.\nIn v12, BEGIN and COMMIT are specified only by choosing \"g\".\nIt's a problem that backward compatibility is lost.\n\nWhen using ( and ) with the -I, the documentation should indicate that \ndouble quotes are required,\nand that \"v\" cannot be enclosed in ( and ).\n\nRegards,\n\n--\nAnna\n\n\n",
"msg_date": "Wed, 16 Oct 2019 14:36:19 +0900",
"msg_from": "btendouan <btendouan@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "Hi,\n\nWhen g is specified, null is inserted in the filler column of \npgbench_tellers, accounts, branches.\nBut when G is specified, an empty string is inserted.\n\nDo you have any intention of this difference?\n\n--\nAnna\n\n\n",
"msg_date": "Thu, 17 Oct 2019 17:35:10 +0900",
"msg_from": "btendouan <btendouan@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "Hello,\n\n> Failed regression test. It's necessary to change the first a in “allowed \n> step characters are” to uppercase A in the regression test of \n> 002_pgbench_no_server.pl.\n\nArgh. I think I ran the test, then stupidly updated the message afterwards \nto better match best practices, without rechecking:-(\n\n> The behavior of \"g\" is different between v12 and the patch, and \n> backward compatibility is lost. In v12, BEGIN and COMMIT are specified \n> only by choosing \"g\". It's a problem that backward compatibility is \n> lost.\n\nSomehow yes, but I do not see this as an actual problem from a functional \npoint of view: it just means that if you use a 'dtgvp' with the newer \nversion and if the inserts were to fail, then they are not under an \nexplicit transaction, so previous inserts are not cleaned up. However, \nthis is a pretty unlikely case, and anyway the error is reported, so any \nuser would be expected not to go on after the initialization phase.\n\nSo basically I do not see the very small regression for an unlikely corner \ncase to induce any problem in practice.\n\nThe benefit of controlling where begin/end actually occur is that it may \nhave an impact on performance, and it allows to check that.\n\n> When using ( and ) with the -I, the documentation should indicate that double \n> quotes are required,\n\nOr single quotes, or backslash, if launched from the command line. I added a \nmention of escaping or protection in the doc in that case.\n\n> and that \"v\" cannot be enclosed in ( and ).\n\nThat is a postgresql limitation, which may evolve. There could be others. \nI updated the doc to say that some commands may not work inside an \nexplicit transaction.\n\n> When g is specified, null is inserted in the filler column of \n> pgbench_tellers, accounts, branches. But when G is specified, an empty \n> string is inserted.\n\nIndeed there is a small diff. ISTM that the actual filling with the \ninitial client version is NULL for branches and tellers, and a \nblank-padded string for accounts.\n\nI fixed the patch so that the end-result is the same with both g and G.\n\n> Do you have any intention of this difference?\n\nYes and no.\n\nI intended that tellers & branches filler are filled, but I did not really \nnotice that the client side was implicitly using NULL, although it says \nso in a comment. Although I'm not happy with the fact because it cheats \nwith the benchmark design which requires the filler columns to be really \nfilled and stored as is, it is indeed the place to change this (bad) \nbehavior.\n\nAttached a v4 with the updates described above.\n\n-- \nFabien.",
"msg_date": "Thu, 17 Oct 2019 13:09:19 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "\n> Attached a v4 with the updates described above.\n\n\nHi,\n\nThanks for updating the patch.\nAll tests are passed. There is no problem in operation.\n\n--\nAnna\n\n\n",
"msg_date": "Wed, 23 Oct 2019 18:23:52 +0900",
"msg_from": "btendouan <btendouan@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "On Thu, Oct 17, 2019 at 8:09 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello,\n>\n> > Failed regression test. It's necessary to change the first a in “allowed\n> > step characters are” to uppercase A in the regression test of\n> > 002_pgbench_no_server.pl.\n>\n> Argh. I think I ran the test, then stupidly updated the message afterwards\n> to better match best practices, without rechecking:-(\n>\n> > The behavior of \"g\" is different between v12 and the patch, and\n> > backward compatibility is lost. In v12, BEGIN and COMMIT are specified\n> > only by choosing \"g\". It's a problem that backward compatibility is\n> > lost.\n>\n> Somehow yes, but I do not see this as an actual problem from a functional\n> point of view: it just means that if you use a 'dtgvp' with the newer\n> version and if the inserts were to fail, then they are not under an\n> explicit transaction, so previous inserts are not cleaned up. However,\n> this is a pretty unlikely case, and anyway the error is reported, so any\n> user would be expected not to go on after the initialization phase.\n>\n> So basically I do not see the very small regression for an unlikely corner\n> case to induce any problem in practice.\n>\n> The benefit of controlling where begin/end actually occur is that it may\n> have an impact on performance, and it allows to check that.\n\nI still fail to understand the benefit of addition of () settings.\nCould you clarify what case () settings are useful for? You are\nthinking to execute all initialization SQL statements within\nsingle transaction, e.g., -I (dtgp), for some reason?\n\n> > When using ( and ) with the -I, the documentation should indicate that double\n> > quotes are required,\n>\n> Or single quotes, or backslash, if launched from the command line. I added a\n> mention of escaping or protection in the doc in that case.\n\nWhat about using, for example, b (BEGIN) and c (COMMIT) instead\nto avoid such restriction?\n\n> > and that \"v\" cannot be enclosed in ( and ).\n>\n> That is a postgresql limitation, which may evolve. There could be others.\n> I updated the doc to say that some commands may not work inside an\n> explicit transaction.\n\nI think that it's better to check whether \"v\" is enclosed with () or not\nat the beginning of pgbench, and report an error if it is. Otherwise,\nif -I (dtgv) is specified, pgbench reports an error after time-consuming\ndata generation is performed, and of course that data generation is\nrolled back.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 24 Oct 2019 14:04:22 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "Hello Masao-san,\n\n>> The benefit of controlling where begin/end actually occur is that it may\n>> have an impact on performance, and it allows to check that.\n>\n> I still fail to understand the benefit of addition of () settings.\n> Could you clarify what case () settings are useful for? You are\n> thinking to execute all initialization SQL statements within\n> single transaction, e.g., -I (dtgp), for some reason?\n\nYep. Or anything else, including without (), to allow checking the \nperformance impact or non impact of transactions on the initialization \nphase.\n\n>>> When using ( and ) with the -I, the documentation should indicate that double\n>>> quotes are required,\n>>\n>> Or single quotes, or backslash, if launched from the command line. I added a\n>> mention of escaping or protection in the doc in that case.\n>\n> What about using, for example, b (BEGIN) and c (COMMIT) instead\n> to avoid such restriction?\n\nIt is indeed possible. Using an open/close symmetric character ( (), {}, \n[]) looks more pleasing and allows to see easily whether everything is \nproperly closed. I switched to {} which does not generate the same quoting \nissue in shell.\n\n> I think that it's better to check whether \"v\" is enclosed with () or not\n> at the beginning of pgbench, and report an error if it is.\n>\n> Otherwise, if -I (dtgv) is specified, pgbench reports an error after \n> time-consuming data generation is performed, and of course that data \n> generation is rolled back.\n\nThe attached patch v5 adds a check for v inside (), although I'm not keen on \nputting it there, and uses {} instead of ().\n\n-- \nFabien.",
"msg_date": "Thu, 24 Oct 2019 14:16:16 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "On Thu, Oct 24, 2019 at 9:16 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Masao-san,\n>\n> >> The benefit of controlling where begin/end actually occur is that it may\n> >> have an impact on performance, and it allows to check that.\n> >\n> > I still fail to understand the benefit of addition of () settings.\n> > Could you clarify what case () settings are useful for? You are\n> > thinking to execute all initialization SQL statements within\n> > single transaction, e.g., -I (dtgp), for some reason?\n>\n> Yep. Or anything else, including without (), to allow checking the\n> performance impact or non impact of transactions on the initialization\n> phase.\n\nIs there actually such performance impact? AFAIR most time-consuming part in\ninitialization phase is the generation of pgbench_accounts data. This part is\nperformed within single transaction whether () are specified or not. No?\nSo I'm not sure how () are useful to check performance impact in init phase.\nMaybe I'm missing something...\n\n> >>> When using ( and ) with the -I, the documentation should indicate that double\n> >>> quotes are required,\n> >>\n> >> Or single quotes, or backslash, if launched from the command line. I added a\n> >> mention of escaping or protection in the doc in that case.\n> >\n> > What about using, for example, b (BEGIN) and c (COMMIT) instead\n> > to avoid such restriction?\n>\n> It is indeed possible. Using an open/close symmetric character ( (), {},\n> []) looks more pleasing and allows to see easily whether everything is\n> properly closed. I switched to {} which does not generate the same quoting\n> issue in shell.\n>\n> > I think that it's better to check whether \"v\" is enclosed with () or not\n> > at the beginning of pgbench, and report an error if it is.\n> >\n> > Otherwise, if -I (dtgv) is specified, pgbench reports an error after\n> > time-consuming data generation is performed, and of course that data\n> > generation is rolled back.\n>\n> The attached patch v5 adds a check for v inside (), although I'm not keen on\n> putting it there, and uses {} instead of ().\n\nThanks for updating the patch!\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 24 Oct 2019 21:26:54 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "\nHello,\n\n>> Yep. Or anything else, including without (), to allow checking the\n>> performance impact or non impact of transactions on the initialization\n>> phase.\n>\n> Is there actually such performance impact? AFAIR most time-consuming part in\n> initialization phase is the generation of pgbench_accounts data.\n\nMaybe. If you cannot check, you can only guess. Probably it should be \nsmall, but the current version does not allow to check whether it is so.\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 24 Oct 2019 17:06:16 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "On Fri, Oct 25, 2019 at 12:06 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello,\n>\n> >> Yep. Or anything else, including without (), to allow checking the\n> >> performance impact or non impact of transactions on the initialization\n> >> phase.\n> >\n> > Is there actually such performance impact? AFAIR most time-consuming part in\n> > initialization phase is the generation of pgbench_accounts data.\n>\n> Maybe. If you cannot check, you can only guess. Probably it should be\n> small, but the current version does not allow to check whether it is so.\n\nCould you elaborate what you actually want to measure the performance\nimpact by adding explicit begin and commit? Currently pgbench -i issues\nthe following queries. The data generation part is already executed within\nsingle transaction. You want to execute not only data generation but also\ndrop/creation of tables within single transaction, and measure how much\nperformance impact happens? I'm sure that would be negligible.\nOr you want to execute data generation in multiple transactions, i.e.,\nexecute each statement for data generation (e.g., one INSERT) in single\ntransaction, and then want to measure the performance impact?\nBut the patch doesn't enable us to do such data generation yet.\n\nSo I'm thinking that it's maybe better to commit the addition of \"G\" option\nfirst separately. And then we can discuss how much \"(\" and \")\" options\nare useful later.\n\n------------------------------------------\ndrop table if exists pgbench_accounts, pgbench_branches,\npgbench_history, pgbench_tellers\ncreate table pgbench_history(tid int,bid int,aid int,delta\nint,mtime timestamp,filler char(22))\ncreate table pgbench_tellers(tid int not null,bid int,tbalance\nint,filler char(84)) with (fillfactor=100)\ncreate table pgbench_accounts(aid int not null,bid int,abalance\nint,filler char(84)) with (fillfactor=100)\ncreate table pgbench_branches(bid int not null,bbalance int,filler\nchar(88)) with (fillfactor=100)\nbegin\ntruncate table pgbench_accounts, pgbench_branches, pgbench_history,\npgbench_tellers\ninsert into pgbench_branches(bid,bbalance) values(1,0)\ninsert into pgbench_tellers(tid,bid,tbalance) values (1,1,0)\ninsert into pgbench_tellers(tid,bid,tbalance) values (2,1,0)\ninsert into pgbench_tellers(tid,bid,tbalance) values (3,1,0)\ninsert into pgbench_tellers(tid,bid,tbalance) values (4,1,0)\ninsert into pgbench_tellers(tid,bid,tbalance) values (5,1,0)\ninsert into pgbench_tellers(tid,bid,tbalance) values (6,1,0)\ninsert into pgbench_tellers(tid,bid,tbalance) values (7,1,0)\ninsert into pgbench_tellers(tid,bid,tbalance) values (8,1,0)\ninsert into pgbench_tellers(tid,bid,tbalance) values (9,1,0)\ninsert into pgbench_tellers(tid,bid,tbalance) values (10,1,0)\ncopy pgbench_accounts from stdin\ncommit\nvacuum analyze pgbench_branches\nvacuum analyze pgbench_tellers\nvacuum analyze pgbench_accounts\nvacuum analyze pgbench_history\nalter table pgbench_branches add primary key (bid)\nalter table pgbench_tellers add primary key (tid)\nalter table pgbench_accounts add primary key (aid)\n------------------------------------------\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Mon, 28 Oct 2019 16:53:23 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "Hello Masao-san,\n\n>> Maybe. If you cannot check, you can only guess. Probably it should be\n>> small, but the current version does not allow to check whether it is so.\n>\n> Could you elaborate what you actually want to measure the performance\n> impact by adding explicit begin and commit? Currently pgbench -i issues\n> the following queries. The data generation part is already executed within\n> single transaction. You want to execute not only data generation but also\n> drop/creation of tables within single transaction, and measure how much\n> performance impact happens? I'm sure that would be negligible.\n> Or you want to execute data generation in multiple transactions, i.e.,\n> execute each statement for data generation (e.g., one INSERT) in single\n> transaction, and then want to measure the performance impact?\n> But the patch doesn't enable us to do such data generation yet.\n\nIndeed, you cannot do this precise thing, but you can do others.\n\n> So I'm thinking that it's maybe better to commit the addition of \"G\" option\n> first separately. And then we can discuss how much \"(\" and \")\" options\n> are useful later.\n\nAttached patch v6 only provides G - server side data generation.\n\n-- \nFabien.",
"msg_date": "Mon, 28 Oct 2019 14:36:15 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "On Mon, Oct 28, 2019 at 10:36 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Masao-san,\n>\n> >> Maybe. If you cannot check, you can only guess. Probably it should be\n> >> small, but the current version does not allow to check whether it is so.\n> >\n> > Could you elaborate what you actually want to measure the performance\n> > impact by adding explicit begin and commit? Currently pgbench -i issues\n> > the following queries. The data generation part is already executed within\n> > single transaction. You want to execute not only data generation but also\n> > drop/creation of tables within single transaction, and measure how much\n> > performance impact happens? I'm sure that would be negligible.\n> > Or you want to execute data generation in multiple transactions, i.e.,\n> > execute each statement for data generation (e.g., one INSERT) in single\n> > transaction, and then want to measure the performance impact?\n> > But the patch doesn't enable us to do such data generation yet.\n>\n> Indeed, you cannot do this precise thing, but you can do others.\n>\n> > So I'm thinking that it's maybe better to commit the addition of \"G\" option\n> > first separately. And then we can discuss how much \"(\" and \")\" options\n> > are useful later.\n>\n> Attached patch v6 only provides G - server side data generation.\n\nThanks for the patch!\n\n+ snprintf(sql, sizeof(sql),\n+ \"insert into pgbench_branches(bid,bbalance) \"\n+ \"select bid, 0 \"\n+ \"from generate_series(1, %d) as bid\", scale);\n\n\"scale\" should be \"nbranches * scale\".\n\n+ snprintf(sql, sizeof(sql),\n+ \"insert into pgbench_accounts(aid,bid,abalance,filler) \"\n+ \"select aid, (aid - 1) / %d + 1, 0, '' \"\n+ \"from generate_series(1, %d) as aid\", naccounts, scale * naccounts);\n\nLike client-side data generation, INT64_FORMAT should be used here\ninstead of %d?\n\nIf large scale factor is specified, the query for generating pgbench_accounts\ndata can take a very long time. While that query is running, operators may be\nlikely to do Ctrl-C to cancel the data generation. In this case, IMO pgbench\nshould cancel the query, i.e., call PQcancel(). Otherwise, the query will keep\nrunning to the end.\n\n- for (step = initialize_steps; *step != '\\0'; step++)\n+ for (const char *step = initialize_steps; *step != '\\0'; step++)\n\nPer PostgreSQL basic coding style, ISTM that \"const char *step\"\nshould be declared separately from \"for\" loop, like the original.\n\n- fprintf(stderr, \"unrecognized initialization step \\\"%c\\\"\\n\",\n+ fprintf(stderr,\n+ \"unrecognized initialization step \\\"%c\\\"\\n\"\n+ \"Allowed step characters are: \\\"\" ALL_INIT_STEPS \"\\\".\\n\",\n *step);\n- fprintf(stderr, \"allowed steps are: \\\"d\\\", \\\"t\\\", \\\"g\\\", \\\"v\\\",\n\\\"p\\\", \\\"f\\\"\\n\");\n\nThe original message seems better to me. So what about just appending \"G\"\ninto the above latter message? That is,\n\"allowed steps are: \\\"d\\\", \\\"t\\\", \\\"g\\\", \\\"G\\\", \\\"v\\\", \\\"p\\\", \\\"f\\\"\\n\"\n\n- <term><literal>g</literal> (Generate data)</term>\n+ <term><literal>g</literal> or <literal>G</literal>\n(Generate data, client or server side)</term>\n\nIsn't it better to explain a bit more what \"client-side / server-side data\ngeneration\" is? For example, something like\n\n When \"g\" (client-side data generation) is specified, data is generated\n in pgbench client and sent to the server. When \"G\" (server-side data\n generation) is specified, only queries are sent from pgbench client\n and then data is generated in the server. If the network bandwidth is low\n between pgbench and the server, using \"G\" might make the data\n generation faster.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Wed, 30 Oct 2019 19:08:58 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "Hello Masao-san,\n\n> + snprintf(sql, sizeof(sql),\n> + \"insert into pgbench_branches(bid,bbalance) \"\n> + \"select bid, 0 \"\n> + \"from generate_series(1, %d) as bid\", scale);\n>\n> \"scale\" should be \"nbranches * scale\".\n\nYep, even if nbranches is 1, it should be there.\n\n> + snprintf(sql, sizeof(sql),\n> + \"insert into pgbench_accounts(aid,bid,abalance,filler) \"\n> + \"select aid, (aid - 1) / %d + 1, 0, '' \"\n> + \"from generate_series(1, %d) as aid\", naccounts, scale * naccounts);\n>\n> Like client-side data generation, INT64_FORMAT should be used here\n> instead of %d?\n\nIndeed.\n\n> If large scale factor is specified, the query for generating pgbench_accounts\n> data can take a very long time. While that query is running, operators may be\n> likely to do Ctrl-C to cancel the data generation. In this case, IMO pgbench\n> should cancel the query, i.e., call PQcancel(). Otherwise, the query will keep\n> running to the end.\n\nHmmm. Why not. Now the infra to do that seems to already exist twice, \nonce in \"src/bin/psql/common.c\" and once in \"src/bin/scripts/common.c\".\n\nI cannot say I'm thrilled to replicate this once more. I think that the \nreasonable option is to share this in fe-utils and then to reuse it from \nthere. However, ISTM that such a restructuring patch does not belong to \nthis feature.\n\n- for (step = initialize_steps; *step != '\\0'; step++)\n+ for (const char *step = initialize_steps; *step != '\\0'; step++)\n\nPer PostgreSQL basic coding style,\n\nC99 (20 years ago) is now the norm, and this style is now allowed, there \nare over a hundred instances of these already. I tend to use that where\nappropriate.\n\n> - fprintf(stderr, \"unrecognized initialization step \\\"%c\\\"\\n\",\n> + fprintf(stderr,\n> + \"unrecognized initialization step \\\"%c\\\"\\n\"\n> + \"Allowed step characters are: \\\"\" ALL_INIT_STEPS \"\\\".\\n\",\n> *step);\n> - fprintf(stderr, \"allowed steps are: \\\"d\\\", \\\"t\\\", \\\"g\\\", \\\"v\\\",\n> \\\"p\\\", \\\"f\\\"\\n\");\n>\n> The original message seems better to me. So what about just appending \"G\"\n> into the above latter message? That is,\n> \"allowed steps are: \\\"d\\\", \\\"t\\\", \\\"g\\\", \\\"G\\\", \\\"v\\\", \\\"p\\\", \\\"f\\\"\\n\"\n\nI needed this list in several places, so it makes sense to share the \ndefinition, and frankly the list of half a dozen comma-separated chars \ndoes not strike me as much better than just giving the allowed chars \ndirectly. So the simpler the better, from my point of view.\n\n> Isn't it better to explain a bit more what \"client-side / server-side data\n> generation\" is? For example, something like\n\nOk.\n\nAttached v7 does most of the above, except the list-of-chars message and the \nsignal handling. The first one does not look really better to me, and the \nsecond one belongs to a restructuring patch that I'll try to submit.\n\n-- \nFabien.",
"msg_date": "Thu, 31 Oct 2019 15:54:00 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "\nHello Masao-san,\n\n>> If large scale factor is specified, the query for generating \n>> pgbench_accounts data can take a very long time. While that query is \n>> running, operators may be likely to do Ctrl-C to cancel the data \n>> generation. In this case, IMO pgbench should cancel the query, i.e., \n>> call PQcancel(). Otherwise, the query will keep running to the end.\n>\n> Hmmm. Why not. Now the infra to do that seems to already exists twice, once \n> in \"src/bin/psql/common.c\" and once in \"src/bin/scripts/common.c\".\n>\n> I cannot say I'm thrilled to replicate this once more. I think that the \n> reasonable option is to share this in fe-utils and then to reuse it from \n> there. However, ISTM that such a restructuring patch which not belong to this \n> feature. [...]\n\nI just did a patch to share the code:\n\n https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1910311939430.27369@lancre\n https://commitfest.postgresql.org/25/2336/\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 31 Oct 2019 19:46:30 +0100 (CET)",
"msg_from": "Fabien COELHO <fabien.coelho@mines-paristech.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "On Thu, Oct 31, 2019 at 11:54 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Masao-san,\n>\n> > + snprintf(sql, sizeof(sql),\n> > + \"insert into pgbench_branches(bid,bbalance) \"\n> > + \"select bid, 0 \"\n> > + \"from generate_series(1, %d) as bid\", scale);\n> >\n> > \"scale\" should be \"nbranches * scale\".\n>\n> Yep, even if nbranches is 1, it should be there.\n>\n> > + snprintf(sql, sizeof(sql),\n> > + \"insert into pgbench_accounts(aid,bid,abalance,filler) \"\n> > + \"select aid, (aid - 1) / %d + 1, 0, '' \"\n> > + \"from generate_series(1, %d) as aid\", naccounts, scale * naccounts);\n> >\n> > Like client-side data generation, INT64_FORMAT should be used here\n> > instead of %d?\n>\n> Indeed.\n>\n> > If large scale factor is specified, the query for generating pgbench_accounts\n> > data can take a very long time. While that query is running, operators may be\n> > likely to do Ctrl-C to cancel the data generation. In this case, IMO pgbench\n> > should cancel the query, i.e., call PQcancel(). Otherwise, the query will keep\n> > running to the end.\n>\n> Hmmm. Why not. Now the infra to do that seems to already exists twice,\n> once in \"src/bin/psql/common.c\" and once in \"src/bin/scripts/common.c\".\n>\n> I cannot say I'm thrilled to replicate this once more. I think that the\n> reasonable option is to share this in fe-utils and then to reuse it from\n> there. However, ISTM that such a restructuring patch which not belong to\n> this feature.\n\nUnderstood. Ok, let's discuss this in other thread that you started.\n\n> > - for (step = initialize_steps; *step != '\\0'; step++)\n> > + for (const char *step = initialize_steps; *step != '\\0'; step++)\n> >\n> > Per PostgreSQL basic coding style,\n>\n> C99 (20 years ago) is new the norm, and this style is now allowed, there\n> are over a hundred instances of these already.
I tend to use that where\n> appropriate.\n\nYes, I understood there are several places using such style.\nBut I still wonder why we should apply such change here.\nIf there is the reason why this change is necessary here,\nI'm OK with that. But if not, basically I'd like to avoid the change.\nOtherwise it may make the back-patch a bit harder\nwhen we change the surrounding code.\n\n> > - fprintf(stderr, \"unrecognized initialization step \\\"%c\\\"\\n\",\n> > + fprintf(stderr,\n> > + \"unrecognized initialization step \\\"%c\\\"\\n\"\n> > + \"Allowed step characters are: \\\"\" ALL_INIT_STEPS \"\\\".\\n\",\n> > *step);\n> > - fprintf(stderr, \"allowed steps are: \\\"d\\\", \\\"t\\\", \\\"g\\\", \\\"v\\\",\n> > \\\"p\\\", \\\"f\\\"\\n\");\n> >\n> > The original message seems better to me. So what about just appending \"G\"\n> > into the above latter message? That is,\n> > \"allowed steps are: \\\"d\\\", \\\"t\\\", \\\"g\\\", \\\"G\\\", \\\"v\\\", \\\"p\\\", \\\"f\\\"\\n\"\n>\n> I needed this list in several places, so it makes sense to share the\n> definition, and frankly the list of half a dozen comma-separated chars\n> does not strike me as much better than just giving the allowed chars\n> directly. So the simpler the better, from my point of view.\n\nOK.\n\n> > Isn't it better to explain a bit more what \"client-side / server-side data\n> > generation\" is? For example, something like\n>\n> Ok.\n>\n> Attached v7 does most of the above, but the list of char message and the\n> signal handling. The first one does not look really better to me, and the\n> second one belongs to a restructuring patch that I'll try to submit.\n\nThanks for updating the patch!\nAttached is the slightly updated version of the patch. Based on your\npatch, I added the descriptions about logging of \"g\" and \"G\" steps into\nthe doc, and did some cosmetic changes.
Barrying any objections,\nI'm thinking to commit this patch.\n\nWhile reviewing the patch, I found that current code allows space\ncharacter to be specified in -I. That is, checkInitSteps() accepts\nspace character. Why should we do this? Probably I understand\nwhy runInitSteps() needs to accept space character (because \"v\"\nin the specified string with -I is replaced with a space character\nwhen --no-vacuum option is given). But I'm not sure why that's also\nnecessary in checkInitSteps(). Instead, we should treat a space\ncharacter as invalid in checkInitSteps()?\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Wed, 6 Nov 2019 00:56:51 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "\nHello,\n\n>>> - for (step = initialize_steps; *step != '\\0'; step++)\n>>> + for (const char *step = initialize_steps; *step != '\\0'; step++)\n>\n> But I still wonder why we should apply such change here.\n\nBecause it removes one declaration and reduces the scope of one variable?\n\n> If there is the reason why this change is necessary here,\n\nNope, such changes are never necessary.\n\n> I'm OK with that. But if not, basically I'd like to avoid the change.\n> Otherwise it may make the back-patch a bit harder\n> when we change the surrounding code.\n\nI think that this is small enough so that it can be managed, if any back \npatch occurs on the surrounding code, which is anyway pretty unlikely.\n\n> Attached is the slightly updated version of the patch. Based on your\n> patch, I added the descriptions about logging of \"g\" and \"G\" steps into\n> the doc, and did some cosmetic changes. Barrying any objections,\n> I'm thinking to commit this patch.\n\nI'd suggest:\n\n\"to print one message each ...\" -> \"to print one message every ...\"\n\n\"to print no progress ...\" -> \"not to print any progress ...\"\n\nI would not call \"fprintf(stderr\" twice in a row if I can call it once.\n\n> While reviewing the patch, I found that current code allows space\n> character to be specified in -I. That is, checkInitSteps() accepts\n> space character. Why should we do this?\n\n> Probably I understand why runInitSteps() needs to accept space character \n> (because \"v\" in the specified string with -I is replaced with a space \n> character when --no-vacuum option is given).\n\nYes, that is the reason, otherwise the string would have to be shifted.\n\n> But I'm not sure why that's also necessary in checkInitSteps(). Instead, \n> we should treat a space character as invalid in checkInitSteps()?\n\nI think that it may break --no-vacuum, and I thought that there may be \nother option which remove things, eventually.
Also, having a NO-OP looks \nok to me.\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 5 Nov 2019 22:22:51 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "On Wed, Nov 6, 2019 at 6:23 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello,\n>\n> >>> - for (step = initialize_steps; *step != '\\0'; step++)\n> >>> + for (const char *step = initialize_steps; *step != '\\0'; step++)\n> >\n> > But I still wonder why we should apply such change here.\n>\n> Because it removes one declaration and reduces the scope of one variable?\n>\n> > If there is the reason why this change is necessary here,\n>\n> Nope, such changes are never necessary.\n>\n> > I'm OK with that. But if not, basically I'd like to avoid the change.\n> > Otherwise it may make the back-patch a bit harder\n> > when we change the surrounding code.\n>\n> I think that this is small enough so that it can be managed, if any back\n> patch occurs on the surrounding code, which is anyway pretty unlikely.\n>\n> > Attached is the slightly updated version of the patch. Based on your\n> > patch, I added the descriptions about logging of \"g\" and \"G\" steps into\n> > the doc, and did some cosmetic changes. Barrying any objections,\n> > I'm thinking to commit this patch.\n>\n> I'd suggest:\n>\n> \"to print one message each ...\" -> \"to print one message every ...\"\n>\n> \"to print no progress ...\" -> \"not to print any progress ...\"\n>\n> I would not call \"fprintf(stderr\" twice in a row if I can call it once.\n\nThanks for the suggestion!\nI updated the patch in that way and committed it!\n\nThis commit doesn't include the change \"for (const char ...)\"\nand \"merge two fprintf into one\" ones that we were discussing.\nBecause they are trivial but I'm not sure if they are improvements\nor not, yet. If they are, probably it's better to apply such changes\nto all the places having the similar issues. But that seems overkill.\n\n>\n> > While reviewing the patch, I found that current code allows space\n> > character to be specified in -I. That is, checkInitSteps() accepts\n> > space character.
Why should we do this?\n>\n> > Probably I understand why runInitSteps() needs to accept space character\n> > (because \"v\" in the specified string with -I is replaced with a space\n> > character when --no-vacuum option is given).\n>\n> Yes, that is the reason, otherwise the string would have to be shifted.\n>\n> > But I'm not sure why that's also necessary in checkInitSteps(). Instead,\n> > we should treat a space character as invalid in checkInitSteps()?\n>\n> I think that it may break --no-vacuum, and I thought that there may be\n> other option which remove things, eventually. Also, having a NO-OP looks\n> ok to me.\n\nAs far as I read the code, checkInitSteps() checks the initialization\nsteps that users specified. The initialization steps string that\n\"v\" was replaced with blank character is not given to checkInitSteps().\nSo ISTM that dropping the handling of blank character from\ncheckInitSteps() doesn't break --no-vacuum.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Wed, 6 Nov 2019 11:31:03 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "On 2019-11-06 11:31, Fujii Masao wrote:\n> On Wed, Nov 6, 2019 at 6:23 AM Fabien COELHO <coelho@cri.ensmp.fr> \n> wrote:\n>> \n>> \n>> Hello,\n>> \n>> >>> - for (step = initialize_steps; *step != '\\0'; step++)\n>> >>> + for (const char *step = initialize_steps; *step != '\\0'; step++)\n>> >\n>> > But I still wonder why we should apply such change here.\n>> \n>> Because it removes one declaration and reduces the scope of one \n>> variable?\n>> \n>> > If there is the reason why this change is necessary here,\n>> \n>> Nope, such changes are never necessary.\n>> \n>> > I'm OK with that. But if not, basically I'd like to avoid the change.\n>> > Otherwise it may make the back-patch a bit harder\n>> > when we change the surrounding code.\n>> \n>> I think that this is small enough so that it can be managed, if any \n>> back\n>> patch occurs on the surrounding code, which is anyway pretty unlikely.\n>> \n>> > Attached is the slightly updated version of the patch. Based on your\n>> > patch, I added the descriptions about logging of \"g\" and \"G\" steps into\n>> > the doc, and did some cosmetic changes. Barrying any objections,\n>> > I'm thinking to commit this patch.\n>> \n>> I'd suggest:\n>> \n>> \"to print one message each ...\" -> \"to print one message every ...\"\n>> \n>> \"to print no progress ...\" -> \"not to print any progress ...\"\n>> \n>> I would not call \"fprintf(stderr\" twice in a row if I can call it \n>> once.\n> \n> Thanks for the suggestion!\n> I updated the patch in that way and committed it!\n> \n> This commit doesn't include the change \"for (const char ...)\"\n> and \"merge two fprintf into one\" ones that we were discussing.\n> Because they are trivial but I'm not sure if they are improvements\n> or not, yet. If they are, probably it's better to apply such changes\n> to all the places having the similar issues.
But that seems overkill.\n> \n>> \n>> > While reviewing the patch, I found that current code allows space\n>> > character to be specified in -I. That is, checkInitSteps() accepts\n>> > space character. Why should we do this?\n>> \n>> > Probably I understand why runInitSteps() needs to accept space character\n>> > (because \"v\" in the specified string with -I is replaced with a space\n>> > character when --no-vacuum option is given).\n>> \n>> Yes, that is the reason, otherwise the string would have to be \n>> shifted.\n>> \n>> > But I'm not sure why that's also necessary in checkInitSteps(). Instead,\n>> > we should treat a space character as invalid in checkInitSteps()?\n>> \n>> I think that it may break --no-vacuum, and I thought that there may be\n>> other option which remove things, eventually. Also, having a NO-OP \n>> looks\n>> ok to me.\n> \n> As far as I read the code, checkInitSteps() checks the initialization\n> steps that users specified. The initialization steps string that\n> \"v\" was replaced with blank character is not given to checkInitSteps().\n> So ISTM that dropping the handling of blank character from\n> checkInitSteps() doesn't break --no-vacuum.\n> \nThis is a patch which does not allow space character in -I options .\n\nRegard,\nYu Kimura",
"msg_date": "Thu, 07 Nov 2019 17:03:28 +0900",
"msg_from": "btkimurayuzk <btkimurayuzk@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "\n>>> I think that it may break --no-vacuum, and I thought that there may be\n>>> other option which remove things, eventually. Also, having a NO-OP looks\n>>> ok to me.\n>> \n>> As far as I read the code, checkInitSteps() checks the initialization\n>> steps that users specified. The initialization steps string that\n>> \"v\" was replaced with blank character is not given to checkInitSteps().\n>> So ISTM that dropping the handling of blank character from\n>> checkInitSteps() doesn't break --no-vacuum.\n>> \n> This is a patch which does not allow space character in -I options .\n\nI do not think that this is desirable. It would be a regression, and \nallowing a no-op is not an issue in anyway.\n\n-- \nFabien Coelho - CRI, MINES ParisTech\n\n\n",
"msg_date": "Thu, 7 Nov 2019 09:18:06 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "On Thu, Nov 7, 2019 at 5:18 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> >>> I think that it may break --no-vacuum, and I thought that there may be\n> >>> other option which remove things, eventually. Also, having a NO-OP looks\n> >>> ok to me.\n> >>\n> >> As far as I read the code, checkInitSteps() checks the initialization\n> >> steps that users specified. The initialization steps string that\n> >> \"v\" was replaced with blank character is not given to checkInitSteps().\n> >> So ISTM that dropping the handling of blank character from\n> >> checkInitSteps() doesn't break --no-vacuum.\n> >>\n> > This is a patch which does not allow space character in -I options .\n>\n> I do not think that this is desirable. It would be a regression, and\n> allowing a no-op is not an issue in anyway.\n\nWhy is that regression, you think? I think that's an oversight.\nIf I'm missing something and accepting a blank character as no-op in\nalso checkInitSteps() is really necessary for some reasons,\nwhich should be documented. But, if so, another question is;\nwhy should only blank character be treated as no-op, in checkInitSteps()?\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 7 Nov 2019 17:57:09 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "Hello Masao-san,\n\n>> I do not think that this is desirable. It would be a regression, and\n>> allowing a no-op is not an issue in anyway.\n>\n> Why is that regression, you think?\n\nBecause \"pgbench -I ' d'\" currently works and it would cease to work after \nthe patch.\n\n> I think that's an oversight. If I'm missing something and accepting a \n> blank character as no-op in also checkInitSteps() is really necessary \n> for some reasons, which should be documented. But, if so, another \n> question is; why should only blank character be treated as no-op, in \n> checkInitSteps()?\n\nThe idea is to have one character that can be substituted to remove any \noperation.\n\nOn principle, allowing a no-op character, whatever the choice, is a good \nidea, because it means that the caller can take advantage of that if need \nbe.\n\nI think that the actual oversight is that the checkInitSteps should be \ncalled at the beginning of processing initialization steps rather than \nwhile processing -I, because currently other places modify the \ninitialization string (no-vacuum, foreign key) and thus are not checked.\n\nI agree that it should be documented.\n\nAttached patch adds a doc and moves the check where it should be, and \nmodifies a test with an explicit no-op space initialization step.\n\n-- \nFabien.",
"msg_date": "Thu, 7 Nov 2019 10:35:31 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "On Thu, Nov 7, 2019 at 6:35 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Masao-san,\n>\n> >> I do not think that this is desirable. It would be a regression, and\n> >> allowing a no-op is not an issue in anyway.\n> >\n> > Why is that regression, you think?\n>\n> Because \"pgbench -I ' d'\" currently works and it would cease to work after\n> the patch.\n\nIf the behavior has been documented and visible to users,\nI agree that it should not be dropped for compatibility basically.\nBut in this case, that was not.\n\n> > I think that's an oversight. If I'm missing something and accepting a\n> > blank character as no-op in also checkInitSteps() is really necessary\n> > for some reasons, which should be documented. But, if so, another\n> > question is; why should only blank character be treated as no-op, in\n> > checkInitSteps()?\n>\n> The idea is to have one character that can be substituted to remove any\n> operation.\n\nProbably I understand that idea is necessary in the internal of pgbench\nbecause pgbench internally may modify the initialization steps string.\nBut I'm not sure why it needs to be exposed, yet.\n\n> On principle, allowing a no-op character, whatever the choice, is a good\n> idea, because it means that the caller can take advantage of that if need\n> be.\n>\n> I think that the actual oversight is that the checkInitSteps should be\n> called at the beginning of processing initialization steps rather than\n> while processing -I, because currently other places modify the\n> initialization string (no-vacuum, foreign key) and thus are not checked.\n\nAs far as I read the code, runInitSteps() does the check.
If the initialization\nsteps string contains unrecognized character, runInitSteps() emits an error.\n\n * (We could just leave it to runInitSteps() to fail if there are wrong\n * characters, but since initialization can take awhile, it seems friendlier\n * to check during option parsing.)\n\nThe above comment in checkInitSteps() seems to explain why\ncheckInitSteps() is called at the beginning of processing initialization\nsteps.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 7 Nov 2019 19:13:41 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - extend initialization phase control"
},
{
"msg_contents": "\nHello,\n\n>> I think that the actual oversight is that the checkInitSteps should be\n>> called at the beginning of processing initialization steps rather than\n>> while processing -I, because currently other places modify the\n>> initialization string (no-vacuum, foreign key) and thus are not checked.\n>\n> As far as I read the code, runInitSteps() does the check. If the initialization\n> steps string contains unrecognized character, runInitSteps() emits an error.\n\nSure, but the previous step have been executed and committed, the point of \nthe check is to detect the issue before starting the execution.\n\n> * (We could just leave it to runInitSteps() to fail if there are wrong\n> * characters, but since initialization can take awhile, it seems friendlier\n> * to check during option parsing.)\n>\n> The above comment in checkInitSteps() seems to explain why \n> checkInitSteps() is called at the beginning of processing initialization \n> steps.\n\nYep, the comment is right in the motivation, but not accurate anymore wrt \nthe submitted patch. V2 attached updates this comment.\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 7 Nov 2019 12:18:29 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - extend initialization phase control"
}
] |
[
{
"msg_contents": "It seems that the fire-the-triggers code path in\nvalidateForeignKeyConstraint isn't being exercised; at least, that's\nwhat coverage.postgresql.org says right now, and I'm afraid that may\nhave been true for quite some time. The attached regression-test\naddition causes it to be exercised, and guess what: it blows up real\ngood.\n\nThis is a slightly adapted version of the test Hadi proposed in\nhttps://postgr.es/m/CAK=1=WonwcuN_0KiZwQO3SQxse41jZ5hOJRpFCvZ3qa8n9cssw@mail.gmail.com\nSince he didn't mention anything about core dumps or assertion\nfailures, one assumes that it did work as of the version he was\ntesting against.\n\nWhat it looks like to me is that because of this hunk in c2fe139c2:\n\n@@ -8962,7 +8981,8 @@ validateForeignKeyConstraint(char *conname,\n \t\ttrigdata.type = T_TriggerData;\n \t\ttrigdata.tg_event = TRIGGER_EVENT_INSERT | TRIGGER_EVENT_ROW;\n \t\ttrigdata.tg_relation = rel;\n-\t\ttrigdata.tg_trigtuple = tuple;\n+\t\ttrigdata.tg_trigtuple = ExecFetchSlotHeapTuple(slot, true, NULL);\n+\t\ttrigdata.tg_trigslot = slot;\n \t\ttrigdata.tg_newtuple = NULL;\n \t\ttrigdata.tg_trigger = &trig;\n \nvalidateForeignKeyConstraint asks ExecFetchSlotHeapTuple to materialize\nthe tuple, which causes it to no longer be associated with a buffer,\nwhich causes heapam_tuple_satisfies_snapshot to be very unhappy.\n\nI can make the case not crash by s/true/false/ in the above call,\nbut I wonder whether that's an appropriate fix. It seems rather\nfragile that things work like this.\n\nI plan to go ahead and commit Hadi's fix with that change included\n(as below), but I wonder whether anything else needs to be revisited.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 06 Apr 2019 14:07:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "tableam scan-API patch broke foreign key validation"
},
{
"msg_contents": "Hi,\n\nOn April 6, 2019 11:07:55 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>It seems that the fire-the-triggers code path in\n>validateForeignKeyConstraint isn't being exercised; at least, that's\n>what coverage.postgresql.org says right now, and I'm afraid that may\n>have been true for quite some time. The attached regression-test\n>addition causes it to be exercised, and guess what: it blows up real\n>good.\n>\n>This is a slightly adapted version of the test Hadi proposed in\n>https://postgr.es/m/CAK=1=WonwcuN_0KiZwQO3SQxse41jZ5hOJRpFCvZ3qa8n9cssw@mail.gmail.com\n>Since he didn't mention anything about core dumps or assertion\n>failures, one assumes that it did work as of the version he was\n>testing against.\n>\n>What it looks like to me is that because of this hunk in c2fe139c2:\n>\n>@@ -8962,7 +8981,8 @@ validateForeignKeyConstraint(char *conname,\n> \t\ttrigdata.type = T_TriggerData;\n> \t\ttrigdata.tg_event = TRIGGER_EVENT_INSERT | TRIGGER_EVENT_ROW;\n> \t\ttrigdata.tg_relation = rel;\n>-\t\ttrigdata.tg_trigtuple = tuple;\n>+\t\ttrigdata.tg_trigtuple = ExecFetchSlotHeapTuple(slot, true, NULL);\n>+\t\ttrigdata.tg_trigslot = slot;\n> \t\ttrigdata.tg_newtuple = NULL;\n> \t\ttrigdata.tg_trigger = &trig;\n> \n>validateForeignKeyConstraint asks ExecFetchSlotHeapTuple to materialize\n>the tuple, which causes it to no longer be associated with a buffer,\n>which causes heapam_tuple_satisfies_snapshot to be very unhappy.\n>\n>I can make the case not crash by s/true/false/ in the above call,\n>but I wonder whether that's an appropriate fix. It seems rather\n>fragile that things work like this.\n>\n>I plan to go ahead and commit Hadi's fix with that change included\n>(as below), but I wonder whether anything else needs to be revisited.\n\nI posted pretty much that patch nearby, with some other questions. Was waiting for David to respond.... Let me dig that out.\n\nAndres\n-- \nSent from my Android device with K-9 Mail.
Please excuse my brevity.\n\n\n",
"msg_date": "Sat, 06 Apr 2019 11:09:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: tableam scan-API patch broke foreign key validation"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On April 6, 2019 11:07:55 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I plan to go ahead and commit Hadi's fix with that change included\n>> (as below), but I wonder whether anything else needs to be revisited.\n\n> I posted pretty much that patch nearby, with some other questions. Was waiting for David to respond.... Let me dig that out.\n\nAh. Would you rather I wait till you push yours?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Apr 2019 14:13:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: tableam scan-API patch broke foreign key validation"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-06 14:13:29 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On April 6, 2019 11:07:55 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I plan to go ahead and commit Hadi's fix with that change included\n> >> (as below), but I wonder whether anything else needs to be revisited.\n> \n> > I posted pretty much that patch nearby, with some other questions. Was waiting for David to respond.... Let me dig that out.\n\nThe relevant thread is:\nhttps://www.postgresql.org/message-id/20190325180405.jytoehuzkeozggxx%40alap3.anarazel.de\n\n\n> Ah. Would you rather I wait till you push yours?\n\nYours looks good to me, so go ahead. I think we need a bit more than\nthat, but that can easily be committed separately:\n\nWonder if you have an opinion on:\n\n> I've also noticed that we should free the tuple - that doesn't matter\n> for heap, but it sure does for other callers. But uh, is it actually ok\n> to validate an entire table's worth of foreign keys without a memory\n> context reset? I.e. shouldn't we have a memory context that we reset\n> after each iteration?\n> \n> Also, why's there no CHECK_FOR_INTERRUPTS()? heap has some internally on\n> a page level, but that doesn't seem all that granular?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 6 Apr 2019 11:22:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: tableam scan-API patch broke foreign key validation"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> The relevant thread is:\n> https://www.postgresql.org/message-id/20190325180405.jytoehuzkeozggxx%40alap3.anarazel.de\n\nYeah, I just found that --- would have seen it sooner if David had\nnot elected to make it a new thread.\n\n> Wonder if you have an opinion on:\n\n>> I've also noticed that we should free the tuple - that doesn't matter\n>> for heap, but it sure does for other callers.\n\nWhy should this code need to free anything? That'd be the responsibility\nof the slot code, no?\n\n>> But uh, is it actually ok\n>> to validate an entire table's worth of foreign keys without a memory\n>> context reset? I.e. shouldn't we have a memory context that we reset\n>> after each iteration?\n>> Also, why's there no CHECK_FOR_INTERRUPTS()? heap has some internally on\n>> a page level, but that doesn't seem all that granular?\n\nThese are good questions. Just eyeing RI_FKey_check(), I think\nthat it might not have any significant leaks because most of the work\nis done in an SPI context, but obviously that's pretty fragile.\n\nThe memory-context stuff in your WIP patch seems wrong, btw;\nthe second or later iteration of the loop would trash oldcxt.\n\nBut clearly we need a test case here. I'll adjust Hadi's example\nso that there's more than one tuple to check, and push it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Apr 2019 14:34:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: tableam scan-API patch broke foreign key validation"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-06 14:34:34 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > The relevant thread is:\n> > https://www.postgresql.org/message-id/20190325180405.jytoehuzkeozggxx%40alap3.anarazel.de\n> \n> Yeah, I just found that --- would have seen it sooner if David had\n> not elected to make it a new thread.\n> \n> > Wonder if you have an opinion on:\n> \n> >> I've also noticed that we should free the tuple - that doesn't matter\n> >> for heap, but it sure does for other callers.\n> \n> Why should this code need to free anything? That'd be the responsibility\n> of the slot code, no?\n\nWell, not really. If a slot doesn't hold heap tuples internally,\nExecFetchSlotHeapTuple() will return a fresh heap tuple (but signal so\nby setting *should_free = true if not NULL). That's why I was saying it\ndoesn't matter for heap (where the slot just holds a heap tuple\ninternally), but it does matter for other AMs.\n\n\n> >> But uh, is it actually ok\n> >> to validate an entire table's worth of foreign keys without a memory\n> >> context reset? I.e. shouldn't we have a memory context that we reset\n> >> after each iteration?\n> >> Also, why's there no CHECK_FOR_INTERRUPTS()? heap has some internally on\n> >> a page level, but that doesn't seem all that granular?\n> \n> These are good questions. Just eyeing RI_FKey_check(), I think\n> that it might not have any significant leaks because most of the work\n> is done in an SPI context, but obviously that's pretty fragile.\n\nYea. And especially with potentially needing to free the tuple as above,\nusing an explicit context seems more robust to me.\n\n\n> But clearly we need a test case here. I'll adjust Hadi's example\n> so that there's more than one tuple to check, and push it.\n\nCool.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 6 Apr 2019 11:38:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: tableam scan-API patch broke foreign key validation"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-06 14:34:34 -0400, Tom Lane wrote:\n>> Why should this code need to free anything? That'd be the responsibility\n>> of the slot code, no?\n\n> Well, not really. If a slot doesn't hold heap tuples internally,\n> ExecFetchSlotHeapTuple() will return a fresh heap tuple (but signal so\n> by setting *should_free = true if not NULL).\n\nAh, got it: ignoring should_free is indeed a potential issue here.\n\n>> But clearly we need a test case here. I'll adjust Hadi's example\n>> so that there's more than one tuple to check, and push it.\n\n> Cool.\n\nSounds like a plan.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Apr 2019 14:43:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: tableam scan-API patch broke foreign key validation"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-06 14:34:34 -0400, Tom Lane wrote:\n>> These are good questions. Just eyeing RI_FKey_check(), I think\n>> that it might not have any significant leaks because most of the work\n>> is done in an SPI context, but obviously that's pretty fragile.\n\n> Yea. And especially with potentially needing to free the tuple as above,\n> using an explicit context seems more robust to me.\n\nAdjusting the committed test case to process lots of tuples shows that\nindeed there is no leak in HEAD. But I concur that using a local context\nhere would be more future-proof.\n\nBTW, I just stumbled across a different bug in v11 by trying to run\nHEAD's test script on it ... not sure if that's a known problem or not:\n\n(gdb) f 3\n#3 0x000000000063949c in ExecSetupPartitionTupleRouting (\n mtstate=<value optimized out>, rel=0x7f343e4f4170) at execPartition.c:201\n201 Assert(update_rri_index == num_update_rri);\n(gdb) bt\n#0 0x00000037b6232495 in raise (sig=6)\n at ../nptl/sysdeps/unix/sysv/linux/raise.c:64\n#1 0x00000037b6233c75 in abort () at abort.c:92\n#2 0x00000000008a1e6d in ExceptionalCondition (\n conditionName=<value optimized out>, errorType=<value optimized out>, \n fileName=<value optimized out>, lineNumber=<value optimized out>)\n at assert.c:54\n#3 0x000000000063949c in ExecSetupPartitionTupleRouting (\n mtstate=<value optimized out>, rel=0x7f343e4f4170) at execPartition.c:201\n#4 0x00000000006595cb in ExecInitModifyTable (node=0x26a0680, \n estate=0x26a1fa8, eflags=16) at nodeModifyTable.c:2343\n#5 0x000000000063b179 in ExecInitNode (node=0x26a0680, estate=0x26a1fa8, \n eflags=16) at execProcnode.c:174\n#6 0x0000000000635824 in InitPlan (queryDesc=<value optimized out>, eflags=16)\n at execMain.c:1046\n#7 standard_ExecutorStart (queryDesc=<value optimized out>, eflags=16)\n at execMain.c:265\n#8 0x000000000066c332 in _SPI_pquery (plan=0x269fb38, paramLI=0x26b9048, \n snapshot=0x0, 
crosscheck_snapshot=0x0, read_only=false, \n fire_triggers=false, tcount=0) at spi.c:2482\n#9 _SPI_execute_plan (plan=0x269fb38, paramLI=0x26b9048, snapshot=0x0, \n crosscheck_snapshot=0x0, read_only=false, fire_triggers=false, tcount=0)\n at spi.c:2246\n#10 0x000000000066c7b6 in SPI_execute_snapshot (plan=0x269fb38, \n Values=<value optimized out>, Nulls=<value optimized out>, snapshot=0x0, \n crosscheck_snapshot=0x0, read_only=false, fire_triggers=false, tcount=0)\n at spi.c:562\n#11 0x0000000000838842 in ri_PerformCheck (riinfo=0x268d9c0, \n qkey=0x7fff8996f700, qplan=0x269fb38, fk_rel=0x7f343e4f4170, \n pk_rel=0x7f343e4f49d0, old_tuple=0x7fff8996fd40, new_tuple=0x0, \n detectNewRows=true, expect_OK=9) at ri_triggers.c:2606\n#12 0x0000000000839971 in ri_setnull (trigdata=<value optimized out>)\n at ri_triggers.c:1400\n#13 0x000000000060b0a8 in ExecCallTriggerFunc (trigdata=0x7fff8996fce0, \n tgindx=0, finfo=0x26c55e0, instr=0x0, per_tuple_context=0x26aeef0)\n at trigger.c:2412\n#14 0x000000000060b5e5 in AfterTriggerExecute (events=0x260b8d8, firing_id=1, \n estate=0x26c5098, delete_ok=false) at trigger.c:4359\n#15 afterTriggerInvokeEvents (events=0x260b8d8, firing_id=1, estate=0x26c5098, \n delete_ok=false) at trigger.c:4550\n#16 0x000000000060cb82 in AfterTriggerEndQuery (estate=0x26c5098)\n at trigger.c:4860\n#17 0x0000000000636871 in standard_ExecutorFinish (queryDesc=0x2717b88)\n at execMain.c:439\n#18 0x0000000000795bf8 in ProcessQuery (plan=<value optimized out>, \n sourceText=0x258ef98 \"DELETE FROM fk_notpartitioned_pk;\", params=0x0, \n queryEnv=0x0, dest=<value optimized out>, \n completionTag=0x7fff89970030 \"DELETE 2\") at pquery.c:205\n#19 0x0000000000795e35 in PortalRunMulti (portal=0x25f4518, isTopLevel=true, \n setHoldSnapshot=false, dest=0x271bb90, altdest=0x271bb90, \n completionTag=0x7fff89970030 \"DELETE 2\") at pquery.c:1286\n#20 0x0000000000796610 in PortalRun (portal=0x25f4518, \n count=9223372036854775807, isTopLevel=true, run_once=true, 
dest=0x271bb90, \n altdest=0x271bb90, completionTag=0x7fff89970030 \"DELETE 2\") at pquery.c:799\n#21 0x00000000007929dd in exec_simple_query (\n query_string=0x258ef98 \"DELETE FROM fk_notpartitioned_pk;\")\n at postgres.c:1145\n#22 0x0000000000793f34 in PostgresMain (argc=<value optimized out>, \n argv=<value optimized out>, dbname=0x25b8a18 \"regression\", \n username=<value optimized out>) at postgres.c:4182\n\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Apr 2019 15:21:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: tableam scan-API patch broke foreign key validation"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-06 14:43:26 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-04-06 14:34:34 -0400, Tom Lane wrote:\n> >> Why should this code need to free anything? That'd be the responsibility\n> >> of the slot code, no?\n> \n> > Well, not really. If a slot doesn't hold heap tuples internally,\n> > ExecFetchSlotHeapTuple() will return a fresh heap tuple (but signal so\n> > by setting *should_free = true if not NULL).\n> \n> Ah, got it: ignoring should_free is indeed a potential issue here.\n\nI've pushed a revised version of my earlier patch adding a memory\ncontext that's reset after each tuple.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 7 Apr 2019 22:54:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: tableam scan-API patch broke foreign key validation"
},
{
"msg_contents": "On 2019-Apr-06, Tom Lane wrote:\n\n> BTW, I just stumbled across a different bug in v11 by trying to run\n> HEAD's test script on it ... not sure if that's a known problem or not:\n> \n> (gdb) f 3\n> #3 0x000000000063949c in ExecSetupPartitionTupleRouting (\n> mtstate=<value optimized out>, rel=0x7f343e4f4170) at execPartition.c:201\n> 201 Assert(update_rri_index == num_update_rri);\n> (gdb) bt\n> #0 0x00000037b6232495 in raise (sig=6)\n> at ../nptl/sysdeps/unix/sysv/linux/raise.c:64\n\nFor closure: this was re-reported as\nhttps://www.postgresql.org/message-id/20710.1554582479@sss.pgh.pa.us\nand the fix committed as 10e3991fad8a300ed268878ae30c96074628c1e1.\n\nThanks\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 25 Apr 2019 14:20:52 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam scan-API patch broke foreign key validation"
}
] |
[
{
"msg_contents": "This test script works fine in HEAD:\n\ndrop table if exists parttbl cascade;\nCREATE TABLE parttbl (a int, b int) PARTITION BY LIST (a);\nCREATE TABLE parttbl_1 PARTITION OF parttbl FOR VALUES IN (NULL,500,501,502);\nUPDATE parttbl SET a = NULL, b = NULL WHERE a = 1600 AND b = 999;\n\nIn v11, it suffers an assertion failure in ExecSetupPartitionTupleRouting.\n\nIn v10, it doesn't crash, but we do get\n\nWARNING: relcache reference leak: relation \"parttbl\" not closed\n\nwhich is surely a bug as well.\n\n(This is a boiled-down version of the script I mentioned in\nhttps://www.postgresql.org/message-id/13344.1554578481@sss.pgh.pa.us)\n\nThis seems to be related to what Amit Langote complained of in\nhttps://www.postgresql.org/message-id/21e7eaa4-0d4d-20c2-a1f7-c7e96f4ce440@lab.ntt.co.jp\nbut since there's no foreign tables involved at all, either it's\na different bug or he misdiagnosed what he was seeing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Apr 2019 16:27:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Back-branch bugs with fully-prunable UPDATEs"
},
{
"msg_contents": "On Sun, Apr 7, 2019 at 5:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> This test script works fine in HEAD:\n>\n> drop table if exists parttbl cascade;\n> CREATE TABLE parttbl (a int, b int) PARTITION BY LIST (a);\n> CREATE TABLE parttbl_1 PARTITION OF parttbl FOR VALUES IN (NULL,500,501,502);\n> UPDATE parttbl SET a = NULL, b = NULL WHERE a = 1600 AND b = 999;\n>\n> In v11, it suffers an assertion failure in ExecSetupPartitionTupleRouting.\n>\n> In v10, it doesn't crash, but we do get\n>\n> WARNING: relcache reference leak: relation \"parttbl\" not closed\n>\n> which is surely a bug as well.\n>\n> (This is a boiled-down version of the script I mentioned in\n> https://www.postgresql.org/message-id/13344.1554578481@sss.pgh.pa.us)\n\nWhat we did in the following commit is behind this:\n\ncommit 58947fbd56d1481a86a03087c81f728fdf0be866\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Fri Feb 22 12:23:00 2019 -0500\n\n Fix plan created for inherited UPDATE/DELETE with all tables excluded.\n\nBefore this commit, partitioning related code in the executor could\nalways rely on the fact that ModifyTableState.resultRelInfo[] only\ncontains *leaf* partitions. As of this commit, it may contain the\nroot partitioned table in some cases, which breaks that assumption.\n\nI've attached fixes for PG 10 and 11, modifying ExecInitModifyTable()\nand inheritance_planner(), respectively.\n\n> This seems to be related to what Amit Langote complained of in\n> https://www.postgresql.org/message-id/21e7eaa4-0d4d-20c2-a1f7-c7e96f4ce440@lab.ntt.co.jp\n> but since there's no foreign tables involved at all, either it's\n> a different bug or he misdiagnosed what he was seeing.\n\nI think that one is a different bug, but maybe I haven't looked closely enough.\n\nThanks,\nAmit",
"msg_date": "Sun, 7 Apr 2019 16:54:19 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Back-branch bugs with fully-prunable UPDATEs"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Sun, Apr 7, 2019 at 5:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This test script works fine in HEAD:\n>> In v11, it suffers an assertion failure in ExecSetupPartitionTupleRouting.\n>> In v10, it doesn't crash, but we do get\n>> WARNING: relcache reference leak: relation \"parttbl\" not closed\n\n> What we did in the following commit is behind this:\n> commit 58947fbd56d1481a86a03087c81f728fdf0be866\n> Before this commit, partitioning related code in the executor could\n> always rely on the fact that ModifyTableState.resultRelInfo[] only\n> contains *leaf* partitions. As of this commit, it may contain the\n> root partitioned table in some cases, which breaks that assumption.\n\nAh. Thanks for the diagnosis and patches; pushed.\n\nI chose to patch HEAD similarly to v11, even though no bug manifests\nright now; it seems safer that way. We should certainly have the\ntest case in HEAD, now that we realize there wasn't coverage for this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Apr 2019 12:57:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Back-branch bugs with fully-prunable UPDATEs"
},
{
"msg_contents": "(2019/04/07 16:54), Amit Langote wrote:\n> On Sun, Apr 7, 2019 at 5:28 AM Tom Lane<tgl@sss.pgh.pa.us> wrote:\n\n>> This seems to be related to what Amit Langote complained of in\n>> https://www.postgresql.org/message-id/21e7eaa4-0d4d-20c2-a1f7-c7e96f4ce440@lab.ntt.co.jp\n>> but since there's no foreign tables involved at all, either it's\n>> a different bug or he misdiagnosed what he was seeing.\n>\n> I think that one is a different bug, but maybe I haven't looked closely enough.\n\nI started working on that from last Friday (though I didn't work on the \nweekend). I agree on Amit's reasoning stated in that post, and I think \nthat that's my fault. Sorry for the delay.\n\nBest regards,\nEtsuro Fujita\n\n\n\n",
"msg_date": "Mon, 08 Apr 2019 12:02:15 +0900",
"msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Back-branch bugs with fully-prunable UPDATEs"
},
{
"msg_contents": "On 2019/04/08 1:57, Tom Lane wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n>> On Sun, Apr 7, 2019 at 5:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> This test script works fine in HEAD:\n>>> In v11, it suffers an assertion failure in ExecSetupPartitionTupleRouting.\n>>> In v10, it doesn't crash, but we do get\n>>> WARNING: relcache reference leak: relation \"parttbl\" not closed\n> \n>> What we did in the following commit is behind this:\n>> commit 58947fbd56d1481a86a03087c81f728fdf0be866\n>> Before this commit, partitioning related code in the executor could\n>> always rely on the fact that ModifyTableState.resultRelInfo[] only\n>> contains *leaf* partitions. As of this commit, it may contain the\n>> root partitioned table in some cases, which breaks that assumption.\n> \n> Ah. Thanks for the diagnosis and patches; pushed.\n\nThank you.\n\n> I chose to patch HEAD similarly to v11, even though no bug manifests\n> right now; it seems safer that way. We should certainly have the\n> test case in HEAD, now that we realize there wasn't coverage for this.\n\nAgreed, thanks for taking care of that.\n\nRegards,\nAmit\n\n\n\n",
"msg_date": "Mon, 8 Apr 2019 13:37:16 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Back-branch bugs with fully-prunable UPDATEs"
}
] |
[
{
"msg_contents": "Should we change the default of the password_encryption setting to\n'scram-sha-256' in PG12?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 7 Apr 2019 10:01:16 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Should we change the default of the password_encryption setting to\n> 'scram-sha-256' in PG12?\n\nI thought we were going to wait a bit longer --- that just got added\nlast year, no? What do we know about the state of support in client\nlibraries?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Apr 2019 12:59:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On Sun, Apr 07, 2019 at 12:59:05PM -0400, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > Should we change the default of the password_encryption setting to\n> > 'scram-sha-256' in PG12?\n> \n> I thought we were going to wait a bit longer --- that just got added\n> last year, no? What do we know about the state of support in client\n> libraries?\n\nGreat idea! Does it make sense to test all, or at least some\nsignificant fraction of the connectors listed in\nhttps://wiki.postgresql.org/wiki/Client_Libraries by default?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sun, 7 Apr 2019 20:23:06 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On Sun, Apr 07, 2019 at 08:23:06PM +0200, David Fetter wrote:\n> Great idea! Does it make sense to test all, or at least some\n> significant fraction of the connectors listed in\n> https://wiki.postgresql.org/wiki/Client_Libraries by default?\n\nThis is a more interesting list:\nhttps://wiki.postgresql.org/wiki/List_of_drivers\n\nFrom what I can see, the major drivers not using directly libpq\nsupport our SASL protocol: JDBC and npgsql. However I can count three\nof them which still don't support it: Crystal, pq (Go) and asyncpg.\npq and asyncpg are very popular on github, with at least 3000 stars\neach, which is a lot I think. I have also double-checked their source\ncode and I am seeing no trace of SASL or SCRAM, so it seems to me that\nwe may want to wait more before switching the default.\n--\nMichael",
"msg_date": "Mon, 8 Apr 2019 14:28:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> From what I can see, the major drivers not using directly libpq\n> support our SASL protocol: JDBC and npgsql. However I can count three\n> of them which still don't support it: Crystal, pq (Go) and asyncpg.\n> pq and asyncpg are very popular on github, with at least 3000 stars\n> each, which is a lot I think. I have also double-checked their source\n> code and I am seeing no trace of SASL or SCRAM, so it seems to me that\n> we may want to wait more before switching the default.\n\nPerhaps we could reach out to the authors of those libraries,\nand encourage them to provide support in the next year or so?\n\nI don't doubt that switching to scram-sha-256 is a good idea in\nthe long run. The idea here was to give driver authors a reasonable\namount of time to update. I don't really think that one year\ncounts as a \"reasonable amount of time\" given how slowly this\nproject moves overall ... but we don't want to wait forever ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Apr 2019 01:34:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-08 01:34:42 -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > From what I can see, the major drivers not using directly libpq\n> > support our SASL protocol: JDBC and npgsql. However I can count three\n> > of them which still don't support it: Crystal, pq (Go) and asyncpg.\n> > pq and asyncpg are very popular on github, with at least 3000 stars\n> > each, which is a lot I think. I have also double-checked their source\n> > code and I am seeing no trace of SASL or SCRAM, so it seems to me that\n> > we may want to wait more before switching the default.\n> \n> Perhaps we could reach out to the authors of those libraries,\n> and encourage them to provide support in the next year or so?\n\n\nSeems go/pq might get it soon-ish: https://github.com/lib/pq/pull/833\n\nThere doesn't appear to be much movement on the crystal front (\nhttps://github.com/will/crystal-pg/issues/154 ), but I don't think it's\npopular enough to really worry. There's an issue for asyncpg\nhttps://github.com/MagicStack/asyncpg/issues/314 - but not too much\nmovement either.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 7 Apr 2019 22:42:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On 08/04/2019 08:42, Andres Freund wrote:\n> Seems go/pq might get it soon-ish: https://github.com/lib/pq/pull/833\n\nI wouldn't hold my breath. That's the third PR to add SCRAM support \nalready, see also https://github.com/lib/pq/pull/788 and \nhttps://github.com/lib/pq/pull/608. The project seems to lack the \ncommitter manpower or round tuits to review and push these.\n\n- Heikki\n\n\n",
"msg_date": "Mon, 8 Apr 2019 09:08:05 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On Mon, Apr 08, 2019 at 09:08:05AM +0300, Heikki Linnakangas wrote:\n> I wouldn't hold my breath. That's the third PR to add SCRAM support already,\n> see also https://github.com/lib/pq/pull/788 and\n> https://github.com/lib/pq/pull/608. The project seems to lack the committer\n> manpower or round tuits to review and push these.\n\nI am wondering on the contrary if switching the default on Postgres\nside would make things move faster on their side though.\n--\nMichael",
"msg_date": "Mon, 8 Apr 2019 15:38:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "Hi\n\n> I am wondering on the contrary if switching the default on Postgres\n> side would make things move faster on their side though.\n\nI think we need give more time before change default. I suggest not to repeat the quick change of default to a new value as it was in the MySQL 8.0 last year [1].\n\n*1 https://mysqlserverteam.com/upgrading-to-mysql-8-0-default-authentication-plugin-considerations/\n\nregards, Sergei\n\n\n",
"msg_date": "Mon, 08 Apr 2019 10:05:47 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On Mon, Apr 8, 2019 at 2:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Apr 08, 2019 at 09:08:05AM +0300, Heikki Linnakangas wrote:\n> > I wouldn't hold my breath. That's the third PR to add SCRAM support already,\n> > see also https://github.com/lib/pq/pull/788 and\n> > https://github.com/lib/pq/pull/608. The project seems to lack the committer\n> > manpower or round tuits to review and push these.\n>\n> I am wondering on the contrary if switching the default on Postgres\n> side would make things move faster on their side though.\n\n\nYeah, if we're not going to do it now we should announce that we will\ndo it in the next release.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Apr 2019 07:52:53 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On 2019-04-08 13:52, Andrew Dunstan wrote:\n> Yeah, if we're not going to do it now we should announce that we will\n> do it in the next release.\n\nTargeting PG13 seems reasonable.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Apr 2019 14:19:46 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On 4/8/19 8:19 AM, Peter Eisentraut wrote:\n> On 2019-04-08 13:52, Andrew Dunstan wrote:\n>> Yeah, if we're not going to do it now we should announce that we will\n>> do it in the next release.\n> \n> Targeting PG13 seems reasonable.\n\nCounter-argument: SCRAM has been available for 2 years since 10 feature\nfreeze, there has been a lot of time already given to implement support\nfor it. Given is at least 5 months until PG12 comes out, and each of the\npopular drivers already has patches in place, we could default it for 12\nand let them know this is a reality.\n\nGiven it's superior to the existing methods, it'd be better to encourage\nthe drivers to get this in place sooner. Given what I know about md5,\nI've tried to avoid building apps with drivers that don't support SCRAM.\n\nThat said, that would be an aggressive approach, so I would not object\nto changing the default for PG13 and giving 17 months vs. 5, but we do\nlet md5 persist that much longer.\n\nJonathan",
"msg_date": "Mon, 8 Apr 2019 08:37:48 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On Mon, Apr 8, 2019 at 2:38 PM Jonathan S. Katz <jkatz@postgresql.org>\nwrote:\n\n> On 4/8/19 8:19 AM, Peter Eisentraut wrote:\n> > On 2019-04-08 13:52, Andrew Dunstan wrote:\n> >> Yeah, if we're not going to do it now we should announce that we will\n> >> do it in the next release.\n> >\n> > Targeting PG13 seems reasonable.\n>\n\nYeah, that would be fairly consistent with how we usually do htings\n\nCounter-argument: SCRAM has been available for 2 years since 10 feature\n> freeze, there has been a lot of time already given to implement support\n> for it. Given is at least 5 months until PG12 comes out, and each of the\n> popular drivers already has patches in place, we could default it for 12\n> and let them know this is a reality.\n>\n\nYou can't really count feature freeze, you have to count release I think.\nAnd basically we're saying they had 2 years. Which in itself would've been\nperfectly reasonable, *if we told them*. But we didn't.\n\nI think the real question is, is it OK to give them basically 5months\nwarning, by right now saying if you don't have a release out in 6 months,\nthings will break.\n\n\n\nGiven it's superior to the existing methods, it'd be better to encourage\n> the drivers to get this in place sooner. Given what I know about md5,\n> I've tried to avoid building apps with drivers that don't support SCRAM.\n>\n> That said, that would be an aggressive approach, so I would not object\n> to changing the default for PG13 and giving 17 months vs. 5, but we do\n> let md5 persist that much longer.\n>\n\nI think we definitely should not make it *later* than 13.\n\nMaybe we should simply reach out to those driver developers, it's not that\nmany of them after all, and *ask* if they would think it's a problem if we\nchange it in 12.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Mon, 8 Apr 2019 14:49:05 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On 4/8/19 8:49 AM, Magnus Hagander wrote:\n> On Mon, Apr 8, 2019 at 2:38 PM Jonathan S. Katz <jkatz@postgresql.org\n> <mailto:jkatz@postgresql.org>> wrote:\n\n> Counter-argument: SCRAM has been available for 2 years since 10 feature\n> freeze, there has been a lot of time already given to implement support\n> for it. Given is at least 5 months until PG12 comes out, and each of the\n> popular drivers already has patches in place, we could default it for 12\n> and let them know this is a reality.\n> \n> \n> You can't really count feature freeze, you have to count release I\n> think. And basically we're saying they had 2 years. Which in itself\n> would've been perfectly reasonable, *if we told them*. But we didn't.\n> \n> I think the real question is, is it OK to give them basically 5months\n> warning, by right now saying if you don't have a release out in 6\n> months, things will break.\n\nYeah, that's a good and fair question.\n\n> That said, that would be an aggressive approach, so I would not object\n> to changing the default for PG13 and giving 17 months vs. 5, but we do\n> let md5 persist that much longer.\n> \n> \n> I think we definitely should not make it *later* than 13.\n\n+1\n\n> Maybe we should simply reach out to those driver developers, it's not\n> that many of them after all, and *ask* if they would think it's a\n> problem if we change it in 12.\n\nIt wouldn't hurt. I went through the list again[1] to see which ones\ndon't have it and updated:\n\n- pgsql (Erlang) - this webpage doesn't load, maybe we should remove? 
It\nmay have been replaced by this one[2]?\n\n- erlang-pgsql-driver (Erlang) - on the page it says it's unsupported,\nso we should definitely remove it from the wiki and from consideration\n\n- node-postgres (JavaScript) - they added SCRAM in 7.9.0 so I've updated\nthe wiki\n\n- pq (Go) - No; as mentioned there are 3 separate patches in consideration\n\n- crystal-pg (Ruby) No; open issue, not patch\n\n- asyncpg (Python) No; open issue, suggestion on how to implement but no\npatch\n\nLet me also add:\n\n- pgx (Go)[3] - another popular Go driver, there is an open patch for\nSCRAM support\n\nSo IMV it's pq, crystal-pg, asyncpg, & pgx we have to reach out to,\npending resolution on Erlang libs.\n\nGiven the supported libraries all have open pull requests or issues, it\nshould be fairly easy to inquire if they would be able to support it for\nPG12 vs PG13. If this sounds like a reasonable plan, I'm happy to reach\nout and see.\n\nJonathan\n\n[1] https://wiki.postgresql.org/wiki/List_of_drivers\n[2] https://github.com/semiocast/pgsql\n[3] https://github.com/jackc/pgx",
"msg_date": "Mon, 8 Apr 2019 09:12:11 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 4/8/19 8:49 AM, Magnus Hagander wrote:\n>> I think the real question is, is it OK to give them basically 5months\n>> warning, by right now saying if you don't have a release out in 6\n>> months, things will break.\n\n> Given the supported libraries all have open pull requests or issues, it\n> should be fairly easy to inquire if they would be able to support it for\n> PG12 vs PG13. If this sounds like a reasonable plan, I'm happy to reach\n> out and see.\n\nI think that the right course here is to notify these developers that\nwe will change the default in PG13, and it'd be good if they put out\nstable releases with SCRAM support well before that. This discussion\nseems to be talking as though it's okay if we allow zero daylight\nbetween availability of fixed drivers and release of a PG version that\ndefaults to using SCRAM. That'd be totally unfair to packagers and\nusers. There needs to be a pretty fair-size window for those fixed\ndrivers to propagate into the wild. A year is not too much; IMO it's\nbarely enough.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Apr 2019 10:08:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On 4/8/19 10:08 AM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> On 4/8/19 8:49 AM, Magnus Hagander wrote:\n>>> I think the real question is, is it OK to give them basically 5months\n>>> warning, by right now saying if you don't have a release out in 6\n>>> months, things will break.\n> \n>> Given the supported libraries all have open pull requests or issues, it\n>> should be fairly easy to inquire if they would be able to support it for\n>> PG12 vs PG13. If this sounds like a reasonable plan, I'm happy to reach\n>> out and see.\n> \n> I think that the right course here is to notify these developers that\n> we will change the default in PG13, and it'd be good if they put out\n> stable releases with SCRAM support well before that.\n\n+1; I'm happy to reach out with that messaging, referencing this thread.\n\n> This discussion\n> seems to be talking as though it's okay if we allow zero daylight\n> between availability of fixed drivers and release of a PG version that\n> defaults to using SCRAM. That'd be totally unfair to packagers and\n> users. There needs to be a pretty fair-size window for those fixed\n> drivers to propagate into the wild. A year is not too much; IMO it's\n> barely enough.\n\nI agree in principle, esp. related to testing + packaging (and I think\npackaging would be my biggest concern), but IMV this primarily would\naffect new applications, which is why I thought to provide reasoning for\na more aggressive timeline. You typically keep you pg.conf settings\nconsistent between version upgrades (with exceptions, e.g. based on\nupgrade method). 
That could also inadvertently block people from\nupgrading, too, but the bigger risk would be new application development\non PG12.\n\nLooking at the uncovered user base too, it's not the largest portion of\nour users, though accessing PostgreSQL via Go is certainly increasingly\nrapidly so I'm very sympathetic that we don't break their accessibility\n(and I've personally used asyncpg and would not want my apps to break\neither :).\n\nAnyway, I primarily wanted to see if an aggressive timeline to update\nour default password approach would make sense esp. given we've had this\nfeature around for some time, and, again, it is far superior to the\nother password based methods. I'm fine with being cautious, just wanted\nto ensure we're not being too cautious about getting our users to\nutilize a feature with better security.\n\nJonathan",
"msg_date": "Mon, 8 Apr 2019 10:32:59 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "I'm not sure I understand all this talk about deferring changing the\ndefault to pg13. AFAICS only a few fringe drivers are missing support;\nnot changing in pg12 means we're going to leave *all* users, even those\nwhose clients have support, without the additional security for 18 more\nmonths.\n\nIIUC the vast majority of clients already support SCRAM auth. So the\nvast majority of PG users can take advantage of the additional security.\nI think the only massive-adoption exception is JDBC, and apparently they\nalready have working patches for SCRAM.\n\nLike many other configuration parameters, setting the default for this\none is a trade-off: give the most benefit to most users, causing the\nleast possible pain to users for whom the default is not good. Users\nthat require opening connections from clients that have not updated\nshould just set password_encryption to md5. It's not like things will\nsuddenly blow up in their faces.\n\nIMO we don't need to wait until every single client in existence has\nupdated to support SCRAM. After all, they've already had two years.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Apr 2019 13:34:12 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-08 13:34:12 -0400, Alvaro Herrera wrote:\n> I'm not sure I understand all this talk about deferring changing the\n> default to pg13. AFAICS only a few fringe drivers are missing support;\n> not changing in pg12 means we're going to leave *all* users, even those\n> whose clients have support, without the additional security for 18 more\n> months.\n\nImo making such changes after feature freeze is somewhat poor\nform. These arguments would have made a ton more sense at the\n*beginning* of the v12 development cycle, because that'd have given all\nthese driver authors a lot more heads up.\n\n\n> IIUC the vast majority of clients already support SCRAM auth. So the\n> vast majority of PG users can take advantage of the additional security.\n> I think the only massive-adoption exception is JDBC, and apparently they\n> already have working patches for SCRAM.\n\nIf jdbc didn't support scram, it'd be an absolutely clear no-go imo. A\npretty large fraction of users use jdbc to access postgres. But it seems\nto me that support has been merged for a while:\nhttps://github.com/pgjdbc/pgjdbc/pull/1014\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 8 Apr 2019 10:41:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-08 13:34:12 -0400, Alvaro Herrera wrote:\n>> I'm not sure I understand all this talk about deferring changing the\n>> default to pg13. AFAICS only a few fringe drivers are missing support;\n>> not changing in pg12 means we're going to leave *all* users, even those\n>> whose clients have support, without the additional security for 18 more\n>> months.\n\n> Imo making such changes after feature freeze is somewhat poor\n> form.\n\nYeah.\n\n> If jdbc didn't support scram, it'd be an absolutely clear no-go imo. A\n> pretty large fraction of users use jdbc to access postgres. But it seems\n> to me that support has been merged for a while:\n> https://github.com/pgjdbc/pgjdbc/pull/1014\n\n\"Merged to upstream\" is a whole lot different from \"readily available in\nthe field\". What's the actual status in common Linux distros, for\nexample?\n\nThe scenario that worries me here is somebody using a bleeding-edge PGDG\nserver package in an environment where the rest of the Postgres ecosystem\nis much less bleeding-edge. The last time that situation would have\ncaused them can't-connect problems was, um, probably when we introduced\nMD5 password encryption. So they won't be expecting to get blindsided by\nsomething like this.\n\nI'm particularly concerned about the idea that they won't see a problem\nduring initial testing, only to have things fall over after they enter\nproduction and do a \"routine\" password change.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Apr 2019 14:28:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On 4/8/19 2:28 PM, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2019-04-08 13:34:12 -0400, Alvaro Herrera wrote:\n>>> I'm not sure I understand all this talk about deferring changing the\n>>> default to pg13. AFAICS only a few fringe drivers are missing support;\n>>> not changing in pg12 means we're going to leave *all* users, even those\n>>> whose clients have support, without the additional security for 18 more\n>>> months.\n> \n>> Imo making such changes after feature freeze is somewhat poor\n>> form.\n> \n> Yeah.\n\nYeah, that's fair.\n\n> \n>> If jdbc didn't support scram, it'd be an absolutely clear no-go imo. A\n>> pretty large fraction of users use jdbc to access postgres. But it seems\n>> to me that support has been merged for a while:\n>> https://github.com/pgjdbc/pgjdbc/pull/1014\n> \n> \"Merged to upstream\" is a whole lot different from \"readily available in\n> the field\". What's the actual status in common Linux distros, for\n> example?\n\nDid some limited research just to get a sense.\n\nWell, if it's RHEL7, it's PostgreSQL 9.2 so, unless they're using our\nRPM, that definitely does not have it :)\n\n(While researching this, I noticed on the main RHEL8 beta page[1] that\nPostgreSQL is actually featured, which is kind of neat. I could not\nquickly find which version of the JDBC driver it is shipping with, though)\n\nOn Ubuntu, 18.04 LTS ships PG10, but the version of JDBC does not\ninclude SCRAM support. 18.10 ships JDBC w/SCRAM support.\n\nOn Debian, stretch is on 9.4. buster has 11 packaged, and JDBC is\nshipping with SCRAM support.\n\n> The scenario that worries me here is somebody using a bleeding-edge PGDG\n> server package in an environment where the rest of the Postgres ecosystem\n> is much less bleeding-edge. The last time that situation would have\n> caused them can't-connect problems was, um, probably when we introduced\n> MD5 password encryption. 
So they won't be expecting to get blindsided by\n> something like this.\n> \n> I'm particularly concerned about the idea that they won't see a problem\n> during initial testing, only to have things fall over after they enter\n> production and do a \"routine\" password change.\n\nYeah, I think all of the above is fair. It's been awhile since md5 was\nadded :)\n\nSo I think based on that and a quick look at the different distros\nindicate that changing the default to PG12 has too much risk of\nbreakage, but PG13 would be a fair target as long as we start making\nnoise sooner (now?).\n\nJonathan\n\n[1] https://developers.redhat.com/rhel8/",
"msg_date": "Mon, 8 Apr 2019 15:18:42 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On Mon, Apr 08, 2019 at 02:28:30PM -0400, Tom Lane wrote:\n>On Mon, Apr 08, 2019 at 10:41:07AM -0700, Andres Freund wrote:\n>> If jdbc didn't support scram, it'd be an absolutely clear no-go imo. A\n>> pretty large fraction of users use jdbc to access postgres. But it seems\n>> to me that support has been merged for a while:\n>> https://github.com/pgjdbc/pgjdbc/pull/1014\n> \n> \"Merged to upstream\" is a whole lot different from \"readily available in\n> the field\". What's the actual status in common Linux distros, for\n> example?\n\nI found:\n\nhttps://jdbc.postgresql.org/documentation/changelog.html#version_42.2.1\nVersion 42.2.0 (2018-01-17)\nAdded\nSupport SCRAM-SHA-256 for PostgreSQL 10 in the JDBC 4.2 version (Java 8+) using the Ongres SCRAM library. PR 842\n\nI see that's in ubuntu, but not any LTS release:\nhttps://packages.ubuntu.com/search?keywords=libpostgresql-jdbc-java\n\nAnd in Debian testing, but no released version:\nhttps://packages.debian.org/search?keywords=libpostgresql-jdbc-java\n\nFor centos6/7, OS packages would not have scram support:\n\n$ yum list --showdupl postgresql-jdbc\nAvailable Packages\npostgresql-jdbc.noarch 9.2.1002-6.el7_5 base\npostgresql-jdbc.noarch 42.2.5-1.rhel7.1 pgdg11\n\n$ yum list --showdupl postgresql-jdbc\nAvailable Packages\npostgresql-jdbc.noarch 8.4.704-2.el6 base\npostgresql-jdbc.noarch 42.2.5-1.rhel6.1 pgdg11\n\n> The scenario that worries me here is somebody using a bleeding-edge PGDG\n> server package in an environment where the rest of the Postgres ecosystem\n> is much less bleeding-edge.\n\nIf someone installs a postgres RPM/DEB from postgresql.org, they could also\ninstall postgresql-jdbc, right ?\n\nI realize that doesn't mean that people will consistently know to and actually\ndo that.\n\nIf the default were changed, possibly the PGDG package could define something\nlike (I haven't done packaging in a long time):\nConflicts: postgresql-jdbc<42.2.0\n\nOn Mon, Apr 08, 2019 at 03:18:42PM 
-0400, Jonathan S. Katz wrote:\n> Well, if it's RHEL7, it's PostgreSQL 9.2 so, unless they're using our\n> RPM, that definitely does not have it :)\n\n\n",
"msg_date": "Mon, 8 Apr 2019 14:49:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "Alvaro,\n\nOn Mon, 8 Apr 2019 at 13:34, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> I'm not sure I understand all this talk about deferring changing the\n> default to pg13. AFAICS only a few fringe drivers are missing support;\n> not changing in pg12 means we're going to leave *all* users, even those\n> whose clients have support, without the additional security for 18 more\n> months.\n>\n> IIUC the vast majority of clients already support SCRAM auth. So the\n> vast majority of PG users can take advantage of the additional security.\n> I think the only massive-adoption exception is JDBC, and apparently they\n> already have working patches for SCRAM.\n>\n\n\nWe have more than patches this is already in the driver.\n\nWhat do you mean by \"massive-adoption exception\"\n\nDave Cramer\n\ndavec@postgresintl.com\nwww.postgresintl.com\n\n\n\n>\n>\n\nAlvaro,On Mon, 8 Apr 2019 at 13:34, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:I'm not sure I understand all this talk about deferring changing the\ndefault to pg13. AFAICS only a few fringe drivers are missing support;\nnot changing in pg12 means we're going to leave *all* users, even those\nwhose clients have support, without the additional security for 18 more\nmonths.\n\nIIUC the vast majority of clients already support SCRAM auth. So the\nvast majority of PG users can take advantage of the additional security.\nI think the only massive-adoption exception is JDBC, and apparently they\nalready have working patches for SCRAM.We have more than patches this is already in the driver.What do you mean by \"massive-adoption exception\"Dave Cramerdavec@postgresintl.comwww.postgresintl.com",
"msg_date": "Mon, 8 Apr 2019 15:56:02 -0400",
"msg_from": "Dave Cramer <pg@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On Mon, 8 Apr 2019 at 15:18, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n\n> On 4/8/19 2:28 PM, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >> On 2019-04-08 13:34:12 -0400, Alvaro Herrera wrote:\n> >>> I'm not sure I understand all this talk about deferring changing the\n> >>> default to pg13. AFAICS only a few fringe drivers are missing support;\n> >>> not changing in pg12 means we're going to leave *all* users, even those\n> >>> whose clients have support, without the additional security for 18 more\n> >>> months.\n> >\n> >> Imo making such changes after feature freeze is somewhat poor\n> >> form.\n> >\n> > Yeah.\n>\n> Yeah, that's fair.\n>\n> >\n> >> If jdbc didn't support scram, it'd be an absolutely clear no-go imo. A\n> >> pretty large fraction of users use jdbc to access postgres. But it seems\n> >> to me that support has been merged for a while:\n> >> https://github.com/pgjdbc/pgjdbc/pull/1014\n> >\n> > \"Merged to upstream\" is a whole lot different from \"readily available in\n> > the field\". What's the actual status in common Linux distros, for\n> > example?\n>\n> Did some limited research just to get a sense.\n>\n> Well, if it's RHEL7, it's PostgreSQL 9.2 so, unless they're using our\n> RPM, that definitely does not have it :)\n>\n> (While researching this, I noticed on the main RHEL8 beta page[1] that\n> PostgreSQL is actually featured, which is kind of neat. I could not\n> quickly find which version of the JDBC driver it is shipping with, though)\n>\n> On Ubuntu, 18.04 LTS ships PG10, but the version of JDBC does not\n> include SCRAM support. 18.10 ships JDBC w/SCRAM support.\n>\n> On Debian, stretch is on 9.4. buster has 11 packaged, and JDBC is\n> shipping with SCRAM support.\n>\n>\n\nHonestly what JDBC driver XYZ distro ships with is a red herring. 
Any\nreasonably complex java program is going to use maven and pull it's\ndependencies.\n\nThat said from a driver developer, I support pushing this decision off to\nPG13\n\nDave Cramer\n\ndavec@postgresintl.com\nwww.postgresintl.com\n\n\n>\n>\n\nOn Mon, 8 Apr 2019 at 15:18, Jonathan S. Katz <jkatz@postgresql.org> wrote:On 4/8/19 2:28 PM, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2019-04-08 13:34:12 -0400, Alvaro Herrera wrote:\n>>> I'm not sure I understand all this talk about deferring changing the\n>>> default to pg13. AFAICS only a few fringe drivers are missing support;\n>>> not changing in pg12 means we're going to leave *all* users, even those\n>>> whose clients have support, without the additional security for 18 more\n>>> months.\n> \n>> Imo making such changes after feature freeze is somewhat poor\n>> form.\n> \n> Yeah.\n\nYeah, that's fair.\n\n> \n>> If jdbc didn't support scram, it'd be an absolutely clear no-go imo. A\n>> pretty large fraction of users use jdbc to access postgres. But it seems\n>> to me that support has been merged for a while:\n>> https://github.com/pgjdbc/pgjdbc/pull/1014\n> \n> \"Merged to upstream\" is a whole lot different from \"readily available in\n> the field\". What's the actual status in common Linux distros, for\n> example?\n\nDid some limited research just to get a sense.\n\nWell, if it's RHEL7, it's PostgreSQL 9.2 so, unless they're using our\nRPM, that definitely does not have it :)\n\n(While researching this, I noticed on the main RHEL8 beta page[1] that\nPostgreSQL is actually featured, which is kind of neat. I could not\nquickly find which version of the JDBC driver it is shipping with, though)\n\nOn Ubuntu, 18.04 LTS ships PG10, but the version of JDBC does not\ninclude SCRAM support. 18.10 ships JDBC w/SCRAM support.\n\nOn Debian, stretch is on 9.4. buster has 11 packaged, and JDBC is\nshipping with SCRAM support.\nHonestly what JDBC driver XYZ distro ships with is a red herring. 
Any reasonably complex java program is going to use maven and pull it's dependencies.That said from a driver developer, I support pushing this decision off to PG13Dave Cramerdavec@postgresintl.comwww.postgresintl.com",
"msg_date": "Mon, 8 Apr 2019 16:03:35 -0400",
"msg_from": "Dave Cramer <pg@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On 2019-Apr-08, Dave Cramer wrote:\n\n> > IIUC the vast majority of clients already support SCRAM auth. So the\n> > vast majority of PG users can take advantage of the additional security.\n> > I think the only massive-adoption exception is JDBC, and apparently they\n> > already have working patches for SCRAM.\n> \n> We have more than patches this is already in the driver.\n> \n> What do you mean by \"massive-adoption exception\"\n\nI meant an exception to the common situation that SCRAM-SHA-256 is\nsupported and shipped in stable releases of each driver. The wiki here\nstill says it's unsupported on JDBC:\nhttps://wiki.postgresql.org/wiki/List_of_drivers\nFor once I'm happy to learn that the wiki is outdated :-)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Apr 2019 16:07:35 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On Mon, 8 Apr 2019 at 16:07, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-Apr-08, Dave Cramer wrote:\n>\n> > > IIUC the vast majority of clients already support SCRAM auth. So the\n> > > vast majority of PG users can take advantage of the additional\n> security.\n> > > I think the only massive-adoption exception is JDBC, and apparently\n> they\n> > > already have working patches for SCRAM.\n> >\n> > We have more than patches this is already in the driver.\n> >\n> > What do you mean by \"massive-adoption exception\"\n>\n> I meant an exception to the common situation that SCRAM-SHA-256 is\n> supported and shipped in stable releases of each driver. The wiki here\n> still says it's unsupported on JDBC:\n> https://wiki.postgresql.org/wiki/List_of_drivers\n> For once I'm happy to learn that the wiki is outdated :-)\n>\n\n\nWay too many places to update :)\n\n\nDave Cramer\n\ndavec@postgresintl.com\nwww.postgresintl.com\n\n\n>\n>\n\nOn Mon, 8 Apr 2019 at 16:07, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:On 2019-Apr-08, Dave Cramer wrote:\n\n> > IIUC the vast majority of clients already support SCRAM auth. So the\n> > vast majority of PG users can take advantage of the additional security.\n> > I think the only massive-adoption exception is JDBC, and apparently they\n> > already have working patches for SCRAM.\n> \n> We have more than patches this is already in the driver.\n> \n> What do you mean by \"massive-adoption exception\"\n\nI meant an exception to the common situation that SCRAM-SHA-256 is\nsupported and shipped in stable releases of each driver. The wiki here\nstill says it's unsupported on JDBC:\nhttps://wiki.postgresql.org/wiki/List_of_drivers\nFor once I'm happy to learn that the wiki is outdated :-)Way too many places to update :)Dave Cramerdavec@postgresintl.comwww.postgresintl.com",
"msg_date": "Mon, 8 Apr 2019 16:08:35 -0400",
"msg_from": "Dave Cramer <pg@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On 2019-Apr-08, Dave Cramer wrote:\n\n> On Mon, 8 Apr 2019 at 16:07, Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n\n> > I meant an exception to the common situation that SCRAM-SHA-256 is\n> > supported and shipped in stable releases of each driver. The wiki here\n> > still says it's unsupported on JDBC:\n> > https://wiki.postgresql.org/wiki/List_of_drivers\n> > For once I'm happy to learn that the wiki is outdated :-)\n> \n> Way too many places to update :)\n\nYeah. Actually, it's up to date (it says \"yes from 42.2\")... I just\nmisread it.\n\nI wonder why we have two pages\nhttps://wiki.postgresql.org/wiki/Client_Libraries\nhttps://wiki.postgresql.org/wiki/List_of_drivers\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Apr 2019 16:10:58 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On 4/8/19 4:10 PM, Alvaro Herrera wrote:\n> On 2019-Apr-08, Dave Cramer wrote:\n> \n>> On Mon, 8 Apr 2019 at 16:07, Alvaro Herrera <alvherre@2ndquadrant.com>\n>> wrote:\n> \n>>> I meant an exception to the common situation that SCRAM-SHA-256 is\n>>> supported and shipped in stable releases of each driver. The wiki here\n>>> still says it's unsupported on JDBC:\n>>> https://wiki.postgresql.org/wiki/List_of_drivers\n>>> For once I'm happy to learn that the wiki is outdated :-)\n>>\n>> Way too many places to update :)\n> \n> Yeah. Actually, it's up to date (it says \"yes from 42.2\")... I just\n> misread it.\n> \n> I wonder why we have two pages\n> https://wiki.postgresql.org/wiki/Client_Libraries\n> https://wiki.postgresql.org/wiki/List_of_drivers\n\nNo clue, but it appears that first one is the newer of the two[1][2]\n\nI'd be happy to consolidate them and provide a forwarding reference from\nClient Libraries to List of Drivers, given I think we reference \"List of\nDrivers\" in other places.\n\nJonathan\n\n[1]\nhttps://wiki.postgresql.org/index.php?title=Client_Libraries&action=history\n[2]\nhttps://wiki.postgresql.org/index.php?title=List_of_drivers&action=history",
"msg_date": "Mon, 8 Apr 2019 16:13:38 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Mon, Apr 08, 2019 at 02:28:30PM -0400, Tom Lane wrote:\n>> The scenario that worries me here is somebody using a bleeding-edge PGDG\n>> server package in an environment where the rest of the Postgres ecosystem\n>> is much less bleeding-edge.\n\n> If someone installs a postgres RPM/DEB from postgresql.org, they could also\n> install postgresql-jdbc, right ?\n\nThe client software is very possibly not on the same machine as the server,\nand may indeed not be under the server admin's control. That sort of\ncomplex interdependency is why we need to move slowly on changes that\nrequire client updates.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Apr 2019 16:18:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On 2019-Apr-08, Jonathan S. Katz wrote:\n\n> On 4/8/19 4:10 PM, Alvaro Herrera wrote:\n\n> > I wonder why we have two pages\n> > https://wiki.postgresql.org/wiki/Client_Libraries\n> > https://wiki.postgresql.org/wiki/List_of_drivers\n> \n> No clue, but it appears that first one is the newer of the two[1][2]\n> \n> I'd be happy to consolidate them and provide a forwarding reference from\n> Client Libraries to List of Drivers, given I think we reference \"List of\n> Drivers\" in other places.\n\nThere are two links to List of drivers, and one of them is in Client\nLibraries :-)\nhttps://wiki.postgresql.org/wiki/Special:WhatLinksHere/Client_Libraries\n\n+1 for consolidation and setting up a redirect.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Apr 2019 16:20:26 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On 2019-Apr-08, Tom Lane wrote:\n\n> I'm particularly concerned about the idea that they won't see a problem\n> during initial testing, only to have things fall over after they enter\n> production and do a \"routine\" password change.\n\nThis is a fair objection.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Apr 2019 16:22:47 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": ">\n>\n>\n> > The scenario that worries me here is somebody using a bleeding-edge PGDG\n> > server package in an environment where the rest of the Postgres ecosystem\n> > is much less bleeding-edge.\n>\n> If someone installs a postgres RPM/DEB from postgresql.org, they could\n> also\n> install postgresql-jdbc, right ?\n>\n>\nNo, this is not how the majority of people use Java at all. They would use\nMaven to pull down the JDBC driver of choice.\n\nI would guess there might be some distro specific java apps that might\nactually use what is on the machine but as mentioned any reasonably complex\nJava app is going to ensure it has the correct versions for their app using\nMaven.\n\nDave Cramer\n\ndavec@postgresintl.com\nwww.postgresintl.com\n\n\n>\n\n\n> The scenario that worries me here is somebody using a bleeding-edge PGDG\n> server package in an environment where the rest of the Postgres ecosystem\n> is much less bleeding-edge.\n\nIf someone installs a postgres RPM/DEB from postgresql.org, they could also\ninstall postgresql-jdbc, right ?\nNo, this is not how the majority of people use Java at all. They would use Maven to pull down the JDBC driver of choice.I would guess there might be some distro specific java apps that might actually use what is on the machine but as mentioned any reasonably complex Java app is going to ensure it has the correct versions for their app using Maven.Dave Cramerdavec@postgresintl.comwww.postgresintl.com",
"msg_date": "Mon, 8 Apr 2019 16:30:54 -0400",
"msg_from": "Dave Cramer <pg@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "Dave Cramer <pg@fastcrypt.com> writes:\n>> If someone installs a postgres RPM/DEB from postgresql.org, they could\n>> also install postgresql-jdbc, right ?\n\n> I would guess there might be some distro specific java apps that might\n> actually use what is on the machine but as mentioned any reasonably complex\n> Java app is going to ensure it has the correct versions for their app using\n> Maven.\n\nI'm not really sure if that makes things better or worse. If some app\nthinks that it needs version N of the driver, but SCRAM support was\nadded in version N-plus-something, how tough is it going to be to get\nit updated? And are you going to have to go through that dance for\neach app separately?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Apr 2019 16:38:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On Mon, 8 Apr 2019 at 16:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Dave Cramer <pg@fastcrypt.com> writes:\n> >> If someone installs a postgres RPM/DEB from postgresql.org, they could\n> >> also install postgresql-jdbc, right ?\n>\n> > I would guess there might be some distro specific java apps that might\n> > actually use what is on the machine but as mentioned any reasonably\n> complex\n> > Java app is going to ensure it has the correct versions for their app\n> using\n> > Maven.\n>\n> I'm not really sure if that makes things better or worse. If some app\n> thinks that it needs version N of the driver, but SCRAM support was\n> added in version N-plus-something, how tough is it going to be to get\n> it updated? And are you going to have to go through that dance for\n> each app separately?\n>\n>\n\nI see the problem you are contemplating, but even installing a newer\nversion of the driver has it's perils (we have been known to break some\nexpectations in the name of the spec).\nSo I could see a situation where there is a legacy app that wants to use\nSCRAM. They update the JDBC jar on the system and due to the \"new and\nimproved\" version their app breaks.\nHonestly I don't have a solution to this.\n\nThat said 42.2.0 was released in January 2018, so by PG13 it's going to be\n4 years old.\n\nDave\n\nOn Mon, 8 Apr 2019 at 16:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:Dave Cramer <pg@fastcrypt.com> writes:\n>> If someone installs a postgres RPM/DEB from postgresql.org, they could\n>> also install postgresql-jdbc, right ?\n\n> I would guess there might be some distro specific java apps that might\n> actually use what is on the machine but as mentioned any reasonably complex\n> Java app is going to ensure it has the correct versions for their app using\n> Maven.\n\nI'm not really sure if that makes things better or worse. 
If some app\nthinks that it needs version N of the driver, but SCRAM support was\nadded in version N-plus-something, how tough is it going to be to get\nit updated? And are you going to have to go through that dance for\neach app separately?\nI see the problem you are contemplating, but even installing a newer version of the driver has it's perils (we have been known to break some expectations in the name of the spec). So I could see a situation where there is a legacy app that wants to use SCRAM. They update the JDBC jar on the system and due to the \"new and improved\" version their app breaks. Honestly I don't have a solution to this.That said 42.2.0 was released in January 2018, so by PG13 it's going to be 4 years old. Dave",
"msg_date": "Mon, 8 Apr 2019 16:54:33 -0400",
"msg_from": "Dave Cramer <pg@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "Dave Cramer <pg@fastcrypt.com> writes:\n> That said 42.2.0 was released in January 2018, so by PG13 it's going to be\n> 4 years old.\n\nHuh? 13 should come out in the fall of 2020.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Apr 2019 17:06:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On 4/8/19 4:20 PM, Alvaro Herrera wrote:\n> On 2019-Apr-08, Jonathan S. Katz wrote:\n> \n>> On 4/8/19 4:10 PM, Alvaro Herrera wrote:\n> \n>>> I wonder why we have two pages\n>>> https://wiki.postgresql.org/wiki/Client_Libraries\n>>> https://wiki.postgresql.org/wiki/List_of_drivers\n>>\n>> No clue, but it appears that first one is the newer of the two[1][2]\n>>\n>> I'd be happy to consolidate them and provide a forwarding reference from\n>> Client Libraries to List of Drivers, given I think we reference \"List of\n>> Drivers\" in other places.\n> \n> There are two links to List of drivers, and one of them is in Client\n> Libraries :-)\n> https://wiki.postgresql.org/wiki/Special:WhatLinksHere/Client_Libraries\n> \n> +1 for consolidation and setting up a redirect.\n\nOK, so trying to not be too off topic, I did update the original page as so:\n\nhttps://wiki.postgresql.org/wiki/List_of_drivers\n\nWhen determining what to add, I tried to keep it one-abstraction level\ndeep, i.e., a driver is implemented on top of libpq, implemented the PG\nprotocol on its own, or did some driver-like extensions on top of the\nbase language driver. I steered clear of ORMs or other abstraction\nlayers unless they met the above criteria.\n\n(There are a lot of handy ORM-ish abstraction layers as well, but I\ndon't want to go down that path on that page, at least not today).\n\nI also added a deprecation warning on top of the \"Client Libraries\"\npage. If we're feeling satisfied with the consolidation, I'll wipe the\ncontent and indicate where the maintained content is and end the\nsplit-brain situation.\n\n(One thing that I will say is this is one of those sections that may be\nworth moving to pgweb, to give it some semi-permanence. Separate\ndiscussion.)\n\nThe good news: while going through the added drivers, most of the\nnon-libpq ones I've added do support SCRAM :)\n\nThat said, I am still in favor of the PG13 plan, and without objection I\nwould like to reach out to the driver authors in the \"no\" category,\nreference this thread, and that this is at least discussed, if not\ndecided upon, and they should consider adding support for SCRAM to\nallow adequate testing time as well as time for their drivers to make it\ninto appropriate packaging systems.\n\nJonathan",
"msg_date": "Mon, 8 Apr 2019 18:10:57 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "> On Sun, Apr 07, 2019 at 12:59:05PM -0400, Tom Lane wrote:\n>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> > Should we change the default of the password_encryption setting to\n>> > 'scram-sha-256' in PG12?\n>> \n>> I thought we were going to wait a bit longer --- that just got added\n>> last year, no? What do we know about the state of support in client\n>> libraries?\n> \n> Great idea! Does it make sense to test all, or at least some\n> significant fraction of the connectors listed in\n> https://wiki.postgresql.org/wiki/Client_Libraries by default?\n\nI am not sure all third party programs concerning scram-sha-256 are\nlisted on this. There are some programs that talk to PostgreSQL using\nfrontend/backend protocol, but not based on libpq or other native\ndrivers (for example Pgpool-II). I guess PgBouncer is in the same\ncategory too.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Tue, 09 Apr 2019 07:42:40 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On Mon, 8 Apr 2019 at 19:43, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>\n> I am not sure all third party programs concerning scram-sha-256 are\n> listed on this. There are some programs that talk to PostgreSQL using\n> frontend/backend protocol, but not based on libpq or other native\n> drivers (for example Pgpool-II). I guess PgBouncer is in the same\n> category too.\n>\n... and pgbouncer doesn't support scram-sha-256 authentication method.\nThere is a bit-rot PR but the discussion died a while ago. It is\nwidely used and it would be really sad to turn on SCRAM on v13 without\npgbouncer SCRAM support.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n",
"msg_date": "Mon, 8 Apr 2019 20:44:41 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": ">> I am not sure all third party programs concerning scram-sha-256 are\n>> listed on this. There are some programs that talk to PostgreSQL using\n>> frontend/backend protocol, but not based on libpq or other native\n>> drivers (for example Pgpool-II). I guess PgBouncer is in the same\n>> category too.\n>>\n> ... and pgbouncer doesn't support scram-sha-256 authentication method.\n> There is a bit-rot PR but the discussion died a while ago. It is\n> widely used and it would be really sad to turn on SCRAM on v13 without\n> pgbouncer SCRAM support.\n\nI don't know how hard it would be for pgbouncer to support scram-sha-256,\nbut it was pretty hard for Pgpool-II to support scram-sha-256. In case\nof Pgpool-II (it starts to support it since 4.0), it needed to keep\nclients' password lists.\n\nhttp://www.pgpool.net/docs/latest/en/html/auth-methods.html#AUTH-SCRAM\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Tue, 09 Apr 2019 09:35:10 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On Mon, Apr 8, 2019 at 10:08:07AM -0400, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> > On 4/8/19 8:49 AM, Magnus Hagander wrote:\n> >> I think the real question is, is it OK to give them basically 5months\n> >> warning, by right now saying if you don't have a release out in 6\n> >> months, things will break.\n> \n> > Given the supported libraries all have open pull requests or issues, it\n> > should be fairly easy to inquire if they would be able to support it for\n> > PG12 vs PG13. If this sounds like a reasonable plan, I'm happy to reach\n> > out and see.\n> \n> I think that the right course here is to notify these developers that\n> we will change the default in PG13, and it'd be good if they put out\n> stable releases with SCRAM support well before that. This discussion\n> seems to be talking as though it's okay if we allow zero daylight\n> between availability of fixed drivers and release of a PG version that\n> defaults to using SCRAM. That'd be totally unfair to packagers and\n> users. There needs to be a pretty fair-size window for those fixed\n> drivers to propagate into the wild. A year is not too much; IMO it's\n> barely enough.\n\nIt would be nice to address channel binding as part of this.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 12 Apr 2019 19:26:10 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
},
{
"msg_contents": "On 4/8/19 6:10 PM, Jonathan S. Katz wrote:\n> On 4/8/19 4:20 PM, Alvaro Herrera wrote:\n>> On 2019-Apr-08, Jonathan S. Katz wrote:\n>>\n>>> On 4/8/19 4:10 PM, Alvaro Herrera wrote:\n>>\n>>>> I wonder why we have two pages\n>>>> https://wiki.postgresql.org/wiki/Client_Libraries\n>>>> https://wiki.postgresql.org/wiki/List_of_drivers\n>>>\n>>> No clue, but it appears that first one is the newer of the two[1][2]\n>>>\n>>> I'd be happy to consolidate them and provide a forwarding reference from\n>>> Client Libraries to List of Drivers, given I think we reference \"List of\n>>> Drivers\" in other places.\n>>\n>> There are two links to List of drivers, and one of them is in Client\n>> Libraries :-)\n>> https://wiki.postgresql.org/wiki/Special:WhatLinksHere/Client_Libraries\n>>\n>> +1 for consolidation and setting up a redirect.\n> \n> OK, so trying to not be too off topic, I did update the original page as so:\n> \n> https://wiki.postgresql.org/wiki/List_of_drivers\n> \n> That said, I am still in favor of the PG13 plan, and without objection I\n> would like to reach out to the driver authors in the \"no\" category,\n> reference this thread, and that this is at least discussed, if not\n> decided upon, and they should considering adding support for SCRAM to\n> allow adequate testing time as well as time for their drivers to make it\n> into appropriate packaging systems.\n\nOK so a small update, going through the list[1]:\n\n- The golang drivers all now support SCRAM\n- I've reached out to the remaining two driver projects on the list to\nmake them aware of this thread and the timeline discussion, and to offer\nany help where needed in adding SCRAM support.\n\nJonathan\n\n[1]https://wiki.postgresql.org/wiki/List_of_drivers",
"msg_date": "Mon, 22 Apr 2019 21:47:02 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: change password_encryption default to scram-sha-256?"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile working on an unrelated documentation change, I noticed some\ntrailing whitespaces in various documentation files. PFA a simple\npatch to get rid of them (I didn't remove the one corresponding to\npsql output), if that helps.",
"msg_date": "Sun, 7 Apr 2019 18:51:10 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Trailing whitespaces in various documentations"
},
{
"msg_contents": "On 2019-04-07 18:51, Julien Rouhaud wrote:\n> While working on unrelated documentation change, I noticed some\n> trailing whitespaces in various documentation files. PFA a simple\n> patch to get rid of them (I didn't removed the one corresponding to\n> psql output), if that helps.\n\ncommitted, thanks\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Apr 2019 22:42:59 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Trailing whitespaces in various documentations"
},
{
"msg_contents": "On Mon, 8 Apr 2019 at 22:43, Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-04-07 18:51, Julien Rouhaud wrote:\n> > While working on unrelated documentation change, I noticed some\n> > trailing whitespaces in various documentation files. PFA a simple\n> > patch to get rid of them (I didn't removed the one corresponding to\n> > psql output), if that helps.\n>\n> committed, thanks\n\n\nThanks!",
"msg_date": "Tue, 9 Apr 2019 00:15:03 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Trailing whitespaces in various documentations"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile fixing the report at https://postgr.es/m/19321.1554567786@sss.pgh.pa.us\nI noticed that our behaviour for deleting (or updating albeit less\ndrastically) a row previously modified in the same query isn't\nparticularly useful:\n\nDROP TABLE IF EXISTS blarg;\nCREATE TABLE blarg(data text, count int);\nINSERT INTO blarg VALUES('row', '1');\nWITH upd AS (UPDATE blarg SET count = count + 1 RETURNING *)\nDELETE FROM blarg USING upd RETURNING *;\nSELECT * FROM blarg;\n┌──────┬───────┐\n│ data │ count │\n├──────┼───────┤\n│ row │ 2 │\n└──────┴───────┘\n(1 row)\n\nI.e. the delete is plainly ignored. That's because it falls under:\n\n\t\t\t\t/*\n\t\t\t\t * The target tuple was already updated or deleted by the\n\t\t\t\t * current command, or by a later command in the current\n\t\t\t\t * transaction. The former case is possible in a join DELETE\n\t\t\t\t * where multiple tuples join to the same target tuple. This\n\t\t\t\t * is somewhat questionable, but Postgres has always allowed\n\t\t\t\t * it: we just ignore additional deletion attempts.\n\t\t\t\t *\n\t\t\t\t * The latter case arises if the tuple is modified by a\n\t\t\t\t * command in a BEFORE trigger, or perhaps by a command in a\n\t\t\t\t * volatile function used in the query. In such situations we\n\t\t\t\t * should not ignore the deletion, but it is equally unsafe to\n\t\t\t\t * proceed. We don't want to discard the original DELETE\n\t\t\t\t * while keeping the triggered actions based on its deletion;\n\t\t\t\t * and it would be no better to allow the original DELETE\n\t\t\t\t * while discarding updates that it triggered. The row update\n\t\t\t\t * carries some information that might be important according\n\t\t\t\t * to business rules; so throwing an error is the only safe\n\t\t\t\t * course.\n\t\t\t\t *\n\t\t\t\t * If a trigger actually intends this type of interaction, it\n\t\t\t\t * can re-execute the DELETE and then return NULL to cancel\n\t\t\t\t * the outer delete.\n\t\t\t\t */\n\t\t\t\tif (tmfd.cmax != estate->es_output_cid)\n\t\t\t\t\tereport(ERROR,\n\t\t\t\t\t\t\t(errcode(ERRCODE_TRIGGERED_DATA_CHANGE_VIOLATION),\n\t\t\t\t\t\t\t errmsg(\"tuple to be deleted was already modified by an operation triggered by the current command\"),\n\t\t\t\t\t\t\t errhint(\"Consider using an AFTER trigger instead of a BEFORE trigger to propagate changes to other rows.\")));\n\n\t\t\t\t/* Else, already deleted by self; nothing to do */\n\n\nI'm not sure what the right behaviour is. But it feels to me like the\ncurrent behaviour wasn't particularly intentional, it's just what\nhappened. And certainly the \"already deleted by self\" comment doesn't\nindicate understanding that it could just as well be an update. Nor does\nthe comment above it refer to the possibility that the update might have\nbeen from a [different] wCTE in a different ModifyTable node, rather\nthan just a redundant update/delete by the same node.\n\nNor do I feel there are proper tests attesting to what the behaviour\nshould be.\n\nMarko, Hitoshi, Tom, was there some intended behaviour in\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=389af951552ff2209eae3e62fa147fef12329d4f\n?\n\nKevin, did you know that that could happen when writing\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=6868ed7491b7ea7f0af6133bb66566a2f5fe5a75\n?\n\nAnyone, do you have a concrete and doable proposal of how we should\nactually handle this?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 7 Apr 2019 13:29:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "What is the correct behaviour for a wCTE UPDATE followed by a DELETE?"
}
] |
[
{
"msg_contents": "Cleanup/remove/update references to OID column...\n\n..in wake of 578b229718e8f.\n\nSee also\n93507e67c9ca54026019ebec3026de35d30370f9\n1464755fc490a9911214817fe83077a3689250ab\n---\n doc/src/sgml/ddl.sgml | 9 ++++-----\n doc/src/sgml/ref/insert.sgml | 12 +++++-------\n doc/src/sgml/ref/psql-ref.sgml | 3 +++\n 3 files changed, 12 insertions(+), 12 deletions(-)\n\ndiff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml\nindex 9e761db..db044c5 100644\n--- a/doc/src/sgml/ddl.sgml\n+++ b/doc/src/sgml/ddl.sgml\n@@ -3672,11 +3672,10 @@ VALUES ('Albany', NULL, NULL, 'NY');\n <para>\n Partitions cannot have columns that are not present in the parent. It\n is not possible to specify columns when creating partitions with\n- <command>CREATE TABLE</command>, nor is it possible to add columns to\n- partitions after-the-fact using <command>ALTER TABLE</command>. Tables may be\n- added as a partition with <command>ALTER TABLE ... ATTACH PARTITION</command>\n- only if their columns exactly match the parent, including any\n- <literal>oid</literal> column.\n+ <command>CREATE TABLE</command>, to add columns to\n+ partitions after-the-fact using <command>ALTER TABLE</command>, nor to\n+ add a partition with <command>ALTER TABLE ... ATTACH PARTITION</command>\n+ if its columns would not exactly match those of the parent.\n </para>\n </listitem>\n \ndiff --git a/doc/src/sgml/ref/insert.sgml b/doc/src/sgml/ref/insert.sgml\nindex 62e142f..3e1be4c 100644\n--- a/doc/src/sgml/ref/insert.sgml\n+++ b/doc/src/sgml/ref/insert.sgml\n@@ -552,13 +552,11 @@ INSERT INTO <replaceable class=\"parameter\">table_name</replaceable> [ AS <replac\n INSERT <replaceable>oid</replaceable> <replaceable class=\"parameter\">count</replaceable>\n</screen>\n The <replaceable class=\"parameter\">count</replaceable> is the\n- number of rows inserted or updated. If <replaceable\n- class=\"parameter\">count</replaceable> is exactly one, and the\n- target table has OIDs, then <replaceable\n- class=\"parameter\">oid</replaceable> is the <acronym>OID</acronym>\n- assigned to the inserted row. The single row must have been\n- inserted rather than updated. Otherwise <replaceable\n- class=\"parameter\">oid</replaceable> is zero.\n+ number of rows inserted or updated.\n+ <replaceable>oid</replaceable> used to be the object ID of the inserted row\n+ if <replaceable>rows</replaceable> was 1 and the target table had OIDs, but\n+ OIDs system columns are not supported anymore; therefore\n+ <replaceable>oid</replaceable> is always 0.\n </para>\n \n <para>\ndiff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml\nindex 08f4bab..0e6e792 100644\n--- a/doc/src/sgml/ref/psql-ref.sgml\n+++ b/doc/src/sgml/ref/psql-ref.sgml\n@@ -3794,6 +3794,9 @@ bar\n command. This variable is only guaranteed to be valid until\n after the result of the next <acronym>SQL</acronym> command has\n been displayed.\n+ <productname>PostgreSQL</productname> servers since version 12 do not\n+ support OID system columns in user tables, and LASTOID will always be 0\n+ following <command>INSERT</command>.\n </para>\n </listitem>\n </varlistentry>\n-- \n2.1.4",
"msg_date": "Sun, 7 Apr 2019 19:28:47 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Cleanup/remove/update references to OID column"
},
{
"msg_contents": "\tJustin Pryzby wrote:\n\n> Cleanup/remove/update references to OID column...\n> \n> ..in wake of 578b229718e8f.\n\nJust spotted a couple of other references that need updates:\n\n#1. In catalogs.sgml:\n\n <row>\n <entry><structfield>attnum</structfield></entry>\n <entry><type>int2</type></entry>\n <entry></entry>\n <entry>\n The number of the column. Ordinary columns are numbered from 1\n up. System columns, such as <structfield>oid</structfield>,\n have (arbitrary) negative numbers.\n </entry>\n </row>\n\noid should be replaced by xmin or some other system column.\n\n\n#2. In ddl.sgml, when describing ctid:\n\n <para>\n The physical location of the row version within its table. Note that\n although the <structfield>ctid</structfield> can be used to\n locate the row version very quickly, a row's\n <structfield>ctid</structfield> will change if it is\n updated or moved by <command>VACUUM FULL</command>. Therefore\n <structfield>ctid</structfield> is useless as a long-term row\n identifier. The OID, or even better a user-defined serial\n number, should be used to identify logical rows.\n </para>\n\n\"The OID\" used to refer to an entry above in that list, now it's not\nclear what it refers to.\n\"serial number\" also sounds somewhat obsolete now that IDENTITY is\nsupported. The last sentence could be changed to:\n \"A primary key should be used to identify logical rows\".\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Wed, 10 Apr 2019 18:32:35 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: Cleanup/remove/update references to OID column"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 06:32:35PM +0200, Daniel Verite wrote:\n> \tJustin Pryzby wrote:\n> \n> > Cleanup/remove/update references to OID column...\n> \n> Just spotted a couple of other references that need updates:\n\n> #1. In catalogs.sgml:\n> #2. In ddl.sgml, when describing ctid:\n\nI found and included fixes for a few more references:\n\n doc/src/sgml/catalogs.sgml | 2 +-\n doc/src/sgml/ddl.sgml | 3 +--\n doc/src/sgml/information_schema.sgml | 4 ++--\n doc/src/sgml/ref/create_trigger.sgml | 2 +-\n doc/src/sgml/spi.sgml | 2 +-\n\nJustin",
"msg_date": "Wed, 10 Apr 2019 11:59:18 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleanup/remove/update references to OID column"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 11:59:18AM -0500, Justin Pryzby wrote:\n> I found and included fixes for a few more references:\n> \n> doc/src/sgml/catalogs.sgml | 2 +-\n> doc/src/sgml/ddl.sgml | 3 +--\n> doc/src/sgml/information_schema.sgml | 4 ++--\n> doc/src/sgml/ref/create_trigger.sgml | 2 +-\n> doc/src/sgml/spi.sgml | 2 +-\n\nOpen Item++.\n\nAndres, as this is a consequence of 578b229, could you look at what is\nproposed here?\n--\nMichael",
"msg_date": "Thu, 11 Apr 2019 13:26:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Cleanup/remove/update references to OID column"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-11 13:26:38 +0900, Michael Paquier wrote:\n> On Wed, Apr 10, 2019 at 11:59:18AM -0500, Justin Pryzby wrote:\n> > I found and included fixes for a few more references:\n> > \n> > doc/src/sgml/catalogs.sgml | 2 +-\n> > doc/src/sgml/ddl.sgml | 3 +--\n> > doc/src/sgml/information_schema.sgml | 4 ++--\n> > doc/src/sgml/ref/create_trigger.sgml | 2 +-\n> > doc/src/sgml/spi.sgml | 2 +-\n> \n> Open Item++.\n\n> Andres, as this is a consequence of 578b229, could you look at what is\n> proposed here?\n\nYes, I was planning to commit that soon-ish. There still seemed\nreview / newer versions happening, though, so I was thinking of waiting\nfor a bit longer.\n\n- Andres\n\n\n",
"msg_date": "Thu, 11 Apr 2019 08:39:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleanup/remove/update references to OID column"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 08:39:42AM -0700, Andres Freund wrote:\n> Yes, I was planning to commit that soon-ish. There still seemed\n> review / newer versions happening, though, so I was thinking of waiting\n> for a bit longer.\n\nI don't expect anything new.\n\nI got all that from: git grep '>oid' doc/src/sgml, so perhaps you'd want to\ncheck for any other stragglers.\n\nThanks,\nJustin\n\n\n",
"msg_date": "Thu, 11 Apr 2019 10:43:59 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleanup/remove/update references to OID column"
},
{
"msg_contents": "\tAndres Freund wrote:\n\n> Yes, I was planning to commit that soon-ish. There still seemed\n> review / newer versions happening, though, so I was thinking of waiting\n> for a bit longer.\n\nYou might want to apply this trivial one in the same batch:\n\nindex 452f307..7cfb67f 100644\n--- a/src/bin/pg_dump/pg_dump.c\n+++ b/src/bin/pg_dump/pg_dump.c\n@@ -428,7 +428,7 @@ main(int argc, char **argv)\n \n\tInitDumpOptions(&dopt);\n \n-\twhile ((c = getopt_long(argc, argv,\n\"abBcCd:E:f:F:h:j:n:N:oOp:RsS:t:T:U:vwWxZ:\",\n+\twhile ((c = getopt_long(argc, argv,\n\"abBcCd:E:f:F:h:j:n:N:Op:RsS:t:T:U:vwWxZ:\",\n\t\t\t\t\t\t\tlong_options,\n&optindex)) != -1)\n\t{\n\t\tswitch (c)\n\n\"o\" in the options list is a leftover. Leaving it in getopt_long() has the \neffect that pg_dump -o fails (per the default case in the switch),\nbut it's missing the expected error message (pg_dump: invalid option -- 'o')\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Mon, 15 Apr 2019 18:35:12 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: Cleanup/remove/update references to OID column"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-10 11:59:18 -0500, Justin Pryzby wrote:\n> @@ -1202,8 +1202,7 @@ CREATE TABLE circles (\n> <structfield>ctid</structfield> will change if it is\n> updated or moved by <command>VACUUM FULL</command>. Therefore\n> <structfield>ctid</structfield> is useless as a long-term row\n> - identifier. The OID, or even better a user-defined serial\n> - number, should be used to identify logical rows.\n> + identifier. A primary key should be used to identify logical rows.\n> </para>\n> </listitem>\n> </varlistentry>\n\nThat works for me.\n\n\n> @@ -3672,11 +3671,10 @@ VALUES ('Albany', NULL, NULL, 'NY');\n> <para>\n> Partitions cannot have columns that are not present in the parent. It\n> is not possible to specify columns when creating partitions with\n> - <command>CREATE TABLE</command>, nor is it possible to add columns to\n> - partitions after-the-fact using <command>ALTER TABLE</command>. Tables may be\n> - added as a partition with <command>ALTER TABLE ... ATTACH PARTITION</command>\n> - only if their columns exactly match the parent, including any\n> - <literal>oid</literal> column.\n> + <command>CREATE TABLE</command>, to add columns to\n> + partitions after-the-fact using <command>ALTER TABLE</command>, nor to\n> + add a partition with <command>ALTER TABLE ... ATTACH PARTITION</command>\n> + if its columns would not exactly match those of the parent.\n> </para>\n> </listitem>\n\nThis seems like a bigger change than necessary. I just chopped off the\n\"including any oid column\".\n\n\n\n> diff --git a/doc/src/sgml/ref/create_trigger.sgml b/doc/src/sgml/ref/create_trigger.sgml\n> index 6456105..3339a4b 100644\n> --- a/doc/src/sgml/ref/create_trigger.sgml\n> +++ b/doc/src/sgml/ref/create_trigger.sgml\n> @@ -465,7 +465,7 @@ UPDATE OF <replaceable>column_name1</replaceable> [, <replaceable>column_name2</\n> that the <literal>NEW</literal> row seen by the condition is the current value,\n> as possibly modified by earlier triggers. Also, a <literal>BEFORE</literal>\n> trigger's <literal>WHEN</literal> condition is not allowed to examine the\n> - system columns of the <literal>NEW</literal> row (such as <literal>oid</literal>),\n> + system columns of the <literal>NEW</literal> row (such as <literal>ctid</literal>),\n> because those won't have been set yet.\n> </para>\n\nHm. Not because of your change, but this sentence seems wrong. For one,\n\"is not allowed\" isn't really true - one can very well write a trigger\ndoing so. The returned values just are bogus.\n\nCREATE OR REPLACE FUNCTION scream_sysattrs() RETURNS TRIGGER LANGUAGE\nplpgsql AS $$\nBEGIN\n RAISE NOTICE 'inserted row: self: % xmin: % cmin: %, xmax: %, cmax: % tableoid: %', NEW.ctid, NEW.xmin, NEW.cmin, NEW.xmax, NEW.cmax, NEW.tableoid;\n RETURN NEW;\nEND;$$;\nDROP TABLE IF EXISTS foo; CREATE TABLE foo(i int);CREATE TRIGGER foo BEFORE INSERT ON foo FOR EACH ROW EXECUTE FUNCTION scream_sysattrs();\npostgres[5532][1]=# INSERT INTO foo values(1);\nNOTICE: 00000: inserted row: self: (0,0) xmin: 112 cmin: 2249, xmax: 4294967295, cmax: 2249 tableoid: 0\nLOCATION: exec_stmt_raise, pl_exec.c:3778\nINSERT 0 1\n\nWe probably should do better...\n\n\n\n> diff --git a/doc/src/sgml/ref/insert.sgml b/doc/src/sgml/ref/insert.sgml\n> index 62e142f..3e1be4c 100644\n> --- a/doc/src/sgml/ref/insert.sgml\n> +++ b/doc/src/sgml/ref/insert.sgml\n> @@ -552,13 +552,11 @@ INSERT INTO <replaceable class=\"parameter\">table_name</replaceable> [ AS <replac\n> INSERT <replaceable>oid</replaceable> <replaceable class=\"parameter\">count</replaceable>\n> </screen>\n> The <replaceable class=\"parameter\">count</replaceable> is the\n> - number of rows inserted or updated. If <replaceable\n> - class=\"parameter\">count</replaceable> is exactly one, and the\n> - target table has OIDs, then <replaceable\n> - class=\"parameter\">oid</replaceable> is the <acronym>OID</acronym>\n> - assigned to the inserted row. The single row must have been\n> - inserted rather than updated. Otherwise <replaceable\n> - class=\"parameter\">oid</replaceable> is zero.\n> + number of rows inserted or updated.\n> + <replaceable>oid</replaceable> used to be the object ID of the inserted row\n> + if <replaceable>rows</replaceable> was 1 and the target table had OIDs, but\n> + OIDs system columns are not supported anymore; therefore\n> + <replaceable>oid</replaceable> is always 0.\n> </para>\n\nI rephrased this a bit. Felt like the important bit came after\nhistorical information:\n+ The <replaceable class=\"parameter\">count</replaceable> is the number of\n+ rows inserted or updated. <replaceable>oid</replaceable> is always 0 (it\n+ used to be the <acronym>OID</acronym> assigned to the inserted row if\n+ <replaceable>rows</replaceable> was exactly one and the target table was\n+ declared <literal>WITH OIDS</literal> and 0 otherwise, but creating a table\n+ <literal>WITH OIDS</literal> is not supported anymore).\n\n\n> <para>\n> diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml\n> index 08f4bab..0e6e792 100644\n> --- a/doc/src/sgml/ref/psql-ref.sgml\n> +++ b/doc/src/sgml/ref/psql-ref.sgml\n> @@ -3794,6 +3794,9 @@ bar\n> command. This variable is only guaranteed to be valid until\n> after the result of the next <acronym>SQL</acronym> command has\n> been displayed.\n> + <productname>PostgreSQL</productname> servers since version 12 do not\n> + support OID system columns in user tables, and LASTOID will always be 0\n> + following <command>INSERT</command>.\n> </para>\n> </listitem>\n> </varlistentry>\n\nIt's not just user tables, system tables as well (it's just an ordinary\ntable now). I also thought it might be good to clarify that LASTOID still\nworks for older servers.\n\n+ <productname>PostgreSQL</productname> servers since version 12 do not\n+ support OID system columns anymore, thus LASTOID will always be 0\n+ following <command>INSERT</command> when targeting such servers.\n\n\nThanks for the patch!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Apr 2019 17:23:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleanup/remove/update references to OID column"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-15 18:35:12 +0200, Daniel Verite wrote:\n> \tAndres Freund wrote:\n> \n> > Yes, I was planning to commit that soon-ish. There still seemed\n> > review / newer versions happening, though, so I was thinking of waiting\n> > for a bit longer.\n> \n> You might want to apply this trivial one in the same batch:\n> \n> index 452f307..7cfb67f 100644\n> --- a/src/bin/pg_dump/pg_dump.c\n> +++ b/src/bin/pg_dump/pg_dump.c\n> @@ -428,7 +428,7 @@ main(int argc, char **argv)\n> \n> \tInitDumpOptions(&dopt);\n> \n> -\twhile ((c = getopt_long(argc, argv,\n> \"abBcCd:E:f:F:h:j:n:N:oOp:RsS:t:T:U:vwWxZ:\",\n> +\twhile ((c = getopt_long(argc, argv,\n> \"abBcCd:E:f:F:h:j:n:N:Op:RsS:t:T:U:vwWxZ:\",\n> \t\t\t\t\t\t\tlong_options,\n> &optindex)) != -1)\n> \t{\n> \t\tswitch (c)\n> \n> \"o\" in the options list is a leftover. Leaving it in getopt_long() has the \n> effect that pg_dump -o fails (per the default case in the switch),\n> but it's missing the expected error message (pg_dump: invalid option -- 'o')\n\nThanks for finding! Pushed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Apr 2019 17:29:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleanup/remove/update references to OID column"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 05:23:47PM -0700, Andres Freund wrote:\n> Thanks for the patch!\n\nThanks for fixing it up and committing.\n\nWould you consider the remaining two hunks, attached ?\n\nJustin",
"msg_date": "Wed, 17 Apr 2019 19:42:19 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleanup/remove/update references to OID column"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-17 19:42:19 -0500, Justin Pryzby wrote:\n> diff --git a/doc/src/sgml/information_schema.sgml b/doc/src/sgml/information_schema.sgml\n> index 234a3bb..9c618b1 100644\n> --- a/doc/src/sgml/information_schema.sgml\n> +++ b/doc/src/sgml/information_schema.sgml\n> @@ -1312,8 +1312,8 @@\n> <para>\n> The view <literal>columns</literal> contains information about all\n> table columns (or view columns) in the database. System columns\n> - (<literal>ctid</literal>, etc.) are not included. Only those columns are\n> - shown that the current user has access to (by way of being the\n> + (<literal>ctid</literal>, etc.) are not included. The only columns shown\n> + are those to which the current user has access (by way of being the\n> owner or having some privilege).\n> </para>\n\nI don't see the point of this change? Nor what it has to do with oids?\n\n\n> diff --git a/doc/src/sgml/ref/insert.sgml b/doc/src/sgml/ref/insert.sgml\n> index 189ce2a..f995a76 100644\n> --- a/doc/src/sgml/ref/insert.sgml\n> +++ b/doc/src/sgml/ref/insert.sgml\n> @@ -554,7 +554,7 @@ INSERT <replaceable>oid</replaceable> <replaceable class=\"parameter\">count</repl\n> The <replaceable class=\"parameter\">count</replaceable> is the number of\n> rows inserted or updated. <replaceable>oid</replaceable> is always 0 (it\n> used to be the <acronym>OID</acronym> assigned to the inserted row if\n> - <replaceable>rows</replaceable> was exactly one and the target table was\n> + <replaceable>count</replaceable> was exactly one and the target table was\n> declared <literal>WITH OIDS</literal> and 0 otherwise, but creating a table\n> <literal>WITH OIDS</literal> is not supported anymore).\n> </para>\n\nThe <replacable>rows</<replacable> reference is from your change\n:(. I'll fold it into another upcoming change for other tableam comment\nimprovements (by Heikki).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Apr 2019 17:51:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleanup/remove/update references to OID column"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 05:51:15PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2019-04-17 19:42:19 -0500, Justin Pryzby wrote:\n> > diff --git a/doc/src/sgml/information_schema.sgml b/doc/src/sgml/information_schema.sgml\n> > index 234a3bb..9c618b1 100644\n> > --- a/doc/src/sgml/information_schema.sgml\n> > +++ b/doc/src/sgml/information_schema.sgml\n> > @@ -1312,8 +1312,8 @@\n> > <para>\n> > The view <literal>columns</literal> contains information about all\n> > table columns (or view columns) in the database. System columns\n> > - (<literal>ctid</literal>, etc.) are not included. Only those columns are\n> > - shown that the current user has access to (by way of being the\n> > + (<literal>ctid</literal>, etc.) are not included. The only columns shown\n> > + are those to which the current user has access (by way of being the\n> > owner or having some privilege).\n> > </para>\n> \n> I don't see the point of this change? Nor what it has to do with oids?\n\nIt doesn't have to do with oids, but seems more correct and cleaner...to my\neyes.\n\n> > - <replaceable>rows</replaceable> was exactly one and the target table was\n> > + <replaceable>count</replaceable> was exactly one and the target table was\n> The <replacable>rows</<replacable> reference is from your change\n> :(.\n\nOuch, not sure how I did that..sorry for the noise (twice).\n\nThanks,\nJustin\n\n\n",
"msg_date": "Wed, 17 Apr 2019 23:14:13 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleanup/remove/update references to OID column"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 11:14:13PM -0500, Justin Pryzby wrote:\n> On Wed, Apr 17, 2019 at 05:51:15PM -0700, Andres Freund wrote:\n> > > - <replaceable>rows</replaceable> was exactly one and the target table was\n> > > + <replaceable>count</replaceable> was exactly one and the target table was\n> > The <replacable>rows</<replacable> reference is from your change\n> > :(.\n> \n> Ouch, not sure how I did that..sorry for the noise (twice).\n\nFor the record, I found that I borrowed the language from\n578b229718e8f:doc/src/sgml/protocol.sgml (but should have borrowed a bit less).\n\nJustin\n\n\n",
"msg_date": "Mon, 29 Apr 2019 05:28:51 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleanup/remove/update references to OID column"
},
{
"msg_contents": "I found what appears to be a dangling reference to old \"hidden\" OID behavior.\n\nJustin",
"msg_date": "Wed, 8 May 2019 14:05:57 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleanup/remove/update references to OID column"
},
{
"msg_contents": "I'm resending this patch, which still seems to be needed.\n\nAlso, should this be removed ? Or at leat remove the parenthesized text, since\nnon-system tables no longer have OIDs: \"(use to avoid output on system tables)\"\n\nhttps://www.postgresql.org/docs/devel/runtime-config-developer.html\ntrace_lock_oidmin (integer)\n\nAnd maybe this (?)\ntrace_lock_table (integer)\n\nOn Wed, May 08, 2019 at 02:05:57PM -0500, Justin Pryzby wrote:\n> I found what appears to be a dangling reference to old \"hidden\" OID behavior.\n> \n> Justin\n\n> From 1c6712c0ade949648dbc415dfd7ea80312360ef7 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Wed, 8 May 2019 13:57:12 -0500\n> Subject: [PATCH v1] Cleanup/remove/update references to OID column...\n> \n> ..in wake of 578b229718e8f.\n> \n> See also\n> 93507e67c9ca54026019ebec3026de35d30370f9\n> 1464755fc490a9911214817fe83077a3689250ab\n> f6b39171f3d65155b9390c2c69bc5b3469f923a8\n> \n> Author: Justin Pryzby <justin@telsasoft.com>\n> ---\n> doc/src/sgml/catalogs.sgml | 2 +-\n> 1 file changed, 1 insertion(+), 1 deletion(-)\n> \n> diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml\n> index d544e60..0f9c6f2 100644\n> --- a/doc/src/sgml/catalogs.sgml\n> +++ b/doc/src/sgml/catalogs.sgml\n> @@ -610,7 +610,7 @@\n> <entry><structfield>oid</structfield></entry>\n> <entry><type>oid</type></entry>\n> <entry></entry>\n> - <entry>Row identifier (hidden attribute; must be explicitly selected)</entry>\n> + <entry>Row identifier</entry>\n> </row>\n> \n> <row>\n> -- \n> 2.7.4\n> \n\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581\n\n\n",
"msg_date": "Mon, 1 Jul 2019 10:59:32 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleanup/remove/update references to OID column"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I'm resending this patch, which still seems to be needed.\n\nYeah, clearly one copy of that text got missed out. Pushed that.\n\n> Also, should this be removed ? Or at leat remove the parenthesized text, since\n> non-system tables no longer have OIDs: \"(use to avoid output on system tables)\"\n\nNo, I think that's still fine as-is. Tables still have OIDs, they\njust don't *contain* magic OID columns.\n\n> And maybe this (?)\n> trace_lock_table (integer)\n\nHm, the description of that isn't English, at least:\n\n\t\t\tgettext_noop(\"Sets the OID of the table with unconditionally lock tracing.\"),\n\nI'm not entirely sure about the point of tracing locks on just one\ntable, which seems to be what this is for.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Jul 2019 12:13:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleanup/remove/update references to OID column"
},
{
"msg_contents": "Few comments seem to have dangling references to the behavior from pre-12 \"WITH\nOIDS\". Maybe varsup.c should get a wider change?\n\ndiff --git a/src/backend/access/common/tupdesc.c b/src/backend/access/common/tupdesc.c\nindex 1e743d7d86..ce84e22cbd 100644\n--- a/src/backend/access/common/tupdesc.c\n+++ b/src/backend/access/common/tupdesc.c\n@@ -786,9 +786,7 @@ TupleDescInitEntryCollation(TupleDesc desc,\n *\n * Given a relation schema (list of ColumnDef nodes), build a TupleDesc.\n *\n- * Note: the default assumption is no OIDs; caller may modify the returned\n- * TupleDesc if it wants OIDs. Also, tdtypeid will need to be filled in\n- * later on.\n+ * tdtypeid will need to be filled in later on.\n */\n TupleDesc\n BuildDescForRelation(List *schema)\ndiff --git a/src/backend/access/transam/varsup.c b/src/backend/access/transam/varsup.c\nindex 2570e7086a..2f0eab0f53 100644\n--- a/src/backend/access/transam/varsup.c\n+++ b/src/backend/access/transam/varsup.c\n@@ -527,8 +527,7 @@ GetNewObjectId(void)\n \t * The first time through this routine after normal postmaster start, the\n \t * counter will be forced up to FirstNormalObjectId. 
This mechanism\n \t * leaves the OIDs between FirstBootstrapObjectId and FirstNormalObjectId\n-\t * available for automatic assignment during initdb, while ensuring they\n-\t * will never conflict with user-assigned OIDs.\n+\t * available for automatic assignment during initdb.\n \t */\n \tif (ShmemVariableCache->nextOid < ((Oid) FirstNormalObjectId))\n \t{\ndiff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c\nindex 262e14ccfb..1ca11960e2 100644\n--- a/src/backend/commands/tablecmds.c\n+++ b/src/backend/commands/tablecmds.c\n@@ -4813,7 +4813,7 @@ ATRewriteTables(AlterTableStmt *parsetree, List **wqueue, LOCKMODE lockmode,\n \t\t\tcontinue;\n \n \t\t/*\n-\t\t * If we change column data types or add/remove OIDs, the operation\n+\t\t * If we change column data types, the operation\n \t\t * has to be propagated to tables that use this table's rowtype as a\n \t\t * column type. tab->newvals will also be non-NULL in the case where\n \t\t * we're adding a column with a default. We choose to forbid that\n@@ -4837,8 +4837,7 @@ ATRewriteTables(AlterTableStmt *parsetree, List **wqueue, LOCKMODE lockmode,\n \n \t\t/*\n \t\t * We only need to rewrite the table if at least one column needs to\n-\t\t * be recomputed, we are adding/removing the OID column, or we are\n-\t\t * changing its persistence.\n+\t\t * be recomputed, or we are changing its persistence.\n \t\t *\n \t\t * There are two reasons for requiring a rewrite when changing\n \t\t * persistence: on one hand, we need to ensure that the buffers\n\n\n",
"msg_date": "Wed, 29 Apr 2020 14:46:22 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleanup/remove/update references to OID column"
}
] |
[
{
"msg_contents": "Hi,\n\nlazy_scan_heap() contains the following block:\n\n\t\t/*\n\t\t * If the all-visible page is turned out to be all-frozen but not\n\t\t * marked, we should so mark it. Note that all_frozen is only valid\n\t\t * if all_visible is true, so we must check both.\n\t\t */\n\t\telse if (all_visible_according_to_vm && all_visible && all_frozen &&\n\t\t\t\t !VM_ALL_FROZEN(onerel, blkno, &vmbuffer))\n\t\t{\n\t\t\t/*\n\t\t\t * We can pass InvalidTransactionId as the cutoff XID here,\n\t\t\t * because setting the all-frozen bit doesn't cause recovery\n\t\t\t * conflicts.\n\t\t\t */\n\t\t\tvisibilitymap_set(onerel, blkno, buf, InvalidXLogRecPtr,\n\t\t\t\t\t\t\t vmbuffer, InvalidTransactionId,\n\t\t\t\t\t\t\t VISIBILITYMAP_ALL_FROZEN);\n\t\t}\n\nbut I'm afraid that's not quite enough. As an earlier comment explains:\n\n\n\t\t\t * NB: If the heap page is all-visible but the VM bit is not set,\n\t\t\t * we don't need to dirty the heap page. However, if checksums\n\t\t\t * are enabled, we do need to make sure that the heap page is\n\t\t\t * dirtied before passing it to visibilitymap_set(), because it\n\t\t\t * may be logged. Given that this situation should only happen in\n\t\t\t * rare cases after a crash, it is not worth optimizing.\n\t\t\t */\n\t\t\tPageSetAllVisible(page);\n\t\t\tMarkBufferDirty(buf);\n\t\t\tvisibilitymap_set(onerel, blkno, buf, InvalidXLogRecPtr,\n\t\t\t\t\t\t\t vmbuffer, visibility_cutoff_xid, flags);\n\ndon't we need to do that here too? visibilitymap_set() does:\n\n\t\t\t\t/*\n\t\t\t\t * If data checksums are enabled (or wal_log_hints=on), we\n\t\t\t\t * need to protect the heap page from being torn.\n\t\t\t\t */\n\t\t\t\tif (XLogHintBitIsNeeded())\n\t\t\t\t{\n\t\t\t\t\tPage\t\theapPage = BufferGetPage(heapBuf);\n\n\t\t\t\t\t/* caller is expected to set PD_ALL_VISIBLE first */\n\t\t\t\t\tAssert(PageIsAllVisible(heapPage));\n\t\t\t\t\tPageSetLSN(heapPage, recptr);\n\t\t\t\t}\n\ni.e. 
it actually modifies the page when checksums/wal hint bits are\nenabled, setting a different LSN. Without it being dirtied that won't\npersist. Which doesn't seem good?\n\n\nvisibilitymap_set()'s comment header doesn't explain this well. Nor is\n * Call visibilitymap_pin first to pin the right one. This function doesn't do\n * any I/O.\nactually true, given it does XLogInsert(). I think that should've been\nadjusted in 503c7305a1e3 (\"Make the visibility map crash-safe.\").\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 7 Apr 2019 20:49:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "lazy_scan_heap() forgets to mark buffer dirty when setting all\n frozen?"
},
{
"msg_contents": "On 2019-Apr-07, Andres Freund wrote:\n\n> lazy_scan_heap() contains the following block:\n> \n> \t\t/*\n> \t\t * If the all-visible page is turned out to be all-frozen but not\n> \t\t * marked, we should so mark it. Note that all_frozen is only valid\n> \t\t * if all_visible is true, so we must check both.\n> \t\t */\n> \t\telse if (all_visible_according_to_vm && all_visible && all_frozen &&\n> \t\t\t\t !VM_ALL_FROZEN(onerel, blkno, &vmbuffer))\n> \t\t{\n> \t\t\t/*\n> \t\t\t * We can pass InvalidTransactionId as the cutoff XID here,\n> \t\t\t * because setting the all-frozen bit doesn't cause recovery\n> \t\t\t * conflicts.\n> \t\t\t */\n> \t\t\tvisibilitymap_set(onerel, blkno, buf, InvalidXLogRecPtr,\n> \t\t\t\t\t\t\t vmbuffer, InvalidTransactionId,\n> \t\t\t\t\t\t\t VISIBILITYMAP_ALL_FROZEN);\n> \t\t}\n> \n> but I'm afraid that's not quite enough.\n\nApparently the initial commit a892234f830e had MarkBufferDirty, but it\nwas removed one week later by 77a1d1e79892.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Apr 2019 00:48:20 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: lazy_scan_heap() forgets to mark buffer dirty when setting all\n frozen?"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-08 00:48:20 -0400, Alvaro Herrera wrote:\n> On 2019-Apr-07, Andres Freund wrote:\n> \n> > lazy_scan_heap() contains the following block:\n> > \n> > \t\t/*\n> > \t\t * If the all-visible page is turned out to be all-frozen but not\n> > \t\t * marked, we should so mark it. Note that all_frozen is only valid\n> > \t\t * if all_visible is true, so we must check both.\n> > \t\t */\n> > \t\telse if (all_visible_according_to_vm && all_visible && all_frozen &&\n> > \t\t\t\t !VM_ALL_FROZEN(onerel, blkno, &vmbuffer))\n> > \t\t{\n> > \t\t\t/*\n> > \t\t\t * We can pass InvalidTransactionId as the cutoff XID here,\n> > \t\t\t * because setting the all-frozen bit doesn't cause recovery\n> > \t\t\t * conflicts.\n> > \t\t\t */\n> > \t\t\tvisibilitymap_set(onerel, blkno, buf, InvalidXLogRecPtr,\n> > \t\t\t\t\t\t\t vmbuffer, InvalidTransactionId,\n> > \t\t\t\t\t\t\t VISIBILITYMAP_ALL_FROZEN);\n> > \t\t}\n> > \n> > but I'm afraid that's not quite enough.\n> \n> Apparently the initial commit a892234f830e had MarkBufferDirty, but it\n> was removed one week later by 77a1d1e79892.\n\nGood catch. Kinda looks like it could have been an accidental removal?\nRobert?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 7 Apr 2019 21:55:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: lazy_scan_heap() forgets to mark buffer dirty when setting all\n frozen?"
},
{
"msg_contents": "On Mon, Apr 8, 2019 at 12:49 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> lazy_scan_heap() contains the following block:\n>\n> /*\n> * If the all-visible page is turned out to be all-frozen but not\n> * marked, we should so mark it. Note that all_frozen is only valid\n> * if all_visible is true, so we must check both.\n> */\n> else if (all_visible_according_to_vm && all_visible && all_frozen &&\n> !VM_ALL_FROZEN(onerel, blkno, &vmbuffer))\n> {\n> /*\n> * We can pass InvalidTransactionId as the cutoff XID here,\n> * because setting the all-frozen bit doesn't cause recovery\n> * conflicts.\n> */\n> visibilitymap_set(onerel, blkno, buf, InvalidXLogRecPtr,\n> vmbuffer, InvalidTransactionId,\n> VISIBILITYMAP_ALL_FROZEN);\n> }\n>\n> but I'm afraid that's not quite enough. As an earlier comment explains:\n>\n>\n> * NB: If the heap page is all-visible but the VM bit is not set,\n> * we don't need to dirty the heap page. However, if checksums\n> * are enabled, we do need to make sure that the heap page is\n> * dirtied before passing it to visibilitymap_set(), because it\n> * may be logged. Given that this situation should only happen in\n> * rare cases after a crash, it is not worth optimizing.\n> */\n> PageSetAllVisible(page);\n> MarkBufferDirty(buf);\n> visibilitymap_set(onerel, blkno, buf, InvalidXLogRecPtr,\n> vmbuffer, visibility_cutoff_xid, flags);\n>\n> don't we need to do that here too?\n\nThank you for pointing out. I think that the same things are necessary\nhere. Otherwise does it lead the case that the visibility map page is\nset while the heap page bit is cleared?\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 8 Apr 2019 15:15:11 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: lazy_scan_heap() forgets to mark buffer dirty when setting all\n frozen?"
},
{
"msg_contents": "On Mon, Apr 8, 2019 at 12:55 AM Andres Freund <andres@anarazel.de> wrote:\n> > Apparently the initial commit a892234f830e had MarkBufferDirty, but it\n> > was removed one week later by 77a1d1e79892.\n>\n> Good catch. Kinda looks like it could have been an accidental removal?\n> Robert?\n\nSo you're talking about this hunk?\n\n- /* Page is marked all-visible but should be all-frozen */\n- PageSetAllFrozen(page);\n- MarkBufferDirty(buf);\n-\n\nI don't remember exactly, but I am pretty sure that I assumed from the\nway that hunk looked that the MarkBufferDirty() was only needed there\nbecause of the call to PageSetAllFrozen(). Perhaps I should've\nfigured it out from the \"NB: If the heap page is all-visible...\"\ncomment, but I unfortunately don't find that comment to be very clear\n-- it basically says we don't need to do it, and then immediately\ncontradicts itself by saying we sometimes do need to do it \"because it\nmay be logged.\" But that's hardly an explanation, because why should\nthe fact that the page is going to be logged require that it be\ndirtied? We could improve the comment, but before we go there...\n\nWhy the heck does visibilitymap_set() require callers to do\nMarkBufferDirty() instead of doing it itself? Or at least, if it's\ngot to work that way, can't it Assert() something? It seems crazy to\nme that it calls PageSetLSN() without calling MarkBufferDirty() or\nasserting that the buffer is dirty or having a header comment that\nsays that the buffer must be dirty. Ugh.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 8 Apr 2019 10:59:32 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: lazy_scan_heap() forgets to mark buffer dirty when setting all\n frozen?"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-08 10:59:32 -0400, Robert Haas wrote:\n> On Mon, Apr 8, 2019 at 12:55 AM Andres Freund <andres@anarazel.de> wrote:\n> > > Apparently the initial commit a892234f830e had MarkBufferDirty, but it\n> > > was removed one week later by 77a1d1e79892.\n> >\n> > Good catch. Kinda looks like it could have been an accidental removal?\n> > Robert?\n>\n> So you're talking about this hunk?\n>\n> - /* Page is marked all-visible but should be all-frozen */\n> - PageSetAllFrozen(page);\n> - MarkBufferDirty(buf);\n> -\n>\n> I don't remember exactly, but I am pretty sure that I assumed from the\n> way that hunk looked that the MarkBufferDirty() was only needed there\n> because of the call to PageSetAllFrozen(). Perhaps I should've\n> figured it out from the \"NB: If the heap page is all-visible...\"\n> comment, but I unfortunately don't find that comment to be very clear\n> -- it basically says we don't need to do it, and then immediately\n> contradicts itself by saying we sometimes do need to do it \"because it\n> may be logged.\" But that's hardly an explanation, because why should\n> the fact that the page is going to be logged require that it be\n> dirtied? We could improve the comment, but before we go there...\n\n> Why the heck does visibilitymap_set() require callers to do\n> MarkBufferDirty() instead of doing it itself? Or at least, if it's\n> got to work that way, can't it Assert() something? It seems crazy to\n> me that it calls PageSetLSN() without calling MarkBufferDirty() or\n> asserting that the buffer is dirty or having a header comment that\n> says that the buffer must be dirty. Ugh.\n\nI think the visibilitymap_set() has incrementally gotten worse, to the\npoint that it should just be redone. Initially, before you made it\ncrashsafe, it indeed didn't do any I/O (like the header claims), and\ndidn't touch the heap. To make it crashsafe it started to call back into\nheap. 
Then to support checksums, most of its callers had to take be\nadapted around marking buffers dirty. And then the all-frozen stuff\ncomplicated it a bit further.\n\nI don't quite know what the right answer is, but I think\nvisibilitymap_set (or whatever it's successor is), shouldn't do any WAL\nlogging of its own, and it shouldn't call into heap - we want to be able\nto reuse the vm for things like zheap after all. It's also just an\nextremely confusing interface, especially because say\nvisibilitymap_clear() doesn't do WAL logging.\n\nI think we should have a heap_visibilitymap_set() that does the WAL\nlogging, which internally calls into visibilitymap.c.\n\nPerhaps it could look something roughly like:\n\nstatic void\nheap_visibilitymap_set(...)\n{\n Assert(LWLockIsHeld(BufferDescriptorGetIOLock(heapBuf))\n\n Assert(PageIsAllVisible(heapPage));\n /* other checks */\n\n START_CRIT_SECTION();\n\n /* if change required, this locks VM buffer */\n if (visibilitmap_start_set(rel, heapBlk, &vmBuf))\n {\n XLogRecPtr recptr;\n\n MarkBufferDirty(heapBuf)\n\n if (RelationNeedsWAL(rel))\n {\n /* inlined body of log_heap_visible */\n recptr = XLogInsert(XLOG_HEAP2_VISIBLE);\n\n /*\n * If data checksums are enabled (or wal_log_hints=on), we\n * need to protect the heap page from being torn.\n */\n if (XLogHintBitIsNeeded())\n {\n Page heapPage = BufferGetPage(heapBuf);\n\n /* caller is expected to set PD_ALL_VISIBLE first */\n Assert(PageIsAllVisible(heapPage));\n PageSetLSN(heapPage, recptr);\n }\n }\n\n\n /* actually change bits, set page LSN, release vmbuf lock */\n visibilitmap_finish_set(rel, heapBlk, vmbuf, recptr);\n }\n\n END_CRIT_SECTION();\n}\n\nThe replay routines would then not use heap_visibilitymap_set (they\nright now call visibilitymap_set with a valid XLogRecPtr), but instead\nvisibilitmap_redo_set() or such, which can check the LSNs etc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 8 Apr 2019 10:14:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: lazy_scan_heap() forgets to mark buffer dirty when setting all\n frozen?"
}
] |
[
{
"msg_contents": "Hi all\r\n\r\nzh_CN.po has not been updated for three years.\r\nThe source has changed a lot.\r\nI want to do something for postgresql.\r\nI think I can update the zh_CN.po file.\r\n\r\nI plan to translate and update the following zh_CN.po\r\npostgresql/src/bin/initdb/po/zh_CN.po [Patch has been completed] \r\npostgresql/src/bin/pg_archivecleanup/po/zh_CN.po\r\npostgresql/src/bin/pg_basebackup/po/zh_CN.po\r\npostgresql/src/bin/pg_checksums/po/zh_CN.po\r\npostgresql/src/bin/pg_config/po/zh_CN.po\r\npostgresql/src/bin/pg_controldata/po/zh_CN.po\r\npostgresql/src/bin/pg_ctl/po/zh_CN.po\r\npostgresql/src/bin/pg_dump/po/zh_CN.po\r\npostgresql/src/bin/pg_resetwal/po/zh_CN.po\r\npostgresql/src/bin/pg_rewind/po/zh_CN.po\r\npostgresql/src/bin/pg_test_fsync/po/zh_CN.po\r\npostgresql/src/bin/pg_test_timing/po/zh_CN.po\r\npostgresql/src/bin/pg_upgrade/po/zh_CN.po\r\npostgresql/src/bin/pg_waldump/po/zh_CN.po\r\npostgresql/src/bin/psql/zh_CN.po\r\npostgresql/src/bin/scripts/po/zh_CN.po\r\npostgresql/src/backend/po/zh_CN.po\r\npostgresql/src/pl/plpgsql/src/po/zh_CN.po\r\npostgresql/src/interfaces/ecpg/preproc/po/zh_CN.po\r\npostgresql/src/interfaces/ecpg/ecpglib/po/zh_CN.po\r\npostgresql/src/interfaces/libpq/po/zh_CN.po\r\n\r\nHere is a patch for postgresql/src/bin/initdb/po/zh_CN.po",
"msg_date": "Mon, 8 Apr 2019 07:10:53 +0000",
"msg_from": "\"Zhang, Jie\" <zhangjie2@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Translation updates for zh_CN.po (Chinese Simplified)"
},
{
"msg_contents": "\"Zhang, Jie\" <zhangjie2@cn.fujitsu.com> writes:\n> zh_CN.po has not been updated for three years.\n> The source has changed a lot.\n> I want to do something for postgresql.\n> I think I can update the zh_CN.po file.\n\nThat would be great, but we don't use the pgsql-hackers mailing list\nto coordinate translation work. Please join pgsql-translators for that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Apr 2019 10:41:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Translation updates for zh_CN.po (Chinese Simplified)"
}
] |
[
{
"msg_contents": "Hello devs,\n\nThe minor attached patch $SUBJECT, so that it can be inspected easily, \ninstead of having to look at the source code or whatever.\n\n sh> pgbench --list select-only\n -- select-only: <builtin: select only>\n \\set aid random(1, 100000 * :scale)\n SELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n\nThe builtin list output is also slightly improved:\n\n sh> pgbench -b list\n Available builtin scripts:\n tpcb-like: <builtin: TPC-B (sort of)>\n simple-update: <builtin: simple update>\n select-only: <builtin: select only>\n\n-- \nFabien.",
"msg_date": "Mon, 8 Apr 2019 17:43:24 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "pgbench - add option to show actual builtin script code"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nPatch looks good to me, and work fine on my machine. One minor observation is option 'list' mostly used to list the elements like \"pgbench -b list\" shows the available builtin scripts. Therefore we should use a word which seems to be more relevant like --show-script.\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Thu, 02 May 2019 13:55:18 +0000",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add option to show actual builtin script code"
},
{
"msg_contents": "Hello,\n\n> Patch looks good to me, and work fine on my machine. One minor \n> observation is option 'list' mostly used to list the elements like \n> \"pgbench -b list\" shows the available builtin scripts. Therefore we \n> should use a word which seems to be more relevant like --show-script.\n\nThanks for the review.\n\nHere is a version with \"--show-script\". I also thought about \"--listing\", \nmaybe.\n\n> The new status of this patch is: Waiting on Author\n\n-- \nFabien.",
"msg_date": "Thu, 2 May 2019 16:25:42 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - add option to show actual builtin script code"
},
{
"msg_contents": "Now the patch is good now.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Thu, 02 May 2019 14:54:38 +0000",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add option to show actual builtin script code"
},
{
"msg_contents": "\n> Now the patch is good now.\n>\n> The new status of this patch is: Ready for Committer\n\nOk, thanks.\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 2 May 2019 18:35:57 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - add option to show actual builtin script code"
},
{
"msg_contents": "\nOn 5/2/19 12:35 PM, Fabien COELHO wrote:\n>\n>> Now the patch is good now.\n>>\n>> The new status of this patch is: Ready for Committer\n>\n> Ok, thanks.\n>\n\n\nWhy aren't we instead putting the exact scripts in the documentation?\nHaving to call pgbench with a special flag to get the script text seems\na bit odd.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 11 Jul 2019 12:04:04 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add option to show actual builtin script code"
},
{
"msg_contents": "\nHello Andrew,\n\n>>> Now the patch is good now.\n>>>\n>>> The new status of this patch is: Ready for Committer\n>\n> Why aren't we instead putting the exact scripts in the documentation?\n> Having to call pgbench with a special flag to get the script text seems\n> a bit odd.\n\nA typical use case I had is to create a new script by modifying an \nexisting one for testing or debug. I prefer \"command > file.sql ; vi \nfile.sql\" to hazardous copy-pasting stuff from html pages.\n\nI do not think that it is worth replicating all scripts inside the doc, \nthey are not that interesting, especially if more are added. Currently, \nout of the 3 scripts, only one is in the doc, and nobody complained:-)\n\nNow, they could be added to the documentation, but I'd like the option \nanyway.\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 11 Jul 2019 18:20:14 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - add option to show actual builtin script code"
},
{
"msg_contents": "On Fri, Jul 12, 2019 at 4:20 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> >>> Now the patch is good now.\n> >>>\n> >>> The new status of this patch is: Ready for Committer\n> >\n> > Why aren't we instead putting the exact scripts in the documentation?\n> > Having to call pgbench with a special flag to get the script text seems\n> > a bit odd.\n>\n> A typical use case I had is to create a new script by modifying an\n> existing one for testing or debug. I prefer \"command > file.sql ; vi\n> file.sql\" to hazardous copy-pasting stuff from html pages.\n>\n> I do not think that it is worth replicating all scripts inside the doc,\n> they are not that interesting, especially if more are added. Currently,\n> out of the 3 scripts, only one is in the doc, and nobody complained:-)\n>\n> Now, they could be added to the documentation, but I'd like the option\n> anyway.\n\nCommitted, after pgindent. Thanks Fabien and Ibrar.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Jul 2019 12:03:05 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add option to show actual builtin script code"
},
{
"msg_contents": "\n> Committed, after pgindent. Thanks Fabien and Ibrar.\n\nThanks for the commit.\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 16 Jul 2019 07:59:29 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - add option to show actual builtin script code"
}
] |
[
{
"msg_contents": "Hello devs,\n\nThe attached patch does $SUBJECT, as a showcase for recently added \nfeatures, including advanced expressions (CASE...), \\if, \\gset, ending SQL \ncommands at \";\"...\n\nThere is also a small fix to the doc which describes the tpcb-like \nimplementation but gets one variable name wrong: balance -> delta.\n\n-- \nFabien.",
"msg_date": "Mon, 8 Apr 2019 17:58:46 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "On Tue, Apr 9, 2019 at 3:58 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> The attached patch does $SUBJECT, as a showcase for recently added\n> features, including advanced expressions (CASE...), \\if, \\gset, ending SQL\n> commands at \";\"...\n\nHi Fabien,\n\n+ the account branch has a 15% probability to be in the same branch\nas the teller (unless\n\nI would say \"... has a 15% probability of being in the same ...\". The\nsame wording appears further down in the comment.\n\nI see that the parameters you propose match the TPCB 2.0\ndescription[1], and the account balance was indeed supposed to be\nreturned to the driver. I wonder if \"strict\" is the right name here\nthough. \"tpcb-like-2\" at least leaves room for someone to propose yet\nanother variant, and still includes the \"-like\" disclaimer, which I\ninterpret as a way of making it clear that this benchmark and results\nproduced by it have no official TPC audited status.\n\n> There is also a small fix to the doc which describes the tpcb-like\n> implementation but gets one variable name wrong: balance -> delta.\n\nAgreed. I committed that part. Thanks!\n\n[1] http://www.tpc.org/tpcb/spec/tpcb_current.pdf\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sun, 14 Jul 2019 15:03:39 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "Hello Thomas,\n\nThanks for the feedback.\n\n> + the account branch has a 15% probability to be in the same branch\n> as the teller (unless\n>\n> I would say \"... has a 15% probability of being in the same ...\". The\n> same wording appears further down in the comment.\n\nFixed.\n\n> I see that the parameters you propose match the TPCB 2.0 description[1], \n> [...]\n\nNearly:-(\n\nWhile re-re-re-re-reading the spec, it was 85%, i.e. people mostly go to \ntheir local teller, and I managed to get it wrong. Sigh. Fixed. Hopefully.\n\nI've updated the script a little so that it is closer to spec. I've also \nchanged the script definition so that it still works as expected if \nsomeone changes the \"nbranches\" definition for some reason, even if this\nis explicitly discouraged by comments.\n\n> I wonder if \"strict\" is the right name here though. \"tpcb-like-2\" at \n> least leaves room for someone to propose yet another variant, and still \n> includes the \"-like\" disclaimer, which I interpret as a way of making it \n> clear that this benchmark and results produced by it have no official \n> TPC audited status.\n\nHmmm.\n\nThe -like suffix is really about the conformance of the script, not the \nrest, but that should indeed be clearer. I've expanded the comment and doc \nabout this with disclaimers, so that there is no ambiguity about what is \nexpected to conform, which is only the transaction script.\n\nI have added a comment about the non-conformance of the \"int\" type used for \nbalances in the initialization phase.\n\nAlso, on second thought, I've changed the name to \"standard-tpcb\", but I'm \nunsure whether it is better than \"script-tpcb\". There is an incentive to \nhave a different prefix so that \"-b t\" would not complain of ambiguity \nbetween \"tpcb-like*\", which would be a regression. This is why I did not \nchoose the simple \"tcp\". 
There may be a \"standard-tpcb-2\" anyway.\n\nI have added a small test run in the TAP test.\n\nOn my TODO list is adding an initialization option to change the balance \ntype for conformance, by using NUMERIC or integer8.\n\nWhile thinking about that, an unrelated thought occurred to me: adding a \npartitioned initialization variant would be nice to test the performance \nimpact of partitioning easily. I should have thought of that as soon as \npartitioning was added. Added to my TODO list as well.\n\n-- \nFabien.",
"msg_date": "Sun, 14 Jul 2019 10:50:12 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> [ pgbench-strict-tpcb-2.patch ]\n\nTBH, I think we should reject this patch. Nobody cares about TPC-B\nanymore, and they care even less about differences between one\nsort-of-TPC-B test and another sort-of-TPC-B test. (As the lack\nof response on this thread shows.) We don't need this kind of\nbaggage in pgbench; it's got too many \"features\" already.\n\nI'm also highly dubious about labeling this script \"standard TPC-B\",\nwhen it resolves only some of the reasons why our traditional script\nis not really TPC-B. That's treading on being false advertising.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Jul 2019 18:00:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 3:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> TBH, I think we should reject this patch. Nobody cares about TPC-B\n> anymore, and they care even less about differences between one\n> sort-of-TPC-B test and another sort-of-TPC-B test. (As the lack\n> of response on this thread shows.) We don't need this kind of\n> baggage in pgbench; it's got too many \"features\" already.\n\n+1. TPC-B was officially made obsolete in 1995.\n\n> I'm also highly dubious about labeling this script \"standard TPC-B\",\n> when it resolves only some of the reasons why our traditional script\n> is not really TPC-B. That's treading on being false advertising.\n\nIANAL, but it may not even be permissible to claim that we have\nimplemented \"standard TPC-B\".\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 30 Jul 2019 16:31:54 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Tue, Jul 30, 2019 at 3:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'm also highly dubious about labeling this script \"standard TPC-B\",\n>> when it resolves only some of the reasons why our traditional script\n>> is not really TPC-B. That's treading on being false advertising.\n\n> IANAL, but it may not even be permissible to claim that we have\n> implemented \"standard TPC-B\".\n\nYeah, very likely you can't legally say that unless the TPC\nhas certified your test. (Our existing code and docs are quite\ncareful to call pgbench's version \"TPC-like\" or similar weasel\nwording, and never claim that it is really TPC-B or even a close\napproximation.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Jul 2019 19:37:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "Hello Tom,\n\n>>> I'm also highly dubious about labeling this script \"standard TPC-B\",\n>>> when it resolves only some of the reasons why our traditional script\n>>> is not really TPC-B. That's treading on being false advertising.\n>\n>> IANAL, but it may not even be permissible to claim that we have\n>> implemented \"standard TPC-B\".\n>\n> Yeah, very likely you can't legally say that unless the TPC\n> has certified your test. (Our existing code and docs are quite\n> careful to call pgbench's version \"TPC-like\" or similar weasel\n> wording, and never claim that it is really TPC-B or even a close\n> approximation.)\n\nHmmm.\n\nI agree that nobody really cares about TPC-B per se. The point of this \npatch is to provide a built-in example of recent and useful pgbench \nfeatures that match a real specification.\n\nThe \"strict\" only refers to the test script. It cannot match the whole \nspec, which addresses many other things, some of them more process than \ntool: table creation and data types, performance data collection, database \nconfiguration, pricing of hardware used in the tests, post-benchmark run \nchecks, auditing constraints, whatever…\n\n> [about pgbench] it's got too many \"features\" already.\n\nI disagree with this judgement.\n\nAlthough not all features are that useful, the accumulation of recent \nadditions (int/float/bool expressions, \\if, \\gset, non-uniform prng, …) \nmakes it suitable for testing various realistic scenarios which could not \nbe tested before. As said above, my point was to have a builtin \nillustration of available capabilities.\n\nIt did not occur to me that a script which implements \"strictly\" a \nparticular section of a 25-year-obsolete benchmark could raise any legal \nissue.\n\n-- \nFabien.",
"msg_date": "Wed, 31 Jul 2019 16:10:10 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 10:10 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n>\n> Hello Tom,\n>\n> >>> I'm also highly dubious about labeling this script \"standard TPC-B\",\n> >>> when it resolves only some of the reasons why our traditional script\n> >>> is not really TPC-B. That's treading on being false advertising.\n> >\n> >> IANAL, but it may not even be permissible to claim that we have\n> >> implemented \"standard TPC-B\".\n> >\n> > Yeah, very likely you can't legally say that unless the TPC\n> > has certified your test. (Our existing code and docs are quite\n> > careful to call pgbench's version \"TPC-like\" or similar weasel\n> > wording, and never claim that it is really TPC-B or even a close\n> > approximation.)\n>\n> Hmmm.\n>\n> I agree that nobody really cares about TPC-B per se. The point of this\n> patch is to provide a built-in example of recent and useful pgbench\n> features that match a real specification.\n>\n\nI agree with this. When I was at EnterpriseDB, while it wasn't audited, we\nhad to develop an actual TPC-B implementation because pgbench was too\ndifferent. pgbench itself isn't that useful as a benchmark tool, imo, but\nif we have the ability to make it better (i.e. closer to an actual\nbenchmark kit), I think we should.\n\n-- \nJonah H. Harris",
"msg_date": "Wed, 31 Jul 2019 10:19:15 -0400",
"msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "\"Jonah H. Harris\" <jonah.harris@gmail.com> writes:\n> On Wed, Jul 31, 2019 at 10:10 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>> I agree that nobody really cares about TPC-B per se. The point of this\n>> patch is to provide a built-in example of recent and useful pgbench\n>> features that match a real specification.\n\n> I agree with this. When I was at EnterpriseDB, while it wasn't audited, we\n> had to develop an actual TPC-B implementation because pgbench was too\n> different. pgbench itself isn't that useful as a benchmark tool, imo, but\n> if we have the ability to make it better (i.e. closer to an actual\n> benchmark kit), I think we should.\n\n[ shrug... ] TBH, the proposed patch does not look to me like actual\nbenchmark kit; it looks like a toy. Nobody who was intent on making their\nbenchmark numbers look good would do a significant amount of work in a\nslow, ad-hoc interpreted language. I also wonder to what extent the\nnumbers would reflect pgbench itself being the bottleneck. Which is\nreally the fundamental problem I've got with all the stuff that's been\ncrammed into pgbench of late --- the more computation you're doing there,\nthe less your results measure the server's capabilities rather than\npgbench's implementation details.\n\nIn any case, even if I were in love with the script itself, we cannot\ncommit something that claims to be \"standard TPC-B\". It needs weasel\nwording that makes it clear that it isn't TPC-B, and then you have a\nproblem of user confusion about why we have both not-quite-TPC-B-1\nand not-quite-TPC-B-2, and which one to use, or which one was used in\nsomebody else's tests.\n\nI think if you want to show off what these pgbench features are good\nfor, it'd be better to find some other example that's not in the\nmidst of a legal minefield.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Jul 2019 17:11:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 2:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I agree with this. When I was at EnterpriseDB, while it wasn't audited, we\n> > had to develop an actual TPC-B implementation because pgbench was too\n> > different. pgbench itself isn't that useful as a benchmark tool, imo, but\n> > if we have the ability to make it better (i.e. closer to an actual\n> > benchmark kit), I think we should.\n>\n> [ shrug... ] TBH, the proposed patch does not look to me like actual\n> benchmark kit; it looks like a toy. Nobody who was intent on making their\n> benchmark numbers look good would do a significant amount of work in a\n> slow, ad-hoc interpreted language.\n\nAccording to TPC themselves, \"In contrast to TPC-A, TPC-B is not an\nOLTP benchmark. Rather, TPC-B can be looked at as a database stress\ntest...\" [1]. Sounds like classic pgbench to me.\n\nNot sure where that leaves this patch. What problem is it actually\ntrying to solve?\n\n[1] http://www.tpc.org/tpcb/\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 31 Jul 2019 14:21:39 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "Hello Tom,\n\n> [ shrug... ] TBH, the proposed patch does not look to me like actual\n> benchmark kit; it looks like a toy. Nobody who was intent on making their\n> benchmark numbers look good would do a significant amount of work in a\n> slow, ad-hoc interpreted language. I also wonder to what extent the\n> numbers would reflect pgbench itself being the bottleneck.\n\n\n> Which is really the fundamental problem I've got with all the stuff \n> that's been crammed into pgbench of late --- the more computation you're \n> doing there, the less your results measure the server's capabilities \n> rather than pgbench's implementation details.\n\nThat is a very good question. It is easy to measure the overhead, for \ninstance:\n\n sh> time pgbench -r -T 30 -M prepared\n ...\n latency average = 2.425 ms\n tps = 412.394420 (including connections establishing)\n statement latencies in milliseconds:\n 0.001 \\set aid random(1, 100000 * :scale)\n 0.000 \\set bid random(1, 1 * :scale)\n 0.000 \\set tid random(1, 10 * :scale)\n 0.000 \\set delta random(-5000, 5000)\n 0.022 BEGIN;\n 0.061 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;\n 0.038 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n 0.046 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;\n 0.042 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;\n 0.036 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n 2.178 END;\n real 0m30.080s, user 0m0.406s, sys 0m0.689s\n\nThe cost of pgbench interpreted part (\\set) is under 1/1000. The full time \nof the process itself counts for 1.4%, below the inevitable system time \nwhich is 2.3%. Pgbench overheads are pretty small compared to postgres \nconnection and command execution, and system time. The above used a local \nsocket, if it were an actual remote network connection, the gap would be \nlarger. 
A profile run could collect more data, but that does not seem \nnecessary.\n\nSome parts of pgbench could be optimized, eg for expressions the large \nswitch could be avoided with precomputed function calls, some static \nanalysis could infer some types and avoid calls to generic functions which \nhave to test types, and so on. But frankly I do not think that this is \ncurrently needed, so I would not bother unless an actual issue is proven.\n\nAlso, pgbench overheads must be compared to an actual client application, \nwhich deals with a database through some language (PHP, Python, JS, Java…) \nthe interpreter of which would be written in C/C++ just like pgbench, and \nsome library (ORM, DBI, JDBC…), possibly written in the initial language \nand relying on libpq under the hood. Ok, there could be some JIT involved, \nbut it will not change that there are costs there too, and it would have \nto do pretty much the same things that pgbench is doing, plus what the \napplication has to do with the data.\n\nAll in all, pgbench overheads are small compared to postgres processing \ntimes and representative of a reasonably optimized client application.\n\n> In any case, even if I were in love with the script itself,\n\nLove is probably not required for a feature demonstration:-)\n\n> we cannot commit something that claims to be \"standard TPC-B\".\n\nYep, I clearly underestimated this legal aspect.\n\n> It needs weasel wording that makes it clear that it isn't TPC-B, and \n> then you have a problem of user confusion about why we have both \n> not-quite-TPC-B-1 and not-quite-TPC-B-2, and which one to use, or which \n> one was used in somebody else's tests.\n\nI agree that confusion is no good either.\n\n> I think if you want to show off what these pgbench features are good\n> for, it'd be better to find some other example that's not in the\n> midst of a legal minefield.\n\nYep, I got that.\n\nTo try to salvage my illustration idea: I could change the name to \"demo\", 
\ni.e. quite far from \"TPC-B\", do some extensions to make it differ, eg use \na non-uniform random generator, and then explicitly say that it is \nvaguely inspired by \"TPC-B\" and intended as a demo script likely to \nbe updated to illustrate new features (eg if using a non-uniform generator \nI'd really like to add a permutation layer if available some day).\n\nThis way, the \"demo\" script's real intention would be very clear.\n\n-- \nFabien.",
"msg_date": "Thu, 1 Aug 2019 08:52:52 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "\n> According to TPC themselves, \"In contrast to TPC-A, TPC-B is not an\n> OLTP benchmark. Rather, TPC-B can be looked at as a database stress\n> test...\" [1]. Sounds like classic pgbench to me.\n>\n> Not sure where that leaves this patch. What problem is it actually\n> trying to solve?\n\nThat there is no builtin illustration of pgbench script features that \nallow more realistic benchmarking.\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 1 Aug 2019 08:57:13 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 2:53 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> All in all, pgbench overheads are small compared to postgres processing\n> times and representative of a reasonably optimized client application.\n\nIt's pretty easy to devise tests where pgbench is client-limited --\njust try running it with threads = clients/4, sometimes even\nclients/2. So I don't buy the idea that this is true in general.\n\n> To try to salvage my illustration idea: I could change the name to \"demo\",\n> i.e. quite far from \"TPC-B\", do some extensions to make it differ, eg use\n> a non-uniform random generator, and then explicitly say that it is a\n> vaguely inspired by \"TPC-B\" and intended as a demo script susceptible to\n> be updated to illustrate new features (eg if using a non-uniform generator\n> I'd really like to add a permutation layer if available some day).\n>\n> This way, the \"demo\" real intention would be very clear.\n\nI do not like this idea at all; \"demo\" is about as generic a name as\nimaginable. But I have another idea: how about including this script\nin the documentation with some explanatory text that describes (a) the\nways in which it is more faithful to TPC-B than what the normal\npgbench thing and (b) the problems that it doesn't solve, as\nenumerated by Fabien upthread:\n\n\"table creation and data types, performance data collection, database\nconfiguration, pricing of hardware used in the tests, post-benchmark run\nchecks, auditing constraints, whatever…\"\n\nPerhaps that idea still won't attract any votes, but I throw it out\nthere for consideration.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 1 Aug 2019 09:25:49 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-01 08:52:52 +0200, Fabien COELHO wrote:\n> sh> time pgbench -r -T 30 -M prepared\n> ...\n> latency average = 2.425 ms\n> tps = 412.394420 (including connections establishing)\n> statement latencies in milliseconds:\n> 0.001 \\set aid random(1, 100000 * :scale)\n> 0.000 \\set bid random(1, 1 * :scale)\n> 0.000 \\set tid random(1, 10 * :scale)\n> 0.000 \\set delta random(-5000, 5000)\n> 0.022 BEGIN;\n> 0.061 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;\n> 0.038 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n> 0.046 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;\n> 0.042 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;\n> 0.036 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n> 2.178 END;\n> real 0m30.080s, user 0m0.406s, sys 0m0.689s\n>\n> The cost of pgbench interpreted part (\\set) is under 1/1000.\n\nI don't put much credence in those numbers for pgbench commands - they\ndon't include significant parts of the overhead of command\nexecution. Even just the fact that you need to process more commands\nthrough the pretty slow pgbench interpreter has significant costs.\n\nUsing pgbench -Mprepared -n -c 8 -j 8 -S pgbench_100 -T 10 -r -P1\ne.g. shows pgbench to use 189% CPU in my 4/8 core/thread laptop. That's\na pretty significant share.\n\nAnd before you argue that that's just about a read-only workload:\nServers with either synchronous_commit=off, or with extremely fast WAL\ncommit due to BBUs/NVMe, are quite common. So you can easily get into\nscenarios where pgbench overhead is an issue for read/write workloads too.\n\nWith synchronous_commit=off, I e.g. 
see:\n\n$ PGOPTIONS='-c synchronous_commit=off' /usr/bin/time pgbench -Mprepared -n -c 8 -j 8 pgbench_100 -T 10 -r\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 100\nquery mode: prepared\nnumber of clients: 8\nnumber of threads: 8\nduration: 10 s\nnumber of transactions actually processed: 179198\nlatency average = 0.447 ms\ntps = 17892.824470 (including connections establishing)\ntps = 17908.086839 (excluding connections establishing)\nstatement latencies in milliseconds:\n 0.001 \\set aid random(1, 100000 * :scale)\n 0.000 \\set bid random(1, 1 * :scale)\n 0.000 \\set tid random(1, 10 * :scale)\n 0.000 \\set delta random(-5000, 5000)\n 0.042 BEGIN;\n 0.086 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;\n 0.061 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n 0.070 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;\n 0.070 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;\n 0.058 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n 0.054 END;\n6.65user 10.64system 0:10.02elapsed 172%CPU (0avgtext+0avgdata 4588maxresident)k\n0inputs+0outputs (0major+367minor)pagefaults 0swaps\n\nAnd the largest part of the overhead is in pgbench's interpreter loop:\n+ 12.35% pgbench pgbench [.] threadRun\n+ 4.47% pgbench libpq.so.5.13 [.] pqParseInput3\n+ 3.54% pgbench pgbench [.] dopr.constprop.0\n+ 3.30% pgbench pgbench [.] fmtint\n+ 3.16% pgbench libc-2.28.so [.] __strcmp_avx2\n+ 2.95% pgbench libc-2.28.so [.] malloc\n+ 2.95% pgbench libpq.so.5.13 [.] PQsendQueryPrepared\n+ 2.15% pgbench libpq.so.5.13 [.] pqPutInt\n+ 1.93% pgbench pgbench [.] getVariable\n+ 1.85% pgbench libc-2.28.so [.] ppoll\n+ 1.85% pgbench libc-2.28.so [.] __strlen_avx2\n+ 1.85% pgbench libpthread-2.28.so [.] __libc_recv\n+ 1.66% pgbench libpq.so.5.13 [.] pqPutMsgStart\n+ 1.63% pgbench libpq.so.5.13 [.] 
pqGetInt\n\nAnd that's the just the standard pgbench read/write case, without\nadditional script commands or anything.\n\n\n> The full time\n> of the process itself counts for 1.4%, below the inevitable system time\n> which is 2.3%. Pgbench overheads are pretty small compared to postgres\n> connection and command execution, and system time. The above used a local\n> socket, if it were an actual remote network connection, the gap would be\n> larger. A profile run could collect more data, but that does not seem\n> necessary.\n\nWell, duh, that's because you're completely IO bound. You're doing\n400tps. That's *nothing*. All you're measuring is how fast the WAL can\nbe fdatasync()ed to disk. Of *course* pgbench isn't a relevant overhead\nin that case. I really don't understand how this can be an argument.\n\n\n> Also, pgbench overheads must be compared to an actual client application,\n> which deals with a database through some language (PHP, Python, JS, Java…)\n> the interpreter of which would be written in C/C++ just like pgbench, and\n> some library (ORM, DBI, JDBC…), possibly written in the initial language and\n> relying on libpq under the hood. Ok, there could be some JIT involved, but\n> it will not change that there are costs there too, and it would have to do\n> pretty much the same things that pgbench is doing, plus what the application\n> has to do with the data.\n\nUh, but those clients aren't all running on a single machine.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Aug 2019 10:50:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "Hello Robert,\n\n>> All in all, pgbench overheads are small compared to postgres processing\n>> times and representative of a reasonably optimized client application.\n>\n> It's pretty easy to devise tests where pgbench is client-limited --\n> just try running it with threads = clients/4, sometimes even\n> clients/2. So I don't buy the idea that this is true in general.\n\nOk, one thread cannot feed an N core server if enough clients are executed \nper thread and the server has few things to do.\n\nThe point I'm clumsily trying to make is that pgbench-specific overheads \nare quite small: any benchmark driver would have pretty much at least the \nsame costs, because you have the cpu cost of the tool itself, then the \nlibrary it uses, eg lib{pq,c}, then syscalls. Even if the first costs are \nreduced to zero, you still have to deal with the database through the \nsystem, and this part will be the same.\n\nAs the cost of pgbench itself is a reduced part of the total cpu costs of \nrunning the bench client side, there is no extraordinary improvement to \nexpect from optimizing this part. This does not mean that pgbench \nperformance should not be improved, if possible and maintainable.\n\nI'll develop that point a little more in an answer to Andres' figures, \nwhich are very interesting, by providing some more figures.\n\n>> To try to salvage my illustration idea: I could change the name to \"demo\",\n>> i.e. 
quite far from \"TPC-B\", do some extensions to make it differ, eg use\n>> a non-uniform random generator, and then explicitly say that it is a\n>> vaguely inspired by \"TPC-B\" and intended as a demo script susceptible to\n>> be updated to illustrate new features (eg if using a non-uniform generator\n>> I'd really like to add a permutation layer if available some day).\n>>\n>> This way, the \"demo\" real intention would be very clear.\n>\n> I do not like this idea at all; \"demo\" is about as generic a name as\n> imaginable.\n\nWhat name would you suggest, if it were to be made available from pgbench \nas a builtin, that avoids confusion with \"tpcb-like\"?\n\n> But I have another idea: how about including this script in the \n> documentation with some explanatory text that describes (a) the ways in \n> which it is more faithful to TPC-B than what the normal pgbench thing \n> and (b) the problems that it doesn't solve, as enumerated by Fabien \n> upthread:\n\nWe can put more examples in the documentation, ok.\n\nOne of the issues raised by Tom is that claiming faithfulness to TPC-B is \nprone to legal issues. 
Frankly, I do not care about TPC-B, only that it \nis a *real* benchmark, and that it allows illustrating pgbench \ncapabilities.\n\nAnother point is confusion if there are two tpcb-like scripts provided.\n\nSo I'm fine with giving up any claim about faithfulness, especially as it \nwould allow the \"demo\" script to be more didactic and illustrate more\nof pgbench capabilities.\n\n> \"table creation and data types, performance data collection, database\n> configuration, pricing of hardware used in the tests, post-benchmark run\n> checks, auditing constraints, whatever…\"\n\nI already put such caveats in comments and in the documentation, but that \ndoes not seem to be enough for Tom.\n\n> Perhaps that idea still won't attract any votes, but I throw it out\n> there for consideration.\n\nI think that adding an illustration section could be fine, but ISTM that \nit would still be appropriate to have the example executable. Moreover, I \nthink that your idea does not fix the \"we should not make too many \nclaims about TPC-B to avoid potential legal issues\" problem.\n\n-- \nFabien.",
"msg_date": "Fri, 2 Aug 2019 08:38:37 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "Hello Andres,\n\nThanks a lot for the feedback and comments.\n\n> Using pgbench -Mprepared -n -c 8 -j 8 -S pgbench_100 -T 10 -r -P1\n> e.g. shows pgbench to use 189% CPU in my 4/8 core/thread laptop. That's\n> a pretty significant share.\n\nFine, but what is the corresponding server load? 211%? 611%? And what \nactual time is spent in pgbench itself, vs libpq and syscalls?\n\nFigures and discussion below.\n\n> And before you argue that that's just about a read-only workload:\n\nI'm fine with worst-case scenarios:-) Let's do the worst for my 2 cores \nrunning at 2.2 GHz laptop:\n\n\n(0) we can run a script that really does nearly nothing:\n\n sh> cat nope.sql\n \\sleep 0\n # do not sleep, so stay awake…\n\n sh> time pgbench -f nope.sql -T 10 -r\n latency average = 0.000 ms\n tps = 12569499.226367 (excluding connections establishing) # 12.6M\n statement latencies in milliseconds:\n 0.000 \\sleep 0\n real 0m10.072s, user 0m10.027s, sys 0m0.012s\n\nUnsurprisingly pgbench is at about 100% cpu load, and the transaction cost \n(transaction loop and stat collection) is 0.080 µs (1/12.6M) per script \nexecution (one client on one thread).\n\n\n(1) a pgbench complex-commands-only script:\n\n sh> cat set.sql\n \\set x random_exponential(1, :scale * 10, 2.5) + 2.1\n \\set y random(1, 9) + 17.1 * :x\n \\set z case when :x > 7 then 1.0 / ln(:y) else 2.0 / sqrt(:y) end\n\n sh> time pgbench -f set.sql -T 10 -r\n latency average = 0.001 ms\n tps = 1304989.729560 (excluding connections establishing) # 1.3M\n statement latencies in milliseconds:\n 0.000 \\set x random_exponential(1, :scale * 10, 2.5) + 2.1\n 0.000 \\set y random(1, 9) + 17.1 * :x\n 0.000 \\set z case when :x > 7 then 1.0 / ln(:y) else 2.0 / sqrt(:y) end\n real 0m10.038s, user 0m10.003s, sys 0m0.000s\n\nAgain pgbench load is near 100%, with only pgbench stuff (thread loop, \nexpression evaluation, variables, stat collection) costing about 0.766 µs \ncpu per script execution. 
This is about 10 times the previous case: 90% of \npgbench cpu cost is in expressions and variables, not a surprise.\n\nProbably this under-a-µs could be reduced… but what overall improvements \nwould it provide? An answer comes with the last test:\n\n\n(2) a ridiculously small SQL query, tested through a local unix socket:\n\n sh> cat empty.sql\n ;\n # yep, an empty query!\n\n sh> time pgbench -f empty.sql -T 10 -r\n latency average = 0.016 ms\n tps = 62206.501709 (excluding connections establishing) # 62.2K\n statement latencies in milliseconds:\n 0.016 ;\n real 0m10.038s, user 0m1.754s, sys 0m3.867s\n\nWe are adding minimal libpq and underlying system calls to pgbench \ninternal cpu costs in the most favorable (or worst:-) sql query with the \nmost favorable postgres connection.\n\nApparent load is about (1.754+3.867)/10.038 = 56%, so the cpu cost per \nscript is 0.56 / 62206.5 = 9 µs, over 100 times the cost of a do-nothing \nscript (0), and over 10 times the cost of a complex expression command \nscript (1).\n\nConclusion: pgbench-specific overheads are typically (much) below 10% of \nthe total client-side cpu cost of a transaction, while over 90% of the cpu \ncost is spent in libpq and system, for the worst case do-nothing query.\n\nA perfect bench driver which would have zero overheads would reduce the \ncpu cost by at most 10%, because you still have to talk to the database \nthrough the system. 
If pgbench cost were divided by two, which would be a \nreasonable achievement, the benchmark client cost would be reduced by 5%.\n\nWow?\n\nI have already given some thought in the past to optimizing \"pgbench\", \nespecially to avoid long switches (eg in expression evaluation) and maybe \nto improve variable management, but as shown above I would not expect a \ngain worth the effort and assume that a patch would probably be justly \nrejected, because for a realistic benchmark script these costs are already \nmuch less than other inevitable libpq/syscall costs.\n\nThat does not mean that nothing needs to be done, but the situation is \ncurrently quite good.\n\nIn conclusion, ISTM that current pgbench can saturate a postgres \nserver with a client significantly smaller than the server, which seems \nlike a reasonable benchmarking situation. Any other driver in any other \nlanguage would necessarily incur the same kind of costs.\n\n\n> [...] And the largest part of the overhead is in pgbench's interpreter \n> loop:\n\nIndeed, the figures below are very interesting! Thanks for collecting \nthem.\n\n> + 12.35% pgbench pgbench [.] threadRun\n> + 3.54% pgbench pgbench [.] dopr.constprop.0\n> + 3.30% pgbench pgbench [.] fmtint\n> + 1.93% pgbench pgbench [.] getVariable\n\n~ 21%, probably some inlining has been performed, because I would have \nexpected to see significant time in \"advanceConnectionState\".\n\n> + 2.95% pgbench libpq.so.5.13 [.] PQsendQueryPrepared\n> + 2.15% pgbench libpq.so.5.13 [.] pqPutInt\n> + 4.47% pgbench libpq.so.5.13 [.] pqParseInput3\n> + 1.66% pgbench libpq.so.5.13 [.] pqPutMsgStart\n> + 1.63% pgbench libpq.so.5.13 [.] pqGetInt\n\n~ 13%\n\n> + 3.16% pgbench libc-2.28.so [.] __strcmp_avx2\n> + 2.95% pgbench libc-2.28.so [.] malloc\n> + 1.85% pgbench libc-2.28.so [.] ppoll\n> + 1.85% pgbench libc-2.28.so [.] __strlen_avx2\n> + 1.85% pgbench libpthread-2.28.so [.] 
__libc_recv\n\n~ 11%, str is a pain… Not sure who is calling though, pgbench or libpq.\n\nThis is basically 47% pgbench, 53% lib*, on the sample provided. I'm \nunclear about where system time is measured.\n\n> And that's the just the standard pgbench read/write case, without\n> additional script commands or anything.\n\n> Well, duh, that's because you're completely IO bound. You're doing\n> 400tps. That's *nothing*. All you're measuring is how fast the WAL can\n> be fdatasync()ed to disk. Of *course* pgbench isn't a relevant overhead\n> in that case. I really don't understand how this can be an argument.\n\nSure. My interest in running it was to show that the \\set stuff was \nridiculous compared to processing an actual SQL query, but it does not \nallow analyzing all overheads. I hope that the 3 examples above \nmake my point more understandable.\n\n>> Also, pgbench overheads must be compared to an actual client application,\n>> which deals with a database through some language (PHP, Python, JS, Java…)\n>> the interpreter of which would be written in C/C++ just like pgbench, and\n>> some library (ORM, DBI, JDBC…), possibly written in the initial language and\n>> relying on libpq under the hood. Ok, there could be some JIT involved, but\n>> it will not change that there are costs there too, and it would have to do\n>> pretty much the same things that pgbench is doing, plus what the application\n>> has to do with the data.\n>\n> Uh, but those clients aren't all running on a single machine.\n\nSure.\n\nThe cumulative power of the clients is probably much larger than the \npostgres server itself, and ISTM that pgbench can simulate such \nthings with much smaller client-side requirements, and that any other tool \ncould not do much better.\n\n-- \nFabien.",
"msg_date": "Fri, 2 Aug 2019 10:34:24 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
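The client-side cost arithmetic in the message above (apparent load from the `time` output, then client CPU cost per transaction) can be replayed in a short sketch. This is for illustration only; the figures are those quoted for case (2), the empty query over a local unix socket.

```python
# Replaying Fabien's arithmetic: apparent client load = (user + sys) / elapsed,
# then client CPU seconds per executed script = load / tps.

def client_cpu_per_tx(user_s, sys_s, elapsed_s, tps):
    """Return (apparent client CPU load, client CPU seconds per transaction)."""
    load = (user_s + sys_s) / elapsed_s
    return load, load / tps

# Case (2): real 10.038s, user 1.754s, sys 3.867s, tps ~ 62206.5.
load, per_tx = client_cpu_per_tx(1.754, 3.867, 10.038, 62206.5)
print(f"load = {load:.2f}, per-tx cpu = {per_tx * 1e6:.0f} us")  # load = 0.56, per-tx cpu = 9 us
```

The same helper applied to cases (0) and (1) reproduces the 0.080 µs and 0.766 µs per-script figures, since there the load is ~100%.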
{
"msg_contents": "Hi,\n\nOn 2019-08-02 10:34:24 +0200, Fabien COELHO wrote:\n> \n> Hello Andres,\n> \n> Thanks a lot for these feedbacks and comments.\n> \n> > Using pgbench -Mprepared -n -c 8 -j 8 -S pgbench_100 -T 10 -r -P1\n> > e.g. shows pgbench to use 189% CPU in my 4/8 core/thread laptop. That's\n> > a pretty significant share.\n> \n> Fine, but what is the corresponding server load? 211%? 611%? And what actual\n> time is spent in pgbench itself, vs libpq and syscalls?\n\nSystem wide pgbench, including libpq, is about 22% of the whole system.\n\nAs far as I can tell there's a number of things that are wrong:\n- prepared statement names are recomputed for every query execution\n- variable name lookup is done for every command, rather than once, when\n parsing commands\n- a lot of string->int->string type back and forths\n\n\n> Conclusion: pgbench-specific overheads are typically (much) below 10% of the\n> total client-side cpu cost of a transaction, while over 90% of the cpu cost\n> is spent in libpq and system, for the worst case do-nothing query.\n\nI don't buy that that's the actual worst case, or even remotely close to\nit. I e.g. see higher pgbench overhead for the *modify* case than for\nthe pgbench's readonly case. And that's because some of the meta\ncommands are slow, in particular everything related to variables. And\nthe modify case just has more variables.\n\n\n\n> \n> > + 12.35% pgbench pgbench [.] threadRun\n> > + 3.54% pgbench pgbench [.] dopr.constprop.0\n> > + 3.30% pgbench pgbench [.] fmtint\n> > + 1.93% pgbench pgbench [.] getVariable\n> \n> ~ 21%, probably some inlining has been performed, because I would have\n> expected to see significant time in \"advanceConnectionState\".\n\nYea, there's plenty inlining. Note dopr() is string processing.\n\n\n> > + 2.95% pgbench libpq.so.5.13 [.] PQsendQueryPrepared\n> > + 2.15% pgbench libpq.so.5.13 [.] pqPutInt\n> > + 4.47% pgbench libpq.so.5.13 [.] pqParseInput3\n> > + 1.66% pgbench libpq.so.5.13 [.] 
pqPutMsgStart\n> > + 1.63% pgbench libpq.so.5.13 [.] pqGetInt\n> \n> ~ 13%\n\nA lot of that is really stupid. We need to improve\nlibpq. PQsendQueryGuts (attributed to PQsendQueryPrepared here) builds\nthe command with many separate pqPut* calls, which reside in another\ntranslation unit; that is pretty sad.\n\n\n> > + 3.16% pgbench libc-2.28.so [.] __strcmp_avx2\n> > + 2.95% pgbench libc-2.28.so [.] malloc\n> > + 1.85% pgbench libc-2.28.so [.] ppoll\n> > + 1.85% pgbench libc-2.28.so [.] __strlen_avx2\n> > + 1.85% pgbench libpthread-2.28.so [.] __libc_recv\n> \n> ~ 11%, str is a pain… Not sure who is calling though, pgbench or\n> libpq.\n\nBoth. Most of the strcmp is from getQueryParams()/getVariable(). The\ndopr() is from pg_*printf, which is mostly attributable to\npreparedStatementName() and getVariable().\n\n\n> This is basically 47% pgbench, 53% lib*, on the sample provided. I'm unclear\n> about where system time is measured.\n\nIt was excluded in this profile, both to reduce profiling costs, and to\nfocus on pgbench.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Aug 2019 11:31:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
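The per-command variable name lookup called out above can be avoided by resolving names once, at parse time. A minimal sketch of that idea (hypothetical, not pgbench's actual code) interns each variable name into an array slot, so execution does an O(1) index access instead of a string lookup per command:

```python
# Parse-time variable resolution: ":name" references are resolved to slot
# indices while commands are parsed; execution touches only the values array.

class Script:
    def __init__(self):
        self.slot_of = {}   # variable name -> slot index (parse time only)
        self.values = []    # one cell per variable (execution time)

    def slot(self, name):
        """Intern a variable name; called only while parsing commands."""
        if name not in self.slot_of:
            self.slot_of[name] = len(self.values)
            self.values.append(None)
        return self.slot_of[name]

script = Script()
x = script.slot("x")            # parse time: ":x" becomes slot 0
y = script.slot("y")            # ":y" becomes slot 1

script.values[x] = 42           # execution time: no string comparison at all
script.values[y] = script.values[x] + 1
print(script.values)            # [42, 43]
```

Variables created at run time (e.g. by \gset) would still need a dynamic path, which is the complication Fabien raises in his reply.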
{
"msg_contents": "Hello Andres,\n\n>>> Using pgbench -Mprepared -n -c 8 -j 8 -S pgbench_100 -T 10 -r -P1\n>>> e.g. shows pgbench to use 189% CPU in my 4/8 core/thread laptop. That's\n>>> a pretty significant share.\n>>\n>> Fine, but what is the corresponding server load? 211%? 611%? And what actual\n>> time is spent in pgbench itself, vs libpq and syscalls?\n>\n> System wide pgbench, including libpq, is about 22% of the whole system.\n\nHmmm. I guess that the consistency between 189% CPU on 4 cores/8 threads \nand 22% overall load is that 189/800 = 23.6% ~ 22%.\n\nGiven the simplicity of the select-only transaction the stuff is CPU \nbound, so the 8 postgres server processes should saturate the 4 core CPU, and \npgbench & postgres are competing for CPU time. The overall load is \nprobably 100%, i.e. 22% pgbench vs 78% postgres (assuming system is \nincluded), 78/22 = 3.5, i.e. pgbench on one core would saturate postgres \non 3.5 cores on a CPU bound load.\n\nI'm not shocked by these results for near worst-case conditions (i.e. the \nserver side has very little to do).\n\nIt seems quite consistent with the really worst-case example I reported \n(empty query, cannot do less). Looking at the same empty-sql-query load \nthrough \"htop\", I have 95% postgres and 75% pgbench. 
This is not fully \nconsistent with \"time\" which reports 55% pgbench overall, over 2/3 of \nwhich in system, under 1/3 pgbench which must be divided into pgbench \nactual code and external libpq/lib* other stuff.\n\nYet again, pgbench code is not the issue from my point of view, because \ntime is spent mostly elsewhere and any other driver would have to do the \nsame.\n\n> As far as I can tell there's a number of things that are wrong:\n\nSure, I agree that things could be improved.\n\n> - prepared statement names are recomputed for every query execution\n\nI'm not sure it is a big issue, but it should be precomputed somewhere, \nthough.\n\n> - variable name lookup is done for every command, rather than once, when\n> parsing commands\n\nHmmm. The names of variables are not all known in advance, eg \\gset. \nPossibly it does not matter, because the name of actually used variables \nis known. Each used variable could get a number so that using a variable \nwould be accessing an array at the corresponding index.\n\n> - a lot of string->int->string type back and forths\n\nYep, that is a pain, ISTM that strings are exchanged at the protocol \nlevel, but this is libpq design, not pgbench.\n\nAs far as variable values are concerned, AFAICR conversions are performed \non demand only, and just once.\n\nOverall, my point is that even if all pgbench-specific costs were wiped \nout it would not change the final result (pgbench load) much because most \nof the time is spent in libpq and system. Any other test driver would \nincur the same cost.\n\n>> Conclusion: pgbench-specific overheads are typically (much) below 10% of the\n>> total client-side cpu cost of a transaction, while over 90% of the cpu cost\n>> is spent in libpq and system, for the worst case do-nothing query.\n>\n> I don't buy that that's the actual worst case, or even remotely close to \n> it.\n\nHmmm. I'm not sure I can do much worse than 3 complex expressions against \none empty sql query. 
Ok, I could put 27 complex expressions to reach \n50-50, but the 3-to-1 complex-expression-to-empty-sql ratio already seems \nok for a realistic worst-case test script.\n\n> I e.g. see higher pgbench overhead for the *modify* case than for\n> the pgbench's readonly case. And that's because some of the meta\n> commands are slow, in particular everything related to variables. And\n> the modify case just has more variables.\n\nHmmm. WRT \\set and expressions, the two main costs seem to be the large \nswitch and the variable management. Yet again, I still interpret the \nfigures I collected as showing that these costs are small compared to libpq/system \noverheads, and the overall result is below postgres own CPU costs (on a \nper client basis).\n\n>>> + 12.35% pgbench pgbench [.] threadRun\n>>> + 3.54% pgbench pgbench [.] dopr.constprop.0\n>>\n>> ~ 21%, probably some inlining has been performed, because I would have\n>> expected to see significant time in \"advanceConnectionState\".\n>\n> Yea, there's plenty inlining. Note dopr() is string processing.\n\nWhich is a pain, no doubt about that. Some of it has been taken out of \npgbench already, eg comparing commands vs using an enum.\n\n>>> + 2.95% pgbench libpq.so.5.13 [.] PQsendQueryPrepared\n>>> + 2.15% pgbench libpq.so.5.13 [.] pqPutInt\n>>> + 4.47% pgbench libpq.so.5.13 [.] pqParseInput3\n>>> + 1.66% pgbench libpq.so.5.13 [.] pqPutMsgStart\n>>> + 1.63% pgbench libpq.so.5.13 [.] pqGetInt\n>>\n>> ~ 13%\n>\n> A lot of that is really stupid. We need to improve libpq. \n> PqsendQueryGuts (attributed to PQsendQueryPrepared here), builds the \n> command in many separate pqPut* commands, which reside in another \n> translation unit, is pretty sad.\n\nIndeed, I'm definitely convinced that libpq costs are high and should be \nreduced where possible. 
Now, yet again, they are much smaller than the \ntime spent in the system to send and receive the data on a local socket, \nso somehow they could be interpreted as good enough, even if not that \ngood.\n\n>>> + 3.16% pgbench libc-2.28.so [.] __strcmp_avx2\n>>> + 2.95% pgbench libc-2.28.so [.] malloc\n>>> + 1.85% pgbench libc-2.28.so [.] ppoll\n>>> + 1.85% pgbench libc-2.28.so [.] __strlen_avx2\n>>> + 1.85% pgbench libpthread-2.28.so [.] __libc_recv\n>>\n>> ~ 11%, str is a pain… Not sure who is calling though, pgbench or\n>> libpq.\n>\n> Both. Most of the strcmp is from getQueryParams()/getVariable(). The\n> dopr() is from pg_*printf, which is mostly attributable to\n> preparedStatementName() and getVariable().\n\nHmmm. Frankly I can optimize pgbench code pretty easily, but I'm not sure \nof maintainability, and as I said many times, about the real effect it \nwould have, because these costs are a minor part of the client-side \nbenchmark cost.\n\n>> This is basically 47% pgbench, 53% lib*, on the sample provided. I'm unclear\n>> about where system time is measured.\n>\n> It was excluded in this profile, both to reduce profiling costs, and to\n> focus on pgbench.\n\nOk.\n\nIf we take my other figures and round up, for a running pgbench we have \n1/6 actual pgbench, 1/6 libpq, 2/3 system.\n\nIf I get a factor of 10 speedup in actual pgbench (let us assume I'm that \ngood:-), then the overall gain is 1/6 - 1/6/10 = 15%. 
Although I can do \nit, and it would be some fun, the code would get ugly (not too bad, but \nnevertheless probably less maintainable, with a partial typing phase and \nexpression compilation, and my bet is that, however good, the patch would be \nrejected).\n\nDo you see an error in my evaluation of pgbench actual costs and its \ncontribution to the overall performance of running a benchmark?\n\nIf yes, which is it?\n\nIf not, do you think it advisable to spend time improving the evaluator & \nvariable stuff and possibly other places for an overall 15% gain?\n\nAlso, what would be the likelihood of such an optimization patch passing?\n\nI could do a limited variable management improvement patch, eventually; I \nhave funny ideas to speed up the thing, some of which are outlined above, some \nothers even more terrible.\n\n-- \nFabien.",
"msg_date": "Sat, 3 Aug 2019 11:30:30 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
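The 1/6 - 1/6/10 = 15% estimate in the message above is Amdahl's law applied to the client-side breakdown (1/6 pgbench proper, 1/6 libpq, 2/3 system). A tiny sketch, included here only to make the arithmetic explicit:

```python
# Amdahl-style saving: share of total time removed by speeding up
# `fraction` of the work by a factor of `speedup`.

def overall_gain(fraction, speedup):
    return fraction - fraction / speedup

# A 10x speedup of the 1/6 spent in pgbench's own code...
gain = overall_gain(1 / 6, 10)
print(f"{gain:.0%}")   # 15%

# ...while halving it, as suggested for the earlier 10% share, saves 5%.
print(f"{overall_gain(0.10, 2):.0%}")   # 5%
```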
{
"msg_contents": "Hello Andres,\n\n> If not, do you think advisable to spend time improving the evaluator & \n> variable stuff and possibly other places for an overall 15% gain?\n>\n> Also, what would be the likelyhood of such optimization patch to pass?\n>\n> I could do a limited variable management improvement patch, eventually, I \n> have funny ideas to speedup the thing, some of which outlined above, some \n> others even more terrible.\n\nAttached is a quick PoC.\n\n sh> cat vars.sql\n \\set x 0\n \\set y :x\n \\set z :y\n \\set w :z\n \\set v :w\n \\set x :v\n \\set y :x\n \\set z :y\n \\set w :z\n \\set v :w\n \\set x :v\n \\set y :x\n \\set z :y\n \\set w :z\n \\set v :w\n \\set x1 :x\n \\set x2 :x\n \\set y1 :z\n \\set y0 :w\n \\set helloworld :x\n\nBefore the patch:\n\n sh> ./pgbench -T 10 -f vars.sql\n ...\n tps = 802966.183240 (excluding connections establishing) # 0.8M\n\nAfter the patch:\n\n sh> ./pgbench -T 10 -f vars.sql\n ...\n tps = 2665382.878271 (excluding connections establishing) # 2.6M\n\nWhich is a (somehow disappointing) * 3.3 speedup. The impact on the 3 \ncomplex expressions tests is not measurable, though.\n\nProbably variable management should be reworked more deeply to clean up the \ncode.\n\nQuestions:\n - how likely is such a patch to pass? (IMHO not likely)\n - what is its impact on overall performance when actual queries\n are performed? (IMHO very small)\n\n-- \nFabien.",
"msg_date": "Mon, 5 Aug 2019 17:38:23 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-05 17:38:23 +0200, Fabien COELHO wrote:\n> Which is a (somehow disappointing) * 3.3 speedup. The impact on the 3\n> complex expressions tests is not measurable, though.\n\nI don't know why that could be disappointing. We put in much more work\nfor much smaller gains in other places.\n\n\n> Probably variable management should be reworked more deeply to cleanup the\n> code.\n\nAgreed.\n\n\n> Questions:\n> - how likely is such a patch to pass? (IMHO not likely)\n\nI don't see why? I didn't review the patch in any detail, but it didn't\nlook crazy in a quick skim? Increasing how much load can be simulated\nusing pgbench is something I personally find much more interesting than\nadding capabilities that very few people will ever use.\n\nFWIW, the areas I find current pgbench \"most lacking\" during development\nwork are:\n\n1) Data load speed. The data creation is bottlenecked on fprintf in a\n single process. The index builds are done serially. The vacuum could\n be replaced by COPY FREEZE. For a lot of meaningful tests one needs\n 10-1000s of GB of testdata - creating that is pretty painful.\n\n2) Lack of proper initialization integration for custom\n scripts. I.e. have steps that are in the custom script that allow -i,\n vacuum, etc to be part of the script, rather than separately\n executable steps. --init-steps doesn't do anything for that.\n\n3) pgbench overhead, although that's to a significant degree libpq's fault\n\n4) Ability to cancel pgbench and get approximate results. That currently\n works if the server kicks out the clients, but not when interrupting\n pgbench - which is just plain weird. 
Obviously that doesn't matter\n for \"proper\" benchmark runs, but often during development, it's\n enough to run pgbench past some events (say the next checkpoint).\n\n\n> - what is its impact to overall performance when actual queries\n> are performed (IMHO very small).\n\nObviously not huge - I'd also not expect it to be unobservably small\neither. Especially if somebody went and fixed some of the inefficiency\nin libpq, but even without that. And even more so, if somebody revived\nthe libpq batch work + the relevant pgbench patch, because that removes\na lot of the system/kernel overhead, due to the crazy number of context\nswitches (obviously not realistic for all workloads, but e.g. for plenty of\njava workloads, it is), but leaves the same number of variable accesses\netc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Aug 2019 09:24:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "On Fri, Aug 2, 2019 at 2:38 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> Ok, one thread cannot feed an N core server if enough client are executed\n> per thread and the server has few things to do.\n\nRight ... where N is, uh, TWO.\n\n> The point I'm clumsily trying to make is that pgbench-specific overheads\n> are quite small: Any benchmark driver would have pretty much at least the\n> same costs, because you have the cpu cost of the tool itself, then the\n> library it uses, eg lib{pq,c}, then syscalls. Even if the first costs are\n> reduced to zero, you still have to deal with the database through the\n> system, and this part will be the same.\n\nI'm not convinced. Perhaps you're right; after all, it's not like\npgbench is doing any real work. On the other hand, I've repeatedly\nbeen annoyed by how inefficient pgbench is, so I'm not totally\nprepared to concede that any benchmark driver would have the same\ncosts, or that it's a reasonably well-optimized client application.\nWhen I run pgbench, I want to know how fast the server is, not how\nfast pgbench is.\n\n> What name would you suggest, if it were to be made available from pgbench\n> as a builtin, that avoids confusion with \"tpcb-like\"?\n\nI'm not in favor of adding it as a built-in. If we were going to do\nit, I don't know that we could do better than tpcb-like-2, and I'm not\nexcited about that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 5 Aug 2019 15:59:16 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "Hello Andres,\n\n>> Which is a (somehow disappointing) * 3.3 speedup. The impact on the 3\n>> complex expressions tests is not measurable, though.\n>\n> I don't know why that could be disappointing. We put in much more work\n> for much smaller gains in other places.\n\nProbably, but I thought I would have a better deal by eliminating most \nstring stuff from variables.\n\n>> Questions:\n>> - how likely is such a patch to pass? (IMHO not likely)\n>\n> I don't see why? I didn't review the patch in any detail, but it didn't\n> look crazy in quick skim? Increasing how much load can be simulated\n> using pgbench, is something I personally find much more interesting than\n> adding capabilities that very few people will ever use.\n\nYep, but my point is that the bottleneck is mostly libpq/system, as I \ntried to demonstrate with the few experiments I reported.\n\n> FWIW, the areas I find current pgbench \"most lacking\" during development\n> work are:\n>\n> 1) Data load speed. The data creation is bottlenecked on fprintf in a\n> single process.\n\nsnprintf actually, could be replaced.\n\nI submitted a patch to add more control on initialization, including a \nserver-side loading feature, i.e. the client does not send data, the \nserver generates its own, see 'G':\n\n \thttps://commitfest.postgresql.org/24/2086/\n\nHowever on my laptop it is slower than client-side loading on a local \nsocket. The client version is doing around 70 MB/s, the client load is \n20-30%, postgres load is 85%, but I'm not sure I can hope for much more on \nmy SSD. On my laptop the bottleneck is postgres/disk, not fprintf.\n\n> The index builds are done serially. 
The vacuum could be replaced by COPY \n> FREEZE.\n\nWell, it could be added?\n\n> For a lot of meaningful tests one needs 10-1000s of GB of testdata - \n> creating that is pretty painful.\n\nYep.\n\n> 2) Lack of proper initialization integration for custom\n> scripts.\n\nHmmm…\n\nYou can always write a psql script for schema and possibly simplistic data \ninitialization?\n\nHowever, generating meaningful pseudo-random data for an arbitrary schema \nis a pain. I did an external tool for that a few years ago:\n\n \thttp://www.coelho.net/datafiller.html\n\nbut it is still a pain.\n\n> I.e. have steps that are in the custom script that allow -i, vacuum, etc \n> to be part of the script, rather than separately executable steps. \n> --init-steps doesn't do anything for that.\n\nSure. It just gives some control.\n\n> 3) pgbench overhead, although that's to a significant degree libpq's fault\n\nI'm afraid that is currently the case.\n\n> 4) Ability to cancel pgbench and get approximate results. That currently\n> works if the server kicks out the clients, but not when interrupting\n> pgbench - which is just plain weird. Obviously that doesn't matter\n> for \"proper\" benchmark runs, but often during development, it's\n> enough to run pgbench past some events (say the next checkpoint).\n\nDo you mean have a report anyway on \"Ctrl-C\"?\n\nI usually do a -P 1 to see the progress, but making Ctrl-C work should be \nreasonably easy.\n\n>> - what is its impact to overall performance when actual queries\n>> are performed (IMHO very small).\n>\n> Obviously not huge - I'd also not expect it to be unobservably small\n> either.\n\nHmmm… Indeed, the 20 \\set script runs at 2.6 M/s, that is 0.019 µs per \n\\set, and any discussion over the connection is at least 15 µs (for one \nclient on a local socket).\n\n-- \nFabien.",
"msg_date": "Mon, 5 Aug 2019 22:45:53 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
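The Ctrl-C behaviour discussed above (stop cleanly and still report approximate results, instead of discarding them) boils down to trapping SIGINT and letting the benchmark loop wind down. A minimal sketch of the idea, not pgbench's actual code:

```python
# Trap SIGINT so an interrupted run still reports the partial statistics
# gathered so far, rather than dying mid-run with nothing.

import signal

stop = False

def on_sigint(signum, frame):
    global stop
    stop = True              # ask the loop to wind down; don't just die

signal.signal(signal.SIGINT, on_sigint)

tx_done = 0
while not stop and tx_done < 1000:
    tx_done += 1             # stand-in for executing one transaction
    if tx_done == 10:        # simulate the user hitting Ctrl-C mid-run
        signal.raise_signal(signal.SIGINT)

print(f"interrupted: approximate results based on {tx_done} transactions")
```

In pgbench itself the handler would additionally have to wake the per-thread ppoll() loops, but the shape is the same.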
{
"msg_contents": "Hello Robert,\n\n>> Ok, one thread cannot feed an N core server if enough client are executed\n>> per thread and the server has few things to do.\n>\n> Right ... where N is, uh, TWO.\n\nYes, two indeed… For low-work cpu-bound load, given libpq & system \noverheads, you cannot really hope for a better deal.\n\nI think that the documentation could be clearer about thread/core \nrecommendations, i.e. how many resources you should allocate to pgbench\nso that the server is more likely to be the bottleneck, in the \"Good \nPractices\" section.\n\n>> The point I'm clumsily trying to make is that pgbench-specific overheads\n>> are quite small: Any benchmark driver would have pretty much at least the\n>> same costs, because you have the cpu cost of the tool itself, then the\n>> library it uses, eg lib{pq,c}, then syscalls. Even if the first costs are\n>> reduced to zero, you still have to deal with the database through the\n>> system, and this part will be the same.\n>\n> I'm not convinced. Perhaps you're right; after all, it's not like\n> pgbench is doing any real work.\n\nYep, pgbench is not doing much beyond going from one libpq call to the \nnext. It can be improved, but the overhead is already reasonably low.\n\n> On the other hand, I've repeatedly been annoyed by how inefficient \n> pgbench is, so I'm not totally prepared to concede that any benchmark \n> driver would have the same costs, or that it's a reasonably \n> well-optimized client application. When I run the pgbench, I want to \n> know how fast the server is, not how fast pgbench is.\n\nHmmm. You cannot fully isolate components: you can only basically learn \nhow fast the server is when accessed through a libpq connection, which is \nquite reasonable because this is what a user can expect anyway.\n\n-- \nFabien.",
"msg_date": "Tue, 6 Aug 2019 08:58:45 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "> On Mon, Aug 5, 2019 at 10:46 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n> > The index builds are done serially. The vacuum could be replaced by COPY\n> > FREEZE.\n>\n> Well, it could be added?\n\nWhile doing benchmarking using different tools, including pgbench, I found it\nuseful as a temporary hack to add copy freeze and maintenance_work_mem options\n(the last one not as an env variable, just as a set before, although not sure\nif it's the best idea). Is it similar to what you were talking about?",
"msg_date": "Tue, 27 Aug 2019 17:26:49 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "Hello Dmitry,\n\n>> Well, it could be added?\n>\n> While doing benchmarking using different tools, including pgbench, I found it\n> useful as a temporary hack to add copy freeze and maintenance_work_mem options\n> (the last one not as an env variable, just as a set before, although not sure\n> if it's a best idea). Is it similar to what you were talking about?\n\nAbout this patch:\n\nConcerning the --maintenance... option, ISTM that there could rather be a \ngeneric way to provide \"set\" settings, not a specific option for a \nspecific parameter with a specific unit. Moreover, ISTM that it only needs \nto be set once on a connection, not per command. I'd suggest something \nlike:\n\n --connection-initialization '...'\n\nThat would be issued when a connection is started, before any query, and the \neffect would be achieved with:\n\n pgbench --conn…-init… \"SET maintenance_work_mem TO '12MB'\" ...\n\nThe --help does not say that the option expects a parameter.\n\nAlso, in your patch it is an initialization option, but the code does not \ncheck for that.\n\nConcerning the freeze option:\n\nIt is also an initialization-specific option that should be checked for \nthat.\n\nThe option does not make sense if\n\nThe alternative queries could be managed simply without intermediate \nvariables.\n\nPgbench documentation is not updated.\n\nThere are no tests.\n\nThis patch should be submitted in its own thread to help manage it in the \nCF app.\n\n-- \nFabien.",
"msg_date": "Wed, 28 Aug 2019 07:36:45 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
},
{
"msg_contents": "> On Wed, Aug 28, 2019 at 7:37 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n> > While doing benchmarking using different tools, including pgbench, I found it\n> > useful as a temporary hack to add copy freeze and maintenance_work_mem options\n> > (the last one not as an env variable, just as a set before, although not sure\n> > if it's a best idea). Is it similar to what you were talking about?\n>\n> About this patch:\n>\n> Concerning the --maintenance... option, ISTM that there could rather be a\n> generic way to provide \"set\" settings, not a specific option for a\n> specific parameter with a specific unit. Moreover, ISTM that it only needs\n> to be set once on a connection, not per command. I'd suggest something\n> like:\n>\n> --connection-initialization '...'\n>\n> That would be issue when a connection is started, for any query, then the\n> effect would be achieved with:\n>\n> pgbench --conn…-init… \"SET maintenance_work_main TO '12MB'\" ...\n>\n> The --help does not say that the option expects a parameter.\n>\n> Also, in you patch it is a initialization option, but the code does not\n> check for that.\n>\n> Concerning the freeze option:\n>\n> It is also a initialization-specific option that should be checked for\n> that.\n>\n> The option does not make sense if\n>\n> The alternative queries could be managed simply without intermediate\n> variables.\n>\n> Pgbench documentation is not updated.\n>\n> There are no tests.\n>\n> This patch should be submitted in its own thread to help manage it in the\n> CF app.\n\nThanks, that was a pretty deep answer for what was supposed to be just an\nalignment question :) But sure, I can prepare a proper version and post it\nseparately.\n\n\n",
"msg_date": "Wed, 28 Aug 2019 10:23:47 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - implement strict TPC-B benchmark"
}
]
[
{
"msg_contents": "Hi,\n\nThe following case\n\n-- test.sql --\nCREATE TABLE test (a text PRIMARY KEY, b text) PARTITION BY HASH (a);\nCREATE TABLE test_p0 PARTITION OF test FOR VALUES WITH (MODULUS 2, \nREMAINDER 0);\nCREATE TABLE test_p1 PARTITION OF test FOR VALUES WITH (MODULUS 2, \nREMAINDER 1);\n-- CREATE INDEX idx_test_b ON test USING HASH (b);\n\nINSERT INTO test VALUES ('aaaa', 'aaaa');\n\n-- Regression\nUPDATE test SET b = 'bbbb' WHERE a = 'aaaa';\n-- test.sql --\n\nfails on master, which includes [1], with\n\n\npsql:test.sql:9: ERROR: could not determine which collation to use for \nstring hashing\nHINT: Use the COLLATE clause to set the collation explicitly.\n\n\nIt passes on 11.x.\n\nI'll add it to the open items list.\n\n[1] \nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=5e1963fb764e9cc092e0f7b58b28985c311431d9\n\nBest regards,\n Jesper\n\n\n",
"msg_date": "Mon, 8 Apr 2019 12:33:55 -0400",
"msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>",
"msg_from_op": true,
"msg_subject": "COLLATE: Hash partition vs UPDATE"
},
{
"msg_contents": "Hi Jesper,\n\nOn 2019/04/09 1:33, Jesper Pedersen wrote:\n> Hi,\n> \n> The following case\n> \n> -- test.sql --\n> CREATE TABLE test (a text PRIMARY KEY, b text) PARTITION BY HASH (a);\n> CREATE TABLE test_p0 PARTITION OF test FOR VALUES WITH (MODULUS 2,\n> REMAINDER 0);\n> CREATE TABLE test_p1 PARTITION OF test FOR VALUES WITH (MODULUS 2,\n> REMAINDER 1);\n> -- CREATE INDEX idx_test_b ON test USING HASH (b);\n> \n> INSERT INTO test VALUES ('aaaa', 'aaaa');\n> \n> -- Regression\n> UPDATE test SET b = 'bbbb' WHERE a = 'aaaa';\n> -- test.sql --\n> \n> fails on master, which includes [1], with\n> \n> \n> psql:test.sql:9: ERROR: could not determine which collation to use for\n> string hashing\n> HINT: Use the COLLATE clause to set the collation explicitly.\n> \n> \n> It passes on 11.x.\n\nThanks for the report.\n\nThis seems to broken since the following commit (I see you already cc'd\nPeter):\n\ncommit 5e1963fb764e9cc092e0f7b58b28985c311431d9\nAuthor: Peter Eisentraut <peter@eisentraut.org>\nDate: Fri Mar 22 12:09:32 2019 +0100\n\n Collations with nondeterministic comparison\n\n\nAs of this commit, hashing functions hashtext() and hashtextextended()\nrequire a valid collation to be passed in. ISTM,\nsatisfies_hash_partition() that's called by hash partition constraint\nchecking should have been changed to use FunctionCall2Coll() interface to\naccount for the requirements of the above commit. I see that it did that\nfor compute_partition_hash_value(), which is used by hash partition tuple\nrouting. That also seems to be covered by regression tests, but there are\nno tests that cover satisfies_hash_partition().\n\nAttached patch is an attempt to fix this. I've also added Amul Sul who\ncan maybe comment on the satisfies_hash_partition() changes.\n\nBTW, it seems we don't need to back-patch this to PG 11 which introduced\nhash partitioning, because text hashing functions don't need collation\nthere, right?\n\nThanks,\nAmit",
"msg_date": "Tue, 9 Apr 2019 12:18:55 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: COLLATE: Hash partition vs UPDATE"
},
{
"msg_contents": "Hi Amit,\n\nOn 4/8/19 11:18 PM, Amit Langote wrote:\n> As of this commit, hashing functions hashtext() and hashtextextended()\n> require a valid collation to be passed in. ISTM,\n> satisfies_hash_partition() that's called by hash partition constraint\n> checking should have been changed to use FunctionCall2Coll() interface to\n> account for the requirements of the above commit. I see that it did that\n> for compute_partition_hash_value(), which is used by hash partition tuple\n> routing. That also seems to be covered by regression tests, but there are\n> no tests that cover satisfies_hash_partition().\n> \n> Attached patch is an attempt to fix this. I've also added Amul Sul who\n> can maybe comment on the satisfies_hash_partition() changes.\n> \n\nYeah, that works here - apart from an issue with the test case; fixed in \nthe attached.\n\nBest regards,\n Jesper",
"msg_date": "Tue, 9 Apr 2019 08:43:57 -0400",
"msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>",
"msg_from_op": true,
"msg_subject": "Re: COLLATE: Hash partition vs UPDATE"
},
{
"msg_contents": "On Tue, Apr 9, 2019 at 9:44 PM Jesper Pedersen\n<jesper.pedersen@redhat.com> wrote:\n>\n> Hi Amit,\n>\n> On 4/8/19 11:18 PM, Amit Langote wrote:\n> > As of this commit, hashing functions hashtext() and hashtextextended()\n> > require a valid collation to be passed in. ISTM,\n> > satisfies_hash_partition() that's called by hash partition constraint\n> > checking should have been changed to use FunctionCall2Coll() interface to\n> > account for the requirements of the above commit. I see that it did that\n> > for compute_partition_hash_value(), which is used by hash partition tuple\n> > routing. That also seems to be covered by regression tests, but there are\n> > no tests that cover satisfies_hash_partition().\n> >\n> > Attached patch is an attempt to fix this. I've also added Amul Sul who\n> > can maybe comment on the satisfies_hash_partition() changes.\n> >\n>\n> Yeah, that works here - apart from an issue with the test case; fixed in\n> the attached.\n\nAh, crap. Last minute changes are bad.\n\nThanks for fixing.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 9 Apr 2019 21:58:20 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COLLATE: Hash partition vs UPDATE"
},
{
"msg_contents": "Jesper Pedersen <jesper.pedersen@redhat.com> writes:\n> Yeah, that works here - apart from an issue with the test case; fixed in \n> the attached.\n\nCouple issues spotted in an eyeball review of that:\n\n* There is code that supposes that partsupfunc[] is the last\nfield of ColumnsHashData, eg\n\n fcinfo->flinfo->fn_extra =\n MemoryContextAllocZero(fcinfo->flinfo->fn_mcxt,\n offsetof(ColumnsHashData, partsupfunc) +\n sizeof(FmgrInfo) * nargs);\n\nI'm a bit surprised that this patch manages to run without crashing,\nbecause this would certainly not allocate space for partcollid[].\n\nI think we would likely be well advised to do\n\n-\t\tFmgrInfo\tpartsupfunc[PARTITION_MAX_KEYS];\n+\t\tFmgrInfo\tpartsupfunc[FLEXIBLE_ARRAY_MEMBER];\n\nto make it more obvious that that has to be the last field. Or else\ndrop the cuteness with variable-size allocations of ColumnsHashData.\nFmgrInfo is only 48 bytes, I'm not really sure that it's worth the\nrisk of bugs to \"optimize\" this.\n\n* I see collation-less calls of the partsupfunc at both partbounds.c:2931\nand partbounds.c:2970, but this patch touches only the first one. How\ncan that be right?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 14 Apr 2019 16:50:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: COLLATE: Hash partition vs UPDATE"
},
{
"msg_contents": "Thanks for the review.\n\nOn 2019/04/15 5:50, Tom Lane wrote:\n> Jesper Pedersen <jesper.pedersen@redhat.com> writes:\n>> Yeah, that works here - apart from an issue with the test case; fixed in \n>> the attached.\n> \n> Couple issues spotted in an eyeball review of that:\n> \n> * There is code that supposes that partsupfunc[] is the last\n> field of ColumnsHashData, eg\n> \n> fcinfo->flinfo->fn_extra =\n> MemoryContextAllocZero(fcinfo->flinfo->fn_mcxt,\n> offsetof(ColumnsHashData, partsupfunc) +\n> sizeof(FmgrInfo) * nargs);\n> \n> I'm a bit surprised that this patch manages to run without crashing,\n> because this would certainly not allocate space for partcollid[].\n> \n> I think we would likely be well advised to do\n> \n> -\t\tFmgrInfo\tpartsupfunc[PARTITION_MAX_KEYS];\n> +\t\tFmgrInfo\tpartsupfunc[FLEXIBLE_ARRAY_MEMBER];\n\nI went with this:\n\n- FmgrInfo partsupfunc[PARTITION_MAX_KEYS];\n Oid partcollid[PARTITION_MAX_KEYS];\n+ FmgrInfo partsupfunc[FLEXIBLE_ARRAY_MEMBER];\n\n> to make it more obvious that that has to be the last field. Or else\n> drop the cuteness with variable-size allocations of ColumnsHashData.\n> FmgrInfo is only 48 bytes, I'm not really sure that it's worth the\n> risk of bugs to \"optimize\" this.\n\nI wonder if workloads on hash partitioned tables that require calling\nsatisfies_hash_partition repeatedly may not be as common as thought when\nwriting this code? The only case I see where it's being repeatedly called\nis bulk inserts into a hash-partitioned table, that too, only if BR\ntriggers on partitions necessitate rechecking the partition constraint.\n\n> * I see collation-less calls of the partsupfunc at both partbounds.c:2931\n> and partbounds.c:2970, but this patch touches only the first one. How\n> can that be right?\n\nOops, that's wrong.\n\nAttached updated patch.\n\nThanks,\nAmit",
"msg_date": "Mon, 15 Apr 2019 15:22:08 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: COLLATE: Hash partition vs UPDATE"
},
{
"msg_contents": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> Attached updated patch.\n\nLGTM, pushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Apr 2019 16:47:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: COLLATE: Hash partition vs UPDATE"
},
{
"msg_contents": "On 2019/04/16 5:47, Tom Lane wrote:\n> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n>> Attached updated patch.\n> \n> LGTM, pushed.\n\nThank you.\n\nRegards,\nAmit\n\n\n\n\n",
"msg_date": "Tue, 16 Apr 2019 09:17:05 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: COLLATE: Hash partition vs UPDATE"
}
]
[
{
"msg_contents": "Hello devs,\n\nA long time ago I submitted a pgbench \\into command to store results of \nqueries into variables independently of the query being processed, which \ngot turn into \\gset (;) and \\cset (\\;), which got committed, then \\cset \nwas removed because it was not \"up to standard\", as it could not work with \nempty query (the underlying issue is that pg silently skips empty queries, \nso that \"\\; SELECT 1 \\; \\; SELECT 3,\" returns 2 results instead of 4, a \nmisplaced optimisation from my point of view).\n\nNow there is a pgbench \\gset which allows to extract the results of \nvariables of the last query, but as it does both setting and ending a \nquery at the same time, there is no way to set variables out of a combined \n(\\;) query but the last, which is the kind of non orthogonal behavior that \nI dislike much.\n\nThis annoys me because testing the performance of combined queries cannot \nbe tested if the script needs to extract variables.\n\nTo make the feature somehow accessible to combined queries, the attached \npatch adds the \"\\aset\" (all set) command to store all results of queries \nwhich return just one row into variables, i.e.:\n\n SELECT 1 AS one \\;\n SELECT 2 AS two UNION SELECT 2 \\;\n SELECT 3 AS three \\aset\n\nwill set both \"one\" and \"three\", while \"two\" is not set because there were \ntwo rows. It is a kind of more permissive \\gset.\n\nBecause it does it for all queries, there is no need for synchronizing \nwith the underlying queries, which made the code for \\cset both awkward \nand with limitations. Hopefully this version might be \"up to standard\".\nI'll see. I'm in no hurry:-)\n\n-- \nFabien.",
"msg_date": "Mon, 8 Apr 2019 19:32:51 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "pgbench - add \\aset to store results of a combined query"
},
{
"msg_contents": "V2 is a rebase.\n\n> A long time ago I submitted a pgbench \\into command to store results of \n> queries into variables independently of the query being processed, which got \n> turn into \\gset (;) and \\cset (\\;), which got committed, then \\cset was \n> removed because it was not \"up to standard\", as it could not work with empty \n> query (the underlying issue is that pg silently skips empty queries, so that \n> \"\\; SELECT 1 \\; \\; SELECT 3,\" returns 2 results instead of 4, a misplaced \n> optimisation from my point of view).\n>\n> Now there is a pgbench \\gset which allows to extract the results of variables \n> of the last query, but as it does both setting and ending a query at the same \n> time, there is no way to set variables out of a combined (\\;) query but the \n> last, which is the kind of non orthogonal behavior that I dislike much.\n>\n> This annoys me because testing the performance of combined queries cannot be \n> tested if the script needs to extract variables.\n>\n> To make the feature somehow accessible to combined queries, the attached \n> patch adds the \"\\aset\" (all set) command to store all results of queries \n> which return just one row into variables, i.e.:\n>\n> SELECT 1 AS one \\;\n> SELECT 2 AS two UNION SELECT 2 \\;\n> SELECT 3 AS three \\aset\n>\n> will set both \"one\" and \"three\", while \"two\" is not set because there were \n> two rows. It is a kind of more permissive \\gset.\n>\n> Because it does it for all queries, there is no need for synchronizing with \n> the underlying queries, which made the code for \\cset both awkward and with \n> limitations. Hopefully this version might be \"up to standard\".\n> I'll see. I'm in no hurry:-)\n>\n>\n\n-- \nFabien.",
"msg_date": "Thu, 23 May 2019 16:10:55 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - add \\aset to store results of a combined query"
},
{
"msg_contents": "> SELECT 1 AS one \\;\r\n> SELECT 2 AS two UNION SELECT 2 \\;\r\n> SELECT 3 AS three \\aset\r\n>\r\n> will set both \"one\" and \"three\", while \"two\" is not set because there were \r\n> two rows. It is a kind of more permissive \\gset.\r\n\r\nAre you sure two is not set :)? \r\nSELECT 2 AS two UNION SELECT 2; -- only returns one row.\r\nbut\r\nSELECT 2 AS two UNION SELECT 10; -- returns the two rows.\r\n\r\n\r\nIs this the expected behavior with \\aset? In my opinion throwing a valid error like \"client 0 script 0 command 0 query 0: expected one row, got 2\" make more sense.\r\n\r\n\r\n - With \\gset \r\n\r\nSELECT 2 AS two UNION SELECT 10 \\gset\r\nINSERT INTO test VALUES(:two,0,0);\r\n\r\n$ pgbench postgres -f pgbench_aset.sql -T 1 -j 1 -c 1 -s 10\r\nstarting vacuum...end.\r\nclient 0 script 0 command 0 query 0: expected one row, got 2\r\ntransaction type: pgbench_aset.sql\r\nscaling factor: 10\r\nquery mode: simple\r\nnumber of clients: 1\r\nnumber of threads: 1\r\nduration: 1 s\r\nnumber of transactions actually processed: 0\r\nRun was aborted; the above results are incomplete.\r\n\r\n\r\n- With \\aset\r\n\r\nSELECT 2 AS two UNION SELECT 10 \\aset\r\nINSERT INTO test VALUES(:two,0,0);\r\n\r\nvagrant@vagrant:~/percona/postgresql$ pgbench postgres -f pgbench_aset.sql -T 1 -j 1 -c 1 -s 10\r\nstarting vacuum...end.\r\nclient 0 script 0 aborted in command 1 query 0: ERROR: syntax error at or near \":\"\r\nLINE 1: INSERT INTO test VALUES(:two,0,0);\r\n ^\r\ntransaction type: pgbench_aset.sql\r\nscaling factor: 10\r\nquery mode: simple\r\nnumber of clients: 1\r\nnumber of threads: 1\r\nduration: 1 s\r\nnumber of transactions actually processed: 0\r\nRun was aborted; the above results are incomplete.\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Tue, 09 Jul 2019 03:29:31 +0000",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add \\aset to store results of a combined query"
},
{
"msg_contents": "Hello Ibrar,\n\n>> SELECT 1 AS one \\;\n>> SELECT 2 AS two UNION SELECT 2 \\;\n>> SELECT 3 AS three \\aset\n>>\n>> will set both \"one\" and \"three\", while \"two\" is not set because there were\n>> two rows. It is a kind of more permissive \\gset.\n>\n> Are you sure two is not set :)?\n>\n> SELECT 2 AS two UNION SELECT 2; -- only returns one row.\n> but\n> SELECT 2 AS two UNION SELECT 10; -- returns the two rows.\n\nIndeed, my intension was to show an example like the second.\n\n> Is this the expected behavior with \\aset?\n\n> In my opinion throwing a valid error like \"client 0 script 0 command 0 \n> query 0: expected one row, got 2\" make more sense.\n\nHmmm. My intention with \\aset is really NOT to throw an error. With \npgbench, the existence of the variable can be tested later to know whether \nit was assigned or not, eg:\n\n SELECT 1 AS x \\;\n -- 2 rows, no assignment\n SELECT 'calvin' AS firstname UNION SELECT 'hobbes' \\;\n SELECT 2 AS z \\aset\n -- next test is false\n \\if :{?firstname}\n ...\n \\endif\n\nThe rational is that one may want to benefit from combined queries (\\;)\nwhich result in less communication thus has lower latency, but still be \ninterested in extracting some results.\n\nThe question is what to do if the query returns 0 or >1 rows. If an error \nis raised, the construct cannot be used for testing whether there is one \nresult or not, eg for a query returning 0 or 1 row, you could not write:\n\n \\set id random(1, :number_of_users)\n SELECT firtname AS fn FROM user WHERE id = :id \\aset\n \\if :{?fn}\n -- the user exists, proceed with further queries\n \\else\n -- no user, maybe it was removed, it is not an error\n \\endif\n\nAnother option would to just assign the value so that\n - on 0 row no assignment is made, and it can be tested afterwards.\n - on >1 rows the last (first?) value is kept. 
I took last so to\n ensure that all results are received.\n\nI think that having some permissive behavior allows to write some more \ninteresting test scripts that use combined queries and extract values.\n\nWhat do you think?\n\n> - With \\gset\n>\n> SELECT 2 AS two UNION SELECT 10 \\gset\n> INSERT INTO test VALUES(:two,0,0);\n>\n> client 0 script 0 command 0 query 0: expected one row, got 2\n> Run was aborted; the above results are incomplete.\n\nYes, that is the intented behavior.\n\n> - With \\aset\n>\n> SELECT 2 AS two UNION SELECT 10 \\aset\n> INSERT INTO test VALUES(:two,0,0);\n> [...]\n> client 0 script 0 aborted in command 1 query 0: ERROR: syntax error at or near \":\"\n\nIndeed, the user should test whether the variable was assigned before \nusing it if the result is not warranted to return one row.\n\n> The new status of this patch is: Waiting on Author\n\nThe attached patch implements the altered behavior described above.\n\n-- \nFabien.",
"msg_date": "Wed, 10 Jul 2019 08:32:56 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - add \\aset to store results of a combined query"
},
{
"msg_contents": "On Wed, Jul 10, 2019 at 11:33 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n>\n> Hello Ibrar,\n>\n> >> SELECT 1 AS one \\;\n> >> SELECT 2 AS two UNION SELECT 2 \\;\n> >> SELECT 3 AS three \\aset\n> >>\n> >> will set both \"one\" and \"three\", while \"two\" is not set because there\n> were\n> >> two rows. It is a kind of more permissive \\gset.\n> >\n> > Are you sure two is not set :)?\n> >\n> > SELECT 2 AS two UNION SELECT 2; -- only returns one row.\n> > but\n> > SELECT 2 AS two UNION SELECT 10; -- returns the two rows.\n>\n> Indeed, my intension was to show an example like the second.\n>\n> > Is this the expected behavior with \\aset?\n>\n> > In my opinion throwing a valid error like \"client 0 script 0 command 0\n> > query 0: expected one row, got 2\" make more sense.\n>\n> Hmmm. My intention with \\aset is really NOT to throw an error. With\n> pgbench, the existence of the variable can be tested later to know whether\n> it was assigned or not, eg:\n>\n> SELECT 1 AS x \\;\n> -- 2 rows, no assignment\n> SELECT 'calvin' AS firstname UNION SELECT 'hobbes' \\;\n> SELECT 2 AS z \\aset\n> -- next test is false\n> \\if :{?firstname}\n> ...\n> \\endif\n>\n> The rational is that one may want to benefit from combined queries (\\;)\n> which result in less communication thus has lower latency, but still be\n> interested in extracting some results.\n>\n> The question is what to do if the query returns 0 or >1 rows. 
If an error\n> is raised, the construct cannot be used for testing whether there is one\n> result or not, eg for a query returning 0 or 1 row, you could not write:\n>\n> \\set id random(1, :number_of_users)\n> SELECT firtname AS fn FROM user WHERE id = :id \\aset\n> \\if :{?fn}\n> -- the user exists, proceed with further queries\n> \\else\n> -- no user, maybe it was removed, it is not an error\n> \\endif\n>\n> Another option would to just assign the value so that\n> - on 0 row no assignment is made, and it can be tested afterwards.\n> - on >1 rows the last (first?) value is kept. I took last so to\n> ensure that all results are received.\n>\n> I think that having some permissive behavior allows to write some more\n> interesting test scripts that use combined queries and extract values.\n>\n> What do you think?\n>\n> Yes, I think that make more sense.\n\n> > - With \\gset\n> >\n> > SELECT 2 AS two UNION SELECT 10 \\gset\n> > INSERT INTO test VALUES(:two,0,0);\n> >\n> > client 0 script 0 command 0 query 0: expected one row, got 2\n> > Run was aborted; the above results are incomplete.\n>\n> Yes, that is the intented behavior.\n>\n> > - With \\aset\n> >\n> > SELECT 2 AS two UNION SELECT 10 \\aset\n> > INSERT INTO test VALUES(:two,0,0);\n> > [...]\n> > client 0 script 0 aborted in command 1 query 0: ERROR: syntax error at\n> or near \":\"\n>\n> Indeed, the user should test whether the variable was assigned before\n> using it if the result is not warranted to return one row.\n>\n> > The new status of this patch is: Waiting on Author\n>\n> The attached patch implements the altered behavior described above.\n>\n> --\n> Fabien.\n\n\n\n-- \nIbrar Ahmed\n\nOn Wed, Jul 10, 2019 at 11:33 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\nHello Ibrar,\n\n>> SELECT 1 AS one \\;\n>> SELECT 2 AS two UNION SELECT 2 \\;\n>> SELECT 3 AS three \\aset\n>>\n>> will set both \"one\" and \"three\", while \"two\" is not set because there were\n>> two rows. 
It is a kind of more permissive \\gset.\n>\n> Are you sure two is not set :)?\n>\n> SELECT 2 AS two UNION SELECT 2; -- only returns one row.\n> but\n> SELECT 2 AS two UNION SELECT 10; -- returns the two rows.\n\nIndeed, my intension was to show an example like the second.\n\n> Is this the expected behavior with \\aset?\n\n> In my opinion throwing a valid error like \"client 0 script 0 command 0 \n> query 0: expected one row, got 2\" make more sense.\n\nHmmm. My intention with \\aset is really NOT to throw an error. With \npgbench, the existence of the variable can be tested later to know whether \nit was assigned or not, eg:\n\n SELECT 1 AS x \\;\n -- 2 rows, no assignment\n SELECT 'calvin' AS firstname UNION SELECT 'hobbes' \\;\n SELECT 2 AS z \\aset\n -- next test is false\n \\if :{?firstname}\n ...\n \\endif\n\nThe rational is that one may want to benefit from combined queries (\\;)\nwhich result in less communication thus has lower latency, but still be \ninterested in extracting some results.\n\nThe question is what to do if the query returns 0 or >1 rows. If an error \nis raised, the construct cannot be used for testing whether there is one \nresult or not, eg for a query returning 0 or 1 row, you could not write:\n\n \\set id random(1, :number_of_users)\n SELECT firtname AS fn FROM user WHERE id = :id \\aset\n \\if :{?fn}\n -- the user exists, proceed with further queries\n \\else\n -- no user, maybe it was removed, it is not an error\n \\endif\n\nAnother option would to just assign the value so that\n - on 0 row no assignment is made, and it can be tested afterwards.\n - on >1 rows the last (first?) value is kept. I took last so to\n ensure that all results are received.\n\nI think that having some permissive behavior allows to write some more \ninteresting test scripts that use combined queries and extract values.\n\nWhat do you think?\nYes, I think that make more sense. 
\n> - With \\gset\n>\n> SELECT 2 AS two UNION SELECT 10 \\gset\n> INSERT INTO test VALUES(:two,0,0);\n>\n> client 0 script 0 command 0 query 0: expected one row, got 2\n> Run was aborted; the above results are incomplete.\n\nYes, that is the intented behavior.\n\n> - With \\aset\n>\n> SELECT 2 AS two UNION SELECT 10 \\aset\n> INSERT INTO test VALUES(:two,0,0);\n> [...]\n> client 0 script 0 aborted in command 1 query 0: ERROR: syntax error at or near \":\"\n\nIndeed, the user should test whether the variable was assigned before \nusing it if the result is not warranted to return one row.\n\n> The new status of this patch is: Waiting on Author\n\nThe attached patch implements the altered behavior described above.\n\n-- \nFabien.-- Ibrar Ahmed",
"msg_date": "Thu, 15 Aug 2019 23:21:41 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add \\aset to store results of a combined query"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nThe patch passed my review, I have not reviewed the documentation changes.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Thu, 15 Aug 2019 18:30:13 +0000",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add \\aset to store results of a combined query"
},
{
"msg_contents": "On Thu, Aug 15, 2019 at 06:30:13PM +0000, Ibrar Ahmed wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: not tested\n> \n> The patch passed my review, I have not reviewed the documentation changes.\n> \n> The new status of this patch is: Ready for Committer\n\n@@ -524,6 +526,7 @@ typedef struct Command\n int argc;\n char *argv[MAX_ARGS];\n char *varprefix;\n+ bool aset;\n\nIt seems to me that there is no point to have the variable aset in\nCommand because this structure includes already MetaCommand, so the \ninformation is duplicated. And I would suggest to change\nreadCommandResponse() to use a MetaCommand in argument. Perhaps I am\nmissing something?\n--\nMichael",
"msg_date": "Fri, 29 Nov 2019 14:56:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add \\aset to store results of a combined query"
},
{
"msg_contents": "Michaᅵl,\n\n> + bool aset;\n>\n> It seems to me that there is no point to have the variable aset in \n> Command because this structure includes already MetaCommand, so the \n> information is duplicated. [...] Perhaps I am missing something?\n\nYep. ISTM that you are missing that aset is not an independent meta \ncommand like most others but really changes the state of the previous SQL \ncommand, so that it needs to be stored into that with some additional \nfields. This is the same with \"gset\" which is tagged by a non-null \n\"varprefix\".\n\nSo I cannot remove the \"aset\" field.\n\n> And I would suggest to change readCommandResponse() to use a MetaCommand \n> in argument.\n\nMetaCommand is not enough: we need varprefix, and then distinguishing \nbetween aset and gset. Although this last point can be done with a \nMetaCommand, ISTM that a bool is_aset is clear and good enough. It is \npossible to switch if you insist on it, but I do not think it is \ndesirable.\n\nAttached v4 removes an unwanted rebased comment duplication and does minor \nchanges while re-reading the code.\n\n-- \nFabien.",
"msg_date": "Fri, 29 Nov 2019 10:34:05 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - add \\aset to store results of a combined query"
},
{
"msg_contents": "On 11/29/19 4:34 AM, Fabien COELHO wrote:\n>>\n>> It seems to me that there is no point to have the variable aset in \n>> Command because this structure includes already MetaCommand, so the \n>> information is duplicated. [...] Perhaps I am missing something?\n> \n> Yep. ISTM that you are missing that aset is not an independent meta \n> command like most others but really changes the state of the previous \n> SQL command, so that it needs to be stored into that with some \n> additional fields. This is the same with \"gset\" which is tagged by a \n> non-null \"varprefix\".\n> \n> So I cannot remove the \"aset\" field.\n> \n>> And I would suggest to change readCommandResponse() to use a \n>> MetaCommand in argument.\n> \n> MetaCommand is not enough: we need varprefix, and then distinguishing \n> between aset and gset. Although this last point can be done with a \n> MetaCommand, ISTM that a bool is_aset is clear and good enough. It is \n> possible to switch if you insist on it, but I do not think it is desirable.\n\nMichael, do you agree with Fabien's comments?\n\n> Attached v4 removes an unwanted rebased comment duplication and does \n> minor changes while re-reading the code.\n\nThis patch no longer applies: http://cfbot.cputube.org/patch_27_2091.log\n\nCF entry has been updated to Waiting on Author.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 24 Mar 2020 11:04:45 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add \\aset to store results of a combined query"
},
{
"msg_contents": "On Tue, Mar 24, 2020 at 11:04:45AM -0400, David Steele wrote:\n> On 11/29/19 4:34 AM, Fabien COELHO wrote:\n>> MetaCommand is not enough: we need varprefix, and then distinguishing\n>> between aset and gset. Although this last point can be done with a\n>> MetaCommand, ISTM that a bool is_aset is clear and good enough. It is\n>> possible to switch if you insist on it, but I do not think it is\n>> desirable.\n> \n> Michael, do you agree with Fabien's comments?\n\nThanks for the reminder. I am following up with Fabien's comments.\n--\nMichael",
"msg_date": "Thu, 26 Mar 2020 13:00:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add \\aset to store results of a combined query"
},
{
"msg_contents": "On Fri, Nov 29, 2019 at 10:34:05AM +0100, Fabien COELHO wrote:\n>> It seems to me that there is no point to have the variable aset in\n>> Command because this structure includes already MetaCommand, so the\n>> information is duplicated. [...] Perhaps I am missing something?\n> \n> Yep. ISTM that you are missing that aset is not an independent meta command\n> like most others but really changes the state of the previous SQL command,\n> so that it needs to be stored into that with some additional fields. This is\n> the same with \"gset\" which is tagged by a non-null \"varprefix\".\n> \n> So I cannot remove the \"aset\" field.\n\nStill sounds strange to me to invent a new variable to this structure\nif it is possible to track the exact same thing with an existing part\nof a Command, or it would make sense to split Command into two\ndifferent structures with an extra structure used after the parsing\nfor clarity?\n\n>> And I would suggest to change readCommandResponse() to use a MetaCommand\n>> in argument.\n> \n> MetaCommand is not enough: we need varprefix, and then distinguishing\n> between aset and gset. Although this last point can be done with a\n> MetaCommand, ISTM that a bool is_aset is clear and good enough. It is\n> possible to switch if you insist on it, but I do not think it is desirable.\n> \n> Attached v4 removes an unwanted rebased comment duplication and does minor\n> changes while re-reading the code.\n\nWell, it still looks cleaner to me to just assign the meta field\nproperly within ParseScript(), and you get the same result. And it is \nalso possible to use \"meta\" to do more sanity checks around META_GSET\nfor some code paths. 
So I'd actually find the addition of a new\nargument using a meta command within readCommandResponse() cleaner.\n\n- * varprefix SQL commands terminated with \\gset have this set\n+ * varprefix SQL commands terminated with \\gset or \\aset have this set\nNit from v4: varprefix can be used for \\gset and \\aset, and the\ncomment was not updated.\n\n+ /* coldly skip empty result under \\aset */\n+ if (ntuples <= 0)\n+ break;\nShouldn't this check after \\aset? And it seems to me that this code\npath is not taken, so a test would be nice.\n\n- } while (res);\n+ } while (res != NULL);\nUseless diff.\n\nThe conflicts came from the switch to the common logging of\nsrc/common/. That was no big deal to solve.\n--\nMichael",
"msg_date": "Thu, 26 Mar 2020 14:39:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add \\aset to store results of a combined query"
},
{
"msg_contents": "Bonjour Micha�l,\n\n> [...] Still sounds strange to me to invent a new variable to this \n> structure if it is possible to track the exact same thing with an \n> existing part of a Command, or it would make sense to split Command into \n> two different structures with an extra structure used after the parsing \n> for clarity?\n\nHmmm.\n\nYour point is to store the gset/aset status into the meta field, even if \nthe command type is SQL. This is not done for gset, which relies on the \nnon-null prefix, and breaks the assumption that meta is set to something \nonly when the command is a meta command. Why not. I updated the comment, \nso now meta is none/gset/aset when command type is sql, and I removed the \naset field.\n\n> Well, it still looks cleaner to me to just assign the meta field\n> properly within ParseScript(), and you get the same result. And it is\n> also possible to use \"meta\" to do more sanity checks around META_GSET\n> for some code paths. So I'd actually find the addition of a new\n> argument using a meta command within readCommandResponse() cleaner.\n\nI tried to do that.\n\n> - * varprefix SQL commands terminated with \\gset have this set\n> + * varprefix SQL commands terminated with \\gset or \\aset have this set\n\n> Nit from v4: varprefix can be used for \\gset and \\aset, and the\n> comment was not updated.\n\nIt is now updated.\n\n> + /* coldly skip empty result under \\aset */\n> + if (ntuples <= 0)\n> + break;\n> Shouldn't this check after \\aset? And it seems to me that this code\n> path is not taken, so a test would be nice.\n\nAdded (I think, if I understood what you suggested.).\n\n> - } while (res);\n> + } while (res != NULL);\n> Useless diff.\n\nYep.\n\nAttached an updated v5.\n\n-- \nFabien.",
"msg_date": "Thu, 26 Mar 2020 22:35:03 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - add \\aset to store results of a combined query"
},
{
"msg_contents": "On Thu, Mar 26, 2020 at 10:35:03PM +0100, Fabien COELHO wrote:\n> Your point is to store the gset/aset status into the meta field, even if the\n> command type is SQL. This is not done for gset, which relies on the non-null\n> prefix, and breaks the assumption that meta is set to something only when\n> the command is a meta command. Why not. I updated the comment, so now meta\n> is none/gset/aset when command type is sql, and I removed the aset field.\n\nYes, that's the point I was trying to make. Thanks for sending a new\nversion. \n\n> - * meta The type of meta-command, or META_NONE if command is SQL\n> + * meta The type of meta-command, if command is SQL META_NONE,\n> + * META_GSET or META_ASET which dictate what to do with the\n> + * SQL query result.\n\nI did not quite get why you need to update this comment. The same\nconcepts as before apply.\n\n>> + /* coldly skip empty result under \\aset */\n>> + if (ntuples <= 0)\n>> + break;\n>> Shouldn't this check after \\aset? And it seems to me that this code\n>> path is not taken, so a test would be nice.\n> \n> Added (I think, if I understood what you suggested.).\n\n+ /* coldly skip empty result under \\aset */\n+ else if (meta == META_ASET && ntuples <= 0)\n+ break;\nYes, that's what I meant. Now it happens that we don't have a\nregression test to cover the path where we have no tuples. Could it\nbe possible to add one?\n\n+ Assert((meta == META_NONE && varprefix == NULL) ||\n+ ((meta == META_GSET || meta == META_ASET) && varprefix != NULL));\n+\nGood addition. That would blow up if meta is set to something else\nthan {META_NONE,META_GSET,META_ASET}, so anybody changing this code\npath will need to question if he/she needs to do something here.\n\nExcept for the addition of a test case to skip empty results when\n\\aset is used, I think that we are pretty good here.\n--\nMichael",
"msg_date": "Mon, 30 Mar 2020 15:30:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add \\aset to store results of a combined query"
},
{
"msg_contents": "On Mon, Mar 30, 2020 at 03:30:58PM +0900, Michael Paquier wrote:\n> Except for the addition of a test case to skip empty results when\n> \\aset is used, I think that we are pretty good here.\n\nWhile hacking on the patch more by myself, I found that mixing tests\nfor \\gset and \\aset was rather messy. A test for an empty result\nleads also to a failure with the pgbench command as we want to make\nsure that the variable does not exist in this case using debug(). So\nlet's split the tests in three parts:\n- the set for \\get is left alone.\n- addition of a new set for the valid cases of \\aset.\n- addition of an invalid test for \\aset (the empty set one).\n\nFabien, what do you think about the attached? Perhaps we should also\nhave a test where we return more than 1 row for \\get? The last point\nis unrelated to this thread though.\n--\nMichael",
"msg_date": "Wed, 1 Apr 2020 17:18:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add \\aset to store results of a combined query"
},
{
"msg_contents": "Bonjour Michaël,\n\n>> Except for the addition of a test case to skip empty results when\n>> \\aset is used, I think that we are pretty good here.\n>\n> While hacking on the patch more by myself, I found that mixing tests\n> for \\gset and \\aset was rather messy. A test for an empty result\n> leads also to a failure with the pgbench command as we want to make\n> sure that the variable does not exist in this case using debug().\n\nISTM that I submitted a patch to test whether a variable exists in \npgbench, like available in psql (:{?var} I think), but AFAICS it did not \npass. Maybe I should resurect it as it would allow to test simply whether \nan empty result was returned to aset, which could make sense in a bench \nscript (get something, if it does not exist skip remainder… I can see some \ninteresting use cases).\n\n> So let's split the tests in three parts:\n> - the set for \\get is left alone.\n> - addition of a new set for the valid cases of \\aset.\n> - addition of an invalid test for \\aset (the empty set one).\n\nOk.\n\n> Fabien, what do you think about the attached?\n\nIt does not need to create an UNLOGGED table, a mere \"WHERE FALSE\" \nsuffices.\n\nI do not understand why you removed the comment about meta which makes it \nfalse, so I added something minimal back.\n\n> Perhaps we should also have a test where we return more than 1 row for \n> \\get? The last point is unrelated to this thread though.\n\nYes, but ISTM that it is not worth a dedicated patch… so I added a test \nsimilar to the one about empty aset.\n\nSee v7 attached.\n\n-- \nFabien.",
"msg_date": "Thu, 2 Apr 2020 08:08:08 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - add \\aset to store results of a combined query"
},
{
"msg_contents": "On Thu, Apr 02, 2020 at 08:08:08AM +0200, Fabien COELHO wrote:\n> ISTM that I submitted a patch to test whether a variable exists in pgbench,\n> like available in psql (:{?var} I think), but AFAICS it did not pass. Maybe\n> I should resurect it as it would allow to test simply whether an empty\n> result was returned to aset, which could make sense in a bench script (get\n> something, if it does not exist skip remainder… I can see some interesting\n> use cases).\n\nNot sure if improving the readability of the tests is a reason for\nthis patch. So I would suggest to just live with relying on debug()\nfor now to check that a variable with a given prefix exists.\n\n> It does not need to create an UNLOGGED table, a mere \"WHERE FALSE\" suffices.\n\nGood point, that's cheaper.\n\n> I do not understand why you removed the comment about meta which makes it\n> false, so I added something minimal back.\n\n * type SQL_COMMAND or META_COMMAND\n- * meta The type of meta-command, or META_NONE if command is SQL\n+ * meta The type of meta-command. On SQL_COMMAND: META_NONE/GSET/ASET.\n\nOh, OK. I see your point. Sorry about that.\n\n>> Perhaps we should also have a test where we return more than 1 row for\n>> \\get? The last point is unrelated to this thread though.\n> \n> Yes, but ISTM that it is not worth a dedicated patch… so I added a test\n> similar to the one about empty aset.\n\nThanks. So, it looks like everything has been addressed. Do you have\nanything else in mind?\n\nNB: I think that it is really strange to not use an array for the\noptions in subroutine pgbench() of 001_pgbench_with_server.pl.\nShouldn't this be an array of options instead? The current logic of \nusing a splitted string is weak when it comes to option quoting in\nperl and command handling in general.\n--\nMichael",
"msg_date": "Thu, 2 Apr 2020 17:03:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add \\aset to store results of a combined query"
},
{
"msg_contents": "Micha�l,\n\n>> ISTM that I submitted a patch to test whether a variable exists in pgbench,\n>> like available in psql (:{?var} I think),\n>\n> Not sure if improving the readability of the tests is a reason for\n> this patch. So I would suggest to just live with relying on debug()\n> for now to check that a variable with a given prefix exists.\n\nSure. I meant that the feature would make sense to write benchmark scripts \nwhich would use aset and be able to act on the success or not of this \naset, not to resurrect it for a hidden coverage test.\n\n> Thanks. So, it looks like everything has been addressed. Do you have\n> anything else in mind?\n\nNope.\n\n> NB: I think that it is really strange to not use an array for the\n> options in subroutine pgbench() of 001_pgbench_with_server.pl.\n> Shouldn't this be an array of options instead? The current logic of\n> using a splitted string is weak when it comes to option quoting in\n> perl and command handling in general.\n\nThe idea is that a scalar is simpler and readable to write in the simple \ncase than a perl array. Now maybe qw() could have done the trick.\n\n-- \nFabien.",
"msg_date": "Thu, 2 Apr 2020 15:58:50 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - add \\aset to store results of a combined query"
},
{
"msg_contents": "On Thu, Apr 02, 2020 at 03:58:50PM +0200, Fabien COELHO wrote:\n> Sure. I meant that the feature would make sense to write benchmark scripts\n> which would use aset and be able to act on the success or not of this aset,\n> not to resurrect it for a hidden coverage test.\n\nThis could always be discussed for v14. We'll see.\n\n>> Thanks. So, it looks like everything has been addressed. Do you have\n>> anything else in mind?\n> \n> Nope.\n\nApplied, then. Thanks!\n--\nMichael",
"msg_date": "Fri, 3 Apr 2020 11:47:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - add \\aset to store results of a combined query"
},
{
"msg_contents": "Bonjour Michaᅵl,\n\n>> Sure. I meant that the feature would make sense to write benchmark scripts\n>> which would use aset and be able to act on the success or not of this aset,\n>> not to resurrect it for a hidden coverage test.\n>\n> This could always be discussed for v14. We'll see.\n\nOr v15, or never, who knows? :-)\n\nThe use case I have in mind for such a feature is to be able to have a \nflow of DELETE transactions in a multi-script benchmark without breaking \nconcurrent SELECT/UPDATE transactions. For that, the ability of extracting \ndata easily and testing whether it was non empty would help.\n\n> Applied, then. Thanks!\n\nThanks to you!\n\n-- \nFabien.",
"msg_date": "Fri, 3 Apr 2020 07:23:59 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - add \\aset to store results of a combined query"
}
] |
[
{
"msg_contents": "I notice that a number of contrib modules call PageGetFreeSpace()\nwhere they should really call PageGetExactFreeSpace() instead.\nPageGetFreeSpace() assumes that the overhead for one line pointer\nshould be pro-actively subtracted, which is handy for plenty of nbtree\ncode, but doesn't quite make sense for stuff like pageinspect's\nbt_page_stats() function.\n\nI was thinking about fixing one or two of these buglets, without\nexpecting that to be complicated in any way. However, now that I take\na closer look I also notice that there is core code that calls\nPageGetFreeSpace() when it probably shouldn't, either. For example,\nwhat business does heap_xlog_visible() have calling\nPageGetFreeSpace()? I also doubt that terminate_brin_buildstate()\nshould call it -- doesn't BRIN avoid using conventional item pointers?\nDoesn't GIN's entryIsEnoughSpace() function double-count the item\npointer overhead?\n\nI wonder if we should add an assertion based on the pd_special offset\nof the page to PageGetFreeSpace(). That would make it harder to make\nmistakes like this in the future. Maybe it would be better to revise\nthe whole API instead, though.\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 8 Apr 2019 14:05:02 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "PageGetFreeSpace() isn't quite the right thing for some of its\n callers"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-08 14:05:02 -0700, Peter Geoghegan wrote:\n> However, now that I take a closer look I also notice that there is\n> core code that calls PageGetFreeSpace() when it probably shouldn't,\n> either. For example, what business does heap_xlog_visible() have\n> calling PageGetFreeSpace()?\n\nI'm not sure I understand what the problem is. We got to get the\ninformation for the fsm from somewhere? Are you arguing we should\ninstead have it included as an explicit xlog record payload? Or just\nthat it should use PageGetExactFreeSpace()? I assume the former based on\nyour \"what business\" language, but that'd not make terribly much sense\nto me. I don't think precision terribly matters here...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 8 Apr 2019 14:10:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PageGetFreeSpace() isn't quite the right thing for some of its\n callers"
},
{
"msg_contents": "On Mon, Apr 8, 2019 at 2:10 PM Andres Freund <andres@anarazel.de> wrote:\n> I'm not sure I understand what the problem is. We got to get the\n> information for the fsm from somewhere? Are you arguing we should\n> instead have it included as an explicit xlog record payload?\n\nNo. I am simply pointing out that PageGetFreeSpace() \"should usually\nonly be used on index pages\" according to its own comments. And yet\nit's called for other stuff.\n\nMaybe it's not that important in that one instance, but I find it\npretty distracting that PageGetFreeSpace() is intended for index AMs\nthat use conventional line pointers.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 8 Apr 2019 14:14:48 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: PageGetFreeSpace() isn't quite the right thing for some of its\n callers"
},
{
"msg_contents": "On Mon, Apr 8, 2019 at 5:15 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, Apr 8, 2019 at 2:10 PM Andres Freund <andres@anarazel.de> wrote:\n> > I'm not sure I understand what the problem is. We got to get the\n> > information for the fsm from somewhere? Are you arguing we should\n> > instead have it included as an explicit xlog record payload?\n>\n> No. I am simply pointing out that PageGetFreeSpace() \"should usually\n> only be used on index pages\" according to its own comments. And yet\n> it's called for other stuff.\n>\n> Maybe it's not that important in that one instance, but I find it\n> pretty distracting that PageGetFreeSpace() is intended for index AMs\n> that use conventional line pointers.\n\nMaybe we should rename it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 9 Apr 2019 14:35:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PageGetFreeSpace() isn't quite the right thing for some of its\n callers"
},
{
"msg_contents": "On Tue, Apr 9, 2019 at 11:35 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Maybe we should rename it.\n\nThere are only about 20 PageGetFreeSpace() callers, so that shouldn't\nbe too disruptive. We might also need to rename\nPageGetFreeSpaceForMultipleTuples() and PageGetExactFreeSpace(). And,\nheapam.c won't be able to call whatever we rename PageGetFreeSpace()\nto.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 9 Apr 2019 13:46:24 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: PageGetFreeSpace() isn't quite the right thing for some of its\n callers"
}
] |
[
{
"msg_contents": "Howdy folks,\n\nI noticed that the docs currently state \"A different order of columns\nin the target table is allowed, but the column types have to match.\"\nThis is untrue, as you can replicate between any two data types as\nlong as the data can be coerced into the right format on the\nsubscriber. Attached is a patch that attempts to clarify this, and\nprovides some additional wordsmithing of that section. Patch is\nagainst head but the nature of the patch would apply to the docs for\n11 and 10, which both have the incorrect information as well, even if\nthe patch itself does not.\n\n\nRobert Treat\nhttps://xzilla.net",
"msg_date": "Mon, 8 Apr 2019 18:39:39 -0400",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": true,
"msg_subject": "Fix doc bug in logical replication."
},
{
"msg_contents": "Em seg, 8 de abr de 2019 às 19:38, Robert Treat <rob@xzilla.net> escreveu:\n>\n> I noticed that the docs currently state \"A different order of columns\n> in the target table is allowed, but the column types have to match.\"\n> This is untrue, as you can replicate between any two data types as\n> long as the data can be coerced into the right format on the\n> subscriber. Attached is a patch that attempts to clarify this, and\n> provides some additional wordsmithing of that section. Patch is\n> against head but the nature of the patch would apply to the docs for\n> 11 and 10, which both have the incorrect information as well, even if\n> the patch itself does not.\n>\nI would say it is inaccurate because the actual instruction works. I\nagree that your words are an improvement but it lacks a comment on the\nslot_[store|modify]_cstrings.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n",
"msg_date": "Mon, 8 Apr 2019 20:19:12 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: Fix doc bug in logical replication."
},
{
"msg_contents": "On Mon, Apr 8, 2019 at 7:19 PM Euler Taveira <euler@timbira.com.br> wrote:\n>\n> Em seg, 8 de abr de 2019 às 19:38, Robert Treat <rob@xzilla.net> escreveu:\n> >\n> > I noticed that the docs currently state \"A different order of columns\n> > in the target table is allowed, but the column types have to match.\"\n> > This is untrue, as you can replicate between any two data types as\n> > long as the data can be coerced into the right format on the\n> > subscriber. Attached is a patch that attempts to clarify this, and\n> > provides some additional wordsmithing of that section. Patch is\n> > against head but the nature of the patch would apply to the docs for\n> > 11 and 10, which both have the incorrect information as well, even if\n> > the patch itself does not.\n> >\n> I would say it is inaccurate because the actual instruction works. I\n> agree that your words are an improvement but it lacks a comment on the\n> slot_[store|modify]_cstrings.\n>\n\nIt is clear to me that the docs are wrong, but I don't see anything\ninherently incorrect about the code itself. Do you have suggestions\nfor how you would like to see the code comments improved?\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Fri, 12 Apr 2019 13:52:58 -0400",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": true,
"msg_subject": "Re: Fix doc bug in logical replication."
},
{
"msg_contents": "On 2019-04-12 19:52, Robert Treat wrote:\n> It is clear to me that the docs are wrong, but I don't see anything\n> inherently incorrect about the code itself. Do you have suggestions\n> for how you would like to see the code comments improved?\n\nThe question is perhaps whether we want to document that non-matching\ndata types do work. It happens to work now, but do we always want to\nguarantee that? There is talk of a binary mode for example.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 23 Jun 2019 19:25:26 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix doc bug in logical replication."
},
{
"msg_contents": "On Sun, Jun 23, 2019 at 1:25 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-04-12 19:52, Robert Treat wrote:\n> > It is clear to me that the docs are wrong, but I don't see anything\n> > inherently incorrect about the code itself. Do you have suggestions\n> > for how you would like to see the code comments improved?\n>\n> The question is perhaps whether we want to document that non-matching\n> data types do work. It happens to work now, but do we always want to\n> guarantee that? There is talk of a binary mode for example.\n>\n\nWhether we *want* to document that it works, documenting that it\ndoesn't work when it does can't be the right answer. If you want to\ncouch the language to leave the door open that we may not support this\nthe same way in the future I wouldn't be opposed to that, but at this\npoint we will have three releases with the current behavior in\nproduction, so if we decide to change the behavior, it is likely going\nto break certain use cases. That may be ok, but I'd expect a\ndocumentation update to accompany a change that would cause such a\nbreaking change.\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Sun, 23 Jun 2019 22:26:47 -0400",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": true,
"msg_subject": "Re: Fix doc bug in logical replication."
},
{
"msg_contents": "On Sun, Jun 23, 2019 at 10:26:47PM -0400, Robert Treat wrote:\n>On Sun, Jun 23, 2019 at 1:25 PM Peter Eisentraut\n><peter.eisentraut@2ndquadrant.com> wrote:\n>>\n>> On 2019-04-12 19:52, Robert Treat wrote:\n>> > It is clear to me that the docs are wrong, but I don't see anything\n>> > inherently incorrect about the code itself. Do you have suggestions\n>> > for how you would like to see the code comments improved?\n>>\n>> The question is perhaps whether we want to document that non-matching\n>> data types do work. It happens to work now, but do we always want to\n>> guarantee that? There is talk of a binary mode for example.\n>>\n>\n>Whether we *want* to document that it works, documenting that it\n>doesn't work when it does can't be the right answer. If you want to\n>couch the language to leave the door open that we may not support this\n>the same way in the future I wouldn't be opposed to that, but at this\n>point we will have three releases with the current behavior in\n>production, so if we decide to change the behavior, it is likely going\n>to break certain use cases. That may be ok, but I'd expect a\n>documentation update to accompany a change that would cause such a\n>breaking change.\n>\n\nI agree with that. We have this behavior for quite a bit of time, and\nwhile technically we could change the behavior in the future (using the\n\"not supported\" statement), IMO that'd be pretty annoying move. I always\ndespised systems that \"fix\" bugs by documenting that it does not work, and\nthis is a bit similar.\n\nFWIW I don't quite see why supporting binary mode would change this?\nSurely we can't just enable binary mode blindly, there need to be some\nsort of checks (alignment, type sizes, ...) with fallback to text mode.\nAnd perhaps support only for built-in types.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 27 Jun 2019 18:50:45 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix doc bug in logical replication."
},
{
"msg_contents": "On Thu, 27 Jun 2019 at 12:50, Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Sun, Jun 23, 2019 at 10:26:47PM -0400, Robert Treat wrote:\n> >On Sun, Jun 23, 2019 at 1:25 PM Peter Eisentraut\n> ><peter.eisentraut@2ndquadrant.com> wrote:\n> >>\n> >> On 2019-04-12 19:52, Robert Treat wrote:\n> >> > It is clear to me that the docs are wrong, but I don't see anything\n> >> > inherently incorrect about the code itself. Do you have suggestions\n> >> > for how you would like to see the code comments improved?\n> >>\n> >> The question is perhaps whether we want to document that non-matching\n> >> data types do work. It happens to work now, but do we always want to\n> >> guarantee that? There is talk of a binary mode for example.\n> >>\n> >\n> >Whether we *want* to document that it works, documenting that it\n> >doesn't work when it does can't be the right answer. If you want to\n> >couch the language to leave the door open that we may not support this\n> >the same way in the future I wouldn't be opposed to that, but at this\n> >point we will have three releases with the current behavior in\n> >production, so if we decide to change the behavior, it is likely going\n> >to break certain use cases. That may be ok, but I'd expect a\n> >documentation update to accompany a change that would cause such a\n> >breaking change.\n> >\n>\n> I agree with that. We have this behavior for quite a bit of time, and\n> while technically we could change the behavior in the future (using the\n> \"not supported\" statement), IMO that'd be pretty annoying move. I always\n> despised systems that \"fix\" bugs by documenting that it does not work, and\n> this is a bit similar.\n>\n> FWIW I don't quite see why supporting binary mode would change this?\n> Surely we can't just enable binary mode blindly, there need to be some\n> sort of checks (alignment, type sizes, ...) 
with fallback to text mode.\n> And perhaps support only for built-in types.\n>\n\nThe proposed implementation of binary only supports built-in types.\nThe subscriber turns it on so presumably it can handle the binary data\ncoming at it.\n\nDave\n\n>\n>\n\nOn Thu, 27 Jun 2019 at 12:50, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:On Sun, Jun 23, 2019 at 10:26:47PM -0400, Robert Treat wrote:\n>On Sun, Jun 23, 2019 at 1:25 PM Peter Eisentraut\n><peter.eisentraut@2ndquadrant.com> wrote:\n>>\n>> On 2019-04-12 19:52, Robert Treat wrote:\n>> > It is clear to me that the docs are wrong, but I don't see anything\n>> > inherently incorrect about the code itself. Do you have suggestions\n>> > for how you would like to see the code comments improved?\n>>\n>> The question is perhaps whether we want to document that non-matching\n>> data types do work. It happens to work now, but do we always want to\n>> guarantee that? There is talk of a binary mode for example.\n>>\n>\n>Whether we *want* to document that it works, documenting that it\n>doesn't work when it does can't be the right answer. If you want to\n>couch the language to leave the door open that we may not support this\n>the same way in the future I wouldn't be opposed to that, but at this\n>point we will have three releases with the current behavior in\n>production, so if we decide to change the behavior, it is likely going\n>to break certain use cases. That may be ok, but I'd expect a\n>documentation update to accompany a change that would cause such a\n>breaking change.\n>\n\nI agree with that. We have this behavior for quite a bit of time, and\nwhile technically we could change the behavior in the future (using the\n\"not supported\" statement), IMO that'd be pretty annoying move. 
I always\ndespised systems that \"fix\" bugs by documenting that it does not work, and\nthis is a bit similar.\n\nFWIW I don't quite see why supporting binary mode would change this?\nSurely we can't just enable binary mode blindly, there need to be some\nsort of checks (alignment, type sizes, ...) with fallback to text mode.\nAnd perhaps support only for built-in types.The proposed implementation of binary only supports built-in types. The subscriber turns it on so presumably it can handle the binary data coming at it.Dave",
"msg_date": "Thu, 27 Jun 2019 13:46:47 -0400",
"msg_from": "Dave Cramer <pg@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix doc bug in logical replication."
},
{
"msg_contents": "On Thu, Jun 27, 2019 at 01:46:47PM -0400, Dave Cramer wrote:\n>On Thu, 27 Jun 2019 at 12:50, Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>wrote:\n>\n>> On Sun, Jun 23, 2019 at 10:26:47PM -0400, Robert Treat wrote:\n>> >On Sun, Jun 23, 2019 at 1:25 PM Peter Eisentraut\n>> ><peter.eisentraut@2ndquadrant.com> wrote:\n>> >>\n>> >> On 2019-04-12 19:52, Robert Treat wrote:\n>> >> > It is clear to me that the docs are wrong, but I don't see anything\n>> >> > inherently incorrect about the code itself. Do you have suggestions\n>> >> > for how you would like to see the code comments improved?\n>> >>\n>> >> The question is perhaps whether we want to document that non-matching\n>> >> data types do work. It happens to work now, but do we always want to\n>> >> guarantee that? There is talk of a binary mode for example.\n>> >>\n>> >\n>> >Whether we *want* to document that it works, documenting that it\n>> >doesn't work when it does can't be the right answer. If you want to\n>> >couch the language to leave the door open that we may not support this\n>> >the same way in the future I wouldn't be opposed to that, but at this\n>> >point we will have three releases with the current behavior in\n>> >production, so if we decide to change the behavior, it is likely going\n>> >to break certain use cases. That may be ok, but I'd expect a\n>> >documentation update to accompany a change that would cause such a\n>> >breaking change.\n>> >\n>>\n>> I agree with that. We have this behavior for quite a bit of time, and\n>> while technically we could change the behavior in the future (using the\n>> \"not supported\" statement), IMO that'd be pretty annoying move. I always\n>> despised systems that \"fix\" bugs by documenting that it does not work, and\n>> this is a bit similar.\n>>\n>> FWIW I don't quite see why supporting binary mode would change this?\n>> Surely we can't just enable binary mode blindly, there need to be some\n>> sort of checks (alignment, type sizes, ...) 
with fallback to text mode.\n>> And perhaps support only for built-in types.\n>>\n>\n>The proposed implementation of binary only supports built-in types.\n>The subscriber turns it on so presumably it can handle the binary data\n>coming at it.\n>\n\nI don't recall that being discussed in the patch thread, but maybe it\nshould not be enabled merely based on what the subscriber requests. Maybe\nthe subscriber should indicate \"interest\" and the decision should be made\non the upstream, after some additional checks.\n\nThat's why pglogical does check_binary_compatibility() - see [1].\n\nThis is necessary, because the FE/BE protocol docs [2] say:\n\n Keep in mind that binary representations for complex data types might\n change across server versions; the text format is usually the more\n portable choice.\n\nSo you can't just assume the subscriber knows what it's doing, because\neither of the sides might be upgraded.\n\nNote: The pglogical code also does check additional stuff (like\nsizeof(Datum) or endianess), but I'm not sure that's actually necessary -\nI believe the binary protocol should be independent of that.\n\n\nregards\n\n[1] https://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/pglogical_output_plugin.c#L107\n\n[2] https://www.postgresql.org/docs/current/protocol-overview.html\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 27 Jun 2019 20:20:35 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix doc bug in logical replication."
},
{
"msg_contents": "On Thu, 27 Jun 2019 at 14:20, Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Thu, Jun 27, 2019 at 01:46:47PM -0400, Dave Cramer wrote:\n> >On Thu, 27 Jun 2019 at 12:50, Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> >wrote:\n> >\n> >> On Sun, Jun 23, 2019 at 10:26:47PM -0400, Robert Treat wrote:\n> >> >On Sun, Jun 23, 2019 at 1:25 PM Peter Eisentraut\n> >> ><peter.eisentraut@2ndquadrant.com> wrote:\n> >> >>\n> >> >> On 2019-04-12 19:52, Robert Treat wrote:\n> >> >> > It is clear to me that the docs are wrong, but I don't see anything\n> >> >> > inherently incorrect about the code itself. Do you have suggestions\n> >> >> > for how you would like to see the code comments improved?\n> >> >>\n> >> >> The question is perhaps whether we want to document that non-matching\n> >> >> data types do work. It happens to work now, but do we always want to\n> >> >> guarantee that? There is talk of a binary mode for example.\n> >> >>\n> >> >\n> >> >Whether we *want* to document that it works, documenting that it\n> >> >doesn't work when it does can't be the right answer. If you want to\n> >> >couch the language to leave the door open that we may not support this\n> >> >the same way in the future I wouldn't be opposed to that, but at this\n> >> >point we will have three releases with the current behavior in\n> >> >production, so if we decide to change the behavior, it is likely going\n> >> >to break certain use cases. That may be ok, but I'd expect a\n> >> >documentation update to accompany a change that would cause such a\n> >> >breaking change.\n> >> >\n> >>\n> >> I agree with that. We have this behavior for quite a bit of time, and\n> >> while technically we could change the behavior in the future (using the\n> >> \"not supported\" statement), IMO that'd be pretty annoying move. 
I always\n> >> despised systems that \"fix\" bugs by documenting that it does not work,\n> and\n> >> this is a bit similar.\n> >>\n> >> FWIW I don't quite see why supporting binary mode would change this?\n> >> Surely we can't just enable binary mode blindly, there need to be some\n> >> sort of checks (alignment, type sizes, ...) with fallback to text mode.\n> >> And perhaps support only for built-in types.\n> >>\n> >\n> >The proposed implementation of binary only supports built-in types.\n> >The subscriber turns it on so presumably it can handle the binary data\n> >coming at it.\n> >\n>\n> I don't recall that being discussed in the patch thread, but maybe it\n> should not be enabled merely based on what the subscriber requests. Maybe\n> the subscriber should indicate \"interest\" and the decision should be made\n> on the upstream, after some additional checks.\n>\n> That's why pglogical does check_binary_compatibility() - see [1].\n>\n> This is necessary, because the FE/BE protocol docs [2] say:\n>\n> Keep in mind that binary representations for complex data types might\n> change across server versions; the text format is usually the more\n> portable choice.\n>\n> So you can't just assume the subscriber knows what it's doing, because\n> either of the sides might be upgraded.\n>\n> Note: The pglogical code also does check additional stuff (like\n> sizeof(Datum) or endianess), but I'm not sure that's actually necessary -\n> I believe the binary protocol should be independent of that.\n>\n>\n> regards\n>\n> [1]\n> https://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/pglogical_output_plugin.c#L107\n>\n> [2] https://www.postgresql.org/docs/current/protocol-overview.html\n>\n>\nThanks for the pointer. 
I'll add that to the patch.\n\n\nDave Cramer\n\ndavec@postgresintl.com\nwww.postgresintl.com\n\n\n>\n>\n\nOn Thu, 27 Jun 2019 at 14:20, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:On Thu, Jun 27, 2019 at 01:46:47PM -0400, Dave Cramer wrote:\n>On Thu, 27 Jun 2019 at 12:50, Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>wrote:\n>\n>> On Sun, Jun 23, 2019 at 10:26:47PM -0400, Robert Treat wrote:\n>> >On Sun, Jun 23, 2019 at 1:25 PM Peter Eisentraut\n>> ><peter.eisentraut@2ndquadrant.com> wrote:\n>> >>\n>> >> On 2019-04-12 19:52, Robert Treat wrote:\n>> >> > It is clear to me that the docs are wrong, but I don't see anything\n>> >> > inherently incorrect about the code itself. Do you have suggestions\n>> >> > for how you would like to see the code comments improved?\n>> >>\n>> >> The question is perhaps whether we want to document that non-matching\n>> >> data types do work. It happens to work now, but do we always want to\n>> >> guarantee that? There is talk of a binary mode for example.\n>> >>\n>> >\n>> >Whether we *want* to document that it works, documenting that it\n>> >doesn't work when it does can't be the right answer. If you want to\n>> >couch the language to leave the door open that we may not support this\n>> >the same way in the future I wouldn't be opposed to that, but at this\n>> >point we will have three releases with the current behavior in\n>> >production, so if we decide to change the behavior, it is likely going\n>> >to break certain use cases. That may be ok, but I'd expect a\n>> >documentation update to accompany a change that would cause such a\n>> >breaking change.\n>> >\n>>\n>> I agree with that. We have this behavior for quite a bit of time, and\n>> while technically we could change the behavior in the future (using the\n>> \"not supported\" statement), IMO that'd be pretty annoying move. 
I always\n>> despised systems that \"fix\" bugs by documenting that it does not work, and\n>> this is a bit similar.\n>>\n>> FWIW I don't quite see why supporting binary mode would change this?\n>> Surely we can't just enable binary mode blindly, there need to be some\n>> sort of checks (alignment, type sizes, ...) with fallback to text mode.\n>> And perhaps support only for built-in types.\n>>\n>\n>The proposed implementation of binary only supports built-in types.\n>The subscriber turns it on so presumably it can handle the binary data\n>coming at it.\n>\n\nI don't recall that being discussed in the patch thread, but maybe it\nshould not be enabled merely based on what the subscriber requests. Maybe\nthe subscriber should indicate \"interest\" and the decision should be made\non the upstream, after some additional checks.\n\nThat's why pglogical does check_binary_compatibility() - see [1].\n\nThis is necessary, because the FE/BE protocol docs [2] say:\n\n Keep in mind that binary representations for complex data types might\n change across server versions; the text format is usually the more\n portable choice.\n\nSo you can't just assume the subscriber knows what it's doing, because\neither of the sides might be upgraded.\n\nNote: The pglogical code also does check additional stuff (like\nsizeof(Datum) or endianess), but I'm not sure that's actually necessary -\nI believe the binary protocol should be independent of that.\n\n\nregards\n\n[1] https://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/pglogical_output_plugin.c#L107\n\n[2] https://www.postgresql.org/docs/current/protocol-overview.html\nThanks for the pointer. I'll add that to the patch.Dave Cramerdavec@postgresintl.comwww.postgresintl.com",
"msg_date": "Thu, 27 Jun 2019 14:37:58 -0400",
"msg_from": "Dave Cramer <pg@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix doc bug in logical replication."
},
{
"msg_contents": "On 2019-06-27 18:50, Tomas Vondra wrote:\n>> Whether we *want* to document that it works, documenting that it\n>> doesn't work when it does can't be the right answer. If you want to\n>> couch the language to leave the door open that we may not support this\n>> the same way in the future I wouldn't be opposed to that, but at this\n>> point we will have three releases with the current behavior in\n>> production, so if we decide to change the behavior, it is likely going\n>> to break certain use cases. That may be ok, but I'd expect a\n>> documentation update to accompany a change that would cause such a\n>> breaking change.\n>>\n> I agree with that. We have this behavior for quite a bit of time, and\n> while technically we could change the behavior in the future (using the\n> \"not supported\" statement), IMO that'd be pretty annoying move. I always\n> despised systems that \"fix\" bugs by documenting that it does not work, and\n> this is a bit similar.\n\ncommitted back to PG10\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Jul 2019 14:56:37 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix doc bug in logical replication."
}
] |
[
{
"msg_contents": "Heikki and I have been hacking recently for few weeks to implement\nin-core columnar storage for PostgreSQL. Here's the design and initial\nimplementation of Zedstore, compressed in-core columnar storage (table\naccess method). Attaching the patch and link to github branch [1] to\nfollow along.\n\nThe objective is to gather feedback on design and approach to the\nsame. The implementation has core basic pieces working but not close\nto complete.\n\nBig thank you to Andres, Haribabu and team for the table access method\nAPI's. Leveraged the API's for implementing zedstore, and proves API\nto be in very good shape. Had to enhance the same minimally but\nin-general didn't had to touch executor much.\n\nMotivations / Objectives\n\n* Performance improvement for queries selecting subset of columns\n (reduced IO).\n* Reduced on-disk footprint compared to heap table. Shorter tuple\n headers and also leveraging compression of similar type data\n* Be first-class citizen in the Postgres architecture (tables data can\n just independently live in columnar storage)\n* Fully MVCC compliant\n* All Indexes supported\n* Hybrid row-column store, where some columns are stored together, and\n others separately. Provide flexibility of granularity on how to\n divide the columns. Columns accessed together can be stored\n together.\n* Provide better control over bloat (similar to zheap)\n* Eliminate need for separate toast tables\n* Faster add / drop column or changing data type of column by avoiding\n full rewrite of the table.\n\nHigh-level Design - B-trees for the win!\n========================================\n\nTo start simple, let's ignore column store aspect for a moment and\nconsider it as compressed row store. The column store is natural\nextension of this concept, explained in next section.\n\nThe basic on-disk data structure leveraged is a B-tree, indexed by\nTID. BTree being a great data structure, fast and versatile. 
Note this\nis not referring to existing Btree indexes, but instead net new\nseparate BTree for table data storage.\n\nTID - logical row identifier:\nTID is just a 48-bit row identifier. The traditional division into\nblock and offset numbers is meaningless. In order to find a tuple with\na given TID, one must always descend the B-tree. Having logical TID\nprovides flexibility to move the tuples around different pages on page\nsplits or page merges can be performed.\n\nThe internal pages of the B-tree are super simple and boring. Each\ninternal page just stores an array of TID and downlink pairs. Let's\nfocus on the leaf level. Leaf blocks have short uncompressed header,\nfollowed by btree items. Two kinds of items exist:\n\n - plain item, holds one tuple or one datum, uncompressed payload\n - a \"container item\", holds multiple plain items, compressed payload\n\n+-----------------------------\n| Fixed-size page header:\n|\n| LSN\n| TID low and hi key (for Lehman & Yao B-tree operations)\n| left and right page pointers\n|\n| Items:\n|\n| TID | size | flags | uncompressed size | lastTID | payload (container\nitem)\n| TID | size | flags | uncompressed size | lastTID | payload (container\nitem)\n| TID | size | flags | undo pointer | payload (plain item)\n| TID | size | flags | undo pointer | payload (plain item)\n| ...\n|\n+----------------------------\n\nRow store\n---------\n\nThe tuples are stored one after another, sorted by TID. For each\ntuple, we store its 48-bit TID, a undo record pointer, and the actual\ntuple data uncompressed.\n\nIn uncompressed form, the page can be arbitrarily large. But after\ncompression, it must fit into a physical 8k block. If on insert or\nupdate of a tuple, the page cannot be compressed below 8k anymore, the\npage is split. Note that because TIDs are logical rather than physical\nidentifiers, we can freely move tuples from one physical page to\nanother during page split. 
A tuple's TID never changes.\n\nThe buffer cache caches compressed blocks. Likewise, WAL-logging,\nfull-page images etc. work on compressed blocks. Uncompression is done\non-the-fly, as and when needed in backend-private memory, when\nreading. For some compressions like rel encoding or delta encoding\ntuples can be constructed directly from compressed data.\n\nColumn store\n------------\n\nA column store uses the same structure but we have *multiple* B-trees,\none for each column, all indexed by TID. The B-trees for all columns\nare stored in the same physical file.\n\nA metapage at block 0, has links to the roots of the B-trees. Leaf\npages look the same, but instead of storing the whole tuple, stores\njust a single attribute. To reconstruct a row with given TID, scan\ndescends down the B-trees for all the columns using that TID, and\nfetches all attributes. Likewise, a sequential scan walks all the\nB-trees in lockstep.\n\nSo, in summary can imagine Zedstore as forest of B-trees, one for each\ncolumn, all indexed by TIDs.\n\nThis way of laying out the data also easily allows for hybrid\nrow-column store, where some columns are stored together, and others\nhave a dedicated B-tree. Need to have user facing syntax to allow\nspecifying how to group the columns.\n\n\nMain reasons for storing data this way\n--------------------------------------\n\n* Layout the data/tuples in mapped fashion instead of keeping the\n logical to physical mapping separate from actual data. So, keep the\n meta-data and data logically in single stream of file, avoiding the\n need for separate forks/files to store meta-data and data.\n\n* Stick to fixed size physical blocks. Variable size blocks pose need\n for increased logical to physical mapping maintenance, plus\n restrictions on concurrency of writes and reads to files. Hence\n adopt compression to fit fixed size blocks instead of other way\n round.\n\n\nMVCC\n----\nMVCC works very similar to zheap for zedstore. 
Undo record pointers\nare used to implement MVCC. Transaction information is not directly\nstored with the data. In zheap, there's a small, fixed number of\n\"transaction slots\" on each page, but zedstore has undo pointer with\neach item directly; in normal cases, the compression squeezes this\ndown to almost nothing.\n\n\nImplementation\n==============\n\nInsert:\nInserting a new row splits the row into datums. Then for the first column\ndecide which block to insert the same to, and pick a TID for it, and\nwrite undo record for the same. Rest of the columns are inserted using\nthat same TID and point to same undo position.\n\nCompression:\nItems are added to Btree in uncompressed form. If page is full and new\nitem can't be added, compression kicks in. Existing uncompressed items\n(plain items) of the page are passed to compressor for\ncompression. Already compressed items are added back as is. Page is\nrewritten with compressed data with new item added to it. If even\nafter compression, can't add item to page, then page split happens.\n\nToast:\nWhen an overly large datum is stored, it is divided into chunks, and\neach chunk is stored on a dedicated toast page within the same\nphysical file. The toast pages of a datum form a list, each page has a\nnext/prev pointer.\n\nSelect:\nProperty is added to Table AM to convey if column projection is\nleveraged by AM for scans. While scanning tables with AM leveraging\nthis property, executor parses the plan. Leverages the target list and\nquals to find the required columns for query. This list is passed down\nto AM on beginscan. Zedstore uses this column projection list to only\npull data from selected columns. Virtual tuple table slot is used to\npass back the datums for subset of columns.\n\nCurrent table am API requires enhancement here to pass down column\nprojection to AM. The patch showcases two different ways for the same.\n\n* For sequential scans added new beginscan_with_column_projection()\n  API. 
Executor checks AM property and if it leverages column\n projection uses this new API else normal beginscan() API.\n\n* For index scans instead of modifying the begin scan API, added new\n API to specifically pass column projection list after calling begin\n scan to populate the scan descriptor but before fetching the tuples.\n\nIndex Support:\nBuilding index also leverages columnar storage and only scans columns\nrequired to build the index. Indexes work pretty similar to heap\ntables. Data is inserted into tables and TID for the tuple same gets\nstored in index. On index scans, required column Btrees are scanned\nfor given TID and datums passed back using virtual tuple.\n\nPage Format:\nZedStore table contains different kinds of pages, all in the same\nfile. Kinds of pages are meta-page, per-attribute btree internal and\nleaf pages, UNDO log page, and toast pages. Each page type has its own\ndistinct data storage format.\n\nBlock 0 is always a metapage. It contains the block numbers of the\nother data structures stored within the file, like the per-attribute\nB-trees, and the UNDO log.\n\n\nEnhancements to design:\n=======================\n\nInstead of compressing all the tuples on a page in one batch, we could\nstore a small \"dictionary\", e.g. in page header or meta-page, and use\nit to compress each tuple separately. That could make random reads and\nupdates of individual tuples faster.\n\nWhen adding column, just need to create new Btree for newly added\ncolumn and linked to meta-page. No existing content needs to be\nrewritten.\n\nWhen the column is dropped, can scan the B-tree of that column, and\nimmediately mark all the pages as free in the FSM. But we don't\nactually have to scan the leaf level: all leaf tuples have a downlink\nin the parent, so we can scan just the internal pages. Unless the\ncolumn is very wide, that's only a small fraction of the data. 
That\nmakes the space immediately reusable for new insertions, but it won't\nreturn the space to the Operating System. In order to do that, we'd\nstill need to defragment, moving pages from the end of the file closer\nto the beginning, and truncate the file.\n\nIn this design, we only cache compressed pages in the page cache. If\nwe want to cache uncompressed pages instead, or in addition to that,\nwe need to invent a whole new kind of a buffer cache that can deal\nwith the variable-size blocks.\n\nIf you do a lot of updates, the file can get fragmented, with lots of\nunused space on pages. Losing the correlation between TIDs and\nphysical order is also bad, because it will make SeqScans slow, as\nthey're not actually doing sequential I/O anymore. We can write a\ndefragmenter to fix things up. Half-empty pages can be merged, and\npages can be moved to restore TID/physical correlation. This format\ndoesn't have the same MVCC problems with moving tuples around that the\nPostgres heap does, so it can fairly easily be done on-line.\n\nMin-Max values can be stored for a block to easily skip scanning if\ncolumn values don't fall in range.\n\nNotes about current patch\n=========================\n\nBasic (core) functionality is implemented to showcase and play with.\n\nTwo compression algorithms are supported: Postgres pg_lzcompress and\nlz4. Compiling server with --with-lz4 enables the LZ4 compression for\nzedstore else pg_lzcompress is default. Definitely LZ4 is super fast\nat compressing and uncompressing.\n\nNot all the table AM API's are implemented. Functionality not\nimplemented yet will ERROR out as not supported. Zedstore Table can\nbe created using command:\n\nCREATE TABLE <name> (column listing) USING zedstore;\n\nBulk load can be performed using COPY. INSERT, SELECT, UPDATE and\nDELETES work. Btree indexes can be created. Btree and bitmap index\nscans work. Test in src/test/regress/sql/zedstore.sql showcases all\nthe functionality working currently. 
Updates are currently implemented\nas cold, meaning they always create new items and are not performed\nin-place.\n\nTIDs currently can't leverage the full 48 bit range but instead need\nto be limited to values which are considered valid ItemPointers. Also,\nMaxHeapTuplesPerPage poses restrictions on the values currently it can\nhave. Refer [7] for the same.\n\nExtremely basic UNDO logging has been implemented just for the MVCC\nperspective. MVCC is missing tuple lock right now. Plus, it doesn't\nactually perform any undo yet. No WAL logging exists currently, hence\nit's not crash safe either.\n\nHelpful functions to find how many pages of each type are present in a\nzedstore table and also to find the compression ratio are provided.\n\nTest mentioned in thread \"Column lookup in a row performance\" [6],\ngood example query for zedstore locally on laptop using lz4 shows\n\npostgres=# SELECT AVG(i199) FROM (select i199 from layout offset 0) x; --\nheap\n avg\n---------------------\n 500000.500000000000\n(1 row)\n\nTime: 4679.026 ms (00:04.679)\n\npostgres=# SELECT AVG(i199) FROM (select i199 from zlayout offset 0) x; --\nzedstore\n avg\n---------------------\n 500000.500000000000\n(1 row)\n\nTime: 379.710 ms\n\nImportant note:\n---------------\nPlanner has not been modified yet to leverage the columnar\nstorage. Hence, plans using \"physical tlist\" optimization or such good\nfor row store miss out on leveraging the columnar nature\ncurrently. Hence, one can see the need for the subquery with OFFSET 0\nabove to disable the optimization and scan only the required column.\n\n\n\nThe current proposal and discussion is more focused on AM layer work\nfirst. 
Hence, it currently intentionally skips discussing the planner or\nexecutor \"feature\" enhancements like adding vectorized execution and\nfamily of features.\n\nPrevious discussions or implementations for column store Vertical\ncluster index [2], Incore columnar storage [3] and [4], cstore_fdw [5]\nwere referred to in order to distill down objectives and come up with\ndesign and implementations to avoid any earlier concerns raised.\nLearnings from Greenplum Database column store were also leveraged\nwhile designing and implementing the same.\n\nCredit: Design is mostly the brainchild of Heikki, or actually his\nepiphany to be exact. I acted as an idea bouncing board and contributed\nenhancements to the same. We both are having a lot of fun writing the\ncode for this.\n\n\nReferences\n1] https://github.com/greenplum-db/postgres/tree/zedstore\n2]\nhttps://www.postgresql.org/message-id/flat/CAJrrPGfaC7WC9NK6PTTy6YN-NN%2BhCy8xOLAh2doYhVg5d6HsAA%40mail.gmail.com\n3]\nhttps://www.postgresql.org/message-id/flat/20150611230316.GM133018%40postgresql.org\n4]\nhttps://www.postgresql.org/message-id/flat/20150831225328.GM2912%40alvherre.pgsql\n5] https://github.com/citusdata/cstore_fdw\n6]\nhttps://www.postgresql.org/message-id/flat/CAOykqKfko-n5YiBJtk-ocVdp%2Bj92Apu5MJBwbGGh4awRY5NCuQ%40mail.gmail.com\n7]\nhttps://www.postgresql.org/message-id/d0fc97bd-7ec8-2388-e4a6-0fda86d71a43%40iki.fi",
"msg_date": "Mon, 8 Apr 2019 17:27:05 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-08 17:27:05 -0700, Ashwin Agrawal wrote:\n> Heikki and I have been hacking recently for few weeks to implement\n> in-core columnar storage for PostgreSQL. Here's the design and initial\n> implementation of Zedstore, compressed in-core columnar storage (table\n> access method).\n\nThat's very cool.\n\n\n> Motivations / Objectives\n> \n> * Performance improvement for queries selecting subset of columns\n> (reduced IO).\n> * Reduced on-disk footprint compared to heap table. Shorter tuple\n> headers and also leveraging compression of similar type data\n> * Be first-class citizen in the Postgres architecture (tables data can\n> just independently live in columnar storage)\n> * Fully MVCC compliant\n> * All Indexes supported\n> * Hybrid row-column store, where some columns are stored together, and\n> others separately. Provide flexibility of granularity on how to\n> divide the columns. Columns accessed together can be stored\n> together.\n> * Provide better control over bloat (similar to zheap)\n> * Eliminate need for separate toast tables\n> * Faster add / drop column or changing data type of column by avoiding\n> full rewrite of the table.\n\nIs storage going through the bufmgr.c or separately?\n\n\n\n> In uncompressed form, the page can be arbitrarily large. But after\n> compression, it must fit into a physical 8k block. If on insert or\n> update of a tuple, the page cannot be compressed below 8k anymore, the\n> page is split. Note that because TIDs are logical rather than physical\n> identifiers, we can freely move tuples from one physical page to\n> another during page split. A tuple's TID never changes.\n\nWhen does compression happen? After each modifcation of the expanded\n\"page\"? Are repeated expansions prevented somehow, e.g. if I\ninsert/delete rows into the same page one-by-one?\n\n\n> A metapage at block 0, has links to the roots of the B-trees. 
Leaf\n> pages look the same, but instead of storing the whole tuple, stores\n> just a single attribute. To reconstruct a row with given TID, scan\n> descends down the B-trees for all the columns using that TID, and\n> fetches all attributes. Likewise, a sequential scan walks all the\n> B-trees in lockstep.\n\nDoes the size of the metapage limit the number of column [groups]? Or is\nthere some overflow / tree of trees / whatnot happening?\n\n\n> Insert:\n> Inserting a new row, splits the row into datums. Then for first column\n> decide which block to insert the same to, and pick a TID for it, and\n> write undo record for the same. Rest of the columns are inserted using\n> that same TID and point to same undo position.\n\nIs there some buffering? Without that it seems like retail inserts are\ngoing to be pretty slow?\n\n\n\n> Property is added to Table AM to convey if column projection is\n> leveraged by AM for scans. While scanning tables with AM leveraging\n> this property, executor parses the plan. Leverages the target list and\n> quals to find the required columns for query. This list is passed down\n> to AM on beginscan. Zedstore uses this column projection list to only\n> pull data from selected columns. Virtual tuple table slot is used to\n> pass back the datums for subset of columns.\n> \n> Current table am API requires enhancement here to pass down column\n> projection to AM. The patch showcases two different ways for the same.\n> \n> * For sequential scans added new beginscan_with_column_projection()\n> API. Executor checks AM property and if it leverages column\n> projection uses this new API else normal beginscan() API.\n> \n> * For index scans instead of modifying the begin scan API, added new\n> API to specifically pass column projection list after calling begin\n> scan to populate the scan descriptor but before fetching the tuples.\n\nFWIW, I don't quite think this is the right approach. 
I've only a vague\nsketch of this in my head, but I think we should want a general API to\npass that down to *any* scan. Even for heap, not deforming leading\ncolumns that are uninteresting, but precede relevant columns, would be\nquite a noticeable performance win. I don't think the projection list is\nthe right approach for that.\n\n\n> Extremely basic UNDO logging has be implemented just for MVCC\n> perspective. MVCC is missing tuple lock right now. Plus, doesn't\n> actually perform any undo yet. No WAL logging exist currently hence\n> its not crash safe either.\n\nHave you looked at the undo APIs developed for zheap, as discussed on\nthe list? Seems important that they're suitable for this too.\n\n\n> Test mentioned in thread \"Column lookup in a row performance\" [6],\n> good example query for zedstore locally on laptop using lz4 shows\n> \n> postgres=# SELECT AVG(i199) FROM (select i199 from layout offset 0) x; --\n> heap\n> avg\n> ---------------------\n> 500000.500000000000\n> (1 row)\n> \n> Time: 4679.026 ms (00:04.679)\n> \n> postgres=# SELECT AVG(i199) FROM (select i199 from zlayout offset 0) x; --\n> zedstore\n> avg\n> ---------------------\n> 500000.500000000000\n> (1 row)\n> \n> Time: 379.710 ms\n\nWell, I'm not sure I'm actually impressed by that. What does the\nperformance look like if you select i0 instead?\n\n\n> Important note:\n> ---------------\n> Planner has not been modified yet to leverage the columnar\n> storage. Hence, plans using \"physical tlist\" optimization or such good\n> for row store miss out to leverage the columnar nature\n> currently. Hence, can see the need for subquery with OFFSET 0 above to\n> disable the optimization and scan only required column.\n\nI'm more and more thinking that we should just nix the physical tlist\nstuff and start afresh.\n\n\nCongrats again, this is cool stuff.\n\n\n- Andres\n\n\n",
"msg_date": "Mon, 8 Apr 2019 18:04:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Mon, Apr 8, 2019 at 6:04 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-04-08 17:27:05 -0700, Ashwin Agrawal wrote:\n> > Heikki and I have been hacking recently for few weeks to implement\n> > in-core columnar storage for PostgreSQL. Here's the design and initial\n> > implementation of Zedstore, compressed in-core columnar storage (table\n> > access method).\n>\n> That's very cool.\n>\n>\n> > Motivations / Objectives\n> >\n> > * Performance improvement for queries selecting subset of columns\n> > (reduced IO).\n> > * Reduced on-disk footprint compared to heap table. Shorter tuple\n> > headers and also leveraging compression of similar type data\n> > * Be first-class citizen in the Postgres architecture (tables data can\n> > just independently live in columnar storage)\n> > * Fully MVCC compliant\n> > * All Indexes supported\n> > * Hybrid row-column store, where some columns are stored together, and\n> > others separately. Provide flexibility of granularity on how to\n> > divide the columns. Columns accessed together can be stored\n> > together.\n> > * Provide better control over bloat (similar to zheap)\n> > * Eliminate need for separate toast tables\n> > * Faster add / drop column or changing data type of column by avoiding\n> > full rewrite of the table.\n>\n> Is storage going through the bufmgr.c or separately?\n>\n\nYes, below access method its pretty much same as heap. All reads and writes\nflow via buffer cache. The implementation sits nicely in between, just\nmodifying the access method code changing how just how data is stored in\npages, above AM and below AM is basically all behaves similar to heap code.\n\n\n> > In uncompressed form, the page can be arbitrarily large. But after\n> > compression, it must fit into a physical 8k block. If on insert or\n> > update of a tuple, the page cannot be compressed below 8k anymore, the\n> > page is split. 
Note that because TIDs are logical rather than physical\n> > identifiers, we can freely move tuples from one physical page to\n> > another during page split. A tuple's TID never changes.\n>\n> When does compression happen? After each modifcation of the expanded\n>\n\"page\"? Are repeated expansions prevented somehow, e.g. if I\n> insert/delete rows into the same page one-by-one?\n>\n\nCompression is performed on new data only if the page becomes full; till\nthen uncompressed data is added to the page. If even after compression\ndata cannot be added to the page, then a page split is performed. Already\ncompressed data is not compressed again on next insert. A new compressed\nblock is created for newly added uncompressed items.\n\nThe line of thought we have for delete is to not free the space as soon\nas delete is performed, but instead delay and reuse the deleted space on\nthe next insertion to the page.\n\n\n> A metapage at block 0, has links to the roots of the B-trees. Leaf\n> > pages look the same, but instead of storing the whole tuple, stores\n> > just a single attribute. To reconstruct a row with given TID, scan\n> > descends down the B-trees for all the columns using that TID, and\n> > fetches all attributes. Likewise, a sequential scan walks all the\n> > B-trees in lockstep.\n>\n> Does the size of the metapage limit the number of column [groups]? Or is\n> there some overflow / tree of trees / whatnot happening?\n>\n\nIn design it doesn't limit the number of columns, as we can have a chain of\nmeta-pages to store the required meta-data, page 0 still being the start of the\nchain.\n\n\n> > Insert:\n> > Inserting a new row, splits the row into datums. Then for first column\n> > decide which block to insert the same to, and pick a TID for it, and\n> > write undo record for the same. Rest of the columns are inserted using\n> > that same TID and point to same undo position.\n>\n> Is there some buffering? 
Without that it seems like retail inserts are\n> going to be pretty slow?\n>\n\nYes, regular buffer cache.\n\n\n\n> > Property is added to Table AM to convey if column projection is\n> > leveraged by AM for scans. While scanning tables with AM leveraging\n> > this property, executor parses the plan. Leverages the target list and\n> > quals to find the required columns for query. This list is passed down\n> > to AM on beginscan. Zedstore uses this column projection list to only\n> > pull data from selected columns. Virtual tuple table slot is used to\n> > pass back the datums for subset of columns.\n> >\n> > Current table am API requires enhancement here to pass down column\n> > projection to AM. The patch showcases two different ways for the same.\n> >\n> > * For sequential scans added new beginscan_with_column_projection()\n> > API. Executor checks AM property and if it leverages column\n> > projection uses this new API else normal beginscan() API.\n> >\n> > * For index scans instead of modifying the begin scan API, added new\n> > API to specifically pass column projection list after calling begin\n> > scan to populate the scan descriptor but before fetching the tuples.\n>\n> FWIW, I don't quite think this is the right approach. I've only a vague\n> sketch of this in my head, but I think we should want a general API to\n> pass that down to *any* scan. Even for heap, not deforming leading\n> columns that a uninteresting, but precede relevant columns, would be\n> quite a noticable performance win. I don't think the projection list is\n> the right approach for that.\n>\n\nSure, would love to hear more on it and can enhance the same as makes more\nusable for AMs.\n\n\n\n> > Extremely basic UNDO logging has be implemented just for MVCC\n> > perspective. MVCC is missing tuple lock right now. Plus, doesn't\n> > actually perform any undo yet. 
No WAL logging exist currently hence\n> its not crash safe either.\n>\n> Have you looked at the undo APIs developed for zheap, as discussed on\n> the list? Seems important that they're suitable for this too.\n>\n\nNot in details yet, but yes plan is to leverage the same common framework\nand undo log API as zheap. Will look into the details. With the current\nzedstore implementation the requirements from the undo are prertty clear.\n\n\n\n> > Test mentioned in thread \"Column lookup in a row performance\" [6],\n> > good example query for zedstore locally on laptop using lz4 shows\n> >\n> > postgres=# SELECT AVG(i199) FROM (select i199 from layout offset 0) x; --\n> > heap\n> > avg\n> > ---------------------\n> > 500000.500000000000\n> > (1 row)\n> >\n> > Time: 4679.026 ms (00:04.679)\n> >\n> > postgres=# SELECT AVG(i199) FROM (select i199 from zlayout offset 0) x;\n> --\n> > zedstore\n> > avg\n> > ---------------------\n> > 500000.500000000000\n> > (1 row)\n> >\n> > Time: 379.710 ms\n>\n> Well, I'm not sure I'm actually impressed by that. What does the\n> performance look like if you select i0 instead?\n>\n\nJust for quick test used 100 instead of 200 columns (with 200 the results\nwould be more diverged), this is what it reports\n\npostgres=# SELECT AVG(i0) FROM (select i0 from layout offset 0) x; -- heap\n avg\n------------------------\n 1.00000000000000000000\n(1 row)\n\nTime: 183.865 ms\npostgres=# SELECT AVG(i0) FROM (select i0 from zlayout offset 0) x; --\nzedstore\n avg\n------------------------\n 1.00000000000000000000\n(1 row)\n\nTime: 47.624 ms",
"msg_date": "Mon, 8 Apr 2019 19:26:53 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Hi,\n\nOn 09.04.2019 3:27, Ashwin Agrawal wrote:\n> Heikki and I have been hacking recently for few weeks to implement\n> in-core columnar storage for PostgreSQL. Here's the design and initial\n> implementation of Zedstore, compressed in-core columnar storage (table\n> access method). Attaching the patch and link to github branch [1] to\n> follow along.\n\nThank you for publishing this patch. IMHO Postgres is really missing \nnormal support of columnar store and table access method\nAPI is the best way of integrating it.\n\nI wanted to compare memory footprint and performance of zedstore with \nstandard Postgres heap and my VOPS extension.\nAs test data I used TPC-H benchmark (actually only one lineitem table \ngenerated with tpch-dbgen utility with scale factor 10 (~8Gb database).\nI attached script which I have use to populate data (you have to to \ndownload, build and run tpch-dbgen yourself, also you can comment code \nrelated with VOPS).\nUnfortunately I failed to load data in zedstore:\n\npostgres=# insert into zedstore_lineitem_projection (select \nl_shipdate,l_quantity,l_extendedprice,l_discount,l_tax,l_returnflag::\"char\",l_linestatus::\"char\" \nfrom lineitem);\npsql: ERROR: compression failed. what now?\nTime: 237804.775 ms (03:57.805)\n\n\nThen I try to check if there is something in \nzedstore_lineitem_projection table:\n\npostgres=# select count(*) from zedstore_lineitem_projection;\npsql: WARNING: terminating connection because of crash of another \nserver process\nDETAIL: The postmaster has commanded this server process to roll back \nthe current transaction and exit, because another server process exited \nabnormally and possibly corrupted shared memory.\nHINT: In a moment you should be able to reconnect to the database and \nrepeat your command.\npsql: server closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Failed.\nTime: 145710.828 ms (02:25.711)\n\n\nThe backend consumed 16GB of RAM and 16Gb of swap and was killed by the OOM \nkiller (undo log?)\nA subsequent attempt to run the same command failed with the following \nerror:\n\npostgres=# select count(*) from zedstore_lineitem_projection;\npsql: ERROR: unexpected level encountered when descending tree\n\n\nSo the only thing I can do at this moment is report size of tables on \nthe disk:\n\npostgres=# select pg_relation_size('lineitem');\n pg_relation_size\n------------------\n 10455441408\n(1 row)\n\n\npostgres=# select pg_relation_size('lineitem_projection');\n pg_relation_size\n------------------\n 3129974784\n(1 row)\n\npostgres=# select pg_relation_size('vops_lineitem_projection');\n pg_relation_size\n------------------\n 1535647744\n(1 row)\n\npostgres=# select pg_relation_size('zedstore_lineitem_projection');\n pg_relation_size\n------------------\n 2303688704\n(1 row)\n\n\nBut I do not know how much data was actually loaded in the zedstore table...\nActually the main question is why this table is not empty if the INSERT \nstatement failed?\n\nPlease let me know if I can somehow help you to reproduce and \ninvestigate the problem.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 9 Apr 2019 17:09:21 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On 09.04.2019 17:09, Konstantin Knizhnik wrote:\n> Hi,\n>\n> On 09.04.2019 3:27, Ashwin Agrawal wrote:\n>> Heikki and I have been hacking recently for few weeks to implement\n>> in-core columnar storage for PostgreSQL. Here's the design and initial\n>> implementation of Zedstore, compressed in-core columnar storage (table\n>> access method). Attaching the patch and link to github branch [1] to\n>> follow along.\n>\n> Thank you for publishing this patch. IMHO Postgres is really missing \n> normal support of columnar store and table access method\n> API is the best way of integrating it.\n>\n> I wanted to compare memory footprint and performance of zedstore with \n> standard Postgres heap and my VOPS extension.\n> As test data I used TPC-H benchmark (actually only one lineitem table \n> generated with tpch-dbgen utility with scale factor 10 (~8Gb database).\n> I attached script which I have use to populate data (you have to to \n> download, build and run tpch-dbgen yourself, also you can comment code \n> related with VOPS).\n> Unfortunately I failed to load data in zedstore:\n>\n> postgres=# insert into zedstore_lineitem_projection (select \n> l_shipdate,l_quantity,l_extendedprice,l_discount,l_tax,l_returnflag::\"char\",l_linestatus::\"char\" \n> from lineitem);\n> psql: ERROR: compression failed. 
what now?\n> Time: 237804.775 ms (03:57.805)\n>\n>\n> Then I try to check if there is something in \n> zedstore_lineitem_projection table:\n>\n> postgres=# select count(*) from zedstore_lineitem_projection;\n> psql: WARNING: terminating connection because of crash of another \n> server process\n> DETAIL: The postmaster has commanded this server process to roll back \n> the current transaction and exit, because another server process \n> exited abnormally and possibly corrupted shared memory.\n> HINT: In a moment you should be able to reconnect to the database and \n> repeat your command.\n> psql: server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> Time: 145710.828 ms (02:25.711)\n>\n>\n> Backend consumes 16GB of RAM and 16Gb of swap and was killed by OOM \n> killer (undo log?)\n> Subsequent attempt to run the same command is failed with the \n> following error:\n>\n> postgres=# select count(*) from zedstore_lineitem_projection;\n> psql: ERROR: unexpected level encountered when descending tree\n>\n>\n> So the only thing I can do at this moment is report size of tables on \n> the disk:\n>\n> postgres=# select pg_relation_size('lineitem');\n> pg_relation_size\n> ------------------\n> 10455441408\n> (1 row)\n>\n>\n> postgres=# select pg_relation_size('lineitem_projection');\n> pg_relation_size\n> ------------------\n> 3129974784\n> (1 row)\n>\n> postgres=# select pg_relation_size('vops_lineitem_projection');\n> pg_relation_size\n> ------------------\n> 1535647744\n> (1 row)\n>\n> postgres=# select pg_relation_size('zedstore_lineitem_projection');\n> pg_relation_size\n> ------------------\n> 2303688704\n> (1 row)\n>\n>\n> But I do not know how much data was actually loaded in zedstore table...\n> Actually the main question is why this table is not empty if INSERT \n> statement was failed?\n>\n> Please let me know 
if I can somehow help you to reproduce and \n> investigate the problem.\n>\n\nLooks like the original problem was caused by internal postgres \ncompressor: I have not configured Postgres to use lz4.\nWhen I configured Postgres --with-lz4, data was correctly inserted in \nzedstore table, but looks it is not compressed at all:\n\npostgres=# select pg_relation_size('zedstore_lineitem_projection');\n pg_relation_size\n------------------\n 9363010640\n\nNo wonder that zedstore shows the worst results:\n\nlineitem 6240.261 ms\nlineitem_projection 5390.446 ms\nzedstore_lineitem_projection 23310.341 ms\nvops_lineitem_projection 439.731 ms\n\n\nUpdated version of vstore_bench.sql is attached (sorry, there was some \nerrors in previous version of this script).\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 9 Apr 2019 18:00:39 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On 09/04/2019 18:00, Konstantin Knizhnik wrote:\n> On 09.04.2019 17:09, Konstantin Knizhnik wrote:\n>> standard Postgres heap and my VOPS extension.\n>> As test data I used TPC-H benchmark (actually only one lineitem table\n>> generated with tpch-dbgen utility with scale factor 10 (~8Gb database).\n>> I attached script which I have use to populate data (you have to to\n>> download, build and run tpch-dbgen yourself, also you can comment code\n>> related with VOPS).\n\nCool, thanks!\n\n>> Unfortunately I failed to load data in zedstore:\n>>\n>> postgres=# insert into zedstore_lineitem_projection (select\n>> l_shipdate,l_quantity,l_extendedprice,l_discount,l_tax,l_returnflag::\"char\",l_linestatus::\"char\"\n>> from lineitem);\n>> psql: ERROR: compression failed. what now?\n>> Time: 237804.775 ms (03:57.805)\n\nYeah, it's still early days, it will crash and burn in a lot of cases. \nWe wanted to publish this early, to gather ideas and comments on the \nhigh level design, and to validate that the table AM API that's in v12 \nis usable.\n\n> Looks like the original problem was caused by internal postgres\n> compressor: I have not configured Postgres to use lz4.\n> When I configured Postgres --with-lz4, data was correctly inserted in\n> zedstore table, but looks it is not compressed at all:\n> \n> postgres=# select pg_relation_size('zedstore_lineitem_projection');\n> pg_relation_size\n> ------------------\n> 9363010640\n\nThe single-insert codepath isn't very optimized yet. If you populate the \ntable with large \"INSERT ... SELECT ...\", you end up with a huge undo \nlog. Try loading it with COPY.\n\nYou can also see how many pages of each type there is with:\n\nselect count(*), pg_zs_page_type('zedstore_lineitem_projection', g)\n from generate_series(0, pg_table_size('zedstore_lineitem_projection') \n/ 8192 - 1) g group by 2;\n\n- Heikki\n\n\n",
"msg_date": "Tue, 9 Apr 2019 18:08:40 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "\n\nOn 09.04.2019 18:08, Heikki Linnakangas wrote:\n> On 09/04/2019 18:00, Konstantin Knizhnik wrote:\n>> On 09.04.2019 17:09, Konstantin Knizhnik wrote:\n>>> standard Postgres heap and my VOPS extension.\n>>> As test data I used TPC-H benchmark (actually only one lineitem table\n>>> generated with tpch-dbgen utility with scale factor 10 (~8Gb database).\n>>> I attached script which I have use to populate data (you have to to\n>>> download, build and run tpch-dbgen yourself, also you can comment code\n>>> related with VOPS).\n>\n> Cool, thanks!\n>\n>>> Unfortunately I failed to load data in zedstore:\n>>>\n>>> postgres=# insert into zedstore_lineitem_projection (select\n>>> l_shipdate,l_quantity,l_extendedprice,l_discount,l_tax,l_returnflag::\"char\",l_linestatus::\"char\" \n>>>\n>>> from lineitem);\n>>> psql: ERROR: compression failed. what now?\n>>> Time: 237804.775 ms (03:57.805)\n>\n> Yeah, it's still early days, it will crash and burn in a lot of cases. \n> We wanted to publish this early, to gather ideas and comments on the \n> high level design, and to validate that the table AM API that's in v12 \n> is usable.\n>\n>> Looks like the original problem was caused by internal postgres\n>> compressor: I have not configured Postgres to use lz4.\n>> When I configured Postgres --with-lz4, data was correctly inserted in\n>> zedstore table, but looks it is not compressed at all:\n>>\n>> postgres=# select pg_relation_size('zedstore_lineitem_projection');\n>> pg_relation_size\n>> ------------------\n>> 9363010640\n>\n> The single-insert codepath isn't very optimized yet. If you populate \n> the table with large \"INSERT ... SELECT ...\", you end up with a huge \n> undo log. 
Try loading it with COPY.\n>\n> You can also see how many pages of each type there is with:\n>\n> select count(*), pg_zs_page_type('zedstore_lineitem_projection', g)\n> from generate_series(0, \n> pg_table_size('zedstore_lineitem_projection') / 8192 - 1) g group by 2;\n>\n> - Heikki\n\npostgres=# copy zedstore_lineitem from '/mnt/data/lineitem.tbl' \ndelimiter '|' csv;\nCOPY 59986052\nTime: 232802.257 ms (03:52.802)\npostgres=# select pg_relation_size('zedstore_lineitem');\n pg_relation_size\n------------------\n 10346504192\n(1 row)\npostgres=# select count(*), pg_zs_page_type('zedstore_lineitem', g)\n from generate_series(0, pg_table_size('zedstore_lineitem') / 8192 - \n1) g group by 2;\n count | pg_zs_page_type\n---------+-----------------\n 1 | META\n 1262308 | BTREE\n 692 | UNDO\n(3 rows)\n\nAnd now performance is much worse:\nTime: 99819.476 ms (01:39.819)\n\nIt is strange, because the main advantage of columnar store is that it \nhas to fetch only accessed rows.\nWhat I see is that in non-parallel mode (max_parallel_workers_per_gather \n= 0)\nthe backend consumes about 11GB of memory. It fits in my desktop RAM (16GB) \nand speed is ~58 seconds.\nBut once I start 4 parallel workers, they cause huge swapping:\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ \nCOMMAND\n28195 knizhnik 20 0 11.823g 6.553g 5.072g D 7.6 42.2 0:17.19 \npostgres\n28074 knizhnik 20 0 11.848g 6.726g 5.223g D 7.3 43.3 4:14.96 \npostgres\n28192 knizhnik 20 0 11.854g 6.586g 5.075g D 7.3 42.4 0:17.18 \npostgres\n28193 knizhnik 20 0 11.870g 6.594g 5.064g D 7.3 42.4 0:17.19 \npostgres\n28194 knizhnik 20 0 11.854g 6.589g 5.078g D 7.3 42.4 0:17.09 \npostgres\n\nwhich is also strange because the data should be present in shared buffers.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Tue, 9 Apr 2019 18:45:26 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On 2019-Apr-09, Konstantin Knizhnik wrote:\n\n> On 09.04.2019 3:27, Ashwin Agrawal wrote:\n> > Heikki and I have been hacking recently for few weeks to implement\n> > in-core columnar storage for PostgreSQL. Here's the design and initial\n> > implementation of Zedstore, compressed in-core columnar storage (table\n> > access method). Attaching the patch and link to github branch [1] to\n> > follow along.\n> \n> Thank you for publishing this patch. IMHO Postgres is really missing normal\n> support of columnar store\n\nYep.\n\n> and table access method API is the best way of integrating it.\n\nThis is not surprising, considering that columnar store is precisely the\nreason for starting the work on table AMs.\n\nWe should certainly look into integrating some sort of columnar storage\nin mainline. Not sure which of zedstore or VOPS is the best candidate,\nor maybe we'll have some other proposal. My feeling is that having more\nthan one is not useful; if there are optimizations to one that can be\nborrowed from the other, let's do that instead of duplicating effort.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 9 Apr 2019 11:51:21 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "\n\nOn 09.04.2019 18:51, Alvaro Herrera wrote:\n> On 2019-Apr-09, Konstantin Knizhnik wrote:\n>\n>> On 09.04.2019 3:27, Ashwin Agrawal wrote:\n>>> Heikki and I have been hacking recently for few weeks to implement\n>>> in-core columnar storage for PostgreSQL. Here's the design and initial\n>>> implementation of Zedstore, compressed in-core columnar storage (table\n>>> access method). Attaching the patch and link to github branch [1] to\n>>> follow along.\n>> Thank you for publishing this patch. IMHO Postgres is really missing normal\n>> support of columnar store\n> Yep.\n>\n>> and table access method API is the best way of integrating it.\n> This is not surprising, considering that columnar store is precisely the\n> reason for starting the work on table AMs.\n>\n> We should certainly look into integrating some sort of columnar storage\n> in mainline. Not sure which of zedstore or VOPS is the best candidate,\n> or maybe we'll have some other proposal. My feeling is that having more\n> than one is not useful; if there are optimizations to one that can be\n> borrowed from the other, let's do that instead of duplicating effort.\n>\nThere are two different aspects:\n1. Store format.\n2. Vector execution.\n\n1. VOPS is using mixed format, something similar with Apache parquet.\nTuples are stored vertically, but only inside one page.\nIt tries to minimize trade-offs between true horizontal and true \nvertical storage:\nfirst is most optimal for selecting all rows, while second - for \nselecting small subset of rows.\nTo make this approach more efficient, it is better to use large page \nsize - default Postgres 8k pages is not enough.\n\n From my point of view such format is better than pure vertical storage \nwhich will be very inefficient if query access larger number of columns.\nThis problem can be somehow addressed by creating projections: grouping \nseveral columns together. But it requires more space for storing \nmultiple projections.\n\n2. 
It doesn't matter which format we choose: to take all advantages of \nvertical representation we need to use vector operations.\nAnd the Postgres executor doesn't support them now. This is why VOPS is \nusing some hacks, which is definitely not good and does not work in all cases.\nzedstore is not using such hacks and ... this is why it can never reach \nVOPS performance.\n\nThe right solution is to add vector operations support to the Postgres \nplanner and executor.\nBut that is much harder than developing the columnar store itself.\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Tue, 9 Apr 2019 19:13:32 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On 09/04/2019 18:00, Konstantin Knizhnik wrote:\n> Looks like the original problem was caused by internal postgres\n> compressor: I have not configured Postgres to use lz4.\n> When I configured Postgres --with-lz4, data was correctly inserted in\n> zedstore table, but looks it is not compressed at all:\n> \n> postgres=# select pg_relation_size('zedstore_lineitem_projection');\n> pg_relation_size\n> ------------------\n> 9363010640\n> \n> No wonder that zedstore shows the worst results:\n> \n> lineitem 6240.261 ms\n> lineitem_projection 5390.446 ms\n> zedstore_lineitem_projection 23310.341 ms\n> vops_lineitem_projection 439.731 ms\n> \n> Updated version of vstore_bench.sql is attached (sorry, there was some\n> errors in previous version of this script).\n\nI tried this quickly, too. With default work_mem and no parallelism, and \n1 gb table size, it seems that the query chooses a different plan with \nheap and zedstore, with a sort+group for zedstore and hash agg for heap. \nThere's no ANALYZE support in zedstore yet, and we haven't given much \nthought to parallelism either. With work_mem='1GB' and no parallelism, \nboth queries use a hash agg, and the numbers are much closer than what \nyou saw, about 6 s for heap, and 9 s for zedstore.\n\n- Heikki\n\n\n",
"msg_date": "Tue, 9 Apr 2019 19:19:57 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "\n\nOn 09.04.2019 19:19, Heikki Linnakangas wrote:\n> On 09/04/2019 18:00, Konstantin Knizhnik wrote:\n>> Looks like the original problem was caused by internal postgres\n>> compressor: I have not configured Postgres to use lz4.\n>> When I configured Postgres --with-lz4, data was correctly inserted in\n>> zedstore table, but looks it is not compressed at all:\n>>\n>> postgres=# select pg_relation_size('zedstore_lineitem_projection');\n>> pg_relation_size\n>> ------------------\n>> 9363010640\n>>\n>> No wonder that zedstore shows the worst results:\n>>\n>> lineitem 6240.261 ms\n>> lineitem_projection 5390.446 ms\n>> zedstore_lineitem_projection 23310.341 ms\n>> vops_lineitem_projection 439.731 ms\n>>\n>> Updated version of vstore_bench.sql is attached (sorry, there was some\n>> errors in previous version of this script).\n>\n> I tried this quickly, too. With default work_mem and no parallelism, \n> and 1 gb table size, it seems that the query chooses a different plan \n> with heap and zedstore, with a sort+group for zedstore and hash agg \n> for heap. There's no ANALYZE support in zedstore yet, and we haven't \n> given much thought to parallelism either. With work_mem='1GB' and no \n> parallelism, both queries use a hash agg, and the numbers are much \n> closer than what you saw, about 6 s for heap, and 9 s for zedstore.\n>\n> - Heikki\nYes, you were right. The plan for zedstore uses GroupAggregate instead \nof HashAggregate.\nIncreasing work_mem forces the optimizer to use HashAggregate in all cases.\nBut it doesn't prevent memory overflow in my case.\nAnd it is very strange to me, because there are just 4 groups in the \nresult, so it should not consume any memory.\n\nYet another strange thing is that the size of the zedstore table is 10Gb \naccording to pg_relation_size.\nThe Q1 query accesses only some subset of \"lineitem\" columns, not \ntouching the largest ones (with text).\nI have configured 12Gb shared buffers. And all this 11Gb are used! 
Looks \nlike all columns are fetched from the disk.\nAnd it looks like, apart from this 11Gb of shared data, the backend (and each \nparallel worker) is also consuming several gigabytes of heap memory.\nAs a result the total size of used memory during parallel query execution \nwith 4 workers exceeds 20GB and causes terrible swapping on my system.\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Tue, 9 Apr 2019 19:54:03 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Tue, Apr 9, 2019 at 11:51 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> This is not surprising, considering that columnar store is precisely the\n> reason for starting the work on table AMs.\n>\n> We should certainly look into integrating some sort of columnar storage\n> in mainline. Not sure which of zedstore or VOPS is the best candidate,\n> or maybe we'll have some other proposal. My feeling is that having more\n> than one is not useful; if there are optimizations to one that can be\n> borrowed from the other, let's do that instead of duplicating effort.\n\nI think that conclusion may be premature. There seem to be a bunch of\ndifferent ways of doing columnar storage, so I don't know how we can\nbe sure that one size will fit all, or that the first thing we accept\nwill be the best thing.\n\nOf course, we probably do not want to accept a ton of storage manager\nimplementations in core. I think if people propose implementations\nthat are poor quality, or missing important features, or don't have\nsignificantly different use cases from the ones we've already got,\nit's reasonable to reject those. But I wouldn't be prepared to say\nthat if we have two significantly different column stores that are both\nawesome code with a complete feature set and significantly disjoint\nuse cases, we should reject the second one just because it is also a\ncolumn store. I think that won't get out of control because few\npeople will be able to produce really high-quality implementations.\n\nThis stuff is hard, which I think is also why we only have 6.5 index\nAMs in core after many, many years. And our standards have gone up\nover the years - not all of those would pass muster if they were\nproposed today.\n\nBTW, can I express a small measure of disappointment that the name for\nthe thing under discussion on this thread chose to be called\n\"zedstore\"? 
That seems to invite confusion with \"zheap\", especially\nin parts of the world where the last letter of the alphabet is\npronounced \"zed,\" where people are going to say zed-heap and\nzed-store. Brr.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 9 Apr 2019 14:29:09 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Tue, Apr 9, 2019 at 11:29 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Apr 9, 2019 at 11:51 AM Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n> > This is not surprising, considering that columnar store is precisely the\n> > reason for starting the work on table AMs.\n> >\n> > We should certainly look into integrating some sort of columnar storage\n> > in mainline. Not sure which of zedstore or VOPS is the best candidate,\n> > or maybe we'll have some other proposal. My feeling is that having more\n> > than one is not useful; if there are optimizations to one that can be\n> > borrowed from the other, let's do that instead of duplicating effort.\n>\n> I think that conclusion may be premature. There seem to be a bunch of\n> different ways of doing columnar storage, so I don't know how we can\n> be sure that one size will fit all, or that the first thing we accept\n> will be the best thing.\n>\n> Of course, we probably do not want to accept a ton of storage manager\n> implementations is core. I think if people propose implementations\n> that are poor quality, or missing important features, or don't have\n> significantly different use cases from the ones we've already got,\n> it's reasonable to reject those. But I wouldn't be prepared to say\n> that if we have two significantly different column store that are both\n> awesome code with a complete feature set and significantly disjoint\n> use cases, we should reject the second one just because it is also a\n> column store. I think that won't get out of control because few\n> people will be able to produce really high-quality implementations.\n>\n> This stuff is hard, which I think is also why we only have 6.5 index\n> AMs in core after many, many years. 
And our standards have gone up\n> over the years - not all of those would pass muster if they were\n> proposed today.\n>\n\n+1\n\n\n> BTW, can I express a small measure of disappointment that the name for\n> the thing under discussion on this thread chose to be called\n> \"zedstore\"? That seems to invite confusion with \"zheap\", especially\n> in parts of the world where the last letter of the alphabet is\n> pronounced \"zed,\" where people are going to say zed-heap and\n> zed-store. Brr.\n>\n\nSurprised its felt this thread would initiate the invitation to confusion.\nBased on past internal and meetup discussions for few quite sometime now,\nthe confusion already exists for zheap pronunciation because of the reason\nmentioned, as last letter is not pronounced universally same. Hence we\nexplicitly called it zedstore to learn from and make the pronunciation\nworld wide universal for new thing atleast.",
"msg_date": "Tue, 9 Apr 2019 13:24:37 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Tue, Apr 9, 2019 at 9:13 AM Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> wrote:\n\n>\n>\n> On 09.04.2019 18:51, Alvaro Herrera wrote:\n> > On 2019-Apr-09, Konstantin Knizhnik wrote:\n> >\n> >> On 09.04.2019 3:27, Ashwin Agrawal wrote:\n> >>> Heikki and I have been hacking recently for few weeks to implement\n> >>> in-core columnar storage for PostgreSQL. Here's the design and initial\n> >>> implementation of Zedstore, compressed in-core columnar storage (table\n> >>> access method). Attaching the patch and link to github branch [1] to\n> >>> follow along.\n> >> Thank you for publishing this patch. IMHO Postgres is really missing\n> normal\n> >> support of columnar store\n> > Yep.\n> >\n> >> and table access method API is the best way of integrating it.\n> > This is not surprising, considering that columnar store is precisely the\n> > reason for starting the work on table AMs.\n> >\n> > We should certainly look into integrating some sort of columnar storage\n> > in mainline. Not sure which of zedstore or VOPS is the best candidate,\n> > or maybe we'll have some other proposal. My feeling is that having more\n> > than one is not useful; if there are optimizations to one that can be\n> > borrowed from the other, let's do that instead of duplicating effort.\n> >\n> There are two different aspects:\n> 1. Store format.\n> 2. Vector execution.\n>\n> 1. 
VOPS is using mixed format, something similar with Apache parquet.\n> Tuples are stored vertically, but only inside one page.\n> It tries to minimize trade-offs between true horizontal and true\n> vertical storage:\n> first is most optimal for selecting all rows, while second - for\n> selecting small subset of rows.\n> To make this approach more efficient, it is better to use large page\n> size - default Postgres 8k pages is not enough.\n>\n> From my point of view such format is better than pure vertical storage\n> which will be very inefficient if query access larger number of columns.\n> This problem can be somehow addressed by creating projections: grouping\n> several columns together. But it requires more space for storing\n> multiple projections.\n>\n\nRight, storing all the columns in single page doens't give any savings on\nIO.\n\n2. Doesn't matter which format we choose, to take all advantages of\n> vertical representation we need to use vector operations.\n> And Postgres executor doesn't support them now. This is why VOPS is\n> using some hacks, which is definitely not good and not working in all\n> cases.\n> zedstore is not using such hacks and ... this is why it never can reach\n> VOPS performance.\n>\n\nVectorized execution is orthogonal to storage format. It can be even\napplied to row store and performance gained. Similarly column store without\nvectorized execution also gives performance gain better compression rations\nand such benefits. Column store clubbed with vecotorized execution makes it\nlot more performant agree. 
Zedstore currently is focused to have AM piece\nin place, which fits the postgres ecosystem and supports all the features\nheap does.",
"msg_date": "Tue, 9 Apr 2019 14:03:09 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On 09/04/2019 23:24, Ashwin Agrawal wrote:\n> BTW, can I express a small measure of disappointment that the name for\n> the thing under discussion on this thread chose to be called\n> \"zedstore\"? That seems to invite confusion with \"zheap\", especially\n> in parts of the world where the last letter of the alphabet is\n> pronounced \"zed,\" where people are going to say zed-heap and\n> zed-store. Brr.\n> \n> Surprised its felt this thread would initiate the invitation to \n> confusion. Based on past internal and meetup discussions for few quite \n> sometime now, the confusion already exists for zheap pronunciation \n> because of the reason mentioned, as last letter is not pronounced \n> universally same. Hence we explicitly called it zedstore to learn from \n> and make the pronunciation world wide universal for new thing atleast.\n\nYeah, you can blame me for the name. It's a pun on zheap. I'm hoping we \ncome up with a better name before this matures; I'm thinking it could be \njust \"column store\" or something like that in the end, but it's good to \nhave a more unique name during development.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 10 Apr 2019 00:08:11 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "C-Tree?\n\nPeter Geoghegan\n(Sent from my phone)",
"msg_date": "Tue, 9 Apr 2019 14:15:43 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Tue, Apr 9, 2019 at 5:57 AM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n>\n> Heikki and I have been hacking recently for few weeks to implement\n> in-core columnar storage for PostgreSQL. Here's the design and initial\n> implementation of Zedstore, compressed in-core columnar storage (table\n> access method). Attaching the patch and link to github branch [1] to\n> follow along.\n>\n> The objective is to gather feedback on design and approach to the\n> same. The implementation has core basic pieces working but not close\n> to complete.\n>\n> Big thank you to Andres, Haribabu and team for the table access method\n> API's. Leveraged the API's for implementing zedstore, and proves API\n> to be in very good shape. Had to enhance the same minimally but\n> in-general didn't had to touch executor much.\n>\n> Motivations / Objectives\n>\n> * Performance improvement for queries selecting subset of columns\n> (reduced IO).\n> * Reduced on-disk footprint compared to heap table. Shorter tuple\n> headers and also leveraging compression of similar type data\n> * Be first-class citizen in the Postgres architecture (tables data can\n> just independently live in columnar storage)\n> * Fully MVCC compliant\n> * All Indexes supported\n> * Hybrid row-column store, where some columns are stored together, and\n> others separately. Provide flexibility of granularity on how to\n> divide the columns. Columns accessed together can be stored\n> together.\n> * Provide better control over bloat (similar to zheap)\n> * Eliminate need for separate toast tables\n> * Faster add / drop column or changing data type of column by avoiding\n> full rewrite of the table.\n>\n> High-level Design - B-trees for the win!\n> ========================================\n>\n> To start simple, let's ignore column store aspect for a moment and\n> consider it as compressed row store. 
The column store is natural\n> extension of this concept, explained in next section.\n>\n> The basic on-disk data structure leveraged is a B-tree, indexed by\n> TID. BTree being a great data structure, fast and versatile. Note this\n> is not referring to existing Btree indexes, but instead net new\n> separate BTree for table data storage.\n>\n> TID - logical row identifier:\n> TID is just a 48-bit row identifier. The traditional division into\n> block and offset numbers is meaningless. In order to find a tuple with\n> a given TID, one must always descend the B-tree. Having logical TID\n> provides flexibility to move the tuples around different pages on page\n> splits or page merges can be performed.\n>\n> The internal pages of the B-tree are super simple and boring. Each\n> internal page just stores an array of TID and downlink pairs. Let's\n> focus on the leaf level. Leaf blocks have short uncompressed header,\n> followed by btree items. Two kinds of items exist:\n>\n> - plain item, holds one tuple or one datum, uncompressed payload\n> - a \"container item\", holds multiple plain items, compressed payload\n>\n> +-----------------------------\n> | Fixed-size page header:\n> |\n> | LSN\n> | TID low and hi key (for Lehman & Yao B-tree operations)\n> | left and right page pointers\n> |\n> | Items:\n> |\n> | TID | size | flags | uncompressed size | lastTID | payload (container item)\n> | TID | size | flags | uncompressed size | lastTID | payload (container item)\n> | TID | size | flags | undo pointer | payload (plain item)\n> | TID | size | flags | undo pointer | payload (plain item)\n> | ...\n> |\n> +----------------------------\n>\n> Row store\n> ---------\n>\n> The tuples are stored one after another, sorted by TID. For each\n> tuple, we store its 48-bit TID, a undo record pointer, and the actual\n> tuple data uncompressed.\n>\n\nStoring undo record pointer with each tuple can take quite a lot of\nspace in cases where you can't compress them. 
Have you thought how\nwill you implement the multi-locker scheme in this design? In zheap,\nwe have used undo for the same and it is easy to imagine when you have\nseparate transaction slots for each transaction. I am not sure how\nwill you implement the same here.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Apr 2019 11:59:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On 10/04/2019 09:29, Amit Kapila wrote:\n> On Tue, Apr 9, 2019 at 5:57 AM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n>> Row store\n>> ---------\n>>\n>> The tuples are stored one after another, sorted by TID. For each\n>> tuple, we store its 48-bit TID, a undo record pointer, and the actual\n>> tuple data uncompressed.\n>>\n> \n> Storing undo record pointer with each tuple can take quite a lot of\n> space in cases where you can't compress them.\n\nYeah. This does depend on compression to eliminate the unused fields \nquite heavily at the moment. But you could have a flag in the header to \nindicate \"no undo pointer needed\", and just leave it out, when it's needed.\n\n> Have you thought how will you implement the multi-locker scheme in\n> this design? In zheap, we have used undo for the same and it is easy\n> to imagine when you have separate transaction slots for each\n> transaction. I am not sure how will you implement the same here.\nI've been thinking that the undo record would store all the XIDs \ninvolved. So if there are multiple lockers, the UNDO record would store \na list of XIDs. Alternatively, I suppose you could store multiple UNDO \npointers for the same tuple.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 10 Apr 2019 10:25:44 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "\n\nOn 10.04.2019 10:25, Heikki Linnakangas wrote:\n> On 10/04/2019 09:29, Amit Kapila wrote:\n>> On Tue, Apr 9, 2019 at 5:57 AM Ashwin Agrawal <aagrawal@pivotal.io> \n>> wrote:\n>>> Row store\n>>> ---------\n>>>\n>>> The tuples are stored one after another, sorted by TID. For each\n>>> tuple, we store its 48-bit TID, a undo record pointer, and the actual\n>>> tuple data uncompressed.\n>>>\n>>\n>> Storing undo record pointer with each tuple can take quite a lot of\n>> space in cases where you can't compress them.\n>\n> Yeah. This does depend on compression to eliminate the unused fields \n> quite heavily at the moment. But you could have a flag in the header \n> to indicate \"no undo pointer needed\", and just leave it out, when it's \n> needed.\n>\n>> Have you thought how will you implement the multi-locker scheme in\n>> this design? In zheap, we have used undo for the same and it is easy\n>> to imagine when you have separate transaction slots for each\n>> transaction. I am not sure how will you implement the same here.\n> I've been thinking that the undo record would store all the XIDs \n> involved. So if there are multiple lockers, the UNDO record would \n> store a list of XIDs. Alternatively, I suppose you could store \n> multiple UNDO pointers for the same tuple.\n>\n> - Heikki\n>\n>\n\nI also a little bit confused about UNDO records and MVCC support in \nZedstore. Actually columnar store is mostly needed for analytic for\nread-only or append-only data. 
One of the disadvantages of Postgres is \nquite larger per-record space overhead caused by MVCC.\nIt may be critical if you want to store huge timeseries with relatively \nsmall number of columns (like measurements of some sensor).\nIt will be nice to have storage format which reduce this overhead when \nit is not needed (data is not updated).\n\nRight now, even without UNDO pages, size of zedstore is larger than size \nof original Postgres table.\nIt seems to be very strange.\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Wed, 10 Apr 2019 10:38:13 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On 10/04/2019 10:38, Konstantin Knizhnik wrote:\n> I also a little bit confused about UNDO records and MVCC support in\n> Zedstore. Actually columnar store is mostly needed for analytic for\n> read-only or append-only data. One of the disadvantages of Postgres is\n> quite larger per-record space overhead caused by MVCC.\n> It may be critical if you want to store huge timeseries with relatively\n> small number of columns (like measurements of some sensor).\n> It will be nice to have storage format which reduce this overhead when\n> it is not needed (data is not updated).\n\nSure. Definitely something we need to optimize.\n\n> Right now, even without UNDO pages, size of zedstore is larger than size\n> of original Postgres table.\n> It seems to be very strange.\n\nIf you have a table with a lot of columns, but each column is small, \ne.g. lots of boolean columns, the item headers that zedstore currently \nstores for each datum take up a lot of space. We'll need to squeeze \nthose harder to make this competitive. Instead of storing a header for \neach datum, if a group of consecutive tuples have the same visibility \ninformation, we could store the header just once, with an array of the \ndatums, for example.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 10 Apr 2019 10:48:22 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 12:55 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 10/04/2019 09:29, Amit Kapila wrote:\n> > On Tue, Apr 9, 2019 at 5:57 AM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> >> Row store\n> >> ---------\n> >>\n> >> The tuples are stored one after another, sorted by TID. For each\n> >> tuple, we store its 48-bit TID, a undo record pointer, and the actual\n> >> tuple data uncompressed.\n> >>\n> >\n> > Storing undo record pointer with each tuple can take quite a lot of\n> > space in cases where you can't compress them.\n>\n> Yeah. This does depend on compression to eliminate the unused fields\n> quite heavily at the moment. But you could have a flag in the header to\n> indicate \"no undo pointer needed\", and just leave it out, when it's needed.\n>\n> > Have you thought how will you implement the multi-locker scheme in\n> > this design? In zheap, we have used undo for the same and it is easy\n> > to imagine when you have separate transaction slots for each\n> > transaction. I am not sure how will you implement the same here.\n> I've been thinking that the undo record would store all the XIDs\n> involved. So if there are multiple lockers, the UNDO record would store\n> a list of XIDs.\n>\n\nThis will be quite tricky. Whenever a new locker arrives, you first\nneed to fetch previous undo to see which all XIDs already have a lock\non it. Not only that, it will make discarding undo's way complicated.\n We have considered this approach before implementing the current\napproach in zheap.\n\n> Alternatively, I suppose you could store multiple UNDO\n> pointers for the same tuple.\n>\n\nThis will not only make the length of the tuple unnecessarily long but\nwould make it much harder to reclaim that space once the corresponding\nundo is discarded.\n\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Apr 2019 15:43:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On 9/04/19 12:27 PM, Ashwin Agrawal wrote:\n\n> Heikki and I have been hacking recently for few weeks to implement\n> in-core columnar storage for PostgreSQL. Here's the design and initial\n> implementation of Zedstore, compressed in-core columnar storage (table\n> access method). Attaching the patch and link to github branch [1] to\n> follow along.\n>\n>\n\nVery nice. I realize that it is very early days, but applying this patch \nI've managed to stumble over some compression bugs doing some COPY's:\n\nbenchz=# COPY dim1 FROM '/data0/dump/dim1.dat'\nUSING DELIMITERS ',';\npsql: ERROR: compression failed. what now?\nCONTEXT: COPY dim1, line 458\n\nThe log has:\n\n2019-04-11 15:48:43.976 NZST [2006] ERROR: XX000: compression failed. \nwhat now?\n2019-04-11 15:48:43.976 NZST [2006] CONTEXT: COPY dim1, line 458\n2019-04-11 15:48:43.976 NZST [2006] LOCATION: zs_compress_finish, \nzedstore_compression.c:287\n2019-04-11 15:48:43.976 NZST [2006] STATEMENT: COPY dim1 FROM \n'/data0/dump/dim1.dat'\n USING DELIMITERS ',';\n\nThe dataset is generated from and old DW benchmark I wrote \n(https://sourceforge.net/projects/benchw/). The row concerned looks like:\n\n457,457th interesting measure,1th measure \ntype,aqwycdevcmybxcnpwqgrdsmfelaxfpbhfxghamfezdiwfvneltvqlivstwralshsppcpchvdkdbraoxnkvexdbpyzgamajfp\n458,458th interesting measure,2th measure \ntype,bjgdsciehjvkxvxjqbhtdwtcftpfewxfhfkzjsdrdabbvymlctghsblxucezydghjrgsjjjnmmqhncvpwbwodhnzmtakxhsg\n\n\nI'll see if changing to LZ4 makes any different.\n\nbest wishes\n\nMark\n\n\n\n",
"msg_date": "Thu, 11 Apr 2019 16:01:45 +1200",
"msg_from": "Mark Kirkwood <mark.kirkwood@catalyst.net.nz>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "\nOn 11/04/19 4:01 PM, Mark Kirkwood wrote:\n> On 9/04/19 12:27 PM, Ashwin Agrawal wrote:\n>\n>> Heikki and I have been hacking recently for few weeks to implement\n>> in-core columnar storage for PostgreSQL. Here's the design and initial\n>> implementation of Zedstore, compressed in-core columnar storage (table\n>> access method). Attaching the patch and link to github branch [1] to\n>> follow along.\n>>\n>>\n>\n> Very nice. I realize that it is very early days, but applying this \n> patch I've managed to stumble over some compression bugs doing some \n> COPY's:\n>\n> benchz=# COPY dim1 FROM '/data0/dump/dim1.dat'\n> USING DELIMITERS ',';\n> psql: ERROR: compression failed. what now?\n> CONTEXT: COPY dim1, line 458\n>\n> The log has:\n>\n> 2019-04-11 15:48:43.976 NZST [2006] ERROR: XX000: compression failed. \n> what now?\n> 2019-04-11 15:48:43.976 NZST [2006] CONTEXT: COPY dim1, line 458\n> 2019-04-11 15:48:43.976 NZST [2006] LOCATION: zs_compress_finish, \n> zedstore_compression.c:287\n> 2019-04-11 15:48:43.976 NZST [2006] STATEMENT: COPY dim1 FROM \n> '/data0/dump/dim1.dat'\n> USING DELIMITERS ',';\n>\n> The dataset is generated from and old DW benchmark I wrote \n> (https://sourceforge.net/projects/benchw/). The row concerned looks like:\n>\n> 457,457th interesting measure,1th measure \n> type,aqwycdevcmybxcnpwqgrdsmfelaxfpbhfxghamfezdiwfvneltvqlivstwralshsppcpchvdkdbraoxnkvexdbpyzgamajfp\n> 458,458th interesting measure,2th measure \n> type,bjgdsciehjvkxvxjqbhtdwtcftpfewxfhfkzjsdrdabbvymlctghsblxucezydghjrgsjjjnmmqhncvpwbwodhnzmtakxhsg\n>\n>\n> I'll see if changing to LZ4 makes any different.\n>\n>\n\nThe COPY works with LZ4 configured.\n\n\n\n",
"msg_date": "Thu, 11 Apr 2019 16:08:14 +1200",
"msg_from": "Mark Kirkwood <mark.kirkwood@catalyst.net.nz>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "\n> On Apr 10, 2019, at 9:08 PM, Mark Kirkwood <mark.kirkwood@catalyst.net.nz> wrote:\n> \n> \n>> On 11/04/19 4:01 PM, Mark Kirkwood wrote:\n>>> On 9/04/19 12:27 PM, Ashwin Agrawal wrote:\n>>> \n>>> Heikki and I have been hacking recently for few weeks to implement\n>>> in-core columnar storage for PostgreSQL. Here's the design and initial\n>>> implementation of Zedstore, compressed in-core columnar storage (table\n>>> access method). Attaching the patch and link to github branch [1] to\n>>> follow along.\n>>> \n>>> \n>> \n>> Very nice. I realize that it is very early days, but applying this patch I've managed to stumble over some compression bugs doing some COPY's:\n>> \n>> benchz=# COPY dim1 FROM '/data0/dump/dim1.dat'\n>> USING DELIMITERS ',';\n>> psql: ERROR: compression failed. what now?\n>> CONTEXT: COPY dim1, line 458\n>> \n>> The log has:\n>> \n>> 2019-04-11 15:48:43.976 NZST [2006] ERROR: XX000: compression failed. what now?\n>> 2019-04-11 15:48:43.976 NZST [2006] CONTEXT: COPY dim1, line 458\n>> 2019-04-11 15:48:43.976 NZST [2006] LOCATION: zs_compress_finish, zedstore_compression.c:287\n>> 2019-04-11 15:48:43.976 NZST [2006] STATEMENT: COPY dim1 FROM '/data0/dump/dim1.dat'\n>> USING DELIMITERS ',';\n>> \n>> The dataset is generated from and old DW benchmark I wrote (https://urldefense.proofpoint.com/v2/url?u=https-3A__sourceforge.net_projects_benchw_&d=DwIDaQ&c=lnl9vOaLMzsy2niBC8-h_K-7QJuNJEsFrzdndhuJ3Sw&r=gxIaqms7ncm0pvqXLI_xjkgwSStxAET2rnZQpzba2KM&m=BgmTkDoY6SKOgODe8v6fpH4hs-wM0H91cLfrAfEL6C0&s=lLcXp_8h2bRb_OR4FT8kxD-FG9MaLBPU7M5aV9nQ7JY&e=). 
The row concerned looks like:\n>> \n>> 457,457th interesting measure,1th measure type,aqwycdevcmybxcnpwqgrdsmfelaxfpbhfxghamfezdiwfvneltvqlivstwralshsppcpchvdkdbraoxnkvexdbpyzgamajfp\n>> 458,458th interesting measure,2th measure type,bjgdsciehjvkxvxjqbhtdwtcftpfewxfhfkzjsdrdabbvymlctghsblxucezydghjrgsjjjnmmqhncvpwbwodhnzmtakxhsg\n>> \n>> \n>> I'll see if changing to LZ4 makes any different.\n>> \n>> \n> \n> The COPY works with LZ4 configured.\n\nThank you for trying it out. Yes, I noticed that for certain patterns pg_lzcompress() actually requires much larger output buffers; for one 86-byte source it required a 2296-byte output buffer. The current zedstore code doesn’t handle this case and errors out. LZ4 works fine for the same patterns, so I would highly recommend using LZ4, as its speed is very fast as well.\n\n",
"msg_date": "Wed, 10 Apr 2019 22:03:26 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On 11.04.2019 8:03, Ashwin Agrawal wrote:\n>> On Apr 10, 2019, at 9:08 PM, Mark Kirkwood <mark.kirkwood@catalyst.net.nz> wrote:\n>>\n>>\n>>> On 11/04/19 4:01 PM, Mark Kirkwood wrote:\n>>>> On 9/04/19 12:27 PM, Ashwin Agrawal wrote:\n>>>>\n>>>> Heikki and I have been hacking recently for few weeks to implement\n>>>> in-core columnar storage for PostgreSQL. Here's the design and initial\n>>>> implementation of Zedstore, compressed in-core columnar storage (table\n>>>> access method). Attaching the patch and link to github branch [1] to\n>>>> follow along.\n>>>>\n>>>>\n>>> Very nice. I realize that it is very early days, but applying this patch I've managed to stumble over some compression bugs doing some COPY's:\n>>>\n>>> benchz=# COPY dim1 FROM '/data0/dump/dim1.dat'\n>>> USING DELIMITERS ',';\n>>> psql: ERROR: compression failed. what now?\n>>> CONTEXT: COPY dim1, line 458\n>>>\n>>> The log has:\n>>>\n>>> 2019-04-11 15:48:43.976 NZST [2006] ERROR: XX000: compression failed. what now?\n>>> 2019-04-11 15:48:43.976 NZST [2006] CONTEXT: COPY dim1, line 458\n>>> 2019-04-11 15:48:43.976 NZST [2006] LOCATION: zs_compress_finish, zedstore_compression.c:287\n>>> 2019-04-11 15:48:43.976 NZST [2006] STATEMENT: COPY dim1 FROM '/data0/dump/dim1.dat'\n>>> USING DELIMITERS ',';\n>>>\n>>> The dataset is generated from and old DW benchmark I wrote (https://urldefense.proofpoint.com/v2/url?u=https-3A__sourceforge.net_projects_benchw_&d=DwIDaQ&c=lnl9vOaLMzsy2niBC8-h_K-7QJuNJEsFrzdndhuJ3Sw&r=gxIaqms7ncm0pvqXLI_xjkgwSStxAET2rnZQpzba2KM&m=BgmTkDoY6SKOgODe8v6fpH4hs-wM0H91cLfrAfEL6C0&s=lLcXp_8h2bRb_OR4FT8kxD-FG9MaLBPU7M5aV9nQ7JY&e=). 
The row concerned looks like:\n>>>\n>>> 457,457th interesting measure,1th measure type,aqwycdevcmybxcnpwqgrdsmfelaxfpbhfxghamfezdiwfvneltvqlivstwralshsppcpchvdkdbraoxnkvexdbpyzgamajfp\n>>> 458,458th interesting measure,2th measure type,bjgdsciehjvkxvxjqbhtdwtcftpfewxfhfkzjsdrdabbvymlctghsblxucezydghjrgsjjjnmmqhncvpwbwodhnzmtakxhsg\n>>>\n>>>\n>>> I'll see if changing to LZ4 makes any different.\n>>>\n>>>\n>> The COPY works with LZ4 configured.\n> Thank you for trying it out. Yes, noticed for certain patterns pg_lzcompress() actually requires much larger output buffers. Like for one 86 len source it required 2296 len output buffer. Current zedstore code doesn’t handle this case and errors out. LZ4 for same patterns works fine, would highly recommend using LZ4 only, as anyways speed is very fast as well with it.\n>\n\n\nThe internal Postgres lz compressor is really very inefficient compared \nwith other compression algorithms.\nBut in any case you should never assume that the size of compressed data \nwill be smaller than the size of the plain data.\nMoreover, if you are trying to compress already compressed data, the \nresult will almost always be larger.\nIf the size of the compressed data is larger than (or not significantly \nsmaller than) the size of the raw data, then you should store the original data.\n\nlz4 is actually very fast. 
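The store-raw fallback described above (never assume compression shrinks the data; keep the original bytes when it does not) can be sketched as follows. The toy run-length encoder and the function names are invented for illustration only; this is not the pg_lzcompress or LZ4 API, just a demonstration that "compressed" output can be larger than the input, so a fallback path is needed:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy run-length "compressor": worst case emits 2 output bytes per input
 * byte, so on incompressible input it EXPANDS the data.  (Hypothetical
 * sketch, not a real compressor.) */
static size_t toy_rle_compress(const uint8_t *src, size_t len, uint8_t *dst)
{
    size_t out = 0;
    for (size_t i = 0; i < len; )
    {
        uint8_t b = src[i];
        size_t run = 1;
        while (i + run < len && src[i + run] == b && run < 255)
            run++;
        dst[out++] = (uint8_t) run;   /* run length */
        dst[out++] = b;               /* repeated byte */
        i += run;
    }
    return out;
}

/* Store compressed only when it actually saves space; otherwise keep raw.
 * Returns 1 if 'dst' holds compressed data, 0 if it holds a raw copy. */
static int store_maybe_compressed(const uint8_t *src, size_t len,
                                  uint8_t *dst, size_t *dstlen)
{
    uint8_t tmp[4096];

    assert(len * 2 <= sizeof(tmp));   /* worst-case expansion must fit */
    size_t clen = toy_rle_compress(src, len, tmp);
    if (clen < len)
    {
        memcpy(dst, tmp, clen);
        *dstlen = clen;
        return 1;
    }
    memcpy(dst, src, len);            /* fallback: store original data */
    *dstlen = len;
    return 0;
}
```

Note the scratch buffer is sized for the worst case, mirroring the observation in the thread that the output buffer may need to be much larger than the source.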
But it doesn't provide a good compression ratio.\nThese are my results of compressing pgbench data using different compressors:\n\nConfiguration \tSize (Gb) \tTime (sec)\nno compression \t15.31 \t92\nzlib (default level) \t2.37 \t284\nzlib (best speed) \t2.43 \t191\npostgres internal lz \t3.89 \t214\nlz4 \t4.12 \t95\nsnappy \t5.18 \t99\nlzfse (apple) \t2.80 \t1099\nzstd \t1.69 \t125\n\nYou see that zstd provides an almost 2 times better compression ratio \nat almost the same speed.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 11 Apr 2019 11:46:09 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Tue, 9 Apr 2019 at 02:27, Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n\n> Heikki and I have been hacking recently for few weeks to implement\n> in-core columnar storage for PostgreSQL. Here's the design and initial\n> implementation of Zedstore, compressed in-core columnar storage (table\n> access method). Attaching the patch and link to github branch [1] to\n> follow along.\n>\n> The objective is to gather feedback on design and approach to the\n> same. The implementation has core basic pieces working but not close\n> to complete.\n>\n> Big thank you to Andres, Haribabu and team for the table access method\n> API's. Leveraged the API's for implementing zedstore, and proves API\n> to be in very good shape. Had to enhance the same minimally but\n> in-general didn't had to touch executor much.\n>\n> Motivations / Objectives\n>\n> * Performance improvement for queries selecting subset of columns\n> (reduced IO).\n> * Reduced on-disk footprint compared to heap table. Shorter tuple\n> headers and also leveraging compression of similar type data\n> * Be first-class citizen in the Postgres architecture (tables data can\n> just independently live in columnar storage)\n> * Fully MVCC compliant\n> * All Indexes supported\n> * Hybrid row-column store, where some columns are stored together, and\n> others separately. Provide flexibility of granularity on how to\n> divide the columns. Columns accessed together can be stored\n> together.\n> * Provide better control over bloat (similar to zheap)\n> * Eliminate need for separate toast tables\n> * Faster add / drop column or changing data type of column by avoiding\n> full rewrite of the table.\n>\n> High-level Design - B-trees for the win!\n> ========================================\n>\n> To start simple, let's ignore column store aspect for a moment and\n> consider it as compressed row store. 
The column store is a natural\n> extension of this concept, explained in the next section.\n>\n> The basic on-disk data structure leveraged is a B-tree, indexed by\n> TID. BTree being a great data structure, fast and versatile. Note this\n> is not referring to existing Btree indexes, but instead a net new\n> separate BTree for table data storage.\n>\n> TID - logical row identifier:\n> TID is just a 48-bit row identifier. The traditional division into\n> block and offset numbers is meaningless. In order to find a tuple with\n> a given TID, one must always descend the B-tree. Having a logical TID\n> provides the flexibility to move tuples around to different pages when page\n> splits or page merges are performed.\n>\n> The internal pages of the B-tree are super simple and boring. Each\n> internal page just stores an array of TID and downlink pairs. Let's\n> focus on the leaf level. Leaf blocks have a short uncompressed header,\n> followed by btree items. Two kinds of items exist:\n>\n> - plain item, holds one tuple or one datum, uncompressed payload\n> - a \"container item\", holds multiple plain items, compressed payload\n>\n> +-----------------------------\n> | Fixed-size page header:\n> |\n> | LSN\n> | TID low and hi key (for Lehman & Yao B-tree operations)\n> | left and right page pointers\n> |\n> | Items:\n> |\n> | TID | size | flags | uncompressed size | lastTID | payload (container\n> item)\n> | TID | size | flags | uncompressed size | lastTID | payload (container\n> item)\n> | TID | size | flags | undo pointer | payload (plain item)\n> | TID | size | flags | undo pointer | payload (plain item)\n> | ...\n> |\n> +----------------------------\n>\n> Row store\n> ---------\n>\n> The tuples are stored one after another, sorted by TID. For each\n> tuple, we store its 48-bit TID, an undo record pointer, and the actual\n> tuple data uncompressed.\n>\n> In uncompressed form, the page can be arbitrarily large. But after\n> compression, it must fit into a physical 8k block. 
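The 48-bit logical TID described above is just a number, not a (block, offset) pair. A minimal sketch of packing such a TID into a compact 6-byte on-page representation, as the item headers above suggest; the names (`zstid`, `zstid_pack`) are hypothetical, not zedstore's actual types:

```c
#include <assert.h>
#include <stdint.h>

/* A 48-bit logical row identifier, carried in a uint64 with only the low
 * 48 bits used.  (Illustrative sketch, not zedstore's real definition.) */
typedef uint64_t zstid;

#define ZSTID_MAX ((zstid) ((UINT64_C(1) << 48) - 1))

/* Pack a 48-bit TID into 6 bytes, e.g. for a compact on-page item header. */
static void zstid_pack(zstid tid, uint8_t out[6])
{
    for (int i = 0; i < 6; i++)
        out[i] = (uint8_t) (tid >> (8 * i));    /* little-endian byte order */
}

/* Recover the TID from its 6-byte on-page form. */
static zstid zstid_unpack(const uint8_t in[6])
{
    zstid tid = 0;
    for (int i = 0; i < 6; i++)
        tid |= (zstid) in[i] << (8 * i);
    return tid;
}
```

Because the TID is purely logical, nothing about this encoding ties a tuple to a physical page, which is what allows tuples to move during page splits and merges.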
If, on insert or\n> update of a tuple, the page cannot be compressed below 8k anymore, the\n> page is split. Note that because TIDs are logical rather than physical\n> identifiers, we can freely move tuples from one physical page to\n> another during page split. A tuple's TID never changes.\n>\n> The buffer cache caches compressed blocks. Likewise, WAL-logging,\n> full-page images etc. work on compressed blocks. Decompression is done\n> on-the-fly, as and when needed in backend-private memory, when\n> reading. For some compression schemes, like RLE encoding or delta encoding,\n> tuples can be constructed directly from compressed data.\n>\n> Column store\n> ------------\n>\n> A column store uses the same structure but we have *multiple* B-trees,\n> one for each column, all indexed by TID. The B-trees for all columns\n> are stored in the same physical file.\n>\n> A metapage at block 0 has links to the roots of the B-trees. Leaf\n> pages look the same, but instead of storing the whole tuple, they store\n> just a single attribute. To reconstruct a row with a given TID, a scan\n> descends down the B-trees for all the columns using that TID, and\n> fetches all attributes. Likewise, a sequential scan walks all the\n> B-trees in lockstep.\n>\n> So, in summary, one can imagine Zedstore as a forest of B-trees, one for each\n> column, all indexed by TIDs.\n>\n> This way of laying out the data also easily allows for hybrid\n> row-column store, where some columns are stored together, and others\n> have a dedicated B-tree. We need user-facing syntax to allow\n> specifying how to group the columns.\n>\n>\n> Main reasons for storing data this way\n> --------------------------------------\n>\n> * Layout the data/tuples in mapped fashion instead of keeping the\n> logical to physical mapping separate from actual data. 
So, keep the\n> meta-data and data logically in a single stream of the file, avoiding the\n> need for separate forks/files to store meta-data and data.\n>\n> * Stick to fixed size physical blocks. Variable size blocks pose the need\n> for increased logical to physical mapping maintenance, plus\n> restrictions on concurrency of writes and reads to files. Hence\n> adopt compression to fit fixed size blocks instead of the other way\n> round.\n>\n>\n> MVCC\n> ----\n> MVCC works very similarly to zheap in zedstore. Undo record pointers\n> are used to implement MVCC. Transaction information is not directly\n> stored with the data. In zheap, there's a small, fixed, number of\n> \"transaction slots\" on each page, but zedstore has an undo pointer with\n> each item directly; in normal cases, the compression squeezes this\n> down to almost nothing.\n>\n>\n> Implementation\n> ==============\n>\n> Insert:\n> Inserting a new row splits the row into datums. Then for the first column\n> we decide which block to insert it into, pick a TID for it, and\n> write an undo record for the same. The rest of the columns are inserted using\n> that same TID and point to the same undo position.\n>\n> Compression:\n> Items are added to the Btree in uncompressed form. If the page is full and a new\n> item can't be added, compression kicks in. Existing uncompressed items\n> (plain items) of the page are passed to the compressor for\n> compression. Already compressed items are added back as is. The page is\n> rewritten with the compressed data and the new item added to it. If even\n> after compression the item can't be added to the page, then a page split happens.\n>\n> Toast:\n> When an overly large datum is stored, it is divided into chunks, and\n> each chunk is stored on a dedicated toast page within the same\n> physical file. The toast pages of a datum form a list, each page has a\n> next/prev pointer.\n>\n> Select:\n> A property is added to the Table AM to convey if column projection is\n> leveraged by the AM for scans. 
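The toast scheme described above (a large datum cut into fixed-size chunks, with the chunks forming a linked list standing in for toast pages with next-page pointers) can be sketched roughly like this. All names and sizes are invented for illustration; zedstore's real toast pages live inside the relation file itself:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define TOAST_CHUNK_SIZE 8          /* tiny on purpose, for demonstration */

/* One "toast page": holds a slice of the datum and a link to the next. */
typedef struct ToastChunk
{
    struct ToastChunk *next;
    size_t len;
    char   data[TOAST_CHUNK_SIZE];
} ToastChunk;

/* Cut a large datum into a chain of fixed-size chunks. */
static ToastChunk *toast_datum(const char *datum, size_t len)
{
    ToastChunk *head = NULL, **tail = &head;

    for (size_t off = 0; off < len; off += TOAST_CHUNK_SIZE)
    {
        ToastChunk *c = calloc(1, sizeof(ToastChunk));  /* no OOM check: sketch */
        c->len = (len - off < TOAST_CHUNK_SIZE) ? len - off : TOAST_CHUNK_SIZE;
        memcpy(c->data, datum + off, c->len);
        *tail = c;
        tail = &c->next;
    }
    return head;
}

/* Walk the chunk list to reassemble the original datum; returns its length. */
static size_t detoast_datum(const ToastChunk *head, char *out)
{
    size_t off = 0;

    for (const ToastChunk *c = head; c != NULL; c = c->next)
    {
        memcpy(out + off, c->data, c->len);
        off += c->len;
    }
    return off;
}
```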
While scanning tables with an AM leveraging\n> this property, the executor parses the plan, leveraging the target list and\n> quals to find the required columns for the query. This list is passed down\n> to the AM on beginscan. Zedstore uses this column projection list to only\n> pull data from the selected columns. A virtual tuple table slot is used to\n> pass back the datums for the subset of columns.\n>\n> The current table AM API requires enhancement here to pass down the column\n> projection to the AM. The patch showcases two different ways for the same.\n>\n> * For sequential scans, a new beginscan_with_column_projection()\n> API was added. The executor checks the AM property and, if it leverages column\n> projection, uses this new API, else the normal beginscan() API.\n>\n> * For index scans, instead of modifying the begin scan API, a new\n> API was added to specifically pass the column projection list after calling begin\n> scan to populate the scan descriptor, but before fetching the tuples.\n>\n> Index Support:\n> Building an index also leverages the columnar storage and only scans the columns\n> required to build the index. Indexes work pretty similarly to heap\n> tables. Data is inserted into the table and the TID for the tuple gets\n> stored in the index. On index scans, the required column Btrees are scanned\n> for the given TID and the datums are passed back using a virtual tuple.\n>\n> Page Format:\n> A ZedStore table contains different kinds of pages, all in the same\n> file. Kinds of pages are meta-page, per-attribute btree internal and\n> leaf pages, UNDO log pages, and toast pages. Each page type has its own\n> distinct data storage format.\n>\n> Block 0 is always a metapage. It contains the block numbers of the\n> other data structures stored within the file, like the per-attribute\n> B-trees, and the UNDO log.\n>\n>\n> Enhancements to design:\n> =======================\n>\n> Instead of compressing all the tuples on a page in one batch, we could\n> store a small \"dictionary\", e.g. 
in the page header or meta-page, and use\n> it to compress each tuple separately. That could make random reads and\n> updates of individual tuples faster.\n>\n> When adding a column, we just need to create a new Btree for the newly added\n> column and link it to the meta-page. No existing content needs to be\n> rewritten.\n>\n> When a column is dropped, we can scan the B-tree of that column, and\n> immediately mark all the pages as free in the FSM. But we don't\n> actually have to scan the leaf level: all leaf tuples have a downlink\n> in the parent, so we can scan just the internal pages. Unless the\n> column is very wide, that's only a small fraction of the data. That\n> makes the space immediately reusable for new insertions, but it won't\n> return the space to the Operating System. In order to do that, we'd\n> still need to defragment, moving pages from the end of the file closer\n> to the beginning, and truncate the file.\n>\n> In this design, we only cache compressed pages in the page cache. If\n> we want to cache uncompressed pages instead, or in addition to that,\n> we need to invent a whole new kind of buffer cache that can deal\n> with variable-size blocks.\n>\n> If you do a lot of updates, the file can get fragmented, with lots of\n> unused space on pages. Losing the correlation between TIDs and\n> physical order is also bad, because it will make SeqScans slow, as\n> they're not actually doing sequential I/O anymore. We can write a\n> defragmenter to fix things up. Half-empty pages can be merged, and\n> pages can be moved to restore TID/physical correlation. 
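The page-merge part of the defragmentation idea above, which is safe precisely because TIDs are logical, can be simulated with a single greedy pass over per-page usage counts: two neighbouring leaf pages (adjacent in TID order) are merged whenever their combined payload fits in one physical block. This is a simulation of the approach, not zedstore code; the block capacity is abstract:

```c
#include <assert.h>
#include <stddef.h>

#define BLOCK_CAPACITY 8192         /* stand-in for the physical 8k block */

/* 'used' holds the bytes used on each leaf page, in TID order.  Merge each
 * page into its predecessor when the pair fits in one block; otherwise it
 * starts a new page.  Rewrites 'used' in place and returns the new count. */
static size_t merge_half_empty_pages(size_t *used, size_t npages)
{
    size_t out = 0;

    for (size_t i = 0; i < npages; i++)
    {
        if (out > 0 && used[out - 1] + used[i] <= BLOCK_CAPACITY)
            used[out - 1] += used[i];       /* merge into previous page */
        else
            used[out++] = used[i];          /* keep as its own page */
    }
    return out;
}
```

Because merging only ever combines TID-adjacent pages, the TID ordering of the B-tree leaf level is preserved, which is what keeps sequential scans sequential after defragmentation.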
This format\n> doesn't have the same MVCC problems with moving tuples around that the\n> Postgres heap does, so it can fairly easily be done on-line.\n>\n> Min-Max values can be stored per block to easily skip scanning if\n> column values don't fall in the range.\n>\n> Notes about current patch\n> =========================\n>\n> Basic (core) functionality is implemented to showcase and play with.\n>\n> Two compression algorithms are supported: Postgres pg_lzcompress and\n> lz4. Compiling the server with --with-lz4 enables the LZ4 compression for\n> zedstore, else pg_lzcompress is the default. Definitely LZ4 is super fast\n> at compressing and uncompressing.\n>\n> Not all the table AM API's are implemented. Functionality not\n> implemented yet will ERROR out with not supported. A Zedstore table can\n> be created using the command:\n>\n> CREATE TABLE <name> (column listing) USING zedstore;\n>\n> Bulk load can be performed using COPY. INSERT, SELECT, UPDATE and\n> DELETEs work. Btree indexes can be created. Btree and bitmap index\n> scans work. The test in src/test/regress/sql/zedstore.sql showcases all\n> the functionality working currently. Updates are currently implemented\n> as cold, meaning they always create new items and are not performed in-place.\n>\n> TIDs currently can't leverage the full 48 bit range but instead need\n> to be limited to values which are considered valid ItemPointers. Also,\n> MaxHeapTuplesPerPage poses restrictions on the values it can currently\n> have. Refer to [7] for the same.\n>\n> Extremely basic UNDO logging has been implemented just from an MVCC\n> perspective. MVCC is missing tuple lock right now. Plus, it doesn't\n> actually perform any undo yet. 
No WAL logging exists currently, hence\n> it's not crash safe either.\n>\n> Helpful functions to find how many pages of each type are present in\n> a zedstore table, and also to find the compression ratio, are provided.\n>\n> The test mentioned in the thread \"Column lookup in a row performance\" [6] is a\n> good example query for zedstore; locally on a laptop using lz4 it shows\n>\n> postgres=# SELECT AVG(i199) FROM (select i199 from layout offset 0) x; --\n> heap\n> avg\n> ---------------------\n> 500000.500000000000\n> (1 row)\n>\n> Time: 4679.026 ms (00:04.679)\n>\n> postgres=# SELECT AVG(i199) FROM (select i199 from zlayout offset 0) x; --\n> zedstore\n> avg\n> ---------------------\n> 500000.500000000000\n> (1 row)\n>\n> Time: 379.710 ms\n>\n> Important note:\n> ---------------\n> The planner has not been modified yet to leverage the columnar\n> storage. Hence, plans using the \"physical tlist\" optimization (and the like, good\n> for a row store) currently miss out on leveraging the columnar nature.\n> Hence, one can see the need for the subquery with OFFSET 0 above to\n> disable the optimization and scan only the required column.\n>\n>\n>\n> The current proposal and discussion is more focused on AM layer work\n> first. Hence, it currently intentionally skips discussing the planner or\n> executor \"feature\" enhancements like adding vectorized execution and\n> family of features.\n>\n> Previous discussions or implementations for column store Vertical\n> cluster index [2], Incore columnar storage [3] and [4], cstore_fdw [5]\n> were referred to distill down objectives and come up with design and\n> implementations to avoid any earlier concerns raised. Learnings from\n> the Greenplum Database column store were also leveraged while designing and\n> implementing the same.\n>\n> Credit: Design is mostly the brain child of Heikki, or actually his\n> epiphany to be exact. I acted as idea bouncing board and contributed\n> enhancements to the same. 
We both are having lot of fun writing the\n> code for this.\n>\n>\n> References\n> 1] https://github.com/greenplum-db/postgres/tree/zedstore\n> 2]\n> https://www.postgresql.org/message-id/flat/CAJrrPGfaC7WC9NK6PTTy6YN-NN%2BhCy8xOLAh2doYhVg5d6HsAA%40mail.gmail.com\n> 3]\n> https://www.postgresql.org/message-id/flat/20150611230316.GM133018%40postgresql.org\n> 4]\n> https://www.postgresql.org/message-id/flat/20150831225328.GM2912%40alvherre.pgsql\n> 5] https://github.com/citusdata/cstore_fdw\n> 6]\n> https://www.postgresql.org/message-id/flat/CAOykqKfko-n5YiBJtk-ocVdp%2Bj92Apu5MJBwbGGh4awRY5NCuQ%40mail.gmail.com\n> 7]\n> https://www.postgresql.org/message-id/d0fc97bd-7ec8-2388-e4a6-0fda86d71a43%40iki.fi\n>\n>\nReading about it reminds me of this work -- TAG column storage(\nhttp://www09.sigmod.org/sigmod/record/issues/0703/03.article-graefe.pdf ).\nIsn't this storage system inspired from there, with TID as the TAG?\n\nIt is not referenced here so made me wonder.\n-- \nRegards,\nRafia Sabih\n\nOn Tue, 9 Apr 2019 at 02:27, Ashwin Agrawal <aagrawal@pivotal.io> wrote:Heikki and I have been hacking recently for few weeks to implementin-core columnar storage for PostgreSQL. Here's the design and initialimplementation of Zedstore, compressed in-core columnar storage (tableaccess method). Attaching the patch and link to github branch [1] tofollow along.The objective is to gather feedback on design and approach to thesame. The implementation has core basic pieces working but not closeto complete.Big thank you to Andres, Haribabu and team for the table access methodAPI's. Leveraged the API's for implementing zedstore, and proves APIto be in very good shape. Had to enhance the same minimally butin-general didn't had to touch executor much.Motivations / Objectives* Performance improvement for queries selecting subset of columns (reduced IO).* Reduced on-disk footprint compared to heap table. 
Shorter tuple headers and also leveraging compression of similar type data* Be first-class citizen in the Postgres architecture (tables data can just independently live in columnar storage)* Fully MVCC compliant* All Indexes supported* Hybrid row-column store, where some columns are stored together, and others separately. Provide flexibility of granularity on how to divide the columns. Columns accessed together can be stored together.* Provide better control over bloat (similar to zheap)* Eliminate need for separate toast tables* Faster add / drop column or changing data type of column by avoiding full rewrite of the table.High-level Design - B-trees for the win!========================================To start simple, let's ignore column store aspect for a moment andconsider it as compressed row store. The column store is naturalextension of this concept, explained in next section.The basic on-disk data structure leveraged is a B-tree, indexed byTID. BTree being a great data structure, fast and versatile. Note thisis not referring to existing Btree indexes, but instead net newseparate BTree for table data storage.TID - logical row identifier:TID is just a 48-bit row identifier. The traditional division intoblock and offset numbers is meaningless. In order to find a tuple witha given TID, one must always descend the B-tree. Having logical TIDprovides flexibility to move the tuples around different pages on pagesplits or page merges can be performed.The internal pages of the B-tree are super simple and boring. Eachinternal page just stores an array of TID and downlink pairs. Let'sfocus on the leaf level. Leaf blocks have short uncompressed header,followed by btree items. 
Two kinds of items exist: - plain item, holds one tuple or one datum, uncompressed payload - a \"container item\", holds multiple plain items, compressed payload+-----------------------------| Fixed-size page header:|| LSN| TID low and hi key (for Lehman & Yao B-tree operations)| left and right page pointers|| Items:|| TID | size | flags | uncompressed size | lastTID | payload (container item)| TID | size | flags | uncompressed size | lastTID | payload (container item)| TID | size | flags | undo pointer | payload (plain item)| TID | size | flags | undo pointer | payload (plain item)| ...|+----------------------------Row store---------The tuples are stored one after another, sorted by TID. For eachtuple, we store its 48-bit TID, a undo record pointer, and the actualtuple data uncompressed.In uncompressed form, the page can be arbitrarily large. But aftercompression, it must fit into a physical 8k block. If on insert orupdate of a tuple, the page cannot be compressed below 8k anymore, thepage is split. Note that because TIDs are logical rather than physicalidentifiers, we can freely move tuples from one physical page toanother during page split. A tuple's TID never changes.The buffer cache caches compressed blocks. Likewise, WAL-logging,full-page images etc. work on compressed blocks. Uncompression is doneon-the-fly, as and when needed in backend-private memory, whenreading. For some compressions like rel encoding or delta encodingtuples can be constructed directly from compressed data.Column store------------A column store uses the same structure but we have *multiple* B-trees,one for each column, all indexed by TID. The B-trees for all columnsare stored in the same physical file.A metapage at block 0, has links to the roots of the B-trees. Leafpages look the same, but instead of storing the whole tuple, storesjust a single attribute. To reconstruct a row with given TID, scandescends down the B-trees for all the columns using that TID, andfetches all attributes. 
Likewise, a sequential scan walks all theB-trees in lockstep.So, in summary can imagine Zedstore as forest of B-trees, one for eachcolumn, all indexed by TIDs.This way of laying out the data also easily allows for hybridrow-column store, where some columns are stored together, and othershave a dedicated B-tree. Need to have user facing syntax to allowspecifying how to group the columns.Main reasons for storing data this way--------------------------------------* Layout the data/tuples in mapped fashion instead of keeping the logical to physical mapping separate from actual data. So, keep the meta-data and data logically in single stream of file, avoiding the need for separate forks/files to store meta-data and data.* Stick to fixed size physical blocks. Variable size blocks pose need for increased logical to physical mapping maintenance, plus restrictions on concurrency of writes and reads to files. Hence adopt compression to fit fixed size blocks instead of other way round.MVCC----MVCC works very similar to zheap for zedstore. Undo record pointersare used to implement MVCC. Transaction information if not directlystored with the data. In zheap, there's a small, fixed, number of\"transaction slots\" on each page, but zedstore has undo pointer witheach item directly; in normal cases, the compression squeezes thisdown to almost nothing.Implementation==============Insert:Inserting a new row, splits the row into datums. Then for first columndecide which block to insert the same to, and pick a TID for it, andwrite undo record for the same. Rest of the columns are inserted usingthat same TID and point to same undo position.Compression:Items are added to Btree in uncompressed form. If page is full and newitem can't be added, compression kicks in. Existing uncompressed items(plain items) of the page are passed to compressor forcompression. Already compressed items are added back as is. Page isrewritten with compressed data with new item added to it. 
If even after compression the item can't be added to the page, then a page split happens.\n\nToast:\nWhen an overly large datum is stored, it is divided into chunks, and each chunk is stored on a dedicated toast page within the same physical file. The toast pages of a datum form a list; each page has a next/prev pointer.\n\nSelect:\nA property is added to the Table AM to convey whether column projection is leveraged by the AM for scans. While scanning tables with an AM leveraging this property, the executor parses the plan, leveraging the target list and quals to find the required columns for the query. This list is passed down to the AM on beginscan. Zedstore uses this column projection list to only pull data from the selected columns. A virtual tuple table slot is used to pass back the datums for the subset of columns.\n\nThe current table AM API requires enhancement here to pass down the column projection to the AM. The patch showcases two different ways of doing this:\n\n* For sequential scans, a new beginscan_with_column_projection() API was added. The executor checks the AM property and, if it leverages column projection, uses this new API, else the normal beginscan() API.\n\n* For index scans, instead of modifying the begin scan API, a new API was added to specifically pass the column projection list after calling begin scan to populate the scan descriptor but before fetching the tuples.\n\nIndex Support:\nBuilding an index also leverages the columnar storage and only scans the columns required to build the index. Indexes work pretty similar to heap tables: data is inserted into the table and the TID for the tuple gets stored in the index. On index scans, the required column B-trees are scanned for the given TID and the datums passed back using a virtual tuple.\n\nPage Format:\nA ZedStore table contains different kinds of pages, all in the same file: meta-page, per-attribute btree internal and leaf pages, UNDO log pages, and toast pages. Each page type has its own distinct data storage format.\n\nBlock 0 is always a metapage. 
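The toast scheme just described (divide an oversized datum into chunks, chain the chunk pages via next/prev pointers) can be sketched as follows; the chunk size is an arbitrary stand-in, not a value from the patch, and list indexes play the role of page pointers:

```python
TOAST_CHUNK = 7900  # illustrative payload bytes per toast page

def toast_datum(datum):
    """Split an oversized datum into a chain of toast 'pages', each holding
    one chunk plus prev/next pointers (indexes into the returned list)."""
    pages = []
    for off in range(0, len(datum), TOAST_CHUNK):
        pages.append({'prev': len(pages) - 1 if pages else None,
                      'next': None,
                      'data': datum[off:off + TOAST_CHUNK]})
        if len(pages) > 1:
            pages[-2]['next'] = len(pages) - 1
    return pages

def detoast(pages):
    """Reassemble the datum by following next pointers from the first page."""
    out, i = b'', 0
    while i is not None:
        out += pages[i]['data']
        i = pages[i]['next']
    return out
```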
It contains the block numbers of the other data structures stored within the file, like the per-attribute B-trees, and the UNDO log.\n\nEnhancements to design:\n=======================\n\nInstead of compressing all the tuples on a page in one batch, we could store a small \"dictionary\", e.g. in the page header or meta-page, and use it to compress each tuple separately. That could make random reads and updates of individual tuples faster.\n\nWhen adding a column, we just need to create a new B-tree for the newly added column and link it to the meta-page. No existing content needs to be rewritten.\n\nWhen a column is dropped, we can scan the B-tree of that column, and immediately mark all the pages as free in the FSM. But we don't actually have to scan the leaf level: all leaf tuples have a downlink in the parent, so we can scan just the internal pages. Unless the column is very wide, that's only a small fraction of the data. That makes the space immediately reusable for new insertions, but it won't return the space to the Operating System. In order to do that, we'd still need to defragment, moving pages from the end of the file closer to the beginning, and truncate the file.\n\nIn this design, we only cache compressed pages in the page cache. If we want to cache uncompressed pages instead, or in addition to that, we need to invent a whole new kind of a buffer cache that can deal with the variable-size blocks.\n\nIf you do a lot of updates, the file can get fragmented, with lots of unused space on pages. Losing the correlation between TIDs and physical order is also bad, because it will make SeqScans slow, as they're not actually doing sequential I/O anymore. We can write a defragmenter to fix things up. Half-empty pages can be merged, and pages can be moved to restore TID/physical correlation. 
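The per-tuple dictionary idea from the enhancements list maps directly onto preset-dictionary support in common compressors; a sketch using zlib's zdict facility (zlib is only a stand-in for whichever compressor would actually be used, and the dictionary/tuple contents are invented):

```python
import zlib

def compress_tuple(tup: bytes, page_dict: bytes) -> bytes:
    """Compress a single tuple against a small page-level preset dictionary,
    so each tuple stays individually decompressible for random reads/updates."""
    c = zlib.compressobj(zdict=page_dict)
    return c.compress(tup) + c.flush()

def decompress_tuple(blob: bytes, page_dict: bytes) -> bytes:
    d = zlib.decompressobj(zdict=page_dict)
    return d.decompress(blob)
```

Because common substrings live in the shared dictionary, a tuple compressed this way is typically much smaller than one compressed alone, while remaining independently accessible.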
This format doesn't have the same MVCC problems with moving tuples around that the Postgres heap does, so it can fairly easily be done on-line.\n\nMin-max values can be stored per block to easily skip scanning it if the column values don't fall in the range.\n\nNotes about current patch\n=========================\n\nBasic (core) functionality is implemented to showcase and play with.\n\nTwo compression algorithms are supported: Postgres pg_lzcompress and lz4. Compiling the server with --with-lz4 enables LZ4 compression for zedstore, else pg_lzcompress is the default. Definitely LZ4 is super fast at compressing and uncompressing.\n\nNot all the table AM APIs are implemented. Functionality not implemented yet will ERROR out with \"not supported\". A Zedstore table can be created using the command:\n\nCREATE TABLE <name> (column listing) USING zedstore;\n\nBulk load can be performed using COPY. INSERT, SELECT, UPDATE and DELETE work. Btree indexes can be created. Btree and bitmap index scans work. The test in src/test/regress/sql/zedstore.sql showcases all the functionality working currently. Updates are currently implemented as cold, meaning they always create new items and are not performed in-place.\n\nTIDs currently can't leverage the full 48-bit range but instead need to be limited to values which are considered valid ItemPointers. Also, MaxHeapTuplesPerPage poses restrictions on the values they can currently have. Refer to [7] for the same.\n\nExtremely basic UNDO logging has been implemented just from the MVCC perspective. MVCC is missing tuple locks right now. Plus, it doesn't actually perform any undo yet. 
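The min-max block skipping mentioned above boils down to a tiny amount of bookkeeping; a hedged sketch (plain Python lists stand in for the column's data blocks, and the predicate is a simple range):

```python
def build_minmax(blocks):
    """Per-block (min, max) summaries for one column; `blocks` is a list of
    lists of values, standing in for the column's physical data blocks."""
    return [(min(b), max(b)) for b in blocks]

def blocks_to_scan(minmax, lo, hi):
    """Only blocks whose [min, max] range overlaps the predicate range
    [lo, hi] need to be read; all other blocks are skipped outright."""
    return [i for i, (mn, mx) in enumerate(minmax) if mx >= lo and mn <= hi]
```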
No WAL logging exists currently, hence it's not crash safe either.\n\nHelpful functions to find how many pages of each type are present in a zedstore table and also to find the compression ratio are provided.\n\nThe test mentioned in thread \"Column lookup in a row performance\" [6], a good example query for zedstore, run locally on a laptop using lz4, shows:\n\npostgres=# SELECT AVG(i199) FROM (select i199 from layout offset 0) x; -- heap\n avg\n---------------------\n 500000.500000000000\n(1 row)\n\nTime: 4679.026 ms (00:04.679)\n\npostgres=# SELECT AVG(i199) FROM (select i199 from zlayout offset 0) x; -- zedstore\n avg\n---------------------\n 500000.500000000000\n(1 row)\n\nTime: 379.710 ms\n\nImportant note:\n---------------\nThe planner has not been modified yet to leverage the columnar storage. Hence, plans using the \"physical tlist\" optimization or similar, good for a row store, currently miss out on leveraging the columnar nature. Hence the need for the subquery with OFFSET 0 above, to disable the optimization and scan only the required column.\n\nThe current proposal and discussion is more focused on the AM layer work first. Hence, it currently intentionally skips discussing planner or executor \"feature\" enhancements like adding vectorized execution and that family of features.\n\nPrevious discussions or implementations for column stores -- Vertical cluster index [2], In-core columnar storage [3] and [4], cstore_fdw [5] -- were referred to, to distill down objectives and come up with a design and implementation that avoids any earlier concerns raised. Learnings from the Greenplum Database column store were also leveraged while designing and implementing the same.\n\nCredit: The design is mostly the brain child of Heikki, or actually his epiphany to be exact. I acted as an idea bouncing board and contributed enhancements to the same. 
We both are having a lot of fun writing the code for this.\n\nReferences\n1] https://github.com/greenplum-db/postgres/tree/zedstore\n2] https://www.postgresql.org/message-id/flat/CAJrrPGfaC7WC9NK6PTTy6YN-NN%2BhCy8xOLAh2doYhVg5d6HsAA%40mail.gmail.com\n3] https://www.postgresql.org/message-id/flat/20150611230316.GM133018%40postgresql.org\n4] https://www.postgresql.org/message-id/flat/20150831225328.GM2912%40alvherre.pgsql\n5] https://github.com/citusdata/cstore_fdw\n6] https://www.postgresql.org/message-id/flat/CAOykqKfko-n5YiBJtk-ocVdp%2Bj92Apu5MJBwbGGh4awRY5NCuQ%40mail.gmail.com\n7] https://www.postgresql.org/message-id/d0fc97bd-7ec8-2388-e4a6-0fda86d71a43%40iki.fi\n\nReading about it reminds me of this work -- TAG column storage ( http://www09.sigmod.org/sigmod/record/issues/0703/03.article-graefe.pdf ). Isn't this storage system inspired from there, with TID as the TAG? It is not referenced here, so it made me wonder.\n\n-- \nRegards,\nRafia Sabih",
"msg_date": "Thu, 11 Apr 2019 15:05:34 +0200",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
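The design above treats the TID as a flat 48-bit logical row identifier, even though it is carried in ItemPointer-shaped (block, offset) fields. A quick sketch of that arithmetic (purely illustrative: the 32/16 split mirrors ItemPointerData's shape, and the valid-ItemPointer and MaxHeapTuplesPerPage restrictions the patch notes are ignored here):

```python
TID_BITS = 48
MAX_TID = (1 << TID_BITS) - 1  # about 2.8e14 logical rows before exhaustion

def tid_to_itemptr(tid: int):
    """View a flat 48-bit TID as ItemPointer-like (32-bit block, 16-bit offset)
    fields; Zedstore never uses them as physical locations."""
    assert 0 <= tid <= MAX_TID
    return tid >> 16, tid & 0xFFFF

def itemptr_to_tid(block: int, offset: int) -> int:
    """Pack (block, offset) fields back into the flat 48-bit TID."""
    return (block << 16) | offset
```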
{
"msg_contents": "On Tue, 9 Apr 2019 at 20:29, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Apr 9, 2019 at 11:51 AM Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n> > This is not surprising, considering that columnar store is precisely the\n> > reason for starting the work on table AMs.\n> >\n> > We should certainly look into integrating some sort of columnar storage\n> > in mainline. Not sure which of zedstore or VOPS is the best candidate,\n> > or maybe we'll have some other proposal. My feeling is that having more\n> > than one is not useful; if there are optimizations to one that can be\n> > borrowed from the other, let's do that instead of duplicating effort.\n>\n> I think that conclusion may be premature. There seem to be a bunch of\n> different ways of doing columnar storage, so I don't know how we can\n> be sure that one size will fit all, or that the first thing we accept\n> will be the best thing.\n>\n> Of course, we probably do not want to accept a ton of storage manager\n> implementations is core. I think if people propose implementations\n> that are poor quality, or missing important features, or don't have\n> significantly different use cases from the ones we've already got,\n> it's reasonable to reject those. But I wouldn't be prepared to say\n> that if we have two significantly different column store that are both\n> awesome code with a complete feature set and significantly disjoint\n> use cases, we should reject the second one just because it is also a\n> column store. I think that won't get out of control because few\n> people will be able to produce really high-quality implementations.\n>\n> This stuff is hard, which I think is also why we only have 6.5 index\n> AMs in core after many, many years. 
And our standards have gone up\n> over the years - not all of those would pass muster if they were\n> proposed today.\n>\n> BTW, can I express a small measure of disappointment that the name for\n> the thing under discussion on this thread chose to be called\n> \"zedstore\"? That seems to invite confusion with \"zheap\", especially\n> in parts of the world where the last letter of the alphabet is\n> pronounced \"zed,\" where people are going to say zed-heap and\n> zed-store. Brr.\n>\n\n+1 on Brr. Looks like Thomas and your thought on having 'z' makes things\npopular/stylish, etc. is after all true, I was skeptical back then.\n\n-- \nRegards,\nRafia Sabih",
"msg_date": "Thu, 11 Apr 2019 15:12:33 +0200",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On 11/04/2019 16:12, Rafia Sabih wrote:\n> On Tue, 9 Apr 2019 at 20:29, Robert Haas <robertmhaas@gmail.com \n> <mailto:robertmhaas@gmail.com>> wrote:\n> \n> BTW, can I express a small measure of disappointment that the name for\n> the thing under discussion on this thread chose to be called\n> \"zedstore\"? That seems to invite confusion with \"zheap\", especially\n> in parts of the world where the last letter of the alphabet is\n> pronounced \"zed,\" where people are going to say zed-heap and\n> zed-store. Brr.\n> \n> +1 on Brr. Looks like Thomas and your thought on having 'z' makes \n> things popular/stylish, etc. is after all true, I was skeptical back then.\n\nBrrStore works for me, too ;-).\n\n- Heikki\n\n\n",
"msg_date": "Thu, 11 Apr 2019 16:15:09 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On 4/11/19 10:46 AM, Konstantin Knizhnik wrote:\n\n> This my results of compressing pbench data using different compressors:\n>\n> Configuration \tSize (Gb) \tTime (sec)\n> no compression\n> \t15.31 \t92\n> zlib (default level) \t2.37 \t284\n> zlib (best speed) \t2.43 \t191\n> postgres internal lz \t3.89 \t214\n> lz4 \t4.12\n> \t95\n> snappy \t5.18 \t99\n> lzfse \t2.80 \t1099\n> (apple) 2.80 1099\n> \t1.69 \t125\n>\n>\n>\n> You see that zstd provides almost 2 times better compression ration \n> and almost at the same speed.\n\n\nWhat is \"(apple) 2.80 1099\"? Was that intended to be zstd?\n\nAndreas\n\n\n\n\n\n\n\nOn 4/11/19 10:46 AM, Konstantin Knizhnik wrote:\n\n\n This my results of compressing pbench data using different\n compressors:\n\n\n\n\nConfiguration\nSize (Gb)\nTime (sec)\n\n\nno compression\n\n15.31\n92\n\n\nzlib (default level) \n2.37 \n284\n\n\nzlib (best speed) \n2.43\n191\n\n\npostgres internal lz \n3.89 \n214\n\n\nlz4\n4.12 \n\n95\n\n\nsnappy\n5.18\n99\n\n\nlzfse\n2.80\n1099\n\n\n (apple) 2.80 1099\n\n1.69\n125\n\n\n\n\n\n You see that zstd provides almost 2 times better compression\n ration and almost at the same speed.\n\n\n\nWhat is \"(apple) 2.80 1099\"? Was that intended to be zstd?\nAndreas",
"msg_date": "Thu, 11 Apr 2019 15:18:41 +0200",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On 11.04.2019 16:18, Andreas Karlsson wrote:\n>\n> On 4/11/19 10:46 AM, Konstantin Knizhnik wrote:\n>\n>> This my results of compressing pbench data using different compressors:\n>>\n>> Configuration \tSize (Gb) \tTime (sec)\n>> no compression\n>> \t15.31 \t92\n>> zlib (default level) \t2.37 \t284\n>> zlib (best speed) \t2.43 \t191\n>> postgres internal lz \t3.89 \t214\n>> lz4 \t4.12\n>> \t95\n>> snappy \t5.18 \t99\n>> lzfse \t2.80 \t1099\n>> (apple) 2.80 1099\n>> \t1.69 \t125\n>>\n>>\n>>\n>> You see that zstd provides almost 2 times better compression ration \n>> and almost at the same speed.\n>\n>\n> What is \"(apple) 2.80 1099\"? Was that intended to be zstd?\n>\n> Andreas\n>\nUgh...\nCut and paste problems.\nThe whole document can be found here: \nhttp://garret.ru/PageLevelCompression.pdf\n\nlzfse (apple) 2.80 1099\nzstd (facebook) 1.69 125\n\nztsd is compression algorithm proposed by facebook: \nhttps://github.com/facebook/zstd\nLooks like it provides the best speed/compress ratio result.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\n\nOn 11.04.2019 16:18, Andreas Karlsson\n wrote:\n\n\n\nOn 4/11/19 10:46 AM, Konstantin Knizhnik wrote:\n\n\n This my results of compressing pbench data using different\n compressors:\n\n\n\n\nConfiguration\nSize (Gb)\nTime (sec)\n\n\nno compression\n\n15.31\n92\n\n\nzlib (default level) \n2.37 \n284\n\n\nzlib (best speed) \n2.43\n191\n\n\npostgres internal lz \n3.89 \n214\n\n\nlz4\n4.12 \n\n95\n\n\nsnappy\n5.18\n99\n\n\nlzfse\n2.80\n1099\n\n\n (apple) 2.80 1099\n\n1.69\n125\n\n\n\n\n\n You see that zstd provides almost 2 times better compression\n ration and almost at the same speed.\n\n\n\nWhat is \"(apple) 2.80 1099\"? 
Was that intended to be zstd?\nAndreas\n\n Ugh...\n Cut and paste problems.\n The whole document can be found here:\n http://garret.ru/PageLevelCompression.pdf\n\n lzfse (apple) 2.80 1099\n zstd (facebook) 1.69 125\n\n ztsd is compression algorithm proposed by facebook: \n https://github.com/facebook/zstd\n Looks like it provides the best speed/compress ratio result.\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 11 Apr 2019 16:52:33 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
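The kind of size/time comparison in the table above can be reproduced in miniature. This sketch uses only the standard library's zlib at its best-speed and best-compression levels (lz4/zstd bindings are not in the stdlib), on synthetic data rather than pgbench, so the absolute numbers mean nothing; it only illustrates measuring the ratio/speed trade-off:

```python
import time
import zlib

def bench(data, level):
    """Return (compression ratio, elapsed seconds) for zlib at `level`."""
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    return len(data) / len(out), time.perf_counter() - t0

# Synthetic, mildly repetitive rows loosely imitating pgbench accounts data.
data = b''.join(b'aid %7d bid %3d abalance 0 filler\n' % (i, i % 10)
                for i in range(5000))
```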
{
"msg_contents": "On Thu, Apr 11, 2019 at 3:15 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 11/04/2019 16:12, Rafia Sabih wrote:\n> > On Tue, 9 Apr 2019 at 20:29, Robert Haas <robertmhaas@gmail.com\n> > <mailto:robertmhaas@gmail.com>> wrote:\n> >\n> > BTW, can I express a small measure of disappointment that the name\n> for\n> > the thing under discussion on this thread chose to be called\n> > \"zedstore\"? That seems to invite confusion with \"zheap\", especially\n> > in parts of the world where the last letter of the alphabet is\n> > pronounced \"zed,\" where people are going to say zed-heap and\n> > zed-store. Brr.\n> >\n> > +1 on Brr. Looks like Thomas and your thought on having 'z' makes\n> > things popular/stylish, etc. is after all true, I was skeptical back\n> then.\n>\n> BrrStore works for me, too ;-).\n>\n\nAlso works as a reference to the Finnish climate?\n\n(Sorry, couldn't help myself)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Thu, Apr 11, 2019 at 3:15 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:On 11/04/2019 16:12, Rafia Sabih wrote:\n> On Tue, 9 Apr 2019 at 20:29, Robert Haas <robertmhaas@gmail.com \n> <mailto:robertmhaas@gmail.com>> wrote:\n> \n> BTW, can I express a small measure of disappointment that the name for\n> the thing under discussion on this thread chose to be called\n> \"zedstore\"? That seems to invite confusion with \"zheap\", especially\n> in parts of the world where the last letter of the alphabet is\n> pronounced \"zed,\" where people are going to say zed-heap and\n> zed-store. Brr.\n> \n> +1 on Brr. Looks like Thomas and your thought on having 'z' makes \n> things popular/stylish, etc. 
is after all true, I was skeptical back then.\n\nBrrStore works for me, too ;-).Also works as a reference to the Finnish climate?(Sorry, couldn't help myself) -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Thu, 11 Apr 2019 16:01:02 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Tue, 9 Apr 2019 at 02:27, Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n\n> Heikki and I have been hacking recently for few weeks to implement\n> in-core columnar storage for PostgreSQL. Here's the design and initial\n> implementation of Zedstore, compressed in-core columnar storage (table\n> access method). Attaching the patch and link to github branch [1] to\n> follow along.\n>\n> The objective is to gather feedback on design and approach to the\n> same. The implementation has core basic pieces working but not close\n> to complete.\n>\n> Big thank you to Andres, Haribabu and team for the table access method\n> API's. Leveraged the API's for implementing zedstore, and proves API\n> to be in very good shape. Had to enhance the same minimally but\n> in-general didn't had to touch executor much.\n>\n> Motivations / Objectives\n>\n> * Performance improvement for queries selecting subset of columns\n> (reduced IO).\n> * Reduced on-disk footprint compared to heap table. Shorter tuple\n> headers and also leveraging compression of similar type data\n> * Be first-class citizen in the Postgres architecture (tables data can\n> just independently live in columnar storage)\n> * Fully MVCC compliant\n> * All Indexes supported\n> * Hybrid row-column store, where some columns are stored together, and\n> others separately. Provide flexibility of granularity on how to\n> divide the columns. Columns accessed together can be stored\n> together.\n> * Provide better control over bloat (similar to zheap)\n> * Eliminate need for separate toast tables\n> * Faster add / drop column or changing data type of column by avoiding\n> full rewrite of the table.\n>\n> High-level Design - B-trees for the win!\n> ========================================\n>\n> To start simple, let's ignore column store aspect for a moment and\n> consider it as compressed row store. 
The column store is natural\n> extension of this concept, explained in next section.\n>\n> The basic on-disk data structure leveraged is a B-tree, indexed by\n> TID. BTree being a great data structure, fast and versatile. Note this\n> is not referring to existing Btree indexes, but instead net new\n> separate BTree for table data storage.\n>\n> TID - logical row identifier:\n> TID is just a 48-bit row identifier. The traditional division into\n> block and offset numbers is meaningless. In order to find a tuple with\n> a given TID, one must always descend the B-tree. Having logical TID\n> provides flexibility to move the tuples around different pages on page\n> splits or page merges can be performed.\n>\n> In my understanding these TIDs will follow the datatype of the current\nones. Then my question is will TIDs be reusable here and how will the\nreusable range of TIDs be determined? If not, wouldn't that become a hard\nlimit to the number of insertions performed on a table?\n\nThe internal pages of the B-tree are super simple and boring. Each\n> internal page just stores an array of TID and downlink pairs. Let's\n> focus on the leaf level. Leaf blocks have short uncompressed header,\n> followed by btree items. 
Two kinds of items exist:\n>\n> - plain item, holds one tuple or one datum, uncompressed payload\n> - a \"container item\", holds multiple plain items, compressed payload\n>\n> +-----------------------------\n> | Fixed-size page header:\n> |\n> | LSN\n> | TID low and hi key (for Lehman & Yao B-tree operations)\n> | left and right page pointers\n> |\n> | Items:\n> |\n> | TID | size | flags | uncompressed size | lastTID | payload (container\n> item)\n> | TID | size | flags | uncompressed size | lastTID | payload (container\n> item)\n> | TID | size | flags | undo pointer | payload (plain item)\n> | TID | size | flags | undo pointer | payload (plain item)\n> | ...\n> |\n> +----------------------------\n>\n> Row store\n> ---------\n>\n> The tuples are stored one after another, sorted by TID. For each\n> tuple, we store its 48-bit TID, a undo record pointer, and the actual\n> tuple data uncompressed.\n>\n> In uncompressed form, the page can be arbitrarily large. But after\n> compression, it must fit into a physical 8k block. If on insert or\n> update of a tuple, the page cannot be compressed below 8k anymore, the\n> page is split. Note that because TIDs are logical rather than physical\n> identifiers, we can freely move tuples from one physical page to\n> another during page split. A tuple's TID never changes.\n>\n> The buffer cache caches compressed blocks. Likewise, WAL-logging,\n> full-page images etc. work on compressed blocks. Uncompression is done\n> on-the-fly, as and when needed in backend-private memory, when\n> reading. For some compressions like rel encoding or delta encoding\n> tuples can be constructed directly from compressed data.\n>\n> Column store\n> ------------\n>\n> A column store uses the same structure but we have *multiple* B-trees,\n> one for each column, all indexed by TID. The B-trees for all columns\n> are stored in the same physical file.\n>\n> A metapage at block 0, has links to the roots of the B-trees. 
Leaf\n> pages look the same, but instead of storing the whole tuple, stores\n> just a single attribute. To reconstruct a row with given TID, scan\n> descends down the B-trees for all the columns using that TID, and\n> fetches all attributes. Likewise, a sequential scan walks all the\n> B-trees in lockstep.\n>\n> So, in summary can imagine Zedstore as forest of B-trees, one for each\n> column, all indexed by TIDs.\n>\n> This way of laying out the data also easily allows for hybrid\n> row-column store, where some columns are stored together, and others\n> have a dedicated B-tree. Need to have user facing syntax to allow\n> specifying how to group the columns.\n>\n>\n> Main reasons for storing data this way\n> --------------------------------------\n>\n> * Layout the data/tuples in mapped fashion instead of keeping the\n> logical to physical mapping separate from actual data. So, keep the\n> meta-data and data logically in single stream of file, avoiding the\n> need for separate forks/files to store meta-data and data.\n>\n> * Stick to fixed size physical blocks. Variable size blocks pose need\n> for increased logical to physical mapping maintenance, plus\n> restrictions on concurrency of writes and reads to files. Hence\n> adopt compression to fit fixed size blocks instead of other way\n> round.\n>\n>\n> MVCC\n> ----\n> MVCC works very similar to zheap for zedstore. Undo record pointers\n> are used to implement MVCC. Transaction information if not directly\n> stored with the data. In zheap, there's a small, fixed, number of\n> \"transaction slots\" on each page, but zedstore has undo pointer with\n> each item directly; in normal cases, the compression squeezes this\n> down to almost nothing.\n>\n> How about using a separate BTree for undo also?\n\n> Implementation\n> ==============\n>\n> Insert:\n> Inserting a new row, splits the row into datums. 
Then for first column\n> decide which block to insert the same to, and pick a TID for it, and\n> write undo record for the same. Rest of the columns are inserted using\n> that same TID and point to same undo position.\n>\n> Compression:\n> Items are added to Btree in uncompressed form. If page is full and new\n> item can't be added, compression kicks in. Existing uncompressed items\n> (plain items) of the page are passed to compressor for\n> compression. Already compressed items are added back as is. Page is\n> rewritten with compressed data with new item added to it. If even\n> after compression, can't add item to page, then page split happens.\n>\n> Toast:\n> When an overly large datum is stored, it is divided into chunks, and\n> each chunk is stored on a dedicated toast page within the same\n> physical file. The toast pages of a datum form list, each page has a\n> next/prev pointer.\n>\n> Select:\n> Property is added to Table AM to convey if column projection is\n> leveraged by AM for scans. While scanning tables with AM leveraging\n> this property, executor parses the plan. Leverages the target list and\n> quals to find the required columns for query. This list is passed down\n> to AM on beginscan. Zedstore uses this column projection list to only\n> pull data from selected columns. Virtual tuple table slot is used to\n> pass back the datums for subset of columns.\n>\n> I am curious about how delete is working here? 
Will the TID entries will\nbe just marked delete as in current heap, or will they be actually removed\nand whole btree is restructured (if required) then?\nSimilarly, about updates, will they be just delete+insert or something\nclever will be happening there?\nWill there be in-place updates and in what scenarios they will be possible?\nThere is nothing mentioned in this direction, however using undo files\nassures me there must be some in-place updates somewhere.\n\n>\n> Enhancements to design:\n> =======================\n>\n> Instead of compressing all the tuples on a page in one batch, we could\n> store a small \"dictionary\", e.g. in page header or meta-page, and use\n> it to compress each tuple separately. That could make random reads and\n> updates of individual tuples faster.\n>\n> When adding column, just need to create new Btree for newly added\n> column and linked to meta-page. No existing content needs to be\n> rewritten.\n>\n> When the column is dropped, can scan the B-tree of that column, and\n> immediately mark all the pages as free in the FSM. But we don't\n> actually have to scan the leaf level: all leaf tuples have a downlink\n> in the parent, so we can scan just the internal pages. Unless the\n> column is very wide, that's only a small fraction of the data. That\n> makes the space immediately reusable for new insertions, but it won't\n> return the space to the Operating System. In order to do that, we'd\n> still need to defragment, moving pages from the end of the file closer\n> to the beginning, and truncate the file.\n>\n> In this design, we only cache compressed pages in the page cache. If\n> we want to cache uncompressed pages instead, or in addition to that,\n> we need to invent a whole new kind of a buffer cache that can deal\n> with the variable-size blocks.\n>\n> If you do a lot of updates, the file can get fragmented, with lots of\n> unused space on pages. 
Losing the correlation between TIDs and\n> physical order is also bad, because it will make SeqScans slow, as\n> they're not actually doing sequential I/O anymore. We can write a\n> defragmenter to fix things up. Half-empty pages can be merged, and\n> pages can be moved to restore TID/physical correlation. This format\n> doesn't have the same MVCC problems with moving tuples around that the\n> Postgres heap does, so it can be fairly easily be done on-line.\n>\n> Min-Max values can be stored for block to easily skip scanning if\n> column values doesn't fall in range.\n>\n> Notes about current patch\n> =========================\n>\n> Basic (core) functionality is implemented to showcase and play with.\n>\n> Two compression algorithms are supported Postgres pg_lzcompress and\n> lz4. Compiling server with --with-lz4 enables the LZ4 compression for\n> zedstore else pg_lzcompress is default. Definitely LZ4 is super fast\n> at compressing and uncompressing.\n>\n> Not all the table AM API's are implemented. For the functionality not\n> implmented yet will ERROR out with not supported. Zedstore Table can\n> be created using command:\n>\n> CREATE TABLE <name> (column listing) USING zedstore;\n>\n> Bulk load can be performed using COPY. INSERT, SELECT, UPDATE and\n> DELETES work. Btree indexes can be created. Btree and bitmap index\n> scans work. Test in src/test/regress/sql/zedstore.sql showcases all\n> the functionality working currently. Updates are currently implemented\n> as cold, means always creates new items and not performed in-place.\n>\n> TIDs currently can't leverage the full 48 bit range but instead need\n> to limit to values which are considered valid ItemPointers. Also,\n> MaxHeapTuplesPerPage pose restrictions on the values currently it can\n> have. Refer [7] for the same.\n>\n> Extremely basic UNDO logging has be implemented just for MVCC\n> perspective. MVCC is missing tuple lock right now. Plus, doesn't\n> actually perform any undo yet. 
No WAL logging exist currently hence\n> its not crash safe either.\n>\n> Helpful functions to find how many pages of each type is present in\n> zedstore table and also to find compression ratio is provided.\n>\n> Test mentioned in thread \"Column lookup in a row performance\" [6],\n> good example query for zedstore locally on laptop using lz4 shows\n>\n> postgres=# SELECT AVG(i199) FROM (select i199 from layout offset 0) x; --\n> heap\n> avg\n> ---------------------\n> 500000.500000000000\n> (1 row)\n>\n> Time: 4679.026 ms (00:04.679)\n>\n> postgres=# SELECT AVG(i199) FROM (select i199 from zlayout offset 0) x; --\n> zedstore\n> avg\n> ---------------------\n> 500000.500000000000\n> (1 row)\n>\n> Time: 379.710 ms\n>\n> Important note:\n> ---------------\n> Planner has not been modified yet to leverage the columnar\n> storage. Hence, plans using \"physical tlist\" optimization or such good\n> for row store miss out to leverage the columnar nature\n> currently. Hence, can see the need for subquery with OFFSET 0 above to\n> disable the optimization and scan only required column.\n>\n>\n>\n> The current proposal and discussion is more focused on AM layer work\n> first. Hence, currently intentionally skips to discuss the planner or\n> executor \"feature\" enhancements like adding vectorized execution and\n> family of features.\n>\n> Previous discussions or implementations for column store Vertical\n> cluster index [2], Incore columnar storage [3] and [4], cstore_fdw [5]\n> were refered to distill down objectives and come up with design and\n> implementations to avoid any earlier concerns raised. Learnings from\n> Greenplum Database column store also leveraged while designing and\n> implementing the same.\n>\n> Credit: Design is moslty brain child of Heikki, or actually his\n> epiphany to be exact. I acted as idea bouncing board and contributed\n> enhancements to the same. 
We both are having lot of fun writing the\n> code for this.\n>\n>\n> References\n> 1] https://github.com/greenplum-db/postgres/tree/zedstore\n> 2]\n> https://www.postgresql.org/message-id/flat/CAJrrPGfaC7WC9NK6PTTy6YN-NN%2BhCy8xOLAh2doYhVg5d6HsAA%40mail.gmail.com\n> 3]\n> https://www.postgresql.org/message-id/flat/20150611230316.GM133018%40postgresql.org\n> 4]\n> https://www.postgresql.org/message-id/flat/20150831225328.GM2912%40alvherre.pgsql\n> 5] https://github.com/citusdata/cstore_fdw\n> 6]\n> https://www.postgresql.org/message-id/flat/CAOykqKfko-n5YiBJtk-ocVdp%2Bj92Apu5MJBwbGGh4awRY5NCuQ%40mail.gmail.com\n> 7]\n> https://www.postgresql.org/message-id/d0fc97bd-7ec8-2388-e4a6-0fda86d71a43%40iki.fi\n>\n>\n\n-- \nRegards,\nRafia Sabih\n",
"msg_date": "Thu, 11 Apr 2019 16:03:39 +0200",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Ashwin Agrawal <aagrawal@pivotal.io> writes:\n> Thank you for trying it out. Yes, noticed for certain patterns pg_lzcompress() actually requires much larger output buffers. Like for one 86 len source it required 2296 len output buffer. Current zedstore code doesn't handle this case and errors out. LZ4 for same patterns works fine, would highly recommend using LZ4 only, as anyways speed is very fast as well with it.\n\nYou realize of course that *every* compression method has some inputs that\nit makes bigger. If your code assumes that compression always produces a\nsmaller string, that's a bug in your code, not the compression algorithm.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Apr 2019 10:54:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
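Tom's point, that every lossless compression method necessarily makes some inputs bigger, is easy to demonstrate with a small standalone sketch. Python's zlib stands in for pg_lzcompress here; this is illustrative only, not PostgreSQL code:

```python
import random
import zlib

# Compressible input: long runs of repeated bytes shrink dramatically.
compressible = b"abc" * 1000
assert len(zlib.compress(compressible)) < len(compressible)

# Incompressible input: pseudo-random bytes carry near-maximal entropy,
# so deflate falls back to "stored" blocks and the framing overhead
# makes the output strictly longer than the input.
random.seed(0)
incompressible = bytes(random.getrandbits(8) for _ in range(4096))
assert len(zlib.compress(incompressible)) > len(incompressible)

# By a simple counting argument no lossless compressor can shrink every
# input, so callers must size output buffers for the worst case.
```

The practical consequence for an AM is the one Ashwin ran into: output buffers must be sized for expansion, or the code needs an explicit fallback path for incompressible data.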
{
"msg_contents": "On 11/04/2019 17:54, Tom Lane wrote:\n> Ashwin Agrawal <aagrawal@pivotal.io> writes:\n>> Thank you for trying it out. Yes, noticed for certain patterns pg_lzcompress() actually requires much larger output buffers. Like for one 86 len source it required 2296 len output buffer. Current zedstore code doesn't handle this case and errors out. LZ4 for same patterns works fine, would highly recommend using LZ4 only, as anyways speed is very fast as well with it.\n> \n> You realize of course that *every* compression method has some inputs that\n> it makes bigger. If your code assumes that compression always produces a\n> smaller string, that's a bug in your code, not the compression algorithm.\n\nOf course. The code is not making that assumption, although clearly \nthere is a bug there somewhere because it throws that error. It's early \ndays..\n\nIn practice it's easy to weasel out of that, by storing the data \nuncompressed, if compression would make it longer. Then you need an \nextra flag somewhere to indicate whether it's compressed or not. It \ndoesn't break the theoretical limit because the actual stored length is \nthen original length + 1 bit, but it's usually not hard to find a place \nfor one extra bit.\n\n- Heikki\n\n\n",
"msg_date": "Thu, 11 Apr 2019 18:20:47 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
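The store-uncompressed fallback Heikki describes can be outlined as follows. This is an illustrative Python sketch, not zedstore code; the flag occupies a whole byte here for readability, whereas a real item header would pack it into an existing flags field, and all names are invented:

```python
import zlib

FLAG_RAW = 0x00
FLAG_COMPRESSED = 0x01

def pack_item(data: bytes) -> bytes:
    """Compress if it helps, else store raw; prefix a one-byte flag."""
    compressed = zlib.compress(data)
    if len(compressed) < len(data):
        return bytes([FLAG_COMPRESSED]) + compressed
    return bytes([FLAG_RAW]) + data

def unpack_item(stored: bytes) -> bytes:
    flag, payload = stored[0], stored[1:]
    if flag == FLAG_COMPRESSED:
        return zlib.decompress(payload)
    return payload

# Worst case is bounded: stored size never exceeds original size plus
# the flag, which is the "original length + 1 bit" argument above
# (rounded up to a byte in this sketch).
for data in (b"x" * 8192, bytes(range(256))):
    stored = pack_item(data)
    assert unpack_item(stored) == data
    assert len(stored) <= len(data) + 1
```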
{
"msg_contents": "On Thu, Apr 11, 2019 at 6:06 AM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> Reading about it reminds me of this work -- TAG column storage( http://www09.sigmod.org/sigmod/record/issues/0703/03.article-graefe.pdf ).\n> Isn't this storage system inspired from there, with TID as the TAG?\n>\n> It is not referenced here so made me wonder.\n\nI don't think they're particularly similar, because that paper\ndescribes an architecture based on using purely logical row\nidentifiers, which is not what a TID is. TID is a hybrid\nphysical/logical identifier, sometimes called a \"physiological\"\nidentifier, which will have significant overhead. Ashwin said that\nZedStore TIDs are logical identifiers, but I don't see how that's\ncompatible with a hybrid row/column design (unless you map heap TID to\nlogical row identifier using a separate B-Tree).\n\nThe big idea with Graefe's TAG design is that there is practically no\nstorage overhead for these logical identifiers, because each entry's\nidentifier is calculated by adding its slot number to the page's\ntag/low key. The ZedStore design, in contrast, explicitly stores TID\nfor every entry. ZedStore seems more flexible for that reason, but at\nthe same time the per-datum overhead seems very high to me. Maybe\nprefix compression could help here, which a low key and high key can\ndo rather well.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 13 Apr 2019 16:22:25 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
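The prefix-compression idea Peter mentions can be made concrete: given a page low key, sorted 48-bit TIDs can be stored as small deltas instead of full 6-byte values. A rough sketch (Python; the delta-plus-varint encoding is invented for illustration and is not the actual zedstore or TAG format):

```python
def encode_varint(n: int) -> bytes:
    """Unsigned LEB128: 7 bits per byte, high bit marks continuation."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def encode_tids(low_key: int, tids) -> bytes:
    """Delta-encode sorted TIDs against the page low key, then varint."""
    out = bytearray()
    prev = low_key
    for tid in tids:
        out += encode_varint(tid - prev)
        prev = tid
    return bytes(out)

# Densely packed TIDs near the low key need ~1 byte each instead of the
# 6 bytes an explicit 48-bit identifier would cost per entry.
tids = list(range(1_000_000, 1_000_100))
encoded = encode_tids(1_000_000, tids)
assert len(encoded) == 100           # one byte per small delta
assert len(encoded) < 6 * len(tids)  # vs. explicit 48-bit TIDs
```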
{
"msg_contents": "On Tue, Apr 09, 2019 at 02:29:09PM -0400, Robert Haas wrote:\n>On Tue, Apr 9, 2019 at 11:51 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> This is not surprising, considering that columnar store is precisely the\n>> reason for starting the work on table AMs.\n>>\n>> We should certainly look into integrating some sort of columnar storage\n>> in mainline. Not sure which of zedstore or VOPS is the best candidate,\n>> or maybe we'll have some other proposal. My feeling is that having more\n>> than one is not useful; if there are optimizations to one that can be\n>> borrowed from the other, let's do that instead of duplicating effort.\n>\n>I think that conclusion may be premature. There seem to be a bunch of\n>different ways of doing columnar storage, so I don't know how we can\n>be sure that one size will fit all, or that the first thing we accept\n>will be the best thing.\n>\n>Of course, we probably do not want to accept a ton of storage manager\n>implementations is core. I think if people propose implementations\n>that are poor quality, or missing important features, or don't have\n>significantly different use cases from the ones we've already got,\n>it's reasonable to reject those. But I wouldn't be prepared to say\n>that if we have two significantly different column store that are both\n>awesome code with a complete feature set and significantly disjoint\n>use cases, we should reject the second one just because it is also a\n>column store. I think that won't get out of control because few\n>people will be able to produce really high-quality implementations.\n>\n>This stuff is hard, which I think is also why we only have 6.5 index\n>AMs in core after many, many years. And our standards have gone up\n>over the years - not all of those would pass muster if they were\n>proposed today.\n>\n\nIt's not clear to me whether you're arguing for not having any such\nimplementation in core, or having multiple ones? 
I think we should aim\nto have at least one in-core implementation, even if it's not the best\npossible one for all sizes. It's not like our rowstore is the best\npossible implementation for all cases either.\n\nI think having a colstore in core is important not just for adoption,\nbut also for testing and development of the executor / planner bits.\n\nIf we have multiple candidates with sufficient code quality, then we may\nconsider including both. I don't think it's very likely to happen in the\nsame release, considering how much work it will require. And I have no\nidea if zedstore or VOPS are / will be the only candidates - it's way\ntoo early at this point.\n\nFWIW I personally plan to focus primarily on the features that aim to\nbe included in core, and that applies to colstores too.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 14 Apr 2019 18:22:10 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Tue, Apr 09, 2019 at 02:03:09PM -0700, Ashwin Agrawal wrote:\n> On Tue, Apr 9, 2019 at 9:13 AM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n>\n> On 09.04.2019 18:51, Alvaro Herrera wrote:\n> > On 2019-Apr-09, Konstantin Knizhnik wrote:\n> >\n> >> On 09.04.2019 3:27, Ashwin Agrawal wrote:\n> >>> Heikki and I have been hacking recently for few weeks to implement\n> >>> in-core columnar storage for PostgreSQL. Here's the design and\n> initial\n> >>> implementation of Zedstore, compressed in-core columnar storage\n> (table\n> >>> access method). Attaching the patch and link to github branch [1] to\n> >>> follow along.\n> >> Thank you for publishing this patch. IMHO Postgres is really missing\n> normal\n> >> support of columnar store\n> > Yep.\n> >\n> >> and table access method API is the best way of integrating it.\n> > This is not surprising, considering that columnar store is precisely\n> the\n> > reason for starting the work on table AMs.\n> >\n> > We should certainly look into integrating some sort of columnar\n> storage\n> > in mainline.  Not sure which of zedstore or VOPS is the best\n> candidate,\n> > or maybe we'll have some other proposal.  My feeling is that having\n> more\n> > than one is not useful; if there are optimizations to one that can be\n> > borrowed from the other, let's do that instead of duplicating effort.\n> >\n> There are two different aspects:\n> 1. Store format.\n> 2. Vector execution.\n>\n> 1. 
VOPS is using mixed format, something similar with Apache parquet.\n> Tuples are stored vertically, but only inside one page.\n> It tries to minimize trade-offs between true horizontal and true\n> vertical storage:\n> first is most optimal for selecting all rows, while second - for\n> selecting small subset of rows.\n> To make this approach more efficient, it is better to use large page\n> size - default Postgres 8k pages is not enough.\n>\n> From my point of view such format is better than pure vertical storage\n> which will be very inefficient if query access larger number of columns.\n> This problem can be somehow addressed by creating projections: grouping\n> several columns together. But it requires more space for storing\n> multiple projections.\n>\n> Right, storing all the columns in single page doesn't give any savings on\n> IO.\n>\n\nYeah, although you could save some I/O thanks to compression even in\nthat case.\n\n> 2. Doesn't matter which format we choose, to take all advantages of\n> vertical representation we need to use vector operations.\n> And Postgres executor doesn't support them now. This is why VOPS is\n> using some hacks, which is definitely not good and not working in all\n> cases.\n> zedstore is not using such hacks and ... this is why it never can reach\n> VOPS performance.\n>\n> Vectorized execution is orthogonal to storage format. It can be even\n> applied to row store and performance gained. Similarly column store\n> without vectorized execution also gives performance gain better\n> compression ratios and such benefits. Column store clubbed with\n> vectorized execution makes it lot more performant agree. Zedstore\n> currently is focused to have AM piece in place, which fits the postgres\n> ecosystem and supports all the features heap does.\n\nNot sure it's quite orthogonal. 
Sure, you can apply it to rowstores too,\nbut I'd say column stores are naturally better suited for it.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 14 Apr 2019 18:26:45 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
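The claim that vectorized execution is (largely) independent of the storage format can be shown with a toy sketch: the win comes from processing a column as one contiguous batch, which a column store happens to produce naturally. This is illustrative Python only, with invented names:

```python
# Row-at-a-time: one dictionary lookup and loop iteration per tuple,
# mirroring a per-tuple executor call chain.
def sum_row_at_a_time(rows):
    total = 0
    for row in rows:
        total += row["i199"]
    return total

# Vectorized: the needed column arrives as batches (arrays), and the
# hot loop operates on a whole batch at once, amortizing per-tuple
# overhead. The same loop works whether the batch was sliced out of a
# row store or read directly from a column store.
def sum_vectorized(column_batches):
    return sum(sum(batch) for batch in column_batches)

rows = [{"i199": i} for i in range(1000)]
batches = [list(range(0, 500)), list(range(500, 1000))]
assert sum_row_at_a_time(rows) == sum_vectorized(batches) == 499500
```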
{
"msg_contents": "On Thu, Apr 11, 2019 at 04:52:33PM +0300, Konstantin Knizhnik wrote:\n> On 11.04.2019 16:18, Andreas Karlsson wrote:\n>\n> On 4/11/19 10:46 AM, Konstantin Knizhnik wrote:\n>\n> This my results of compressing pbench data using different\n> compressors:\n>\n> +-------------------------------------------------------------+\n> |Configuration |Size (Gb) |Time (sec) |\n> |---------------------------+----------------+----------------|\n> |no compression |15.31 |92 |\n> |---------------------------+----------------+----------------|\n> |zlib (default level) |2.37 |284 |\n> |---------------------------+----------------+----------------|\n> |zlib (best speed) |2.43 |191 |\n> |---------------------------+----------------+----------------|\n> |postgres internal lz |3.89 |214 |\n> |---------------------------+----------------+----------------|\n> |lz4 |4.12 |95 |\n> |---------------------------+----------------+----------------|\n> |snappy |5.18 |99 |\n> |---------------------------+----------------+----------------|\n> |lzfse |2.80 |1099 |\n> |---------------------------+----------------+----------------|\n> |(apple) 2.80 1099 |1.69 |125 |\n> +-------------------------------------------------------------+\n>\n> You see that zstd provides almost 2 times better compression ration\n> and almost at the same speed.\n>\n> What is \"(apple) 2.80 1099\"? 
Was that intended to be zstd?\n>\n> Andreas\n>\n> Ugh...\n> Cut and paste problems.\n> The whole document can be found here:\n> http://garret.ru/PageLevelCompression.pdf\n>\n> lzfse (apple)       2.80    1099\n> zstd (facebook)  1.69    125\n>\n> zstd is compression algorithm proposed by facebook:\n> https://github.com/facebook/zstd\n> Looks like it provides the best speed/compress ratio result.\n>\n\nI think those comparisons are cute and we did a fair amount of them when\nconsidering a drop-in replacement for pglz, but ultimately it might be a\nbit pointless because:\n\n(a) it very much depends on the dataset (one algorithm may work great on\none type of data, suck on another)\n\n(b) different systems may require different trade-offs (high ingestion\nrate vs. best compression ratio)\n\n(c) decompression speed may be much more important\n\nWhat I'm trying to say is that we shouldn't obsess about picking one\nparticular algorithm too much, because it's entirely pointless. Instead,\nwe should probably design the system to support different compression\nalgorithms, ideally at column level.\n\nAlso, while these general purpose algorithms are nice, what I think will\nbe important in later stages of colstore development will be compression\nalgorithms allowing execution directly on the compressed data (like RLE,\ndictionary and similar approaches).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 14 Apr 2019 18:36:18 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
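Supporting different compression algorithms at column level, as suggested above, essentially amounts to persisting an algorithm id with each compressed chunk and dispatching through a codec table, so old data stays readable after a column's codec changes. A minimal sketch (Python; the registry, ids and codec choices are all invented for illustration):

```python
import zlib
from typing import Callable, Dict, Tuple

# Algorithm id 0/1 is stored as the first byte of every chunk.
CODECS: Dict[int, Tuple[Callable[[bytes], bytes],
                        Callable[[bytes], bytes]]] = {
    0: (lambda b: b, lambda b: b),        # no compression
    1: (zlib.compress, zlib.decompress),  # general-purpose codec
}

# Per-column codec choice; a real system would keep this in catalog
# metadata rather than a module-level dict.
COLUMN_CODEC = {"id": 0, "payload": 1}

def store(column: str, data: bytes) -> bytes:
    algo = COLUMN_CODEC[column]
    compress, _ = CODECS[algo]
    return bytes([algo]) + compress(data)

def load(chunk: bytes) -> bytes:
    _, decompress = CODECS[chunk[0]]
    return decompress(chunk[1:])

for col, data in (("id", b"\x01\x02\x03"), ("payload", b"hello " * 100)):
    assert load(store(col, data)) == data
```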
{
"msg_contents": "On Thu, Apr 11, 2019 at 06:20:47PM +0300, Heikki Linnakangas wrote:\n>On 11/04/2019 17:54, Tom Lane wrote:\n>>Ashwin Agrawal <aagrawal@pivotal.io> writes:\n>>>Thank you for trying it out. Yes, noticed for certain patterns pg_lzcompress() actually requires much larger output buffers. Like for one 86 len source it required 2296 len output buffer. Current zedstore code doesn't handle this case and errors out. LZ4 for same patterns works fine, would highly recommend using LZ4 only, as anyways speed is very fast as well with it.\n>>\n>>You realize of course that *every* compression method has some inputs that\n>>it makes bigger. If your code assumes that compression always produces a\n>>smaller string, that's a bug in your code, not the compression algorithm.\n>\n>Of course. The code is not making that assumption, although clearly \n>there is a bug there somewhere because it throws that error. It's \n>early days..\n>\n>In practice it's easy to weasel out of that, by storing the data \n>uncompressed, if compression would make it longer. Then you need an \n>extra flag somewhere to indicate whether it's compressed or not. It \n>doesn't break the theoretical limit because the actual stored length \n>is then original length + 1 bit, but it's usually not hard to find a \n>place for one extra bit.\n>\n\nDon't we already have that flag, though? I see ZSCompressedBtreeItem has\nt_flags, and there's ZSBT_COMPRESSED, but maybe it's more complicated.\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 14 Apr 2019 18:39:47 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
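The "store uncompressed plus one extra flag bit" escape hatch Heikki describes can be sketched as below. `FLAG_COMPRESSED` is a hypothetical stand-in in the spirit of `ZSBT_COMPRESSED`, not zedstore's actual item format:

```python
import zlib

FLAG_COMPRESSED = 0x01  # hypothetical flag bit, in the spirit of ZSBT_COMPRESSED

def item_store(payload: bytes) -> bytes:
    """Compress only when it actually saves space; otherwise store the raw
    bytes. Worst case is the original length plus one flag byte."""
    compressed = zlib.compress(payload)
    if len(compressed) < len(payload):
        return bytes([FLAG_COMPRESSED]) + compressed
    return bytes([0]) + payload

def item_load(item: bytes) -> bytes:
    flags, payload = item[0], item[1:]
    return zlib.decompress(payload) if flags & FLAG_COMPRESSED else payload
```

An incompressible payload (e.g. each byte value exactly once) falls back to raw storage, so the stored length can never exceed the original plus the flag byte.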
{
"msg_contents": "Hi,\n\nOn 2019-04-14 18:36:18 +0200, Tomas Vondra wrote:\n> I think those comparisons are cute and we did a fair amount of them when\n> considering a drop-in replacement for pglz, but ultimately it might be a\n> bit pointless because:\n> \n> (a) it very much depends on the dataset (one algorithm may work great on\n> one type of data, suck on another)\n> \n> (b) different systems may require different trade-offs (high ingestion\n> rate vs. best compression ratio)\n> \n> (c) decompression speed may be much more important\n> \n> What I'm trying to say is that we shouldn't obsess about picking one\n> particular algorithm too much, because it's entirely pointless. Instead,\n> we should probably design the system to support different compression\n> algorithms, ideally at column level.\n\nI think we still need to pick a default algorithm, and realistically\nthat's going to be used by like 95% of the users.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 14 Apr 2019 09:45:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Mon, Apr 08, 2019 at 05:27:05PM -0700, Ashwin Agrawal wrote:\n> Heikki and I have been hacking recently for few weeks to implement\n> in-core columnar storage for PostgreSQL. Here's the design and initial\n> implementation of Zedstore, compressed in-core columnar storage (table\n> access method). Attaching the patch and link to github branch [1] to\n> follow along.\n>\n> The objective is to gather feedback on design and approach to the\n> same. The implementation has core basic pieces working but not close\n> to complete.\n>\n> Big thank you to Andres, Haribabu and team for the table access method\n> API's. Leveraged the API's for implementing zedstore, and proves API\n> to be in very good shape. Had to enhance the same minimally but\n> in-general didn't have to touch executor much.\n>\n> Motivations / Objectives\n>\n> * Performance improvement for queries selecting subset of columns\n>   (reduced IO).\n> * Reduced on-disk footprint compared to heap table. Shorter tuple\n>   headers and also leveraging compression of similar type data\n> * Be first-class citizen in the Postgres architecture (tables data can\n>   just independently live in columnar storage)\n> * Fully MVCC compliant\n> * All Indexes supported\n> * Hybrid row-column store, where some columns are stored together, and\n>   others separately. Provide flexibility of granularity on how to\n>   divide the columns. Columns accessed together can be stored\n>   together.\n> * Provide better control over bloat (similar to zheap)\n> * Eliminate need for separate toast tables\n> * Faster add / drop column or changing data type of column by avoiding\n>   full rewrite of the table.\n>\n\nCool. Me gusta.\n\n> High-level Design - B-trees for the win!\n> ========================================\n>\n> To start simple, let's ignore column store aspect for a moment and\n> consider it as compressed row store. 
The column store is natural\n> extension of this concept, explained in next section.\n>\n> The basic on-disk data structure leveraged is a B-tree, indexed by\n> TID. BTree being a great data structure, fast and versatile. Note this\n> is not referring to existing Btree indexes, but instead net new\n> separate BTree for table data storage.\n>\n> TID - logical row identifier:\n> TID is just a 48-bit row identifier. The traditional division into\n> block and offset numbers is meaningless. In order to find a tuple with\n> a given TID, one must always descend the B-tree. Having logical TID\n> provides flexibility to move the tuples around different pages on page\n> splits or page merges can be performed.\n>\n\nSo if TIDs are redefined this way, how does affect BRIN indexes? I mean,\nthat's a lightweight indexing scheme which however assumes TIDs encode\ncertain amount of locality - so this probably makes them (and Bitmap\nHeap Scans in general) much less eficient. That's a bit unfortunate,\nalthough I don't see a way around it :-(\n\n> The internal pages of the B-tree are super simple and boring. Each\n> internal page just stores an array of TID and downlink pairs. Let's\n> focus on the leaf level. Leaf blocks have short uncompressed header,\n> followed by btree items. 
Two kinds of items exist:\n>\n>  - plain item, holds one tuple or one datum, uncompressed payload\n>  - a \"container item\", holds multiple plain items, compressed payload\n>\n> +-----------------------------\n> | Fixed-size page header:\n> |\n> |   LSN\n> |   TID low and hi key (for Lehman & Yao B-tree operations)\n> |   left and right page pointers\n> |\n> | Items:\n> |\n> |   TID | size | flags | uncompressed size | lastTID | payload (container\n> item)\n> |   TID | size | flags | uncompressed size | lastTID | payload (container\n> item)\n> |   TID | size | flags | undo pointer | payload (plain item)\n> |   TID | size | flags | undo pointer | payload (plain item)\n> |   ...\n> |\n> +----------------------------\n>\n\nSo if I understand it correctly, ZSUncompressedBtreeItem is the \"plain\"\nitem and ZSCompressedBtreeItem is the container one. Correct?\n\nI find it a bit confusing, and I too ran into the issue with data that\ncan't be compressed, so I think the \"container\" should support both\ncompressed and uncompressed data. Heikki already mentioned that, so I\nsuppose it's just not implemented yet. That however means the name of\nthe \"compressed\" struct gets confusing, so I suggest to rename to:\n\n ZSUncompressedBtreeItem -> ZSPlainBtreeItem\n ZSCompressedBtreeItem -> ZSContainerBtreeItem\n\nwhere the container supports both compressed and uncompressed mode.\nAlso, maybe we don't need to put \"Btree\" into every damn name ;-)\n\nLooking at the ZSCompressedBtreeItem, I see it stores just first/last\nTID for the compressed data. Won't that be insufficient when there are\nsome gaps due to deletions or something? Or perhaps I just don't\nunderstand how it works.\n\nAnother thing is that with uncompressed size being stored as uint16,\nwon't that be insufficient for highly compressible data / large pages? 
I\nmean, we can have pages up to 32kB, which is not that far.\n\n\n> Column store\n> ------------\n>\n> A column store uses the same structure but we have *multiple* B-trees,\n> one for each column, all indexed by TID. The B-trees for all columns\n> are stored in the same physical file.\n>\n> A metapage at block 0, has links to the roots of the B-trees. Leaf\n> pages look the same, but instead of storing the whole tuple, stores\n> just a single attribute. To reconstruct a row with given TID, scan\n> descends down the B-trees for all the columns using that TID, and\n> fetches all attributes. Likewise, a sequential scan walks all the\n> B-trees in lockstep.\n>\n\nOK, so data for all the columns are stored in separate btrees, but in\nthe same physical file. Wouldn't it be more convenient to have one\nrelfilenode per column?\n\nThat would also mean the 32TB limit applies to individual columns, not\nthe whole table. Of course, it'd be more complicated and partitioning\nallows us to work around that limit.\n\n\n> So, in summary can imagine Zedstore as forest of B-trees, one for each\n> column, all indexed by TIDs.\n>\n> This way of laying out the data also easily allows for hybrid\n> row-column store, where some columns are stored together, and others\n> have a dedicated B-tree. Need to have user facing syntax to allow\n> specifying how to group the columns.\n>\n\nOK, makes sense. Do you also envision supporting per-column / per-group\ncompression etc?\n\n> Main reasons for storing data this way\n> --------------------------------------\n>\n> * Layout the data/tuples in mapped fashion instead of keeping the\n> � logical to physical mapping separate from actual data. So, keep the\n> � meta-data and data logically in single stream of file, avoiding the\n> � need for separate forks/files to store meta-data and data.\n>\n> * Stick to fixed size physical blocks. 
Variable size blocks pose need\n>   for increased logical to physical mapping maintenance, plus\n>   restrictions on concurrency of writes and reads to files. Hence\n>   adopt compression to fit fixed size blocks instead of other way\n>   round.\n>\n> MVCC\n> ----\n> MVCC works very similar to zheap for zedstore. Undo record pointers\n> are used to implement MVCC. Transaction information is not directly\n> stored with the data. In zheap, there's a small, fixed, number of\n> \"transaction slots\" on each page, but zedstore has undo pointer with\n> each item directly; in normal cases, the compression squeezes this\n> down to almost nothing.\n>\n> Implementation\n> ==============\n>\n> Insert:\n> Inserting a new row, splits the row into datums. Then for first column\n> decide which block to insert the same to, and pick a TID for it, and\n> write undo record for the same. Rest of the columns are inserted using\n> that same TID and point to same undo position.\n>\n\nWhat about deletes? How do these work?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 14 Apr 2019 19:08:53 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
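Tomas's uint16 concern above is concrete: a container item whose compressed payload easily fits even the largest 32kB page can have an uncompressed size well beyond the 65535 a uint16 can represent. A quick illustration (zlib stands in for whatever codec zedstore ends up using):

```python
import zlib

UINT16_MAX = 0xFFFF    # range of a uint16 "uncompressed size" field
PAGE_SIZE = 32 * 1024  # largest supported PostgreSQL block size

payload = b"x" * 100_000                 # highly compressible column data
compressed = zlib.compress(payload)

fits_on_page = len(compressed) <= PAGE_SIZE       # the item fits on a page
size_field_overflows = len(payload) > UINT16_MAX  # but its size field doesn't
```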
{
"msg_contents": "On Sun, Apr 14, 2019 at 09:45:10AM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2019-04-14 18:36:18 +0200, Tomas Vondra wrote:\n>> I think those comparisons are cute and we did a fair amount of them when\n>> considering a drop-in replacement for pglz, but ultimately it might be a\n>> bit pointless because:\n>>\n>> (a) it very much depends on the dataset (one algorithm may work great on\n>> one type of data, suck on another)\n>>\n>> (b) different systems may require different trade-offs (high ingestion\n>> rate vs. best compression ratio)\n>>\n>> (c) decompression speed may be much more important\n>>\n>> What I'm trying to say is that we shouldn't obsess about picking one\n>> particular algorithm too much, because it's entirely pointless. Instead,\n>> we should probably design the system to support different compression\n>> algorithms, ideally at column level.\n>\n>I think we still need to pick a default algorithm, and realistically\n>that's going to be used by like 95% of the users.\n>\n\nTrue. Do you expect it to be specific to the column store, or should be\nset per-instance default (even for regular heap)?\n\nFWIW I think the conclusion from past dev meetings was we're unlikely to\nfind anything better than lz4. I doubt that changed very much.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 14 Apr 2019 19:12:33 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> On Tue, Apr 09, 2019 at 02:29:09PM -0400, Robert Haas wrote:\n> >On Tue, Apr 9, 2019 at 11:51 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >>This is not surprising, considering that columnar store is precisely the\n> >>reason for starting the work on table AMs.\n> >>\n> >>We should certainly look into integrating some sort of columnar storage\n> >>in mainline. Not sure which of zedstore or VOPS is the best candidate,\n> >>or maybe we'll have some other proposal. My feeling is that having more\n> >>than one is not useful; if there are optimizations to one that can be\n> >>borrowed from the other, let's do that instead of duplicating effort.\n> >\n> >I think that conclusion may be premature. There seem to be a bunch of\n> >different ways of doing columnar storage, so I don't know how we can\n> >be sure that one size will fit all, or that the first thing we accept\n> >will be the best thing.\n> >\n> >Of course, we probably do not want to accept a ton of storage manager\n> >implementations is core. I think if people propose implementations\n> >that are poor quality, or missing important features, or don't have\n> >significantly different use cases from the ones we've already got,\n> >it's reasonable to reject those. But I wouldn't be prepared to say\n> >that if we have two significantly different column store that are both\n> >awesome code with a complete feature set and significantly disjoint\n> >use cases, we should reject the second one just because it is also a\n> >column store. I think that won't get out of control because few\n> >people will be able to produce really high-quality implementations.\n> >\n> >This stuff is hard, which I think is also why we only have 6.5 index\n> >AMs in core after many, many years. 
And our standards have gone up\n> >over the years - not all of those would pass muster if they were\n> >proposed today.\n> \n> It's not clear to me whether you're arguing for not having any such\n> implementation in core, or having multiple ones? I think we should aim\n> to have at least one in-core implementation, even if it's not the best\n> possible one for all sizes. It's not like our rowstore is the best\n> possible implementation for all cases either.\n> \n> I think having a colstore in core is important not just for adoption,\n> but also for testing and development of the executor / planner bits.\n\nAgreed.\n\n> If we have multiple candidates with sufficient code quality, then we may\n> consider including both. I don't think it's very likely to happen in the\n> same release, considering how much work it will require. And I have no\n> idea if zedstore or VOPS are / will be the only candidates - it's way\n> too early at this point.\n\nDefinitely, but having as many different indexes as we have is certainly\na good thing and we should be looking to a future where we have multiple\nin-core options for row and column-oriented storage.\n\n> FWIW I personally plan to focus primarily on the features that aim to\n> be included in core, and that applies to colstores too.\n\nYeah, same here.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 15 Apr 2019 08:34:24 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Sun, Apr 14, 2019 at 06:39:47PM +0200, Tomas Vondra wrote:\n>On Thu, Apr 11, 2019 at 06:20:47PM +0300, Heikki Linnakangas wrote:\n>>On 11/04/2019 17:54, Tom Lane wrote:\n>>>Ashwin Agrawal <aagrawal@pivotal.io> writes:\n>>>>Thank you for trying it out. Yes, noticed for certain patterns\n>>>>pg_lzcompress() actually requires much larger output buffers. Like\n>>>>for one 86 len source it required 2296 len output buffer. Current\n>>>>zedstore code doesn’t handle this case and errors out. LZ4 for same\n>>>>patterns works fine, would highly recommend using LZ4 only, as\n>>>>anyways speed is very fast as well with it.\n>>>\n>>>You realize of course that *every* compression method has some inputs\n>>>that it makes bigger. If your code assumes that compression always\n>>>produces a smaller string, that's a bug in your code, not the\n>>>compression algorithm.\n>>\n>>Of course. The code is not making that assumption, although clearly\n>>there is a bug there somewhere because it throws that error. It's\n>>early days..\n>>\n>>In practice it's easy to weasel out of that, by storing the data\n>>uncompressed, if compression would make it longer. Then you need an\n>>extra flag somewhere to indicate whether it's compressed or not. It\n>>doesn't break the theoretical limit because the actual stored length\n>>is then original length + 1 bit, but it's usually not hard to find a\n>>place for one extra bit.\n>>\n>\n>Don't we already have that flag, though? I see ZSCompressedBtreeItem\n>has t_flags, and there's ZSBT_COMPRESSED, but maybe it's more\n>complicated.\n>\n\nAfter thinking about this a bit more, I think a simple flag may not be\nenough. 
It might be better to have some sort of ID of the compression\nalgorithm in each item, which would allow switching algorithm for new\ndata (which may be useful e.g after we add new stuff in core, or when\nthe initial choice was not the best one).\n\nOf course, those are just wild thoughts at this point, it's not\nsomething the current PoC has to solve right away.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 15 Apr 2019 15:01:38 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
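The per-item algorithm ID Tomas suggests above can be sketched as a one-byte codec ID plus a dispatch table, so a new algorithm can be adopted for newly written data without rewriting items that already exist. The registry and codec choices below are a hypothetical illustration, not zedstore code:

```python
import bz2
import zlib

# Hypothetical codec registry: the one-byte ID stored with each item picks
# the (compress, decompress) pair, so adding entry 3 later would not
# invalidate anything written with IDs 0-2.
CODECS = {
    0: (lambda b: b, lambda b: b),        # 0 = stored uncompressed
    1: (zlib.compress, zlib.decompress),
    2: (bz2.compress, bz2.decompress),
}

def encode_item(payload: bytes, codec_id: int) -> bytes:
    compress, _ = CODECS[codec_id]
    return bytes([codec_id]) + compress(payload)

def decode_item(item: bytes) -> bytes:
    _, decompress = CODECS[item[0]]
    return decompress(item[1:])
```

Readers never need to know which algorithm was the default at write time; the ID travels with the data.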
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n>> I think having a colstore in core is important not just for adoption,\n>> but also for testing and development of the executor / planner bits.\n\n> Agreed.\n\nTBH, I thought the reason we were expending so much effort on a tableam\nAPI was exactly so we *wouldn't* have to include such stuff in core.\n\nThere is a finite limit to how much stuff we can maintain as part of core.\nWe should embrace the notion that Postgres is an extensible system, rather\nthan build all the tooling for extension and then proceed to dump stuff\ninto core anyway.\n\n>> If we have multiple candidates with sufficient code quality, then we may\n>> consider including both.\n\nDear god, no.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Apr 2019 11:10:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Sat, Apr 13, 2019 at 4:22 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Thu, Apr 11, 2019 at 6:06 AM Rafia Sabih <rafia.pghackers@gmail.com>\n> wrote:\n> > Reading about it reminds me of this work -- TAG column storage(\n> https://urldefense.proofpoint.com/v2/url?u=http-3A__www09.sigmod.org_sigmod_record_issues_0703_03.article-2Dgraefe.pdf&d=DwIBaQ&c=lnl9vOaLMzsy2niBC8-h_K-7QJuNJEsFrzdndhuJ3Sw&r=gxIaqms7ncm0pvqXLI_xjkgwSStxAET2rnZQpzba2KM&m=H2hOVqCm9svWVOW1xh7FhoURKEP-WWpWso6lKD1fLoM&s=KNOse_VUg9-BW7SyDXt1vw92n6x_B92N9SJHZKrdoIo&e=\n> ).\n> > Isn't this storage system inspired from there, with TID as the TAG?\n> >\n> > It is not referenced here so made me wonder.\n>\n> I don't think they're particularly similar, because that paper\n> describes an architecture based on using purely logical row\n> identifiers, which is not what a TID is. TID is a hybrid\n> physical/logical identifier, sometimes called a \"physiological\"\n> identifier, which will have significant overhead.\n\n\nStorage system wasn't inspired by that paper, but yes seems it also talks\nabout laying out column data in btrees, which is good to see. But yes as\npointed out by Peter, the main aspect the paper is focusing on to save\nspace for TAG, isn't something zedstore plan's to leverage, it being more\nrestrictive. As discussed below we can use other alternatives to save space.\n\n\n> Ashwin said that\n> ZedStore TIDs are logical identifiers, but I don't see how that's\n> compatible with a hybrid row/column design (unless you map heap TID to\n> logical row identifier using a separate B-Tree).\n>\n\nWould like to know more specifics on this Peter. We may be having different\ncontext on hybrid row/column design. When we referenced design supports\nhybrid row/column families, it meant not within same table. So, not inside\na table one can have some data in row and some in column nature. For a\ntable, the structure will be homogenous. 
But it can easily support storing\nall the columns together, or subset of columns together or single column\nall connected together by TID.\n\n\n> The big idea with Graefe's TAG design is that there is practically no\n> storage overhead for these logical identifiers, because each entry's\n> identifier is calculated by adding its slot number to the page's\n> tag/low key. The ZedStore design, in contrast, explicitly stores TID\n> for every entry. ZedStore seems more flexible for that reason, but at\n> the same time the per-datum overhead seems very high to me. Maybe\n> prefix compression could help here, which a low key and high key can\n> do rather well.\n>\n\nYes, the plan to optimize out TID space per datum, either by prefix\ncompression or delta compression or some other trick.",
"msg_date": "Mon, 15 Apr 2019 09:15:51 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
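The prefix/delta compression of TIDs discussed above can be sketched with delta-plus-varint encoding: a sorted run of 48-bit TIDs with small gaps shrinks to roughly one byte per TID instead of six. A hypothetical illustration of the idea, not the actual zedstore encoding:

```python
def encode_varint(n: int) -> bytes:
    """LEB128-style varint: 7 payload bits per byte, high bit = continue."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append((byte | 0x80) if n else byte)
        if not n:
            return bytes(out)

def encode_tids(tids):
    """Delta-encode a sorted TID list; dense runs cost ~1 byte per TID
    instead of 6 bytes for each explicit 48-bit TID."""
    out, prev = bytearray(), 0
    for tid in tids:
        out += encode_varint(tid - prev)
        prev = tid
    return bytes(out)

def decode_tids(data: bytes):
    tids, prev, cur, shift = [], 0, 0, 0
    for byte in data:
        cur |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            prev += cur
            tids.append(prev)
            cur = shift = 0
    return tids
```

Deletions only make the deltas larger, not the scheme invalid, which is why gaps in a TID range are cheap here.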
{
"msg_contents": "On Sun, Apr 14, 2019 at 9:40 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Thu, Apr 11, 2019 at 06:20:47PM +0300, Heikki Linnakangas wrote:\n> >On 11/04/2019 17:54, Tom Lane wrote:\n> >>Ashwin Agrawal <aagrawal@pivotal.io> writes:\n> >>>Thank you for trying it out. Yes, noticed for certain patterns\n> pg_lzcompress() actually requires much larger output buffers. Like for one\n> 86 len source it required 2296 len output buffer. Current zedstore code\n> doesn’t handle this case and errors out. LZ4 for same patterns works fine,\n> would highly recommend using LZ4 only, as anyways speed is very fast as\n> well with it.\n> >>\n> >>You realize of course that *every* compression method has some inputs\n> that\n> >>it makes bigger. If your code assumes that compression always produces a\n> >>smaller string, that's a bug in your code, not the compression algorithm.\n> >\n> >Of course. The code is not making that assumption, although clearly\n> >there is a bug there somewhere because it throws that error. It's\n> >early days..\n> >\n> >In practice it's easy to weasel out of that, by storing the data\n> >uncompressed, if compression would make it longer. Then you need an\n> >extra flag somewhere to indicate whether it's compressed or not. It\n> >doesn't break the theoretical limit because the actual stored length\n> >is then original length + 1 bit, but it's usually not hard to find a\n> >place for one extra bit.\n> >\n>\n> Don't we already have that flag, though? I see ZSCompressedBtreeItem has\n> t_flags, and there's ZSBT_COMPRESSED, but maybe it's more complicated.\n>\n\nThe flag ZSBT_COMPRESSED differentiates between container (compressed) item\nand plain (uncompressed item). Current code is writtten such that within\ncontainer (compressed) item, all the data is compressed. If need exists to\nstore some part of uncompressed data inside container item, then this\nadditional flag would be required to indicate the same. 
Hence its different\nthan ZSBT_COMPRESSED. I am thinking one of the ways could be to just not\nstore this datum in container item if can't be compressed and just store it\nas plain item with uncompressed data, this additional flag won't be\nrequired. Will know more once write code for this.",
"msg_date": "Mon, 15 Apr 2019 09:29:37 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 09:29:37AM -0700, Ashwin Agrawal wrote:\n> On Sun, Apr 14, 2019 at 9:40 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Thu, Apr 11, 2019 at 06:20:47PM +0300, Heikki Linnakangas wrote:\n> >On 11/04/2019 17:54, Tom Lane wrote:\n> >>Ashwin Agrawal <aagrawal@pivotal.io> writes:\n> >>>Thank you for trying it out. Yes, noticed for certain patterns\n> pg_lzcompress() actually requires much larger output buffers. Like for\n> one 86 len source it required 2296 len output buffer. Current zedstore\n> code doesn’t handle this case and errors out. LZ4 for same patterns\n> works fine, would highly recommend using LZ4 only, as anyways speed is\n> very fast as well with it.\n> >>\n> >>You realize of course that *every* compression method has some inputs\n> that\n> >>it makes bigger. If your code assumes that compression always\n> produces a\n> >>smaller string, that's a bug in your code, not the compression\n> algorithm.\n> >\n> >Of course. The code is not making that assumption, although clearly\n> >there is a bug there somewhere because it throws that error. It's\n> >early days..\n> >\n> >In practice it's easy to weasel out of that, by storing the data\n> >uncompressed, if compression would make it longer. Then you need an\n> >extra flag somewhere to indicate whether it's compressed or not. It\n> >doesn't break the theoretical limit because the actual stored length\n> >is then original length + 1 bit, but it's usually not hard to find a\n> >place for one extra bit.\n> >\n>\n> Don't we already have that flag, though? I see ZSCompressedBtreeItem has\n> t_flags, and there's ZSBT_COMPRESSED, but maybe it's more complicated.\n>\n> The flag ZSBT_COMPRESSED differentiates between container (compressed)\n> item and plain (uncompressed item). Current code is writtten such that\n> within container (compressed) item, all the data is compressed. 
If need\n> exists to store some part of uncompressed data inside container item, then\n> this additional flag would be required to indicate the same. Hence its\n> different than ZSBT_COMPRESSED. I am thinking one of the ways could be to\n> just not store this datum in container item if can't be compressed and\n> just store it as plain item with uncompressed data, this additional flag\n> won't be required. Will know more once write code for this.\n\nI see. Perhaps it'd be better to call the flag ZSBT_CONTAINER, when it\nmeans \"this is a container\". And then have another flag to track whether\nthe container is compressed or not. But as I suggested elsewhere in this\nthread, I think it might be better to store some ID of the compression\nalgorithm used instead of a simple flag.\n\nFWIW when I had to deal with incremental compression (adding data into\nalready compressed buffers), which is what seems to be happening here, I\nfound it very useful/efficient to allow partially compressed buffers and\nonly trigger recompression when absolutely needed.\n\nApplied to this case, the container would first store compressed chunk,\nfollowed by raw (uncompressed) data. Say, like this:\n\nZSContainerData {\n\n // header etc.\n\n int nbytes; /* total bytes in data */\n int ncompressed; /* ncompressed <= nbytes, fully compressed when\n * (ncompressed == nbytes) */\n\n char data[FLEXIBLE_ARRAY_MEMBER];\n}\n\nWhen adding a value to the buffer, it'd be simply appended to the data\narray. When the container would grow too much (can't fit on the page or\nsomething), recompression is triggered.\n\n\ncheers\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 15 Apr 2019 19:32:54 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 10:33 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Mon, Apr 15, 2019 at 09:29:37AM -0700, Ashwin Agrawal wrote:\n> > On Sun, Apr 14, 2019 at 9:40 AM Tomas Vondra\n> > <tomas.vondra@2ndquadrant.com> wrote:\n> >\n> > On Thu, Apr 11, 2019 at 06:20:47PM +0300, Heikki Linnakangas wrote:\n> > >On 11/04/2019 17:54, Tom Lane wrote:\n> > >>Ashwin Agrawal <aagrawal@pivotal.io> writes:\n> > >>>Thank you for trying it out. Yes, noticed for certain patterns\n> > pg_lzcompress() actually requires much larger output buffers. Like\n> for\n> > one 86 len source it required 2296 len output buffer. Current\n> zedstore\n> > code doesn’t handle this case and errors out. LZ4 for same patterns\n> > works fine, would highly recommend using LZ4 only, as anyways speed\n> is\n> > very fast as well with it.\n> > >>\n> > >>You realize of course that *every* compression method has some\n> inputs\n> > that\n> > >>it makes bigger. If your code assumes that compression always\n> > produces a\n> > >>smaller string, that's a bug in your code, not the compression\n> > algorithm.\n> > >\n> > >Of course. The code is not making that assumption, although clearly\n> > >there is a bug there somewhere because it throws that error. It's\n> > >early days..\n> > >\n> > >In practice it's easy to weasel out of that, by storing the data\n> > >uncompressed, if compression would make it longer. Then you need an\n> > >extra flag somewhere to indicate whether it's compressed or not. It\n> > >doesn't break the theoretical limit because the actual stored length\n> > >is then original length + 1 bit, but it's usually not hard to find a\n> > >place for one extra bit.\n> > >\n> >\n> > Don't we already have that flag, though? I see ZSCompressedBtreeItem\n> has\n> > t_flags, and there's ZSBT_COMPRESSED, but maybe it's more\n> complicated.\n> >\n> > The flag ZSBT_COMPRESSED differentiates between container (compressed)\n> > item and plain (uncompressed item). 
Current code is writtten such that\n> > within container (compressed) item, all the data is compressed. If need\n> > exists to store some part of uncompressed data inside container item,\n> then\n> > this additional flag would be required to indicate the same. Hence its\n> > different than ZSBT_COMPRESSED. I am thinking one of the ways could be\n> to\n> > just not store this datum in container item if can't be compressed and\n> > just store it as plain item with uncompressed data, this additional\n> flag\n> > won't be required. Will know more once write code for this.\n>\n> I see. Perhaps it'd be better to call the flag ZSBT_CONTAINER, when it\n> means \"this is a container\". And then have another flag to track whether\n> the container is compressed or not. But as I suggested elsewhere in this\n> thread, I think it might be better to store some ID of the compression\n> algorithm used instead of a simple flag.\n>\n> FWIW when I had to deal with incremental compression (adding data into\n> already compressed buffers), which is what seems to be happening here, I\n> found it very useful/efficient to allow partially compressed buffers and\n> only trigger recompressin when absolutely needed.\n>\n> Applied to this case, the container would first store compressed chunk,\n> followed by raw (uncompressed) data. Say, like this:\n>\n> ZSContainerData {\n>\n> // header etc.\n>\n> int nbytes; /* total bytes in data */\n> int ncompressed; /* ncompressed <= nbytes, fully compressed when\n> * (ncompressed == nbytes) */\n>\n> char data[FLEXIBLE_ARRAY_MEMBER];\n> }\n>\n> When adding a value to the buffer, it'd be simply appended to the data\n> array. When the container would grow too much (can't fit on the page or\n> something), recompression is triggered.\n>\n\nI think what you suggested here is exactly how it's handled currently, just\nthe mechanics are a little different. Plain items are added to the page as\ninsertions are performed. 
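As a rough, hedged sketch of that insert cycle in C (the ToyItem/ToyPage
structs, PAGE_CAPACITY, and the helper names below are invented for
illustration and are not the actual zedstore code; only the
ZSBT_COMPRESSED flag name comes from the real implementation):

```c
/* Hypothetical sketch of a zedstore-like page: "plain" items accumulate
 * until the page is full, then they are folded into a single "container"
 * item ("compression"), and new insertions continue as plain items.
 * Items already inside a container are never re-compressed. */
#include <assert.h>
#include <string.h>

#define PAGE_CAPACITY 4         /* toy page: "full" after 4 items */
#define MAX_ITEMS     16

enum { ZSBT_PLAIN = 0, ZSBT_COMPRESSED = 1 };   /* toy t_flags values */

typedef struct
{
    int     t_flags;            /* plain item or compressed container */
    int     nitems;             /* plain datums folded into this item */
} ToyItem;

typedef struct
{
    int     nitems;
    ToyItem items[MAX_ITEMS];
} ToyPage;

/* Fold all trailing plain items into one container item. */
static void
toy_compress_plain_items(ToyPage *page)
{
    int     i, nplain = 0;

    /* count the trailing run of plain items */
    for (i = page->nitems; i > 0 && page->items[i - 1].t_flags == ZSBT_PLAIN; i--)
        nplain++;

    if (nplain == 0)
        return;

    /* replace the run with a single container item; earlier container
     * items are left alone, so compressed data is never recompressed */
    page->nitems = i;
    page->items[page->nitems].t_flags = ZSBT_COMPRESSED;
    page->items[page->nitems].nitems = nplain;
    page->nitems++;
}

/* Insert one datum as a plain item, compressing first if the page is full. */
static void
toy_insert(ToyPage *page, int datum)
{
    (void) datum;               /* payload omitted in this sketch */

    if (page->nitems >= PAGE_CAPACITY)
        toy_compress_plain_items(page);

    page->items[page->nitems].t_flags = ZSBT_PLAIN;
    page->items[page->nitems].nitems = 1;
    page->nitems++;
}
```

With the toy PAGE_CAPACITY of 4, six inserts leave the page holding one
container item covering the first four datums, followed by two plain
items, i.e. the mix of plain and container items described here.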
Then when the page becomes full, compression is\ntriggered and a container item is created for them to store the compressed data.\nThen new insertions are stored as plain items; once again, when the page becomes\nfull, they are compressed and a container item is created for them. So compressed\ndata is never attempted to be compressed again. On the page, the plain items are\nacting as the data section you mentioned above. A page can have a mix of n plain\nand n container items.",
"msg_date": "Mon, 15 Apr 2019 10:50:21 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Sun, Apr 14, 2019 at 12:22 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> It's not clear to me whether you're arguing for not having any such\n> implementation in core, or having multiple ones? I think we should aim\n> to have at least one in-core implementation, even if it's not the best\n> possible one for all sizes. It's not like our rowstore is the best\n> possible implementation for all cases either.\n\nI'm mostly arguing that it's too early to decide anything at this\npoint. I'm definitely not opposed to having a column store in core.\n\n> I think having a colstore in core is important not just for adoption,\n> but also for testing and development of the executor / planner bits.\n>\n> If we have multiple candidates with sufficient code quality, then we may\n> consider including both. I don't think it's very likely to happen in the\n> same release, considering how much work it will require. And I have no\n> idea if zedstore or VOPS are / will be the only candidates - it's way\n> too early at this point.\n>\n> FWIW I personally plan to focus primarily on the features that aim to\n> be included in core, and that applies to colstores too.\n\nI agree with all of that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 15 Apr 2019 13:55:00 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 11:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> TBH, I thought the reason we were expending so much effort on a tableam\n> API was exactly so we *wouldn't* have to include such stuff in core.\n>\n> There is a finite limit to how much stuff we can maintain as part of core.\n> We should embrace the notion that Postgres is an extensible system, rather\n> than build all the tooling for extension and then proceed to dump stuff\n> into core anyway.\n\nI don't agree with that at all. I expect, and hope, that there will\nbe some table AMs maintained outside of core, and I think that's\ngreat. At the same time, it's not like we have had any great success\nwith out-of-core index AMs, and I don't see that table AMs are likely\nto be any different in that regard; indeed, they may be quite a bit\nworse. Up until now an index has only had to worry about one kind of\na table, but now a table is going to have to worry about every kind of\nindex. Furthermore, different table AMs are going to have different\nneeds. It has already been remarked by both Andres and on this thread\nthat for columnar storage to really zip along, the executor is going\nto need to be much smarter about deciding which columns to request.\nPresumably there will be a market for planner/executor optimizations\nthat postpone fetching columns for as long as possible. It's not\ngoing to be maintainable to build that kind of infrastructure in core\nand then have no in-core user of it.\n\nBut even if it were, it would be foolish from an adoption perspective\nto drive away people who are trying to contribute that kind of\ntechnology to PostgreSQL. Columnar storage is a big deal. Very\nsignificant numbers of people who won't consider PostgreSQL today\nbecause the performance characteristics are not good enough for what\nthey need will consider it if it's got something like what Ashwin and\nHeikki are building built in. 
Some of those people may be determined\nenough that even if the facility is out-of-core they'll be willing to\ndownload an extension and compile it, but others won't. It's already\na problem that people have to go get pgbouncer and/or pgpool to do\nsomething that they kinda think the database should just handle.\nColumnar storage, like JSON, is not some fringe thing where we can say\nthat the handful of people who want it can go get it: people expect\nthat to be a standard offering, and they wonder why PostgreSQL hasn't\ngot it yet.\n\n> >> If we have multiple candidates with sufficient code quality, then we may\n> >> consider including both.\n>\n> Dear god, no.\n\nI hate to pick on any particular part of the tree, but it seems\nentirely plausible to me that a second columnar storage implementation\ncould deliver more incremental value than spgist, an index AM you\ncommitted. We should not move the goal posts into the stratosphere\nhere.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 15 Apr 2019 14:11:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 10:50:21AM -0700, Ashwin Agrawal wrote:\n> On Mon, Apr 15, 2019 at 10:33 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n>\n> ...\n>\n> I see. Perhaps it'd be better to call the flag ZSBT_CONTAINER, when it\n> means \"this is a container\". And then have another flag to track whether\n> the container is compressed or not. But as I suggested elsewhere in this\n> thread, I think it might be better to store some ID of the compression\n> algorithm used instead of a simple flag.\n>\n> FWIW when I had to deal with incremental compression (adding data into\n> already compressed buffers), which is what seems to be happening here, I\n> found it very useful/efficient to allow partially compressed buffers and\n> only trigger recompressin when absolutely needed.\n>\n> Applied to this case, the container would first store compressed chunk,\n> followed by raw (uncompressed) data. Say, like this:\n>\n> ZSContainerData {\n>\n>     // header etc.\n>\n>     int nbytes;         /* total bytes in data */\n>     int ncompressed;    /* ncompressed <= nbytes, fully compressed when\n>                          * (ncompressed == nbytes) */\n>\n>     char data[FLEXIBLE_ARRAY_MEMBER];\n> }\n>\n> When adding a value to the buffer, it'd be simply appended to the data\n> array. When the container would grow too much (can't fit on the page or\n> something), recompression is triggered.\n>\n> I think what you suggested here is exactly how its handled currently, just\n> the mechanics are little different. Plain items are added to page as\n> insertions are performed. Then when page becomes full, compression is\n> triggerred container item is created for them to store the compressed\n> data. Then new insertions are stored as plain items, once again when page\n> becomes full, they are compressed and container item created for it. So,\n> never, compressed data is attempted to be compressed again. 
So, on page\n> plain items are acting as data section you mentioned above. A page can\n> have mix of n plain and n container items.\n\nMaybe. I'm not going to pretend I fully understand the internals. Does\nthat mean the container contains ZSUncompressedBtreeItem as elements? Or\njust the plain Datum values?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 15 Apr 2019 20:17:54 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-15 11:10:38 -0400, Tom Lane wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> >> I think having a colstore in core is important not just for adoption,\n> >> but also for testing and development of the executor / planner bits.\n> \n> > Agreed.\n> \n> TBH, I thought the reason we were expending so much effort on a tableam\n> API was exactly so we *wouldn't* have to include such stuff in core.\n\nI think it's mostly orthogonal. We need something like tableam to have\nmultiple types of storage options for tables - independent of whether\nthey are in core. And sure, we could have maybe reduced the effort a bit\nhere and there by e.g. not allowing AMs to be dynamically loaded, or\nwriting fewer comments or such.\n\n\n> There is a finite limit to how much stuff we can maintain as part of core.\n> We should embrace the notion that Postgres is an extensible system, rather\n> than build all the tooling for extension and then proceed to dump stuff\n> into core anyway.\n\nI totally agree that that's something we should continue to focus\non. I personally think we *already* *have* embraced that - pretty\nheavily so. And sometimes to the detriment of our users.\n\nI think there's a pretty good case for e.g. *one* column store\nin-core. For one there is a huge portion of existing postgres workloads\nthat benefit from them (often not for all tables, but some). Relatedly,\nit's also one of the more frequent reasons why users can't migrate to\npostgres / have to migrate off. 
And from a different angle, there's\nplenty of planner and executor work to be done to make column stores fast -\nand that can't really be done nicely outside of core; and doing the\nimprovements in core without a user there is both harder, less likely to\nbe accepted, and more likely to regress.\n\n\n> >> If we have multiple candidates with sufficient code quality, then we may\n> >> consider including both.\n> \n> Dear god, no.\n\nYea, I don't see much point in that. Unless there's a pretty fundamental\nreason why one columnar AM can't fulfill two different workloads\n(e.g. by having options that define how things are laid out / compressed\n/ whatnot), I think that'd be a *terrible* idea. By that logic we'd just\nget a lot of AMs with a few differences in some workloads, and our users\nwould be unable to choose one, and all of them would suck. I think one\nsuch fundamental difference is e.g. the visibility management for an\nin-line mvcc approach like heap, and an undo-based mvcc row-store (like\nzheap) - it's very hard to imagine meaningful code savings by having\nthose combined into one AM. I'm sure we can find similar large\narchitectural issues for some types of columnar AMs - but I'm far far\nfrom convinced that there's enough distinctive need for two different\napproaches in postgres. Without having heap historically and the desire\nfor on-disk compat, I can't quite see being convinced that we should\ne.g. add a store like heap if we already had zheap.\n\nI think it's perfectly reasonable to have in-core AMs try to optimize\n~80% for a lot of different [sets of] workloads, even if a very\nspecialized AM could be optimized for it much further.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 Apr 2019 11:22:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-15 14:11:02 -0400, Robert Haas wrote:\n> Furthermore, different table AMs are going to have different\n> needs. It has already been remarked by both Andres and on this thread\n> that for columnar storage to really zip along, the executor is going\n> to need to be much smarter about deciding which columns to request.\n> Presumably there will be a market for planner/executor optimizations\n> that postpone fetching columns for as long as possible. It's not\n> going to be maintainable to build that kind of infrastructure in core\n> and then have no in-core user of it.\n\nRight. Two notes on that: A lot of that infrastructure needed for fast\nquery execution (both plan time and execution time) is also going to be\nuseful for a row store like heap, even though it won't have the\n~order-of-magnitude impacts it can have for column stores. Secondly,\neven without those, the storage density alone can make column stores\nworthwhile, even without query execution speedups (or even slowdowns).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 Apr 2019 11:27:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Apr 15, 2019 at 11:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> There is a finite limit to how much stuff we can maintain as part of core.\n\n> I don't agree with that at all.\n\nReally? Let's have a discussion of how thermodynamics applies to\nsoftware management sometime.\n\n>>> If we have multiple candidates with sufficient code quality, then we may\n>>> consider including both.\n\n>> Dear god, no.\n\n> I hate to pick on any particular part of the tree, but it seems\n> entirely plausible to me that a second columnar storage implementation\n> could deliver more incremental value than spgist, an index AM you\n> committed.\n\nYeah, and that's something I've regretted more than once; I think SP-GiST\nis a sterling example of something that isn't nearly useful enough in the\nreal world to justify the amount of maintenance effort we've been forced\nto expend on it. You might trawl the commit logs to get a sense of the\namount of my own personal time --- not that of the original submitters ---\nthat's gone into that one module. Then ask yourself how much that model\nwill scale, and what other more-useful things I could've accomplished\nwith that time.\n\nWe do need to limit what we accept into core PG. I do not buy your\nargument that users expect everything to be in core. Or more accurately,\nthe people who do think that way won't be using PG anyway --- they'll\nbe using MSSQL because it comes from their OS vendor.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Apr 2019 14:35:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-15 14:11:02 -0400, Robert Haas wrote:\n> I hate to pick on any particular part of the tree, but it seems\n> entirely plausible to me that a second columnar storage implementation\n> could deliver more incremental value than spgist, an index AM you\n> committed. We should not move the goal posts into the stratosphere\n> here.\n\nOh, I forgot: I agree that we don't need to be absurdly picky - but I\nalso think that table storage is much more crucial to get right than\nindex storage, which is already plenty crucial. Especially when that\ntype of index is not commonly usable for constraints. It really sucks to\nget wrong query results due to a corrupted index / wrong index\nimplementation - but if you have table AM level corruption, you're *really*\nin a dark place. There's no way to just REINDEX and potentially recover\nmost information with a bit of surgery. Sure there can be app level\nconsequences to wrong query results that can be really bad, and lead to\nvery permanent data loss. On-disk compat is also much more important\nfor table level data - it's annoying to have to reindex indexes after an\nupgrade, but at least it can be done concurrently after the most\nimportant indexes are operational.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 Apr 2019 11:41:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 11:18 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n>\n> Maybe. I'm not going to pretend I fully understand the internals. Does\n> that mean the container contains ZSUncompressedBtreeItem as elements? Or\n> just the plain Datum values?\n>\n\nFirst, your reading of the code and all the comments/questions so far have\nbeen highly encouraging. Thanks a lot for the same.\n\nThe container contains ZSUncompressedBtreeItem as elements. An item will\nhave to store meta-data like size, undo and such info. We don't wish to\nrestrict compression to items from the same insertion session only. Hence,\nyes, it doesn't just store plain Datum values. We wish to treat these as\ntuple-level operations, have meta-data for them, and be able to work at\ntuple-level granularity rather than block level.\n\nDefinitely many more tricks can be and need to be applied to optimize the\nstorage format; for example, for fixed-width columns there is no need to\nstore the size in every item. \"Keep it simple\" is the theme we have been\ntrying to maintain. Compression ideally should compress duplicate data\npretty easily and efficiently as well, but we will try to optimize as much\nas we can without it.",
"msg_date": "Mon, 15 Apr 2019 11:57:49 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 11:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> We do need to limit what we accept into core PG. I do not buy your\n> argument that users expect everything to be in core. Or more accurately,\n> the people who do think that way won't be using PG anyway --- they'll\n> be using MSSQL because it comes from their OS vendor.\n\nI am also concerned by the broad scope of ZedStore, and I tend to\nagree that it will be difficult to maintain in core. At the same time,\nI think that Andres and Robert are probably right about the difficulty\nof maintaining it outside of core -- that would be difficult to\nimpossible as a practical matter.\n\nUnfortunately, you're both right. I don't know where that leaves us.\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 15 Apr 2019 11:58:47 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 2:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Mon, Apr 15, 2019 at 11:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> There is a finite limit to how much stuff we can maintain as part of core.\n>\n> > I don't agree with that at all.\n>\n> Really? Let's have a discussion of how thermodynamics applies to\n> software management sometime.\n\nSounds like an interesting discussion, perhaps for PGCon, but what I\nwas actually disagreeing with was the idea that we should add a table\nAM interface and then not accept any new table AMs, which I think\nwould be silly. And if we're going to accept any, a columnar one\nseems like a strong candidate.\n\n> Yeah, and that's something I've regretted more than once; I think SP-GiST\n> is a sterling example of something that isn't nearly useful enough in the\n> real world to justify the amount of maintenance effort we've been forced\n> to expend on it. You might trawl the commit logs to get a sense of the\n> amount of my own personal time --- not that of the original submitters ---\n> that's gone into that one module. Then ask yourself how much that model\n> will scale, and what other more-useful things I could've accomplished\n> with that time.\n\nYep, that's fair.\n\n> We do need to limit what we accept into core PG. I do not buy your\n> argument that users expect everything to be in core. Or more accurately,\n> the people who do think that way won't be using PG anyway --- they'll\n> be using MSSQL because it comes from their OS vendor.\n\nI agree that we need to be judicious in what we accept, but I don't\nagree that we should therefore accept nothing. 
There are lots of\nthings that we could put in core and users would like it that I'm glad\nwe haven't put in core.\n\nI think you might be surprised at the number of people who normally\nwant everything from a single source but are still willing to consider\nPostgreSQL; vendors like my employer help to smooth the road for such\npeople. Still, I don't think there is any major database product\nother than PostgreSQL that ships only a single table storage format\nand just expects that it will be good enough for everyone. Like\n640kB, it just isn't.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 15 Apr 2019 15:04:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> I am also concerned by the broad scope of ZedStore, and I tend to\n> agree that it will be difficult to maintain in core. At the same time,\n> I think that Andres and Robert are probably right about the difficulty\n> of maintaining it outside of core -- that would be difficult to\n> impossible as a practical matter.\n\nPerhaps, but we won't know if we don't try. I think we should try,\nand be willing to add hooks and flexibility to core as needed to make\nit possible. Adding such flexibility would be good for other outside\nprojects that have no chance of (or perhaps no interest in) getting into\ncore, even if we end up deciding that ZedStore or some other specific\nimplementation is so useful that it does belong in core.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Apr 2019 15:19:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-15 14:35:43 -0400, Tom Lane wrote:\n> Yeah, and that's something I've regretted more than once; I think SP-GiST\n> is a sterling example of something that isn't nearly useful enough in the\n> real world to justify the amount of maintenance effort we've been forced\n> to expend on it. You might trawl the commit logs to get a sense of the\n> amount of my own personal time --- not that of the original submitters ---\n> that's gone into that one module. Then ask yourself how much that model\n> will scale, and what other more-useful things I could've accomplished\n> with that time.\n\nI do agree that the [group of] contributor's history of maintaining such\nwork should play a role. And I think that's doubly so with a piece as\ncrucial as a table AM.\n\nBut:\n\n> We do need to limit what we accept into core PG. I do not buy your\n> argument that users expect everything to be in core. Or more accurately,\n> the people who do think that way won't be using PG anyway --- they'll\n> be using MSSQL because it comes from their OS vendor.\n\nI don't think anybody disagrees with that, actually. Including\nRobert.\n\nBut I don't think it follows that we shouldn't provide things that are\neither much more reasonably done in core like a pooler (authentication /\nencryption; infrastructure for managing state like prepared statements,\nGUCs; avoiding issues of explosion of connection counts with pooling in\nother places), are required by a very significant portion of our users\n(imo the case for a columnar store or a row store without the\narchitectural issues of heap), or where it's hard to provide the\nnecessary infrastructure without an in-core user (imo also the case with\ncolumnar, due to the necessary planner / executor improvements for fast\nquery execution).\n\nWe also have at times pretty explicitly resisted making crucial pieces\nof infrastructure usable outside of core. E.g. 
because it's legitimately\nhard (grammar extensibility), or because we had some concerns around\nstability and the exact approach (WAL - the generic stuff is usable for\nanything that wants to even be somewhat efficient, some xlog\nintegration). So there are several types of extensions that one\nrealistically cannot do out of core, by our choice.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 Apr 2019 12:20:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 12:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Perhaps, but we won't know if we don't try. I think we should try,\n> and be willing to add hooks and flexibility to core as needed to make\n> it possible.\n\nWe could approach it without taking a firm position on inclusion in\ncore until the project begins to mature. I have little faith in our\nability to predict which approach will be the least painful at this\nearly stage.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 15 Apr 2019 12:32:06 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> We also have at times pretty explicitly resisted making crucial pieces\n> of infrastructure usable outside of core. E.g. because it's legitimately\n> hard (grammar extensibility), or because we'd some concerns around\n> stability and the exact approach (WAL - the generic stuff is usable for\n> anything that wants to even be somewhat efficient, some xlog\n> integration). So there's several types of extensions that one\n> realistically cannot do out of core, by our choice.\n\nWell, the grammar issue comes from a pretty specific technical problem:\nbison grammars don't cope with run-time extension, and moving off of bison\nwould cost a lot of work, and probably more than just work (i.e., probable\nloss of ability to detect grammar ambiguity). WAL extensibility likewise\nhas some technical issues that are hard to surmount (how do you find the\ncode for replaying an extension WAL record, when you can't read catalogs).\nI think we could fix the latter, it's just that no one has yet troubled\nto expend the effort. Similarly, things like the planner's hard-wired\nhandling of physical-tlist optimization are certainly a problem for\ncolumn stores, but I think the way to solve that is to provide an actual\nextension capability, not merely replace one hard-wired behavior with two.\nAs a counterpoint to my gripe about SP-GiST being a time sink, I do not\nthink I'll regret the time I spent a few months ago on implementing\n\"planner support function\" hooks. I'm all in favor of adding flexibility\nlike that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Apr 2019 15:36:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-15 15:19:41 -0400, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > I am also concerned by the broad scope of ZedStore, and I tend to\n> > agree that it will be difficult to maintain in core. At the same time,\n> > I think that Andres and Robert are probably right about the difficulty\n> > of maintaining it outside of core -- that would be difficult to\n> > impossible as a practical matter.\n> \n> Perhaps, but we won't know if we don't try. I think we should try,\n> and be willing to add hooks and flexibility to core as needed to make\n> it possible. Adding such flexibility would be good for other outside\n> projects that have no chance of (or perhaps no interest in) getting into\n> core, even if we end up deciding that ZedStore or some other specific\n> implementation is so useful that it does belong in core.\n\nI don't think anybody argued against providing that flexibility. I think\nwe should absolutely do so - but that's imo not an argument against\nintegrating something like a hypothetical well-developed columnstore into\ncore. I worked on tableam, which certainly provides a lot of new\nextensibility, because it was the sane architecture to be able to integrate\nzheap. The current set of UNDO patches (developed for zheap), while\nrequiring core integration for xact.c etc, co-initiated improvements that\nbring the checkpointer fsync machinery closer to extensible, and UNDO as\ncurrently developed would be extensible if WAL were, as it's\ntied to rmgrlist.h. And the improvements necessary to make query\nexecution for an in-core columnar AM faster would largely also be\napplicable to out-of-core columnar AMs, and I'm sure we'd try to make\nthe necessary decisions not hardcoded if reasonable.\n\nI think it's actually really hard to make something non-trivial\nextensible without there being a proper in-core user of most of that\ninfrastructure.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 Apr 2019 12:38:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 9:16 AM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> Would like to know more specifics on this Peter. We may be having different context on hybrid row/column design.\n\nI'm confused about how close your idea of a TID is to the traditional\ndefinition from heapam (and even zheap). If it's a purely logical\nidentifier, then why would it have two components like a TID? Is that\njust a short-term convenience or something?\n\n> Yes, the plan to optimize out TID space per datum, either by prefix compression or delta compression or some other trick.\n\nIt would be easier to do this if you knew for sure that the TID\nbehaves almost the same as a bigserial column -- a purely logical\nmonotonically increasing identifier. That's why I'm interested in what\nexactly you mean by TID, the stability of a TID value, etc. If a leaf\npage almost always stores a range covering no more than few hundred\ncontiguous logical values, you can justify aggressively compressing\nthe representation in the B-Tree entries. Compression would still be\nbased on prefix compression, but the representation itself can be\nspecialized.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 15 Apr 2019 12:50:14 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-15 12:50:14 -0700, Peter Geoghegan wrote:\n> On Mon, Apr 15, 2019 at 9:16 AM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> > Would like to know more specifics on this Peter. We may be having different context on hybrid row/column design.\n> \n> I'm confused about how close your idea of a TID is to the traditional\n> definition from heapam (and even zheap). If it's a purely logical\n> identifier, then why would it have two components like a TID? Is that\n> just a short-term convenience or something?\n\nThere's not much of an alternative currently. Indexes require TID-looking\nthings, and as a consequence (and some other comparatively small\nchanges that'd be required) tableam does too. And there are a few places\nthat imbue additional meaning into the higher bits of ip_posid too, so\nnot all of them are valid (it can't currently be zero - or\nItemPointerIsValid fails - and it can't be larger than MaxOffsetNumber -\nthat's used to allocate things in e.g. indexes, tidbmap.c etc).\n\nThat's one of the reasons why I've been trying to get you to get on\nboard with allowing different leaf-level \"item pointer equivalents\"\nwidths inside nbtree...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 Apr 2019 13:02:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 1:02 PM Andres Freund <andres@anarazel.de> wrote:\n> There's not much of an alternative currently. Indexes require tid\n> looking things, and as a consequence (and some other comparatively small\n> changes that'd be required) tableam does too.\n\nI'm trying to establish whether or not that's the only reason. It\nmight be okay to use the same item pointer struct as the\nrepresentation of a integer-like logical identifier. Even if it isn't,\nI'm still interested in just how logical the TIDs are, because it's an\nimportant part of the overall design.\n\n> That's one of the reasons why I've been trying to get you to get on\n> board with allowing different leaf-level \"item pointer equivalents\"\n> widths inside nbtree...\n\nGetting me to agree that that would be nice and getting me to do the\nwork are two very different things. ;-)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 15 Apr 2019 13:07:38 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 11:57:49AM -0700, Ashwin Agrawal wrote:\n> On Mon, Apr 15, 2019 at 11:18 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n>\n> Maybe. I'm not going to pretend I fully understand the internals. Does\n> that mean the container contains ZSUncompressedBtreeItem as elements? Or\n> just the plain Datum values?\n>\n> First, your reading of code and all the comments/questions so far have\n> been highly encouraging. Thanks a lot for the same.\n\n;-)\n\n> Container contains ZSUncompressedBtreeItem as elements. As for Item will\n> have to store meta-data like size, undo and such info. We don't wish to\n> restrict compressing only items from same insertion sessions only. Hence,\n> yes doens't just store Datum values. Wish to consider it more tuple level\n> operations and have meta-data for it and able to work with tuple level\n> granularity than block level.\n\nOK, thanks for the clarification, that somewhat explains my confusion.\nSo if I understand it correctly, ZSCompressedBtreeItem is essentially a\nsequence of ZSUncompressedBtreeItem(s) stored one after another, along\nwith some additional top-level metadata.\n\n> Definitely many more tricks can be and need to be applied to optimize\n> storage format, like for fixed width columns no need to store the size in\n> every item. Keep it simple is theme have been trying to maintain.\n> Compression ideally should compress duplicate data pretty easily and\n> efficiently as well, but we will try to optimize as much we can without\n> the same.\n\nI think there's plenty of room for improvement. The main problem I see\nis that it mixes different types of data, which is bad for compression\nand vectorized execution. 
I think we'll end up with a very different\nrepresentation of the container, essentially decomposing the items into \narrays of values of the same type - array of TIDs, array of undo \npointers, buffer of serialized values, etc.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 15 Apr 2019 22:17:09 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 12:50 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Mon, Apr 15, 2019 at 9:16 AM Ashwin Agrawal <aagrawal@pivotal.io>\n> wrote:\n> > Would like to know more specifics on this Peter. We may be having\n> different context on hybrid row/column design.\n>\n> I'm confused about how close your idea of a TID is to the traditional\n> definition from heapam (and even zheap). If it's a purely logical\n> identifier, then why would it have two components like a TID? Is that\n> just a short-term convenience or something?\n>\n\nTID is a purely logical identifier. Hence, stated in the initial email that for\nZedstore TID, the block number and offset split carries no meaning at all. It's\npurely a 48-bit integer entity assigned to the datum of the first column during\ninsertion, based on where in the BTree it gets inserted. The rest of the column\ndatums are inserted using this assigned TID value. Only due to restrictions from\nthe rest of the system, discussed by Heikki and Andres on the table AM thread,\nare there limitations on the value it can carry currently; otherwise, from the\nzedstore design perspective, it is just an integer number.\n\n> > Yes, the plan to optimize out TID space per datum, either by prefix\n> compression or delta compression or some other trick.\n>\n> It would be easier to do this if you knew for sure that the TID\n> behaves almost the same as a bigserial column -- a purely logical\n> monotonically increasing identifier. That's why I'm interested in what\n> exactly you mean by TID, the stability of a TID value, etc. If a leaf\n> page almost always stores a range covering no more than few hundred\n> contiguous logical values, you can justify aggressively compressing\n> the representation in the B-Tree entries. Compression would still be\n> based on prefix compression, but the representation itself can be\n> specialized.\n>\n\nYes, it's for sure a logically increasing number. With only inserts the number\nis monotonically increasing. 
With deletes and updates, insert could use the\npreviously free'd TID values. Since TID is logical, datums can be easily\nmoved around to split or merge pages as required.",
"msg_date": "Mon, 15 Apr 2019 22:45:51 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 10:45:51PM -0700, Ashwin Agrawal wrote:\n>On Mon, Apr 15, 2019 at 12:50 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n>> On Mon, Apr 15, 2019 at 9:16 AM Ashwin Agrawal <aagrawal@pivotal.io>\n>> wrote:\n>> > Would like to know more specifics on this Peter. We may be having\n>> different context on hybrid row/column design.\n>>\n>> I'm confused about how close your idea of a TID is to the traditional\n>> definition from heapam (and even zheap). If it's a purely logical\n>> identifier, then why would it have two components like a TID? Is that\n>> just a short-term convenience or something?\n>>\n>\n>TID is purely logical identifier. Hence, stated in initial email that for\n>Zedstore TID, block number and offset split carries no meaning at all. It's\n>purely 48 bit integer entity assigned to datum of first column during\n>insertion, based on where in BTree it gets inserted. Rest of the column\n>datums are inserted using this assigned TID value. Just due to rest to\n>system restrictions discussed by Heikki and Andres on table am thread poses\n>limitations of value it can carry currently otherwise from zedstore design\n>perspective it just integer number.\n>\n\nI'm not sure it's that clear cut, actually. Sure, it's not the usual\n(block,item) pair so it's not possible to jump to the exact location, so\nit's not the raw physical identifier as regular TID. But the data are\norganized in a btree, with the TID as a key, so it does actually provide\nsome information about the location.\n\nI've asked about BRIN indexes elsewhere in this thread, which I think is\nrelated to this question, because that index type relies on TID providing\nsufficient information about location. 
And I think BRIN indexes are going\nto be rather important for colstores (and formats like ORC have something\nvery similar built-in).\n\nBut maybe all we'll have to do is define the ranges differently - instead\nof \"number of pages\" we may define them as \"number of rows\" and it might\nbe working.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 16 Apr 2019 18:15:24 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Tue, Apr 16, 2019 at 9:15 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n>\n> I'm not sure it's that clear cut, actually. Sure, it's not the usual\n> (block,item) pair so it's not possible to jump to the exact location, so\n> it's not the raw physical identifier as regular TID. But the data are\n> organized in a btree, with the TID as a key, so it does actually provide\n> some information about the location.\n>\n\nFrom a representation perspective it's a logical identifier. But yes,\nsince it is used as the key to lay out datums, there exists a pretty\ngood correlation between TIDs and physical location. Can consider it\nas clustered based on TID.\n\n> I've asked about BRIN indexes elsewhere in this thread, which I think is\n> related to this question, because that index type relies on TID providing\n> sufficient information about location. And I think BRIN indexes are going\n> to be rather important for colstores (and formats like ORC have something\n> very similar built-in).\n>\n> But maybe all we'll have to do is define the ranges differently - instead\n> of \"number of pages\" we may define them as \"number of rows\" and it might\n> be working.\n>\n\nBRIN indexes work for zedstore right now. A block range maps to\njust a range of TIDs in zedstore, as pointed out above. When one converts\na zstid to an ItemPointer, can get the \"block number\" from the\nItemPointer, like from a normal heap TID. It doesn't mean the direct\nphysical location of the row in zedstore, but that's fine.\n\nIt might be sub-optimal in some cases. For example if one zedstore\npage contains TIDs 1-1000, and another 1000-2000, and the entry in the\nBRIN index covers TIDs 500-1500, have to access both zedstore\npages. Would be better if the cutoff points in the BRIN index would\nmatch the physical pages of the zedstore. But it still works, and is\nprobably fine in practice.\n\nPlan is to add an integrated BRIN index in zedstore, meaning keep min-max\nvalues for appropriate columns within the page. This will not help to\neliminate the IO as an external BRIN index does, but helps to skip\nuncompression and visibility checks etc... for blocks not matching the\nconditions.\n\nJust to showcase BRIN works for zedstore, played with the hands-on example\nmentioned in [1].\n\nWith btree index on zedstore:\n\n                                                                        QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate  (cost=4351.50..4351.51 rows=1 width=32) (actual time=1267.140..1267.140 rows=1 loops=1)\n   ->  Index Scan using idx_ztemperature_log_log_timestamp on ztemperature_log  (cost=0.56..4122.28 rows=91686 width=4) (actual time=0.117..1244.112 rows=86400 loops=1)\n         Index Cond: ((log_timestamp >= '2016-04-04 00:00:00'::timestamp without time zone) AND (log_timestamp < '2016-04-05 00:00:00'::timestamp without time zone))\n Planning Time: 0.240 ms\n Execution Time: 1269.016 ms\n(5 rows)\n\nWith brin index on zedstore.\nNote: Bitmap index for zedstore currently scans all the columns.\nScanning only the columns required for the query is yet to be implemented.\n\n                                                                               QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Finalize Aggregate  (cost=217538.85..217538.86 rows=1 width=32) (actual time=54.167..54.167 rows=1 loops=1)\n   ->  Gather  (cost=217538.63..217538.84 rows=2 width=32) (actual time=53.967..55.184 rows=3 loops=1)\n         Workers Planned: 2\n         Workers Launched: 2\n         ->  Partial Aggregate  (cost=216538.63..216538.64 rows=1 width=32) (actual time=42.956..42.957 rows=1 loops=3)\n               ->  Parallel Bitmap Heap Scan on 
ztemperature_log  (cost=59.19..216446.98 rows=36660 width=4) (actual time=3.571..35.904 rows=28800 loops=3)\n                     Recheck Cond: ((log_timestamp >= '2016-04-04 00:00:00'::timestamp without time zone) AND (log_timestamp < '2016-04-05 00:00:00'::timestamp without time zone))\n                     Rows Removed by Index Recheck: 3968\n                     Heap Blocks: lossy=381\n                     ->  Bitmap Index Scan on idx_ztemperature_log_log_timestamp  (cost=0.00..37.19 rows=98270 width=0) (actual time=1.201..1.201 rows=7680 loops=1)\n                           Index Cond: ((log_timestamp >= '2016-04-04 00:00:00'::timestamp without time zone) AND (log_timestamp < '2016-04-05 00:00:00'::timestamp without time zone))\n Planning Time: 0.240 ms\n Execution Time: 55.341 ms\n(13 rows)\n\n schema_name |             index_name             | index_ratio | index_size | table_size\n-------------+------------------------------------+-------------+------------+------------\n public      | idx_ztemperature_log_log_timestamp |           0 | 80 kB      | 1235 MB\n(1 row)\n\n[1]\nhttps://www.postgresql.fastware.com/blog/brin-indexes-what-are-they-and-how-do-you-use-them",
"msg_date": "Wed, 24 Apr 2019 12:19:20 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On 15/04/2019 22:32, Peter Geoghegan wrote:\n> On Mon, Apr 15, 2019 at 12:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Perhaps, but we won't know if we don't try. I think we should try,\n>> and be willing to add hooks and flexibility to core as needed to make\n>> it possible.\n> \n> We could approach it without taking a firm position on inclusion in\n> core until the project begins to mature. I have little faith in our\n> ability to predict which approach will be the least painful at this\n> early stage.\n\nWhen we started hacking on this, we went in with the assumption that\nthis would have to be in core, because of WAL-logging, and also because a\ncolumn-store will probably need some changes to the planner and executor\nto make it shine. And also because a lot of people would like to have a\ncolumn store in PostgreSQL (although a \"column store\" could mean many\ndifferent things with different tradeoffs). But if we just have all the\nnecessary hooks in core, sure, this could be an extension, too.\n\nBut as you said, we don't need to decide that yet. Let's wait and see,\nas this matures.\n\n- Heikki\n\n\n",
"msg_date": "Thu, 25 Apr 2019 09:44:45 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "We (Heikki, me and Melanie) are continuing to build Zedstore. Wish to\nshare the recent additions and modifications. Attaching a patch\nwith the latest code. Link to github branch [1] to follow\nalong. The approach we have been leaning towards is to build required\nfunctionality, get the tests passing, and then continue to iterate to\noptimize the same. It's still work-in-progress.\n\nSharing the details now, as have reached our next milestone for\nZedstore. All table AM APIs are implemented for Zedstore (except\ncompute_xid_horizon_for_tuples, which seems to need a test first).\n\nCurrent State:\n\n- A new type of item added to Zedstore \"Array item\", to boost\n compression and performance. Based on Konstantin's performance\n experiments [2] and inputs from Tomas Vondra [3], this is\n added. Array item holds multiple datums, with consecutive TIDs and\n the same visibility information. An array item saves space compared\n to multiple single items, by leaving out repetitive UNDO and TID\n fields. An array item cannot mix NULLs and non-NULLs. So, those\n experiments should result in improved performance now. Inserting\n data via COPY creates array items currently. Code for insert has not\n been modified from last time. Making singleton inserts or insert\n into select performant is still on the todo list.\n\n- Now we have a separate and dedicated meta-column btree alongside\n rest of the data column btrees. This special or first btree for\n meta-column is used to assign TIDs for tuples, track the UNDO\n location which provides visibility information. Also, this special\n btree, which always exists, helps to support zero-column tables\n (which can be a result of ADD COLUMN DROP COLUMN actions as\n well). Plus, having meta-data stored separately from data, helps to\n get better compression ratios. 
And also helps to further simplify\n the overall design/implementation as for deletes just need to edit\n the meta-column and avoid touching the actual data btrees. Index\n scans can just perform visibility checks based on this meta-column\n and fetch required datums only for visible tuples. For tuple locks\n also just need to access this meta-column only. Previously, every\n column btree used to carry the same undo pointer. Thus visibility\n check could be potentially performed, with the past layout, using\n any column. But considering the overall simplification the new layout\n provides, it's fine to give up on that aspect. Having a dedicated\n meta-column highly simplified handling for add columns with default\n and null values, as this column deterministically provides all the\n TIDs present in the table, which can't be said for any other data\n columns due to default or null values during add column.\n\n- Free Page Map implemented. The Free Page Map keeps track of unused\n pages in the relation. The FPM is also a b-tree, indexed by physical\n block number. To be more compact, it stores \"extents\", i.e. block\n ranges, rather than just blocks, when possible. An interesting paper [4] on\n how modern filesystems manage space acted as a good source for ideas.\n\n- Tuple locks implemented\n\n- Serializable isolation handled\n\n- With \"default_table_access_method=zedstore\"\n - 31 out of 194 failing regress tests\n - 10 out of 86 failing isolation tests\nMany of the current failing tests are due to plan differences, like\nIndex scans selected for zedstore over IndexOnly scans, as zedstore\ndoesn't yet have a visibility map. I am yet to give a thought to\nindex-only scans. Or plan diffs due to table size differences between\nheap and zedstore.\n\nNext few milestones we wish to hit for Zedstore:\n- Make check regress green\n- Make check isolation green\n- Zedstore crash safe (means also replication safe). 
Implement WAL\n logs\n- Performance profiling and optimizations for Insert, Selects, Index\n Scans, etc...\n- Once UNDO framework lands in Upstream, Zedstore leverages it instead\n of its own version of UNDO\n\nOpen questions / discussion items:\n\n- how best to get \"column projection list\" from planner? (currently,\n we walk plan and find the columns required for the query in\n the executor, refer GetNeededColumnsForNode())\n\n- how to pass the \"column projection list\" to table AM? (as stated in\n initial email, currently we have modified table am API to pass the\n projection to AM)\n\n- TID treated as (block, offset) in current indexing code\n\n- Physical tlist optimization? (currently, we disabled it for\n zedstore)\n\nTeam:\nMelanie joined Heikki and me to write code for zedstore. Majority of\nthe code continues to be contributed by Heikki. We are continuing to\nhave fun building column store implementation and iterate\naggressively.\n\nReferences:\n1] https://github.com/greenplum-db/postgres/tree/zedstore\n2]\nhttps://www.postgresql.org/message-id/3978b57e-fe25-ca6b-f56c-48084417e115%40postgrespro.ru\n3]\nhttps://www.postgresql.org/message-id/20190415173254.nlnk2xqhgt7c5pta%40development\n4] https://www.kernel.org/doc/ols/2010/ols2010-pages-121-132.pdf",
"msg_date": "Wed, 22 May 2019 17:07:45 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Hi Ashwin,\n\n- how to pass the \"column projection list\" to table AM? (as stated in\n initial email, currently we have modified table am API to pass the\n projection to AM)\n\nWe were working on a similar columnar storage using pluggable APIs; one\nidea that we thought of was to modify the scan slot based on the targetlist\nto have only the relevant columns in the scan descriptor. This way the\ntable AMs are passed a slot with only relevant columns in the descriptor.\nToday we do something similar to the result slot using\nExecInitResultTypeTL(), now do it to the scan tuple slot as well. So\nsomewhere after creating the scan slot using ExecInitScanTupleSlot(), call\na table am handler API to modify the scan tuple slot based on the\ntargetlist, a probable name for the new table am handler would be:\nexec_init_scan_slot_tl(PlanState *planstate, TupleTableSlot *slot).\n\n So this way the scan am handlers like getnextslot is passed a slot only\nhaving the relevant columns in the scan descriptor. One issue though is\nthat the beginscan is not passed the slot, so if some memory allocation\nneeds to be done based on the column list, it can't be done in beginscan.\nLet me know what you think.\n\n\nregards,\nAjin Cherian\nFujitsu Australia\n\nOn Thu, May 23, 2019 at 3:56 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n\n>\n> We (Heikki, me and Melanie) are continuing to build Zedstore. Wish to\n> share the recent additions and modifications. Attaching a patch\n> with the latest code. Link to github branch [1] to follow\n> along. The approach we have been leaning towards is to build required\n> functionality, get passing the test and then continue to iterate to\n> optimize the same. It's still work-in-progress.\n>\n> Sharing the details now, as have reached our next milestone for\n> Zedstore. 
All table AM API's are implemented for Zedstore (except\n> compute_xid_horizon_for_tuples, seems need test for it first).\n>\n> Current State:\n>\n> - A new type of item added to Zedstore \"Array item\", to boost\n> compression and performance. Based on Konstantin's performance\n> experiments [2] and inputs from Tomas Vodra [3], this is\n> added. Array item holds multiple datums, with consecutive TIDs and\n> the same visibility information. An array item saves space compared\n> to multiple single items, by leaving out repetitive UNDO and TID\n> fields. An array item cannot mix NULLs and non-NULLs. So, those\n> experiments should result in improved performance now. Inserting\n> data via COPY creates array items currently. Code for insert has not\n> been modified from last time. Making singleton inserts or insert\n> into select, performant is still on the todo list.\n>\n> - Now we have a separate and dedicated meta-column btree alongside\n> rest of the data column btrees. This special or first btree for\n> meta-column is used to assign TIDs for tuples, track the UNDO\n> location which provides visibility information. Also, this special\n> btree, which always exists, helps to support zero-column tables\n> (which can be a result of ADD COLUMN DROP COLUMN actions as\n> well). Plus, having meta-data stored separately from data, helps to\n> get better compression ratios. And also helps to further simplify\n> the overall design/implementation as for deletes just need to edit\n> the meta-column and avoid touching the actual data btrees. Index\n> scans can just perform visibility checks based on this meta-column\n> and fetch required datums only for visible tuples. For tuple locks\n> also just need to access this meta-column only. Previously, every\n> column btree used to carry the same undo pointer. Thus visibility\n> check could be potentially performed, with the past layout, using\n> any column. 
But considering overall simplification new layout\n> provides it's fine to give up on that aspect. Having dedicated\n> meta-column highly simplified handling for add columns with default\n> and null values, as this column deterministically provides all the\n> TIDs present in the table, which can't be said for any other data\n> columns due to default or null values during add column.\n>\n> - Free Page Map implemented. The Free Page Map keeps track of unused\n> pages in the relation. The FPM is also a b-tree, indexed by physical\n> block number. To be more compact, it stores \"extents\", i.e. block\n> ranges, rather than just blocks, when possible. An interesting paper [4]\n> on\n> how modern filesystems manage space acted as a good source for ideas.\n>\n> - Tuple locks implemented\n>\n> - Serializable isolation handled\n>\n> - With \"default_table_access_method=zedstore\"\n> - 31 out of 194 failing regress tests\n> - 10 out of 86 failing isolation tests\n> Many of the current failing tests are due to plan differences, like\n> Index scans selected for zedstore over IndexOnly scans, as zedstore\n> doesn't yet have visibility map. I am yet to give a thought on\n> index-only scans. Or plan diffs due to table size differences between\n> heap and zedstore.\n>\n> Next few milestones we wish to hit for Zedstore:\n> - Make check regress green\n> - Make check isolation green\n> - Zedstore crash safe (means also replication safe). Implement WAL\n> logs\n> - Performance profiling and optimizations for Insert, Selects, Index\n> Scans, etc...\n> - Once UNDO framework lands in Upstream, Zedstore leverages it instead\n> of its own version of UNDO\n>\n> Open questions / discussion items:\n>\n> - how best to get \"column projection list\" from planner? (currently,\n> we walk plan and find the columns required for the query in\n> the executor, refer GetNeededColumnsForNode())\n>\n> - how to pass the \"column projection list\" to table AM? 
(as stated in\n> initial email, currently we have modified table am API to pass the\n> projection to AM)\n>\n> - TID treated as (block, offset) in current indexing code\n>\n> - Physical tlist optimization? (currently, we disabled it for\n> zedstore)\n>\n> Team:\n> Melanie joined Heikki and me to write code for zedstore. Majority of\n> the code continues to be contributed by Heikki. We are continuing to\n> have fun building column store implementation and iterate\n> aggressively.\n>\n> References:\n> 1] https://github.com/greenplum-db/postgres/tree/zedstore\n> 2]\n> https://www.postgresql.org/message-id/3978b57e-fe25-ca6b-f56c-48084417e115%40postgrespro.ru\n> 3]\n> https://www.postgresql.org/message-id/20190415173254.nlnk2xqhgt7c5pta%40development\n> 4] https://www.kernel.org/doc/ols/2010/ols2010-pages-121-132.pdf\n>\n>\n",
"msg_date": "Fri, 24 May 2019 12:30:19 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Thu, May 23, 2019 at 7:30 PM Ajin Cherian <itsajin@gmail.com> wrote:\n\n> Hi Ashwin,\n>\n> - how to pass the \"column projection list\" to table AM? (as stated in\n> initial email, currently we have modified table am API to pass the\n> projection to AM)\n>\n> We were working on a similar columnar storage using pluggable APIs; one\n> idea that we thought of was to modify the scan slot based on the targetlist\n> to have only the relevant columns in the scan descriptor. This way the\n> table AMs are passed a slot with only relevant columns in the descriptor.\n> Today we do something similar to the result slot using\n> ExecInitResultTypeTL(), now do it to the scan tuple slot as well. So\n> somewhere after creating the scan slot using ExecInitScanTupleSlot(), call\n> a table am handler API to modify the scan tuple slot based on the\n> targetlist, a probable name for the new table am handler would be:\n> exec_init_scan_slot_tl(PlanState *planstate, TupleTableSlot *slot).\n>\n\nInteresting.\n\nThough this reads hacky and not clean approach to me. Reasons:\n\n- The memory allocation and initialization for slot descriptor was\n done in ExecInitScanTupleSlot(). exec_init_scan_slot_tl() would\n redo lot of work. ExecInitScanTupleSlot() ideally just points to\n tupleDesc from Relation object. But for exec_init_scan_slot_tl()\n will free the existing tupleDesc and reallocate fresh. Plus, can't\n point to Relation tuple desc but essentially need to craft one out.\n\n- As discussed in thread [1], several places want to use different\n slots for the same scan, so that means will have to modify the\n descriptor every time on such occasions even if it remains the same\n throughout the scan. Some extra code can be added to keep around old\n tupledescriptor and then reuse for next slot, but that seems again\n added code complexity.\n\n- AM needs to know the attnum in terms of relation's attribute number\n to scan. How would tupledesc convey that? 
Like TupleDescData's attrs\n currently carries info for attnum at attrs[attnum - 1]. If TupleDesc\n needs to convey random attributes to scan, seems this relationship\n has to be broken. attrs[offset] will provide info for some attribute\n in relation, means offset != (attrs->attnum + 1). Which I am not\n sure how many places in code rely on that logic to get information.\n\n- The tupledesc provides lot of information not just attribute numbers\n to scan. Like it provides information in TupleConstr about default\n value for column. If AM layer has to modify existing slot's\n tupledesc, it would have to copy over such information as well. This\n information today is fetched using attnum as offset value in\n constr->missing array. If this information will be retained how will\n the constr array constructed? Will the array contain only values for\n columns to scan or will contain constr array as is from Relation's\n tuple descriptor as it does today. Seems will be overhead to\n construct the constr array fresh and if not constructing fresh seems\n will have mismatch between natt and array elements.\n\nSeems with the proposed exec_init_scan_slot_tl() API, will have to\ncall it after beginscan and before calling getnextslot, to provide\ncolumn projection list to AM. Special dedicated API we have for\nZedstore to pass down column projection list, needs same calling\nconvention which is the reason I don't like it and trying to find\nalternative. But at least the api we added for Zedstore seems much\nsimple, generic and flexible, in comparison, as lets AM decide what it\nwishes to do with it. AM can fiddle with slot's TupleDescriptor if\nwishes or can handle the column projection some other way.\n\n So this way the scan am handlers like getnextslot is passed a slot only\n> having the relevant columns in the scan descriptor. 
One issue though is\n> that the beginscan is not passed the slot, so if some memory allocation\n> needs to be done based on the column list, it can't be done in beginscan.\n> Let me know what you think.\n>\n\nYes, ideally would like to see if possible having this information\navailable on beginscan. But if can't be then seems fine to delay such\nallocations on first calls to getnextslot and friends, that's how we\ndo today for Zedstore.\n\n1]\nhttps://www.postgresql.org/message-id/20190508214627.hw7wuqwawunhynj6%40alap3.anarazel.de\n",
"msg_date": "Fri, 24 May 2019 15:37:08 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "\nOn 23/05/19 12:07 PM, Ashwin Agrawal wrote:\n>\n> We (Heikki, me and Melanie) are continuing to build Zedstore. Wish to\n> share the recent additions and modifications. Attaching a patch\n> with the latest code. Link to github branch [1] to follow\n> along. The approach we have been leaning towards is to build required\n> functionality, get passing the test and then continue to iterate to\n> optimize the same. It's still work-in-progress.\n>\n> Sharing the details now, as have reached our next milestone for\n> Zedstore. All table AM API's are implemented for Zedstore (except\n> compute_xid_horizon_for_tuples, seems need test for it first).\n>\n> Current State:\n>\n> - A new type of item added to Zedstore \"Array item\", to boost\n> compression and performance. Based on Konstantin's performance\n> experiments [2] and inputs from Tomas Vodra [3], this is\n> added. Array item holds multiple datums, with consecutive TIDs and\n> the same visibility information. An array item saves space compared\n> to multiple single items, by leaving out repetitive UNDO and TID\n> fields. An array item cannot mix NULLs and non-NULLs. So, those\n> experiments should result in improved performance now. Inserting\n> data via COPY creates array items currently. Code for insert has not\n> been modified from last time. Making singleton inserts or insert\n> into select, performant is still on the todo list.\n>\n> - Now we have a separate and dedicated meta-column btree alongside\n> rest of the data column btrees. This special or first btree for\n> meta-column is used to assign TIDs for tuples, track the UNDO\n> location which provides visibility information. Also, this special\n> btree, which always exists, helps to support zero-column tables\n> (which can be a result of ADD COLUMN DROP COLUMN actions as\n> well). Plus, having meta-data stored separately from data, helps to\n> get better compression ratios. 
And also helps to further simplify\n> the overall design/implementation as for deletes just need to edit\n> the meta-column and avoid touching the actual data btrees. Index\n> scans can just perform visibility checks based on this meta-column\n> and fetch required datums only for visible tuples. For tuple locks\n> also just need to access this meta-column only. Previously, every\n> column btree used to carry the same undo pointer. Thus visibility\n> check could be potentially performed, with the past layout, using\n> any column. But considering overall simplification new layout\n> provides it's fine to give up on that aspect. Having dedicated\n> meta-column highly simplified handling for add columns with default\n> and null values, as this column deterministically provides all the\n> TIDs present in the table, which can't be said for any other data\n> columns due to default or null values during add column.\n>\n> - Free Page Map implemented. The Free Page Map keeps track of unused\n> pages in the relation. The FPM is also a b-tree, indexed by physical\n> block number. To be more compact, it stores \"extents\", i.e. block\n> ranges, rather than just blocks, when possible. An interesting paper \n> [4] on\n> how modern filesystems manage space acted as a good source for ideas.\n>\n> - Tuple locks implemented\n>\n> - Serializable isolation handled\n>\n> - With \"default_table_access_method=zedstore\"\n> - 31 out of 194 failing regress tests\n> - 10 out of 86 failing isolation tests\n> Many of the current failing tests are due to plan differences, like\n> Index scans selected for zedstore over IndexOnly scans, as zedstore\n> doesn't yet have visibility map. I am yet to give a thought on\n> index-only scans. Or plan diffs due to table size differences between\n> heap and zedstore.\n>\n> Next few milestones we wish to hit for Zedstore:\n> - Make check regress green\n> - Make check isolation green\n> - Zedstore crash safe (means also replication safe). 
Implement WAL\n> logs\n> - Performance profiling and optimizations for Insert, Selects, Index\n> Scans, etc...\n> - Once UNDO framework lands in Upstream, Zedstore leverages it instead\n> of its own version of UNDO\n>\n> Open questions / discussion items:\n>\n> - how best to get \"column projection list\" from planner? (currently,\n> we walk plan and find the columns required for the query in\n> the executor, refer GetNeededColumnsForNode())\n>\n> - how to pass the \"column projection list\" to table AM? (as stated in\n> initial email, currently we have modified table am API to pass the\n> projection to AM)\n>\n> - TID treated as (block, offset) in current indexing code\n>\n> - Physical tlist optimization? (currently, we disabled it for\n> zedstore)\n>\n> Team:\n> Melanie joined Heikki and me to write code for zedstore. Majority of\n> the code continues to be contributed by Heikki. We are continuing to\n> have fun building column store implementation and iterate\n> aggressively.\n>\n> References:\n> 1] https://github.com/greenplum-db/postgres/tree/zedstore\n> 2] \n> https://www.postgresql.org/message-id/3978b57e-fe25-ca6b-f56c-48084417e115%40postgrespro.ru\n> 3] \n> https://www.postgresql.org/message-id/20190415173254.nlnk2xqhgt7c5pta%40development\n> 4] https://www.kernel.org/doc/ols/2010/ols2010-pages-121-132.pdf\n>\n\nFWIW - building this against latest 12 beta1:\n\nLoading and examining the standard pgbench schema (with the old names, \nsorry) in v10 (standard heap_ and v12 (zedstore)\n\nv10:\n\nbench=# \\i load.sql\nCOPY 100\nTime: 16.335 ms\nCOPY 1000\nTime: 16.748 ms\nCOPY 10000000\nTime: 50276.230 ms (00:50.276)\nbench=# \\dt+\n List of relations\n Schema | Name | Type | Owner | Size | Description\n--------+----------+-------+----------+------------+-------------\n public | accounts | table | postgres | 1281 MB |\n public | branches | table | postgres | 8192 bytes |\n public | history | table | postgres | 0 bytes |\n public | tellers | table | postgres | 72 
kB |\n\nv12+zedstore:\n\nbench=# \\i load.sql\nCOPY 100\nTime: 0.656 ms\nCOPY 1000\nTime: 3.573 ms\nCOPY 10000000\nTime: 26244.832 ms (00:26.245)\nbench=# \\dt+\n List of relations\n Schema | Name | Type | Owner | Size | Description\n--------+----------+-------+----------+---------+-------------\n public | accounts | table | postgres | 264 MB |\n public | branches | table | postgres | 56 kB |\n public | history | table | postgres | 0 bytes |\n public | tellers | table | postgres | 64 kB |\n\nSo a good improvement in load times and on disk footprint! Also note \nthat I did not build with lz4 so looks like you guys have fixed the \nquirks with compression making things bigger.\n\nregards\n\nMark\n\n\n\n",
"msg_date": "Sat, 25 May 2019 16:48:24 +1200",
"msg_from": "Mark Kirkwood <mark.kirkwood@catalyst.net.nz>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "\nIt's really cool and very good progress. \n\nI'm interested in whether SIMD/JIT will be supported.\n\nbest wishes\n\nTY\n\n\nOn 2019/5/23 08:07, Ashwin Agrawal wrote:\n> We (Heikki, me and Melanie) are continuing to build Zedstore. Wish to\n> share the recent additions and modifications. Attaching a patch\n> with the latest code. Link to github branch [1] to follow\n> along. The approach we have been leaning towards is to build required\n> functionality, get passing the test and then continue to iterate to\n> optimize the same. It's still work-in-progress.\n>\n> Sharing the details now, as have reached our next milestone for\n> Zedstore. All table AM API's are implemented for Zedstore (except\n> compute_xid_horizon_for_tuples, seems need test for it first).\n>\n> Current State:\n>\n> - A new type of item added to Zedstore \"Array item\", to boost\n> compression and performance. Based on Konstantin's performance\n> experiments [2] and inputs from Tomas Vondra [3], this is\n> added. Array item holds multiple datums, with consecutive TIDs and\n> the same visibility information. An array item saves space compared\n> to multiple single items, by leaving out repetitive UNDO and TID\n> fields. An array item cannot mix NULLs and non-NULLs. So, those\n> experiments should result in improved performance now. Inserting\n> data via COPY creates array items currently. Code for insert has not\n> been modified from last time. Making singleton inserts or insert\n> into select, performant is still on the todo list.\n>\n> - Now we have a separate and dedicated meta-column btree alongside\n> rest of the data column btrees. This special or first btree for\n> meta-column is used to assign TIDs for tuples, track the UNDO\n> location which provides visibility information. Also, this special\n> btree, which always exists, helps to support zero-column tables\n> (which can be a result of ADD COLUMN DROP COLUMN actions as\n> well). 
Plus, having meta-data stored separately from data, helps to\n> get better compression ratios. And also helps to further simplify\n> the overall design/implementation as for deletes just need to edit\n> the meta-column and avoid touching the actual data btrees. Index\n> scans can just perform visibility checks based on this meta-column\n> and fetch required datums only for visible tuples. For tuple locks\n> also just need to access this meta-column only. Previously, every\n> column btree used to carry the same undo pointer. Thus visibility\n> check could be potentially performed, with the past layout, using\n> any column. But considering overall simplification new layout\n> provides it's fine to give up on that aspect. Having dedicated\n> meta-column highly simplified handling for add columns with default\n> and null values, as this column deterministically provides all the\n> TIDs present in the table, which can't be said for any other data\n> columns due to default or null values during add column.\n>\n> - Free Page Map implemented. The Free Page Map keeps track of unused\n> pages in the relation. The FPM is also a b-tree, indexed by physical\n> block number. To be more compact, it stores \"extents\", i.e. block\n> ranges, rather than just blocks, when possible. An interesting paper [4]\n> on\n> how modern filesystems manage space acted as a good source for ideas.\n>\n> - Tuple locks implemented\n>\n> - Serializable isolation handled\n>\n> - With \"default_table_access_method=zedstore\"\n> - 31 out of 194 failing regress tests\n> - 10 out of 86 failing isolation tests\n> Many of the current failing tests are due to plan differences, like\n> Index scans selected for zedstore over IndexOnly scans, as zedstore\n> doesn't yet have visibility map. I am yet to give a thought on\n> index-only scans. 
Or plan diffs due to table size differences between\n> heap and zedstore.\n>\n> Next few milestones we wish to hit for Zedstore:\n> - Make check regress green\n> - Make check isolation green\n> - Zedstore crash safe (means also replication safe). Implement WAL\n> logs\n> - Performance profiling and optimizations for Insert, Selects, Index\n> Scans, etc...\n> - Once UNDO framework lands in Upstream, Zedstore leverages it instead\n> of its own version of UNDO\n>\n> Open questions / discussion items:\n>\n> - how best to get \"column projection list\" from planner? (currently,\n> we walk plan and find the columns required for the query in\n> the executor, refer GetNeededColumnsForNode())\n>\n> - how to pass the \"column projection list\" to table AM? (as stated in\n> initial email, currently we have modified table am API to pass the\n> projection to AM)\n>\n> - TID treated as (block, offset) in current indexing code\n>\n> - Physical tlist optimization? (currently, we disabled it for\n> zedstore)\n>\n> Team:\n> Melanie joined Heikki and me to write code for zedstore. Majority of\n> the code continues to be contributed by Heikki. We are continuing to\n> have fun building column store implementation and iterate\n> aggressively.\n>\n> References:\n> 1] https://github.com/greenplum-db/postgres/tree/zedstore\n> 2]\n> https://www.postgresql.org/message-id/3978b57e-fe25-ca6b-f56c-48084417e115%40postgrespro.ru\n> 3]\n> https://www.postgresql.org/message-id/20190415173254.nlnk2xqhgt7c5pta%40development\n> 4] https://www.kernel.org/doc/ols/2010/ols2010-pages-121-132.pdf\n>\n\n\n\n\n",
"msg_date": "Thu, 30 May 2019 23:07:30 +0800",
"msg_from": "DEV_OPS <devops@ww-it.cn>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "From: Ashwin Agrawal [mailto:aagrawal@pivotal.io]\r\n> The objective is to gather feedback on design and approach to the same.\r\n> The implementation has core basic pieces working but not close to complete.\r\n\r\nThank you for proposing a very interesting topic. Are you thinking of including this in PostgreSQL 13 if possible?\r\n\r\n\r\n> * All Indexes supported\r\n...\r\n> work. Btree indexes can be created. Btree and bitmap index scans work.\r\n\r\nDoes Zedstore allow to create indexes of existing types on the table (btree, GIN, BRIN, etc.) and perform index scans (point query, range query, etc.)?\r\n\r\n\r\n> * Hybrid row-column store, where some columns are stored together, and\r\n> others separately. Provide flexibility of granularity on how to\r\n> divide the columns. Columns accessed together can be stored\r\n> together.\r\n...\r\n> This way of laying out the data also easily allows for hybrid row-column\r\n> store, where some columns are stored together, and others have a dedicated\r\n> B-tree. Need to have user facing syntax to allow specifying how to group\r\n> the columns.\r\n...\r\n> Zedstore Table can be\r\n> created using command:\r\n> \r\n> CREATE TABLE <name> (column listing) USING zedstore;\r\n\r\nAre you aiming to enable Zedstore to be used for HTAP, i.e. the same table can be accessed simultaneously for both OLTP and analytics with the minimal performance impact on OLTP? (I got that impression from the word \"hybrid\".)\r\nIf yes, is the assumption that only a limited number of columns are to be stored in columnar format (for efficient scanning), and many other columns are to be stored in row format for efficient tuple access?\r\nAre those row-formatted columns stored in the same file as the column-formatted columns, or in a separate file?\r\n\r\nRegarding the column grouping, can I imagine HBase and Cassandra?\r\nHow could the current CREATE TABLE syntax support column grouping? 
(I guess CREATE TABLE needs a syntax for columnar store, and Zedstore need to be incorporated in core, not as an extension...)\r\n\r\n\r\n> A column store uses the same structure but we have *multiple* B-trees, one\r\n> for each column, all indexed by TID. The B-trees for all columns are stored\r\n> in the same physical file.\r\n\r\nDid you think that it's not a good idea to have a different file for each group of columns? Is that because we can't expect physical adjacency of data blocks on disk even if we separate a column in a separate file?\r\n\r\nI thought a separate file for each group of columns would be easier and less error-prone to implement and debug. Adding and dropping the column group would also be very easy and fast.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Mon, 1 Jul 2019 02:59:17 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Sun, Jun 30, 2019 at 7:59 PM Tsunakawa, Takayuki <\ntsunakawa.takay@jp.fujitsu.com> wrote:\n\n> From: Ashwin Agrawal [mailto:aagrawal@pivotal.io]\n> > The objective is to gather feedback on design and approach to the same.\n> > The implementation has core basic pieces working but not close to\n> complete.\n>\n> Thank you for proposing a very interesting topic. Are you thinking of\n> including this in PostgreSQL 13 if possible?\n>\n>\n> > * All Indexes supported\n> ...\n> > work. Btree indexes can be created. Btree and bitmap index scans work.\n>\n> Does Zedstore allow to create indexes of existing types on the table\n> (btree, GIN, BRIN, etc.) and perform index scans (point query, range query,\n> etc.)?\n>\n\nYes, all indexes types work for zedstore and allow point or range queries.\n\n\n> > * Hybrid row-column store, where some columns are stored together, and\n> > others separately. Provide flexibility of granularity on how to\n> > divide the columns. Columns accessed together can be stored\n> > together.\n> ...\n> > This way of laying out the data also easily allows for hybrid row-column\n> > store, where some columns are stored together, and others have a\n> dedicated\n> > B-tree. Need to have user facing syntax to allow specifying how to group\n> > the columns.\n> ...\n> > Zedstore Table can be\n> > created using command:\n> >\n> > CREATE TABLE <name> (column listing) USING zedstore;\n>\n> Are you aiming to enable Zedstore to be used for HTAP, i.e. the same table\n> can be accessed simultaneously for both OLTP and analytics with the minimal\n> performance impact on OLTP? (I got that impression from the word \"hybrid\".)\n>\n\nWell \"hybrid\" is more to convey compressed row and column store can be\nsupported with same design. It really wasn't referring to HTAP. 
In general\nthe goal we are moving towards is column store to be extremely efficient at\nanalytics but still should be able to support all the OLTP operations (with\nminimal performance or storage size impact) Like when making trade-offs\nbetween different design choices and if both can't be meet, preference if\ntowards analytics.\n\nIf yes, is the assumption that only a limited number of columns are to be\n> stored in columnar format (for efficient scanning), and many other columns\n> are to be stored in row format for efficient tuple access?\n>\n\nYes, like if its known that certain columns are always accessed together\nbetter to store them together and avoid the tuple formation cost. Though\nits still to be seen if compression plays role and storing each individual\ncolumn and compressing can still be winner compared to compressing\ndifferent columns as blob. Like saving on IO cost offsets out the tuple\nformation cost or not.\n\nAre those row-formatted columns stored in the same file as the\n> column-formatted columns, or in a separate file?\n>\n\nCurrently, we are focused to just get pure column store working and hence\nnot coded anything for hybrid layout yet. But at least right now the\nthought is would be in same file.\n\nRegarding the column grouping, can I imagine HBase and Cassandra?\n> How could the current CREATE TABLE syntax support column grouping? (I\n> guess CREATE TABLE needs a syntax for columnar store, and Zedstore need to\n> be incorporated in core, not as an extension...)\n>\n\nWhen column grouping comes up yes will need to modify CREATE TABLE syntax,\nwe are still to reach that point in development.\n\n\n> > A column store uses the same structure but we have *multiple* B-trees,\n> one\n> > for each column, all indexed by TID. The B-trees for all columns are\n> stored\n> > in the same physical file.\n>\n> Did you think that it's not a good idea to have a different file for each\n> group of columns? 
Is that because we can't expect physical adjacency of\n> data blocks on disk even if we separate a column in a separate file?\n>\n> I thought a separate file for each group of columns would be easier and\n> less error-prone to implement and debug. Adding and dropping the column\n> group would also be very easy and fast.\n>\n\nCurrently, each group is a single column (till we don't have column\nfamilies) and having file for each column definitely seems not good idea.\nAs it just explodes the number of files. Separate file may have its\nadvantage from pre-fetching point of view but yes can't expect physical\nadjacency of data blocks plus access pattern will anyways involve reading\nmultiple files (if each column stored in separate file).\n\nI doubt storing each group makes it any easier to implement or debug, I\nfeel its actually reverse. Storing everything in single file but separate\nblocks, keep the logic contained inside AM layer. And don't have to write\nspecial code for example for drop table to delete files for all the groups\nand all, or while moving table to different tablespace and all such\ncomplication.\n\nAdding and dropping column group, irrespective can be made easy and fast\nwith blocks for that group, added or marked for reuse within same file.\n\nThank you for the questions.",
"msg_date": "Mon, 1 Jul 2019 12:08:06 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Thu, May 30, 2019 at 8:07 AM DEV_OPS <devops@ww-it.cn> wrote:\n\n>\n> it's really cool and very good progress,\n>\n> I'm interesting if SIDM/JIT will be supported\n>\n\nThat's something outside of Zedstore work directly at least now. The intent\nis to work with current executor code or enhance it only wherever needed.\nIf current executor code supports something that would work for Zedstore.\nBut any other enhancements to executor will be separate undertaking.",
"msg_date": "Mon, 1 Jul 2019 12:14:37 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Hi Ashwin,\n\nI tried playing around with the zedstore code a bit today and there\nare couple questions that came into my mind.\n\n1) Can zedstore tables be vacuumed? If yes, does VACUUM on zedstore\ntable set the VM bits associated with it.\n\n2) Is there a chance that IndexOnlyScan would ever be required for\nzedstore tables considering the design approach taken for it?\n\nFurther, I tried creating a zedstore table with btree index on one of\nit's column and loaded around 50 lacs record into the table. When the\nindexed column was scanned (with enable_seqscan flag set to off), it\nwent for IndexOnlyScan and that took around 15-20 times more than it\nwould take for IndexOnly Scan on heap table just because IndexOnlyScan\nin zedstore always goes to heap as the visibility check fails.\nHowever, the seqscan on zedstore table is quite faster than seqscan on\nheap table because the time taken for I/O is quite less in case for\nzedstore.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Tue, Jul 2, 2019 at 12:45 AM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n>\n> On Thu, May 30, 2019 at 8:07 AM DEV_OPS <devops@ww-it.cn> wrote:\n>>\n>>\n>> it's really cool and very good progress,\n>>\n>> I'm interesting if SIDM/JIT will be supported\n>\n>\n> That's something outside of Zedstore work directly at least now. The intent is to work with current executor code or enhance it only wherever needed. If current executor code supports something that would work for Zedstore. But any other enhancements to executor will be separate undertaking.\n\n\n",
"msg_date": "Wed, 14 Aug 2019 15:20:57 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Wed, Aug 14, 2019 at 2:51 AM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\n> Hi Ashwin,\n>\n> I tried playing around with the zedstore code a bit today and there\n> are couple questions that came into my mind.\n>\n\nGreat! Thank You.\n\n\n>\n> 1) Can zedstore tables be vacuumed? If yes, does VACUUM on zedstore\n> table set the VM bits associated with it.\n>\n\nZedstore tables can be vacuumed. On vacuum, minimal work is performed\nthough compared to heap. Full table is not scanned. Only UNDO log is\ntruncated/discarded based on RecentGlobalXmin. Plus, only TidTree or\nMeta column is scanned to find dead tuples and index entries cleaned\nfor them, based on the same.\n\nCurrently, for zedstore we have not used the VM at all. So, it doesn't\ntouch the same during any operation.\n\n2) Is there a chance that IndexOnlyScan would ever be required for\n> zedstore tables considering the design approach taken for it?\n>\n\nWe have not given much thought to IndexOnlyScans so far. But I think\nIndexOnlyScan definitely would be beneficial for zedstore as\nwell. Even for normal index scans as well, fetching as many columns\npossible from Index itself and only getting rest of required columns\nfrom the table would be good for zedstore. It would help to further\ncut down IO. Ideally, for visibility checking only TidTree needs to be\nscanned and visibility checked with the same, so the cost of checking\nis much lower compared to heap (if VM can't be consulted) but still is\na cost. Also, with vacuum, if UNDO log gets trimmed, the visibility\nchecks are pretty cheap. Still given all that, having VM type thing to\noptimize the same further would help.\n\n\n> Further, I tried creating a zedstore table with btree index on one of\n> it's column and loaded around 50 lacs record into the table. 
When the\n> indexed column was scanned (with enable_seqscan flag set to off), it\n> went for IndexOnlyScan and that took around 15-20 times more than it\n> would take for IndexOnly Scan on heap table just because IndexOnlyScan\n> in zedstore always goes to heap as the visibility check fails.\n> However, the seqscan on zedstore table is quite faster than seqscan on\n> heap table because the time taken for I/O is quite less in case for\n> zedstore.\n>\n\nThanks for reporting, we will look into it. Should be able to optimize\nit. Given no VM exists, IndexOnlyScans currently for zedstore behave\nmore or less like IndexScans. Planner picks IndexOnlyScans for\nzedstore, mostly due to off values for reltuples, relpages, and\nrelallvisible.\n\nWe have been focused on implementing and optimizing the AM pieces. So,\nnot much work has been done for planner estimates and tunning yet. The\nfirst step for the same to get the needed columns in the planner\ninstead of the executor in [1] is proposed. Once, that bakes will use\nthe same to perform more planner estimates and all. Also, analyze\nneeds work to properly reflect reltuples and relpages to influence the\nplanner correctly.\n\n\n1]\nhttps://www.postgresql.org/message-id/CAAKRu_ZQ0Jy7LfZDCY0JdxChdpja9rf-S8Y5%2BU4vX7cYJd62dA%40mail.gmail.com",
"msg_date": "Wed, 14 Aug 2019 10:32:22 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On 14/08/2019 20:32, Ashwin Agrawal wrote:\n> On Wed, Aug 14, 2019 at 2:51 AM Ashutosh Sharma wrote:\n>> 2) Is there a chance that IndexOnlyScan would ever be required for\n>> zedstore tables considering the design approach taken for it?\n> \n> We have not given much thought to IndexOnlyScans so far. But I think\n> IndexOnlyScan definitely would be beneficial for zedstore as\n> well. Even for normal index scans as well, fetching as many columns\n> possible from Index itself and only getting rest of required columns\n> from the table would be good for zedstore. It would help to further\n> cut down IO. Ideally, for visibility checking only TidTree needs to be\n> scanned and visibility checked with the same, so the cost of checking\n> is much lower compared to heap (if VM can't be consulted) but still is\n> a cost. Also, with vacuum, if UNDO log gets trimmed, the visibility\n> checks are pretty cheap. Still given all that, having VM type thing to\n> optimize the same further would help.\n\nHmm, yeah. An index-only scan on a zedstore table could perform the \"VM \nchecks\" by checking the TID tree in the zedstore. It's not as compact as \nthe 2 bits per TID in the heapam's visibility map, but it's pretty good.\n\n>> Further, I tried creating a zedstore table with btree index on one of\n>> it's column and loaded around 50 lacs record into the table. When the\n>> indexed column was scanned (with enable_seqscan flag set to off), it\n>> went for IndexOnlyScan and that took around 15-20 times more than it\n>> would take for IndexOnly Scan on heap table just because IndexOnlyScan\n>> in zedstore always goes to heap as the visibility check fails.\n\nCurrently, an index-only scan on zedstore should be pretty much the same \nspeed as a regular index scan. All the visibility checks will fail, and \nyou end up fetching every row from the table, just like a regular index \nscan. 
So I think what you're seeing is that the index fetches on a \nzedstore table is much slower than on heap.\n\nIdeally, on a column store the index fetches would only fetch the needed \ncolumns, but I don't think that's been implemented yet, so all the \ncolumns are fetched. That can make a big difference, if you have a wide \ntable with lots of columns, but only actually need a few of them. Was \nyour test case something like that?\n\nWe haven't spent much effort on optimizing index fetches yet, so I hope \nthere's many other little tweaks there as well, that we can do to make \nit faster.\n\n- Heikki\n\n\n",
"msg_date": "Thu, 15 Aug 2019 12:38:30 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "We've continued hacking on Zedstore, here's a new patch version against \ncurrent PostgreSQL master (commit f1bf619acdf). If you want to follow \nthe development in real-time, we're working on this branch: \nhttps://github.com/greenplum-db/postgres/tree/zedstore\n\nIf you want to do performance testing with this, make sure you configure \nwith the --with-lz4 option. Otherwise, you'll get pglz compression, \nwhich is *much* slower.\n\n\nMajor TODOs:\n\n* Make it crash-safe, by WAL-logging.\n\n* Teach the planner and executor to pass down the list of columns \nneeded. Currently, many plans will unnecessarily fetch columns that are \nnot needed.\n\n* Make visibility checks against the TID tree in index-only scans.\n\n* zedstore-toast pages are currently leaked, so you'll get a lot of \nbloat if you delete/update rows with large datums\n\n* Use the UNDO framework that's been discussed on another thread. \nThere's UNDO-logging built into zedstore at the moment, but it's not \nvery optimized.\n\n* Improve free space management. Pages that become empty are currently \nrecycled, but space on pages that are not completely empty is not not \nreused, and half-empty pages are not merged.\n\n* Implement TID recycling. Currently, TIDs are allocated in increasing \norder, and after all 2^48 TIDs have been used, even if the rows have \nbeen deleted since, no more ruples can be inserted.\n\n- Heikki",
"msg_date": "Thu, 15 Aug 2019 13:05:49 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Thu, Aug 15, 2019 at 01:05:49PM +0300, Heikki Linnakangas wrote:\n> We've continued hacking on Zedstore, here's a new patch version against\n> current PostgreSQL master (commit f1bf619acdf). If you want to follow the\n> development in real-time, we're working on this branch:\n> https://github.com/greenplum-db/postgres/tree/zedstore\n\nThanks for persuing this. It's an exciting development and I started\nlooking at how we'd put it to use. I imagine we'd use it in favour of ZFS\ntablespaces, which I hope to retire.\n\nI've just done very brief experiment so far. Some thoughts:\n\n . I was missing a way to check for compression ratio; it looks like zedstore\n with lz4 gets ~4.6x for our largest customer's largest table. zfs using\n compress=gzip-1 gives 6x compression across all their partitioned tables,\n and I'm surprised it beats zedstore .\n\n . What do you think about pg_restore --no-tableam; similar to \n --no-tablespaces, it would allow restoring a table to a different AM:\n PGOPTIONS='-c default_table_access_method=zedstore' pg_restore --no-tableam ./pg_dump.dat -d postgres\n Otherwise, the dump says \"SET default_table_access_method=heap\", which\n overrides any value from PGOPTIONS and precludes restoring to new AM.\n\n . It occured to me that indices won't be compressed. That's no fault of\n zedstore, but it may mean that some sites would need to retain their ZFS\n tablespace, and suggests the possibility of an separate, future project\n (I wonder if there's some way a new meta-AM could \"enable\" compression of\n other index AMs, to avoid the need to implement zbtree, zhash, zgin, ...).\n\n . it'd be nice if there was an ALTER TABLE SET ACCESS METHOD, to allow\n migrating data. Otherwise I think the alternative is:\n\tbegin; lock t;\n\tCREATE TABLE new_t LIKE (t INCLUDING ALL) USING (zedstore);\n\tINSERT INTO new_t SELECT * FROM t;\n\tfor index; do CREATE INDEX...; done\n\tDROP t; RENAME new_t (and all its indices). 
attach/inherit, etc.\n\tcommit;\n\n . Speaking of which, I think LIKE needs a new option for ACCESS METHOD, which\n is otherwise lost.\n\nCheers,\nJustin\n\n\n",
"msg_date": "Sun, 18 Aug 2019 14:35:33 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Sun, Aug 18, 2019 at 12:35 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n>\n> . I was missing a way to check for compression ratio;\n\n\nHere are the ways to check compression ratio for zedstore:\n\nTable level:\nselect sum(uncompressedsz::numeric) / sum(totalsz) as compratio from\npg_zs_btree_pages(<tablename>);\n\nPer column level:\nselect attno, count(*), sum(uncompressedsz::numeric) / sum(totalsz) as\ncompratio from pg_zs_btree_pages(<tablename>) group by attno order by attno;\n\n\n> it looks like zedstore\n> with lz4 gets ~4.6x for our largest customer's largest table. zfs using\n> compress=gzip-1 gives 6x compression across all their partitioned\n> tables,\n> and I'm surprised it beats zedstore .\n>\n\nWhat kind of tables did you use? Is it possible to give us the schema\nof the table? Did you perform 'INSERT INTO ... SELECT' or COPY?\nCurrently COPY give better compression ratios than single INSERT\nbecause it generates less pages for meta data. Using the above per column\nlevel compression ratio will provide which columns have lower\ncompression ratio.\n\nWe plan to add other compression algorithms like RLE and delta\nencoding which should give better compression ratios for column store\nalong with LZ4.",
"msg_date": "Mon, 19 Aug 2019 16:15:30 -0700",
"msg_from": "Alexandra Wang <lewang@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Mon, Aug 19, 2019 at 04:15:30PM -0700, Alexandra Wang wrote:\n> On Sun, Aug 18, 2019 at 12:35 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> > . I was missing a way to check for compression ratio;\n> \n> Here are the ways to check compression ratio for zedstore:\n> \n> Table level:\n> SELECT sum(uncompressedsz::numeric)/sum(totalsz) AS compratio FROM pg_zs_btree_pages(<tablename>);\n\npostgres=# SELECT sum(uncompressedsz::numeric)/sum(totalsz) AS compratio FROM pg_zs_btree_pages('child.cdrs_huawei_pgwrecord_2019_07_01');\ncompratio | 4.2730304163521529\n\nFor a fair test, I created a separate ZFS tablspace for storing just a copy of\nthat table.\n\nts=# CREATE TABLE test TABLESPACE testcomp AS SELECT * FROM child.cdrs_huawei_pgwrecord_2019_07_01;\nSELECT 39933381\nTime: 882417.775 ms (14:42.418)\n\nzfs/testJTP20190819 compressratio 6.01x -\nzfs/testJTP20190819 compression gzip-1 inherited from zfs\n\n> Per column level:\n> select attno, count(*), sum(uncompressedsz::numeric)/sum(totalsz) as compratio from pg_zs_btree_pages(<tablename>) group by attno order by attno;\n\nOrder by 3; I see we have SOME highly compressed columns.\n\nIt's still surprising to me that's as low as it is, given their content: phone\nnumbers and IPv4 addresses in text form, using characters limited to\n[[:digit:].]\n\n(I realize we can probably save space using inet type.)\n\n 0 | 4743 | 1.00000000000000000000\n 32 | 21912 | 1.05953637381493823513\n 80 | 36441 | 1.2416446300175039\n 4 | 45059 | 1.3184106811322728\n 83 | 45059 | 1.3184106811322728\n 52 | 39208 | 1.3900788061770992\n...\n 74 | 3464 | 10.8258665101057364\n 17 | 3535 | 10.8776086243096534\n 3 | 7092 | 11.0388009154683678\n 11 | 3518 | 11.4396055611832109\n 65 | 3333 | 14.6594723104237634\n 35 | 14077 | 15.1642131499381887\n...\n 43 | 1601 | 21.4200106784573211\n 79 | 1599 | 21.4487670806076829\n 89 | 1934 | 23.6292134031933401\n 33 | 1934 | 23.6292134031933401\n\nIt seems clear the columns with high 
n_distinct have low compress ratio, and\ncolumns with high compress ratio are those with n_distinct=1...\n\nCREATE TEMP TABLE zs AS SELECT zs.*, n_distinct, avg_width, a.attname FROM (SELECT 'child.cdrs_huawei_pgwrecord_2019_07_01'::regclass t)t , LATERAL (SELECT attno, count(*), sum(uncompressedsz::numeric)/sum(totalsz) AS compratio FROM pg_zs_btree_pages(t) GROUP BY attno)zs , pg_attribute a, pg_class c, pg_stats s WHERE a.attrelid=t AND a.attnum=zs.attno AND c.oid=a.attrelid AND c.relname=s.tablename AND s.attname=a.attname;\n\n n_distinct | compratio \n------------+------------------------\n 217141 | 1.2416446300175039\n 154829 | 1.5306062496764190\n 144486 | 1.3900788061770992\n 128334 | 1.5395022739568842\n 121324 | 1.4005533187886683\n 86341 | 1.6262709389296389\n 84073 | 4.4379336418590519\n 65413 | 5.1890181028038757\n 63703 | 5.5029855093836425\n 63637 | 5.3648468796642262\n 46450 | 1.3184106811322728\n 46450 | 1.3184106811322728\n 43029 | 1.8003513772661308\n 39363 | 1.5845730687475706\n 36720 | 1.4751147557399539\n 36445 | 1.8403087513759131\n 36445 | 1.5453935268318613\n 11455 | 1.05953637381493823513\n 2862 | 9.8649823666870671\n 2625 | 2.3573614181847621\n 1376 | 1.7895024285340428\n 1335 | 2.2812551964262787\n 807 | 7.1192324141359373\n 610 | 7.9373623460089360\n 16 | 11.4396055611832109\n 10 | 5.5429763442365557\n 7 | 5.0440578041440675\n 7 | 5.2000132813261135\n 4 | 6.9741514753325536\n 4 | 4.2872818036896340\n 3 | 1.9080838412634827\n 3 | 2.9915954457453485\n 3 | 2.3056387009407882\n 2 | 10.8776086243096534\n 2 | 5.5950929307378287\n 2 | 18.5796576388128741\n 2 | 10.8258665101057364\n 2 | 9.1112820658021406\n 2 | 3.4986057630739795\n 2 | 4.6250999234025238\n 2 | 11.0388009154683678\n 1 | 15.1642131499381887\n 1 | 2.8855860118178798\n 1 | 23.6292134031933401\n 1 | 21.4200106784573211\n[...]\n\n> > it looks like zedstore\n> > with lz4 gets ~4.6x for our largest customer's largest table. 
zfs using\n> > compress=gzip-1 gives 6x compression across all their partitioned\n> > tables,\n> > and I'm surprised it beats zedstore .\n> >\n> \n> What kind of tables did you use? Is it possible to give us the schema\n> of the table? Did you perform 'INSERT INTO ... SELECT' or COPY?\n\nI did this:\n\n|time ~/src/postgresql.bin/bin/pg_restore /srv/cdrperfbackup/ts/final/child.cdrs_huawei_pgwrecord_2019_07_01 -f- |PGOPTIONS='-cdefault_table_access_method=zedstore' psql --port 5678 postgres --host /tmp\n...\nCOPY 39933381\n...\nreal 100m25.764s\n\n child | cdrs_huawei_pgwrecord_2019_07_01 | table | pryzbyj | permanent | 8277 MB | \n\npostgres=# SELECT array_to_string(array_agg(format_type(atttypid, atttypmod) ||CASE WHEN attnotnull THEN ' not null' ELSE '' END ORDER BY attnum),',') FROM pg_attribute WHERE attrelid='child.cdrs_huawei_pgwrecord_2019_07_01'::regclass AND attnum>0;\narray_to_string | text not null,text,text not null,text not null,text not null,text,text,text,boolean,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,timestamp without time zone not null,bigint not null,text not null,text,text,text,text,text,text,text,text,text,text not null,text,boolean,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,timestamp with time zone,timestamp with time zone,text,text,boolean,text,text,boolean,boolean,text not null,text not null\n\n\n\n",
"msg_date": "Mon, 19 Aug 2019 21:04:25 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On 20/08/2019 05:04, Justin Pryzby wrote:\n>>> it looks like zedstore\n>>> with lz4 gets ~4.6x for our largest customer's largest table. zfs using\n>>> compress=gzip-1 gives 6x compression across all their partitioned\n>>> tables,\n>>> and I'm surprised it beats zedstore .\n\nI did a quick test, with 10 million random IP addresses, in text format. \nI loaded it into a zedstore table (\"create table ips (ip text) using \nzedstore\"), and poked around a little bit to see how the space is used.\n\npostgres=# select lokey, nitems, ncompressed, totalsz, uncompressedsz, \nfreespace from pg_zs_btree_pages('ips') where attno=1 and level=0 limit 10;\n lokey | nitems | ncompressed | totalsz | uncompressedsz | freespace\n-------+--------+-------------+---------+----------------+-----------\n 1 | 4 | 4 | 6785 | 7885 | 1320\n 537 | 5 | 5 | 7608 | 8818 | 492\n 1136 | 4 | 4 | 6762 | 7888 | 1344\n 1673 | 5 | 5 | 7548 | 8776 | 540\n 2269 | 4 | 4 | 6841 | 7895 | 1256\n 2807 | 5 | 5 | 7555 | 8784 | 540\n 3405 | 5 | 5 | 7567 | 8772 | 524\n 4001 | 4 | 4 | 6791 | 7899 | 1320\n 4538 | 5 | 5 | 7596 | 8776 | 500\n 5136 | 4 | 4 | 6750 | 7875 | 1360\n(10 rows)\n\nThere's on average about 10% of free space on the pages. We're losing \nquite a bit to to ZFS compression right there. I'm sure there's some \nfree space on the heap pages as well, but ZFS compression will squeeze \nit out.\n\nThe compression ratio is indeed not very good. I think one reason is \nthat zedstore does LZ4 in relatively small chunks, while ZFS surely \ncompresses large blocks in one go. Looking at the above, there is on \naverage 125 datums packed into each \"item\" (avg(hikey-lokey) / nitems). 
\nI did a quick test with the \"lz4\" command-line utility, compressing flat \nfiles containing random IP addresses.\n\n$ lz4 /tmp/125-ips.txt\nCompressed filename will be : /tmp/125-ips.txt.lz4\nCompressed 1808 bytes into 1519 bytes ==> 84.02% \n\n$ lz4 /tmp/550-ips.txt\nCompressed filename will be : /tmp/550-ips.txt.lz4\nCompressed 7863 bytes into 6020 bytes ==> 76.56% \n\n$ lz4 /tmp/750-ips.txt\nCompressed filename will be : /tmp/750-ips.txt.lz4\nCompressed 10646 bytes into 8035 bytes ==> 75.47%\n\nThe first case is roughly what we do in zedstore currently: we compress \nabout 125 datums as one chunk. The second case is roughly what we would \nget, if we collected 8k worth of datums and compressed them all as \none chunk. And the third case simulates the case where we would allow the \ninput to be larger than 8k, so that the compressed chunk just fits on an \n8k page. Not too much difference between the second and third case, but \nit's pretty clear that we're being hurt by splitting the input into such \nsmall chunks.\n\nThe downside of using a larger compression chunk size is that random \naccess becomes more expensive. Need to give the on-disk format some more \nthought. Although I actually don't feel too bad about the current \ncompression ratio, perfect can be the enemy of good.\n\n- Heikki\n\n\n",
"msg_date": "Tue, 20 Aug 2019 14:12:32 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Thanks Ashwin and Heikki for your responses. I've one more query here,\n\nIf BTree index is created on a zedstore table, the t_tid field of\nIndex tuple contains the physical tid that is not actually pointing to\nthe data block instead it contains something from which the logical\ntid can be derived. So, when IndexScan is performed on a zedstore\ntable, it fetches the physical tid from the index page and derives the\nlogical tid out of it and then retrieves the data corresponding to\nthis logical tid from the zedstore table. For that, it kind of\nperforms SeqScan on the zedstore table for the given tid. From this it\nappears to me as if the Index Scan is as good as SeqScan for zedstore\ntable. If that is true, will we be able to get the benefit of\nIndexScan on zedstore tables? Please let me know if i am missing\nsomething here.\n\nAFAIU, the following user level query on zedstore table\n\nselect * from zed_tab where a = 3;\n\ngets internally converted to\n\nselect * from zed_tab where tid = 3; -- assuming that index is created\non column 'a' and the logical tid associated with a = 3 is 3.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Thu, Aug 15, 2019 at 3:08 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 14/08/2019 20:32, Ashwin Agrawal wrote:\n> > On Wed, Aug 14, 2019 at 2:51 AM Ashutosh Sharma wrote:\n> >> 2) Is there a chance that IndexOnlyScan would ever be required for\n> >> zedstore tables considering the design approach taken for it?\n> >\n> > We have not given much thought to IndexOnlyScans so far. But I think\n> > IndexOnlyScan definitely would be beneficial for zedstore as\n> > well. Even for normal index scans as well, fetching as many columns\n> > possible from Index itself and only getting rest of required columns\n> > from the table would be good for zedstore. It would help to further\n> > cut down IO. 
Ideally, for visibility checking only TidTree needs to be\n> > scanned and visibility checked with the same, so the cost of checking\n> > is much lower compared to heap (if VM can't be consulted) but still is\n> > a cost. Also, with vacuum, if UNDO log gets trimmed, the visibility\n> > checks are pretty cheap. Still given all that, having VM type thing to\n> > optimize the same further would help.\n>\n> Hmm, yeah. An index-only scan on a zedstore table could perform the \"VM\n> checks\" by checking the TID tree in the zedstore. It's not as compact as\n> the 2 bits per TID in the heapam's visibility map, but it's pretty good.\n>\n> >> Further, I tried creating a zedstore table with btree index on one of\n> >> it's column and loaded around 50 lacs record into the table. When the\n> >> indexed column was scanned (with enable_seqscan flag set to off), it\n> >> went for IndexOnlyScan and that took around 15-20 times more than it\n> >> would take for IndexOnly Scan on heap table just because IndexOnlyScan\n> >> in zedstore always goes to heap as the visibility check fails.\n>\n> Currently, an index-only scan on zedstore should be pretty much the same\n> speed as a regular index scan. All the visibility checks will fail, and\n> you end up fetching every row from the table, just like a regular index\n> scan. So I think what you're seeing is that the index fetches on a\n> zedstore table is much slower than on heap.\n>\n> Ideally, on a column store the index fetches would only fetch the needed\n> columns, but I don't think that's been implemented yet, so all the\n> columns are fetched. That can make a big difference, if you have a wide\n> table with lots of columns, but only actually need a few of them. Was\n> your test case something like that?\n>\n> We haven't spent much effort on optimizing index fetches yet, so I hope\n> there's many other little tweaks there as well, that we can do to make\n> it faster.\n>\n> - Heikki\n\n\n",
"msg_date": "Mon, 26 Aug 2019 18:05:57 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Mon, Aug 26, 2019 at 5:36 AM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\n> Thanks Ashwin and Heikki for your responses. I've one more query here,\n>\n> If BTree index is created on a zedstore table, the t_tid field of\n> Index tuple contains the physical tid that is not actually pointing to\n> the data block instead it contains something from which the logical\n> tid can be derived. So, when IndexScan is performed on a zedstore\n> table, it fetches the physical tid from the index page and derives the\n> logical tid out of it and then retrieves the data corresponding to\n> this logical tid from the zedstore table. For that, it kind of\n> performs SeqScan on the zedstore table for the given tid.\n\n\nNope, it won't perform seqscan. As zedstore is laid out as btree itself\nwith logical TID as its key. It can quickly find which page the logical TID\nbelongs to and only access that page. It doesn't need to perform the\nseqscan for the same. That's one of the rationals for laying out things in\nbtree fashion to easily connect logical to physical world and not keep any\nexternal mapping.\n\nAFAIU, the following user level query on zedstore table\n>\n> select * from zed_tab where a = 3;\n>\n> gets internally converted to\n>\n> select * from zed_tab where tid = 3; -- assuming that index is created\n> on column 'a' and the logical tid associated with a = 3 is 3.\n>\n\nSo, for this it will first only access the TID btree, find the leaf page\nwith tid=3. Perform the visibility checks for the tuple and if tuple is\nvisible, then only will fetch all the columns for that TID. Again using the\nbtrees for those columns to only fetch leaf page for that logical tid.\n\nHope that helps to clarify the confusion.\n\nOn Mon, Aug 26, 2019 at 5:36 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:Thanks Ashwin and Heikki for your responses. 
I've one more query here,\n\nIf BTree index is created on a zedstore table, the t_tid field of\nIndex tuple contains the physical tid that is not actually pointing to\nthe data block instead it contains something from which the logical\ntid can be derived. So, when IndexScan is performed on a zedstore\ntable, it fetches the physical tid from the index page and derives the\nlogical tid out of it and then retrieves the data corresponding to\nthis logical tid from the zedstore table. For that, it kind of\nperforms SeqScan on the zedstore table for the given tid. Nope, it won't perform seqscan. As zedstore is laid out as btree itself with logical TID as its key. It can quickly find which page the logical TID belongs to and only access that page. It doesn't need to perform the seqscan for the same. That's one of the rationals for laying out things in btree fashion to easily connect logical to physical world and not keep any external mapping.\nAFAIU, the following user level query on zedstore table\n\nselect * from zed_tab where a = 3;\n\ngets internally converted to\n\nselect * from zed_tab where tid = 3; -- assuming that index is created\non column 'a' and the logical tid associated with a = 3 is 3.So, for this it will first only access the TID btree, find the leaf page with tid=3. Perform the visibility checks for the tuple and if tuple is visible, then only will fetch all the columns for that TID. Again using the btrees for those columns to only fetch leaf page for that logical tid.Hope that helps to clarify the confusion.",
"msg_date": "Mon, 26 Aug 2019 17:33:00 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Tue, Aug 27, 2019 at 6:03 AM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> Hope that helps to clarify the confusion.\n>\n\nThanks for the explanation. Yes, it does clarify my doubt to some extent.\n\nMy point is, once we find the leaf page containing the given tid, we go\nthrough each item in the page until we find the data corresponding to the\ngiven tid which means we kind of perform a sequential scan at the page\nlevel. I'm referring to the below loop in zsbt_attr_scan_fetch_array().\n\n for (off = FirstOffsetNumber; off <= maxoff; off++)\n {\n ItemId iid = PageGetItemId(page, off);\n ZSAttributeArrayItem *item = (ZSAttributeArrayItem *)\nPageGetItem(page, iid);\n\n if (item->t_endtid <= nexttid)\n continue;\n\n if (item->t_firsttid > nexttid)\n break;\n\nBut that's not true for IndexScan in case of heap table because there the\nindex tuple contains the exact physical location of tuple in the heap. So,\nthere is no need to scan the entire page.\n\nFurther here are some minor comments that i could find while doing a quick\ncode walkthrough.\n\n1) In zsundo_insert_finish(), there is a double call to\nBufferGetPage(undobuf); Is that required ?\n\n2) In zedstoream_fetch_row(), why is zsbt_tid_begin_scan() being called\ntwice? I'm referring to the below code.\n\n if (fetch_proj->num_proj_atts == 0)\n {\n ....\n ....\n zsbt_tid_begin_scan(rel, tid, tid + 1,\n snapshot,\n &fetch_proj->tid_scan);\n fetch_proj->tid_scan.serializable = true;\n\n for (int i = 1; i < fetch_proj->num_proj_atts; i++)\n {\n int attno = fetch_proj->proj_atts[i];\n\n zsbt_attr_begin_scan(rel, reldesc, attno,\n &fetch_proj->attr_scans[i - 1]);\n }\n MemoryContextSwitchTo(oldcontext);\n\n zsbt_tid_begin_scan(rel, tid, tid + 1, snapshot,\n&fetch_proj->tid_scan);\n }\n\nAlso, for all types of update operation (be it key or non-key update) we\ncreate a new tid for the new version of tuple. 
Can't we use the tid\nassociated with the old tuple for the cases where there is no concurrent\ntransactions to whom the old tuple is still visible.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Tue, Aug 27, 2019 at 6:03 AM Ashwin Agrawal <aagrawal@pivotal.io> wrote:> Hope that helps to clarify the confusion.>Thanks for the explanation. Yes, it does clarify my doubt to some extent. My point is, once we find the leaf page containing the given tid, we go through each item in the page until we find the data corresponding to the given tid which means we kind of perform a sequential scan at the page level. I'm referring to the below loop in zsbt_attr_scan_fetch_array(). for (off = FirstOffsetNumber; off <= maxoff; off++) { ItemId iid = PageGetItemId(page, off); ZSAttributeArrayItem *item = (ZSAttributeArrayItem *) PageGetItem(page, iid); if (item->t_endtid <= nexttid) continue; if (item->t_firsttid > nexttid) break;But that's not true for IndexScan in case of heap table because there the index tuple contains the exact physical location of tuple in the heap. So, there is no need to scan the entire page.Further here are some minor comments that i could find while doing a quick code walkthrough.1) In zsundo_insert_finish(), there is a double call to BufferGetPage(undobuf); Is that required ?2) In zedstoream_fetch_row(), why is zsbt_tid_begin_scan() being called twice? I'm referring to the below code. if (fetch_proj->num_proj_atts == 0) { .... .... 
zsbt_tid_begin_scan(rel, tid, tid + 1, snapshot, &fetch_proj->tid_scan); fetch_proj->tid_scan.serializable = true; for (int i = 1; i < fetch_proj->num_proj_atts; i++) { int attno = fetch_proj->proj_atts[i]; zsbt_attr_begin_scan(rel, reldesc, attno, &fetch_proj->attr_scans[i - 1]); } MemoryContextSwitchTo(oldcontext); zsbt_tid_begin_scan(rel, tid, tid + 1, snapshot, &fetch_proj->tid_scan); }Also, for all types of update operation (be it key or non-key update) we create a new tid for the new version of tuple. Can't we use the tid associated with the old tuple for the cases where there is no concurrent transactions to whom the old tuple is still visible.-- With Regards,Ashutosh SharmaEnterpriseDB:http://www.enterprisedb.com",
"msg_date": "Tue, 27 Aug 2019 12:33:01 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Tue, Aug 27, 2019 at 12:03 AM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\n> My point is, once we find the leaf page containing the given tid, we go\n> through each item in the page until we find the data corresponding to the\n> given tid which means we kind of perform a sequential scan at the page\n> level. I'm referring to the below loop in zsbt_attr_scan_fetch_array().\n>\n> for (off = FirstOffsetNumber; off <= maxoff; off++)\n> {\n> ItemId iid = PageGetItemId(page, off);\n> ZSAttributeArrayItem *item = (ZSAttributeArrayItem *)\n> PageGetItem(page, iid);\n>\n> if (item->t_endtid <= nexttid)\n> continue;\n>\n> if (item->t_firsttid > nexttid)\n> break;\n>\n> But that's not true for IndexScan in case of heap table because there the\n> index tuple contains the exact physical location of tuple in the heap. So,\n> there is no need to scan the entire page.\n>\n\nYou are correct that we currently go through each item in the leaf page that\ncontains the given tid, specifically, the logic to retrieve all the\nattribute\nitems inside a ZSAttStream is now moved to decode_attstream() in the latest\ncode, and then in zsbt_attr_fetch() we again loop through each item we\npreviously retrieved from decode_attstream() and look for the given tid. One\noptimization we can to is to tell decode_attstream() to stop decoding at the\ntid we are interested in. We can also apply other tricks to speed up the\nlookups in the page, for fixed length attribute, it is easy to do binary\nsearch\ninstead of linear search, and for variable length attribute, we can probably\ntry something that we didn't think of yet.\n\n\n1) In zsundo_insert_finish(), there is a double call to\n> BufferGetPage(undobuf); Is that required ?\n>\n\nFixed, thanks!\n\n\n2) In zedstoream_fetch_row(), why is zsbt_tid_begin_scan() being called\n> twice? 
I'm referring to the below code.\n>\n> if (fetch_proj->num_proj_atts == 0)\n> {\n> ....\n> ....\n> zsbt_tid_begin_scan(rel, tid, tid + 1,\n> snapshot,\n> &fetch_proj->tid_scan);\n> fetch_proj->tid_scan.serializable = true;\n>\n> for (int i = 1; i < fetch_proj->num_proj_atts; i++)\n> {\n> int attno = fetch_proj->proj_atts[i];\n>\n> zsbt_attr_begin_scan(rel, reldesc, attno,\n> &fetch_proj->attr_scans[i - 1]);\n> }\n> MemoryContextSwitchTo(oldcontext);\n>\n> zsbt_tid_begin_scan(rel, tid, tid + 1, snapshot,\n> &fetch_proj->tid_scan);\n> }\n>\n\nI removed the second call, thanks!\n\n\n\n> Also, for all types of update operation (be it key or non-key update) we\n> create a new tid for the new version of tuple. Can't we use the tid\n> associated with the old tuple for the cases where there is no concurrent\n> transactions to whom the old tuple is still visible.\n>\n\nZedstore currently implement update as delete+insert, hence the old tid is\nnot\nreused. We don't store the tuple in our UNDO log, and we only store the\ntransaction information in the UNDO log. Reusing the tid of the old tuple\nmeans\nputting the old tuple in the UNDO log, which we have not implemented yet.\n\n\nThanks for reporting, this is very helpful! Patches are welcome as well!\n\nOn Tue, Aug 27, 2019 at 12:03 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:My point is, once we find the leaf page containing the given tid, we go through each item in the page until we find the data corresponding to the given tid which means we kind of perform a sequential scan at the page level. I'm referring to the below loop in zsbt_attr_scan_fetch_array(). 
for (off = FirstOffsetNumber; off <= maxoff; off++) { ItemId iid = PageGetItemId(page, off); ZSAttributeArrayItem *item = (ZSAttributeArrayItem *) PageGetItem(page, iid); if (item->t_endtid <= nexttid) continue; if (item->t_firsttid > nexttid) break;But that's not true for IndexScan in case of heap table because there the index tuple contains the exact physical location of tuple in the heap. So, there is no need to scan the entire page.You are correct that we currently go through each item in the leaf page thatcontains the given tid, specifically, the logic to retrieve all the attributeitems inside a ZSAttStream is now moved to decode_attstream() in the latestcode, and then in zsbt_attr_fetch() we again loop through each item wepreviously retrieved from decode_attstream() and look for the given tid. Oneoptimization we can to is to tell decode_attstream() to stop decoding at thetid we are interested in. We can also apply other tricks to speed up thelookups in the page, for fixed length attribute, it is easy to do binary searchinstead of linear search, and for variable length attribute, we can probablytry something that we didn't think of yet. 1) In zsundo_insert_finish(), there is a double call to BufferGetPage(undobuf); Is that required ?Fixed, thanks! 2) In zedstoream_fetch_row(), why is zsbt_tid_begin_scan() being called twice? I'm referring to the below code. if (fetch_proj->num_proj_atts == 0) { .... .... zsbt_tid_begin_scan(rel, tid, tid + 1, snapshot, &fetch_proj->tid_scan); fetch_proj->tid_scan.serializable = true; for (int i = 1; i < fetch_proj->num_proj_atts; i++) { int attno = fetch_proj->proj_atts[i]; zsbt_attr_begin_scan(rel, reldesc, attno, &fetch_proj->attr_scans[i - 1]); } MemoryContextSwitchTo(oldcontext); zsbt_tid_begin_scan(rel, tid, tid + 1, snapshot, &fetch_proj->tid_scan); }I removed the second call, thanks! Also, for all types of update operation (be it key or non-key update) we create a new tid for the new version of tuple. 
Can't we use the tid associated with the old tuple for the cases where there is no concurrent transactions to whom the old tuple is still visible.Zedstore currently implement update as delete+insert, hence the old tid is notreused. We don't store the tuple in our UNDO log, and we only store thetransaction information in the UNDO log. Reusing the tid of the old tuple meansputting the old tuple in the UNDO log, which we have not implemented yet.Thanks for reporting, this is very helpful! Patches are welcome as well!",
"msg_date": "Tue, 27 Aug 2019 16:59:30 -0700",
"msg_from": "Alexandra Wang <lewang@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Wed, Aug 28, 2019 at 5:30 AM Alexandra Wang <lewang@pivotal.io> wrote:\n\n> You are correct that we currently go through each item in the leaf page\n> that\n> contains the given tid, specifically, the logic to retrieve all the\n> attribute\n> items inside a ZSAttStream is now moved to decode_attstream() in the latest\n> code, and then in zsbt_attr_fetch() we again loop through each item we\n> previously retrieved from decode_attstream() and look for the given tid.\n>\n\nOkay. Any idea why this new way of storing attribute data as streams\n(lowerstream and upperstream) has been chosen just for the attributes but\nnot for tids. Are only attribute blocks compressed but not the tids blocks?\n\n\n> One\n> optimization we can to is to tell decode_attstream() to stop decoding at\n> the\n> tid we are interested in. We can also apply other tricks to speed up the\n> lookups in the page, for fixed length attribute, it is easy to do binary\n> search\n> instead of linear search, and for variable length attribute, we can\n> probably\n> try something that we didn't think of yet.\n>\n\nI think we can probably ask decode_attstream() to stop once it has found\nthe tid that we are searching for but then we only need to do that for\nIndex Scans.\n\nZedstore currently implement update as delete+insert, hence the old tid is\n> not\n> reused. We don't store the tuple in our UNDO log, and we only store the\n> transaction information in the UNDO log. Reusing the tid of the old tuple\n> means\n> putting the old tuple in the UNDO log, which we have not implemented yet.\n>\n>\nOKay, so that means performing update on a non-key attribute would also\nrequire changes in the index table. In short, HOT update is currently not\npossible with zedstore table. 
Am I right?\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:*http://www.enterprisedb.com <http://www.enterprisedb.com/>*\n\nOn Wed, Aug 28, 2019 at 5:30 AM Alexandra Wang <lewang@pivotal.io> wrote:You are correct that we currently go through each item in the leaf page thatcontains the given tid, specifically, the logic to retrieve all the attributeitems inside a ZSAttStream is now moved to decode_attstream() in the latestcode, and then in zsbt_attr_fetch() we again loop through each item wepreviously retrieved from decode_attstream() and look for the given tid. Okay. Any idea why this new way of storing attribute data as streams (lowerstream and upperstream) has been chosen just for the attributes but not for tids. Are only attribute blocks compressed but not the tids blocks? Oneoptimization we can to is to tell decode_attstream() to stop decoding at thetid we are interested in. We can also apply other tricks to speed up thelookups in the page, for fixed length attribute, it is easy to do binary searchinstead of linear search, and for variable length attribute, we can probablytry something that we didn't think of yet. I think we can probably ask decode_attstream() to stop once it has found the tid that we are searching for but then we only need to do that for Index Scans.Zedstore currently implement update as delete+insert, hence the old tid is notreused. We don't store the tuple in our UNDO log, and we only store thetransaction information in the UNDO log. Reusing the tid of the old tuple meansputting the old tuple in the UNDO log, which we have not implemented yet.OKay, so that means performing update on a non-key attribute would also require changes in the index table. In short, HOT update is currently not possible with zedstore table. Am I right?-- With Regards,Ashutosh SharmaEnterpriseDB:http://www.enterprisedb.com",
"msg_date": "Thu, 29 Aug 2019 17:00:45 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On 29/08/2019 14:30, Ashutosh Sharma wrote:\n> \n> On Wed, Aug 28, 2019 at 5:30 AM Alexandra Wang <lewang@pivotal.io \n> <mailto:lewang@pivotal.io>> wrote:\n> \n> You are correct that we currently go through each item in the leaf\n> page that\n> contains the given tid, specifically, the logic to retrieve all the\n> attribute\n> items inside a ZSAttStream is now moved to decode_attstream() in the\n> latest\n> code, and then in zsbt_attr_fetch() we again loop through each item we\n> previously retrieved from decode_attstream() and look for the given\n> tid. \n> \n> \n> Okay. Any idea why this new way of storing attribute data as streams \n> (lowerstream and upperstream) has been chosen just for the attributes \n> but not for tids. Are only attribute blocks compressed but not the tids \n> blocks?\n\nRight, only attribute blocks are currently compressed. Tid blocks need \nto be modified when there are UPDATEs or DELETE, so I think having to \ndecompress and recompress them would be more costly. Also, there is no \nuser data on the TID tree, and the Simple-8b encoded codewords used to \nrepresent the TIDs are already pretty compact. I'm not sure how much \ngain you would get from passing it through a general purpose compressor.\n\nI could be wrong though. We could certainly try it out, and see how it \nperforms.\n\n> One\n> optimization we can to is to tell decode_attstream() to stop\n> decoding at the\n> tid we are interested in. We can also apply other tricks to speed up the\n> lookups in the page, for fixed length attribute, it is easy to do\n> binary search\n> instead of linear search, and for variable length attribute, we can\n> probably\n> try something that we didn't think of yet. 
\n> \n> \n> I think we can probably ask decode_attstream() to stop once it has found \n> the tid that we are searching for but then we only need to do that for \n> Index Scans.\n\nI've been thinking that we should add a few \"bookmarks\" on long streams, \nso that you could skip e.g. to the midpoint in a stream. It's a tradeoff \nthough; when you add more information for random access, it makes the \nrepresentation less compact.\n\n> Zedstore currently implement update as delete+insert, hence the old\n> tid is not\n> reused. We don't store the tuple in our UNDO log, and we only store the\n> transaction information in the UNDO log. Reusing the tid of the old\n> tuple means\n> putting the old tuple in the UNDO log, which we have not implemented\n> yet.\n> \n> OKay, so that means performing update on a non-key attribute would also \n> require changes in the index table. In short, HOT update is currently \n> not possible with zedstore table. Am I right?\n\nThat's right. There's a lot of potential gain for doing HOT updates. For \nexample, if you UPDATE one column on every row on a table, ideally you \nwould only modify the attribute tree containing that column. But that \nhasn't been implemented.\n\n- Heikki\n\n\n",
"msg_date": "Thu, 29 Aug 2019 15:09:33 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Thu, Aug 29, 2019 at 5:39 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 29/08/2019 14:30, Ashutosh Sharma wrote:\n> >\n> > On Wed, Aug 28, 2019 at 5:30 AM Alexandra Wang <lewang@pivotal.io\n> > <mailto:lewang@pivotal.io>> wrote:\n> >\n> > You are correct that we currently go through each item in the leaf\n> > page that\n> > contains the given tid, specifically, the logic to retrieve all the\n> > attribute\n> > items inside a ZSAttStream is now moved to decode_attstream() in the\n> > latest\n> > code, and then in zsbt_attr_fetch() we again loop through each item we\n> > previously retrieved from decode_attstream() and look for the given\n> > tid.\n> >\n> >\n> > Okay. Any idea why this new way of storing attribute data as streams\n> > (lowerstream and upperstream) has been chosen just for the attributes\n> > but not for tids. Are only attribute blocks compressed but not the tids\n> > blocks?\n>\n> Right, only attribute blocks are currently compressed. Tid blocks need\n> to be modified when there are UPDATEs or DELETE, so I think having to\n> decompress and recompress them would be more costly. Also, there is no\n> user data on the TID tree, and the Simple-8b encoded codewords used to\n> represent the TIDs are already pretty compact. I'm not sure how much\n> gain you would get from passing it through a general purpose compressor.\n>\n> I could be wrong though. We could certainly try it out, and see how it\n> performs.\n>\n> > One\n> > optimization we can to is to tell decode_attstream() to stop\n> > decoding at the\n> > tid we are interested in. 
We can also apply other tricks to speed up the\n> > lookups in the page, for fixed length attribute, it is easy to do\n> > binary search\n> > instead of linear search, and for variable length attribute, we can\n> > probably\n> > try something that we didn't think of yet.\n> >\n> >\n> > I think we can probably ask decode_attstream() to stop once it has found\n> > the tid that we are searching for but then we only need to do that for\n> > Index Scans.\n>\n> I've been thinking that we should add a few \"bookmarks\" on long streams,\n> so that you could skip e.g. to the midpoint in a stream. It's a tradeoff\n> though; when you add more information for random access, it makes the\n> representation less compact.\n>\n> > Zedstore currently implement update as delete+insert, hence the old\n> > tid is not\n> > reused. We don't store the tuple in our UNDO log, and we only store the\n> > transaction information in the UNDO log. Reusing the tid of the old\n> > tuple means\n> > putting the old tuple in the UNDO log, which we have not implemented\n> > yet.\n> >\n> > OKay, so that means performing update on a non-key attribute would also\n> > require changes in the index table. In short, HOT update is currently\n> > not possible with zedstore table. Am I right?\n>\n> That's right. There's a lot of potential gain for doing HOT updates. For\n> example, if you UPDATE one column on every row on a table, ideally you\n> would only modify the attribute tree containing that column. But that\n> hasn't been implemented.\n\nThanks Heikki for your reply. After quite some time today I got chance\nto look back into the code. I could see that you have changed the\ntuple insertion and update mechanism a bit. 
As per the latest changes\nall the tuples being inserted/updated in a transaction are spooled\ninto a hash table and then flushed at the time of transaction commit\nand probably due to this change, I could see that the server crashes\nwhen trying to perform UPDATE operation on a zedstore table having 10\nlacs record. See below example,\n\ncreate table t1(a int, b int) using zedstore;\ninsert into t1 select i, i+10 from generate_series(1, 1000000) i;\npostgres=# update t1 set b = 200;\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nAbove update statement crashed due to some extensive memory leak.\n\nFurther, the UPDATE operation on zedstore table is very slow. I think\nthat's because in case of zedstore table we have to update all the\nbtree data structures even if one column is updated and that really\nsucks. Please let me know if there is some other reason for it.\n\nI also found some typos when going through the writeup in\nzedstore_internal.h and thought of correcting those. Attached is the\npatch with the changes.\n\nThanks,\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com",
"msg_date": "Tue, 17 Sep 2019 16:45:11 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 4:15 AM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\n> create table t1(a int, b int) using zedstore;\n> insert into t1 select i, i+10 from generate_series(1, 1000000) i;\n> postgres=# update t1 set b = 200;\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n>\n> Above update statement crashed due to some extensive memory leak.\n>\n\nThank you for reporting! We have located the memory leak and also\nnoticed some other memory related bugs. We are working on the fixes\nplease stay tuned!\n\n\n> I also found some typos when going through the writeup in\n> zedstore_internal.h and thought of correcting those. Attached is the\n> patch with the changes.\n>\n\nApplied. Thank you!",
"msg_date": "Wed, 18 Sep 2019 19:39:41 -0700",
"msg_from": "Alexandra Wang <lewang@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 8:10 AM Alexandra Wang <lewang@pivotal.io> wrote:\n>\n> On Tue, Sep 17, 2019 at 4:15 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>>\n>> create table t1(a int, b int) using zedstore;\n>> insert into t1 select i, i+10 from generate_series(1, 1000000) i;\n>> postgres=# update t1 set b = 200;\n>> server closed the connection unexpectedly\n>> This probably means the server terminated abnormally\n>> before or while processing the request.\n>> The connection to the server was lost. Attempting reset: Failed.\n>>\n>> Above update statement crashed due to some extensive memory leak.\n>\n>\n> Thank you for reporting! We have located the memory leak and also\n> noticed some other memory related bugs. We are working on the fixes\n> please stay tuned!\n>\n\nCool. As I suspected earlier, it's basically \"ZedstoreAMTupleBuffers\"\ncontext that is completely exhausting the memory and it is being used\nto spool the tuples.\n\n>>\n>> I also found some typos when going through the writeup in\n>> zedstore_internal.h and thought of correcting those. Attached is the\n>> patch with the changes.\n>\n>\n> Applied. Thank you!\n\nThanks for that.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 19 Sep 2019 11:35:56 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 11:35 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> On Thu, Sep 19, 2019 at 8:10 AM Alexandra Wang <lewang@pivotal.io> wrote:\n> >\n> > On Tue, Sep 17, 2019 at 4:15 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >>\n> >> create table t1(a int, b int) using zedstore;\n> >> insert into t1 select i, i+10 from generate_series(1, 1000000) i;\n> >> postgres=# update t1 set b = 200;\n> >> server closed the connection unexpectedly\n> >> This probably means the server terminated abnormally\n> >> before or while processing the request.\n> >> The connection to the server was lost. Attempting reset: Failed.\n> >>\n> >> Above update statement crashed due to some extensive memory leak.\n> >\n> >\n> > Thank you for reporting! We have located the memory leak and also\n> > noticed some other memory related bugs. We are working on the fixes\n> > please stay tuned!\n> >\n>\n> Cool. As I suspected earlier, it's basically \"ZedstoreAMTupleBuffers\"\n> context that is completely exhausting the memory and it is being used\n> to spool the tuples.\n>\n\nSome more updates on top of this:\n\nWhen doing an update operation, for each tuple being modified,\n*tuplebuffers_insert()* says that there is no entry for the relation\nbeing modified in the hash table, although it was already added when\nthe first tuple in the table was updated. Why is that? I mean, if I\nhave added an entry in the hash table *tuplebuffers* for, let's say,\ntable t1, then why would a subsequent call to tuplebuffers_insert() say\nthat there is no entry for table t1 in *tuplebuffers*? Shouldn't\nthat only happen once you have flushed all the tuples in\ntupbuffer->attbuffers? Because of this, for each tuple,\ntupbuffer->attbuffers is allocated, resulting in a lot of memory\nconsumption. OTOH, if the insert is performed on the same table, only\nfor the first tuple does tuplebuffers_insert() say that there is no\nentry for table t1 in the hash; from the second time onwards that\ndoesn't happen. I think this is why the memory leak is happening in\nthe case of the update operation. Please let me know if I'm missing\nsomething here, as I didn't get a chance to spend much time on this.\nThank you.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 19 Sep 2019 17:11:52 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "> When doing update operation, for each tuple being modified,\n> *tuplebuffers_insert()* says that there is no entry for the relation\n> being modified in the hash table although it was already added when\n> the first tuple in the table was updated. Why is it so?\n\nCurrently, when doing an update, it will actually flush the tuple\nbuffers every time we update a tuple. As a result, we only ever spool\nup one tuple at a time. This is a good place to put in an optimization\nlike was implemented for insert, but I haven't gotten around to\nlooking into that yet.\n\nThe memory leak is actually happening because it isn't freeing the\nattbuffers after flushing. Alexandra Wang and I have a working\nbranch[1] where we tried to plug the leak by freeing the attbuffers,\nbut it has exposed an issue with triggers that I need to understand\nbefore I push the fix into the main zedstore branch.\n\nI don't like our solution of freeing the buffers either, because they\ncould easily be reused. I'm going to take a stab at making that better\nbefore merging in the fix.\n\n[1] https://github.com/l-wang/postgres-1/tree/zedstore-fix-memory-issues",
"msg_date": "Thu, 19 Sep 2019 17:18:19 -0700",
"msg_from": "Taylor Vesely <tvesely@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 5:48 AM Taylor Vesely <tvesely@pivotal.io> wrote:\n>\n> > When doing update operation, for each tuple being modified,\n> > *tuplebuffers_insert()* says that there is no entry for the relation\n> > being modified in the hash table although it was already added when\n> > the first tuple in the table was updated. Why is it so?\n>\n> Currently, when doing an update, it will actually flush the tuple\n> buffers every time we update a tuple. As a result, we only ever spool\n> up one tuple at a time. This is a good place to put in an optimization\n> like was implemented for insert, but I haven't gotten around to\n> looking into that yet.\n>\n\nOkay. So, that's the root cause. Spooling just one tuple where at\nleast 60 tuples can be spooled and then not freeing it at all is\naltogether the reason for this extensive memory leak.\n\n> The memory leak is actually happening because it isn't freeing the\n> attbuffers after flushing. Alexandra Wang and I have a working\n> branch[1] where we tried to plug the leak by freeing the attbuffers,\n> but it has exposed an issue with triggers that I need to understand\n> before I push the fix into the main zedstore branch.\n>\n> I don't like our solution of freeing the buffers either, because they\n> could easily be reused. I'm going to take a stab at making that better\n> before merging in the fix.\n>\n\nThat's right, why do we need to free the memory after flushing data in\nattbuffers. We can simply reuse it for next set of data to be updated.\n\n> [1] https://github.com/l-wang/postgres-1/tree/zedstore-fix-memory-issues\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 20 Sep 2019 08:59:36 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Hi Alexandra,\n\nOn Tue, Sep 17, 2019 at 4:45 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> On Thu, Aug 29, 2019 at 5:39 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >\n> > On 29/08/2019 14:30, Ashutosh Sharma wrote:\n> > >\n> > > On Wed, Aug 28, 2019 at 5:30 AM Alexandra Wang <lewang@pivotal.io\n> > > <mailto:lewang@pivotal.io>> wrote:\n>\n> Further, the UPDATE operation on zedstore table is very slow. I think\n> that's because in case of zedstore table we have to update all the\n> btree data structures even if one column is updated and that really\n> sucks. Please let me know if there is some other reason for it.\n>\n\nThere was no answer for this in your previous reply. It seems like you\nmissed it. As I said earlier, I tried performing UPDATE operation with\noptimised build and found that to update around 10 lacs record in\nzedstore table it takes around 24k ms whereas for normal heap table it\ntakes 2k ms. Is that because in case of zedstore table we have to\nupdate all the Btree data structures even if one column is updated or\nthere is some other reason for it. If yes, could you please let us\nknow. FYI, I'm trying to update the table with just two columns.\n\nFurther, In the latest code I'm getting this warning message when it\nis compiled using -O2 optimisation flag.\n\nzedstore_tidpage.c: In function ‘zsbt_collect_dead_tids’:\nzedstore_tidpage.c:978:10: warning: ‘page’ may be used uninitialized\nin this function [-Wmaybe-uninitialized]\n opaque = ZSBtreePageGetOpaque(page);\n ^\nAttached is the patch that fixes it.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com",
"msg_date": "Wed, 25 Sep 2019 16:39:47 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Hi Ashutosh,\n\nSorry I indeed missed your question, thanks for the reminder!\n\nOn Wed, Sep 25, 2019 at 4:10 AM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\n> > Further, the UPDATE operation on zedstore table is very slow. I think\n> > that's because in case of zedstore table we have to update all the\n> > btree data structures even if one column is updated and that really\n> > sucks. Please let me know if there is some other reason for it.\n> >\n>\n> There was no answer for this in your previous reply. It seems like you\n> missed it. As I said earlier, I tried performing UPDATE operation with\n> optimised build and found that to update around 10 lacs record in\n> zedstore table it takes around 24k ms whereas for normal heap table it\n> takes 2k ms. Is that because in case of zedstore table we have to\n> update all the Btree data structures even if one column is updated or\n> there is some other reason for it. If yes, could you please let us\n> know. FYI, I'm trying to update the table with just two columns.\n>\n\nZedstore UPDATE operation currently fetches the old rows, updates the\nundo pointers stored in the tid btree, and insert new rows into all\nthe attribute btrees with the new tids. So performance of updating one\ncolumn makes no difference from updating all the columns. That said,\nthe wider the table is, the longer it takes to update, regardless\nupdating one column or all the columns.\n\nHowever, since your test table only has two columns, and we also\ntested the same on a one-column table and got similar results as\nyours, there is definitely room for optimizations. Attached file\nzedstore_update_flames_lz4_first_update.svg is the profiling results\nfor the update query on a one-column table with 1M records. It spent\nmost of the time in zedstoream_fetch_row() and zsbt_tid_update(). 
For\nzedstoream_fetch_row(), Taylor and I had some interesting findings\nwhich I'm going to talk about next, I haven't dived into\nzsbt_tid_update() yet and need to think about it more.\n\nTo understand what slows down zedstore UDPATE, Taylor and I did the\nfollowing test and profiling on a zedstore table with only one column.\n\npostgres=# create table onecol(a int) using zedstore;\npostgres=# insert into onecol select i from generate_series(1, 1000000) i;\n\n-- Create view to count zedstore pages group by page types\npostgres=# CREATE VIEW pg_zs_page_counts AS\n SELECT\n c.relnamespace::regnamespace,\n c.oid,\n c.relname,\n pg_zs_page_type(c.oid, generate_series(0, c.relpages - 1)),\n count(*)\n FROM pg_am am\n JOIN pg_class c ON (c.relam = am.oid)\n WHERE am.amname='zedstore'\n GROUP BY 1,2,3,4;\n\npostgres=# select * from pg_zs_page_counts;\n relnamespace | oid | relname | pg_zs_page_type | count\n--------------+-------+---------+-----------------+-------\n public | 32768 | onecol | BTREE | 640\n public | 32768 | onecol | FREE | 90\n public | 32768 | onecol | META | 1\n(3 rows)\n\n-- Run update query the first time\npostgres=# update onecol set a = 200; -- profiling attached in\nzedstore_update_flames_lz4_first_update.svg\nTime: 28760.199 ms (00:28.760)\n\npostgres=# select * from pg_zs_page_counts;\n relnamespace | oid | relname | pg_zs_page_type | count\n--------------+-------+---------+-----------------+-------\n public | 32768 | onecol | BTREE | 6254\n public | 32768 | onecol | FREE | 26915\n public | 32768 | onecol | META | 1\n(6 rows)\n\npostgres=# select count(*) from pg_zs_btree_pages('onecol') where attno = 0;\n count\n-------\n 5740\n(1 row)\n\npostgres=# select count(*) from pg_zs_btree_pages('onecol') where attno = 1;\n count\n-------\n 514\n(1 row)\n\npostgres=# select * from pg_zs_btree_pages('onecol') where attno = 1 and\ntotalsz > 0;\n blkno | nextblk | attno | level | lokey | hikey | nitems |\nncompressed | totalsz | uncompressedsz | 
freespace\n-------+------------+-------+-------+---------+-----------------+--------+-------------+---------+----------------+-----------\n 730 | 6580 | 1 | 0 | 999901 | 1182451 | 1 |\n 1 | 3156 | 778480 | 4980\n 6580 | 13030 | 1 | 0 | 1182451 | 1380771 | 2 |\n 1 | 8125 | 859104 | 11\n 13030 | 19478 | 1 | 0 | 1380771 | 1579091 | 2 |\n 1 | 8125 | 859104 | 11\n 19478 | 25931 | 1 | 0 | 1579091 | 1777411 | 2 |\n 1 | 8125 | 859104 | 11\n 25931 | 32380 | 1 | 0 | 1777411 | 1975731 | 2 |\n 1 | 8125 | 859104 | 11\n 32380 | 4294967295 | 1 | 0 | 1975731 | 281474976645120 | 2 |\n 1 | 2033 | 105016 | 6103\n(6 rows)\n\n-- Run update query the second time\npostgres=# update onecol set a = 200; -- profiling attached in\nzedstore_update_flames_lz4_second_update.svg\nTime: 267135.703 ms (04:27.136)\n\nAs you can see, it took 28s to run the update query for the first\ntime, it was slow but expected. However, when we run the same update\nquery again it took 4 mins and 27s, almost 10x slower than the first\nrun. The profiling result of the second update is attached, it shows\nthat 57% of all the time it's doing decode_chunk_fixed(), which is\nused for decoding a chunk in a attstream so that we can confirm\nwhether the tid of interest is in that chunk and fetch it if true.\nRight now, each chunk contains at most 60 tids for fixed length\nattributes and at most 30 tids for varlena attributes, and we decode\nall the tids each chunk contains one by one.\n\nGoing back to our test, before and after the first UPDATE, the BTREE\npage counts increased from 640 to 6254, however, only 6 out of the 514\nattribute btree pages actually store data. It seems like a bug that we\nleft behind 508 empty btree pages, we should fix it, but let's put it\naside as a seperate problem. With 6 pages we stored 1M rows, each page\ncontains as many as 198,320 tids. This is the reason why the second\nUPDATE spent so much time at decoding chunks. 
The btree structure only\nhelps us locate the page for a given tid, but once we get to the page,\nthe better compression we have, the more chunks we can pack in one\npage, the more calls per page to decode_chunk(). Even worse, unlike\nINSERT, UPDATE currently initialize a new fetcher every time it\nfetches a new row, which means it doesn't remember the last position\nthe decoder was at in the attstream, so everytime it fetches a new\nrow, the decoder starts all over from the beginning of the attstream,\nand we are talking about an attstream that could have 198,320 records.\nWe also haven't done any optimization inside of decode_chunk() itself,\nlike checking first and last tid, stop decoding once found the tid, or\ndoing binary search for fixed length attributes.\n\nSo, I think what slows down the second UPDATE are also part of the\nreasons why the first UPDATE is slow. We still haven't done any\noptimization for UPDATE so far, probably because we didn't expect it\nto be better than heap, but we should try to make it not too much\nworse.\n\n\n> Further, In the latest code I'm getting this warning message when it\n> is compiled using -O2 optimisation flag.\n>\n> zedstore_tidpage.c: In function ‘zsbt_collect_dead_tids’:\n> zedstore_tidpage.c:978:10: warning: ‘page’ may be used uninitialized\n> in this function [-Wmaybe-uninitialized]\n> opaque = ZSBtreePageGetOpaque(page);\n> ^\n> Attached is the patch that fixes it.\n>\n\nApplied. Thanks!",
"msg_date": "Fri, 27 Sep 2019 02:39:09 -0700",
"msg_from": "Alexandra Wang <lewang@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Of course I forgot to attach the files.",
"msg_date": "Fri, 27 Sep 2019 02:42:35 -0700",
"msg_from": "Alexandra Wang <lewang@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 3:09 PM Alexandra Wang <lewang@pivotal.io> wrote:\n>\n> Hi Ashutosh,\n>\n> Sorry I indeed missed your question, thanks for the reminder!\n>\n> On Wed, Sep 25, 2019 at 4:10 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>>\n>> > Further, the UPDATE operation on zedstore table is very slow. I think\n>> > that's because in case of zedstore table we have to update all the\n>> > btree data structures even if one column is updated and that really\n>> > sucks. Please let me know if there is some other reason for it.\n>> >\n>>\n>> There was no answer for this in your previous reply. It seems like you\n>> missed it. As I said earlier, I tried performing UPDATE operation with\n>> optimised build and found that to update around 10 lacs record in\n>> zedstore table it takes around 24k ms whereas for normal heap table it\n>> takes 2k ms. Is that because in case of zedstore table we have to\n>> update all the Btree data structures even if one column is updated or\n>> there is some other reason for it. If yes, could you please let us\n>> know. FYI, I'm trying to update the table with just two columns.\n>\n>\n> Zedstore UPDATE operation currently fetches the old rows, updates the\n> undo pointers stored in the tid btree, and insert new rows into all\n> the attribute btrees with the new tids. So performance of updating one\n> column makes no difference from updating all the columns. That said,\n> the wider the table is, the longer it takes to update, regardless\n> updating one column or all the columns.\n>\n> However, since your test table only has two columns, and we also\n> tested the same on a one-column table and got similar results as\n> yours, there is definitely room for optimizations. Attached file\n> zedstore_update_flames_lz4_first_update.svg is the profiling results\n> for the update query on a one-column table with 1M records. It spent\n> most of the time in zedstoream_fetch_row() and zsbt_tid_update(). 
For\n> zedstoream_fetch_row(), Taylor and I had some interesting findings\n> which I'm going to talk about next, I haven't dived into\n> zsbt_tid_update() yet and need to think about it more.\n>\n> To understand what slows down zedstore UDPATE, Taylor and I did the\n> following test and profiling on a zedstore table with only one column.\n>\n> postgres=# create table onecol(a int) using zedstore;\n> postgres=# insert into onecol select i from generate_series(1, 1000000) i;\n>\n> -- Create view to count zedstore pages group by page types\n> postgres=# CREATE VIEW pg_zs_page_counts AS\n> SELECT\n> c.relnamespace::regnamespace,\n> c.oid,\n> c.relname,\n> pg_zs_page_type(c.oid, generate_series(0, c.relpages - 1)),\n> count(*)\n> FROM pg_am am\n> JOIN pg_class c ON (c.relam = am.oid)\n> WHERE am.amname='zedstore'\n> GROUP BY 1,2,3,4;\n>\n> postgres=# select * from pg_zs_page_counts;\n> relnamespace | oid | relname | pg_zs_page_type | count\n> --------------+-------+---------+-----------------+-------\n> public | 32768 | onecol | BTREE | 640\n> public | 32768 | onecol | FREE | 90\n> public | 32768 | onecol | META | 1\n> (3 rows)\n>\n> -- Run update query the first time\n> postgres=# update onecol set a = 200; -- profiling attached in zedstore_update_flames_lz4_first_update.svg\n> Time: 28760.199 ms (00:28.760)\n>\n> postgres=# select * from pg_zs_page_counts;\n> relnamespace | oid | relname | pg_zs_page_type | count\n> --------------+-------+---------+-----------------+-------\n> public | 32768 | onecol | BTREE | 6254\n> public | 32768 | onecol | FREE | 26915\n> public | 32768 | onecol | META | 1\n> (6 rows)\n>\n\nOops, the first UPDATE created a lot of free pages.\n\nJust FYI, when the second update was ran, it took around 5 mins (which\nis almost 10-12 times more than what 1st UPDATE took) but this time\nthere was no more free pages added, instead the already available free\npages were used. 
Here is the stats observed before and after second\nupdate,\n\nbefore:\n=====\npostgres=# select * from pg_zs_page_counts;\n relnamespace | oid | relname | pg_zs_page_type | count\n--------------+-------+---------+-----------------+-------\n public | 16390 | t1 | FREE | 26915\n public | 16390 | t1 | BTREE | 7277\n public | 16390 | t1 | META | 1\n(3 rows)\n\n\nafter:\n====\npostgres=# select * from pg_zs_page_counts;\n relnamespace | oid | relname | pg_zs_page_type | count\n--------------+-------+---------+-----------------+-------\n public | 16390 | t1 | FREE | 26370\n public | 16390 | t1 | BTREE | 7822\n public | 16390 | t1 | META | 1\n(3 rows)\n\nYou may see that around 545 pages got added this time and they were\nall taken from the free pages list.\n\n> postgres=# select count(*) from pg_zs_btree_pages('onecol') where attno = 0;\n> count\n> -------\n> 5740\n> (1 row)\n>\n\nThis could be because currently tid blocks are not compressed as\nagainst the other attribute blocks.\n\n> postgres=# select count(*) from pg_zs_btree_pages('onecol') where attno = 1;\n> count\n> -------\n> 514\n> (1 row)\n>\n> postgres=# select * from pg_zs_btree_pages('onecol') where attno = 1 and totalsz > 0;\n> blkno | nextblk | attno | level | lokey | hikey | nitems | ncompressed | totalsz | uncompressedsz | freespace\n> -------+------------+-------+-------+---------+-----------------+--------+-------------+---------+----------------+-----------\n> 730 | 6580 | 1 | 0 | 999901 | 1182451 | 1 | 1 | 3156 | 778480 | 4980\n> 6580 | 13030 | 1 | 0 | 1182451 | 1380771 | 2 | 1 | 8125 | 859104 | 11\n> 13030 | 19478 | 1 | 0 | 1380771 | 1579091 | 2 | 1 | 8125 | 859104 | 11\n> 19478 | 25931 | 1 | 0 | 1579091 | 1777411 | 2 | 1 | 8125 | 859104 | 11\n> 25931 | 32380 | 1 | 0 | 1777411 | 1975731 | 2 | 1 | 8125 | 859104 | 11\n> 32380 | 4294967295 | 1 | 0 | 1975731 | 281474976645120 | 2 | 1 | 2033 | 105016 | 6103\n> (6 rows)\n>\n> -- Run update query the second time\n> postgres=# update onecol set a = 200; -- 
profiling attached in zedstore_update_flames_lz4_second_update.svg\n> Time: 267135.703 ms (04:27.136)\n>\n> As you can see, it took 28s to run the update query for the first\n> time, it was slow but expected. However, when we run the same update\n> query again it took 4 mins and 27s, almost 10x slower than the first\n> run. The profiling result of the second update is attached, it shows\n> that 57% of all the time it's doing decode_chunk_fixed(), which is\n> used for decoding a chunk in a attstream so that we can confirm\n> whether the tid of interest is in that chunk and fetch it if true.\n> Right now, each chunk contains at most 60 tids for fixed length\n> attributes and at most 30 tids for varlena attributes, and we decode\n> all the tids each chunk contains one by one.\n>\n> Going back to our test, before and after the first UPDATE, the BTREE\n> page counts increased from 640 to 6254, however, only 6 out of the 514\n> attribute btree pages actually store data. It seems like a bug that we\n> left behind 508 empty btree pages, we should fix it, but let's put it\n> aside as a seperate problem. With 6 pages we stored 1M rows, each page\n> contains as many as 198,320 tids. This is the reason why the second\n> UPDATE spent so much time at decoding chunks. The btree structure only\n> helps us locate the page for a given tid, but once we get to the page,\n> the better compression we have, the more chunks we can pack in one\n> page, the more calls per page to decode_chunk(). 
Even worse, unlike\n> INSERT, UPDATE currently initialize a new fetcher every time it\n> fetches a new row, which means it doesn't remember the last position\n> the decoder was at in the attstream, so everytime it fetches a new\n> row, the decoder starts all over from the beginning of the attstream,\n> and we are talking about an attstream that could have 198,320 records.\n> We also haven't done any optimization inside of decode_chunk() itself,\n> like checking first and last tid, stop decoding once found the tid, or\n> doing binary search for fixed length attributes.\n>\n> So, I think what slows down the second UPDATE are also part of the\n> reasons why the first UPDATE is slow. We still haven't done any\n> optimization for UPDATE so far, probably because we didn't expect it\n> to be better than heap, but we should try to make it not too much\n> worse.\n>\n\nThat's right, if the situation is too worse, it would be difficult to\ncompromise. So, some fix is certainly required here.\n\n>>\n>> Further, In the latest code I'm getting this warning message when it\n>> is compiled using -O2 optimisation flag.\n>>\n>> zedstore_tidpage.c: In function ‘zsbt_collect_dead_tids’:\n>> zedstore_tidpage.c:978:10: warning: ‘page’ may be used uninitialized\n>> in this function [-Wmaybe-uninitialized]\n>> opaque = ZSBtreePageGetOpaque(page);\n>> ^\n>> Attached is the patch that fixes it.\n>\n>\n> Applied. Thanks!\n\nThanks for that and for sharing the detail information on why update\noperation on zedstore table is so slow.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 30 Sep 2019 16:08:00 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Hi,\n\nI got chance to spend some time looking into the recent changes done\nin the zedstore code, basically the functions for packing datums into\nthe attribute streams and handling attribute leaf pages. I didn't find\nany issues but there are some minor comments that I found when\nreviewing. I have worked on those and attached is the patch with the\nchanges. See if the changes looks meaningful to you.\n\nThanks,\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Mon, Sep 30, 2019 at 4:08 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> On Fri, Sep 27, 2019 at 3:09 PM Alexandra Wang <lewang@pivotal.io> wrote:\n> >\n> > Hi Ashutosh,\n> >\n> > Sorry I indeed missed your question, thanks for the reminder!\n> >\n> > On Wed, Sep 25, 2019 at 4:10 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >>\n> >> > Further, the UPDATE operation on zedstore table is very slow. I think\n> >> > that's because in case of zedstore table we have to update all the\n> >> > btree data structures even if one column is updated and that really\n> >> > sucks. Please let me know if there is some other reason for it.\n> >> >\n> >>\n> >> There was no answer for this in your previous reply. It seems like you\n> >> missed it. As I said earlier, I tried performing UPDATE operation with\n> >> optimised build and found that to update around 10 lacs record in\n> >> zedstore table it takes around 24k ms whereas for normal heap table it\n> >> takes 2k ms. Is that because in case of zedstore table we have to\n> >> update all the Btree data structures even if one column is updated or\n> >> there is some other reason for it. If yes, could you please let us\n> >> know. FYI, I'm trying to update the table with just two columns.\n> >\n> >\n> > Zedstore UPDATE operation currently fetches the old rows, updates the\n> > undo pointers stored in the tid btree, and insert new rows into all\n> > the attribute btrees with the new tids. 
So performance of updating one\n> > column makes no difference from updating all the columns. That said,\n> > the wider the table is, the longer it takes to update, regardless\n> > updating one column or all the columns.\n> >\n> > However, since your test table only has two columns, and we also\n> > tested the same on a one-column table and got similar results as\n> > yours, there is definitely room for optimizations. Attached file\n> > zedstore_update_flames_lz4_first_update.svg is the profiling results\n> > for the update query on a one-column table with 1M records. It spent\n> > most of the time in zedstoream_fetch_row() and zsbt_tid_update(). For\n> > zedstoream_fetch_row(), Taylor and I had some interesting findings\n> > which I'm going to talk about next, I haven't dived into\n> > zsbt_tid_update() yet and need to think about it more.\n> >\n> > To understand what slows down zedstore UDPATE, Taylor and I did the\n> > following test and profiling on a zedstore table with only one column.\n> >\n> > postgres=# create table onecol(a int) using zedstore;\n> > postgres=# insert into onecol select i from generate_series(1, 1000000) i;\n> >\n> > -- Create view to count zedstore pages group by page types\n> > postgres=# CREATE VIEW pg_zs_page_counts AS\n> > SELECT\n> > c.relnamespace::regnamespace,\n> > c.oid,\n> > c.relname,\n> > pg_zs_page_type(c.oid, generate_series(0, c.relpages - 1)),\n> > count(*)\n> > FROM pg_am am\n> > JOIN pg_class c ON (c.relam = am.oid)\n> > WHERE am.amname='zedstore'\n> > GROUP BY 1,2,3,4;\n> >\n> > postgres=# select * from pg_zs_page_counts;\n> > relnamespace | oid | relname | pg_zs_page_type | count\n> > --------------+-------+---------+-----------------+-------\n> > public | 32768 | onecol | BTREE | 640\n> > public | 32768 | onecol | FREE | 90\n> > public | 32768 | onecol | META | 1\n> > (3 rows)\n> >\n> > -- Run update query the first time\n> > postgres=# update onecol set a = 200; -- profiling attached in 
zedstore_update_flames_lz4_first_update.svg\n> > Time: 28760.199 ms (00:28.760)\n> >\n> > postgres=# select * from pg_zs_page_counts;\n> > relnamespace | oid | relname | pg_zs_page_type | count\n> > --------------+-------+---------+-----------------+-------\n> > public | 32768 | onecol | BTREE | 6254\n> > public | 32768 | onecol | FREE | 26915\n> > public | 32768 | onecol | META | 1\n> > (6 rows)\n> >\n>\n> Oops, the first UPDATE created a lot of free pages.\n>\n> Just FYI, when the second update was ran, it took around 5 mins (which\n> is almost 10-12 times more than what 1st UPDATE took) but this time\n> there was no more free pages added, instead the already available free\n> pages were used. Here is the stats observed before and after second\n> update,\n>\n> before:\n> =====\n> postgres=# select * from pg_zs_page_counts;\n> relnamespace | oid | relname | pg_zs_page_type | count\n> --------------+-------+---------+-----------------+-------\n> public | 16390 | t1 | FREE | 26915\n> public | 16390 | t1 | BTREE | 7277\n> public | 16390 | t1 | META | 1\n> (3 rows)\n>\n>\n> after:\n> ====\n> postgres=# select * from pg_zs_page_counts;\n> relnamespace | oid | relname | pg_zs_page_type | count\n> --------------+-------+---------+-----------------+-------\n> public | 16390 | t1 | FREE | 26370\n> public | 16390 | t1 | BTREE | 7822\n> public | 16390 | t1 | META | 1\n> (3 rows)\n>\n> You may see that around 545 pages got added this time and they were\n> all taken from the free pages list.\n>\n> > postgres=# select count(*) from pg_zs_btree_pages('onecol') where attno = 0;\n> > count\n> > -------\n> > 5740\n> > (1 row)\n> >\n>\n> This could be because currently tid blocks are not compressed as\n> against the other attribute blocks.\n>\n> > postgres=# select count(*) from pg_zs_btree_pages('onecol') where attno = 1;\n> > count\n> > -------\n> > 514\n> > (1 row)\n> >\n> > postgres=# select * from pg_zs_btree_pages('onecol') where attno = 1 and totalsz > 0;\n> > blkno | nextblk 
| attno | level | lokey | hikey | nitems | ncompressed | totalsz | uncompressedsz | freespace\n> > -------+------------+-------+-------+---------+-----------------+--------+-------------+---------+----------------+-----------\n> > 730 | 6580 | 1 | 0 | 999901 | 1182451 | 1 | 1 | 3156 | 778480 | 4980\n> > 6580 | 13030 | 1 | 0 | 1182451 | 1380771 | 2 | 1 | 8125 | 859104 | 11\n> > 13030 | 19478 | 1 | 0 | 1380771 | 1579091 | 2 | 1 | 8125 | 859104 | 11\n> > 19478 | 25931 | 1 | 0 | 1579091 | 1777411 | 2 | 1 | 8125 | 859104 | 11\n> > 25931 | 32380 | 1 | 0 | 1777411 | 1975731 | 2 | 1 | 8125 | 859104 | 11\n> > 32380 | 4294967295 | 1 | 0 | 1975731 | 281474976645120 | 2 | 1 | 2033 | 105016 | 6103\n> > (6 rows)\n> >\n> > -- Run update query the second time\n> > postgres=# update onecol set a = 200; -- profiling attached in zedstore_update_flames_lz4_second_update.svg\n> > Time: 267135.703 ms (04:27.136)\n> >\n> > As you can see, it took 28s to run the update query for the first\n> > time, it was slow but expected. However, when we run the same update\n> > query again it took 4 mins and 27s, almost 10x slower than the first\n> > run. The profiling result of the second update is attached, it shows\n> > that 57% of all the time it's doing decode_chunk_fixed(), which is\n> > used for decoding a chunk in a attstream so that we can confirm\n> > whether the tid of interest is in that chunk and fetch it if true.\n> > Right now, each chunk contains at most 60 tids for fixed length\n> > attributes and at most 30 tids for varlena attributes, and we decode\n> > all the tids each chunk contains one by one.\n> >\n> > Going back to our test, before and after the first UPDATE, the BTREE\n> > page counts increased from 640 to 6254, however, only 6 out of the 514\n> > attribute btree pages actually store data. It seems like a bug that we\n> > left behind 508 empty btree pages, we should fix it, but let's put it\n> > aside as a seperate problem. 
With 6 pages we stored 1M rows, each page\n> > contains as many as 198,320 tids. This is the reason why the second\n> > UPDATE spent so much time at decoding chunks. The btree structure only\n> > helps us locate the page for a given tid, but once we get to the page,\n> > the better compression we have, the more chunks we can pack in one\n> > page, the more calls per page to decode_chunk(). Even worse, unlike\n> > INSERT, UPDATE currently initialize a new fetcher every time it\n> > fetches a new row, which means it doesn't remember the last position\n> > the decoder was at in the attstream, so everytime it fetches a new\n> > row, the decoder starts all over from the beginning of the attstream,\n> > and we are talking about an attstream that could have 198,320 records.\n> > We also haven't done any optimization inside of decode_chunk() itself,\n> > like checking first and last tid, stop decoding once found the tid, or\n> > doing binary search for fixed length attributes.\n> >\n> > So, I think what slows down the second UPDATE are also part of the\n> > reasons why the first UPDATE is slow. We still haven't done any\n> > optimization for UPDATE so far, probably because we didn't expect it\n> > to be better than heap, but we should try to make it not too much\n> > worse.\n> >\n>\n> That's right, if the situation is too worse, it would be difficult to\n> compromise. So, some fix is certainly required here.\n>\n> >>\n> >> Further, In the latest code I'm getting this warning message when it\n> >> is compiled using -O2 optimisation flag.\n> >>\n> >> zedstore_tidpage.c: In function ‘zsbt_collect_dead_tids’:\n> >> zedstore_tidpage.c:978:10: warning: ‘page’ may be used uninitialized\n> >> in this function [-Wmaybe-uninitialized]\n> >> opaque = ZSBtreePageGetOpaque(page);\n> >> ^\n> >> Attached is the patch that fixes it.\n> >\n> >\n> > Applied. 
Thanks!\n>\n> Thanks for that and for sharing the detail information on why update\n> operation on zedstore table is so slow.\n>\n> --\n> With Regards,\n> Ashutosh Sharma\n> EnterpriseDB:http://www.enterprisedb.com",
"msg_date": "Tue, 15 Oct 2019 17:19:49 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On 15/10/2019 13:49, Ashutosh Sharma wrote:\n> Hi,\n> \n> I got chance to spend some time looking into the recent changes done\n> in the zedstore code, basically the functions for packing datums into\n> the attribute streams and handling attribute leaf pages. I didn't find\n> any issues but there are some minor comments that I found when\n> reviewing. I have worked on those and attached is the patch with the\n> changes. See if the changes looks meaningful to you.\n\nThanks for looking! Applied to the development repository \n(https://github.com/greenplum-db/postgres/tree/zedstore/)\n\n- Heikki\n\n\n",
"msg_date": "Thu, 17 Oct 2019 10:41:10 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Thu, Oct 17, 2019 at 2:11 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 15/10/2019 13:49, Ashutosh Sharma wrote:\n> > Hi,\n> >\n> > I got chance to spend some time looking into the recent changes done\n> > in the zedstore code, basically the functions for packing datums into\n> > the attribute streams and handling attribute leaf pages. I didn't find\n> > any issues but there are some minor comments that I found when\n> > reviewing. I have worked on those and attached is the patch with the\n> > changes. See if the changes looks meaningful to you.\n>\n> Thanks for looking! Applied to the development repository\n\nThank you. Here is one more observation:\n\nWhen a zedstore table is queried using *invalid* ctid, the server\ncrashes due to assertion failure. See below,\n\npostgres=# select * from t1 where ctid = '(0, 0)';\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nI believe above should have either returned 0 rows or failed with some\nuser friendly error.\n\nFurther, when the same table is queried using some non-existing ctid,\nthe query returns 0 rows. See below,\n\npostgres=# select count(*) from t1;\n count\n-------\n 2\n(1 row)\n\npostgres=# select * from t1 where ctid = '(0, 2)';\n a | b\n---+------\n 2 | str2\n(1 row)\n\npostgres=# select * from t1 where ctid = '(0, 3)';\n a | b\n---+---\n(0 rows)\n\npostgres=# select * from t1 where ctid = '(0, 4)';\n a | b\n---+---\n(0 rows)\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 24 Oct 2019 14:50:12 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "> When a zedstore table is queried using *invalid* ctid, the server\n> crashes due to assertion failure. See below,\n>\n> postgres=# select * from t2 where ctid = '(0, 0)';\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n\nThank you for pointing that out! I will look into fixing that some\ntime this week. If we run without assertions the query still fails\nwith this error because zedstoream_tuple_tid_valid incorrectly reports\nthe TID as valid:\n\nERROR: arrived at incorrect block 2 while descending zedstore btree\n\n> I believe above should have either returned 1 rows or failed with some\n> user friendly error.\n\nAgreed. I think it should match the behavior of heap as closely as\npossible.\n",
"msg_date": "Mon, 28 Oct 2019 15:22:10 -0700",
"msg_from": "Taylor Vesely <tvesely@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Alex Wang and I have been doing some performance analysis of the most\nrecent version of the zedstore branch, and have some interesting\nstatistics to share.\n\nWe specifically focused on TPC-DS query 2, because it plays to what\nshould be the strength of zedstore- namely it does a full table scan\nof only a subset of columns. I've attached the explain verbose output\nfor reference.\n\nWe scan two columns of 'catalog_sales', and two columns of 'web_sales'.\n\n-> Parallel Append\n -> Parallel Seq Scan on tpcds.catalog_sales\n Output: catalog_sales.cs_ext_sales_price,\ncatalog_sales.cs_sold_date_sk\n -> Parallel Seq Scan on tpcds.web_sales\n Output: web_sales.ws_ext_sales_price, web_sales.ws_sold_date_sk\n\nFor heap, it needs to do a full table scan of both tables, and we need\nto read the entire table into memory. For our dataset, that totals\naround 119GB of data.\n\n***HEAP***\ntpcds=# select pg_size_pretty(pg_relation_size('web_sales'));\n pg_size_pretty\n----------------\n 39 GB\n(1 row)\n\ntpcds=# select pg_size_pretty(pg_relation_size('catalog_sales'));\n pg_size_pretty\n----------------\n 80 GB\n(1 row)\n***/HEAP***\n\nWith Zedstore the total relation size is smaller because of\ncompression. When scanning the table, we only scan the blocks with\ndata we are interested in, and leave the rest alone. 
So the total\nsize we need to scan for these tables totals around 4GB\n\n***ZEDSTORE***\nzedstore=# select pg_size_pretty(pg_relation_size('web_sales'));\n pg_size_pretty\n----------------\n 20 GB\n(1 row)\n\nzedstore=# select pg_size_pretty(pg_relation_size('catalog_sales'));\n pg_size_pretty\n----------------\n 40 GB\n(1 row)\n\nzedstore=# with zedstore_tables as (select d.oid, f.*\nzedstore(# from (select c.oid\nzedstore(# from pg_am am\nzedstore(# join pg_class c on (c.relam = am.oid)\nzedstore(# where am.amname = 'zedstore') d,\nzedstore(# pg_zs_btree_pages(d.oid) f)\nzedstore-# select zs.attno, att.attname, zs.oid::regclass, count(zs.attno)\nas pages\nzedstore-# pg_size_pretty(count(zs.attno) * 8 * 1024) from\nzedstore_tables zs\nzedstore-# left join pg_attribute att on zs.attno = att.attnum\nzedstore-# and zs.oid = att.attrelid\nzedstore-# where zs.oid in ('catalog_sales'::regclass,\n'web_sales'::regclass)\nzedstore-# and (att.attname in\n('cs_ext_sales_price','cs_sold_date_sk','ws_ext_sales_price','ws_sold_date_sk')\nzedstore(# or zs.attno = 0)\nzedstore-# group by zs.attno, att.attname, zs.oid\nzedstore-# order by zs.oid , zs.attno;\n attno | attname | oid | pages | pg_size_pretty\n-------+--------------------+---------------+--------+----------------\n 0 | | catalog_sales | 39549 | 309 MB\n 1 | cs_sold_date_sk | catalog_sales | 2441 | 19 MB\n 24 | cs_ext_sales_price | catalog_sales | 289158 | 2259 MB\n 0 | | web_sales | 20013 | 156 MB\n 1 | ws_sold_date_sk | web_sales | 17578 | 137 MB\n 24 | ws_ext_sales_price | web_sales | 144860 | 1132 MB\n***/ZEDSTORE ***\n\nOn our test machine, our tables were stored on a single spinning disk,\nso our read speed was pretty abysmal with this query. This query is\nI/O bound for us, so it was the single largest factor. 
With heap, the\ntables are scanned sequentially, and therefore can scan around 150MB of\ntable data per second:\n\n***HEAP***\navg-cpu: %user %nice %system %iowait %steal %idle\n 8.54 0.00 1.85 11.62 0.00 77.98\n\nDevice r/s w/s rkB/s wkB/s rrqm/s wrqm/s %rrqm\n %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util\nsdd 1685.33 0.00 157069.33 0.00 18.67 0.00 1.10\n 0.00 1.56 0.00 2.62 93.20 0.00 0.59 100.00\n\nDevice r/s w/s rkB/s wkB/s rrqm/s wrqm/s %rrqm\n %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util\nsdd 1655.33 0.00 154910.67 0.00 21.33 0.00 1.27\n 0.00 1.62 0.00 2.68 93.58 0.00 0.60 100.13\n\nDevice r/s w/s rkB/s wkB/s rrqm/s wrqm/s %rrqm\n %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util\nsdd 1746.33 0.00 155121.33 0.00 28.00 0.00 1.58\n 0.00 1.48 0.00 2.61 88.83 0.00 0.57 100.00\n***/HEAP***\n\nBecause zedstore resembled random I/O, the read speed was\nsignificantly hindered on our single disk. As a result, we saw ~150x\nslower read speeds.\n\n***ZEDSTORE***\navg-cpu: %user %nice %system %iowait %steal %idle\n 6.24 0.00 1.22 6.34 0.00 86.20\n\nDevice r/s w/s rkB/s wkB/s rrqm/s wrqm/s %rrqm\n %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util\nsdb 129.33 0.00 1034.67 0.00 0.00 0.00 0.00\n 0.00 15.89 0.00 2.05 8.00 0.00 7.67 99.20\n\nDevice r/s w/s rkB/s wkB/s rrqm/s wrqm/s %rrqm\n %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util\nsdb 120.67 0.00 965.33 0.00 0.00 0.00 0.00\n 0.00 16.51 0.00 1.99 8.00 0.00 8.21 99.07\n\nDevice r/s w/s rkB/s wkB/s rrqm/s wrqm/s %rrqm\n %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util\nsdb 121.00 0.00 968.00 0.00 0.00 0.00 0.00\n 0.00 16.76 0.00 2.02 8.00 0.00 8.19 99.07\n***/ZEDSTORE***\n\nThe total query time:\n\n***HEAP***\n Execution Time: 758807.571 ms\n***/HEAP***\n\n***ZEDSTORE***\n Execution Time: 2111576.259 ms\n***/ZEDSTORE***\n\nEvery attribute in zedstore is stored in a btree with the TID as a\nkey. 
Unlike heap, the TID is a logical address, and not a physical\none. The pages of one attribute are interspersed with the pages of all\nother attributes. When you do a sequential scan on zedstore the pages\nare, therefore, not stored in sequential order, so the access pattern\ncan resemble random I/O.\n\nOn our system, query time for zedstore was around 3x slower than heap\nfor this query. If your storage does not handle semi-random reads very\nwell, then zedstore can be very slow. This setup was a worst case\nscenario because random read was 150x slower than with sequential\nread. On hardware with better random I/O zedstore would really shine.\n\nOn a side note, a second run of this query with zedstore was finished\nin around 57 seconds, because the ~4GB of column data was already in\nthe relcache. The data size is smaller because we only store the\nrelevant columns in memory, also the datums are compressed and\nencoded. Conversely, subsequently running the same query with heap\nstill takes around 750 seconds because our system cannot store 119GB\nof relation data in the relcache/system caches.\n\nOur main takeaway with this is that anything we can do to group\ntogether data that is accessed together can help zedstore to have\nlarger, more frequent sequential reads.\n\n\nOn Mon, Oct 28, 2019 at 3:22 PM Taylor Vesely <tvesely@pivotal.io> wrote:\n\n> > When a zedstore table is queried using *invalid* ctid, the server\n> > crashes due to assertion failure. See below,\n> >\n> > postgres=# select * from t2 where ctid = '(0, 0)';\n> > server closed the connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n> > The connection to the server was lost. Attempting reset: Failed.\n>\n> Thank you for pointing that out! I will look into fixing that some\n> time this week. 
If we run without assertions the query still fails\n> with this error because zedstoream_tuple_tid_valid incorrectly reports\n> the TID as valid:\n>\n> ERROR: arrived at incorrect block 2 while descending zedstore btree\n>\n> > I believe above should have either returned 1 rows or failed with some\n> > user friendly error.\n>\n> Agreed. I think it should match the behavior of heap as closely as\n> possible.\n>",
"msg_date": "Wed, 30 Oct 2019 15:33:58 -0700",
"msg_from": "Taylor Vesely <tvesely@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "> When a zedstore table is queried using *invalid* ctid, the server\n> crashes due to assertion failure. See below,\n>\n> postgres=# select * from t1 where ctid = '(0, 0)';\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n>\n> I believe above should have either returned 0 rows or failed with some\n> user friendly error.\n\nWe pushed a fix for this today. It now returns zero rows, like the\nequivalent query with heap. Thanks for reporting!\n",
"msg_date": "Mon, 4 Nov 2019 16:40:03 -0800",
"msg_from": "Taylor Vesely <tvesely@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Hello,\n\nWe (David and I) recently observed that a Zedstore table can be considerably\nbloated when we load data into it with concurrent copies. Also, we found\nthat\nconcurrent insert performance was less than desirable. This is a detailed\nanalysis of the extent of the problem, the cause of the problem and how we\nfixed it. This has input from much of our team: Alex, Ashwin, David, Heikki,\nMelanie, Taylor and myself.\n\nAn example of the bloat that we observed:\nTPC-DS scale = 270:\nTable heap zed(serial) zed(16 parallel COPYs)\nweb_sales 39G 19G 39G\n\nWe found that it was caused due to inefficient page splits resultant from\nout-of-tid-order-inserts into full/full-ish attribute tree leaf pages. The\ndegree of under-utilization was significant - attribute tree leaves with a\nserial data load had 6-8x more datums than the attribute tree leaves\nresultant with a parallel load of 16 sessions.\n\nConsider the scenario below:\n\nAssumptions:\n1. Let us consider two concurrent copy commands executing (sessions S1 and\nS2).\n2. The table has only one (fixed-length for sake of argument) attribute 'a'.\n3. For attribute 'a', a full attribute tree leaf page can accommodate 1500\ndatums.\n\nTID allocations:\nS1: 1-1000\nS2: 1001-2000, 2001-3000\n\nOrder of operations:\n\n1. S2 writes datums for tids 1001-2000, 2001-3000.\nThe resulting leaves are:\nL1:\nlokey = 1 hikey = 2500\nfirsttid = 1001 lasttid = 2500\nL2:\nlokey = 2501 hikey = MaxZSTid\nfirsttid = 2501 lasttid = 3000\n\n2. 
S1 now writes datums for tids 1-1000.\nWe have to split L1 into L1' and L1''.\nL1':\nlokey = 1 hikey = 1500\nfirsttid = 1 lasttid = 1500\nL1'': [under-utilized page]\nlokey = 1501 hikey = 2000\nfirsttid = 1501 lasttid = 2000\nL2:\nlokey = 2501 hikey = MaxZSTid\nfirsttid = 2501 lasttid = 3000\n\nNote: The lokeys/hikeys reflect ranges of what CAN be inserted whereas\nfirsttid\nand lasttid reflect what actually have been inserted.\n\nL1'' will be an under-utilized page that is not going to be filled again\nbecause\nit inherits the tight hikey from L1. In this example, space wastage in L1''\nis\n66% but it could very easily be close to 100%, especially under concurrent\nworkloads which mixes single and multi-inserts, or even unequally sized\nmulti-inserts.\n\nSolution (kudos to Ashwin!):\n\nFor every multi-insert (and only multi-insert, not for singleton inserts),\nallocate N times more tids. Each session will keep these extra tids in a\nbuffer. Subsequent calls to multi-insert would use these buffered tids. If\nat\nany time a tid allocation request cannot be met by the remaining buffered\ntids,\na new batch of N times the number of tids requested will again be allocated.\n\nIf we take the same example above and say we allocated N=5 times the number\nof\ntids upon the first request for 1000 tids.:\n\nTID allocations:\nS1: 1-5000\nS2: 5001-10000\n\nOrder of operations:\n\n1. S2 writes datums for tids 5001-6000, 6001-7000.\nThe resulting leaves are:\nL1:\nlokey = 1 hikey = 6500\nfirsttid = 5001 lasttid = 6500\nL2:\nlokey = 6501 hikey = MaxZSTid\nfirsttid = 6501 lasttid = 7000\n\n2. 
S1 writes datums for tids 1-1000.\nL1 will be split into L1' and L1''.\n\nL1':\nlokey = 1 hikey = 5500\nfirsttid = 1 lasttid = 1000\nL1'' [under-utilized page]:\nlokey = 5501 hikey = 6500\nfirsttid = 5501 lasttid = 6500\nL2:\nlokey = 6501 hikey = MaxZSTid\nfirsttid = 6501 lasttid = 7000\n\nSubsequent inserts by S1 will land on L1' whose hikey isn't restrictive.\n\nHowever, we do end up with the inefficient page L1''. With a high enough\nvalue\nof N, we reduce the frequency of such pages. We could further reduce this\nwastage by incorporating a special left split (Since L1 was already full, we\ndon't change it at all -> we simply update it's lokey -> L1 becomes L1''\nand we\nfork of a new leaf to its left: L1'). This would look like:\n\nL1':\nlokey = 1 hikey = 5000\nfirsttid = 1 lasttid = 1000\n\nL1'':\nlokey = 5001 hikey = 6500\nfirsttid = 5001 lasttid = 6500\n\nWe found that with a high enough value of N, we did not get significant\nspace\nbenefits from the left split. Thus, we decided to only incorporate N.\n\nResults: [TPC-DS scale = 50, 16 conc copies]\n\nTable zed N=10 N=100 N=1000 heap\nzed(serial)\ncatalog_sales 15G 9.1G 7.7G 7.5G 15G\n8.0G\ncatalog_returns 1.5G 0.9G 0.7G 0.7G 1.2G\n 0.8G\nstore_returns 2.1G 1.2G 1.1G 1.1G 1.9G\n 1.2G\nstore_sales 17G 11G 10.1G 10.1G 21G 10G\n\nLoad time:\nN=10 30min\nN=100 10min\nN=1000 7min\nzed 100min\nheap 8min\n\n'zed' refers to the zedstore branch without our fix. We see that with N =\n10, we\nget closer to what we get with serial inserts. For N = 100, we even beat\nserial\ninsert.\nWe can attribute the differences in runtime to the fact that by lowering the\nnumber of tid range requests, we reduce the contention on the tid tree -\nwhich\nis a bottleneck for concurrent loads. 
A significant win!\n\nHow N relates to the other parameters in play:\n\nLet S be the number of concurrent sessions\nLet T be the average number of rows that a session wants to write in t sized\nmulti-insert batches\nLet A be the number of attributes\nNumber of times a session multi-inserts into the tid tree without buffered\nallocation = T/t\nNumber of times a session multi-inserts into the tid tree with buffered\nallocation = T/Nt\nTotal number of multi-inserts into the tid tree = Mt = ST/Nt\nAlso, total number of adverse insert cases where we could have bloat ∝ Mt\nSo, bloat ∝ Mt\nRun-time of a parallel data load ∝ Mt * A\nSo the guidance would be to increase N with the increase in S or in T (t\nwill\nbe relatively constant for a certain table - it is constrained by the size\nof a\nrow and the copy buffer) and also if the table is significantly wide.\nWe can see that it is difficult to provide a default to N, it really should\nbe\na GUC. Also, SINGLE_INSERT_TID_RESERVATION_THRESHOLD and\nSINGLE_INSERT_TID_RESERVATION_SIZE should be turned into GUCs. In our\nimplementation, we treat MULTI_INSERT_TID_RESERVATION_FACTOR = N. We leave\nthe\nGUC implementation for later.\n\nCost of killing the extra unused tids not consumed by multi-inserts:\n\nThe maximum number of tids that can be wasted (W) is capped at (tN - 1) *\nS. This is\nthe worst case: where the last tid allocation request only used 1 tid out\nof the\ntN tids it received and buffered for every session.\nSo average case ~ (tN /2) * S. Number of times the tid tree has to be\naccessed\nto delete these (tN/2) * S tids is S. So taking tid wastage into account,\non\naverage, number of accesses to the tid tree = Mt + W = ST/Nt +\nThus this additional cost of S, and thus cost of tid killing is not really\nsignificant.\n\nRegards,\nSoumyadeep & David",
"msg_date": "Fri, 14 Feb 2020 12:21:59 -0800",
"msg_from": "Soumyadeep Chakraborty <sochakraborty@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
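[Editor's illustration] The buffered tid allocation proposed in the message above can be sketched as a toy model. The class and method names below are hypothetical — this is not zedstore's actual C API — but the over-allocation behavior matches the N=5 example in the mail:

```python
class TidAllocator:
    """Stand-in for the shared tid-tree counter: hands out a
    contiguous range of tids and advances the high-water mark."""
    def __init__(self):
        self.next_tid = 1

    def allocate(self, n):
        start = self.next_tid
        self.next_tid += n
        return list(range(start, start + n))


class Session:
    """Per-backend state: on a multi-insert, over-allocate N times the
    requested tids and serve later requests from the local buffer."""
    def __init__(self, allocator, factor):
        self.allocator = allocator
        self.factor = factor          # the N from the mail
        self.buffered = []

    def multi_insert_tids(self, nrows):
        if len(self.buffered) < nrows:
            # Fewer trips to the contended shared tid tree:
            # grab N times more tids than this batch needs.
            self.buffered.extend(self.allocator.allocate(nrows * self.factor))
        tids, self.buffered = self.buffered[:nrows], self.buffered[nrows:]
        return tids
```

With factor=5, S1's first request for 1000 tids claims 1-5000, so S2's first request starts at 5001 — reproducing the tid ranges in the example, and cutting tid-tree accesses from T/t to T/Nt per session.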
{
"msg_contents": "Hello,\n\nOn Wed, Oct 30, 2019 at 3:34 PM Taylor Vesely <tvesely@pivotal.io> wrote:\n> Because zedstore resembled random I/O, the read speed was\n> significantly hindered on our single disk. As a result, we saw ~150x\n> slower read speeds.\n\nDeep and I have committed a fix for this. The root cause of this problem is\nthat attribute tree (and tid tree) pages are not contiguous enough on disk,\nespecially if we are loading data concurrently into the same table. The\neffect\nof the non-contiguity is especially felt on rotational disks and is\nmagnified\nby increasing the number of sessions that load the table concurrently.\n\nSince a base requirement for a column store is that blocks for a single\ncolumn\nbe physically adjacent (or nearly adjacent), we sought to optimize for this\nrequirement.\n\nWhat we have done is to introduce attribute-level free page maps (FPMs) and\nto\nbatch up relation extension requests. We have introduced a new reloption\nzedstore_rel_extension_factor: whenever we want to extend the relation by a\nsingle page for the tid/attribute tree, we extend it by\nzedstore_rel_extension_factor number of blocks. We return the one block\nrequested and prepend the extra blocks to the attribute-level FPM. This\nmakes\nthe blocks available to other concurrent backends and thus, in spite of\nbackend-interleaved flushes into the same attribute tree, we see more\ncontiguity of leaf blocks.\n\nWe reason about contiguity of blocks by making some major assumptions:\nWe consider that two blocks are near each other if they have block numbers\nthat\nare close to each other. 
(Refer: BufferGetBlockNumber())\nAlso we assume that if two successive relation extension requests would\nyield\nblocks with block numbers that are close to each other.\n\nRecycling of pages for attribute and tid tree also are now done at the\nattribute-tree level.\n\nExperiment results and methodology:\n\nMetric used to measure performance -> I/O read time reported by the “I/O\nTimings” field in: explain (analyze, buffers, timing, verbose) output with\nthe\ntrack_io_timing GUC on. Before every explain run, we restart the database to\nflush the buffers and clear the OS page cache.\n\nExperiment parameters: TPC-DS Scale = 270, table = store_sales, opt_level =\n-O2\n#parallel COPY sessions loading store_sales = 16.\nN = zedstore_rel_extension_factor\n\nGUCs used:\n\nshared_buffers: 10GB\nmax_wal_size: 1GB\ncheckpoint_flush_after: 1MB\nmax_parallel_workers: 8\nmax_parallel_maintenance_workers: 8\nmaintenance_work_mem: 4GB\nlog_statement: all\neffective_cache_size: 32GB\ntrack_io_timing: on\n\nFor rotational disks:\n\nQuery: select ss_sold_date_sk from store_sales;\nHeap: Table size = 112G. I/O time = 115s. Total exec time =\n212s\nZed (w/o fix): Table size = 59G. I/O time = 634s. Total exec time =\n730s\nZed (N=32): Table size = 59G. I/O time = 91s. Total exec time =\n175s\nZed (N=512): Table size = 59G. I/O time = 7s. Total exec time = 87s\nZed (N=4096): Table size = 59G. I/O time = 2.5s. Total exec time = 82s\n\nQuery: select * from store_sales;\nHeap: Table size = 112G. I/O time = 130s. Total exec time =\n214s\nZed (w/o fix): Table size = 59G. I/O time = 2401s. Total exec time =\n2813s\nZed (N=32): Table size = 59G. I/O time = 929s. Total exec time =\n1300s\nZed (N=512): Table size = 59G. I/O time = 485s. Total exec time =\n847s\nZed (N=4096): Table size = 59G. I/O time = 354s. Total exec time =\n716s\n\nWe also saw discernible differences in I/O time for scale = 50, table size\n= 10G\nfor Zedstore and 21G for heap. 
Results not reported for brevity.\n\nOur fix doesn't impact COPY performance, so we saw no difference in the time\ntaken to load the data into store_sales.\n\nFor NVMe SSDs:\nWe see no discernible differences in I/O times with and without the fix\n(performance for select * was slightly worse for N=4096). Here\nare some of the results:\n\nQuery: select ss_sold_date_sk from store_sales;\nHeap: Table size = 112G. I/O time = 59s. Total exec time = 123s\nZed (w/o fix): Table size = 59G. I/O time = 20s. Total exec time = 79s\nZed (N=4096): Table size = 59G. I/O time = 21s. Total exec time = 87s\n\nQuery: select * from store_sales;\nHeap: Table size = 112G. I/O time = 64s. Total exec time = 127s\nZed (w/o fix): Table size = 61G. I/O time = 449s. Total exec time = 757s\nZed (N=4096): Table size = 61G. I/O time = 487s. Total exec time = 812s\n\n\nAnalysis of fix:\n\nThe following query inspects the (block distance) absolute difference\nbetween\ntwo logically adjacent leaf blocks for the ss_sold_date_sk attribute of\nstore_sales. It shows us the distribution of the block distances in the\nss_sold_date_sk attribute tree. Output is limited for brevity.\n\nwith blk_dist(dist) as (select abs(nextblk - blkno) as dist from\npg_zs_btree_pages('store_sales'::regclass) where attno=1 and level=0 and\nnextblk != 4294967295)\nselect dist, count(dist) as cnt from blk_dist group by\ndist order by cnt desc limit 5;\n\nW/o fix: #parallel_copies=16,\nW/ fix: #parallel_copies=16, extension_factor=16\n\nW/o fix W/ fix\n\ndist | cnt dist | cnt\n-----+----- -----+------\n 25 | 89 1 | 3228\n 26 | 83 2 | 3192\n 23 | 78 3 | 2664\n 1 | 75 4 | 2218\n 29 | 74 5 | 1866\n\nWe can see that by increasing zedstore_rel_extension_factor, we end up with\na high number of lower block distances.\n\n\nImplications of fix:\n\n1. 
We have to keep track of the FPM heads for the attribute/tid trees in the\nmeta-page, and since we don't have an extensible meta-page yet, we further\nlimit\nthe number of columns Zedstore can support. We will get around to it\neventually.\n\n2. Worst case extra space wasted on disk from extra free pages that could\nlinger\nafter a bulk load = zedstore_rel_extension_factor * #attributes * 8192\nbytes.\n\nFor zedstore_rel_extension_factor = 16, #attributes = 23:\nwastage = 16*24*8192/1024/1024 = 3M\nFor zedstore_rel_extension_factor = 4096, #attributes = 23:\nwastage = 4096*24*8192/1024/1024 = 768M\n\nNote: The free pages left behind can of course, be used by subsequent\noperations\non the table.\n\nIn conclusion, increasing zedstore_rel_extension_factor for a wide table may\nlead to bloating of the relfile. The percentage of bloat would also be\nmagnified\nif the table doesn't have a lot of data.\n\n3. Amount of extra WAL being written (since we are placing/removing the\nextra\nblocks on the FPMs, something we never did without this fix) is independent\nof\nzedstore_rel_extension_factor and we found that we had written\napproximately 14M\nextra WAL for every 1G relfile.\n\nGuidance on setting zedstore_rel_extension_factor:\n\nUsers should set a high zedstore_rel_extension_factor, when they are loading\ndata on rotational disks, with/without a high degree of concurrency and when\nthey have significant data size.\n\nAttached is a patch with our changes: [1]\nAlso attached is a rebased version of Zedstore on latest PG master. [2]\nGithub branch for Zedstore: [3]\n\n[1] 0001-Attribute-level-FPMs-and-rel-extension-batching.patch\n[2] v4-zedstore.patch\n[3] https://github.com/greenplum-db/postgres/tree/zedstore\n\n--\nAlex & Deep",
"msg_date": "Mon, 30 Mar 2020 13:00:05 -0700",
"msg_from": "Alexandra Wang <lewang@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
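[Editor's illustration] The batched relation extension with per-attribute free page maps described above can be sketched as a toy model (hypothetical names; the real implementation works on 8K blocks, WAL-logs FPM changes, and tracks FPM heads in the meta-page):

```python
class Relation:
    """Toy relation file; blocks are numbered in extension order, and
    we assume nearby block numbers mean nearby physical storage."""
    def __init__(self):
        self.nblocks = 0

    def extend(self, n):
        start = self.nblocks
        self.nblocks += n
        return list(range(start, start + n))


class AttributeFPM:
    """Per-attribute free page map: when empty, extend the relation by
    extension_factor blocks at once and keep the surplus, so one
    attribute's pages stay contiguous despite concurrent writers."""
    def __init__(self, rel, extension_factor):
        self.rel = rel
        self.extension_factor = extension_factor  # zedstore_rel_extension_factor
        self.free_blocks = []

    def get_block(self):
        if not self.free_blocks:
            self.free_blocks = self.rel.extend(self.extension_factor)
        return self.free_blocks.pop(0)


rel = Relation()
attr_a, attr_b = AttributeFPM(rel, 4), AttributeFPM(rel, 4)
a_blocks, b_blocks = [], []
for _ in range(4):                  # interleaved flushes, as under COPY
    a_blocks.append(attr_a.get_block())
    b_blocks.append(attr_b.get_block())
assert a_blocks == [0, 1, 2, 3]     # contiguous per attribute
assert b_blocks == [4, 5, 6, 7]
```

With extension_factor=1 the same interleaved flushes would yield blocks 0, 2, 4, 6 for one attribute and 1, 3, 5, 7 for the other — exactly the block-distance pattern the pg_zs_btree_pages analysis above shows without the fix.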
{
"msg_contents": "Hello,\n\nWe (Jacob and me) have an update for this thread.\n\n1. We recently made some improvements to the table AM APIs for fetching\na single row (tuple_fetch_row_version()) and locking (and fetching) a\ntuple (tuple_lock()), such that they could take a set of columns. We\nextracted these columns at plan time and in some cases, executor time.\nThe changes are in the same spirit as some column-oriented changes that\nare already a part of Zedstore - namely the ability to pass a set of\ncolumns to sequential and index scans among other operations.\n\nWe observed that the two table AM functions are called in contexts\nwhich don't need the entire set of columns to be populated in the\noutput TupleTableSlots associated with these APIs. For instance, in\nDELETE RETURNING, we don't need to fetch all of the columns, just the\nones in the RETURNING clause.\n\nWe saw improvements (see results attached) for a variety of tests - we\nadded a bunch of tests in our storageperf test suite to test these\ncases. We don't see a performance improvement for UPSERT and ON CONFLICT\nDO NOTHING as there is an index lookup pulling in the entire row\npreceding the call to table_tuple_lock() in both these cases. We do\nsee significant improvements (~3x) for DELETE RETURNING and row-level\nlocking and around a ~25x improvement in TidScan runtime.\nPlease refer to src/test/storageperf for the storageperf test suite.\n\n2. We absorbed the scanCols patch [1], replacing some of the existing\nexecutor-level column extraction for scans with the scanCols populated\nduring planning as in [1].\n\n3. We also merged Zedstore upto PG 14 commit: efc5dcfd8a\nPFA the latest version of the Zedstore patch.\n\nRegards,\n\nJacob and Soumyadeep\n\n\n[1] https://www.postgresql.org/message-id/flat/CAAKRu_YxyYOCCO2e83UmHb51sky1hXgeRzQw-PoqT1iHj2ZKVg%40mail.gmail.com#681a254981e915805aec2aea9ea9caf4",
"msg_date": "Tue, 10 Nov 2020 16:13:17 -0800",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Hi,\n\nThanks for the updated patch. It's a quite massive amount of code - I I\ndon't think we had many 2MB patches in the past, so this is by no means\na full review.\n\n1) the psql_1.out is missing a bit of expected output (due to 098fb0079)\n\n2) I'm getting crashes in intarray contrib, due to hitting this error in\nlwlock.c (backtrace attached):\n\n\t/* Ensure we will have room to remember the lock */\n\tif (num_held_lwlocks >= MAX_SIMUL_LWLOCKS)\n\t\telog(ERROR, \"too many LWLocks taken\");\n\nI haven't investigates this too much, but it's regular build with\nasserts and TAP tests, so it should be simple to reproduce using \"make\ncheck-world\" I guess.\n\n\n3) I did a very simple benchmark, loading a TPC-H data (for 75GB),\nfollowed by pg_dump, and the duration (in seconds) looks like this:\n\n master zedstore/pglz zedstore/lz4\n -------------------------------------------------\n copy 1855 68092 2131\n dump 751 905 811\n\nAnd the size of the lineitem table (as shown by \\d+) is:\n\n master: 64GB\n zedstore/pglz: 51GB\n zedstore/lz4: 20GB\n\nIt's mostly expected lz4 beats pglz in performance and compression\nratio, but this seems a bit too extreme I guess. Per past benchmarks\n(e.g. [1] and [2]) the difference in compression/decompression time\nshould be maybe 1-2x or something like that, not 35x like here.\n\n[1]\nhttps://www.postgresql.org/message-id/20130621000900.GA12425%40alap2.anarazel.de\n\n[2]\nhttps://www.postgresql.org/message-id/20130605150144.GD28067%40alap2.anarazel.de\n\nFurthermore, the pglz compression is not consuming the most CPU, at\nleast that's what perf says:\n\n 24.82% postgres [.] encode_chunk_varlen\n 20.49% postgres [.] decode_chunk\n 13.01% postgres [.] merge_attstream_guts.isra.0\n 12.68% libc-2.32.so [.] __memmove_avx_unaligned_erms\n 8.72% postgres [.] encode_chunk_fixed\n 6.16% postgres [.] pglz_compress\n 4.36% postgres [.] decode_attstream_cont\n 2.27% postgres [.] 0x00000000000baff0\n 1.84% postgres [.] 
AllocSetAlloc\n 0.79% postgres [.] append_attstream\n 0.70% postgres [.] palloc\n\nSo I wonder if this is a sign of a deeper issue - maybe the lower\ncompression ratio (for pglz) triggers some sort of feedback loop in\nzedstore, or something like that? Not sure, but this seems strange.\n\n\n4) I looked at some of the code, like merge_attstream etc. and I wonder\nif this might be related to some of the FIXME comments. For example this\nbit in merge_attstream seems interesting:\n\n * FIXME: we don't actually pay attention to the compression anymore.\n * We never repack.\n * FIXME: this is backwords, the normal fast path is if (firsttid1 >\nlasttid2)\n\nBut I suppose that should affect both pglz and lz4, and I'm not sure how\nup to date those comments actually are.\n\nBTW the comments in general need updating and tidying up, to make\nreviews easier. For example the merge_attstream comment references\nattstream1 and attstream2, but those are not the current parameters of\nthe function.\n\n\n5) IHMO there should be a #define specifying the maximum number of items\nper chunk (60). Currently there are literal constants used in various\nplaces, sometimes 60, sometimes 59 etc. which makes it harder to\nunderstand the code. FWIW 60 seems a bit low, but maybe it's OK.\n\n\n6) I do think ZSAttStream should track which compression is used by the\nstream, for two main reasons. Firstly, there's another patch to support\n\"custom compression\" methods, which (also) allows multiple compression\nmethods per column. It'd be a bit strange to support that for varlena\ncolumns in heap table, and not here, I guess. 
Secondly, I think one of\nthe interesting columnstore features down the road will be execution on\ncompressed data, which however requires compression method designed for\nthat purpose, and it's often datatype-specific (delta encoding, ...).\n\nI don't think we need to go as far as supporting \"custom\" compression\nmethods here, but I think we should allow different built-in compression\nmethods for different attstreams.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 12 Nov 2020 23:40:30 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
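[Editor's illustration] Point 6 of the review above — recording the compression method with each attstream so different streams can use different built-in methods — can be sketched like this. Python toy only, with zlib standing in for pglz/lz4; ZSAttStream's actual on-disk layout differs and the field names are hypothetical:

```python
import zlib
from dataclasses import dataclass

COMPRESS_NONE, COMPRESS_ZLIB = 0, 1   # per-stream method ids (hypothetical)

@dataclass
class AttStream:
    compression_id: int   # stored with the stream, as the review suggests
    payload: bytes

def pack(data: bytes, method: int) -> AttStream:
    if method == COMPRESS_ZLIB:
        return AttStream(COMPRESS_ZLIB, zlib.compress(data))
    return AttStream(COMPRESS_NONE, data)

def unpack(stream: AttStream) -> bytes:
    # Readers dispatch on the stored id, so two attstreams in the same
    # table can be compressed with different built-in methods.
    if stream.compression_id == COMPRESS_ZLIB:
        return zlib.decompress(stream.payload)
    return stream.payload

data = b"zedstore " * 100
for method in (COMPRESS_NONE, COMPRESS_ZLIB):
    assert unpack(pack(data, method)) == data
```

A reserved flags/version field next to the id, as suggested later in the thread, would leave room for future format changes without scavenging bits.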
{
"msg_contents": "On Nov 12, 2020, at 2:40 PM, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n> Hi,\n> \n> Thanks for the updated patch. It's a quite massive amount of code - I I\n> don't think we had many 2MB patches in the past, so this is by no means\n> a full review.\n\nThanks for taking a look! You're not kidding about the patch size.\n\nFYI, the tableam changes made recently have been extracted into their\nown patch, which is up at [1].\n\n> 1) the psql_1.out is missing a bit of expected output (due to 098fb0079)\n\nYeah, this patch was rebased as of efc5dcfd8a.\n\n> 2) I'm getting crashes in intarray contrib, due to hitting this error in\n> lwlock.c (backtrace attached):\n> \n> \t/* Ensure we will have room to remember the lock */\n> \tif (num_held_lwlocks >= MAX_SIMUL_LWLOCKS)\n> \t\telog(ERROR, \"too many LWLocks taken\");\n> \n> I haven't investigates this too much, but it's regular build with\n> asserts and TAP tests, so it should be simple to reproduce using \"make\n> check-world\" I guess.\n\nI've only seen this intermittently in installcheck, and I'm not able to\nreproduce with the intarray tests on my machine (macOS). Definitely\nsomething we need to look into. What OS are you testing on?\n\n> It's mostly expected lz4 beats pglz in performance and compression\n> ratio, but this seems a bit too extreme I guess. Per past benchmarks\n> (e.g. [1] and [2]) the difference in compression/decompression time\n> should be maybe 1-2x or something like that, not 35x like here.\n\nYeah, something seems off about that. We'll take a look.\n\n> BTW the comments in general need updating and tidying up, to make\n> reviews easier. For example the merge_attstream comment references\n> attstream1 and attstream2, but those are not the current parameters of\n> the function.\n\nAgreed.\n\n> 5) IHMO there should be a #define specifying the maximum number of items\n> per chunk (60). 
Currently there are literal constants used in various\n> places, sometimes 60, sometimes 59 etc. which makes it harder to\n> understand the code. FWIW 60 seems a bit low, but maybe it's OK.\n\nYeah, that seems like a good idea.\n\nI think the value 60 comes from the use of simple-8b encoding -- see the\ncomment at the top of zedstore_attstream.c.\n\n> 6) I do think ZSAttStream should track which compression is used by the\n> stream, for two main reasons. Firstly, there's another patch to support\n> \"custom compression\" methods, which (also) allows multiple compression\n> methods per column. It'd be a bit strange to support that for varlena\n> columns in heap table, and not here, I guess. Secondly, I think one of\n> the interesting columnstore features down the road will be execution on\n> compressed data, which however requires compression method designed for\n> that purpose, and it's often datatype-specific (delta encoding, ...).\n> \n> I don't think we need to go as far as supporting \"custom\" compression\n> methods here, but I think we should allow different built-in compression\n> methods for different attstreams.\n\nInteresting. We'll need to read/grok that ML thread.\n\nThanks again for the review!\n\n--Jacob\n\n[1] https://www.postgresql.org/message-id/CAE-ML%2B9RmTNzKCNTZPQf8O3b-UjHWGFbSoXpQa3Wvuc8YBbEQw%40mail.gmail.com\n\n\n\n",
"msg_date": "Fri, 13 Nov 2020 19:07:38 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On 11/13/20 8:07 PM, Jacob Champion wrote:\n> On Nov 12, 2020, at 2:40 PM, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Hi,\n>>\n>> Thanks for the updated patch. It's a quite massive amount of code - I I\n>> don't think we had many 2MB patches in the past, so this is by no means\n>> a full review.\n> \n> Thanks for taking a look! You're not kidding about the patch size.\n> \n> FYI, the tableam changes made recently have been extracted into their\n> own patch, which is up at [1].\n> \n>> 1) the psql_1.out is missing a bit of expected output (due to 098fb0079)\n> \n> Yeah, this patch was rebased as of efc5dcfd8a.\n> \n>> 2) I'm getting crashes in intarray contrib, due to hitting this error in\n>> lwlock.c (backtrace attached):\n>>\n>> \t/* Ensure we will have room to remember the lock */\n>> \tif (num_held_lwlocks >= MAX_SIMUL_LWLOCKS)\n>> \t\telog(ERROR, \"too many LWLocks taken\");\n>>\n>> I haven't investigates this too much, but it's regular build with\n>> asserts and TAP tests, so it should be simple to reproduce using \"make\n>> check-world\" I guess.\n> \n> I've only seen this intermittently in installcheck, and I'm not able to\n> reproduce with the intarray tests on my machine (macOS). Definitely\n> something we need to look into. What OS are you testing on?\n> \n\nFedora 32, nothing special. I'm not sure if I ran the tests with pglz or\nlz4, maybe there's some dependence on that, but it does fail for me\nquite reliably with this:\n\n./configure --enable-debug --enable-cassert --enable-tap-tests\n--with-lz4 && make -s clean && make -s -j4 && make check-world\n\n>> It's mostly expected lz4 beats pglz in performance and compression\n>> ratio, but this seems a bit too extreme I guess. Per past benchmarks\n>> (e.g. [1] and [2]) the difference in compression/decompression time\n>> should be maybe 1-2x or something like that, not 35x like here.\n> \n> Yeah, something seems off about that. 
We'll take a look.\n> \n>> BTW the comments in general need updating and tidying up, to make\n>> reviews easier. For example the merge_attstream comment references\n>> attstream1 and attstream2, but those are not the current parameters of\n>> the function.\n> \n> Agreed.\n> \n>> 5) IHMO there should be a #define specifying the maximum number of items\n>> per chunk (60). Currently there are literal constants used in various\n>> places, sometimes 60, sometimes 59 etc. which makes it harder to\n>> understand the code. FWIW 60 seems a bit low, but maybe it's OK.\n> \n> Yeah, that seems like a good idea.\n> \n> I think the value 60 comes from the use of simple-8b encoding -- see the\n> comment at the top of zedstore_attstream.c.\n> \n\nYeah, I understand where it comes from. I'm just saying that when you\nsee 59 hardcoded, it may not be obvious where it came from, and\nsomething like ITEMS_PER_CHUNK would be better.\n\nI wonder how complicated would it be to allow larger chunks, e.g. by\nusing one bit to say \"there's another 64-bit codeword\". Not sure if it's\nworth the extra complexity, though - it's just that 60 feels a bit low.\n\n>> 6) I do think ZSAttStream should track which compression is used by the\n>> stream, for two main reasons. Firstly, there's another patch to support\n>> \"custom compression\" methods, which (also) allows multiple compression\n>> methods per column. It'd be a bit strange to support that for varlena\n>> columns in heap table, and not here, I guess. Secondly, I think one of\n>> the interesting columnstore features down the road will be execution on\n>> compressed data, which however requires compression method designed for\n>> that purpose, and it's often datatype-specific (delta encoding, ...).\n>>\n>> I don't think we need to go as far as supporting \"custom\" compression\n>> methods here, but I think we should allow different built-in compression\n>> methods for different attstreams.\n> \n> Interesting. 
We'll need to read/grok that ML thread.\n> \n\nThat thread is a bit long, not sure it's worth reading as a whole unless\nyou want to work on that feature. The gist is that to seamlessly support\nmultiple compression algorithms we need to store an ID of the algorithm\nsomewhere. For TOAST that's not too difficult, we can do that in the\nTOAST pointer - the main challenge is in doing it in a\nbackwards-compatible way. For zedstore we can actually design it from\nthe start.\n\nI wonder if we should track version of the format somewhere, to allow\nfuture improvements. So that if/when we decide to change something in\nthe future, we don't have to scavenge bits etc. Or perhaps just a\n\"uint32 flags\" field, unused/reserved for future use.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 13 Nov 2020 23:00:56 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 4:40 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> master zedstore/pglz zedstore/lz4\n> -------------------------------------------------\n> copy 1855 68092 2131\n> dump 751 905 811\n>\n> And the size of the lineitem table (as shown by \\d+) is:\n>\n> master: 64GB\n> zedstore/pglz: 51GB\n> zedstore/lz4: 20GB\n>\n> It's mostly expected lz4 beats pglz in performance and compression\n> ratio, but this seems a bit too extreme I guess. Per past benchmarks\n> (e.g. [1] and [2]) the difference in compression/decompression time\n> should be maybe 1-2x or something like that, not 35x like here.\n\nI can't speak to the ratio, but in basic backup/restore scenarios pglz\nis absolutely killing me; Performance is just awful; we are cpubound\nin backups throughout the department. Installations defaulting to\nplgz will make this feature show very poorly.\n\nmerlin\n\n\n",
"msg_date": "Mon, 16 Nov 2020 06:59:23 -0600",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "\nOn 11/16/20 1:59 PM, Merlin Moncure wrote:\n> On Thu, Nov 12, 2020 at 4:40 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> master zedstore/pglz zedstore/lz4\n>> -------------------------------------------------\n>> copy 1855 68092 2131\n>> dump 751 905 811\n>>\n>> And the size of the lineitem table (as shown by \\d+) is:\n>>\n>> master: 64GB\n>> zedstore/pglz: 51GB\n>> zedstore/lz4: 20GB\n>>\n>> It's mostly expected lz4 beats pglz in performance and compression\n>> ratio, but this seems a bit too extreme I guess. Per past benchmarks\n>> (e.g. [1] and [2]) the difference in compression/decompression time\n>> should be maybe 1-2x or something like that, not 35x like here.\n> \n> I can't speak to the ratio, but in basic backup/restore scenarios pglz\n> is absolutely killing me; Performance is just awful; we are cpubound\n> in backups throughout the department. Installations defaulting to\n> plgz will make this feature show very poorly.\n> \n\nMaybe. I'm not disputing that pglz is considerably slower than lz4, but\njudging by previous benchmarks I'd expect the compression to be slower\nmaybe by a factor of ~2x. So the 30x difference is suspicious. Similarly\nfor the compression ratio - lz4 is great, but it seems strange it's 1/2\nthe size of pglz. Which is why I'm speculating that something else is\ngoing on.\n\nAs for the \"plgz will make this feature show very poorly\" I think that\ndepends. I think we may end up with pglz doing pretty well (compared to\nheap), but lz4 will probably outperform that. OTOH for various use cases\nit may be more efficient to use something else with worse compression\nratio, but allowing execution on compressed data, etc.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 16 Nov 2020 17:07:29 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Mon, Nov 16, 2020 at 10:07 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n> On 11/16/20 1:59 PM, Merlin Moncure wrote:\n> > On Thu, Nov 12, 2020 at 4:40 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >> master zedstore/pglz zedstore/lz4\n> >> -------------------------------------------------\n> >> copy 1855 68092 2131\n> >> dump 751 905 811\n> >>\n> >> And the size of the lineitem table (as shown by \\d+) is:\n> >>\n> >> master: 64GB\n> >> zedstore/pglz: 51GB\n> >> zedstore/lz4: 20GB\n> >>\n> >> It's mostly expected lz4 beats pglz in performance and compression\n> >> ratio, but this seems a bit too extreme I guess. Per past benchmarks\n> >> (e.g. [1] and [2]) the difference in compression/decompression time\n> >> should be maybe 1-2x or something like that, not 35x like here.\n> >\n> > I can't speak to the ratio, but in basic backup/restore scenarios pglz\n> > is absolutely killing me; Performance is just awful; we are cpubound\n> > in backups throughout the department. Installations defaulting to\n> > plgz will make this feature show very poorly.\n> >\n>\n> Maybe. I'm not disputing that pglz is considerably slower than lz4, but\n> judging by previous benchmarks I'd expect the compression to be slower\n> maybe by a factor of ~2x. So the 30x difference is suspicious. Similarly\n> for the compression ratio - lz4 is great, but it seems strange it's 1/2\n> the size of pglz. Which is why I'm speculating that something else is\n> going on.\n>\n> As for the \"plgz will make this feature show very poorly\" I think that\n> depends. I think we may end up with pglz doing pretty well (compared to\n> heap), but lz4 will probably outperform that. OTOH for various use cases\n> it may be more efficient to use something else with worse compression\n> ratio, but allowing execution on compressed data, etc.\n\nhm, you might be right. 
Doing some number crunching, I'm getting\nabout 23mb/sec compression on a 600gb backup image on a pretty typical\naws server. That's obviously not great, but your numbers are much\nworse than that, so maybe something else might be going on.\n\n> I think we may end up with pglz doing pretty well (compared to heap)\n\nI *don't* think so, or at least I'm skeptical as long as insertion\ntimes are part of the overall performance measurement. Naturally,\nwith column stores, insertion times are often very peripheral to the\noverall performance picture but for cases that aren't I suspect the\nresults are not going to be pleasant, and advise planning accordingly.\n\nAside, I am very interested in this work. I may be able to support\ntesting in an enterprise environment; lmk if interested -- thank you\n\nmerlin\n\n\n",
"msg_date": "Mon, 16 Nov 2020 10:51:40 -0600",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Nov 13, 2020, at 2:00 PM, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n> Fedora 32, nothing special. I'm not sure if I ran the tests with pglz or\n> lz4, maybe there's some dependence on that, but it does fail for me\n> quite reliably with this:\n> \n> ./configure --enable-debug --enable-cassert --enable-tap-tests\n> --with-lz4 && make -s clean && make -s -j4 && make check-world\n\nI'm not sure what I messed up the first time, but I am able to reproduce\nreliably now, with and without lz4. It looks like we have a workaround\nin place that significantly increases the number of simultaneous locks\nacquired during indexing:\n\n #define XLR_MAX_BLOCK_ID\t\t\t199\n\nSo that's in need of resolution. I'd expect gin and gist to be pretty\nflaky until we fix that.\n\n--Jacob\n\n",
"msg_date": "Wed, 18 Nov 2020 00:31:04 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Wed, 18 Nov 2020 at 00:31, Jacob Champion <pchampion@vmware.com> wrote:\n>\n> So that's in need of resolution. I'd expect gin and gist to be pretty\n> flaky until we fix that.\n\nJacob and Soumyadeep,\n\nThanks for submitting this. I think a fix is still outstanding? and\nthe patch fails to apply on HEAD in two places.\nPlease can you submit the next version?\n\nDo you mind if we add this for review to the Jan CF?\n\nIt is a lot of code and I think there is significant difficulty for\nthe community to accept that as-is, even though it looks to be a very\nhigh quality submission. So I would like to suggest a strategy for\ncommit: we accept Zedstore as \"Beta\" or \"Experimental\" in PG14,\nperhaps with a WARNING/Caution similar to the one that used to be\ngiven by Postgres in earlier versions when you created a Hash index.\nWe keep Zedstore in \"Beta\" mode until a later release, PG15 or later\nwhen we can declare Zedstore fully safe. That approach allows us to\nget this into the repo asap, and then be fixed and improved\nincrementally from here.\n\ne.g.\n\n\"NOTICE: Caution: Zedstore is an experimental feature in PostgreSQL14\nintended for robustness and performance testing only. Your data and/or\nquery accuracy may be at risk if you rely on this.\"\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 31 Dec 2020 14:22:03 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "On Dec 31, 2020, at 9:22 AM, Simon Riggs <simon@2ndquadrant.com<mailto:simon@2ndquadrant.com>> wrote:\r\n\r\nOn Wed, 18 Nov 2020 at 00:31, Jacob Champion <pchampion@vmware.com<mailto:pchampion@vmware.com>> wrote:\r\n\r\nSo that's in need of resolution. I'd expect gin and gist to be pretty\r\nflaky until we fix that.\r\n\r\nJacob and Soumyadeep,\r\n\r\nThanks for submitting this. I think a fix is still outstanding? and\r\nthe patch fails to apply on HEAD in two places.\r\nPlease can you submit the next version?\r\n\r\nDo you mind if we add this for review to the Jan CF?\r\n\r\nIt is a lot of code and I think there is significant difficulty for\r\nthe community to accept that as-is, even though it looks to be a very\r\nhigh quality submission. So I would like to suggest a strategy for\r\ncommit: we accept Zedstore as \"Beta\" or \"Experimental\" in PG14,\r\nperhaps with a WARNING/Caution similar to the one that used to be\r\ngiven by Postgres in earlier versions when you created a Hash index.\r\nWe keep Zedstore in \"Beta\" mode until a later release, PG15 or later\r\nwhen we can declare Zedstore fully safe. That approach allows us to\r\nget this into the repo asap, and then be fixed and improved\r\nincrementally from here.\r\n\r\nThe goal for Zedstore is to get a Column Store into Postgres, but not necessarily Zedstore. (Zedstore itself would be nice) When designing Zedstore success for us would be:\r\n- significantly more performant on OLAP type queries,\r\n- performant enough to not be terrible with OLTP type queries\r\n- must support compression\r\n- cannot be append only, this was the case initially with Greenplum Column Store and it was a mistake. 
Customers want to update and delete\r\n- it needs to be feature complete as compared to HEAP unless it doesn’t make sense\r\n\r\nOur initial goal is to get the TableAM and executor molded into a state where the above is possible for anyone wanting a column store implementation.\r\n\r\nGiven the goal of addressing API/Executor issues generically first, we have been trying to peel off and work on the parts that are not tightly linked to Zedstore. Specifically I don’t think it would be ok to merge Zedstore into core when it might affect the performance of HEAP relations.\r\n\r\nInstead of focusing on the larger, more difficult to review Zedstore patch, we are trying to peel off the touch points where Zedstore and the current server interact. Note this isn’t intended to be an exhaustive list, rather a list of the most immediate issues. Some of these issues are critical for Zedstore to work, i.e. column projection, while some of these issues point more towards ensuring the various layers in the code are clean so that folks leveraging the TableAM don’t need to write their own bits from whole cloth but rather can leverage appropriately generic primitives, i.e. DBsize or page inspect.\r\n\r\nAs such, an incomplete list of things currently on our radar:\r\n\r\n1) Column Projection — We have a patch [1] that is a demonstration of what we would like to do. 
There are several TODOs in the email that can/will be addressed if the general method is acceptable\r\n\r\n2) DBSize —Georgios has a patch [2] that begins to make DBSize less HEAP specific\r\n\r\n3) Reloptions —Jeff Davis has a patch [3] that begins to make these more flexible, having spoken with him we think additional work needs to be done here\r\n\r\n4) PageInspect —needs to be less HEAP specific but no work has been done here that I’m aware of\r\n\r\n5) bitmapHeapScan —currently scans both the index and the relation, there are code comments to address this and we need to look into what a fix would mean\r\n\r\n6) Bulk insertion —Justin Pryzby has a patch [4] we are following along with.\r\n\r\n7) analyze — Denis has a patch which starts to address this [5]\r\n\r\nIdeally we can peel out anything that is useful to any column store. Once those have been discussed and committed the general code should be in better shape as well.\r\n\r\n— Rob\r\n\r\n\r\n[1] https://www.postgresql.org/message-id/flat/CAE-ML+9RmTNzKCNTZPQf8O3b-UjHWGFbSoXpQa3Wvuc8YBbEQw@mail.gmail.com\r\n[2] https://www.postgresql.org/message-id/flat/svffVJPtfDYEIISNS-3FQs64CauSul3RjF7idXOfy4H40YBVwB3TMumHb6WoAElJpHOsN-j8fjxYohEt4VxcsJ0Qd9gizwzsY3rjgtjj440=@pm.me\r\n[3] https://www.postgresql.org/message-id/flat/429fb58fa3218221bb17c7bf9e70e1aa6cfc6b5d.camel@j-davis.com\r\n[4] https://www.postgresql.org/message-id/flat/20200508072545.GA9701@telsasoft.com\r\n[5] https://www.postgresql.org/message-id/flat/C7CFE16B-F192-4124-BEBB-7864285E0FF7@arenadata.io\r\n\r\n\r\n\r\n\r\ne.g.\r\n\r\n\"NOTICE: Caution: Zedstore is an experimental feature in PostgreSQL14\r\nintended for robustness and performance testing only. 
Your data and/or\r\nquery accuracy may be at risk if you rely on this.\"\r\n\r\n--\r\nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Thu, 7 Jan 2021 16:20:48 +0000",
"msg_from": "Robert Eckhardt <eckhardtr@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
},
{
"msg_contents": "Greetings.\n\nThanks for the project. I see the code in github has not been updated for\na long time, is it still in active development?\n\nThanks\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\n\n",
"msg_date": "Mon, 12 Jul 2021 23:42:35 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zedstore - compressed in-core columnar storage"
}
] |
[
{
"msg_contents": "Continuing the discussion at:\nhttps://www.postgresql.org/message-id/26571.1554741097%40sss.pgh.pa.us\n\nTom wrote:\n> It struck me just as I was pushing it that this test doesn't exercise\n> EPQ with any of the interesting cases for partition routing (ie where\n> the update causes a move to a different partition). It would likely\n> be a good idea to have test coverage for all of these scenarios:\n>\n> * EPQ where the initial update would involve a partition change,\n> and that's still true after reapplying the update to the\n> concurrently-updated tuple version;\n>\n> * EPQ where the initial update would *not* require a partition change,\n> but we need one after reapplying the update to the\n> concurrently-updated tuple version;\n>\n> * EPQ where the initial update would involve a partition change,\n> but that's no longer true after reapplying the update to the\n> concurrently-updated tuple version.\n\nPer what Andres mentioned in his reply on the original thread [1], in\nscenarios 1 and 2 where the 1st session's update causes a row to move,\nsession 2 produces the following error when trying to update the same row:\n\nERROR: tuple to be locked was already moved to another partition due to\nconcurrent update\n\nDo we want those tests like that (with the error that is) in the\neval-plan-qual isolation suite?\n\nI came up with the attached.\n\nThanks,\nAmit\n\n[1]\nhttps://www.postgresql.org/message-id/20190408164138.izvfg2czwcofg5ev%40alap3.anarazel.de",
"msg_date": "Tue, 9 Apr 2019 18:19:49 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "more isolation tests for update tuple routing"
},
{
"msg_contents": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> Per what Andres mentioned in his reply on the original thread [1], in\n> scenarios 1 and 2 where the 1st session's update causes a row to move,\n> session 2 produces the following error when trying to update the same row:\n> ERROR: tuple to be locked was already moved to another partition due to\n> concurrent update\n\n> Do we want those tests like that (with the error that is) in the\n> eval-plan-qual isolation suite?\n\nSure, but I think one such test is enough.\n\n> I came up with the attached.\n\nI changed the last case so it actually did what I had in mind\n(initial state of the update would be a partition move, but after\nfetching up-to-date tuple it isn't) and pushed it. Thanks for\ndoing the legwork!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Apr 2019 11:45:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: more isolation tests for update tuple routing"
},
{
"msg_contents": "On 2019/04/10 0:45, Tom Lane wrote:\n> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n>> Per what Andres mentioned in his reply on the original thread [1], in\n>> scenarios 1 and 2 where the 1st session's update causes a row to move,\n>> session 2 produces the following error when trying to update the same row:\n>> ERROR: tuple to be locked was already moved to another partition due to\n>> concurrent update\n> \n>> Do we want those tests like that (with the error that is) in the\n>> eval-plan-qual isolation suite?\n> \n> Sure, but I think one such test is enough.\n> \n>> I came up with the attached.\n> \n> I changed the last case so it actually did what I had in mind\n> (initial state of the update would be a partition move, but after\n> fetching up-to-date tuple it isn't) and pushed it. Thanks for\n> doing the legwork!\n\nThank you.\n\nRegards,\nAmit\n\n\n\n",
"msg_date": "Wed, 10 Apr 2019 09:34:43 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: more isolation tests for update tuple routing"
}
] |
[
{
"msg_contents": "Dear PostgreSQL community,\n\nI am a GSoC 2019 applicant and am working on 'WAL-G safety features'. I\nhave finished an initial draft of my proposal and I would appreciate your\ncomments and advice on my proposal. I know it is pretty late for the\nimprovement of my proposal, but I will be glad to join in the project this\nsummer even without GSoC! Please help me make my proposal and ideas better.\nThank you!\n\nThe link is\nhttps://docs.google.com/document/d/18cxbj1zId1BpMjgUkZ0MZgb1HdMg1S9h0U1WecZON3U/edit?usp=sharing\n\nSincerely,\nZhichao Liu",
"msg_date": "Tue, 9 Apr 2019 09:20:05 -0400",
"msg_from": "Zhichao Liu <zcliu@cs.umd.edu>",
"msg_from_op": true,
"msg_subject": "GSOC 2019 proposal 'WAL-G safety features'"
},
{
"msg_contents": "Hi!\n\n> 9 апр. 2019 г., в 18:20, Zhichao Liu <zcliu@cs.umd.edu> написал(а):\n> \n> Dear PostgreSQL community,\n> \n> I am a GSoC 2019 applicant and am working on 'WAL-G safety features'. I have finished an initial draft of my proposal and I would appreciate your comments and advice on my proposal. I know it is pretty late for the improvement of my proposal, but I will be glad to join in the project this summer even without GSoC! Please help me make my proposal and ideas better.\n> Thank you!\n> \n> The link is https://docs.google.com/document/d/18cxbj1zId1BpMjgUkZ0MZgb1HdMg1S9h0U1WecZON3U/edit?usp=sharing\n\nThis is great that you want to work on WAL-G, you do not need proposal to do this. Just make a PRs, ask questions, etc :)\nOn WAL-G github page you can find way to slack channel. This list is intended for PostgreSQL core features. in GSoC WAL-G is hosted under PostgreSQL umbrella.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 10 Apr 2019 16:55:14 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: GSOC 2019 proposal 'WAL-G safety features'"
}
] |
[
{
"msg_contents": "Hi,\n\nSeveral companies, including EnterpriseDB, NTT, and Postgres Pro, have\ndeveloped technology that permits a block-level incremental backup to\nbe taken from a PostgreSQL server. I believe the idea in all of those\ncases is that non-relation files should be backed up in their\nentirety, but for relation files, only those blocks that have been\nchanged need to be backed up. I would like to propose that we should\nhave a solution for this problem in core, rather than leaving it to\neach individual PostgreSQL company to develop and maintain their own\nsolution. Generally my idea is:\n\n1. There should be a way to tell pg_basebackup to request from the\nserver only those blocks where LSN >= threshold_value. There are\nseveral possible ways for the server to implement this, the simplest\nof which is to just scan all the blocks and send only the ones that\nsatisfy that criterion. That might sound dumb, but it does still save\nnetwork bandwidth, and it works even without any prior setup. It will\nprobably be more efficient in many cases to instead scan all the WAL\ngenerated since that LSN and extract block references from it, but\nthat is only possible if the server has all of that WAL available or\ncan somehow get it from the archive. We could also, as several people\nhave proposed previously, have some kind of additional relation for\nthat stores either a single is-modified bit -- which only helps if the\nreference LSN for the is-modified bit is older than the requested LSN\nbut not too much older -- or the highest LSN for each range of K\nblocks, or something like that. I am at the moment not too concerned\nwith the exact strategy we use here. I believe we may want to\neventually support more than one, since they have different\ntrade-offs.\n\n2. 
When you use pg_basebackup in this way, each relation file that is\nnot sent in its entirety is replaced by a file with a different name.\nFor example, instead of base/16384/16417, you might get\nbase/16384/partial.16417 or however we decide to name them. Each such\nfile will store near the beginning of the file a list of all the\nblocks contained in that file, and the blocks themselves will follow\nat offsets that can be predicted from the metadata at the beginning of\nthe file. The idea is that you shouldn't have to read the whole file\nto figure out which blocks it contains, and if you know specifically\nwhat blocks you want, you should be able to reasonably efficiently\nread just those blocks. A backup taken in this manner should also\nprobably create some kind of metadata file in the root directory that\nstops the server from starting and lists other salient details of the\nbackup. In particular, you need the threshold LSN for the backup\n(i.e. contains blocks newer than this) and the start LSN for the\nbackup (i.e. the LSN that would have been returned from\npg_start_backup).\n\n3. There should be a new tool that knows how to merge a full backup\nwith any number of incremental backups and produce a complete data\ndirectory with no remaining partial files. The tool should check that\nthe threshold LSN for each incremental backup is less than or equal to\nthe start LSN of the previous backup; if not, there may be changes\nthat happened in between which would be lost, so combining the backups\nis unsafe. Running this tool can be thought of either as restoring\nthe backup or as producing a new synthetic backup from any number of\nincremental backups. This would allow for a strategy of unending\nincremental backups. For instance, on day 1, you take a full backup.\nOn every subsequent day, you take an incremental backup. On day 9,\nyou run pg_combinebackup day1 day2 -o full; rm -rf day1 day2; mv full\nday2. On each subsequent day you do something similar. 
Now you can\nalways roll back to any of the last seven days by combining the oldest\nbackup you have (which is always a synthetic full backup) with as many\nnewer incrementals as you want, up to the point where you want to\nstop.\n\nOther random points:\n- If the server has multiple ways of finding blocks with an LSN\ngreater than or equal to the threshold LSN, it could make a cost-based\ndecision between those methods, or it could allow the client to\nspecify the method to be used.\n- I imagine that the server would offer this functionality through a\nnew replication command or a syntax extension to an existing command,\nso it could also be used by tools other than pg_basebackup if they\nwished.\n- Combining backups could also be done destructively rather than, as\nproposed above, non-destructively, but you have to be careful about\nwhat happens in case of a failure.\n- The pg_combinebackup tool (or whatever we call it) should probably\nhave an option to exploit hard links to save disk space; this could in\nparticular make construction of a new synthetic full backup much\ncheaper. However you'd better be careful not to use this option when\nactually trying to restore, because if you start the server and run\nrecovery, you don't want to change the copies of those same files that\nare in your backup directory. I guess the server could be taught to\ncomplain about st_nlink > 1 but I'm not sure we want to go there.\n- It would also be possible to collapse multiple incremental backups\ninto a single incremental backup, without combining with a full\nbackup. In the worst case, size(i1+i2) = size(i1) + size(i2), but if\nthe same data is modified repeatedly collapsing backups would save\nlots of space. 
This doesn't seem like a must-have for v1, though.\n- If you have a SAN and are taking backups using filesystem snapshots,\nthen you don't need this, because your SAN probably already uses\ncopy-on-write magic for those snapshots, and so you are already\ngetting all of the same benefits in terms of saving storage space that\nyou would get from something like this. But not everybody has a SAN.\n- I know that there have been several previous efforts in this area,\nbut none of them have gotten to the point of being committed. I\nintend no disrespect to those efforts. I believe I'm taking a\nslightly different view of the problem here than what has been done\npreviously, trying to focus on the user experience rather than, e.g.,\nthe technology that is used to decide which blocks need to be sent.\nHowever it's possible I've missed a promising patch that takes an\napproach very similar to what I'm outlining here, and if so, I don't\nmind a bit having that pointed out to me.\n- This is just a design proposal at this point; there is no code. If\nthis proposal, or some modified version of it, seems likely to be\nacceptable, I and/or my colleagues might try to implement it.\n- It would also be nice to support *parallel* backup, both for full\nbackups as we can do them today and for incremental backups. But that\nsound like a separate effort. pg_combinebackup could potentially\nsupport parallel operation as well, although that might be too\nambitious for v1.\n- It would also be nice if pg_basebackup could write backups to places\nother than the local disk, like an object store, a tape drive, etc.\nBut that also sounds like a separate effort.\n\nThoughts?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 9 Apr 2019 11:48:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "block-level incremental backup"
},
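The partial-file format and combine tool described in points 2 and 3 of the proposal can be sketched concretely. The layout below (the `PGIB` magic value, little-endian block-count header, sorted block-number list, then full 8192-byte images) is purely illustrative — the proposal deliberately leaves the real format and file naming open:

```python
import struct

BLCKSZ = 8192
MAGIC = b"PGIB"  # hypothetical magic for a "partial" relation file

def write_partial_file(changed: dict) -> bytes:
    """Serialize changed blocks: the header lists block numbers up front,
    block images follow at offsets predictable from the header alone."""
    blknos = sorted(changed)
    out = bytearray(MAGIC + struct.pack("<I", len(blknos)))
    for blkno in blknos:
        out += struct.pack("<I", blkno)
    for blkno in blknos:
        assert len(changed[blkno]) == BLCKSZ
        out += changed[blkno]
    return bytes(out)

def read_block(partial: bytes, blkno: int):
    """Fetch one block image without scanning the data section."""
    (nblocks,) = struct.unpack_from("<I", partial, 4)
    blknos = list(struct.unpack_from("<%dI" % nblocks, partial, 8))
    data_start = 8 + 4 * nblocks
    try:
        idx = blknos.index(blkno)
    except ValueError:
        return None  # block unchanged; take it from an older backup
    off = data_start + idx * BLCKSZ
    return partial[off:off + BLCKSZ]

def combine(full_segment: bytes, partial: bytes) -> bytes:
    """pg_combinebackup-style merge: overlay a partial file onto the
    previous full copy of the same relation segment."""
    (nblocks,) = struct.unpack_from("<I", partial, 4)
    blknos = struct.unpack_from("<%dI" % nblocks, partial, 8)
    result = bytearray(full_segment)
    for blkno in blknos:
        result[blkno * BLCKSZ:(blkno + 1) * BLCKSZ] = read_block(partial, blkno)
    return bytes(result)
```

Because `read_block` only parses the fixed-size header, a combiner can walk a chain of such files newest-first and take each block from the first file that contains it, which is the restore-time efficiency the proposal aims for.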
{
"msg_contents": "Hello,\n\nOn 09.04.2019 18:48, Robert Haas wrote:\n> - It would also be nice if pg_basebackup could write backups to places\n> other than the local disk, like an object store, a tape drive, etc.\n> But that also sounds like a separate effort.\n> \n> Thoughts? \n\n(Just thinking out loud) Also it might be useful to have a remote restore \nfacility (i.e. if pg_combinebackup could write to non-local storage), so \nyou don't need to restore the instance into a local place and copy/move \nit to the remote machine. But it seems to me that it is the most nontrivial \nfeature and requires much more effort than the other points.\n\nIn pg_probackup we have remote restore via SSH in the beta state. But \nSSH isn't an option for an in-core approach, I think.\n\n-- \nArthur Zakirov\nPostgres Professional: http://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Tue, 9 Apr 2019 19:32:30 +0300",
"msg_from": "Arthur Zakirov <a.zakirov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-09 11:48:38 -0400, Robert Haas wrote:\n> 2. When you use pg_basebackup in this way, each relation file that is\n> not sent in its entirety is replaced by a file with a different name.\n> For example, instead of base/16384/16417, you might get\n> base/16384/partial.16417 or however we decide to name them.\n\nHm. But that means that files that are shipped nearly in their entirety,\nneed to be fully rewritten. Wonder if it's better to ship them as files\nwith holes, and have the metadata in a separate file. That'd then allow\nto just fill in the holes with data from the older version. I'd assume\nthat there's a lot of workloads where some significantly sized relations\nwill get updated in nearly their entirety between backups.\n\n\n> Each such file will store near the beginning of the file a list of all the\n> blocks contained in that file, and the blocks themselves will follow\n> at offsets that can be predicted from the metadata at the beginning of\n> the file. The idea is that you shouldn't have to read the whole file\n> to figure out which blocks it contains, and if you know specifically\n> what blocks you want, you should be able to reasonably efficiently\n> read just those blocks. A backup taken in this manner should also\n> probably create some kind of metadata file in the root directory that\n> stops the server from starting and lists other salient details of the\n> backup. In particular, you need the threshold LSN for the backup\n> (i.e. contains blocks newer than this) and the start LSN for the\n> backup (i.e. the LSN that would have been returned from\n> pg_start_backup).\n\nI wonder if we shouldn't just integrate that into pg_control or such. So\nthat:\n\n> 3. 
There should be a new tool that knows how to merge a full backup\n> with any number of incremental backups and produce a complete data\n> directory with no remaining partial files.\n\nCould just be part of server startup?\n\n\n> - I imagine that the server would offer this functionality through a\n> new replication command or a syntax extension to an existing command,\n> so it could also be used by tools other than pg_basebackup if they\n> wished.\n\nWould this logic somehow be usable from tools that don't want to copy\nthe data directory via pg_basebackup (e.g. for parallelism, to directly\nsend to some backup service / SAN / whatnot)?\n\n\n> - It would also be nice if pg_basebackup could write backups to places\n> other than the local disk, like an object store, a tape drive, etc.\n> But that also sounds like a separate effort.\n\nIndeed seems separate. But worthwhile.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 9 Apr 2019 09:35:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Having worked in the data storage industry since the '80s, I think backup\nis an important capability. Having said that, the ideas should be expanded\nto an overall data management strategy combining local and remote storage,\nincluding cloud.\n\n From my experience, record and transaction consistency is critical to any\nreplication action, including backup. The approach commonly includes a\nstarting baseline, snapshot if you prefer, and a set of incremental changes\nto the snapshot. I always used the transaction logs for both backup and\nremote replication to other DBMS. In standard ECMA-208 @94, you will note a\nfile object with a transaction property. Although the language specifies\nfiles, a file may be any set of records.\n\nSAN based snapshots usually occur on the SAN storage device, meaning\ncached data (unwritten to disk) will not be snapshotted, or will be\ninconsistently referenced, likely resulting in a corrupted database on restore.\n\nSnapshots are point in time states of storage objects. Between snapshot\nperiods, any number of changes may occur. If a record of \"all changes\"\nis required, snapshot methods must be augmented with a historical record:\nthe transaction log.\n\n Delta block methods for backups have been in practice for many years. ZFS\nhas adopted the practice for block management. The ability to use incremental\nbackups, whether block, transaction or other methods, is dependent on\nprior data. Like primary storage, backup media can fail, become lost and be\ninadvertently corrupted. The result of incremental backup data loss is that\nrestored data after the point of loss is likely corrupted.\n\ncheers,\ngarym\n\nOn Tue, Apr 9, 2019 at 10:35 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-04-09 11:48:38 -0400, Robert Haas wrote:\n> > 2. 
When you use pg_basebackup in this way, each relation file that is\n> > not sent in its entirety is replaced by a file with a different name.\n> > For example, instead of base/16384/16417, you might get\n> > base/16384/partial.16417 or however we decide to name them.\n>\n> Hm. But that means that files that are shipped nearly in their entirety,\n> need to be fully rewritten. Wonder if it's better to ship them as files\n> with holes, and have the metadata in a separate file. That'd then allow\n> to just fill in the holes with data from the older version. I'd assume\n> that there's a lot of workloads where some significantly sized relations\n> will get updated in nearly their entirety between backups.\n>\n>\n> > Each such file will store near the beginning of the file a list of all\n> the\n> > blocks contained in that file, and the blocks themselves will follow\n> > at offsets that can be predicted from the metadata at the beginning of\n> > the file. The idea is that you shouldn't have to read the whole file\n> > to figure out which blocks it contains, and if you know specifically\n> > what blocks you want, you should be able to reasonably efficiently\n> > read just those blocks. A backup taken in this manner should also\n> > probably create some kind of metadata file in the root directory that\n> > stops the server from starting and lists other salient details of the\n> > backup. In particular, you need the threshold LSN for the backup\n> > (i.e. contains blocks newer than this) and the start LSN for the\n> > backup (i.e. the LSN that would have been returned from\n> > pg_start_backup).\n>\n> I wonder if we shouldn't just integrate that into pg_control or such. So\n> that:\n>\n> > 3. 
There should be a new tool that knows how to merge a full backup\n> > with any number of incremental backups and produce a complete data\n> > directory with no remaining partial files.\n>\n> Could just be part of server startup?\n>\n>\n> > - I imagine that the server would offer this functionality through a\n> > new replication command or a syntax extension to an existing command,\n> > so it could also be used by tools other than pg_basebackup if they\n> > wished.\n>\n> Would this logic somehow be usable from tools that don't want to copy\n> the data directory via pg_basebackup (e.g. for parallelism, to directly\n> send to some backup service / SAN / whatnot)?\n>\n>\n> > - It would also be nice if pg_basebackup could write backups to places\n> > other than the local disk, like an object store, a tape drive, etc.\n> > But that also sounds like a separate effort.\n>\n> Indeed seems separate. But worthwhile.\n>\n> Greetings,\n>\n> Andres Freund",
"msg_date": "Tue, 9 Apr 2019 11:47:29 -0600",
"msg_from": "Gary M <garym@oedata.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Tue, Apr 9, 2019 at 12:35 PM Andres Freund <andres@anarazel.de> wrote:\n> Hm. But that means that files that are shipped nearly in their entirety,\n> need to be fully rewritten. Wonder if it's better to ship them as files\n> with holes, and have the metadata in a separate file. That'd then allow\n> to just fill in the holes with data from the older version. I'd assume\n> that there's a lot of workloads where some significantly sized relations\n> will get updated in nearly their entirety between backups.\n\nI don't want to rely on holes at the FS level. I don't want to have\nto worry about what Windows does and what every Linux filesystem does\nand what NetBSD and FreeBSD and Dragonfly BSD and MacOS do. And I\ndon't want to have to write documentation for the fine manual\nexplaining to people that they need to use a hole-preserving tool when\nthey copy an incremental backup around. And I don't want to have to\nlisten to complaints from $USER that their backup tool, $THING, is not\nhole-aware. Just - no.\n\nBut what we could do is have some threshold (as git does), beyond\nwhich you just send the whole file. For example if >90% of the blocks\nhave changed, or >80% or whatever, then you just send everything.\nThat way, if you have a database where you have lots and lots of 1GB\nsegments with low churn (so that you can't just use full backups) and\nlots and lots of 1GB segments with high churn (to create the problem\nyou're describing) you'll still be OK.\n\n> > 3. There should be a new tool that knows how to merge a full backup\n> > with any number of incremental backups and produce a complete data\n> > directory with no remaining partial files.\n>\n> Could just be part of server startup?\n\nYes, but I think that sucks. You might not want to start the server\nbut rather just create a new synthetic backup. And realistically,\nit's hard to imagine the server doing anything but synthesizing the\nbackup first and then proceeding as normal. 
In theory there's no\nreason why it couldn't be smart enough to construct the files it needs\n\"on demand\" in the background, but that sounds really hard and I don't\nthink there's enough value to justify that level of effort. YMMV, of\ncourse.\n\n> > - I imagine that the server would offer this functionality through a\n> > new replication command or a syntax extension to an existing command,\n> > so it could also be used by tools other than pg_basebackup if they\n> > wished.\n>\n> Would this logic somehow be usable from tools that don't want to copy\n> the data directory via pg_basebackup (e.g. for parallelism, to directly\n> send to some backup service / SAN / whatnot)?\n\nWell, I'm imagining it as a piece of server-side functionality that\ncan figure out what has changed using one of several possible methods,\nand then send that stuff to you. So I think if you don't have a\nserver connection you are out of luck. If you have a server\nconnection but just want to be told what has changed rather than\nactually being given that data, that might be something that could be\nworked into the design. I'm not sure whether that's a real need,\nthough, or just extra work.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 9 Apr 2019 13:54:00 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
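The whole-file threshold Robert floats above ("if >90% of the blocks have changed, or >80% or whatever") reduces to a one-line decision per 1GB segment. A minimal sketch, with the 90% cutoff and the function name being illustrative placeholders rather than a settled design:

```python
def send_strategy(changed_blocks: int, total_blocks: int,
                  threshold: float = 0.9) -> str:
    """Decide, per relation segment, whether to ship only the changed
    blocks in a partial file or fall back to sending the whole file
    (as git does when a delta would not be worthwhile)."""
    if total_blocks == 0 or changed_blocks / total_blocks > threshold:
        return "whole-file"
    return "changed-blocks-only"
```

With this rule, high-churn segments avoid the rewrite problem Andres raised, while low-churn segments still get the bandwidth savings.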
{
"msg_contents": "On Tue, Apr 9, 2019 at 12:32 PM Arthur Zakirov <a.zakirov@postgrespro.ru> wrote:\n> In pg_probackup we have remote restore via SSH in the beta state. But\n> SSH isn't an option for in-core approach I think.\n\nThat's a little off-topic for this thread, but I think we should have\nsome kind of extensible mechanism for pg_basebackup and maybe other\ntools, so that you can teach it to send backups to AWS or your\nteletype or etch them on stone tablets or whatever without having to\nmodify core code. But let's not design that mechanism on this thread,\n'cuz that will distract from what I want to talk about here. Feel\nfree to start a new thread for it, though, and I'll jump in. :-)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 9 Apr 2019 13:56:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On 2019-04-09 17:48, Robert Haas wrote:\n> It will\n> probably be more efficient in many cases to instead scan all the WAL\n> generated since that LSN and extract block references from it, but\n> that is only possible if the server has all of that WAL available or\n> can somehow get it from the archive.\n\nThis could be a variant of a replication slot that preserves WAL between\nincremental backup runs.\n\n> 3. There should be a new tool that knows how to merge a full backup\n> with any number of incremental backups and produce a complete data\n> directory with no remaining partial files.\n\nAre there by any chance standard file formats and tools that describe a\nbinary difference between directories? That would be really useful here.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 9 Apr 2019 23:07:39 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On 2019-Apr-09, Peter Eisentraut wrote:\n\n> On 2019-04-09 17:48, Robert Haas wrote:\n\n> > 3. There should be a new tool that knows how to merge a full backup\n> > with any number of incremental backups and produce a complete data\n> > directory with no remaining partial files.\n> \n> Are there by any chance standard file formats and tools that describe a\n> binary difference between directories? That would be really useful here.\n\nVCDIFF? https://tools.ietf.org/html/rfc3284\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 9 Apr 2019 17:28:38 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Hi!\n\n> On 9 Apr 2019, at 20:48, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> Thoughts?\nThanks for this long and thoughtful post!\n\nAt Yandex, we have been using incremental backups for some years now. Initially, we used patched pgbarman, then we implemented this functionality in WAL-G. And there are many things to be done yet. We have more than 1 PB of clusters backed up with this technology.\nMost of the time we use this technology as part of an HA setup in a managed PostgreSQL service. So, for us the main goals are to operate backups cheaply and restore a new node quickly. Here's what I see from our perspective.\n\n1. Yes, this feature is important.\n\n2. This importance comes not from reduced disk storage: magnetic disks and object storage are very cheap.\n\n3. Incremental backups save a lot of network bandwidth. It is non-trivial for the storage system to ingest hundreds of TB daily.\n\n4. Incremental backups are a redundancy of WAL, intended for parallel application. An incremental backup applied sequentially is not very useful; it will not be much faster than simple WAL replay in many cases.\n\n5. As long as increments duplicate WAL functionality, it is not worth pursuing tradeoffs of storage utilization reduction. We scan WAL during archiving, extract the numbers of changed blocks, and store a changemap for a group of WALs in the archive.\n\n6. These changemaps can be used for an increment of the visibility map (if I recall correctly). But you cannot compare LSNs on a page of the visibility map: some operations do not bump them.\n\n7. We use changemaps during backups and during WAL replay: we know blocks that will change far in advance and prefetch them into the page cache like pg_prefaulter does.\n\n8. There is similar functionality in RMAN for one well-known database. They used to store 8 sets of change maps. That database also has the cool functionality \"increment for catchup\".\n\n9. We call an incremental backup a \"delta backup\". 
This wording describes the purpose more precisely: it is not \"the next version of the DB\", it is \"the difference between two DB states\". But the wording choice does not matter much.\n\n\nHere are slides from my talk at PgConf.APAC[0]. I've proposed a talk on this matter to PgCon, but it was not accepted. I will try next year :)\n\n> On 9 Apr 2019, at 20:48, Robert Haas <robertmhaas@gmail.com> wrote:\n> - This is just a design proposal at this point; there is no code. If\n> this proposal, or some modified version of it, seems likely to be\n> acceptable, I and/or my colleagues might try to implement it.\n\nI'll be happy to help with code, discussion and patch review.\n\nBest regards, Andrey Borodin.\n\n[0] https://yadi.sk/i/Y_S1iqNN5WxS6A\n\n",
"msg_date": "Wed, 10 Apr 2019 16:51:01 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
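Point 5 above — scan WAL at archive time, extract the changed block numbers, and store a changemap per group of WAL segments — might look roughly like the following. This structure and the names are guesses for illustration only, not WAL-G's actual on-disk format:

```python
from collections import defaultdict

class ChangeMap:
    """Accumulates block numbers touched per relation file across one
    group of archived WAL segments.  Maps for consecutive groups can be
    merged to answer 'what changed since LSN X' without rescanning WAL."""

    def __init__(self):
        # relation file path -> set of changed block numbers
        self.changed = defaultdict(set)

    def record(self, relpath: str, blkno: int) -> None:
        """Called once per block reference extracted from a WAL record."""
        self.changed[relpath].add(blkno)

    def merge(self, other: "ChangeMap") -> "ChangeMap":
        """Union two changemaps, e.g. to cover a wider LSN range."""
        out = ChangeMap()
        for cm in (self, other):
            for rel, blocks in cm.changed.items():
                out.changed[rel] |= blocks
        return out
```

A backup then only reads the blocks listed in the merged map, and a replay prefetcher (point 7) can walk the same map to warm the page cache ahead of recovery.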
{
"msg_contents": "On 09.04.2019 18:48, Robert Haas wrote:\n> 1. There should be a way to tell pg_basebackup to request from the\n> server only those blocks where LSN >= threshold_value.\n\nSome time ago I implemented an alternative version of the ptrack utility \n(not the one used in pg_probackup),\nwhich detects updated blocks at the file level. It is very simple, and maybe \nit can someday be integrated into master.\nI attached a patch against vanilla to this mail.\nRight now it contains just two GUCs:\n\nptrack_map_size: Size of the ptrack map (number of elements) used for \nincremental backup: 0 = disabled.\nptrack_block_log: Logarithm of the ptrack block size (number of pages)\n\nand one function:\n\npg_ptrack_get_changeset(startlsn pg_lsn) returns \n{relid,relfilenode,reltablespace,forknum,blocknum,segsize,updlsn,path}\n\nThe idea is very simple: it creates a hash map of fixed size (ptrack_map_size) \nand stores the LSN of written pages in this map.\nSince the Postgres default page size seems to be too small for a ptrack \nblock (requiring too large a hash map or increasing the number of conflicts, \nas well as\nincreasing the number of random reads), it is possible to configure a ptrack \nblock to consist of multiple pages (a power of 2).\n\nThis patch uses a memory mapping mechanism. Unfortunately there is no \nportable wrapper for it in Postgres, so I had to provide my own \nimplementations for Unix/Windows. Certainly this is not good and should be \nrewritten.\n\nHow to use?\n\n1. Define ptrack_map_size in postgresql.conf, for example (use a prime \nnumber for more uniform hashing):\n\nptrack_map_size = 1000003\n\n2. Remember the current LSN.\n\npsql postgres -c \"select pg_current_wal_lsn()\"\n pg_current_wal_lsn\n--------------------\n 0/224A268\n(1 row)\n\n3. Do some updates.\n\n$ pgbench -T 10 postgres\n\n4. 
Select changed blocks.\n\n select * from pg_ptrack_get_changeset('0/224A268');\n relid | relfilenode | reltablespace | forknum | blocknum | segsize | \nupdlsn | path\n-------+-------------+---------------+---------+----------+---------+-----------+----------------------\n 16390 | 16396 | 1663 | 0 | 1640 | 1 | \n0/224FD88 | base/12710/16396\n 16390 | 16396 | 1663 | 0 | 1641 | 1 | \n0/2258680 | base/12710/16396\n 16390 | 16396 | 1663 | 0 | 1642 | 1 | \n0/22615A0 | base/12710/16396\n...\n\nCertainly ptrack should be used as part of some backup tool (as \npg_basebackup or pg_probackup).\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 10 Apr 2019 17:22:38 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
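The fixed-size map described above can be rendered as a short sketch (a hypothetical Python translation of the idea, not the patch's actual C code): page numbers are grouped into ptrack blocks of 2^ptrack_block_log pages, hashed into a fixed-size array, and each slot keeps the newest LSN written. Hash collisions only cause false positives — a block reported as changed when it was not — which is harmless for backup correctness:

```python
PTRACK_MAP_SIZE = 1000003   # prime, as suggested, for more uniform hashing
PTRACK_BLOCK_LOG = 2        # ptrack block = 2**2 = 4 pages

# slot -> highest LSN seen for any page hashed to that slot
ptrack_map = [0] * PTRACK_MAP_SIZE

def ptrack_slot(relfilenode: int, blkno: int) -> int:
    """Group pages into ptrack blocks, then hash into the fixed map."""
    group = blkno >> PTRACK_BLOCK_LOG
    return hash((relfilenode, group)) % PTRACK_MAP_SIZE

def ptrack_mark(relfilenode: int, blkno: int, lsn: int) -> None:
    """Called on every page write: remember the newest LSN for the group."""
    slot = ptrack_slot(relfilenode, blkno)
    if lsn > ptrack_map[slot]:
        ptrack_map[slot] = lsn

def ptrack_changed_since(relfilenode: int, blkno: int, start_lsn: int) -> bool:
    """May return false positives on collisions, never false negatives."""
    return ptrack_map[ptrack_slot(relfilenode, blkno)] >= start_lsn
```

A changeset function like pg_ptrack_get_changeset would then scan the relation files and report every block whose group passes this test for the given start LSN.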
{
"msg_contents": "Hi,\n\nOn Tue, 9 Apr 2019 11:48:38 -0400\nRobert Haas <robertmhaas@gmail.com> wrote:\n\n> Several companies, including EnterpriseDB, NTT, and Postgres Pro, have\n> developed technology that permits a block-level incremental backup to\n> be taken from a PostgreSQL server. I believe the idea in all of those\n> cases is that non-relation files should be backed up in their\n> entirety, but for relation files, only those blocks that have been\n> changed need to be backed up. I would like to propose that we should\n> have a solution for this problem in core, rather than leaving it to\n> each individual PostgreSQL company to develop and maintain their own\n> solution. Generally my idea is:\n> \n> 1. There should be a way to tell pg_basebackup to request from the\n> server only those blocks where LSN >= threshold_value. There are\n> several possible ways for the server to implement this, the simplest\n> of which is to just scan all the blocks and send only the ones that\n> satisfy that criterion. That might sound dumb, but it does still save\n> network bandwidth, and it works even without any prior setup.\n\n+1, this is a simple design and probably an easy first step that already\nbrings a lot of benefits.\n\n> It will probably be more efficient in many cases to instead scan all the WAL\n> generated since that LSN and extract block references from it, but\n> that is only possible if the server has all of that WAL available or\n> can somehow get it from the archive.\n\nI seize the opportunity to discuss this on the fly.\n\nI've been playing with the idea of producing incremental backups from\narchives for many years. But I only started PoC'ing on it this year.\n\nMy idea would be to create a new tool working on archived WAL. No burden\nserver side. 
The basic concept is:\n\n* parse archives\n* record the latest relevant FPW for the incremental backup\n* write new WALs with the recorded FPWs, removing/rewriting duplicated WAL records.\n\nIt's just a PoC and I haven't finished the WAL writing part... not even talking\nabout the replay part. I'm not even sure this project is a good idea, but it is\na good educational exercise for me in the meantime. \n\nAnyway, using real-life OLTP production archives, my stats were:\n\n # WAL xlogrec kept Size WAL kept\n 127 39% 50%\n 383 22% 38%\n 639 20% 29%\n\nBased on these stats, I expect this would save a lot of time during recovery as\na first step. If it matures, it might even save a lot of archive space or\nextend the retention period with degraded granularity. It would even help\ntake full backups at a lower frequency.\n\nAny thoughts about this design would be much appreciated. I suppose this should\nbe off-list or in a new thread to avoid polluting this thread, as this is a\nslightly different subject.\n\nRegards,\n\n\nPS: I was surprised to still find some existing pieces of code related to\npglesslog in core. That project has been discontinued and the WAL format has\nchanged in the meantime.\n\n\n",
"msg_date": "Wed, 10 Apr 2019 16:57:11 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
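The PoC described above — keep only the latest relevant full-page write per block and drop records it supersedes — can be sketched like this. The record shape `(lsn, block_ref, is_fpw)` and the function name are invented for illustration; real WAL records carry far more structure:

```python
def compact_wal(records):
    """records: (lsn, block_ref, is_fpw) tuples in LSN order.

    A record is redundant if a later full-page write for the same block
    will overwrite the entire page during replay anyway; keeping only
    the newest FPW per block plus everything after it is the source of
    the 'xlogrec kept' savings reported above."""
    newest_fpw = {}  # block_ref -> LSN of its newest full-page write
    for lsn, block, is_fpw in records:
        if is_fpw:
            newest_fpw[block] = lsn
    return [(lsn, block, is_fpw) for lsn, block, is_fpw in records
            if newest_fpw.get(block, -1) <= lsn]
```

Rewriting the surviving records into a new, shorter WAL stream (the unfinished part of the PoC) is what would make recovery faster.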
{
"msg_contents": "On Tue, Apr 9, 2019 at 5:28 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-Apr-09, Peter Eisentraut wrote:\n> > On 2019-04-09 17:48, Robert Haas wrote:\n> > > 3. There should be a new tool that knows how to merge a full backup\n> > > with any number of incremental backups and produce a complete data\n> > > directory with no remaining partial files.\n> >\n> > Are there by any chance standard file formats and tools that describe a\n> > binary difference between directories? That would be really useful here.\n>\n> VCDIFF? https://tools.ietf.org/html/rfc3284\n\nI don't understand VCDIFF very well, but I see some potential problems\nwith going in this direction.\n\nFirst, suppose we take a full backup on Monday. Then, on Tuesday, we\nwant to take an incremental backup. In my proposal, the backup server\nonly needs to provide the database with one piece of information: the\nstart-LSN of the previous backup. The server determines which blocks\nare recently modified and sends them to the client, which stores them.\nThe end. On the other hand, storing a maximally compact VCDIFF seems\nto require that, for each block modified in the Tuesday backup, we go\nread the corresponding block as it existed on Monday. Assuming that\nthe server is using some efficient method of locating modified blocks,\nthis will approximately double the amount of read I/O required to\ncomplete the backup: either the server or the client must now read not\nonly the current version of the block but the previous versions. If\nthe previous backup is an incremental backup that does not contain\nfull block images but only VCDIFF content, whoever is performing the\nVCDIFF calculation will need to walk the entire backup chain and\nreconstruct the previous contents of the previous block so that it can\ncompute the newest VCDIFF. 
A customer who does an incremental backup\nevery day and maintains a synthetic full backup from 1 week prior will\nsee a roughly eightfold increase in read I/O compared to the design I\nproposed.\n\nThe same problem exists at restore time. In my design, the total read\nI/O required is equal to the size of the database, plus however much\nmetadata needs to be read from older delta files -- and that should be\nfairly small compared to the actual data being read, at least in\nnormal, non-extreme cases. But if we are going to proceed by applying\na series of delta files, we're going to need to read every older\nbackup in its entirety. If the turnover percentage is significant,\nsay 20%/day, and if the backup chain is say 7 backups long to get back\nto a full backup, this is a huge difference. Instead of having to\nread ~100% of the database size, as in my proposal, we'll need to read\n100% + (6 * 20%) = 220% of the database size.\n\nSince VCDIFF uses an add-copy-run language to describe differences,\nwe could try to work around the problem that I just described by\ndescribing each changed data block as an 8192-byte add, and unchanged\nblocks as an 8192-byte copy. If we did that, then I think that the\nproblem at backup time goes away: we can write out a VCDIFF-format\nfile for the changed blocks based just on knowing that those are the\nblocks that have changed, without needing to access the older file. Of\ncourse, if we do it this way, the file will be larger than it would be\nif we actually compared the old and new block contents and wrote out a\nminimal VCDIFF, but it does make taking a backup a lot simpler. Even\nwith this proposal, though, I think we still have trouble with restore\ntime. 
I proposed putting the metadata about which blocks are included\nin a delta file at the beginning of the file, which allows a restore\nof a new incremental backup to relatively efficiently flip through\nolder backups to find just the blocks that it needs, without having to\nread the whole file. But I think (although I am not quite sure) that\nin the VCDIFF format, the payload for an ADD instruction is stored\nnear the instruction. The result would be that you'd have to basically\nread the whole file at restore time to figure out which blocks were\navailable from that file and which ones needed to be retrieved from an\nolder backup. So while this approach would fix the backup-time\nproblem, I believe that it would still require significantly more read\nI/O at restore time than my proposal.\n\nFurthermore, if, at backup time, we have to do anything that requires\naccess to the old data, either the client or the server needs to have\naccess to that data. Notwithstanding the costs of reading it, that\ndoesn't seem very desirable. The server is quite unlikely to have\naccess to the backups, because most users want to back up to a\ndifferent server in order to guard against a hardware failure. The\nclient is more likely to be running on a machine where it has access\nto the data, because many users back up to the same machine every day,\nso the machine that is taking the current backup probably has the\nolder one. However, accessing that old backup might not be cheap. It\ncould be located in an object store in the cloud someplace, or it\ncould have been written out to a tape drive and the tape removed from\nthe drive. In the design I'm proposing, that stuff doesn't matter,\nbut if you want to run diffs, then it does. Even if the client has\nefficient access to the data and even if it has so much read I/O\nbandwidth that the costs of reading that old data to run diffs don't\nmatter, it's still pretty awkward for a tar-format backup. 
The client\nwould have to take the tar archive sent by the server apart and form a\nnew one.\n\nAnother advantage of storing whole blocks in the incremental backup is\nthat there's no tight coupling between the full backup and the\nincremental backup. Suppose you take a full backup A on S1, and then\nanother full backup B, and then an incremental backup C based on A,\nand then an incremental backup D based on B. If backup B is destroyed\nbeyond retrieval, you can restore the chain A-C-D and get back to the\nsame place that restoring B-D would have gotten you. Backup D doesn't\nreally know or care that it happens to be based on B. It just knows\nthat it can only give you those blocks that have LSN >= LSN_B. You\ncan get those blocks from anywhere that you like. If D instead stored\ndeltas between the blocks as they exist in backup B, then those deltas\nwould have to be applied specifically to backup B, not some\npossibly-later version.\n\nI think the way to think about this problem, or at least the way I\nthink about this problem, is that we need to decide whether want\nfile-level incremental backup, block-level incremental backup, or\nbyte-level incremental backup. pgbackrest implements file-level\nincremental backup: if the file has changed, copy the whole thing.\nThat has an appealing simplicity but risks copying 1GB of data for a\n1-byte change. What I'm proposing here is block-level incremental\nbackup, which is more complicated and still risks copying 8kB of data\nfor a 1-byte change. Using VCDIFF would, I think, give us byte-level\nincremental backup. That would probably do an excellent job of making\nincremental backups as small as they can possibly be, because we would\nnot need to include in the backup image even a single byte of\nunmodified data. It also seems like it does some other compression\ntricks which could shrink incremental backups further. 
However, my\nintuition is that we won't gain enough in terms of backup size to make\nup for the downsides listed above.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 10 Apr 2019 11:31:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
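The restore-time arithmetic in the message above (100% + 6 * 20% = 220% for a seven-backup delta chain, versus ~100% when incrementals store whole blocks) can be captured in a tiny model. This is a deliberate simplification for illustration only, not code from any backup tool; the function names are made up, and the 20%/day turnover and six-incremental chain are the figures quoted in the email:

```python
# Back-of-the-envelope model of restore-time read I/O, expressed as a
# fraction of total database size. Assumes a chain of one full backup
# plus n incrementals, each covering `daily_turnover` of the database.

def delta_chain_read_fraction(daily_turnover=0.2, n_incrementals=6):
    """Byte-level deltas: every intermediate backup must be read in full
    to reconstruct block contents, so reads accumulate along the chain."""
    return 1.0 + n_incrementals * daily_turnover

def block_store_read_fraction():
    """Whole-block incrementals: each block is read exactly once, from
    the newest backup that contains it (per-file metadata ignored)."""
    return 1.0

print(f"delta chain: ~{delta_chain_read_fraction():.0%} of DB size read")   # ~220%
print(f"whole blocks: ~{block_store_read_fraction():.0%} of DB size read")  # ~100%
```

The gap widens linearly with chain length, which is the core of the argument for storing whole blocks.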
{
"msg_contents": "On Wed, Apr 10, 2019 at 10:57 AM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n> My idea would be create a new tool working on archived WAL. No burden\n> server side. Basic concept is:\n>\n> * parse archives\n> * record latest relevant FPW for the incr backup\n> * write new WALs with recorded FPW and removing/rewriting duplicated walrecords.\n>\n> It's just a PoC and I hadn't finished the WAL writing part...not even talking\n> about the replay part. I'm not even sure this project is a good idea, but it is\n> a good educational exercice to me in the meantime.\n>\n> Anyway, using real life OLTP production archives, my stats were:\n>\n> # WAL xlogrec kept Size WAL kept\n> 127 39% 50%\n> 383 22% 38%\n> 639 20% 29%\n>\n> Based on this stats, I expect this would save a lot of time during recovery in\n> a first step. If it get mature, it might even save a lot of archives space or\n> extend the retention period with degraded granularity. It would even help\n> taking full backups with a lower frequency.\n>\n> Any thoughts about this design would be much appreciated. I suppose this should\n> be offlist or in a new thread to avoid polluting this thread as this is a\n> slightly different subject.\n\nInteresting idea, but I don't see how it can work if you only deal\nwith the FPWs and not the other records. For instance, suppose that\nyou take a full backup at time T0, and then at time T1 there are two\nmodifications to a certain block in quick succession. That block is\nthen never touched again. Since no checkpoint intervenes between the\nmodifications, the first one emits an FPI and the second does not.\nCapturing the FPI is fine as far as it goes, but unless you also do\nsomething with the non-FPI change, you lose that second modification.\nYou could fix that by having your tool replicate the effects of WAL\napply outside the server, but that sounds like a ton of work and a ton\nof possible bugs.\n\nI have a related idea, though. 
Suppose that, as Peter says upthread,\nyou have a replication slot that prevents old WAL from being removed.\nYou also have a background worker that is connected to that slot. It\ndecodes WAL and produces summary files containing all block-references\nextracted from those WAL records and the associated LSN (or maybe some\napproximation of the LSN instead of the exact value, to allow for\ncompression and combining of nearby references). Then you hold onto\nthose summary files after the actual WAL is removed. Now, when\nsomebody asks the server for all blocks changed since a certain LSN,\nit can use those summary files to figure out which blocks to send\nwithout having to read all the pages in the database. Although I\nbelieve that a simple system that finds modified blocks by reading\nthem all is good enough for a first version of this feature and useful\nin its own right, a more efficient system will be a lot more useful,\nand something like this seems to me to be probably the best way to\nimplement it.\n\nThe reason why I think this is likely to be superior to other possible\napproaches, such as the ptrack approach Konstantin suggests elsewhere\non this thread, is because it pushes the work of figuring out which\nblocks have been modified into the background. With a ptrack-type\napproach, the server has to do some non-zero amount of extra work in\nthe foreground every time it modifies a block. With an approach based\non WAL-scanning, the work is done in the background and nobody has to\nwait for it. It's possible that there are other considerations which\naren't occurring to me right now, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 10 Apr 2019 12:21:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
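The summary-file scheme sketched in the message above can be modeled in a few lines. This is a toy illustration only (in-memory, made-up names, integer LSNs), not PostgreSQL code; real summaries would be persisted per WAL range and keyed by relfilenode:

```python
# Toy model: record (relation, block, lsn) references decoded from WAL,
# then answer "which blocks changed since LSN X?" without touching the
# data files themselves.

class WalSummary:
    def __init__(self):
        # (relation, block) -> highest LSN at which the block was touched
        self.last_change = {}

    def record(self, relation, block, lsn):
        key = (relation, block)
        if lsn > self.last_change.get(key, 0):
            self.last_change[key] = lsn

    def blocks_changed_since(self, lsn):
        return sorted(k for k, v in self.last_change.items() if v >= lsn)

s = WalSummary()
s.record("rel1", 0, 100)
s.record("rel1", 0, 250)   # block touched again: keep only the newer LSN
s.record("rel1", 7, 120)
s.record("rel2", 3, 300)
print(s.blocks_changed_since(200))  # [('rel1', 0), ('rel2', 3)]
```

Collapsing repeated touches of a block into a single entry is one simple form of the combining of nearby references that the message mentions.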
{
"msg_contents": "On Wed, Apr 10, 2019 at 10:22 AM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> Some times ago I have implemented alternative version of ptrack utility\n> (not one used in pg_probackup)\n> which detects updated block at file level. It is very simple and may be\n> it can be sometimes integrated in master.\n\nI don't think this is completely crash-safe. It looks like it\narranges to msync() the ptrack file at appropriate times (although I\nhaven't exhaustively verified the logic), but it uses MS_ASYNC, so\nit's possible that the ptrack file could get updated on disk either\nbefore or after the relation file itself. I think before is probably\nOK -- it just risks having some blocks look modified when they aren't\nreally -- but after seems like it is very much not OK. And changing\nthis to use MS_SYNC would probably be really expensive. Likely a\nbetter approach would be to hook into the new fsync queue machinery\nthat Thomas Munro added to PostgreSQL 12.\n\nIt looks like your system maps all the blocks in the system into a\nfixed-size map using hashing. If the number of modified blocks\nbetween the full backup and the incremental backup is large compared\nto the size of the ptrack map, you'll start to get a lot of\nfalse-positives. It will look as if much of the database needs to be\nbacked up. For example, in your sample configuration, you have\nptrack_map_size = 1000003. If you've got a 100GB database with 20%\ndaily turnover, that's about 2.6 million blocks. If you set bump a\nrandom entry ~2.6 million times in a map with 1000003 entries, on the\naverage ~92% of the entries end up getting bumped, so you will get\nvery little benefit from incremental backup. This problem drops off\npretty fast if you raise the size of the map, but it's pretty critical\nthat your map is large enough for the database you've got, or you may\nas well not bother.\n\nIt also appears that your system can't really handle resizing of the\nmap in any friendly way. 
So if your data size grows, you may be faced\nwith either letting the map become progressively less effective, or\nthrowing it out and losing all the data you have.\n\nNone of that is to say that what you're presenting here has no value,\nbut I think it's possible to do better (and I think we should try).\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 10 Apr 2019 12:51:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
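The "~92% of the entries end up getting bumped" figure above can be checked with the standard occupancy approximation: the expected occupied fraction of an N-entry map after k uniform random bumps is 1 - (1 - 1/N)^k, which is about 1 - e^(-k/N). The 100GB, 20% turnover, and ptrack_map_size numbers are from the message; everything else is just arithmetic:

```python
import math

N = 1000003                          # ptrack_map_size from the sample config
k = 100 * 1024 * 1024 // 8 * 0.2     # 20% of a 100GB database in 8kB blocks

occupied = 1 - math.exp(-k / N)      # expected fraction of entries bumped
print(f"{k:.0f} modified blocks, {occupied:.0%} of map entries bumped")
# 2621440 modified blocks, 93% of map entries bumped
```

This agrees with the rough ~92% in the message; the exact (1 - 1/N)^k form gives essentially the same value.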
{
"msg_contents": "On Wed, Apr 10, 2019 at 9:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> I have a related idea, though. Suppose that, as Peter says upthread,\n> you have a replication slot that prevents old WAL from being removed.\n> You also have a background worker that is connected to that slot. It\n> decodes WAL and produces summary files containing all block-references\n> extracted from those WAL records and the associated LSN (or maybe some\n> approximation of the LSN instead of the exact value, to allow for\n> compression and combining of nearby references). Then you hold onto\n> those summary files after the actual WAL is removed. Now, when\n> somebody asks the server for all blocks changed since a certain LSN,\n> it can use those summary files to figure out which blocks to send\n> without having to read all the pages in the database. Although I\n> believe that a simple system that finds modified blocks by reading\n> them all is good enough for a first version of this feature and useful\n> in its own right, a more efficient system will be a lot more useful,\n> and something like this seems to me to be probably the best way to\n> implement it.\n>\n\nNot to fork the conversation from incremental backups, but similar approach\nis what we have been thinking for pg_rewind. Currently, pg_rewind requires\nall the WAL logs to be present on source side from point of divergence to\nrewind. Instead just parse the wal and keep the changed blocks around on\nsourece. Then don't need to retain the WAL but can still rewind using the\nchanged block map. So, rewind becomes much similar to incremental backup\nproposed here after performing rewind activity using target side WAL only.\n\nOn Wed, Apr 10, 2019 at 9:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\nI have a related idea, though. Suppose that, as Peter says upthread,\nyou have a replication slot that prevents old WAL from being removed.\nYou also have a background worker that is connected to that slot. 
It\ndecodes WAL and produces summary files containing all block-references\nextracted from those WAL records and the associated LSN (or maybe some\napproximation of the LSN instead of the exact value, to allow for\ncompression and combining of nearby references). Then you hold onto\nthose summary files after the actual WAL is removed. Now, when\nsomebody asks the server for all blocks changed since a certain LSN,\nit can use those summary files to figure out which blocks to send\nwithout having to read all the pages in the database. Although I\nbelieve that a simple system that finds modified blocks by reading\nthem all is good enough for a first version of this feature and useful\nin its own right, a more efficient system will be a lot more useful,\nand something like this seems to me to be probably the best way to\nimplement it.Not to fork the conversation from incremental backups, but similar approach is what we have been thinking for pg_rewind. Currently, pg_rewind requires all the WAL logs to be present on source side from point of divergence to rewind. Instead just parse the wal and keep the changed blocks around on sourece. Then don't need to retain the WAL but can still rewind using the changed block map. So, rewind becomes much similar to incremental backup proposed here after performing rewind activity using target side WAL only.",
"msg_date": "Wed, 10 Apr 2019 09:56:42 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 7:51 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > 9 апр. 2019 г., в 20:48, Robert Haas <robertmhaas@gmail.com> написал(а):\n> > - This is just a design proposal at this point; there is no code. If\n> > this proposal, or some modified version of it, seems likely to be\n> > acceptable, I and/or my colleagues might try to implement it.\n>\n> I'll be happy to help with code, discussion and patch review.\n\nThat would be great!\n\nWe should probably give this discussion some more time before we\nplunge into the implementation phase, but I'd love to have some help\nwith that, whether it's with coding or review or whatever.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 10 Apr 2019 13:03:26 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 12:56 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> Not to fork the conversation from incremental backups, but similar approach is what we have been thinking for pg_rewind. Currently, pg_rewind requires all the WAL logs to be present on source side from point of divergence to rewind. Instead just parse the wal and keep the changed blocks around on sourece. Then don't need to retain the WAL but can still rewind using the changed block map. So, rewind becomes much similar to incremental backup proposed here after performing rewind activity using target side WAL only.\n\nInteresting. So if we build a system like this for incremental\nbackup, or for pg_rewind, the other one can use the same\ninfrastructure. That sound excellent. I'll start a new thread to\ntalk about that, and hopefully you and Heikki and others will chime in\nwith thoughts.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 10 Apr 2019 13:08:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Hi,\n\nFirst thank you for your answer!\n\nOn Wed, 10 Apr 2019 12:21:03 -0400\nRobert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Apr 10, 2019 at 10:57 AM Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote:\n> > My idea would be create a new tool working on archived WAL. No burden\n> > server side. Basic concept is:\n> >\n> > * parse archives\n> > * record latest relevant FPW for the incr backup\n> > * write new WALs with recorded FPW and removing/rewriting duplicated\n> > walrecords.\n> >\n> > It's just a PoC and I hadn't finished the WAL writing part...not even\n> > talking about the replay part. I'm not even sure this project is a good\n> > idea, but it is a good educational exercice to me in the meantime.\n> >\n> > Anyway, using real life OLTP production archives, my stats were:\n> >\n> > # WAL xlogrec kept Size WAL kept\n> > 127 39% 50%\n> > 383 22% 38%\n> > 639 20% 29%\n> >\n> > Based on this stats, I expect this would save a lot of time during recovery\n> > in a first step. If it get mature, it might even save a lot of archives\n> > space or extend the retention period with degraded granularity. It would\n> > even help taking full backups with a lower frequency.\n> >\n> > Any thoughts about this design would be much appreciated. I suppose this\n> > should be offlist or in a new thread to avoid polluting this thread as this\n> > is a slightly different subject. \n> \n> Interesting idea, but I don't see how it can work if you only deal\n> with the FPWs and not the other records. For instance, suppose that\n> you take a full backup at time T0, and then at time T1 there are two\n> modifications to a certain block in quick succession. That block is\n> then never touched again. 
Since no checkpoint intervenes between the\n> modifications, the first one emits an FPI and the second does not.\n> Capturing the FPI is fine as far as it goes, but unless you also do\n> something with the non-FPI change, you lose that second modification.\n> You could fix that by having your tool replicate the effects of WAL\n> apply outside the server, but that sounds like a ton of work and a ton\n> of possible bugs.\n\nIn my current design, the scan is done backward from end to start and I keep all\nthe records appearing after the last occurrence of their respective FPI.\n\nThe next challenge I have to address is dealing with multi-block records\nwhere some blocks need to be removed and others are FPIs to keep (eg. UPDATE).\n\n> I have a related idea, though. Suppose that, as Peter says upthread,\n> you have a replication slot that prevents old WAL from being removed.\n> You also have a background worker that is connected to that slot. It\n> decodes WAL and produces summary files containing all block-references\n> extracted from those WAL records and the associated LSN (or maybe some\n> approximation of the LSN instead of the exact value, to allow for\n> compression and combining of nearby references). Then you hold onto\n> those summary files after the actual WAL is removed. Now, when\n> somebody asks the server for all blocks changed since a certain LSN,\n> it can use those summary files to figure out which blocks to send\n> without having to read all the pages in the database. 
Although I\n> believe that a simple system that finds modified blocks by reading\n> them all is good enough for a first version of this feature and useful\n> in its own right, a more efficient system will be a lot more useful,\n> and something like this seems to me to be probably the best way to\n> implement it.\n\nSummary files look like what Andrey Borodin described as delta-files and\nchange maps.\n\n> With an approach based\n> on WAL-scanning, the work is done in the background and nobody has to\n> wait for it.\n\nAgree with this.\n\n\n",
"msg_date": "Wed, 10 Apr 2019 20:21:45 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
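The backward scan discussed in this exchange can be modeled in a few lines. This is a toy with a made-up record format (nothing like real xlogrecords, and it ignores multi-block records, the hard case mentioned above): walk the stream from end to start and keep, for each block, its last FPI plus every record that comes after it.

```python
def filter_records(records):
    """records: list of (lsn, block, is_fpi) tuples in WAL order.
    Returns the records worth keeping for a reduced replay stream."""
    keep = []
    blocks_done = set()  # blocks whose last FPI has been seen already
    for lsn, block, is_fpi in reversed(records):
        if block in blocks_done:
            continue            # older than the block's last FPI: redundant
        keep.append((lsn, block, is_fpi))
        if is_fpi:
            blocks_done.add(block)
    keep.reverse()
    return keep

wal = [(1, "A", True), (2, "A", False), (3, "A", True),
       (4, "B", True), (5, "A", False)]
print(filter_records(wal))  # [(3, 'A', True), (4, 'B', True), (5, 'A', False)]
```

Records 1 and 2 are dropped because block A's last FPI (record 3) supersedes them, which is the redundancy the tool exploits.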
{
"msg_contents": "On Wed, Apr 10, 2019 at 2:21 PM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n> In my current design, the scan is done backward from end to start and I keep all\n> the records appearing after the last occurrence of their respective FPI.\n\nOh, interesting. That seems like it would require pretty major\nsurgery on the WAL stream.\n\n> Summary files looks like what Andrey Borodin described as delta-files and\n> change maps.\n\nYep.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 10 Apr 2019 14:38:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-10 14:38:43 -0400, Robert Haas wrote:\n> On Wed, Apr 10, 2019 at 2:21 PM Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote:\n> > In my current design, the scan is done backward from end to start and I keep all\n> > the records appearing after the last occurrence of their respective FPI.\n> \n> Oh, interesting. That seems like it would require pretty major\n> surgery on the WAL stream.\n\nCan't you just read each segment forward, and then reverse? That's not\nthat much memory? And sure, there's some inefficient cases where records\nspan many segments, but that's rare enough that reading a few segments\nseveral times doesn't strike me as particularly bad?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 10 Apr 2019 11:55:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On 2019-04-10 17:31, Robert Haas wrote:\n> I think the way to think about this problem, or at least the way I\n> think about this problem, is that we need to decide whether want\n> file-level incremental backup, block-level incremental backup, or\n> byte-level incremental backup.\n\nThat is a great analysis. Seems like block-level is the preferred way\nforward.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 10 Apr 2019 21:42:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "\n\nOn 10.04.2019 19:51, Robert Haas wrote:\n> On Wed, Apr 10, 2019 at 10:22 AM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n>> Some times ago I have implemented alternative version of ptrack utility\n>> (not one used in pg_probackup)\n>> which detects updated block at file level. It is very simple and may be\n>> it can be sometimes integrated in master.\n> I don't think this is completely crash-safe. It looks like it\n> arranges to msync() the ptrack file at appropriate times (although I\n> haven't exhaustively verified the logic), but it uses MS_ASYNC, so\n> it's possible that the ptrack file could get updated on disk either\n> before or after the relation file itself. I think before is probably\n> OK -- it just risks having some blocks look modified when they aren't\n> really -- but after seems like it is very much not OK. And changing\n> this to use MS_SYNC would probably be really expensive. Likely a\n> better approach would be to hook into the new fsync queue machinery\n> that Thomas Munro added to PostgreSQL 12.\n\nI do not think that MS_SYNC or fsync queue is needed here.\nIf power failure or OS crash cause loose of some writes to ptrack map, \nthen in any case {ostgres will perform recovery and updating pages from \nWAL cause once again marking them in ptrack map. So as in case of CLOG \nand many other Postgres files it is not critical to loose some writes \nbecause them will be restored from WAL. And before truncating WAL, \nPostgres performs checkpoint which flushes all changes to the disk, \nincluding ptrack map updates.\n\n\n> It looks like your system maps all the blocks in the system into a\n> fixed-size map using hashing. If the number of modified blocks\n> between the full backup and the incremental backup is large compared\n> to the size of the ptrack map, you'll start to get a lot of\n> false-positives. It will look as if much of the database needs to be\n> backed up. 
For example, in your sample configuration, you have\n> ptrack_map_size = 1000003. If you've got a 100GB database with 20%\n> daily turnover, that's about 2.6 million blocks. If you set bump a\n> random entry ~2.6 million times in a map with 1000003 entries, on the\n> average ~92% of the entries end up getting bumped, so you will get\n> very little benefit from incremental backup. This problem drops off\n> pretty fast if you raise the size of the map, but it's pretty critical\n> that your map is large enough for the database you've got, or you may\n> as well not bother.\nThis is why ptrack block size should be larger than page size.\nAssume that it is 1Mb. 1MB is considered to be optimal amount of disk \nIO, when frequent seeks are not degrading read speed (it is most \ncritical for HDD). In other words reading 10 random pages (20%) from \nthis 1Mb block will takes almost the same amount of time (or even \nlonger) than reading all this 1Mb in one operation.\n\nThere will be just 100000 used entries in ptrack map with very small \nprobability of collision.\nActually I have chosen this size (1000003) for ptrack map because with \n1Mb block size is allows to map without noticable number of collisions \n1Tb database which seems to be enough for most Postgres installations. \nBut increasing ptrack map size 10 and even 100 times should not also \ncause problems with modern RAM sizes.\n\n>\n> It also appears that your system can't really handle resizing of the\n> map in any friendly way. 
So if your data size grows, you may be faced\n> with either letting the map become progressively less effective, or\n> throwing it out and losing all the data you have.\n>\n> None of that is to say that what you're presenting here has no value,\n> but I think it's possible to do better (and I think we should try).\n>\nDefinitely I didn't consider proposed patch as perfect solution and \ncertainly it requires improvements (and may be complete redesign).\nI just want to present this approach (maintaining hash of block's LSN in \nmapped memory) and keeping track of modified blocks at file level \n(unlike current ptrack implementation which logs changes in all places \nin Postgres code where data is updated).\n\nAlso, despite to the fact that this patch may be considered as raw \nprototype, I have spent some time thinking about all aspects of this \napproach including fault tolerance and false positives.\n\n\n\n",
"msg_date": "Wed, 10 Apr 2019 22:57:38 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
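Konstantin's sparsity argument can be checked with the same occupancy approximation used for the ~92% figure earlier in the thread. Under the 1MB-granularity assumption from this message, a 100GB database occupies about 102400 entries (close to the 100000 mentioned), and the estimated per-entry collision rate stays low. This is an illustrative approximation, not pg_probackup code:

```python
import math

N = 1000003                 # ptrack map entries
blocks = 100 * 1024         # 100GB tracked at 1MB granularity
load = blocks / N
false_positive = 1 - math.exp(-load)  # chance a given entry is also used
                                      # by some other 1MB block
print(f"load factor {load:.3f}, ~{false_positive:.1%} collision probability")
# load factor 0.102, ~9.7% collision probability
```

Roughly 10% of 1MB blocks spuriously flagged is a far cry from the ~92% saturation computed for 8kB granularity, which is the point of the message.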
{
"msg_contents": "On Wed, 10 Apr 2019 14:38:43 -0400\nRobert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Apr 10, 2019 at 2:21 PM Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote:\n> > In my current design, the scan is done backward from end to start and I\n> > keep all the records appearing after the last occurrence of their\n> > respective FPI. \n> \n> Oh, interesting. That seems like it would require pretty major\n> surgery on the WAL stream.\n\nIndeed.\n\nPresently, the surgery in my code is replacing redundant xlogrecord with noop.\n\nI have now to deal with muti-blocks records. So far, I tried to mark non-needed\nblock with !BKPBLOCK_HAS_DATA and made a simple patch in core to ignore such\nmarked blocks, but it doesn't play well with dependency between xlogrecord, eg.\nduring UPDATE. So my plan is to rewrite them to remove non-needed blocks using\neg. XLOG_FPI.\n\nAs I wrote, this is mainly an hobby project right now for my own education. Not\nsure where it leads me, but I learn a lot while working on it.\n\n\n",
"msg_date": "Wed, 10 Apr 2019 22:46:03 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Wed, 10 Apr 2019 11:55:51 -0700\nAndres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n> \n> On 2019-04-10 14:38:43 -0400, Robert Haas wrote:\n> > On Wed, Apr 10, 2019 at 2:21 PM Jehan-Guillaume de Rorthais\n> > <jgdr@dalibo.com> wrote: \n> > > In my current design, the scan is done backward from end to start and I\n> > > keep all the records appearing after the last occurrence of their\n> > > respective FPI. \n> > \n> > Oh, interesting. That seems like it would require pretty major\n> > surgery on the WAL stream. \n> \n> Can't you just read each segment forward, and then reverse?\n\nNot sure what you mean.\n\nI first look for the very last XLOG record by jumping to the last WAL and\nscanning it forward. \n\nThen, I do a backward from there to record LSN of xlogrecord to keep.\n\nFinally, I clone each WAL and edit them as needed (as described in my previous\nemail). This is my current WIP though.\n\n> That's not that much memory?\n\nI don't know, yet. I did not mesure it.\n\n\n",
"msg_date": "Wed, 10 Apr 2019 22:54:18 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 09:42:47PM +0200, Peter Eisentraut wrote:\n> That is a great analysis. Seems like block-level is the preferred way\n> forward.\n\nIn any solution related to incremental backups I have see from\ncommunity, all of them tend to prefer block-level backups per the\nfiltering which is possible based on the LSN of the page header. The\nholes in the middle of the page are also easier to handle so as an\nincremental page size is reduced in the actual backup. My preference\ntends toward a block-level approach if we were to do something in this\narea, though I fear that performance will be bad if we begin to scan\nall the relation files to fetch a set of blocks since a past LSN.\nHence we need some kind of LSN map so as it is possible to skip a\none block or a group of blocks (say one LSN every 8/16 blocks for\nexample) at once for a given relation if the relation is mostly\nread-only.\n--\nMichael",
"msg_date": "Thu, 11 Apr 2019 13:22:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 12:22 AM Michael Paquier <michael@paquier.xyz> wrote:\n> incremental page size is reduced in the actual backup. My preference\n> tends toward a block-level approach if we were to do something in this\n> area, though I fear that performance will be bad if we begin to scan\n> all the relation files to fetch a set of blocks since a past LSN.\n> Hence we need some kind of LSN map so as it is possible to skip a\n> one block or a group of blocks (say one LSN every 8/16 blocks for\n> example) at once for a given relation if the relation is mostly\n> read-only.\n\nSo, in this thread, I want to focus on the UI and how the incremental\nbackup is stored on disk. Making the process of identifying modified\nblocks efficient is the subject of\nhttp://postgr.es/m/CA+TgmoahOeuuR4pmDP1W=JnRyp4fWhynTOsa68BfxJq-qB_53A@mail.gmail.com\n\nOver there, the merits of what you are describing here and the\ncompeting approaches are under discussion.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 11 Apr 2019 09:45:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "09.04.2019 18:48, Robert Haas writes:\n> Thoughts?\nHi,\nThank you for bringing that up.\nIn-core support of incremental backups is a long-awaited feature.\nHopefully, this take will end up committed in PG13.\n\nSpeaking of UI:\n1) I agree that it should be implemented as a new replication command.\n\n2) There should be a command to get only a map of changes without actual \ndata.\n\nMost backup tools establish a server connection, so they can use this \nprotocol to get the list of changed blocks.\nThen they can use this information for any purpose. For example, \ndistribute files between parallel workers to copy the data,\nor estimate backup size before data is sent, or store metadata \nseparately from the data itself.\nMost methods (except straightforward LSN comparison) consist of two \nsteps: get a map of changes and read blocks.\nSo it won't add much of extra work.\n\nexample commands:\nGET_FILELIST [lsn]\nreturning json (or whatever) with filenames and maps of changed blocks\n\nMap format is also the subject of discussion.\nNow in pg_probackup we reuse code from pg_rewind/datapagemap,\nnot sure if this format is good for sending data via the protocol, though.\n\n3) The API should provide functions to request data with a granularity \nof file and block.\nIt will be useful for parallelism and for various future projects.\n\nexample commands:\nGET_DATAFILE [filename [map of blocks] ]\nGET_DATABLOCK [filename] [blkno]\nreturning data in some format\n\n4) The algorithm of collecting changed blocks is another topic.\nThough, its API should be discussed here:\n\nDo we want to have multiple implementations?\nPersonally, I think that it's good to provide several strategies,\nsince they have different requirements and fit for different workloads.\n\nMaybe we can add a hook to allow custom implementations.\n\nDo we want to allow the backup client to tell what block collection \nmethod to use?\nexample commands:\nGET_FILELIST [lsn] [METHOD lsn | page | ptrack | etc]\nOr should it be a server-side cost-based decision?\n\n5) The method based on LSN comparison stands out - it can be done in one \npass.\nSo it probably requires special protocol commands.\nfor example:\nGET_DATAFILES [lsn]\nGET_DATAFILE [filename] [lsn]\n\nThis is pretty simple to implement and pg_basebackup can use this method,\nat least until we have something more advanced in-core.\n\nI'll be happy to help with design, code, review, and testing.\nHope that my experience with pg_probackup will be useful.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Thu, 11 Apr 2019 20:29:29 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> Several companies, including EnterpriseDB, NTT, and Postgres Pro, have\n> developed technology that permits a block-level incremental backup to\n> be taken from a PostgreSQL server. I believe the idea in all of those\n> cases is that non-relation files should be backed up in their\n> entirety, but for relation files, only those blocks that have been\n> changed need to be backed up.\n\nI love the general idea of having additional facilities in core to\nsupport block-level incremental backups. I've long been unhappy that\nany such approach ends up being limited to a subset of the files which\nneed to be included in the backup, meaning the rest of the files have to\nbe backed up in their entirety. I don't think we have to solve for that\nas part of this, but I'd like to see a discussion for how to deal with\nthe other files which are being backed up to avoid needing to just\nwholesale copy them.\n\n> I would like to propose that we should\n> have a solution for this problem in core, rather than leaving it to\n> each individual PostgreSQL company to develop and maintain their own\n> solution. \n\nI'm certainly a fan of improving our in-core backup solutions.\n\nI'm quite concerned that trying to graft this on to pg_basebackup\n(which, as you note later, is missing an awful lot of what users expect\nfrom a real backup solution already- retention handling, parallel\ncapabilities, WAL archive management, and many more... but also is just\nnot nearly as developed a tool as the external solutions) is going to\nmake things unnecessarily difficult when what we really want here is\nbetter support from core for block-level incremental backup for the\nexisting external tools to leverage.\n\nPerhaps there's something here which can be done with pg_basebackup to\nhave it work with the block-level approach, but I certainly don't see\nit as a natural next step for it and really does seem like limiting the\nway this is implemented to something that pg_basebackup can easily\ndigest might make it less useful for the more developed tools.\n\nAs an example, I believe all of the other tools mentioned (at least,\nthose that are open source I'm pretty sure all do) support parallel\nbackup and therefore having a way to get the block-level changes in a\nparallel fashion would be a pretty big thing that those tools will want\nand pg_basebackup is single-threaded today and this proposal doesn't\nseem to be contemplating changing that, implying that a serial-based\nblock-level protocol would be fine but that'd be a pretty awful\nrestriction for the other tools.\n\n> Generally my idea is:\n> \n> 1. There should be a way to tell pg_basebackup to request from the\n> server only those blocks where LSN >= threshold_value. There are\n> several possible ways for the server to implement this, the simplest\n> of which is to just scan all the blocks and send only the ones that\n> satisfy that criterion. That might sound dumb, but it does still save\n> network bandwidth, and it works even without any prior setup. It will\n> probably be more efficient in many cases to instead scan all the WAL\n> generated since that LSN and extract block references from it, but\n> that is only possible if the server has all of that WAL available or\n> can somehow get it from the archive. We could also, as several people\n> have proposed previously, have some kind of additional relation for\n> that stores either a single is-modified bit -- which only helps if the\n> reference LSN for the is-modified bit is older than the requested LSN\n> but not too much older -- or the highest LSN for each range of K\n> blocks, or something like that. I am at the moment not too concerned\n> with the exact strategy we use here. I believe we may want to\n> eventually support more than one, since they have different\n> trade-offs.\n\nThis part of the discussion is another example of how we're limiting\nourselves in this implementation to the \"pg_basebackup can work with\nthis\" case- by only considering the options of \"scan all the files\" or\n\"use the WAL- if the request is for WAL we have available on the\nserver.\" The other backup solutions mentioned in your initial email,\nand others that weren't, have a WAL archive which includes a lot more\nWAL than just what the primary currently has. When I've thought about\nhow WAL could be used to build a differential or incremental backup, the\nquestion of \"do we have all the WAL we need\" hasn't ever been a\nconsideration- because the backup tool manages the WAL archive and has\nWAL going back across, most likely, weeks or even months. Having a tool\nwhich can essentially \"compress\" WAL would be fantastic and would be\nable to be leveraged by all of the different backup solutions.\n\n> 2. When you use pg_basebackup in this way, each relation file that is\n> not sent in its entirety is replaced by a file with a different name.\n> For example, instead of base/16384/16417, you might get\n> base/16384/partial.16417 or however we decide to name them. Each such\n> file will store near the beginning of the file a list of all the\n> blocks contained in that file, and the blocks themselves will follow\n> at offsets that can be predicted from the metadata at the beginning of\n> the file. The idea is that you shouldn't have to read the whole file\n> to figure out which blocks it contains, and if you know specifically\n> what blocks you want, you should be able to reasonably efficiently\n> read just those blocks. A backup taken in this manner should also\n> probably create some kind of metadata file in the root directory that\n> stops the server from starting and lists other salient details of the\n> backup. In particular, you need the threshold LSN for the backup\n> (i.e. contains blocks newer than this) and the start LSN for the\n> backup (i.e. the LSN that would have been returned from\n> pg_start_backup).\n\nTwo things here- having some file that \"stops the server from starting\"\nis just going to cause a lot of pain, in my experience. Users do a lot\nof really rather.... curious things, and then come asking questions\nabout them, and removing the file that stopped the server from starting\nis going to quickly become one of those questions on stack overflow that\npeople just follow the highest-ranked question for, even though everyone\nwho follows this list will know that doing so results in corruption of\nthe database.\n\nAn alternative approach in developing this feature would be to have\npg_basebackup have an option to run against an *existing* backup, with\nthe entire point being that the existing backup is updated with these\nincremental changes, instead of having some independent tool which takes\nthe result of multiple pg_basebackup runs and then combines them.\n\nAn alternative tool might be one which simply reads the WAL and keeps\ntrack of the FPIs and the updates and then eliminates any duplication\nwhich exists in the set of WAL provided (that is, multiple FPIs for the\nsame page would be merged into one, and only the delta changes to that\npage are preserved, across the entire set of WAL being combined). Of\ncourse, that's complicated by having to deal with the other files in the\ndatabase, so it wouldn't really work on its own.\n\n> 3. There should be a new tool that knows how to merge a full backup\n> with any number of incremental backups and produce a complete data\n> directory with no remaining partial files. The tool should check that\n> the threshold LSN for each incremental backup is less than or equal to\n> the start LSN of the previous backup; if not, there may be changes\n> that happened in between which would be lost, so combining the backups\n> is unsafe. Running this tool can be thought of either as restoring\n> the backup or as producing a new synthetic backup from any number of\n> incremental backups. This would allow for a strategy of unending\n> incremental backups. For instance, on day 1, you take a full backup.\n> On every subsequent day, you take an incremental backup. On day 9,\n> you run pg_combinebackup day1 day2 -o full; rm -rf day1 day2; mv full\n> day2. On each subsequent day you do something similar. Now you can\n> always roll back to any of the last seven days by combining the oldest\n> backup you have (which is always a synthetic full backup) with as many\n> newer incrementals as you want, up to the point where you want to\n> stop.\n\nI'd really prefer that we avoid adding in another low-level tool like\nthe one described here. Users, imv anyway, don't want to deal with\n*more* tools for handling this aspect of backup/recovery. If we had a\ntool in core today which managed multiple backups, kept track of them,\nand all of the WAL during and between them, then we could add options to\nthat tool to do what's being described here in a way that makes sense\nand provides a good interface to users. I don't know that we're going\nto be able to do that with pg_basebackup when, really, the goal here\nisn't actually to make pg_basebackup into an enterprise backup tool,\nit's to make things easier for the external tools to do block-level\nbackups.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 15 Apr 2019 09:01:11 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 09:01:11AM -0400, Stephen Frost wrote:\n> Greetings,\n> \n> * Robert Haas (robertmhaas@gmail.com) wrote:\n> > Several companies, including EnterpriseDB, NTT, and Postgres Pro, have\n> > developed technology that permits a block-level incremental backup to\n> > be taken from a PostgreSQL server. I believe the idea in all of those\n> > cases is that non-relation files should be backed up in their\n> > entirety, but for relation files, only those blocks that have been\n> > changed need to be backed up.\n> \n> I love the general idea of having additional facilities in core to\n> support block-level incremental backups. I've long been unhappy that\n> any such approach ends up being limited to a subset of the files which\n> need to be included in the backup, meaning the rest of the files have to\n> be backed up in their entirety. I don't think we have to solve for that\n> as part of this, but I'd like to see a discussion for how to deal with\n> the other files which are being backed up to avoid needing to just\n> wholesale copy them.\n\nI assume you are talking about non-heap/index files. Which of those are\nlarge enough to benefit from incremental backup?\n\n> > I would like to propose that we should\n> > have a solution for this problem in core, rather than leaving it to\n> > each individual PostgreSQL company to develop and maintain their own\n> > solution. \n> \n> I'm certainly a fan of improving our in-core backup solutions.\n> \n> I'm quite concerned that trying to graft this on to pg_basebackup\n> (which, as you note later, is missing an awful lot of what users expect\n> from a real backup solution already- retention handling, parallel\n> capabilities, WAL archive management, and many more... but also is just\n> not nearly as developed a tool as the external solutions) is going to\n> make things unnecessairly difficult when what we really want here is\n> better support from core for block-level incremental backup for the\n> existing external tools to leverage.\n\nI think there is some interesting complexity brought up in this thread. \nWhich options are going to minimize storage I/O, network I/O, have only\nbackground overhead, allow parallel operation, integrate with\npg_basebackup. Eventually we will need to evaluate the incremental\nbackup options against these criteria.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 15 Apr 2019 12:48:57 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 1:29 PM Anastasia Lubennikova\n<a.lubennikova@postgrespro.ru> wrote:\n> 2) There should be a command to get only a map of changes without actual\n> data.\n\nGood idea.\n\n> 4) The algorithm of collecting changed blocks is another topic.\n> Though, it's API should be discussed here:\n>\n> Do we want to have multiple implementations?\n> Personally, I think that it's good to provide several strategies,\n> since they have different requirements and fit for different workloads.\n>\n> Maybe we can add a hook to allow custom implementations.\n\nI'm not sure a hook is going to be practical, but I do think we want\nmore than one strategy.\n\n> I'll be happy to help with design, code, review, and testing.\n> Hope that my experience with pg_probackup will be useful.\n\nGreat, thanks!\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 15 Apr 2019 14:14:31 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 9:01 AM Stephen Frost <sfrost@snowman.net> wrote:\n> I love the general idea of having additional facilities in core to\n> support block-level incremental backups. I've long been unhappy that\n> any such approach ends up being limited to a subset of the files which\n> need to be included in the backup, meaning the rest of the files have to\n> be backed up in their entirety. I don't think we have to solve for that\n> as part of this, but I'd like to see a discussion for how to deal with\n> the other files which are being backed up to avoid needing to just\n> wholesale copy them.\n\nIdeas? Generally, I don't think that anything other than the main\nforks of relations are worth worrying about, because the files are too\nsmall to really matter. Even if they're big, the main forks of\nrelations will be much bigger. I think.\n\n> I'm quite concerned that trying to graft this on to pg_basebackup\n> (which, as you note later, is missing an awful lot of what users expect\n> from a real backup solution already- retention handling, parallel\n> capabilities, WAL archive management, and many more... but also is just\n> not nearly as developed a tool as the external solutions) is going to\n> make things unnecessairly difficult when what we really want here is\n> better support from core for block-level incremental backup for the\n> existing external tools to leverage.\n>\n> Perhaps there's something here which can be done with pg_basebackup to\n> have it work with the block-level approach, but I certainly don't see\n> it as a natural next step for it and really does seem like limiting the\n> way this is implemented to something that pg_basebackup can easily\n> digest might make it less useful for the more developed tools.\n\nI agree that there are a bunch of things that pg_basebackup does not\ndo, such as backup management. I think a lot of users do not want\nPostgreSQL to do backup management for them. They have an existing\nsolution that they use to manage backups, and they want PostgreSQL to\ninteroperate with it. I think it makes sense for pg_basebackup to be\nin charge of taking the backup, and then other tools can either use it\nas a building block or use the streaming replication protocol to send\napproximately the same commands to the server. I certainly would not\nwant to expose server capabilities that let you take an incremental\nbackup and NOT teach pg_basebackup to use them -- then we'd be in a\nsituation of saying that PostgreSQL has incremental backup, but you\nhave to get external tool XYZ to use it. That will be perceived as\nPostgreSQL does NOT have incremental backup and this external tool\nadds it.\n\n> As an example, I believe all of the other tools mentioned (at least,\n> those that are open source I'm pretty sure all do) support parallel\n> backup and therefore having a way to get the block-level changes in a\n> parallel fashion would be a pretty big thing that those tools will want\n> and pg_basebackup is single-threaded today and this proposal doesn't\n> seem to be contemplating changing that, implying that a serial-based\n> block-level protocol would be fine but that'd be a pretty awful\n> restriction for the other tools.\n\nI mentioned this exact issue in my original email. I spoke positively\nof it. But I think it is different from what is being proposed here.\nWe could have parallel backup without incremental backup, and that\nwould be a good feature. We could have parallel backup without full\nbackup, and that would also be a good feature. We could also have\nboth, which would be best of all. I don't see that my proposal throws\nup any architectural obstacle to parallelism. I assume parallel\nbackup, whether full or incremental, would be implemented by dividing\nup the files that need to be sent across the available connections; if\nincremental backup exists, each connection then has to decide whether\nto send the whole file or only part of it.\n\n> This part of the discussion is a another example of how we're limiting\n> ourselves in this implementation to the \"pg_basebackup can work with\n> this\" case- by only consideration the options of \"scan all the files\" or\n> \"use the WAL- if the request is for WAL we have available on the\n> server.\" The other backup solutions mentioned in your initial email,\n> and others that weren't, have a WAL archive which includes a lot more\n> WAL than just what the primary currently has. When I've thought about\n> how WAL could be used to build a differential or incremental backup, the\n> question of \"do we have all the WAL we need\" hasn't ever been a\n> consideration- because the backup tool manages the WAL archive and has\n> WAL going back across, most likely, weeks or even months. Having a tool\n> which can essentially \"compress\" WAL would be fantastic and would be\n> able to be leveraged by all of the different backup solutions.\n\nI don't think this is a case of limiting ourselves; I think it's a\ncase of keeping separate considerations properly separate. As I said\nin my original email, the client doesn't really need to know how the\nserver is identifying the blocks that have been modified. That is the\nserver's job. I started a separate thread on the WAL-scanning\napproach, so we should take that part of the discussion over there. I\nsee no reason why the server couldn't be taught to reach back into an\navailable archive for WAL that it no longer has locally, but that's\nreally independent of the design ideas being discussed on this thread.\n\n> Two things here- having some file that \"stops the server from starting\"\n> is just going to cause a lot of pain, in my experience. Users do a lot\n> of really rather.... curious things, and then come asking questions\n> about them, and removing the file that stopped the server from starting\n> is going to quickly become one of those questions on stack overflow that\n> people just follow the highest-ranked question for, even though everyone\n> who follows this list will know that doing so results in corruption of\n> the database.\n\nWait, you want to make it maximally easy for users to start the server\nin a state that is 100% certain to result in a corrupted and unusable\ndatabase? Why?? I'd like to make that a tiny bit difficult. If\nthey really want a corrupted database, they can remove the file.\n\n> An alternative approach in developing this feature would be to have\n> pg_basebackup have an option to run against an *existing* backup, with\n> the entire point being that the existing backup is updated with these\n> incremental changes, instead of having some independent tool which takes\n> the result of multiple pg_basebackup runs and then combines them.\n\nThat would be really unsafe, because if the tool is interrupted before\nit finishes (and fsyncs everything), you no longer have any usable\nbackup. It also doesn't lend itself to several of the scenarios I\ndescribed in my original email -- like endless incrementals that are\nmerged into the full backup after some number of days -- a capability\nupon which others have already remarked positively.\n\n> An alternative tool might be one which simply reads the WAL and keeps\n> track of the FPIs and the updates and then eliminates any duplication\n> which exists in the set of WAL provided (that is, multiple FPIs for the\n> same page would be merged into one, and only the delta changes to that\n> page are preserved, across the entire set of WAL being combined). Of\ncourse, that's complicated by having to deal with the other files in the\ndatabase, so it wouldn't really work on its own.\n\nYou've jumped back to solving the server's problem (which blocks\nshould I send?) rather than the client's problem (what does an\nincremental backup look like once I've taken it and how do I manage\nand restore them?). It does seem possible to figure out the contents\nof modified blocks strictly from looking at the WAL, without any\nexamination of the current database contents. However, it also seems\nvery complicated, because the tool that is figuring out the current\nblock contents just by looking at the WAL would have to know how to\napply any type of WAL record, not just one that contains an FPI. And\nI really don't want to build a client-side tool that knows how to\napply WAL.\n\n> I'd really prefer that we avoid adding in another low-level tool like\n> the one described here. Users, imv anyway, don't want to deal with\n> *more* tools for handling this aspect of backup/recovery. If we had a\n> tool in core today which managed multiples backups, kept track of them,\n> and all of the WAL during and between them, then we could add options to\n> that tool to do what's being described here in a way that makes sense\n> and provides a good interface to users. I don't know that we're going\n> to be able to do that with pg_basebackup when, really, the goal here\n> isn't actually to make pg_basebackup into an enterprise backup tool,\n> it's to make things easier for the external tools to do block-level\n> backups.\n\nWell, I agree with you that the goal is not to make pg_basebackup an\nenterprise backup tool. However, I don't see teaching it to take\nincremental backups as opposed to that goal. I think backup\nmanagement and retention should remain firmly outside the purview of\npg_basebackup and left either to some other in-core tool or maybe even\nto out-of-core tools. However, I don't see any reason why the\ntask of taking an incremental and/or parallel backup should also be\nleft to another tool.\n\nThere is a very close relationship between the thing that\npg_basebackup already does (copy everything) and the thing that we\nwant to do here (copy everything except blocks that we know haven't\nchanged). If we made it the job of some other tool to take parallel\nand/or incremental backups, that other tool would need to reimplement\na lot of things that pg_basebackup has already got, like tar vs. plain\nformat, fast vs. spread checkpoint, rate-limiting, compression levels,\netc. That seems like a waste. Better to give pg_basebackup the\ncapability to do those things, and then any backup management tool\nthat anyone writes can take advantage of those capabilities.\n\nI come at this, BTW, from the perspective of having just spent a bunch\nof time working on EDB's Backup And Recovery Tool (BART). That tool\nworks in exactly the manner you seem to be advocating: it knows how to\ndo incremental and parallel full backups, and it also does backup\nmanagement. However, this has not turned out to be the best division\nof labor. People who don't want to use the backup management\ncapabilities may still want the parallel or incremental backup\ncapabilities, and if all of that is within the envelope of an\n\"enterprise backup tool,\" they don't have that option. So I want to\nsplit it up. I want pg_basebackup to take all the kinds of backups\nthat PostgreSQL supports -- full, incremental, parallel, serial,\nwhatever -- and I want some other tool -- pgBackRest, BART, barman, or\nsome yet-to-be-invented core thing to do the management of those\nbackups. Then everybody can use exactly the bits they want.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 15 Apr 2019 14:52:32 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Mon, Apr 15, 2019 at 09:01:11AM -0400, Stephen Frost wrote:\n> > * Robert Haas (robertmhaas@gmail.com) wrote:\n> > > Several companies, including EnterpriseDB, NTT, and Postgres Pro, have\n> > > developed technology that permits a block-level incremental backup to\n> > > be taken from a PostgreSQL server. I believe the idea in all of those\n> > > cases is that non-relation files should be backed up in their\n> > > entirety, but for relation files, only those blocks that have been\n> > > changed need to be backed up.\n> > \n> > I love the general idea of having additional facilities in core to\n> > support block-level incremental backups. I've long been unhappy that\n> > any such approach ends up being limited to a subset of the files which\n> > need to be included in the backup, meaning the rest of the files have to\n> > be backed up in their entirety. I don't think we have to solve for that\n> > as part of this, but I'd like to see a discussion for how to deal with\n> > the other files which are being backed up to avoid needing to just\n> > wholesale copy them.\n> \n> I assume you are talking about non-heap/index files. Which of those are\n> large enough to benefit from incremental backup?\n\nBased on discussions I had with Andrey, specifically the visibility map\nis an issue for them with WAL-G. I haven't spent a lot of time thinking\nabout it, but I can understand how that could be an issue.\n\n> > I'm quite concerned that trying to graft this on to pg_basebackup\n> > (which, as you note later, is missing an awful lot of what users expect\n> > from a real backup solution already- retention handling, parallel\n> > capabilities, WAL archive management, and many more... but also is just\n> > not nearly as developed a tool as the external solutions) is going to\n> > make things unnecessairly difficult when what we really want here is\n> > better support from core for block-level incremental backup for the\n> > existing external tools to leverage.\n> \n> I think there is some interesting complexity brought up in this thread. \n> Which options are going to minimize storage I/O, network I/O, have only\n> background overhead, allow parallel operation, integrate with\n> pg_basebackup. Eventually we will need to evaluate the incremental\n> backup options against these criteria.\n\nThis presumes that we're going to have multiple competing incremental\nbackup options presented, doesn't it? Are you aware of another effort\ngoing on which aims for inclusion in core? There's been past attempts\nmade, but I don't believe there's anyone else currently planning to or\nworking on something for inclusion in core.\n\nJust to be clear- we're not currently working on one, but I'd really\nlike to see core provide good support for incremental block-level backup\nso that we can leverage it when it is there.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 16 Apr 2019 17:44:32 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Tue, Apr 16, 2019 at 5:44 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > I love the general idea of having additional facilities in core to\n> > > support block-level incremental backups. I've long been unhappy that\n> > > any such approach ends up being limited to a subset of the files which\n> > > need to be included in the backup, meaning the rest of the files have to\n> > > be backed up in their entirety. I don't think we have to solve for that\n> > > as part of this, but I'd like to see a discussion for how to deal with\n> > > the other files which are being backed up to avoid needing to just\n> > > wholesale copy them.\n> >\n> > I assume you are talking about non-heap/index files. Which of those are\n> > large enough to benefit from incremental backup?\n>\n> Based on discussions I had with Andrey, specifically the visibility map\n> is an issue for them with WAL-G. I haven't spent a lot of time thinking\n> about it, but I can understand how that could be an issue.\n\nIf I understand correctly, the VM contains 1 byte per 4 heap pages and\nthe FSM contains 1 byte per heap page (plus some overhead for higher\nlevels of the tree). Since the FSM is not WAL-logged, I'm not sure\nthere's a whole lot we can do to avoid having to back it up, although\nmaybe there's some clever idea I'm not quite seeing. The VM is\nWAL-logged, albeit with some strange warts that I have the honor of\ninventing, so there's more possibilities there.\n\nBefore worrying about it too much, it would be useful to hear more\nabout the concerns related to these forks, so that we make sure we're\nsolving the right problem. It seems difficult for a single relation\nto be big enough for these to be much of an issue. For example, on a\n1TB relation, we have 2^40 bytes = 2^27 pages = ~2^25 bytes of VM fork\n= 32MB. Not nothing, but 32MB of useless overhead every time you back\nup a 1TB database probably isn't going to break the bank. 
It might be\nmore of a concern for users with many small tables. For example, if\nsomebody has got a million tables with 1 page in each one, they'll\nhave a million data pages, a million VM pages, and 3 million FSM pages\n(unless the new don't-create-the-FSM-for-small-tables stuff in v12\nkicks in). I don't know if it's worth going to a lot of trouble to\noptimize that case. Creating a million tables with 100 tuples (or\nwhatever) in each one sounds like terrible database design to me.\n\n> > > I'm quite concerned that trying to graft this on to pg_basebackup\n> > > (which, as you note later, is missing an awful lot of what users expect\n> > > from a real backup solution already- retention handling, parallel\n> > > capabilities, WAL archive management, and many more... but also is just\n> > > not nearly as developed a tool as the external solutions) is going to\n> > > make things unnecessarily difficult when what we really want here is\n> > > better support from core for block-level incremental backup for the\n> > > existing external tools to leverage.\n> >\n> > I think there is some interesting complexity brought up in this thread.\n> > Which options are going to minimize storage I/O, network I/O, have only\n> > background overhead, allow parallel operation, integrate with\n> > pg_basebackup. Eventually we will need to evaluate the incremental\n> > backup options against these criteria.\n>\n> This presumes that we're going to have multiple competing incremental\n> backup options presented, doesn't it? Are you aware of another effort\n> going on which aims for inclusion in core? There's been past attempts\n> made, but I don't believe there's anyone else currently planning to or\n> working on something for inclusion in core.\n\nYeah, I really hope we don't end up with dueling patches. I want to\ncome up with an approach that can be widely-endorsed and then have\neverybody rowing in the same direction. 
On the other hand, I do think\nthat we may support multiple options in certain places which may have\nthe kinds of trade-offs that Bruce mentions. For instance,\nidentifying changed blocks by scanning the whole cluster and checking\nthe LSN of each block has an advantage in that it requires no prior\nsetup or extra configuration. Like a sequential scan, it always\nworks, and that is an advantage. Of course, for many people, the\ncompeting advantage of a WAL-scanning approach that can save a lot of\nI/O will appear compelling, but maybe not for everyone. I think\nthere's room for two or three approaches there -- not in the sense of\ncompeting patches, but in the sense of giving users a choice based on\ntheir needs.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 16 Apr 2019 18:40:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
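[Editor's note: the fork-size arithmetic in the message above can be sanity-checked with a short sketch. All names here are hypothetical; it assumes 8 kB blocks, a VM byte per 4 heap pages, one FSM leaf byte per heap page, and ignores page headers and the FSM's upper tree levels.]

```python
# Rough sanity check of the VM/FSM fork sizes discussed above.
BLCKSZ = 8192  # default PostgreSQL block size

def fork_sizes(relation_bytes):
    heap_pages = relation_bytes // BLCKSZ
    vm_bytes = heap_pages // 4      # 1 byte covers 4 heap pages (2 bits each)
    fsm_bytes = heap_pages          # leaf entries only; tree overhead ignored
    return vm_bytes, fsm_bytes

vm, fsm = fork_sizes(1 << 40)       # a 1 TB relation
print(vm >> 20, fsm >> 20)          # → 32 128 (MB)
```

So a 1 TB relation carries roughly 32 MB of VM and 128 MB of FSM, matching the figures quoted in the thread.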
{
"msg_contents": "On Tue, Apr 16, 2019 at 06:40:44PM -0400, Robert Haas wrote:\n> Yeah, I really hope we don't end up with dueling patches. I want to\n> come up with an approach that can be widely-endorsed and then have\n> everybody rowing in the same direction. On the other hand, I do think\n> that we may support multiple options in certain places which may have\n> the kinds of trade-offs that Bruce mentions. For instance,\n> identifying changed blocks by scanning the whole cluster and checking\n> the LSN of each block has an advantage in that it requires no prior\n> setup or extra configuration. Like a sequential scan, it always\n> works, and that is an advantage. Of course, for many people, the\n> competing advantage of a WAL-scanning approach that can save a lot of\n> I/O will appear compelling, but maybe not for everyone. I think\n> there's room for two or three approaches there -- not in the sense of\n> competing patches, but in the sense of giving users a choice based on\n> their needs.\n\nWell, by having a separate modblock file for each WAL file, you can keep\nboth WAL and modblock files and use the modblock list to pull pages from\neach WAL file, or from the heap/index files, and it can be done in\nparallel. Having WAL and modblock files in the same directory makes\nretention simpler.\n\nIn fact, you can do an incremental backup just using the modblock files\nand the heap/index files, so you don't even need the WAL.\n\nAlso, instead of storing the file name and block number in the modblock\nfile, using the database oid, relfilenode, and block number (3 int32\nvalues) should be sufficient.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 17 Apr 2019 11:57:35 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
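[Editor's note: the per-WAL-file "modblock" record described above — database oid, relfilenode, and block number as three int32 values — might look like the following sketch. The on-disk layout, byte order, and function names are assumptions for illustration only.]

```python
# A sketch of modblock files: fixed-width records of
# (dboid, relfilenode, blkno), deduplicated, and merged by set union
# so one incremental backup can span many WAL segments.
import struct

RECORD = struct.Struct("<III")   # dboid, relfilenode, blkno (assumed layout)

def write_modblocks(path, blocks):
    with open(path, "wb") as f:
        for rec in sorted(set(blocks)):   # dedup within one WAL file
            f.write(RECORD.pack(*rec))

def read_modblocks(path):
    with open(path, "rb") as f:
        data = f.read()
    return {RECORD.unpack_from(data, off)
            for off in range(0, len(data), RECORD.size)}

def merge(paths):
    """Union the block sets of several modblock files."""
    changed = set()
    for p in paths:
        changed |= read_modblocks(p)
    return changed
```

Merging in this scheme is cheap and order-independent, which is what makes the "keep one modblock file per WAL file, retain both together" retention model attractive.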
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Tue, Apr 16, 2019 at 5:44 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > > I love the general idea of having additional facilities in core to\n> > > > support block-level incremental backups. I've long been unhappy that\n> > > > any such approach ends up being limited to a subset of the files which\n> > > > need to be included in the backup, meaning the rest of the files have to\n> > > > be backed up in their entirety. I don't think we have to solve for that\n> > > > as part of this, but I'd like to see a discussion for how to deal with\n> > > > the other files which are being backed up to avoid needing to just\n> > > > wholesale copy them.\n> > >\n> > > I assume you are talking about non-heap/index files. Which of those are\n> > > large enough to benefit from incremental backup?\n> >\n> > Based on discussions I had with Andrey, specifically the visibility map\n> > is an issue for them with WAL-G. I haven't spent a lot of time thinking\n> > about it, but I can understand how that could be an issue.\n> \n> If I understand correctly, the VM contains 1 byte per 4 heap pages and\n> the FSM contains 1 byte per heap page (plus some overhead for higher\n> levels of the tree). Since the FSM is not WAL-logged, I'm not sure\n> there's a whole lot we can do to avoid having to back it up, although\n> maybe there's some clever idea I'm not quite seeing. The VM is\n> WAL-logged, albeit with some strange warts that I have the honor of\n> inventing, so there's more possibilities there.\n> \n> Before worrying about it too much, it would be useful to hear more\n> about the concerns related to these forks, so that we make sure we're\n> solving the right problem. It seems difficult for a single relation\n> to be big enough for these to be much of an issue. For example, on a\n> 1TB relation, we have 2^40 bytes = 2^27 pages = ~2^25 bytes of VM fork\n> = 32MB. 
Not nothing, but 32MB of useless overhead every time you back\n> up a 1TB database probably isn't going to break the bank. It might be\n> more of a concern for users with many small tables. For example, if\n> somebody has got a million tables with 1 page in each one, they'll\n> have a million data pages, a million VM pages, and 3 million FSM pages\n> (unless the new don't-create-the-FSM-for-small-tables stuff in v12\n> kicks in). I don't know if it's worth going to a lot of trouble to\n> optimize that case. Creating a million tables with 100 tuples (or\n> whatever) in each one sounds like terrible database design to me.\n\nAs I understand it, the problem is not with backing up an individual\ndatabase or cluster, but rather dealing with backing up thousands of\nindividual clusters with thousands of tables in each, leading to an\nawful lot of tables with lots of FSMs/VMs, all of which end up having to\nget copied and stored wholesale. I'll point this thread out to him and\nhopefully he'll have a chance to share more specific information.\n\n> > > > I'm quite concerned that trying to graft this on to pg_basebackup\n> > > > (which, as you note later, is missing an awful lot of what users expect\n> > > > from a real backup solution already- retention handling, parallel\n> > > > capabilities, WAL archive management, and many more... but also is just\n> > > > not nearly as developed a tool as the external solutions) is going to\n> > > > make things unnecessarily difficult when what we really want here is\n> > > > better support from core for block-level incremental backup for the\n> > > > existing external tools to leverage.\n> > >\n> > > I think there is some interesting complexity brought up in this thread.\n> > > Which options are going to minimize storage I/O, network I/O, have only\n> > > background overhead, allow parallel operation, integrate with\n> > > pg_basebackup. 
Eventually we will need to evaluate the incremental\n> > > backup options against these criteria.\n> >\n> > This presumes that we're going to have multiple competing incremental\n> > backup options presented, doesn't it? Are you aware of another effort\n> > going on which aims for inclusion in core? There's been past attempts\n> > made, but I don't believe there's anyone else currently planning to or\n> > working on something for inclusion in core.\n> \n> Yeah, I really hope we don't end up with dueling patches. I want to\n> come up with an approach that can be widely-endorsed and then have\n> everybody rowing in the same direction. On the other hand, I do think\n> that we may support multiple options in certain places which may have\n> the kinds of trade-offs that Bruce mentions. For instance,\n> identifying changed blocks by scanning the whole cluster and checking\n> the LSN of each block has an advantage in that it requires no prior\n> setup or extra configuration. Like a sequential scan, it always\n> works, and that is an advantage. Of course, for many people, the\n> competing advantage of a WAL-scanning approach that can save a lot of\n> I/O will appear compelling, but maybe not for everyone. I think\n> there's room for two or three approaches there -- not in the sense of\n> competing patches, but in the sense of giving users a choice based on\n> their needs.\n\nI can agree with the idea of having multiple options for how to collect\nup the set of changed blocks, though I continue to feel that a\nWAL-scanning approach isn't something that we'd have implemented in the\nbackend at all since it doesn't require the backend and a given backend\nmight not even have all of the WAL that is relevant. 
I certainly don't\nthink it makes sense to have a backend go get WAL from the archive to\nthen merge the WAL to provide the result to a client asking for it-\nthat's adding entirely unnecessary load to the database server.\n\nAs such, only the LSN-based scanning of relation files to produce the\nset of changed blocks seems to make sense to me to implement in the\nbackend.\n\nJust to be clear- I don't have any problem with a tool being implemented\nin core to support the scanning of WAL to produce a changeset, I just\ndon't think that's something we'd have built into the *backend*, nor do\nI think it would make sense to add that functionality to the replication\n(or any other) protocol, at least not with support for arbitrary LSN\nstarting and ending points.\n\nA thought that occurs to me is to have the functions for supporting the\nWAL merging be included in libcommon and available to both the\nindependent executable that's available for doing WAL merging, and to\nthe backend to be able to WAL merging itself- but for a specific\npurpose: having a way to reduce the amount of WAL that needs to be sent\nto a replica which has a replication slot but that's been disconnected\nfor a while. Of course, there'd have to be some way to handle the other\nfiles for that to work to update a long out-of-date replica. Now, if we\ntaught the backup tool about having a replication slot then perhaps we\ncould have the backend effectively have the same capability proposed\nabove, but without the need to go get the WAL from the archive\nrepository.\n\nI'm still not entirely sure that this makes sense to do in the backend\ndue to the additional load, this is really just some brainstorming.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 17 Apr 2019 17:20:03 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
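[Editor's note: the "LSN-based scanning of relation files" that the message above endorses could, in rough outline, look like this sketch. It assumes little-endian byte order, that pd_lsn occupies the first 8 bytes of each page header, and does not deal with torn pages or concurrent writes; all function names are hypothetical.]

```python
# Scan a relation fork 8 kB at a time and report blocks whose page-header
# LSN is newer than the LSN of the previous backup.
import struct

BLCKSZ = 8192

def page_lsn(page):
    # pd_lsn is stored as two 32-bit halves (xlogid, xrecoff).
    xlogid, xrecoff = struct.unpack_from("<II", page, 0)
    return (xlogid << 32) | xrecoff

def changed_blocks(path, since_lsn):
    changed = []
    with open(path, "rb") as f:
        blkno = 0
        while True:
            page = f.read(BLCKSZ)
            if not page:
                break
            # All-zero (never-written) pages carry LSN 0 and so compare
            # as unchanged here; a real implementation must decide how
            # to treat them.
            if page_lsn(page) > since_lsn:
                changed.append(blkno)
            blkno += 1
    return changed
```

This is the "always works, no prior setup" approach: it reads every block, so it costs a full scan of the cluster, which is exactly the I/O that the WAL-scanning alternative tries to avoid.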
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Apr 15, 2019 at 9:01 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > I love the general idea of having additional facilities in core to\n> > support block-level incremental backups. I've long been unhappy that\n> > any such approach ends up being limited to a subset of the files which\n> > need to be included in the backup, meaning the rest of the files have to\n> > be backed up in their entirety. I don't think we have to solve for that\n> > as part of this, but I'd like to see a discussion for how to deal with\n> > the other files which are being backed up to avoid needing to just\n> > wholesale copy them.\n> \n> Ideas? Generally, I don't think that anything other than the main\n> forks of relations are worth worrying about, because the files are too\n> small to really matter. Even if they're big, the main forks of\n> relations will be much bigger. I think.\n\nSadly, I haven't got any great ideas today. I do know that the WAL-G\nfolks have specifically mentioned issues with the visibility map being\nlarge enough across enough of their systems that it kinda sucks to deal\nwith. Perhaps we could do something like the rsync binary-diff protocol\nfor non-relation files? This is clearly just hand-waving but maybe\nthere's something reasonable in that idea.\n\n> > I'm quite concerned that trying to graft this on to pg_basebackup\n> > (which, as you note later, is missing an awful lot of what users expect\n> > from a real backup solution already- retention handling, parallel\n> > capabilities, WAL archive management, and many more... 
but also is just\n> > not nearly as developed a tool as the external solutions) is going to\n> > make things unnecessarily difficult when what we really want here is\n> > better support from core for block-level incremental backup for the\n> > existing external tools to leverage.\n> >\n> > Perhaps there's something here which can be done with pg_basebackup to\n> > have it work with the block-level approach, but I certainly don't see\n> > it as a natural next step for it and really does seem like limiting the\n> > way this is implemented to something that pg_basebackup can easily\n> > digest might make it less useful for the more developed tools.\n> \n> I agree that there are a bunch of things that pg_basebackup does not\n> do, such as backup management. I think a lot of users do not want\n> PostgreSQL to do backup management for them. They have an existing\n> solution that they use to manage backups, and they want PostgreSQL to\n> interoperate with it. I think it makes sense for pg_basebackup to be\n> in charge of taking the backup, and then other tools can either use it\n> as a building block or use the streaming replication protocol to send\n> approximately the same commands to the server. \n\nThere's something like 6 different backup tools, at least, for\nPostgreSQL that provide backup management, so I have a really hard time\nagreeing with this idea that users don't want a PG backup management\nsystem. 
Maybe that's not what you're suggesting here, but that's what\ncame across to me.\n\nYes, there are some users who have an existing backup solution and\nthey'd like a better way to integrate PostgreSQL into that solution,\nbut that's usually something like filesystem snapshots or an enterprise\nbackup tool which has a PostgreSQL agent or similar to do the start/stop\nand collect up the WAL, not something that's just calling pg_basebackup.\n\nThose are typically not things we have any visibility into though and\naren't open source either (and, at least as often as not, they don't\nseem to be very well thought through, based on my experience with those\ntools...).\n\nUnless maybe I'm misunderstanding and what you're suggesting here is\nthat the \"existing solution\" is something like the external PG-specific\nbackup tools? But then the rest doesn't seem to make sense, as only\nmaybe one or two of those tools use pg_basebackup internally.\n\n> I certainly would not\n> want to expose server capabilities that let you take an incremental\n> backup and NOT teach pg_basebackup to use them -- then we'd be in a\n> situation of saying that PostgreSQL has incremental backup, but you\n> have to get external tool XYZ to use it. That will be perceived as\n> PostgreSQL does NOT have incremental backup and this external tool\n> adds it.\n\n... but this is exactly the situation we're in already with all of the\n*other* features around backup (parallel backup, backup management, WAL\nmanagement, etc). Users want those features, pg_basebackup/PG core\ndoesn't provide it, and therefore there's a bunch of other tools which\nhave been written that do. 
In addition, saying that PG has incremental\nbackup but no built-in management of those full-vs-incremental backups\nand telling users that they basically have to build that themselves\nreally feels a lot like we're trying to address a check-box requirement\nrather than making something that our users are going to be happy with.\n\n> > As an example, I believe all of the other tools mentioned (at least,\n> > those that are open source I'm pretty sure all do) support parallel\n> > backup and therefore having a way to get the block-level changes in a\n> > parallel fashion would be a pretty big thing that those tools will want\n> > and pg_basebackup is single-threaded today and this proposal doesn't\n> > seem to be contemplating changing that, implying that a serial-based\n> > block-level protocol would be fine but that'd be a pretty awful\n> > restriction for the other tools.\n> \n> I mentioned this exact issue in my original email. I spoke positively\n> of it. But I think it is different from what is being proposed here.\n> We could have parallel backup without incremental backup, and that\n> would be a good feature. We could have incremental backup without parallel\n> backup, and that would also be a good feature. We could also have\n> both, which would be best of all. I don't see that my proposal throws\n> up any architectural obstacle to parallelism. I assume parallel\n> backup, whether full or incremental, would be implemented by dividing\n> up the files that need to be sent across the available connections; if\n> incremental backup exists, each connection then has to decide whether\n> to send the whole file or only part of it.\n\nI don't think that I was very clear in what my specific concern here\nwas. 
I'm not asking for pg_basebackup to have parallel backup (at\nleast, not in this part of the discussion), I'm asking for the\nincremental block-based protocol that's going to be built-in to core to\nbe able to be used in a parallel fashion.\n\nThe existing protocol that pg_basebackup uses is basically, connect to\nthe server and then say \"please give me a tarball of the data directory\"\nand that is then streamed on that connection, making that protocol\nimpossible to use for parallel backup. That's fine as far as it goes\nbecause only pg_basebackup actually uses that protocol (note that nearly\nall of the other tools for doing backups of PostgreSQL don't...). If\nwe're expecting the external tools to use the block-level incremental\nprotocol then that protocol really needs to have a way to be\nparallelized, otherwise we're just going to end up with all of the\nindividual tools doing their own thing for block-level incremental\n(though perhaps they'd reimplement whatever is done in core but in a way\nthat they could parallelize it...), if possible (which I add just in\ncase there's some idea that we end up in a situation where the\nblock-level incremental backup has to coordinate with the backend in\nsome fashion to work... which would mean that *everyone* has to use the\nprotocol even if it isn't parallel and that would be really bad, imv).\n\n> > This part of the discussion is a another example of how we're limiting\n> > ourselves in this implementation to the \"pg_basebackup can work with\n> > this\" case- by only consideration the options of \"scan all the files\" or\n> > \"use the WAL- if the request is for WAL we have available on the\n> > server.\" The other backup solutions mentioned in your initial email,\n> > and others that weren't, have a WAL archive which includes a lot more\n> > WAL than just what the primary currently has. 
When I've thought about\n> > how WAL could be used to build a differential or incremental backup, the\n> > question of \"do we have all the WAL we need\" hasn't ever been a\n> > consideration- because the backup tool manages the WAL archive and has\n> > WAL going back across, most likely, weeks or even months. Having a tool\n> > which can essentially \"compress\" WAL would be fantastic and would be\n> > able to be leveraged by all of the different backup solutions.\n> \n> I don't think this is a case of limiting ourselves; I think it's a\n> case of keeping separate considerations properly separate. As I said\n> in my original email, the client doesn't really need to know how the\n> server is identifying the blocks that have been modified. That is the\n> server's job. I started a separate thread on the WAL-scanning\n> approach, so we should take that part of the discussion over there. I\n> see no reason why the server couldn't be taught to reach back into an\n> available archive for WAL that it no longer has locally, but that's\n> really independent of the design ideas being discussed on this thread.\n\nI've provided thoughts on that other thread, I'm happy to discuss\nfurther there.\n\n> > Two things here- having some file that \"stops the server from starting\"\n> > is just going to cause a lot of pain, in my experience. Users do a lot\n> > of really rather.... curious things, and then come asking questions\n> > about them, and removing the file that stopped the server from starting\n> > is going to quickly become one of those questions on stack overflow that\n> > people just follow the highest-ranked question for, even though everyone\n> > who follows this list will know that doing so results in corruption of\n> > the database.\n> \n> Wait, you want to make it maximally easy for users to start the server\n> in a state that is 100% certain to result in a corrupted and unusable\n> database? Why?? I'd l like to make that a tiny bit difficult. 
If\n> they really want a corrupted database, they can remove the file.\n\nNo, I don't want it to be easy for users to start the server in a state\nthat's going to result in a corrupted cluster. That's basically the\ncomplete opposite of what I was going for- having a file that can be\ntrivially removed to start up the cluster is *going* to result in people\nhaving corrupted clusters, no matter how much we tell them \"don't do\nthat\". This is exactly the problem we have with backup_label today.\nI'd really rather not double-down on that.\n\n> > An alternative approach in developing this feature would be to have\n> > pg_basebackup have an option to run against an *existing* backup, with\n> > the entire point being that the existing backup is updated with these\n> > incremental changes, instead of having some independent tool which takes\n> > the result of multiple pg_basebackup runs and then combines them.\n> \n> That would be really unsafe, because if the tool is interrupted before\n> it finishes (and fsyncs everything), you no longer have any usable\n> backup. It also doesn't lend itself to several of the scenarios I\n> described in my original email -- like endless incrementals that are\n> merged into the full backup after some number of days -- a capability\n> upon which others have already remarked positively.\n\nThere's really two things here- the first is that I agree with the\nconcern about potentially destroying the existing backup if the\npg_basebackup doesn't complete, but there's some ways to address that\n(such as filesystem snapshotting), so I'm not sure that the idea is\nquite that bad, but it would need to be more than just what\npg_basebackup does in this case in order to be trustworthy (at least,\nfor most).\n\nThe other part here is the idea of endless incrementals where the blocks\nwhich don't appear to have changed are never re-validated against what's\nin the backup. 
Unfortunately, latent corruption happens and you really\nwant to have a way to check for that. In past discussions that I've had\nwith David, there's been some idea to check some percentage of the\nblocks that didn't appear to change for each backup against what's in\nthe backup.\n\nI share this just to point out that there's some risk to that approach,\nnot to say that we shouldn't do it or that we should discourage the\ndevelopment of such a feature.\n\n> > An alternative tool might be one which simply reads the WAL and keeps\n> > track of the FPIs and the updates and then eliminates any duplication\n> > which exists in the set of WAL provided (that is, multiple FPIs for the\n> > same page would be merged into one, and only the delta changes to that\n> > page are preserved, across the entire set of WAL being combined). Of\n> > course, that's complicated by having to deal with the other files in the\n> > database, so it wouldn't really work on its own.\n> \n> You've jumped back to solving the server's problem (which blocks\n> should I send?) rather than the client's problem (what does an\n> incremental backup look like once I've taken it and how do I manage\n> and restore them?). It does seem possible to figure out the contents\n> of modified blocks strictly from looking at the WAL, without any\n> examination of the current database contents. However, it also seems\n> very complicated, because the tool that is figuring out the current\n> block contents just by looking at the WAL would have to know how to\n> apply any type of WAL record, not just one that contains an FPI. And\n> I really don't want to build a client-side tool that knows how to\n> apply WAL.\n\nWow. 
I have to admit that I feel completely opposite of that- I'd\n*love* to have an independent tool (which ideally uses the same code\nthrough the common library, or similar) that can be run to apply WAL.\n\nIn other words, I don't agree that it's the server's problem at all to\nsolve that, or, at least, I don't believe that it needs to be.\n\n> > I'd really prefer that we avoid adding in another low-level tool like\n> > the one described here. Users, imv anyway, don't want to deal with\n> > *more* tools for handling this aspect of backup/recovery. If we had a\n> > tool in core today which managed multiples backups, kept track of them,\n> > and all of the WAL during and between them, then we could add options to\n> > that tool to do what's being described here in a way that makes sense\n> > and provides a good interface to users. I don't know that we're going\n> > to be able to do that with pg_basebackup when, really, the goal here\n> > isn't actually to make pg_basebackup into an enterprise backup tool,\n> > it's to make things easier for the external tools to do block-level\n> > backups.\n> \n> Well, I agree with you that the goal is not to make pg_basebackup an\n> enterprise backup tool. However, I don't see teaching it to take\n> incremental backups as opposed to that goal. I think backup\n> management and retention should remain firmly outside the purview of\n> pg_basebackup and left either to some other in-core tool or maybe even\n> to out-of-core tools. 
However, I don't see any reason why that the\n> task of taking an incremental and/or parallel backup should also be\n> left to another tool.\n\nI've tried to outline how the incremental backup capability and backup\nmanagement are really very closely related and having those be\nimplemented by independent tools is not a good interface for our users\nto have to live with.\n\n> There is a very close relationship between the thing that\n> pg_basebackup already does (copy everything) and the thing that we\n> want to do here (copy everything except blocks that we know haven't\n> changed). If we made it the job of some other tool to take parallel\n> and/or incremental backups, that other tool would need to reimplement\n> a lot of things that pg_basebackup has already got, like tar vs. plain\n> format, fast vs. spread checkpoint, rate-limiting, compression levels,\n> etc. That seems like a waste. Better to give pg_basebackup the\n> capability to do those things, and then any backup management tool\n> that anyone writes can take advantage of those capabilities.\n\nI don't believe any of the external tools which do backups of PostgreSQL\nsupport tar format. Fast-vs-spread checkpointing isn't in the purview\nof the external tools, they just have to accept the option and pass it\nto pg_start_backup(), which they already know how to do. Rate-limiting\nand compression are implemented by those other tools already, where it's\nbeen desired.\n\nMost of the external tools don't use pg_basebackup, nor the base backup\nprotocol (or, if they do, it's only as an option among others). 
In my\nopinion, that's a pretty clear indication that pg_basebackup and the base\nbackup protocol aren't sufficient to cover any but the simplest of\nuse-cases (though those simple use-cases are handled rather well).\nWe're talking about adding on a capability that's much more complicated\nand is one that a lot of tools have already taken a stab at; let's try\nto do it in a way that those tools can leverage it and avoid having to\nimplement it themselves.\n\n> I come at this, BTW, from the perspective of having just spent a bunch\n> of time working on EDB's Backup And Recovery Tool (BART). That tool\n> works in exactly the manner you seem to be advocating: it knows how to\n> do incremental and parallel full backups, and it also does backup\n> management. However, this has not turned out to be the best division\n> of labor. People who don't want to use the backup management\n> capabilities may still want the parallel or incremental backup\n> capabilities, and if all of that is within the envelope of an\n> \"enterprise backup tool,\" they don't have that option. So I want to\n> split it up. I want pg_basebackup to take all the kinds of backups\n> that PostgreSQL supports -- full, incremental, parallel, serial,\n> whatever -- and I want some other tool -- pgBackRest, BART, barman, or\n> some yet-to-be-invented core thing to do the management of those\n> backups. 
Then everybody can use exactly the bits they want.\n\nI come at this from years of working with David on pgBackRest, listening\nto what users want, what features they like, what they'd like to see\nadded, and what they don't like about how it works today.\n\nIt's an interesting idea to add in everything to pg_basebackup that\nusers doing backups would like to see, but that's quite a list:\n\n- full backups\n- differential backups\n- incremental backups / block-level backups\n- (server-side) compression\n- (server-side) encryption\n- page-level checksum validation\n- calculating checksums (on the whole file)\n- External object storage (S3, et al)\n- more things...\n\nI'm really not convinced that I agree with the division of labor as\nyou've outlined it, where all of the above is done by pg_basebackup,\nwhere just archiving and backup retention are handled by some external\ntool (except that we already have pg_receivewal, so archiving isn't\nreally an externally handled thing either, unless you want features like\nparallel archive-push or parallel archive-get...).\n\nWhat would really help me, at least, understand the idea here would be\nto understand exactly what the existing tools do that the subset of\nusers you're thinking about doesn't like/want, but which pg_basebackup,\ntoday, does. Is the issue that there's a repository instead of just a\nplain PG directory or set of tar files, like what pg_basebackup produces\ntoday? 
But how would we do things like have compression, or encryption,\nor block-based incremental backups without some kind of repository or\ndirectory that doesn't actually look exactly like a PG data directory?\n\nAnother thing I really don't understand from this discussion, and part of\nwhy it's taken me a while to respond, is this, from above:\n\n> I think a lot of users do not want\n> PostgreSQL to do backup management for them.\n\nFollowed by:\n\n> I come at this, BTW, from the perspective of having just spent a bunch\n> of time working on EDB's Backup And Recovery Tool (BART). That tool\n> works in exactly the manner you seem to be advocating: it knows how to\n> do incremental and parallel full backups, and it also does backup\n> management.\n\nI certainly can understand that there are PostgreSQL users who want to\nleverage incremental backups without having to use BART or another tool\noutside of whatever enterprise backup system they've got, but surely\nthat's a large pool of users who *do* want a PG backup tool that manages\nbackups, or you wouldn't have spent a considerable amount of your very\nvaluable time hacking on BART. I've certainly seen a fair share of both\nand I don't think we should set out to exclude either.\n\nPerhaps that's what we're both saying too and just talking past each\nother, but I feel like the approach here is \"make it work just for the\nsimple pg_basebackup case and not worry too much about the other tools,\nsince what we do for pg_basebackup will work for them too\" while where\nI'm coming from is \"focus on what the other tools need first, and then\nmake pg_basebackup work with that if there's a sensible way to do so.\"\n\nA third possibility is that it's just too early to be talking about this\nsince it means we've gotta be awfully vague about it.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 17 Apr 2019 18:43:10 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 11:57:35AM -0400, Bruce Momjian wrote:\n> On Tue, Apr 16, 2019 at 06:40:44PM -0400, Robert Haas wrote:\n> > Yeah, I really hope we don't end up with dueling patches. I want to\n> > come up with an approach that can be widely-endorsed and then have\n> > everybody rowing in the same direction. On the other hand, I do think\n> > that we may support multiple options in certain places which may have\n> > the kinds of trade-offs that Bruce mentions. For instance,\n> > identifying changed blocks by scanning the whole cluster and checking\n> > the LSN of each block has an advantage in that it requires no prior\n> > setup or extra configuration. Like a sequential scan, it always\n> > works, and that is an advantage. Of course, for many people, the\n> > competing advantage of a WAL-scanning approach that can save a lot of\n> > I/O will appear compelling, but maybe not for everyone. I think\n> > there's room for two or three approaches there -- not in the sense of\n> > competing patches, but in the sense of giving users a choice based on\n> > their needs.\n> \n> Well, by having a separate modblock file for each WAL file, you can keep\n> both WAL and modblock files and use the modblock list to pull pages from\n> each WAL file, or from the heap/index files, and it can be done in\n> parallel. 
Having WAL and modblock files in the same directory makes\n> retention simpler.\n> \n> In fact, you can do an incremental backup just using the modblock files\n> and the heap/index files, so you don't even need the WAL.\n> \n> Also, instead of storing the file name and block number in the modblock\n> file, using the database oid, relfilenode, and block number (3 int32\n> values) should be sufficient.\n\nWould doing it that way constrain the design of new table access\nmethods in some meaningful way?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Thu, 18 Apr 2019 17:32:57 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Thu, Apr 18, 2019 at 05:32:57PM +0200, David Fetter wrote:\n> On Wed, Apr 17, 2019 at 11:57:35AM -0400, Bruce Momjian wrote:\n> > Also, instead of storing the file name and block number in the modblock\n> > file, using the database oid, relfilenode, and block number (3 int32\n> > values) should be sufficient.\n> \n> Would doing it that way constrain the design of new table access\n> methods in some meaningful way?\n\nI think these are the values used in WAL, so I assume table access\nmethods already have to map to those, unless they use their own.\nI actually don't know.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 18 Apr 2019 11:34:32 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 5:20 PM Stephen Frost <sfrost@snowman.net> wrote:\n> As I understand it, the problem is not with backing up an individual\n> database or cluster, but rather dealing with backing up thousands of\n> individual clusters with thousands of tables in each, leading to an\n> awful lot of tables with lots of FSMs/VMs, all of which end up having to\n> get copied and stored wholesale. I'll point this thread out to him and\n> hopefully he'll have a chance to share more specific information.\n\nSounds good.\n\n> I can agree with the idea of having multiple options for how to collect\n> up the set of changed blocks, though I continue to feel that a\n> WAL-scanning approach isn't something that we'd have implemented in the\n> backend at all since it doesn't require the backend and a given backend\n> might not even have all of the WAL that is relevant. I certainly don't\n> think it makes sense to have a backend go get WAL from the archive to\n> then merge the WAL to provide the result to a client asking for it-\n> that's adding entirely unnecessary load to the database server.\n\nMy motivation for wanting to include it in the database server was twofold:\n\n1. I was hoping to leverage the background worker machinery. The\nWAL-scanner would just run all the time in the background, and start\nup and shut down along with the server. If it's a standalone tool,\nthen it can run on a different server or when the server is down, both\nof which are nice. The downside though is that now you probably have\nto put it in crontab or under systemd or something, instead of just\nsetting a couple of GUCs and letting the server handle the rest. For\nme that downside seems rather significant, but YMMV.\n\n2. In order for the information produced by the WAL-scanner to be\nuseful, it's got to be available to the server when the server is\nasked for an incremental backup. 
If the information is constructed by\na standalone frontend tool, and stored someplace other than under\n$PGDATA, then the server won't have convenient access to it. I guess\nwe could make it the client's job to provide that information to the\nserver, but I kind of liked the simplicity of not needing to give the\nserver anything more than an LSN.\n\n> A thought that occurs to me is to have the functions for supporting the\n> WAL merging be included in libcommon and available to both the\n> independent executable that's available for doing WAL merging, and to\n> the backend to be able to do WAL merging itself-\n\nYeah, that might be possible.\n\n> but for a specific\n> purpose: having a way to reduce the amount of WAL that needs to be sent\n> to a replica which has a replication slot but that's been disconnected\n> for a while. Of course, there'd have to be some way to handle the other\n> files for that to work to update a long out-of-date replica. Now, if we\n> taught the backup tool about having a replication slot then perhaps we\n> could have the backend effectively have the same capability proposed\n> above, but without the need to go get the WAL from the archive\n> repository.\n\nHmm, but you can't just skip over WAL records or segments because\nthere are checksums and previous-record pointers and things....\n\n> I'm still not entirely sure that this makes sense to do in the backend\n> due to the additional load, this is really just some brainstorming.\n\nWould it really be that much load?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Apr 2019 12:56:10 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-18 11:34:32 -0400, Bruce Momjian wrote:\n> On Thu, Apr 18, 2019 at 05:32:57PM +0200, David Fetter wrote:\n> > On Wed, Apr 17, 2019 at 11:57:35AM -0400, Bruce Momjian wrote:\n> > > Also, instead of storing the file name and block number in the modblock\n> > > file, using the database oid, relfilenode, and block number (3 int32\n> > > values) should be sufficient.\n> > \n> > Would doing it that way constrain the design of new table access\n> > methods in some meaningful way?\n> \n> I think these are the values used in WAL, so I assume table access\n> methods already have to map to those, unless they use their own.\n> I actually don't know.\n\nI don't think it'd be a meaningful restriction. Given that we use those\nfor shared_buffer descriptors, WAL etc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 18 Apr 2019 10:00:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 6:43 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Sadly, I haven't got any great ideas today. I do know that the WAL-G\n> folks have specifically mentioned issues with the visibility map being\n> large enough across enough of their systems that it kinda sucks to deal\n> with. Perhaps we could do something like the rsync binary-diff protocol\n> for non-relation files? This is clearly just hand-waving but maybe\n> there's something reasonable in that idea.\n\nI guess it all comes down to how complicated you're willing to make\nthe client-server protocol. With the very simple protocol that I\nproposed -- client provides a threshold LSN and server sends blocks\nmodified since then -- the client need not have access to the old\nincremental backup to take a new one. Of course, if it happens to\nhave access to the old backup then it can delta-compress however it\nlikes after-the-fact, but that doesn't help with the amount of network\ntransfer. That problem could be solved by doing something like what\nyou're talking about (with some probably-negligible false match rate)\nbut I have no intention of trying to implement anything that\ncomplicated, and I don't really think it's necessary, at least not for\na first version. What I proposed would already allow, for most users,\na large reduction in transfer and storage costs; what you are talking\nabout here would help more, but also be a lot more work and impose\nsome additional requirements on the system. I don't object to you\nimplementing the more complex system, but I'll pass.\n\n> There's something like 6 different backup tools, at least, for\n> PostgreSQL that provide backup management, so I have a really hard time\n> agreeing with this idea that users don't want a PG backup management\n> system. Maybe that's not what you're suggesting here, but that's what\n> came across to me.\n\nLet me be a little more clear. 
Different users want different things.\nSome people want a canned PostgreSQL backup solution, while other\npeople just want access to a reasonable set of facilities from which\nthey can construct their own solution. I believe that the proposal I\nam making here could be used either by backup tool authors to enhance\ntheir offerings, or by individuals who want to build up their own\nsolution using facilities provided by core.\n\n> Unless maybe I'm misunderstanding and what you're suggesting here is\n> that the \"existing solution\" is something like the external PG-specific\n> backup tools? But then the rest doesn't seem to make sense, as only\n> maybe one or two of those tools use pg_basebackup internally.\n\nWell, what I'm really talking about is in two pieces: providing some\nnew facilities via the replication protocol, and making pg_basebackup\nable to use those facilities. Nothing would stop other tools from\nusing those facilities directly if they wish.\n\n> ... but this is exactly the situation we're in already with all of the\n> *other* features around backup (parallel backup, backup management, WAL\n> management, etc). Users want those features, pg_basebackup/PG core\n> doesn't provide it, and therefore there's a bunch of other tools which\n> have been written that do. In addition, saying that PG has incremental\n> backup but no built-in management of those full-vs-incremental backups\n> and telling users that they basically have to build that themselves\n> really feels a lot like we're trying to address a check-box requirement\n> rather than making something that our users are going to be happy with.\n\nI disagree. Yes, parallel backup, like incremental backup, needs to\ngo in core. And pg_basebackup should be able to do a parallel backup.\nI will fight tooth, nail, and claw any suggestion that the server\nshould know how to do a parallel backup but pg_basebackup should not\nhave an option to exploit that capability. 
And similarly for\nincremental.\n\n> I don't think that I was very clear in what my specific concern here\n> was. I'm not asking for pg_basebackup to have parallel backup (at\n> least, not in this part of the discussion), I'm asking for the\n> incremental block-based protocol that's going to be built-in to core to\n> be able to be used in a parallel fashion.\n>\n> The existing protocol that pg_basebackup uses is basically, connect to\n> the server and then say \"please give me a tarball of the data directory\"\n> and that is then streamed on that connection, making that protocol\n> impossible to use for parallel backup. That's fine as far as it goes\n> because only pg_basebackup actually uses that protocol (note that nearly\n> all of the other tools for doing backups of PostgreSQL don't...). If\n> we're expecting the external tools to use the block-level incremental\n> protocol then that protocol really needs to have a way to be\n> parallelized, otherwise we're just going to end up with all of the\n> individual tools doing their own thing for block-level incremental\n> (though perhaps they'd reimplement whatever is done in core but in a way\n> that they could parallelize it...), if possible (which I add just in\n> case there's some idea that we end up in a situation where the\n> block-level incremental backup has to coordinate with the backend in\n> some fashion to work... which would mean that *everyone* has to use the\n> protocol even if it isn't parallel and that would be really bad, imv).\n\nThe obvious way of extending this system to parallel backup is to have\nN connections each streaming a separate tarfile such that when you\ncombine them all you recreate the original data directory. That would\nbe perfectly compatible with what I'm proposing for incremental\nbackup. 
Maybe you have another idea in mind, but I don't know what it\nis exactly.\n\n> > Wait, you want to make it maximally easy for users to start the server\n> > in a state that is 100% certain to result in a corrupted and unusable\n> > database? Why?? I'd like to make that a tiny bit difficult. If\n> > they really want a corrupted database, they can remove the file.\n>\n> No, I don't want it to be easy for users to start the server in a state\n> that's going to result in a corrupted cluster. That's basically the\n> complete opposite of what I was going for- having a file that can be\n> trivially removed to start up the cluster is *going* to result in people\n> having corrupted clusters, no matter how much we tell them \"don't do\n> that\". This is exactly the problem we have with backup_label today.\n> I'd really rather not double-down on that.\n\nWell, OK, but short of scanning the entire directory tree on startup,\nI don't see how to achieve that.\n\n> There's really two things here- the first is that I agree with the\n> concern about potentially destroying the existing backup if the\n> pg_basebackup doesn't complete, but there's some ways to address that\n> (such as filesystem snapshotting), so I'm not sure that the idea is\n> quite that bad, but it would need to be more than just what\n> pg_basebackup does in this case in order to be trustworthy (at least,\n> for most).\n\nWell, I did mention in my original email that there could be a\ncombine-backups-destructively option. I guess this is just taking\nthat to the next level: merge a backup being taken into an existing\nbackup on-the-fly. Given your remarks above, it is worth noting that\nthis GREATLY increases the chances of people accidentally causing\ncorruption in ways that are almost undetectable. 
All they have to do\nis kill -9 the backup tool half way through and then start postgres on\nthe resulting directory.\n\n> The other part here is the idea of endless incrementals where the blocks\n> which don't appear to have changed are never re-validated against what's\n> in the backup. Unfortunately, latent corruption happens and you really\n> want to have a way to check for that. In past discussions that I've had\n> with David, there's been some idea to check some percentage of the\n> blocks that didn't appear to change for each backup against what's in\n> the backup.\n\nSure, I'm not trying to block anybody from developing something like\nthat, and I acknowledge that there is risk in a system like this,\nbut...\n\n> I share this just to point out that there's some risk to that approach,\n> not to say that we shouldn't do it or that we should discourage the\n> development of such a feature.\n\n...it seems we are viewing this, at least, from the same perspective.\n\n> Wow. I have to admit that I feel completely opposite of that- I'd\n> *love* to have an independent tool (which ideally uses the same code\n> through the common library, or similar) that can be run to apply WAL.\n>\n> In other words, I don't agree that it's the server's problem at all to\n> solve that, or, at least, I don't believe that it needs to be.\n\nI mean, I guess I'd love to have that if I could get it by waving a\nmagic wand, but I wouldn't love it if I had to write the code or\nmaintain it. The routines for applying WAL currently all assume that\nyou have a whole bunch of server infrastructure present; that code\nwouldn't run in a frontend environment, I think. 
I wouldn't want to\nhave a second copy of every WAL apply routine that might have its own\nset of bugs.\n\n> I've tried to outline how the incremental backup capability and backup\n> management are really very closely related and having those be\n> implemented by independent tools is not a good interface for our users\n> to have to live with.\n\nI disagree. I think the \"existing backup tools don't use\npg_basebackup\" argument isn't very compelling, because the reason\nthose tools don't use pg_basebackup is because it can't do what they\nneed. If it did, they'd probably use it. People don't write a whole\nseparate engine for running backups just because it's fun to not reuse\ncode -- they do it because there's no other way to get what they want.\n\n> Most of the external tools don't use pg_basebackup, nor the base backup\n> protocol (or, if they do, it's only as an option among others). In my\n> opinion, that's pretty clear indication that pg_basebackup and the base\n> backup protocol aren't sufficient to cover any but the simplest of\n> use-cases (though those simple use-cases are handled rather well).\n> We're talking about adding on a capability that's much more complicated\n> and is one that a lot of tools have already taken a stab at, let's try\n> to do it in a way that those tools can leverage it and avoid having to\n> implement it themselves.\n\nI mean, again, if it were part of pg_basebackup and available via the\nreplication protocol, they could do exactly that, through either\nmethod. I don't get it. You seem to be arguing that we shouldn't add\nthe necessary capabilities to the replication protocol or\npg_basebackup, but at the same time arguing that pg_basebackup is\ninadequate because it's missing important capabilities. 
This confuses\nme.\n\n> It's an interesting idea to add in everything to pg_basebackup that\n> users doing backups would like to see, but that's quite a list:\n>\n> - full backups\n> - differential backups\n> - incremental backups / block-level backups\n> - (server-side) compression\n> - (server-side) encryption\n> - page-level checksum validation\n> - calculating checksums (on the whole file)\n> - External object storage (S3, et al)\n> - more things...\n>\n> I'm really not convinced that I agree with the division of labor as\n> you've outlined it, where all of the above is done by pg_basebackup,\n> where just archiving and backup retention are handled by some external\n> tool (except that we already have pg_receivewal, so archiving isn't\n> really an externally handled thing either, unless you want features like\n> parallel archive-push or parallel archive-get...).\n\nYeah, if it were up to me, I'd choose to put most of that in the server\nand make it available via the replication protocol, and then make\npg_basebackup able to use that functionality. And external tools\ncould use that functionality via pg_basebackup or by using the\nreplication protocol directly. I actually don't really understand\nwhat the alternative is. If you want server-side compression, for\nexample, that really has to be done on the server. And how would the\nserver expose that, except through the replication protocol? Sure, we\ncould design a new protocol for it. Call it... say... the\nshmeplication protocol. And then you could use the replication\nprotocol for what it does today and the shmeplication protocol for all\nthe cool bits. But why would that be better?\n\n> What would really help me, at least, understand the idea here would be\n> to understand exactly what the existing tools do that the subset of\n> users you're thinking about doesn't like/want, but which pg_basebackup,\n> today, does. 
Is the issue that there's a repository instead of just a\n> plain PG directory or set of tar files, like what pg_basebackup produces\n> today? But how would we do things like have compression, or encryption,\n> or block-based incremental backups without some kind of repository or\n> directory that doesn't actually look exactly like a PG data directory?\n\nI guess we're still wallowing in the same confusion here.\npg_basebackup, for me, is just a convenient place to stick this\nfunctionality. If the server has the ability to construct and send an\nincremental backup by some means, then it needs a client on the other\nend to receive and store that backup, and since pg_basebackup already\nknows how to do that for full backups, extending it to incremental\nbackups (and/or parallel, encrypted, compressed, and validated\nbackups) seems very natural to me. Otherwise I add server-side\nfunctionality to allow $X and then have to write an entirely new\nclient to interact with that instead of just using the client I've\nalready got. That's more work, and I'm lazy.\n\nNow it's true that if we wanted to build something like the rsync\nprotocol into PostgreSQL, jamming that into pg_basebackup might well\nbe a bridge too far. That would involve taking backups via a method\nso different from what we're currently doing that it would probably\nmake sense to at least consider creating a whole new tool for that\npurpose. But that wasn't my proposal...\n\n> I certainly can understand that there are PostgreSQL users who want to\n> leverage incremental backups without having to use BART or another tool\n> outside of whatever enterprise backup system they've got, but surely\n> that's a large pool of users who *do* want a PG backup tool that manages\n> backups, or you wouldn't have spent a considerable amount of your very\n> valuable time hacking on BART. 
I've certainly seen a fair share of both\n> and I don't think we should set out to exclude either.\n\nSure, I agree.\n\n> Perhaps that's what we're both saying too and just talking past each\n> other, but I feel like the approach here is \"make it work just for the\n> simple pg_basebackup case and not worry too much about the other tools,\n> since what we do for pg_basebackup will work for them too\" while where\n> I'm coming from is \"focus on what the other tools need first, and then\n> make pg_basebackup work with that if there's a sensible way to do so.\"\n\nI think perhaps the disconnect is that I just don't see how it can\nfail to work for the external tools if it works for pg_basebackup.\nAny given piece of functionality is either available in the\nreplication stream, or it's not. I suspect that for both BART and\npg_backrest, they won't be able to completely give up on having their\nown backup engines solely because core has incremental backup, but I\ndon't know what the alternative to adding features to core one at a\ntime is.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Apr 2019 14:05:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Hi,\n\n> > Wow. I have to admit that I feel completely opposite of that- I'd\n> > *love* to have an independent tool (which ideally uses the same code\n> > through the common library, or similar) that can be run to apply WAL.\n> >\n> > In other words, I don't agree that it's the server's problem at all to\n> > solve that, or, at least, I don't believe that it needs to be.\n> \n> I mean, I guess I'd love to have that if I could get it by waving a\n> magic wand, but I wouldn't love it if I had to write the code or\n> maintain it. The routines for applying WAL currently all assume that\n> you have a whole bunch of server infrastructure present; that code\n> wouldn't run in a frontend environment, I think. I wouldn't want to\n> have a second copy of every WAL apply routine that might have its own\n> set of bugs.\n\nI'll fight tooth and nail not to have a second implementation of replay,\neven if it's just portions. The code we have is complicated and fragile\nenough, having a [partial] second version would be way worse. There's\nalready plenty improvements we need to make to speed up replay, and a\nlot of them require multiple execution threads (be it processes or OS\nthreads), something not easily feasible in a standalone tool. And\nwithout the already existing concurrent work during replay (primarily\ncheckpointer doing a lot of the necessary IO), it'd also be pretty\nunattractive to use any separate tool.\n\nUnless you just define the server binary as that \"independent tool\".\nWhich I think is entirely reasonable. With the 'consistent' and LSN\nrecovery targets one already can get most of what's needed from such a\ntool, anyway. I'd argue the biggest issue there is that there's no\nequivalent to starting postgres with a private socket directory on\nwindows, and perhaps an option or two making it easier to start postgres\nin a \"private\" mode for things like this.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 18 Apr 2019 11:21:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Apr 17, 2019 at 5:20 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > As I understand it, the problem is not with backing up an individual\n> > database or cluster, but rather dealing with backing up thousands of\n> > individual clusters with thousands of tables in each, leading to an\n> > awful lot of tables with lots of FSMs/VMs, all of which end up having to\n> > get copied and stored wholesale. I'll point this thread out to him and\n> > hopefully he'll have a chance to share more specific information.\n> \n> Sounds good.\n\nOk, done.\n\n> > I can agree with the idea of having multiple options for how to collect\n> > up the set of changed blocks, though I continue to feel that a\n> > WAL-scanning approach isn't something that we'd have implemented in the\n> > backend at all since it doesn't require the backend and a given backend\n> > might not even have all of the WAL that is relevant. I certainly don't\n> > think it makes sense to have a backend go get WAL from the archive to\n> > then merge the WAL to provide the result to a client asking for it-\n> > that's adding entirely unnecessary load to the database server.\n> \n> My motivation for wanting to include it in the database server was twofold:\n> \n> 1. I was hoping to leverage the background worker machinery. The\n> WAL-scanner would just run all the time in the background, and start\n> up and shut down along with the server. If it's a standalone tool,\n> then it can run on a different server or when the server is down, both\n> of which are nice. The downside though is that now you probably have\n> to put it in crontab or under systemd or something, instead of just\n> setting a couple of GUCs and letting the server handle the rest. For\n> me that downside seems rather significant, but YMMV.\n\nBackground workers can be used to do pretty much anything. 
I'm not\nsuggesting that's a bad thing- just that it's such a completely generic\ntool that could be used to put anything/everything into the backend, so\nI'm not sure how much it makes sense as an argument when it comes to\ndesigning a new capability/feature. Yes, there's an advantage there\nwhen it comes to configuration since that means we don't need to set up\na cronjob and can, instead, just set a few GUCs... but it also means\nthat it *must* be done on the server and there's no option to do it\nelsewhere, as you say.\n\nWhen it comes to \"this is something that I can do on the DB server or on\nsome other server\", the usual preference is to use another system for\nit, to reduce load on the server.\n\nIf it comes down to something that needs to/should be an ongoing\nprocess, then the packaging can package that as a daemon-type tool which\nhandles the systemd component to it, assuming the stand-alone tool\nsupports that, which it hopefully would.\n\n> 2. In order for the information produced by the WAL-scanner to be\n> useful, it's got to be available to the server when the server is\n> asked for an incremental backup. If the information is constructed by\n> a standalone frontend tool, and stored someplace other than under\n> $PGDATA, then the server won't have convenient access to it. I guess\n> we could make it the client's job to provide that information to the\n> server, but I kind of liked the simplicity of not needing to give the\n> server anything more than an LSN.\n\nIf the WAL-scanner tool is a stand-alone tool, and it handles picking\nout all of the FPIs and incremental page changes for each relation, then\nwhat does the tool to build out the \"new\" backup really need to tell the\nbackend? I feel like it mainly needs to ask the backend for the\nnon-relation files, which gets into at least one approach that I've\nthought about for redesigning the backup protocol:\n\n1. Ask for a list of files and metadata about them\n2. 
Allow asking for individual files\n3. Support multiple connections asking for individual files\n\nQuite a few of the existing backup tools for PG use a model along these\nlines (or use tools underneath which do).\n\n> > A thought that occurs to me is to have the functions for supporting the\n> > WAL merging be included in libcommon and available to both the\n> > independent executable that's available for doing WAL merging, and to\n> > the backend to be able to do WAL merging itself-\n> \n> Yeah, that might be possible.\n\nI feel like this would be necessary, as it's certainly delicate and\ncritical code and having multiple implementations of it will be\ndifficult to manage.\n\nThat said... we already have independent work going on to do WAL\nmerging (WAL-G, at least), and if we insist that the WAL replay code\nonly exists in the backend, I strongly suspect we'll end up with\nindependent implementations of that too. Sure, we can distance\nourselves from that and say that we don't have to deal with any bugs\nfrom it... but it seems like the better approach would be to have a\ncommon library that provides it.\n\n> > but for a specific\n> > purpose: having a way to reduce the amount of WAL that needs to be sent\n> > to a replica which has a replication slot but that's been disconnected\n> > for a while. Of course, there'd have to be some way to handle the other\n> > files for that to work to update a long out-of-date replica. Now, if we\n> > taught the backup tool about having a replication slot then perhaps we\n> > could have the backend effectively have the same capability proposed\n> > above, but without the need to go get the WAL from the archive\n> > repository.\n> \n> Hmm, but you can't just skip over WAL records or segments because\n> there are checksums and previous-record pointers and things....\n\nThose aren't what I would be worried about, I'd think? 
Maybe we're\ntalking about different things, but if there's a way to scan/compress\nWAL so that we have less work to do when replaying, then we should\nleverage that for replicas that have been disconnected for a while too.\n\nOne important bit here is that the replica wouldn't be able to answer\nqueries while it's working through this compressed WAL, since it\nwouldn't reach a consistent state until more-or-less the end of WAL, but\nI am not sure that's a bad thing; who wants to get responses back from a\nvery out-of-date replica?\n\n> > I'm still not entirely sure that this makes sense to do in the backend\n> > due to the additional load, this is really just some brainstorming.\n> \n> Would it really be that much load?\n\nWell, it'd clearly be more than zero. There may be an argument to be\nmade that it's worth it to reduce the overall throughput of the system\nin order to add this capability, but I don't think we've got enough\ninformation at this point to know. My gut feeling, at least, is that\ntracking enough information to do WAL-compression on a high-write system\nis going to be pretty expensive as you'd need to have a data structure\nthat makes it easy to identify every page in the system, and be able to\nfind each of them later on in the stream, and then throw away the old\nFPI in favor of the new one, and then track all the incremental page\nupdates to that page, more-or-less, right?\n\nOn a large system, given how much information has to be tracked, it\nseems like it could be a fair bit of load, but perhaps you've got some\nideas as to how to reduce it..?\n\nThanks!\n\nStephen",
"msg_date": "Thu, 18 Apr 2019 16:59:12 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
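[Editorial note: the three-step protocol sketched in the message above — ask for a list of files with metadata, ask for individual files, let multiple connections fetch files at once — is easy to illustrate. The following is a minimal sketch only: `FAKE_SERVER` stands in for a real server's data directory, the paths are made up, and `list_files`/`fetch_file` are hypothetical placeholders for the client/server exchange, not any actual PostgreSQL protocol command.]

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the server side; file names and contents are invented.
FAKE_SERVER = {
    "base/1/1259": b"x" * 8192,
    "base/1/2619": b"y" * 16384,
    "global/pg_control": b"z" * 512,
}

def list_files():
    # Step 1: ask for a list of files plus metadata (here, just sizes).
    return [(path, len(data)) for path, data in FAKE_SERVER.items()]

def fetch_file(path):
    # Step 2: ask for one individual file; each worker below acts as one
    # connection making such requests.
    return FAKE_SERVER[path]

def parallel_backup(workers=2):
    # Step 3: multiple "connections" fetching individual files at once.
    backup = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(fetch_file, path): path
                   for path, _size in list_files()}
        for fut, path in futures.items():
            backup[path] = fut.result()
    return backup
```

The point of this division of labor is that the client can add or remove connections freely; the server only ever answers "list" and "fetch one file" requests and never needs to know the backup is parallel.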
{
"msg_contents": "Greetings,\n\nI wanted to respond to this point specifically as I feel like it'll\nreally help clear things up when it comes to the point of view I'm\nseeing this from.\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> > Perhaps that's what we're both saying too and just talking past each\n> > other, but I feel like the approach here is \"make it work just for the\n> > simple pg_basebackup case and not worry too much about the other tools,\n> > since what we do for pg_basebackup will work for them too\" while where\n> > I'm coming from is \"focus on what the other tools need first, and then\n> > make pg_basebackup work with that if there's a sensible way to do so.\"\n> \n> I think perhaps the disconnect is that I just don't see how it can\n> fail to work for the external tools if it works for pg_basebackup.\n\nThe existing backup protocol that pg_basebackup uses *does* *not* *work*\nfor the external backup tools. If it worked, they'd use it, but they\ndon't and that's because you can't do things like a parallel backup,\nwhich we *know* users want because there's a number of tools which\nimplement that exact capability.\n\nI do *not* want another piece of functionality added in this space which\nis limited in the same way because it does *not* help the external\nbackup tools at all.\n\n> Any given piece of functionality is either available in the\n> replication stream, or it's not. 
I suspect that for both BART and\n> pg_backrest, they won't be able to completely give up on having their\n> own backup engines solely because core has incremental backup, but I\n> don't know what the alternative to adding features to core one at a\n> time is.\n\nThis idea that it's either \"in the replication system\" or \"not in the\nreplication system\" is really bad, in my view, because it can be \"in the\nreplication system\" and at the same time not at all useful to the\nexisting external backup tools, but users and others will see the\n\"checkbox\" as ticked and assume that it's available in a useful fashion\nby the backend and then get upset when they discover the limitations.\n\nThe existing base backup/replication protocol that's used by\npg_basebackup is *not* useful to most of the backup tools, that's quite\nclear since they *don't* use it. Building on to that an incremental\nbackup solution that is similarly limited isn't going to make things\neasier for the external tools.\n\nIf the goal is to make things easier for the external tools by providing\ncapability in the backend / replication protocol then we need to be\nlooking at what those tools require and not at what would be minimally\nsufficient for pg_basebackup. If we don't care about the external tools\nand *just* care about making it work for pg_basebackup, then let's be\nclear about that, and accept that it'll have to be, most likely, ripped\nout and rewritten when we go to add parallel capabilities, for example,\nto pg_basebackup down the road. 
That's clearly the case for the\nexisting \"base backup\" protocol, so I don't see why it'd be different\nfor an incremental backup system that is similarly designed and\nimplemented.\n\nTo be clear, I'm all for adding features to core one at a time, but\nthere are different ways to implement features and that's really what\nwe're talking about here- what's the best way to implement this\nfeature, ideally in a way that it's useful, practically, to both\npg_basebackup and the other external backup utilities.\n\nThanks!\n\nStephen",
"msg_date": "Thu, 18 Apr 2019 17:17:02 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Greetings,\n\nOk, responding to the rest of this email.\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Apr 17, 2019 at 6:43 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Sadly, I haven't got any great ideas today. I do know that the WAL-G\n> > folks have specifically mentioned issues with the visibility map being\n> > large enough across enough of their systems that it kinda sucks to deal\n> > with. Perhaps we could do something like the rsync binary-diff protocol\n> > for non-relation files? This is clearly just hand-waving but maybe\n> > there's something reasonable in that idea.\n> \n> I guess it all comes down to how complicated you're willing to make\n> the client-server protocol. With the very simple protocol that I\n> proposed -- client provides a threshold LSN and server sends blocks\n> modified since then -- the client need not have access to the old\n> incremental backup to take a new one.\n\nWhere is the client going to get the threshold LSN from?\n\n> Of course, if it happens to\n> have access to the old backup then it can delta-compress however it\n> likes after-the-fact, but that doesn't help with the amount of network\n> transfer.\n\nIf it doesn't have access to the old backup, then I'm a bit confused as\nto how an incremental backup would be possible? Isn't that a requirement\nhere?\n\n> That problem could be solved by doing something like what\n> you're talking about (with some probably-negligible false match rate)\n> but I have no intention of trying to implement anything that\n> complicated, and I don't really think it's necessary, at least not for\n> a first version. What I proposed would already allow, for most users,\n> a large reduction in transfer and storage costs; what you are talking\n> about here would help more, but also be a lot more work and impose\n> some additional requirements on the system. 
I don't object to you\n> implementing the more complex system, but I'll pass.\n\nI was talking about the rsync binary-diff specifically for the files\nthat aren't easy to deal with in the WAL stream. I wouldn't think we'd\nuse it for other files, and there is definitely a question there of if\nthere's a way to do better than a binary-diff approach for those files.\n\n> > There's something like 6 different backup tools, at least, for\n> > PostgreSQL that provide backup management, so I have a really hard time\n> > agreeing with this idea that users don't want a PG backup management\n> > system. Maybe that's not what you're suggesting here, but that's what\n> > came across to me.\n> \n> Let me be a little more clear. Different users want different things.\n> Some people want a canned PostgreSQL backup solution, while other\n> people just want access to a reasonable set of facilities from which\n> they can construct their own solution. I believe that the proposal I\n> am making here could be used either by backup tool authors to enhance\n> their offerings, or by individuals who want to build up their own\n> solution using facilities provided by core.\n\nThe last thing that I think users really want is to build up their own\nsolution. There may be some organizations who would like to provide\ntheir own tool, but that's a bit different. Personally, I'd *really*\nlike PG to have a good tool in this area and I've been working, as I've\nsaid before, to try to get to a point where we at least have the option\nto add in such a tool that meets our various requirements.\n\nFurther, I'm concerned that the approach being presented here won't be\ninteresting to most of the external tools because it's limited and can't\nbe used in a parallel fashion.\n\n> > Unless maybe I'm misunderstanding and what you're suggesting here is\n> > that the \"existing solution\" is something like the external PG-specific\n> > backup tools? 
But then the rest doesn't seem to make sense, as only\n> > maybe one or two of those tools use pg_basebackup internally.\n> \n> Well, what I'm really talking about is in two pieces: providing some\n> new facilities via the replication protocol, and making pg_basebackup\n> able to use those facilities. Nothing would stop other tools from\n> using those facilities directly if they wish.\n\nIf those facilities are developed and implemented in the same way as the\nprotocol used by pg_basebackup works, then I strongly suspect that the\nexisting backup tools will treat it similarly- which is to say, they'll\nlargely end up ignoring it.\n\n> > ... but this is exactly the situation we're in already with all of the\n> > *other* features around backup (parallel backup, backup management, WAL\n> > management, etc). Users want those features, pg_basebackup/PG core\n> > doesn't provide it, and therefore there's a bunch of other tools which\n> > have been written that do. In addition, saying that PG has incremental\n> > backup but no built-in management of those full-vs-incremental backups\n> > and telling users that they basically have to build that themselves\n> > really feels a lot like we're trying to address a check-box requirement\n> > rather than making something that our users are going to be happy with.\n> \n> I disagree. Yes, parallel backup, like incremental backup, needs to\n> go in core. And pg_basebackup should be able to do a parallel backup.\n> I will fight tooth, nail, and claw any suggestion that the server\n> should know how to do a parallel backup but pg_basebackup should not\n> have an option to exploit that capability. 
And similarly for\n> incremental.\n\nThese aren't independent things though, the way it seems like you're\nportraying them, because there are ways we can implement incremental\nbackup that would support it being parallelized, and ways we can\nimplement it that wouldn't work with parallelism at all, and all I'm\narguing for is that we add in this feature in a way that it can be\nparallelized (since that's what most of the external tools do today...),\neven though pg_basebackup can't be, but in a way that pg_basebackup can\nalso use it (albeit in a serial fashion).\n\n> > I don't think that I was very clear in what my specific concern here\n> > was. I'm not asking for pg_basebackup to have parallel backup (at\n> > least, not in this part of the discussion), I'm asking for the\n> > incremental block-based protocol that's going to be built-in to core to\n> > be able to be used in a parallel fashion.\n> >\n> > The existing protocol that pg_basebackup uses is basically, connect to\n> > the server and then say \"please give me a tarball of the data directory\"\n> > and that is then streamed on that connection, making that protocol\n> > impossible to use for parallel backup. That's fine as far as it goes\n> > because only pg_basebackup actually uses that protocol (note that nearly\n> > all of the other tools for doing backups of PostgreSQL don't...). If\n> > we're expecting the external tools to use the block-level incremental\n> > protocol then that protocol really needs to have a way to be\n> > parallelized, otherwise we're just going to end up with all of the\n> > individual tools doing their own thing for block-level incremental\n> > (though perhaps they'd reimplement whatever is done in core but in a way\n> > that they could parallelize it...), if possible (which I add just in\n> > case there's some idea that we end up in a situation where the\n> > block-level incremental backup has to coordinate with the backend in\n> > some fashion to work... 
which would mean that *everyone* has to use the\n> > protocol even if it isn't parallel and that would be really bad, imv).\n> \n> The obvious way of extending this system to parallel backup is to have\n> N connections each streaming a separate tarfile such that when you\n> combine them all you recreate the original data directory. That would\n> be perfectly compatible with what I'm proposing for incremental\n> backup. Maybe you have another idea in mind, but I don't know what it\n> is exactly.\n\nSo, while that's an obvious approach, it isn't the most sensible- and\nwe know that from experience in actually implementing parallel backup of\nPG files. I'm happy to discuss the approach we use in pgBackRest if\nyou'd like to discuss this further, but it seems a bit far afield from\nthe topic of discussion here and it seems like you're not interested or\noffering to work on supporting parallel backup in core.\n\nI'm not saying that what you're proposing here wouldn't, technically,\nwork for the various external tools; what I'm saying is that they aren't\ngoing to actually use it, which means that you're really implementing it\n*only* for pg_basebackup's benefit... and only for as long as\npg_basebackup is serial in nature.\n\n> > > Wait, you want to make it maximally easy for users to start the server\n> > > in a state that is 100% certain to result in a corrupted and unusable\n> > > database? Why?? I'd like to make that a tiny bit difficult. If\n> > > they really want a corrupted database, they can remove the file.\n> >\n> > No, I don't want it to be easy for users to start the server in a state\n> > that's going to result in a corrupted cluster. That's basically the\n> > complete opposite of what I was going for- having a file that can be\n> > trivially removed to start up the cluster is *going* to result in people\n> > having corrupted clusters, no matter how much we tell them \"don't do\n> > that\". 
This is exactly the problem we have with backup_label today.\n> > I'd really rather not double-down on that.\n> \n> Well, OK, but short of scanning the entire directory tree on startup,\n> I don't see how to achieve that.\n\nOk, so, this is a bit of spit-balling, just to be clear, but we\ncurrently track things like \"where we know the heap files are\nconsistent\" by storing it in the control file as a checkpoint LSN, and\nthen we have a backup_label file to say where we need to get to in order\nto be consistent from a backup. Perhaps there's a way to use those to\ncross-validate while we are updating a data directory to be consistent?\nMaybe we update those files as we go, and add a cross-check flag between\nthem, so that we know from two places that we're restoring from a backup\n(incremental or full), and then also know where we need to start from\nand where we need to get to, in order to be consistent.\n\nOf course, users can still get past this by hacking these files around\nand maybe we can provide a tool along the lines of pg_resetwal which\nlets them force the files to agree, but then we can at least throw big\nglaring warnings and tell users \"this is really bad, type YES to\ncontinue\".\n\n> > There's really two things here- the first is that I agree with the\n> > concern about potentially destroying the existing backup if the\n> > pg_basebackup doesn't complete, but there's some ways to address that\n> > (such as filesystem snapshotting), so I'm not sure that the idea is\n> > quite that bad, but it would need to be more than just what\n> > pg_basebackup does in this case in order to be trustworthy (at least,\n> > for most).\n> \n> Well, I did mention in my original email that there could be a\n> combine-backups-destructively option. I guess this is just taking\n> that to the next level: merge a backup being taken into an existing\n> backup on-the-fly. 
Given your remarks above, it is worth noting that\n> this GREATLY increases the chances of people accidentally causing\n> corruption in ways that are almost undetectable. All they have to do\n> is kill -9 the backup tool half way through and then start postgres on\n> the resulting directory.\n\nRight, we need to come up with a way to detect if that happens and\ncomplain loudly, and not continue to move forward unless and until the\nuser explicitly insists that it's the right thing to do.\n\n> > The other part here is the idea of endless incrementals where the blocks\n> > which don't appear to have changed are never re-validated against what's\n> > in the backup. Unfortunately, latent corruption happens and you really\n> > want to have a way to check for that. In past discussions that I've had\n> > with David, there's been some idea to check some percentage of the\n> > blocks that didn't appear to change for each backup against what's in\n> > the backup.\n> \n> Sure, I'm not trying to block anybody from developing something like\n> that, and I acknowledge that there is risk in a system like this,\n> but...\n> \n> > I share this just to point out that there's some risk to that approach,\n> > not to say that we shouldn't do it or that we should discourage the\n> > development of such a feature.\n> \n> ...it seems we are viewing this, at least, from the same perspective.\n\nGreat, but I feel like the question here is if we're comfortable putting\nout this capability *without* some mechanism to verify that the existing\nblocks are clean/not corrupted/changed, or if we feel like this risk is\nenough that we want to include a check of the existing blocks, in some\nfashion, as part of the incremental backup feature.\n\nPersonally, and in discussion with David, we've generally felt like we\ndon't want this feature until we have a way to verify the blocks that\naren't being backed up every time and we are assuming are clean/correct,\n(at least some portion of them anyway, 
with a way to make sure we\neventually check them all) because we are concerned that users will get\nbit by latent corruption and then be quite unhappy with us for not\npicking up on that.\n\n> > Wow. I have to admit that I feel completely opposite of that- I'd\n> > *love* to have an independent tool (which ideally uses the same code\n> > through the common library, or similar) that can be run to apply WAL.\n> >\n> > In other words, I don't agree that it's the server's problem at all to\n> > solve that, or, at least, I don't believe that it needs to be.\n> \n> I mean, I guess I'd love to have that if I could get it by waving a\n> magic wand, but I wouldn't love it if I had to write the code or\n> maintain it. The routines for applying WAL currently all assume that\n> you have a whole bunch of server infrastructure present; that code\n> wouldn't run in a frontend environment, I think. I wouldn't want to\n> have a second copy of every WAL apply routine that might have its own\n> set of bugs.\n\nI agree that we don't want to have multiple implementations or copies of\nthe WAL apply routines. On the other hand, while I agree that there's\nsome server infrastructure they depend on today, I feel like a lot of\nthat infrastructure is things that we'd actually like to have in at\nleast some of the client tools (and likely pg_basebackup specifically).\nI understand that it's not trivial to implement, of course, or to pull\nout into a common library. 
We are already seeing some efforts to\nconsolidate common routines in the client libraries (Peter E's recent\nwork around the error messaging being a good example) and I feel like\nthat's something we should encourage and expect to see happening more in\nthe future as we add more sophisticated client utilities.\n\n> > I've tried to outline how the incremental backup capability and backup\n> > management are really very closely related and having those be\n> > implemented by independent tools is not a good interface for our users\n> > to have to live with.\n> \n> I disagree. I think the \"existing backup tools don't use\n> pg_basebackup\" argument isn't very compelling, because the reason\n> those tools don't use pg_basebackup is because it can't do what they\n> need. If it did, they'd probably use it. People don't write a whole\n> separate engine for running backups just because it's fun to not reuse\n> code -- they do it because there's no other way to get what they want.\n\nI understand that you disagree but I don't clearly understand the\nsubsequent justification for why you disagree. As I understand it, you\ndisagree that an incremental backup capability and backup management are\nclosely related, but that's because the existing tools don't leverage\npg_basebackup (or the backup protocol), but aren't those pretty\ndistinct things? I accept that perhaps it's my fault for implying that\nthese topics were related in the emails I've sent, and while replying to\nvarious parts of the discussion which has traveled across a number of\ntopics, some related and some not. I see incremental backups and backup\nmanagement as related because, in part, of expiration- if you expire out\na 'full' backup then you must expire out any incremental or differential\nbackups based on it. 
Just generally that association of which\nincremental depends on which full (or prior differential, or prior\nincremental) is extremely important and necessary to avoid corrupt\nsystems (consider that you might apply an incremental to a full backup,\nbut the incremental taken was actually based on another incremental and\nnot based on the full, or variations of that...).\n\nIn short, I don't think I could confidently trust any incremental backup\nthat's taken without having a clear link to the backup it's based on,\nand having it be expired when the backup it depends on is expired.\n\n> > Most of the external tools don't use pg_basebackup, nor the base backup\n> > protocol (or, if they do, it's only as an option among others). In my\n> > opinion, that's pretty clear indication that pg_basebackup and the base\n> > backup protocol aren't sufficient to cover any but the simplest of\n> > use-cases (though those simple use-cases are handled rather well).\n> > We're talking about adding on a capability that's much more complicated\n> > and is one that a lot of tools have already taken a stab at, let's try\n> > to do it in a way that those tools can leverage it and avoid having to\n> > implement it themselves.\n> \n> I mean, again, if it were part of pg_basebackup and available via the\n> replication protocol, they could do exactly that, through either\n> method. I don't get it.\n\nNo, they can't. Today there exists *exactly* this situation:\npg_basebackup uses the base backup protocol for doing backups, and the\nexternal tools don't use it.\n\nWhy?\n\nBecause it can't be used in a parallel manner, making it largely\nuninteresting as a mechanism for doing backups of systems at any scale.\n\nYes, sure, they *could* technically use it, but from a *practical*\nstandpoint they don't because it *sucks*. 
Let's not do that for\nincremental backups.\n\n> You seem to be arguing that we shouldn't add\n> the necessary capabilities to the replication protocol or\n> pg_basebackup, but at the same time arguing that pg_basebackup is\n> inadequate because it's missing important capabilities. This confuses\n> me.\n\nI'm sorry for not being clear. I'm not arguing that we *shouldn't* add\nsuch capabilities. I *want* these capabilities to be added, but I want\nthem added in a way that's actually useful to the external tools and not\nsomething that only works for pg_basebackup (which is currently\nsingle-threaded).\n\nI hope that's the kind of feedback you've been looking for on this\nthread.\n\n> > It's an interesting idea to add in everything to pg_basebackup that\n> > users doing backups would like to see, but that's quite a list:\n> >\n> > - full backups\n> > - differential backups\n> > - incremental backups / block-level backups\n> > - (server-side) compression\n> > - (server-side) encryption\n> > - page-level checksum validation\n> > - calculating checksums (on the whole file)\n> > - External object storage (S3, et al)\n> > - more things...\n> >\n> > I'm really not convinced that I agree with the division of labor as\n> > you've outlined it, where all of the above is done by pg_basebackup,\n> > where just archiving and backup retention are handled by some external\n> > tool (except that we already have pg_receivewal, so archiving isn't\n> > really an externally handled thing either, unless you want features like\n> > parallel archive-push or parallel archive-get...).\n> \n> Yeah, if it were up to me, I'd choose to put most of that in the server\n> and make it available via the replication protocol, and then give\n> pg_basebackup the ability to use that functionality.\n\nI'm all about that. 
I don't know that the client-side tool would still\nbe called 'pg_basebackup' at that point, but I definitely want to get to\na point where we have all of these capabilities available in core.\n\n> And external tools\n> could use that functionality via pg_basebackup or by using the\n> replication protocol directly. I actually don't really understand\n> what the alternative is. If you want server-side compression, for\n> example, that really has to be done on the server. And how would the\n> server expose that, except through the replication protocol? Sure, we\n> could design a new protocol for it. Call it... say... the\n> shmeplication protocol. And then you could use the replication\n> protocol for what it does today and the shmeplication protocol for all\n> the cool bits. But why would that be better?\n\nThe replication protocol (or base backup protocol, really..) is what we\nmake it, in the end. Of course server-side compression needs to be done\non the server and we need a way to tell the server \"please compress this\nfor us before sending it\". I'm not suggesting there's some alternative\nto that. What I'm suggesting is that when we go to implement the\nincremental backup protocol that we have a way for that to be\nparallelized (at least... maybe other things too) because that's what\nthe external tools would really like.\n\nEven pg_dump works in the way that it connects and builds a list of\nthings to run against and then farms that out to the parallel processes,\nso we have an example of how this is done in core today.\n\n> > What would really help me, at least, understand the idea here would be\n> > to understand exactly what the existing tools do that the subset of\n> > users you're thinking about doesn't like/want, but which pg_basebackup,\n> > today, does. Is the issue that there's a repository instead of just a\n> > plain PG directory or set of tar files, like what pg_basebackup produces\n> > today? 
But how would we do things like have compression, or encryption,\n> > or block-based incremental backups without some kind of repository or\n> > directory that doesn't actually look exactly like a PG data directory?\n> \n> I guess we're still wallowing in the same confusion here.\n> pg_basebackup, for me, is just a convenient place to stick this\n> functionality. If the server has the ability to construct and send an\n> incremental backup by some means, then it needs a client on the other\n> end to receive and store that backup, and since pg_basebackup already\n> knows how to do that for full backups, extending it to incremental\n> backups (and/or parallel, encrypted, compressed, and validated\n> backups) seems very natural to me. Otherwise I add server-side\n> functionality to allow $X and then have to write an entirely new\n> client to interact with that instead of just using the client I've\n> already got. That's more work, and I'm lazy.\n\nI'm not suggesting that we don't add this functionality to\npg_basebackup, I'm just saying that we should be thinking about how the\nexternal tools will want to leverage this new capability because it's\nmaterially different from the basic minimum that pg_basebackup requires.\nYes, it'd be a bit more work and a somewhat more complicated protocol\nthan the simple approach needed by pg_basebackup, but that's what those\nother tools will want. If we don't care about them, ok, I get that, but\nI thought the idea here was to build something that's useful to both the\nexternal tools and pg_basebackup. We won't get that if we focus on just\nimplementing a protocol for pg_basebackup to use.\n\n> Now it's true that if we wanted to build something like the rsync\n> protocol into PostgreSQL, jamming that into pg_basebackup might well\n> be a bridge too far. 
That would involve taking backups via a method\n> so different from what we're currently doing that it would probably\n> make sense to at least consider creating a whole new tool for that\n> purpose. But that wasn't my proposal...\n\nThe idea around the rsync binary-diff protocol was *specifically* for\nthings that we can't do through block-level updates with WAL scanning,\njust to be clear. I wasn't thinking that would be good for the relation\nfiles since we have more information for those in the LSN, et al.\n\nThanks!\n\nStephen",
"msg_date": "Thu, 18 Apr 2019 18:39:46 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> > > Wow. I have to admit that I feel completely opposite of that- I'd\n> > > *love* to have an independent tool (which ideally uses the same code\n> > > through the common library, or similar) that can be run to apply WAL.\n> > >\n> > > In other words, I don't agree that it's the server's problem at all to\n> > > solve that, or, at least, I don't believe that it needs to be.\n> > \n> > I mean, I guess I'd love to have that if I could get it by waving a\n> > magic wand, but I wouldn't love it if I had to write the code or\n> > maintain it. The routines for applying WAL currently all assume that\n> > you have a whole bunch of server infrastructure present; that code\n> > wouldn't run in a frontend environment, I think. I wouldn't want to\n> > have a second copy of every WAL apply routine that might have its own\n> > set of bugs.\n> \n> I'll fight tooth and nail not to have a second implementation of replay,\n> even if it's just portions. The code we have is complicated and fragile\n> enough, having a [partial] second version would be way worse. There's\n> already plenty improvements we need to make to speed up replay, and a\n> lot of them require multiple execution threads (be it processes or OS\n> threads), something not easily feasible in a standalone tool. And\n> without the already existing concurrent work during replay (primarily\n> checkpointer doing a lot of the necessary IO), it'd also be pretty\n> unattractive to use any separate tool.\n\nI agree that we don't want another implementation and that there's a lot\nthat we want to do to improve replay performance. We've already got\nfrontend tools which work with multiple execution threads, so I'm not\nsure I get the \"not easily feasible\" bit, and the argument about the\ncheckpointer seems largely related to that (as in- if we didn't have\nmultiple threads/processes then things would perform quite badly... 
but\nwe can and do have multiple threads/processes in frontend tools today,\neven in pg_basebackup).\n\nYou certainly bring up some good concerns though and they make me think\nof other bits that would seem like they'd possibly be larger issues for\na frontend tool- like having a large pool of memory for caching (aka\nshared buffers) the changes. If what we're talking about here is *just*\nreplay though, without having the system available for reads, I wonder\nif we might want a different solution there.\n\n> Unless you just define the server binary as that \"independent tool\".\n\nThat's certainly an interesting idea.\n\n> Which I think is entirely reasonable. With the 'consistent' and LSN\n> recovery targets one already can get most of what's needed from such a\n> tool, anyway. I'd argue the biggest issue there is that there's no\n> equivalent to starting postgres with a private socket directory on\n> windows, and perhaps an option or two making it easier to start postgres\n> in a \"private\" mode for things like this.\n\nThis would mean building in a way to do parallel WAL replay into the\nserver binary though, as discussed above, and it seems like making that\nwork in a way that allows us to still be available as a read-only\nstandby would be quite a bit more difficult. We could possibly support\nparallel WAL replay only when we aren't a replica but from the same\nbinary. The concerns mentioned about making it easier to start PG in a\nprivate mode don't seem too bad but I am not entirely sure that the\ntools which want to leverage that kind of capability would want to have\nto exec out to the PG binary to use it.\n\nA lot of this part of the discussion feels like a tangent though, unless\nI'm missing something. The \"WAL compression\" tool contemplated\npreviously would be much simpler and not the full-blown WAL replay\ncapability, which would be left to the server, unless you're suggesting\nthat even that should be exclusively the purview of the backend? 
Though\nthat ship's already sailed, given that external projects have\nimplemented it. Having a library to provide that which external\nprojects could leverage would be nicer than having everyone write their\nown version.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 19 Apr 2019 20:04:41 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Thu, Apr 18, 2019 at 6:39 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Where is the client going to get the threshold LSN from?\n>\n> If it doesn't have access to the old backup, then I'm a bit confused as\n> to how a incremental backup would be possible? Isn't that a requirement\n> here?\n\nI explained this in the very first email that I wrote on this thread,\nand then wrote a very extensive further reply on this exact topic to\nPeter Eisentraut. It's a bit disheartening to see you arguing against\nmy ideas when it's not clear that you've actually read and understood\nthem.\n\n> > The obvious way of extending this system to parallel backup is to have\n> > N connections each streaming a separate tarfile such that when you\n> > combine them all you recreate the original data directory. That would\n> > be perfectly compatible with what I'm proposing for incremental\n> > backup. Maybe you have another idea in mind, but I don't know what it\n> > is exactly.\n>\n> So, while that's an obvious approach, it isn't the most sensible- and\n> we know that from experience in actually implementing parallel backup of\n> PG files. I'm happy to discuss the approach we use in pgBackRest if\n> you'd like to discuss this further, but it seems a bit far afield from\n> the topic of discussion here and it seems like you're not interested or\n> offering to work on supporting parallel backup in core.\n\nIf there's some way of modifying my proposal so that it makes life\nbetter for external backup tools, I'm certainly willing to consider\nthat, but you're going to have to tell me what you have in mind. If\nthat means describing what pgbackrest does, then do it.\n\nMy concern here is that you seem to want a lot of complicated stuff\nthat will require *significant* setup in order for people to be able\nto use it. 
From what I am able to gather from your remarks so far,\nyou think people should archive their WAL to a separate machine, and\nthen the WAL-summarizer should run there, and then data from that\nshould be fed back to the backup client, which should then give the\nserver a list of modified files (and presumably, someday, blocks) and\nthe server then returns that data, which the client then\ncross-verifies with checksums and awesome sauce.\n\nWhich is all fine, but actually requires quite a bit of set-up and\nquite a bit of buy-in to the tool. And I have no problem with people\nhaving that level of buy-in to the tool. EnterpriseDB offers a number\nof tools which require similar levels of setup and configuration, and\nit's not inappropriate for an enterprise-grade backup tool to have all\nthat stuff. However, for those who may not want to do all that, my\noriginal proposal lets you take an incremental backup by doing the\nfollowing list of steps:\n\n1. Take an incremental backup.\n\nIf you'd like, you can also:\n\n0. Enable the WAL-scanning background worker to make incremental\nbackups much faster.\n\nYou do not need a WAL archive, and you do not need EITHER the backup\ntool or the server to have access to previous backups, and you do not\nneed the client to have any access to archived WAL or the summary\nfiles produced from it. The only thing you need to know is the\nstart-of-backup LSN for the previous backup.\n\nI expect you to reply with a long complaint about how my proposal is\ntotally inadequate, but actually I think for most people, most of the\ntime, it would not only be adequate, but extremely convenient. And\ndespite your protestations to the contrary, it does not block\nparallelism, checksum verification, or any other cool features that\nsomebody may want to add later. It'll work just fine with those\nthings.\n\nAnd for the record, I am willing to put some effort into parallelism.\nI just think that it makes more sense to do the incremental part\nfirst. 
I think that incremental backup is likely to have less effect\non parallel backup than the other way around. What I'm NOT willing to\ndo is build a whole bunch of infrastructure that will help pgbackrest\ndo amazing things but will not provide a simple and convenient way of\ntaking incremental backups using only core tools. I do care about\nhaving something that's good for pgbackrest and other out-of-core\ntools. I just care about it MUCH LESS than I care about making\nPostgreSQL core awesome.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 20 Apr 2019 00:05:35 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> What I'm NOT willing to\n> do is build a whole bunch of infrastructure that will help pgbackrest\n> do amazing things but will not provide a simple and convenient way of\n> taking incremental backups using only core tools. I do care about\n> having something that's good for pgbackrest and other out-of-core\n> tools. I just care about it MUCH LESS than I care about making\n> PostgreSQL core awesome.\n\nThen I misunderstood your original proposal where you talked about\nproviding something that the various external tools could use. If you'd\nlike to *just* provide a mechanism for pg_basebackup to be able to do a\ntrivial incremental backup, great, but it's not going to be useful or\nused by the external tools, just like the existing base backup protocol\nisn't used by the external tools because it can't be used in a parallel\nfashion.\n\nAs such, and with all the other missing bits from pg_basebackup, it\nlooks likely to me that such a feature is going to be lackluster, at\nbest, and end up being only marginally interesting, when it could have\nbeen much more and leveraged by all of the existing tools. I agree that\nmaking a parallel-supporting protocol work is harder but I actually\ndon't think it would be *that* much more difficult to do.\n\nThat's frankly discouraging, but I'm not going to tell you where to\nspend your time.\n\nMaking PG core awesome when it comes to backup is going to involve so\nmuch more than just marginal improvements to pg_basebackup, but it's\nalso something that I'm very much supportive of and have invested a\ngreat deal in, by spending time and resources working to build a tool\nthat gets closer to what an in-core solution would look like than\nanything that exists today.\n\nThanks,\n\nStephen",
"msg_date": "Sat, 20 Apr 2019 00:19:51 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Hi!\n\nSorry for the delay.\n\n> 18 апр. 2019 г., в 21:56, Robert Haas <robertmhaas@gmail.com> написал(а):\n> \n> On Wed, Apr 17, 2019 at 5:20 PM Stephen Frost <sfrost@snowman.net> wrote:\n>> As I understand it, the problem is not with backing up an individual\n>> database or cluster, but rather dealing with backing up thousands of\n>> individual clusters with thousands of tables in each, leading to an\n>> awful lot of tables with lots of FSMs/VMs, all of which end up having to\n>> get copied and stored wholesale. I'll point this thread out to him and\n>> hopefully he'll have a chance to share more specific information.\n> \n> Sounds good.\n\nDuring introduction of WAL-delta backups, we faced two things:\n1. Heavy spike in network load. We shift beginning of backup randomly, but variation is not very big: night is short and we want to make big backups during low rps time. This low variation of time of starts of small backups creates big network spike.\n2. Incremental backups became very cheap if measured in used resources of a single cluster.\n\n1st is not a big problem, actually, but we realized that we can do incremental backups not just at night, but, for example, 4 times a day. Or every hour. Or every minute. Why not, if they are cheap enough?\n\nIncremental backup of 1Tb DB made with distance of few minutes (small change set) is few Gbs. All of this size is made of FSM (no LSN) and VM (hard to use LSN).\nSure, this overhead size is fine if we make daily backup. But at some frequency of backups it will be too much.\n\nI think that problem of incrementing FSM and VM is too distant now.\nBut if I had to implement it right now I'd choose following way: do not backup FSM and VM, recreate it during restore. Looks like it is possible, but too much AM-specific.\nIt is hard when you write backup tool in Go and cannot simply link with PG.\n\n> 15 апр. 
2019 г., в 18:01, Stephen Frost <sfrost@snowman.net> написал(а):\n> ...the goal here\n> isn't actually to make pg_basebackup into an enterprise backup tool,\n> ...\n\nBTW, I'm all hands for extensibility and \"hackability\". But, personally, I'd be happy if pg_basebackup would be ubiquitous and sufficient. And tools like WAL-G and others became part of a history. There is not fundamental reason why external backup tool can be better than backup tool in core. (Unlike many PLs, data types, hooks, tuners etc)\n\n\nHere's 53 mentions of \"parallel backup\". I want to note that there may be parallel read from disk and parallel network transmission. Things between these two are neglectable and can be single-threaded. From my POV, it's not about threads, it's about saturated IO controllers.\nAlso I think parallel restore matters more than parallel backup. Backups themself can be slow, on many clusters we even throttle disk IO. But users may want parallel backup to catch-up standby.\n\nThanks.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sat, 20 Apr 2019 21:44:35 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Sat, Apr 20, 2019 at 12:19 AM Stephen Frost <sfrost@snowman.net> wrote:\n> * Robert Haas (robertmhaas@gmail.com) wrote:\n> > What I'm NOT willing to\n> > do is build a whole bunch of infrastructure that will help pgbackrest\n> > do amazing things but will not provide a simple and convenient way of\n> > taking incremental backups using only core tools. I do care about\n> > having something that's good for pgbackrest and other out-of-core\n> > tools. I just care about it MUCH LESS than I care about making\n> > PostgreSQL core awesome.\n>\n> Then I misunderstood your original proposal where you talked about\n> providing something that the various external tools could use. If you'd\n> like to *just* provide a mechanism for pg_basebackup to be able to do a\n> trivial incremental backup, great, but it's not going to be useful or\n> used by the external tools, just like the existing base backup protocol\n> isn't used by the external tools because it can't be used in a parallel\n> fashion.\n\nWell, what I meant - and perhaps I wasn't clear enough about this - is\nthat it could be used by an external solution for *managing* backups,\nnot so much an external engine for *taking* backups. But actually, I\nreally don't see any reason why the latter wouldn't also be possible.\nIt was already suggested upthread by Anastasia that there should be a\nway to ask the server to give only the identity of the modified blocks\nwithout the contents of those blocks; if we provide that, then a tool\ncan get those and do whatever it likes with them, including fetching\nthem in parallel by some other means. 
Another obvious extension would\nbe to add a command that says 'give me this file' or 'give me this\nfile but only this list of blocks' which would give clients lots of\noptions: they could provide their own lists of blocks to fetch\ncomputed by whatever internal magic they have, or they could request\nthe server's modified-block map information first and then schedule\nfetching those blocks in parallel using this new command. So it seems\nlike with some pretty straightforward extensions this can be made\nusable by and valuable to people wanting to build external backup\nengines, too. I do not necessarily feel obliged to implement every\nfeature that might help with that kind of thing just because I've\nexpressed an interest in this general area, but I might do some of\nthem, and maybe people like you or Anastasia who want to make these\nfacilities available to external tools can help with some of the work,\ntoo.\n\nThat being said, as long as there is significant demand for\nvalue-added backup features over and above what is in core, there are\nprobably going to be non-core backup tools that do things their own\nway instead of just leaning on whatever the server provides natively.\nIn a certain sense that's regrettable, because it means that somebody\n- or perhaps multiple somebodys - goes to the trouble of doing\nsomething outside core and then somebody else puts something in core\nthat obsoletes it and therein lies duplication of effort. On the\nother hand, it also allows people to innovate way faster than can be\ndone in core, it allows competition among different possible designs,\nand it's just kinda the way we roll around here. I can't get very\nworked up about it.\n\nOne thing I'm definitely not going to do here is abandon my goal of\nproducing a *simple* incremental backup solution that can be deployed\n*easily* by users. I understand from your remarks that such a solution\nwill not suit everybody. 
However, unlike you, I do not believe that\npg_basebackup was a failure. I certainly agree that it has some\nlimitations that mean that it is hard to use in large deployments, but\nit's also *extremely* convenient for people with a fairly small\ndatabase when they just need a quick and easy backup. Adding some\nmore features to it - such as incremental backup - will make it useful\nto more people in more cases. There will doubtless still be people\nwho need more, and that's OK: those people can use a third-party tool.\nI will not get anywhere trying to solve every problem at once.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 20 Apr 2019 16:11:11 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Sat, Apr 20, 2019 at 12:44 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> Incremental backup of 1Tb DB made with distance of few minutes (small change set) is few Gbs. All of this size is made of FSM (no LSN) and VM (hard to use LSN).\n> Sure, this overhead size is fine if we make daily backup. But at some frequency of backups it will be too much.\n\nIt seems like if the backups are only a few minutes apart, PITR might\nbe a better choice than super-frequent incremental backups. What do\nyou think about that?\n\n> I think that problem of incrementing FSM and VM is too distant now.\n> But if I had to implement it right now I'd choose following way: do not backup FSM and VM, recreate it during restore. Looks like it is possible, but too much AM-specific.\n\nInteresting idea - that's worth some more thought.\n\n> BTW, I'm all hands for extensibility and \"hackability\". But, personally, I'd be happy if pg_basebackup would be ubiquitous and sufficient. And tools like WAL-G and others became part of a history. There is not fundamental reason why external backup tool can be better than backup tool in core. (Unlike many PLs, data types, hooks, tuners etc)\n\n+1\n\n> Here's 53 mentions of \"parallel backup\". I want to note that there may be parallel read from disk and parallel network transmission. Things between these two are neglectable and can be single-threaded. From my POV, it's not about threads, it's about saturated IO controllers.\n> Also I think parallel restore matters more than parallel backup. Backups themself can be slow, on many clusters we even throttle disk IO. But users may want parallel backup to catch-up standby.\n\nI'm not sure I entirely understand your point here -- are you saying\nthat parallel backup is important, or that it's not important, or\nsomething in between? 
Do you think it's more or less important than\nincremental backup?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 20 Apr 2019 16:13:42 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Sat, Apr 20, 2019 at 12:19 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Robert Haas (robertmhaas@gmail.com) wrote:\n> > > What I'm NOT willing to\n> > > do is build a whole bunch of infrastructure that will help pgbackrest\n> > > do amazing things but will not provide a simple and convenient way of\n> > > taking incremental backups using only core tools. I do care about\n> > > having something that's good for pgbackrest and other out-of-core\n> > > tools. I just care about it MUCH LESS than I care about making\n> > > PostgreSQL core awesome.\n> >\n> > Then I misunderstood your original proposal where you talked about\n> > providing something that the various external tools could use. If you'd\n> > like to *just* provide a mechanism for pg_basebackup to be able to do a\n> > trivial incremental backup, great, but it's not going to be useful or\n> > used by the external tools, just like the existing base backup protocol\n> > isn't used by the external tools because it can't be used in a parallel\n> > fashion.\n> \n> Well, what I meant - and perhaps I wasn't clear enough about this - is\n> that it could be used by an external solution for *managing* backups,\n> not so much an external engine for *taking* backups. But actually, I\n> really don't see any reason why the latter wouldn't also be possible.\n> It was already suggested upthread by Anastasia that there should be a\n> way to ask the server to give only the identity of the modified blocks\n> without the contents of those blocks; if we provide that, then a tool\n> can get those and do whatever it likes with them, including fetching\n> them in parallel by some other means. 
Another obvious extension would\n> be to add a command that says 'give me this file' or 'give me this\n> file but only this list of blocks' which would give clients lots of\n> options: they could provide their own lists of blocks to fetch\n> computed by whatever internal magic they have, or they could request\n> the server's modified-block map information first and then schedule\n> fetching those blocks in parallel using this new command. So it seems\n> like with some pretty straightforward extensions this can be made\n> usable by and valuable to people wanting to build external backup\n> engines, too. I do not necessarily feel obliged to implement every\n> feature that might help with that kind of thing just because I've\n> expressed an interest in this general area, but I might do some of\n> them, and maybe people like you or Anastasia who want to make these\n> facilities available to external tools can help with some of the work,\n> too.\n\nYes, if we spend a bit of time thinking about how this could be\nimplemented in a way that could be used by multiple connections\nconcurrently then we could provide something that both pg_basebackup and\nthe external tools could use. Getting a list first and then supporting\na 'give me this file' API, or 'give me these blocks from this file'\nwould be very similar to what many of the external tools do today. I agree\nthat I don't think it'd be hard to do. I'm suggesting that we do that\ninstead of, at a protocol level, something similar to what was done with\npg_basebackup which prevents that.\n\nI don't really agree that implementing \"give me a list of files\" and\n\"give me this file\" is really somehow an 'extension' to the tar-based\napproach that pg_basebackup uses today, it's really a rather different\nthing, and I mention that as a parallel (hah!) 
to what we're discussing\nhere regarding the incremental backup approach.\n\nHaving been around for a while working on backup-related things, if I\nwas to implement the protocol for pg_basebackup today, I'd definitely\nimplement \"give me a list\" and \"give me this file\" rather than the\ntar-based approach, because I've learned that people want to be\nable to do parallel backups and that's a decent way to do that. I\nwouldn't set out and implement something new that there's just no hope\nof making parallel. Maybe the first write of pg_basebackup would still\nbe simple and serial since it's certainly more work to make a frontend\ntool like that work in parallel, but at least the protocol would be\nready to support a parallel option being added later without being\nrewritten.\n\nAnd that's really what I was trying to get at here- if we've got the\nchoice now to decide what this is going to look like from a protocol\nlevel, it'd be great if we could make it able to support being used in a\nparallel fashion, even if pg_basebackup is still single-threaded.\n\n> That being said, as long as there is significant demand for\n> value-added backup features over and above what is in core, there are\n> probably going to be non-core backup tools that do things their own\n> way instead of just leaning on whatever the server provides natively.\n> In a certain sense that's regrettable, because it means that somebody\n> - or perhaps multiple somebodys - goes to the trouble of doing\n> something outside core and then somebody else puts something in core\n> that obsoletes it and therein lies duplication of effort. On the\n> other hand, it also allows people to innovate way faster than can be\n> done in core, it allows competition among different possible designs,\n> and it's just kinda the way we roll around here. 
I can't get very\n> worked up about it.\n\nYes, that's largely the tact we've taken with it- build something\noutside of core, where we can move a lot faster with the implementation\nand innovate quickly, until we get to a stable system that's as portable\nand in a compatible language to what's in core today. I don't have any\nproblem with new things going into core, in fact, I'm all for it, but if\nsomeone asks me \"I'd like to do this thing in core and I'd like it to be\nuseful for external tools\" then I'll do my best to share my experiences\nwith what's been done in core vs. what's been done in this space outside\nof core and what some lessons learned from that have been and ways that\nwe could at least try to make it so that external tools will be able to\nuse whatever is implemented in core.\n\n> One thing I'm definitely not going to do here is abandon my goal of\n> producing a *simple* incremental backup solution that can be deployed\n> *easily* by users. I understand from your remarks that such a solution\n> will not suit everybody. However, unlike you, I do not believe that\n> pg_basebackup was a failure. I certainly agree that it has some\n> limitations that mean that it is hard to use in large deployments, but\n> it's also *extremely* convenient for people with a fairly small\n> database when they just need a quick and easy backup. Adding some\n> more features to it - such as incremental backup - will make it useful\n> to more people in more cases. There will doubtless still be people\n> who need more, and that's OK: those people can use a third-party tool.\n> I will not get anywhere trying to solve every problem at once.\n\nI don't get this at all. What I've really been focused on has been the\nprotocol-level questions of what this is going to look like, because\nthat's what I see the external tools potentially using. 
pg_basebackup\nitself could remain single-threaded and could provide exactly the same\ninterface, no matter if the protocol is \"give me all the blocks across\nthe entire cluster as a single compressed stream\" or the protocol is\n\"give me a list of files that changed\" and \"give me a list of these\nblocks in this file\" or even \"give me all the blocks that changed in\nthis file\".\n\nI also don't think pg_basebackup is a failure, and I didn't mean to\nimply that, and I'm sorry for some of the hyperbole which lead to that\nimpression coming across. pg_basebackup is great, for what it is, and I\nregularly recommend it in certain use-cases as being a simple tool that\ndoes one thing and does it pretty well, for smaller clusters. The\nprotocol it uses is unfortunately only useful in a single-threaded\nmanner though and it'd be great if we could avoid implementing similar\nthings in the protocol in the future.\n\nThanks,\n\nStephen",
"msg_date": "Sat, 20 Apr 2019 16:32:32 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "\n\n> 21 апр. 2019 г., в 1:13, Robert Haas <robertmhaas@gmail.com> написал(а):\n> \n> On Sat, Apr 20, 2019 at 12:44 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>> Incremental backup of 1Tb DB made with distance of few minutes (small change set) is few Gbs. All of this size is made of FSM (no LSN) and VM (hard to use LSN).\n>> Sure, this overhead size is fine if we make daily backup. But at some frequency of backups it will be too much.\n> \n> It seems like if the backups are only a few minutes apart, PITR might\n> be a better choice than super-frequent incremental backups. What do\n> you think about that?\nPITR is painfully slow on heavily loaded clusters. I observed restorations when 5 seconds of WAL were restored in 4 seconds. Backup was only few hours past primary node, but could catch up only at night.\nAnd during this process only one of 56 cpu cores was used. And SSD RAID throughput was not 100% utilized.\n\nBlock level delta backups can be restored very efficiently: if we restore from newest to past steps, we write no more than cluster size at last backup.\n\n>> I think that problem of incrementing FSM and VM is too distant now.\n>> But if I had to implement it right now I'd choose following way: do not backup FSM and VM, recreate it during restore. Looks like it is possible, but too much AM-specific.\n> \n> Interesting idea - that's worth some more thought.\n\nCore routines to recreate VM and FSM would be cool :) But this need to be done without extra IO, not an easy trick.\n\n>> Here's 53 mentions of \"parallel backup\". I want to note that there may be parallel read from disk and parallel network transmission. Things between these two are neglectable and can be single-threaded. From my POV, it's not about threads, it's about saturated IO controllers.\n>> Also I think parallel restore matters more than parallel backup. Backups themself can be slow, on many clusters we even throttle disk IO. 
But users may want parallel backup to catch-up standby.\n> \n> I'm not sure I entirely understand your point here -- are you saying\n> that parallel backup is important, or that it's not important, or\n> something in between? Do you think it's more or less important than\n> incremental backup?\nI think that there is no such thing as parallel backup. Backup creation is composite process of many subprocesses.\n\nIn my experience, parallel network transmission is cool and very important, it makes upload 3 times faster. But my experience is limited to cloud storages. Would this hold if storage backend is local FS? I have no idea.\nParallel reading from disk has the same effect. Compression and encryption can be single threaded, I think it will not be bottleneck (unless one uses lzma's neighborhood on Pareto frontier).\n\nFor me, I think the most important thing is incremental backups (with parallel steps merge) and then parallel backup.\nBut there is huge fraction of users, who can benefit from parallel backup and do not need incremental backup at all.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sun, 21 Apr 2019 14:05:02 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
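The newest-to-oldest merge order Andrey describes can be sketched in a few lines of Python. This is a simplified illustrative model, not pg_probackup or WAL-G code: each backup step is modeled as a mapping from block number to block contents, with the full backup as the oldest step.

```python
def restore_newest_first(steps):
    """Merge a chain of block-level backups, newest step first.

    steps[0] is the most recent incremental backup, steps[-1] the full
    backup; each step maps block number -> block contents.  Walking from
    newest to oldest and skipping blocks that were already restored means
    every block is written exactly once, so total writes are bounded by
    the cluster size as of the newest backup step.
    """
    restored = {}
    for step in steps:                    # newest -> oldest
        for blkno, data in step.items():
            if blkno not in restored:     # a newer version already won
                restored[blkno] = data
    return restored

# Toy chain: a 3-block full backup plus two incremental steps.
full = {0: b"A0", 1: b"B0", 2: b"C0"}
incr_old = {1: b"B1"}
incr_new = {1: b"B2", 2: b"C2"}
cluster = restore_newest_first([incr_new, incr_old, full])
# blocks 1 and 2 come from the newest step, block 0 from the full backup
```

Restoring oldest-first would instead write block 1 three times and block 2 twice, which is the extra write traffic the newest-first order avoids.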
{
"msg_contents": "On Sat, Apr 20, 2019 at 4:32 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Having been around for a while working on backup-related things, if I\n> was to implement the protocol for pg_basebackup today, I'd definitely\n> implement \"give me a list\" and \"give me this file\" rather than the\n> tar-based approach, because I've learned that people want to be\n> able to do parallel backups and that's a decent way to do that. I\n> wouldn't set out and implement something new that's there's just no hope\n> of making parallel. Maybe the first write of pg_basebackup would still\n> be simple and serial since it's certainly more work to make a frontend\n> tool like that work in parallel, but at least the protocol would be\n> ready to support a parallel option being added alter without being\n> rewritten.\n>\n> And that's really what I was trying to get at here- if we've got the\n> choice now to decide what this is going to look like from a protocol\n> level, it'd be great if we could make it able to support being used in a\n> parallel fashion, even if pg_basebackup is still single-threaded.\n\nI think we're getting closer to a meeting of the minds here, but I\ndon't think it's intrinsically necessary to rewrite the whole method\nof operation of pg_basebackup to implement incremental backup in a\nsensible way. One could instead just do a straightforward extension\nto the existing BASE_BACKUP command to enable incremental backup.\nThen, to enable parallel full backup and all sorts of out-of-core\nhacking, one could expand the command language to allow tools to\naccess individual steps: START_BACKUP, SEND_FILE_LIST,\nSEND_FILE_CONTENTS, STOP_BACKUP, or whatever. The second thing makes\nfor an appealing project, but I do not think there is a technical\nreason why it has to be done first. Or for that matter why it has to\nbe done second. 
As I keep saying, incremental backup and full backup\nare separate projects and I believe it's completely reasonable for\nwhoever is doing the work to decide on the order in which they would\nlike to do the work.\n\nHaving said that, I'm curious what people other than Stephen (and\nother pgbackrest hackers) think about the relative value of parallel\nbackup vs. incremental backup. Stephen appears quite convinced that\nparallel backup is full of win and incremental backup is a bit of a\nyawn by comparison, and while I certainly would not want to discount\nthe value of his experience in this area, it sometimes happens on this\nmailing list that [ drum roll please ] not everybody agrees about\neverything. So, what do other people think?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 21 Apr 2019 19:02:26 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
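For concreteness, the step-by-step command language Robert sketches (START_BACKUP, SEND_FILE_LIST, SEND_FILE_CONTENTS, STOP_BACKUP) might be driven by a client roughly as follows. The command names come from the message above but are otherwise hypothetical — none of these are existing PostgreSQL replication commands — so the server side is simulated in-process:

```python
class FakeBackupServer:
    """Stand-in for a server speaking the hypothetical step commands.
    Files are modeled as path -> (last_modified_lsn, contents)."""

    def __init__(self, files):
        self.files = files
        self.in_backup = False

    def start_backup(self):                  # hypothetical START_BACKUP
        self.in_backup = True
        return "backup-label"

    def send_file_list(self, since_lsn=0):   # hypothetical SEND_FILE_LIST
        # An incremental request reports only files touched after since_lsn.
        return sorted(p for p, (lsn, _) in self.files.items()
                      if lsn > since_lsn)

    def send_file_contents(self, path):      # hypothetical SEND_FILE_CONTENTS
        return self.files[path][1]

    def stop_backup(self, label):            # hypothetical STOP_BACKUP
        self.in_backup = False


def incremental_backup(server, since_lsn):
    """Client-side driver; a full backup is just since_lsn=0."""
    label = server.start_backup()
    try:
        changed = server.send_file_list(since_lsn=since_lsn)
        return {p: server.send_file_contents(p) for p in changed}
    finally:
        server.stop_backup(label)


server = FakeBackupServer({
    "base/1/16384": (120, b"recently modified"),
    "base/1/16385": (40, b"untouched since the last backup"),
})
delta = incremental_backup(server, since_lsn=100)
# only base/1/16384 is fetched; since_lsn=0 would fetch both files
```

The point of splitting the steps out like this is that an external tool (or a future parallel pg_basebackup) could run the SEND_FILE_CONTENTS step over several connections at once, which a single monolithic BASE_BACKUP command does not permit.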
{
"msg_contents": "\n\nOn 22.04.2019 2:02, Robert Haas wrote:\n> On Sat, Apr 20, 2019 at 4:32 PM Stephen Frost <sfrost@snowman.net> wrote:\n>> Having been around for a while working on backup-related things, if I\n>> was to implement the protocol for pg_basebackup today, I'd definitely\n>> implement \"give me a list\" and \"give me this file\" rather than the\n>> tar-based approach, because I've learned that people want to be\n>> able to do parallel backups and that's a decent way to do that. I\n>> wouldn't set out and implement something new that's there's just no hope\n>> of making parallel. Maybe the first write of pg_basebackup would still\n>> be simple and serial since it's certainly more work to make a frontend\n>> tool like that work in parallel, but at least the protocol would be\n>> ready to support a parallel option being added alter without being\n>> rewritten.\n>>\n>> And that's really what I was trying to get at here- if we've got the\n>> choice now to decide what this is going to look like from a protocol\n>> level, it'd be great if we could make it able to support being used in a\n>> parallel fashion, even if pg_basebackup is still single-threaded.\n> I think we're getting closer to a meeting of the minds here, but I\n> don't think it's intrinsically necessary to rewrite the whole method\n> of operation of pg_basebackup to implement incremental backup in a\n> sensible way. One could instead just do a straightforward extension\n> to the existing BASE_BACKUP command to enable incremental backup.\n> Then, to enable parallel full backup and all sorts of out-of-core\n> hacking, one could expand the command language to allow tools to\n> access individual steps: START_BACKUP, SEND_FILE_LIST,\n> SEND_FILE_CONTENTS, STOP_BACKUP, or whatever. The second thing makes\n> for an appealing project, but I do not think there is a technical\n> reason why it has to be done first. Or for that matter why it has to\n> be done second. 
As I keep saying, incremental backup and full backup\n> are separate projects and I believe it's completely reasonable for\n> whoever is doing the work to decide on the order in which they would\n> like to do the work.\n>\n> Having said that, I'm curious what people other than Stephen (and\n> other pgbackrest hackers) think about the relative value of parallel\n> backup vs. incremental backup. Stephen appears quite convinced that\n> parallel backup is full of win and incremental backup is a bit of a\n> yawn by comparison, and while I certainly would not want to discount\n> the value of his experience in this area, it sometimes happens on this\n> mailing list that [ drum roll please ] not everybody agrees about\n> everything. So, what do other people think?\n>\n\nBased on the experience of pg_probackup users I can say that there is \nno 100% winer and depending on use case either\nparallel either incremental backups are preferable.\n- If size of database is not so larger and intensity of updates is high \nenough, then parallel backup within one data center is definitely more \nefficient solution.\n- If size of database is very large and data is rarely updated or \ndatabase is mostly append-only, then incremental backup is preferable.\n- Some customers need to collect at central server backups of databases \ninstalled at many nodes with slow and unreliable connection (assume DBMS \ninstalled at locomotives). Definitely parallelism can not help here, \nunlike support of incremental backup.\n- Parallel backup more aggressively consumes resources of the system, \ninterfering with normal work of application. 
So performing parallel \nbackup may cause significant degradation of application speed.\n\npg_probackup supports both features: parallel and incremental backups \nand it is up to user how to use it in more efficient way for particular \nconfiguration.\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Mon, 22 Apr 2019 10:38:18 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Sat, Apr 20, 2019 at 4:32 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Having been around for a while working on backup-related things, if I\n> > was to implement the protocol for pg_basebackup today, I'd definitely\n> > implement \"give me a list\" and \"give me this file\" rather than the\n> > tar-based approach, because I've learned that people want to be\n> > able to do parallel backups and that's a decent way to do that. I\n> > wouldn't set out and implement something new that's there's just no hope\n> > of making parallel. Maybe the first write of pg_basebackup would still\n> > be simple and serial since it's certainly more work to make a frontend\n> > tool like that work in parallel, but at least the protocol would be\n> > ready to support a parallel option being added alter without being\n> > rewritten.\n> >\n> > And that's really what I was trying to get at here- if we've got the\n> > choice now to decide what this is going to look like from a protocol\n> > level, it'd be great if we could make it able to support being used in a\n> > parallel fashion, even if pg_basebackup is still single-threaded.\n> \n> I think we're getting closer to a meeting of the minds here, but I\n> don't think it's intrinsically necessary to rewrite the whole method\n> of operation of pg_basebackup to implement incremental backup in a\n> sensible way. \n\nIt wasn't my intent to imply that the whole method of operation of\npg_basebackup would have to change for this.\n\n> One could instead just do a straightforward extension\n> to the existing BASE_BACKUP command to enable incremental backup.\n\nOk, how do you envision that? 
As I mentioned up-thread, I am concerned\nthat we're talking too high-level here and it's making the discussion\nmore difficult than it would be if we were to put together specific\nideas and then discuss them.\n\nOne way I can imagine to extend BASE_BACKUP is by adding LSN as an\noptional parameter and then having the database server scan the entire\ncluster and send a tarball which contains essentially a 'diff' file of\nsome kind for each file where we can construct a diff based on the LSN,\nand then the complete contents of the file for everything else that\nneeds to be in the backup.\n\nSo, sure, that would work, but it wouldn't be able to be parallelized\nand I don't think it'd end up being very exciting for the external tools\nbecause of that, but it would be fine for pg_basebackup.\n\nOn the other hand, if you added new commands for 'list of files changed\nsince this LSN' and 'give me this file' and 'give me this file with the\nchanges in it since this LSN', then pg_basebackup could work with that\npretty easily in a single-threaded model (maybe with two connections to\nthe backend, but still in a single process, or maybe just by slurping up\nthe file list and then asking for each one) and the external tools could\nleverage those new capabilities too for their backups, both full backups\nand incremental ones. This also wouldn't have to change how\npg_basebackup does full backups today one bit, so what we're really\ntalking about here is the direction to take the new code that's being\nwritten, not about rewriting existing code. I agree that it'd be a bit\nmore work... but hopefully not *that* much more, and it would mean we\ncould later add parallel backup to pg_basebackup more easily too, if we\nwanted to.\n\n> Then, to enable parallel full backup and all sorts of out-of-core\n> hacking, one could expand the command language to allow tools to\n> access individual steps: START_BACKUP, SEND_FILE_LIST,\n> SEND_FILE_CONTENTS, STOP_BACKUP, or whatever. 
The second thing makes\n> for an appealing project, but I do not think there is a technical\n> reason why it has to be done first. Or for that matter why it has to\n> be done second. As I keep saying, incremental backup and full backup\n> are separate projects and I believe it's completely reasonable for\n> whoever is doing the work to decide on the order in which they would\n> like to do the work.\n\nI didn't mean to imply that one had to be done before the other from a\ntechnical standpoint. I agree that they don't depend on each other.\n\nYou're certainly welcome to do what you would like, I simply wanted to\nshare my experiences and try to help move this in a direction that would\ninvolve less code rewrite in the future and to have a feature that would\nbe more appealing to the external tools.\n\n> Having said that, I'm curious what people other than Stephen (and\n> other pgbackrest hackers) \n\nWhile David and I do talk, we haven't really discussed this proposal all\nthat much, so please don't assume that he shares my thoughts here. I'd\nalso like to hear what others think, particularly those who have been\nworking in this area.\n\n> think about the relative value of parallel\n> backup vs. incremental backup. Stephen appears quite convinced that\n> parallel backup is full of win and incremental backup is a bit of a\n> yawn by comparison, and while I certainly would not want to discount\n> the value of his experience in this area, it sometimes happens on this\n> mailing list that [ drum roll please ] not everybody agrees about\n> everything. So, what do other people think?\n\nI'm afraid this is painting my position here with an extremely broad\nbrush and so I'd like to clarify a bit: I'm *all* for incremental\nbackups. Incremental and differential backups were supported by\npgBackRest very early on and are used extensively. 
Today's pgBackRest\ndoes that at a file level, but I would very much like to get to a block\nlevel shortly after we finish rewriting it into C and porting it to\nWindows (and probably the other platforms PG runs on today), which isn't\nvery far off now. I'd like to make sure that whatever core ends up with\nas an incremental backup solution also matches very closely what we do\nwith pgBackRest too, but everything that's been discussed here seems\npretty reasonable when it comes to the bits around how the blocks are\ndetected and the files get stitched back together, so I don't expect\nthere to be too much of an issue there.\n\nWhat I'm afraid will be lackluster is adding block-level incremental\nbackup support to pg_basebackup without any support for managing\nbackups or anything else. I'm also concerned that it's going to mean\nthat people who want to use incremental backup with pg_basebackup are\ngoing to have to write a lot of their own management code (probably in\nshell scripts and such...) around that and if they get anything wrong\nthere then people are going to end up with bad backups that they can't\nrestore from, or they'll have corrupted clusters if they do manage to\nget them restored.\n\nIt'd also be nice to have as much exposed through the common library as\npossible when it comes to, well, everything being discussed, so that the\nexternal tools could leverage that code and avoid having to write their\nown. 
This would probably apply more to the WAL-scanning discussion, but\nfigured I'd mention it here too.\n\nIf the protocol was implemented in a way that we could leverage it from\nexternal tools in a parallel fashion then I'd be more excited about the\noverall body of work, although, thinking about it a bit more, I have to\nadmit that I'm not sure that pgBackRest would end up using it in any\ncase, no matter how it's implemented, since it wouldn't support\ncompression or encryption, both of which we support doing in-stream\nbefore the data leaves the server, though the external tools which don't\nsupport those options likely would find the parallel option more\nappealing.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 22 Apr 2019 13:08:05 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-19 20:04:41 -0400, Stephen Frost wrote:\n> I agree that we don't want another implementation and that there's a lot\n> that we want to do to improve replay performance. We've already got\n> frontend tools which work with multiple execution threads, so I'm not\n> sure I get the \"not easily feasible\" bit, and the argument about the\n> checkpointer seems largely related to that (as in- if we didn't have\n> multiple threads/processes then things would perform quite badly... but\n> we can and do have multiple threads/processes in frontend tools today,\n> even in pg_basebackup).\n\nYou need not just multiple execution threads, but basically a new\nimplementation of shared buffers, locking, process monitoring, with most\nof the related infrastructure. You're literally talking about\nreimplementing a very substantial portion of the backend. I'm not sure\nI can transport in written words - via a public medium - how bad an idea\nit would be to go there.\n\n\n> You certainly bring up some good concerns though and they make me think\n> of other bits that would seem like they'd possibly be larger issues for\n> a frontend tool- like having a large pool of memory for cacheing (aka\n> shared buffers) the changes. If what we're talking about here is *just*\n> replay though, without having the system available for reads, I wonder\n> if we might want a different solution there.\n\nNo.\n\n\n> > Which I think is entirely reasonable. With the 'consistent' and LSN\n> > recovery targets one already can get most of what's needed from such a\n> > tool, anyway. 
I'd argue the biggest issue there is that there's no\n> > equivalent to starting postgres with a private socket directory on\n> > windows, and perhaps an option or two making it easier to start postgres\n> > in a \"private\" mode for things like this.\n> \n> This would mean building in a way to do parallel WAL replay into the\n> server binary though, as discussed above, and it seems like making that\n> work in a way that allows us to still be available as a read-only\n> standby would be quite a bit more difficult. We could possibly support\n> parallel WAL replay only when we aren't a replica but from the same\n> binary.\n\nI'm doubtful that we should try to implement parallel WAL apply that\ncan't support HS - a substantial portion of the the logic to avoid\nissues around relfilenode reuse, consistency etc is going to be to be\nnecessary for non-HS aware apply anyway. But if somebody had a concrete\nproposal for something that's fundamentally only doable without HS, I\ncould be convinced.\n\n\n> The concerns mentioned about making it easier to start PG in a\n> private mode don't seem too bad but I am not entirely sure that the\n> tools which want to leverage that kind of capability would want to have\n> to exec out to the PG binary to use it.\n\nTough luck. But even leaving infeasability aside, it seems like a quite\nbad idea to do this in-process inside a tool that manages backup &\nrecovery. Creating threads / sub-processes with complicated needs (like\nany pared down version of pg to do just recovery would have) from within\na library has substantial complications. So you'd not want to do this\nin-process anyway.\n\n\n> A lot of this part of the discussion feels like a tangent though, unless\n> I'm missing something.\n\nI'm replying to:\n\nOn 2019-04-17 18:43:10 -0400, Stephen Frost wrote:\n> Wow. 
I have to admit that I feel completely opposite of that- I'd\n> *love* to have an independent tool (which ideally uses the same code\n> through the common library, or similar) that can be run to apply WAL.\n\nAnd I'm basically saying that anything that starts from this premise is\nfatally flawed (in the ex falso quodlibet kind of sense ;)).\n\n\n> The \"WAL compression\" tool contemplated\n> previously would be much simpler and not the full-blown WAL replay\n> capability, which would be left to the server, unless you're suggesting\n> that even that should be exclusively the purview of the backend? Though\n> that ship's already sailed, given that external projects have\n> implemented it.\n\nI'm extremely doubtful of such tools (but it's not what I was responding\ntoo, see above). I'd be extremely surprised if even one of them came\nclose to being correct. The old FPI removal tool had data corrupting\nbugs left and right.\n\n\n> Having a library to provide that which external\n> projects could leverage would be nicer than having everyone write their\n> own version.\n\nNo, I don't think that's necessarily true. Something complicated that's\nhard to get right doesn't have to be provided by core. Even if other\nprojects decide that their risk/reward assesment is different than core\npostgres'. We don't have to take on all kind of work and complexity for\nexternal tools.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Apr 2019 10:36:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 1:08 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > I think we're getting closer to a meeting of the minds here, but I\n> > don't think it's intrinsically necessary to rewrite the whole method\n> > of operation of pg_basebackup to implement incremental backup in a\n> > sensible way.\n>\n> It wasn't my intent to imply that the whole method of operation of\n> pg_basebackup would have to change for this.\n\nCool.\n\n> > One could instead just do a straightforward extension\n> > to the existing BASE_BACKUP command to enable incremental backup.\n>\n> Ok, how do you envision that? As I mentioned up-thread, I am concerned\n> that we're talking too high-level here and it's making the discussion\n> more difficult than it would be if we were to put together specific\n> ideas and then discuss them.\n>\n> One way I can imagine to extend BASE_BACKUP is by adding LSN as an\n> optional parameter and then having the database server scan the entire\n> cluster and send a tarball which contains essentially a 'diff' file of\n> some kind for each file where we can construct a diff based on the LSN,\n> and then the complete contents of the file for everything else that\n> needs to be in the backup.\n\n/me scratches head. Isn't that pretty much what I described in my\noriginal post? I even described what that \"'diff' file of some kind\"\nwould look like in some detail in the paragraph of that emailed\nnumbered \"2.\", and I described the reasons for that choice at length\nin http://postgr.es/m/CA+TgmoZrqdV-tB8nY9P+1pQLqKXp5f1afghuoHh5QT6ewdkJ6g@mail.gmail.com\n\nI can't figure out how I'm managing to be so unclear about things\nabout which I thought I'd been rather explicit.\n\n> So, sure, that would work, but it wouldn't be able to be parallelized\n> and I don't think it'd end up being very exciting for the external tools\n> because of that, but it would be fine for pg_basebackup.\n\nStop being such a pessimist. 
Yes, if we only add the option to the\nBASE_BACKUP command, it won't directly be very exciting for external\ntools, but a lot of the work that is needed to do things that ARE\nexciting for external tools will have been done. For instance, if the\nwork to figure out which blocks have been modified via WAL-scanning\ngets done, and initially that's only exposed via BASE_BACKUP, it won't\nbe much work for somebody to write code for a new code that exposes\nthat information directly through some new replication command.\nThere's a difference between something that's going in the wrong\ndirection and something that's going in the right direction but not as\nfar or as fast as you'd like. And I'm 99% sure that everything I'm\nproposing here falls in the latter category rather than the former.\n\n> On the other hand, if you added new commands for 'list of files changed\n> since this LSN' and 'give me this file' and 'give me this file with the\n> changes in it since this LSN', then pg_basebackup could work with that\n> pretty easily in a single-threaded model (maybe with two connections to\n> the backend, but still in a single process, or maybe just by slurping up\n> the file list and then asking for each one) and the external tools could\n> leverage those new capabilities too for their backups, both full backups\n> and incremental ones. This also wouldn't have to change how\n> pg_basebackup does full backups today one bit, so what we're really\n> talking about here is the direction to take the new code that's being\n> written, not about rewriting existing code. I agree that it'd be a bit\n> more work... but hopefully not *that* much more, and it would mean we\n> could later add parallel backup to pg_basebackup more easily too, if we\n> wanted to.\n\nFor purposes of implementing parallel pg_basebackup, it would probably\nbe better if the server rather than the client decided which files to\nsend via which connection. 
If the client decides, then every time the\nserver finishes sending a file, the client has to request another\nfile, and that introduces some latency: after the server finishes\nsending each file, it has to wait for the client to finish receiving\nthe data, and it has to wait for the client to tell it what file to\nsend next. If the server decides, then it can just send data at top\nspeed without a break. So the ideal interface for pg_basebackup would\nreally be something like:\n\nSTART_PARALLEL_BACKUP blah blah PARTICIPANTS 4;\n\n...returning a cookie that can be then be used by each participant for\nan argument to a new commands:\n\nJOIN_PARALLLEL_BACKUP 'cookie';\n\nHowever, that is obviously extremely inconvenient for third-party\ntools. It's possible we need both an interface like this -- for use\nby parallel pg_basebackup -- and a\nSTART_BACKUP/SEND_FILE_LIST/SEND_FILE_CONTENTS/STOP_BACKUP type\ninterface for use by external tools. On the other hand, maybe the\nadditional overhead caused by managing the list of files to be fetched\non the client side is negligible. It'd be interesting to see, though,\nhow busy the server is when running an incremental backup managed by\nan external tool like BART or pgbackrest on a cluster with a gazillion\nlittle-tiny relations. I wonder if we'd find that it spends most of\nits time waiting for the client.\n\n> What I'm afraid will be lackluster is adding block-level incremental\n> backup support to pg_basebackup without any support for managing\n> backups or anything else. I'm also concerned that it's going to mean\n> that people who want to use incremental backup with pg_basebackup are\n> going to have to write a lot of their own management code (probably in\n> shell scripts and such...) 
around that and if they get anything wrong\n> there then people are going to end up with bad backups that they can't\n> restore from, or they'll have corrupted clusters if they do manage to\n> get them restored.\n\nI think that this is another complaint that basically falls into the\ncategory of saying that this proposal might not fix everything for\neverybody, but that complaint could be levied against any reasonable\ndevelopment proposal.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 22 Apr 2019 13:44:25 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
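The client-driven alternative mentioned above — slurp the file list, then have several workers (one per replication connection) request files one at a time — can be sketched like this. It is a schematic illustration, not pg_basebackup code: `fetch` stands in for whatever per-file command a worker's connection would issue, and the per-file round trip Robert worries about shows up as one request per queue item.

```python
import queue
import threading

def parallel_fetch(file_list, fetch, workers=4):
    """Client decides: workers pull paths from a shared queue and issue
    one fetch (one request/response round trip) per file."""
    work = queue.Queue()
    for path in file_list:
        work.put(path)

    results = {}
    lock = threading.Lock()

    def worker():
        while True:
            try:
                path = work.get_nowait()
            except queue.Empty:
                return                    # queue drained, worker exits
            data = fetch(path)            # stand-in for the per-file command
            with lock:                    # results dict is shared
                results[path] = data

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Toy run: "fetching" a file just returns its name doubled.
files = ["base/1/16384", "base/1/16385", "pg_control"]
backup = parallel_fetch(files, lambda p: p.encode() * 2, workers=2)
```

In the server-decides design the queue would instead live on the server side, removing the request/response gap between files at the cost of a protocol that third-party tools cannot drive piecemeal.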
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-04-19 20:04:41 -0400, Stephen Frost wrote:\n> > I agree that we don't want another implementation and that there's a lot\n> > that we want to do to improve replay performance. We've already got\n> > frontend tools which work with multiple execution threads, so I'm not\n> > sure I get the \"not easily feasible\" bit, and the argument about the\n> > checkpointer seems largely related to that (as in- if we didn't have\n> > multiple threads/processes then things would perform quite badly... but\n> > we can and do have multiple threads/processes in frontend tools today,\n> > even in pg_basebackup).\n> \n> You need not just multiple execution threads, but basically a new\n> implementation of shared buffers, locking, process monitoring, with most\n> of the related infrastructure. You're literally talking about\n> reimplementing a very substantial portion of the backend. I'm not sure\n> I can transport in written words - via a public medium - how bad an idea\n> it would be to go there.\n\nYes, there'd be some need for locking and process monitoring, though if\nwe aren't supporting ongoing read queries at the same time, there's a\nwhole bunch of things that we don't need from the existing backend.\n\n> > > Which I think is entirely reasonable. With the 'consistent' and LSN\n> > > recovery targets one already can get most of what's needed from such a\n> > > tool, anyway. 
I'd argue the biggest issue there is that there's no\n> > > equivalent to starting postgres with a private socket directory on\n> > > windows, and perhaps an option or two making it easier to start postgres\n> > > in a \"private\" mode for things like this.\n> > \n> > This would mean building in a way to do parallel WAL replay into the\n> > server binary though, as discussed above, and it seems like making that\n> > work in a way that allows us to still be available as a read-only\n> > standby would be quite a bit more difficult. We could possibly support\n> > parallel WAL replay only when we aren't a replica but from the same\n> > binary.\n> \n> I'm doubtful that we should try to implement parallel WAL apply that\n> can't support HS - a substantial portion of the the logic to avoid\n> issues around relfilenode reuse, consistency etc is going to be to be\n> necessary for non-HS aware apply anyway. But if somebody had a concrete\n> proposal for something that's fundamentally only doable without HS, I\n> could be convinced.\n\nI'd certainly prefer that we support parallel WAL replay *with* HS, that\njust seems like a much larger problem, but I'd be quite happy to be told\nthat it wouldn't be that much harder.\n\n> > A lot of this part of the discussion feels like a tangent though, unless\n> > I'm missing something.\n> \n> I'm replying to:\n> \n> On 2019-04-17 18:43:10 -0400, Stephen Frost wrote:\n> > Wow. I have to admit that I feel completely opposite of that- I'd\n> > *love* to have an independent tool (which ideally uses the same code\n> > through the common library, or similar) that can be run to apply WAL.\n> \n> And I'm basically saying that anything that starts from this premise is\n> fatally flawed (in the ex falso quodlibet kind of sense ;)).\n\nI'd just say that it'd be... difficult. 
:)\n\n> > The \"WAL compression\" tool contemplated\n> > previously would be much simpler and not the full-blown WAL replay\n> > capability, which would be left to the server, unless you're suggesting\n> > that even that should be exclusively the purview of the backend? Though\n> > that ship's already sailed, given that external projects have\n> > implemented it.\n> \n> I'm extremely doubtful of such tools (but it's not what I was responding\n> too, see above). I'd be extremely surprised if even one of them came\n> close to being correct. The old FPI removal tool had data corrupting\n> bugs left and right.\n\nI have concerns about it myself, which is why I'd actually really like\nto see something in core that does it, and does it the right way, that\nother projects could then leverage (ideally by just linking into the\nlibrary without having to rewrite what's in core, though that might not\nbe an option for things like WAL-G that are in Go and possibly don't\nwant to link in some C library).\n\n> > Having a library to provide that which external\n> > projects could leverage would be nicer than having everyone write their\n> > own version.\n> \n> No, I don't think that's necessarily true. Something complicated that's\n> hard to get right doesn't have to be provided by core. Even if other\n> projects decide that their risk/reward assesment is different than core\n> postgres'. We don't have to take on all kind of work and complexity for\n> external tools.\n\nNo, it doesn't have to be provided by core, but I sure would like it to\nbe and I'd be much more comfortable if it was because then we'd also\ntake care to not break whatever assumptions are made (or to do so in a\nway that can be detected and/or handled) as new code is written. As\ndiscussed above, as long as it isn't provided by core, it's not going to\nbe trusted, likely will have bugs, and probably will be broken by things\nhappening in core moving forward. 
The only option left is \"well, we\njust won't have that capability at all\". Maybe that's what you're\ngetting at here, but not sure I agree with that as the result.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 22 Apr 2019 14:03:41 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Apr 22, 2019 at 1:08 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > One could instead just do a straightforward extension\n> > > to the existing BASE_BACKUP command to enable incremental backup.\n> >\n> > Ok, how do you envision that? As I mentioned up-thread, I am concerned\n> > that we're talking too high-level here and it's making the discussion\n> > more difficult than it would be if we were to put together specific\n> > ideas and then discuss them.\n> >\n> > One way I can imagine to extend BASE_BACKUP is by adding LSN as an\n> > optional parameter and then having the database server scan the entire\n> > cluster and send a tarball which contains essentially a 'diff' file of\n> > some kind for each file where we can construct a diff based on the LSN,\n> > and then the complete contents of the file for everything else that\n> > needs to be in the backup.\n> \n> /me scratches head. Isn't that pretty much what I described in my\n> original post? I even described what that \"'diff' file of some kind\"\n> would look like in some detail in the paragraph of that email\n> numbered \"2.\", and I described the reasons for that choice at length\n> in http://postgr.es/m/CA+TgmoZrqdV-tB8nY9P+1pQLqKXp5f1afghuoHh5QT6ewdkJ6g@mail.gmail.com\n> \n> I can't figure out how I'm managing to be so unclear about things\n> about which I thought I'd been rather explicit.\n\nThere was basically zero discussion about what things would look like at\na protocol level (I went back and skimmed over the thread before sending\nmy last email to specifically see if I was going to get this response\nback..). 
I get the idea behind the diff file, the contents of which I\nwasn't getting into above.\n\n> > So, sure, that would work, but it wouldn't be able to be parallelized\n> > and I don't think it'd end up being very exciting for the external tools\n> > because of that, but it would be fine for pg_basebackup.\n> \n> Stop being such a pessimist. Yes, if we only add the option to the\n> BASE_BACKUP command, it won't directly be very exciting for external\n> tools, but a lot of the work that is needed to do things that ARE\n> exciting for external tools will have been done. For instance, if the\n> work to figure out which blocks have been modified via WAL-scanning\n> gets done, and initially that's only exposed via BASE_BACKUP, it won't\n> be much work for somebody to write code for a new code that exposes\n> that information directly through some new replication command.\n> There's a difference between something that's going in the wrong\n> direction and something that's going in the right direction but not as\n> far or as fast as you'd like. And I'm 99% sure that everything I'm\n> proposing here falls in the latter category rather than the former.\n\nI didn't mean to imply that you're going in the wrong direction here and\nI thought I said somewhere in my last email more-or-less exactly the\nsame, that a great deal of the work needed for block-level incremental\nbackup would be done, but specifically that this proposal wouldn't allow\nexternal tools to leverage that. It sounds like what you're suggesting\nnow is that you're happy to implement the backend code, expose it in a\nway that works just for pg_basebackup, and that if someone else wants to\nadd things to the protocol to make it easier for external tools to\nleverage, great. 
All I can say is that that's basically how we ended up\nin the situation we're in today where pg_basebackup doesn't support\nparallel backup but a bunch of external tools do and they don't go\nthrough the backend to get there, even though they'd probably prefer to.\n\n> > On the other hand, if you added new commands for 'list of files changed\n> > since this LSN' and 'give me this file' and 'give me this file with the\n> > changes in it since this LSN', then pg_basebackup could work with that\n> > pretty easily in a single-threaded model (maybe with two connections to\n> > the backend, but still in a single process, or maybe just by slurping up\n> > the file list and then asking for each one) and the external tools could\n> > leverage those new capabilities too for their backups, both full backups\n> > and incremental ones. This also wouldn't have to change how\n> > pg_basebackup does full backups today one bit, so what we're really\n> > talking about here is the direction to take the new code that's being\n> > written, not about rewriting existing code. I agree that it'd be a bit\n> > more work... but hopefully not *that* much more, and it would mean we\n> > could later add parallel backup to pg_basebackup more easily too, if we\n> > wanted to.\n> \n> For purposes of implementing parallel pg_basebackup, it would probably\n> be better if the server rather than the client decided which files to\n> send via which connection. If the client decides, then every time the\n> server finishes sending a file, the client has to request another\n> file, and that introduces some latency: after the server finishes\n> sending each file, it has to wait for the client to finish receiving\n> the data, and it has to wait for the client to tell it what file to\n> send next. If the server decides, then it can just send data at top\n> speed without a break. 
So the ideal interface for pg_basebackup would\n> really be something like:\n> \n> START_PARALLEL_BACKUP blah blah PARTICIPANTS 4;\n> \n> ...returning a cookie that can then be used by each participant as\n> an argument to a new command:\n> \n> JOIN_PARALLEL_BACKUP 'cookie';\n> \n> However, that is obviously extremely inconvenient for third-party\n> tools. It's possible we need both an interface like this -- for use\n> by parallel pg_basebackup -- and a\n> START_BACKUP/SEND_FILE_LIST/SEND_FILE_CONTENTS/STOP_BACKUP type\n> interface for use by external tools. On the other hand, maybe the\n> additional overhead caused by managing the list of files to be fetched\n> on the client side is negligible. It'd be interesting to see, though,\n> how busy the server is when running an incremental backup managed by\n> an external tool like BART or pgbackrest on a cluster with a gazillion\n> little-tiny relations. I wonder if we'd find that it spends most of\n> its time waiting for the client.\n\nThanks for sharing your thoughts on that, certainly having the backend\nable to be more intelligent about streaming files to avoid latency is\ngood and possibly the best approach. Another alternative to reducing\nthe latency would be to have a way for the client to request a set of\nfiles, but I don't know that it'd be better.\n\nI'm not really sure why the above is extremely inconvenient for\nthird-party tools, beyond just that they've already been written to work\nwith an assumption that the server-side of things isn't as intelligent\nas PG is.\n\n> > What I'm afraid will be lackluster is adding block-level incremental\n> > backup support to pg_basebackup without any support for managing\n> > backups or anything else. I'm also concerned that it's going to mean\n> > that people who want to use incremental backup with pg_basebackup are\n> > going to have to write a lot of their own management code (probably in\n> > shell scripts and such...) 
around that and if they get anything wrong\n> > there then people are going to end up with bad backups that they can't\n> > restore from, or they'll have corrupted clusters if they do manage to\n> > get them restored.\n> \n> I think that this is another complaint that basically falls into the\n> category of saying that this proposal might not fix everything for\n> everybody, but that complaint could be levied against any reasonable\n> development proposal.\n\nI'm disappointed that the concerns about the trouble that end users are\nlikely to have with this didn't garner more discussion.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 22 Apr 2019 14:26:40 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-22 14:26:40 -0400, Stephen Frost wrote:\n> I'm disappointed that the concerns about the trouble that end users are\n> likely to have with this didn't garner more discussion.\n\nMy impression is that endusers are having a lot more trouble due to\nimportant backup/restore features not being in core/pg_basebackup, than\ndue to external tools having a harder time to implement certain\nfeatures. Focusing on external tools being able to provide all those\nfeatures, because core hasn't yet, is imo entirely the wrong thing to\nconcentrate upon. And it's not like things largely haven't been\nimplemented in pg_basebackup for fundamental architectural reasons.\nIt's because we've built like 5 different external tools with randomly\ndiffering featureset and licenses.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Apr 2019 11:33:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-04-22 14:26:40 -0400, Stephen Frost wrote:\n> > I'm disappointed that the concerns about the trouble that end users are\n> > likely to have with this didn't garner more discussion.\n> \n> My impression is that endusers are having a lot more trouble due to\n> important backup/restore features not being in core/pg_basebackup, than\n> due to external tools having a harder time to implement certain\n> features.\n\nI had been referring specifically to the concern I raised about\nincremental block-level backups being added to pg_basebackup and how\nthat'll make using pg_basebackup more complicated and therefore more\ndifficult for end-users to get right, particularly if the end user is\nhaving to handle management of the association between the full backup\nand the incremental backups. I wasn't referring to anything regarding\nexternal tools.\n\n> Focusing on external tools being able to provide all those\n> features, because core hasn't yet, is imo entirely the wrong thing to\n> concentrate upon. And it's not like things largely haven't been\n> implemented in pg_basebackup for fundamental architectural reasons.\n> It's because we've built like 5 different external tools with randomly\n> differing featureset and licenses.\n\nThere's a few challenges when it comes to adding backup features to\ncore. One of the reasons is that core naturally moves slower when it\ncomes to development than external projects do, as was discussed\nearlier on this thread. Another is that, when it comes to backup,\nspecifically, people want to back up their *existing* systems, which\nmeans that they need a backup tool that's going to work with whatever\nversion of PG they've currently got deployed and that's often a few\nyears old already. Certainly when I've thought about features that we'd\nlike to see and considered if there's something that could be\nimplemented in core vs. 
implemented outside of core, the answer often\nends up being \"well, if we do it ourselves then we can make it work for\nPG 9.2 and above, and have it working for existing users, but if we work\nit in as part of core, it won't be available until next year and only\nfor version 12 and above, and users can only use it once they've\nupgraded..\"\n\nThanks,\n\nStephen",
"msg_date": "Mon, 22 Apr 2019 15:03:04 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 2:26 PM Stephen Frost <sfrost@snowman.net> wrote:\n> There was basically zero discussion about what things would look like at\n> a protocol level (I went back and skimmed over the thread before sending\n> my last email to specifically see if I was going to get this response\n> back..). I get the idea behind the diff file, the contents of which I\n> wasn't getting into above.\n\nWell, I wrote:\n\n\"There should be a way to tell pg_basebackup to request from the\nserver only those blocks where LSN >= threshold_value.\"\n\nI guess I assumed that people interested in the details would take\nthat to mean \"and therefore the protocol would grow an option for this\ntype of request in whatever way is the most straightforward possible\nextension of the current functionality,\" which is indeed how you\neventually interpreted it when you said we could \"extend BASE_BACKUP\nis by adding LSN as an optional parameter.\"\n\nI could have been more explicit, but sometimes people tell me that my\nemails are too long.\n\n> external tools to leverage that. It sounds like what you're suggesting\n> now is that you're happy to implement the backend code, expose it in a\n> way that works just for pg_basebackup, and that if someone else wants to\n> add things to the protocol to make it easier for external tools to\n> leverage, great.\n\nYep, that's more or less it, although I am potentially willing to do\nsome modest amount of that other work along the way. 
I just don't\nwant to prioritize it higher than getting the actual thing I want to\nbuild built, which I think is a pretty fair position for me to take.\n\n> All I can say is that that's basically how we ended up\n> in the situation we're in today where pg_basebackup doesn't support\n> parallel backup but a bunch of external tools do and they don't go\n> through the backend to get there, even though they'd probably prefer to.\n\nI certainly agree that core should try to do things in a way that is\nuseful to external tools when that can be done without undue effort,\nbut only if it can actually be done without undue effort. Let's see\nwhether that's the case here:\n\n- Anastasia wants a command added that dumps out whatever the server\nknows about what files have changed, which I already agreed was a\nreasonable extension of my initial proposal.\n\n- You said that for this to be useful to pgbackrest, it'd have to use\na whole different mechanism that includes commands to request\nindividual files and blocks within those files, which would be a\nsignificant rewrite of pg_basebackup that you agreed is more closely\nrelated to parallel backup than to the project under discussion on\nthis thread. And that even then pgbackrest probably wouldn't use it\nbecause it also does server-side compression and encryption which are\nnot included in this proposal.\n\nIt seems to me that the first one falls into the category of a reasonable\nadditional effort and the second one falls into the category of lots\nof extra and unrelated work that wouldn't even get used.\n\n> Thanks for sharing your thoughts on that, certainly having the backend\n> able to be more intelligent about streaming files to avoid latency is\n> good and possibly the best approach. Another alternative to reducing\n> the latency would be to have a way for the client to request a set of\n> files, but I don't know that it'd be better.\n\nI don't know either. 
This is an area that needs more thought, I\nthink, although as discussed, it's more related to parallel backup\nthan $SUBJECT.\n\n> I'm not really sure why the above is extremely inconvenient for\n> third-party tools, beyond just that they've already been written to work\n> with an assumption that the server-side of things isn't as intelligent\n> as PG is.\n\nWell, one thing you might want to do is have a tool that connects to\nthe server, enters backup mode, requests information on what blocks\nhave changed, copies those blocks via direct filesystem access, and\nthen exits backup mode. Such a tool would really benefit from a\nSTART_BACKUP / SEND_FILE_LIST / SEND_FILE_CONTENTS / STOP_BACKUP\ncommand language, because it would just skip ever issuing the\nSEND_FILE_CONTENTS command in favor of doing that part of the work via\nother means. On the other hand, a START_PARALLEL_BACKUP LSN '1/234'\ncommand is useless to such a tool.\n\nContrariwise, a tool that has its own magic - perhaps based on\nWAL-scanning or something like ptrack - to know which files currently\nexist and which blocks are modified could use SEND_FILE_CONTENTS but\nnot SEND_FILE_LIST. And a filesystem-snapshot based technique might\nuse START_BACKUP and STOP_BACKUP but nothing else.\n\nIn short, providing granular commands like this lets the client be\nreally intelligent even if the server isn't, and lets the client have\nfine-grained control of the process. This is very good if you're an\nout-of-core tool maintainer and your tool is trying to be smarter than\n- or even just differently-designed than - core.\n\nBut if what you really want is just a maximally-efficient parallel\nbackup, you don't need the commands to be fine-grained like this. You\ndon't even really *want* the commands to be fine-grained like this,\nbecause it's better if the server works it all out so as to avoid\nunnecessary network round-trips. 
You just want to tell the server\n\"hey, I want to do a parallel backup with 5 participants - hit me!\"\nand have it do that in the most efficient way that it knows how,\nwithout forcing the client to make any decisions that can be made just\nas well, and perhaps more efficiently, on the server.\n\nOn the third hand, one advantage of having the fine-grained commands\nis that it would not only make it easier for out-of-core tools to do\ncool things, but also in-core tools. For instance, you can imagine\nbeing able to do something like:\n\npg_basebackup -D outputdir -d conninfo --copy-files-from=$PGDATA\n\nIf the client is using what I'm calling fine-grained commands, this is\neasy to implement. If it's just calling a piece of server side\nfunctionality that sends back a tarball as a blob, it's not.\n\nSo each approach has some pros and cons.\n\n> I'm disappointed that the concerns about the trouble that end users are\n> likely to have with this didn't garner more discussion.\n\nWell, we can keep discussing things. I've tried to reply to as many\nof your concerns as I can, but I believe you've written more email on\nthis thread than everyone else combined, so perhaps I haven't entirely\nbeen able to keep up.\n\nThat being said, as far as I can tell, those concerns were not\nseconded by anyone else. Also, if I understand correctly, when I\nasked how we could avoid that problem, you said that you didn't know. And\nI said it seemed like we would need to do a very expensive operation at\nserver startup, or magic. So I feel that perhaps it is a problem that\n(1) is not of great general concern and (2) to which no really\nsuperior engineering solution is possible.\n\nI may, however, be mistaken.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 22 Apr 2019 16:08:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "22.04.2019 2:02, Robert Haas wrote:\n> I think we're getting closer to a meeting of the minds here, but I\n> don't think it's intrinsically necessary to rewrite the whole method\n> of operation of pg_basebackup to implement incremental backup in a\n> sensible way. One could instead just do a straightforward extension\n> to the existing BASE_BACKUP command to enable incremental backup.\n> Then, to enable parallel full backup and all sorts of out-of-core\n> hacking, one could expand the command language to allow tools to\n> access individual steps: START_BACKUP, SEND_FILE_LIST,\n> SEND_FILE_CONTENTS, STOP_BACKUP, or whatever. The second thing makes\n> for an appealing project, but I do not think there is a technical\n> reason why it has to be done first. Or for that matter why it has to\n> be done second. As I keep saying, incremental backup and full backup\n> are separate projects and I believe it's completely reasonable for\n> whoever is doing the work to decide on the order in which they would\n> like to do the work.\n>\n> Having said that, I'm curious what people other than Stephen (and\n> other pgbackrest hackers) think about the relative value of parallel\n> backup vs. incremental backup. Stephen appears quite convinced that\n> parallel backup is full of win and incremental backup is a bit of a\n> yawn by comparison, and while I certainly would not want to discount\n> the value of his experience in this area, it sometimes happens on this\n> mailing list that [ drum roll please ] not everybody agrees about\n> everything. 
So, what do other people think?\n>\nPersonally, I believe that incremental backups are more useful to implement\nfirst since they benefit both backup speed and the space taken by a backup.\nFrankly speaking, I'm a bit surprised that the discussion of parallel \nbackups\ntook up so much of this thread.\nOf course, we must keep it in mind, while designing the API to avoid \nintroducing\nany architectural obstacles, but any further discussion of parallelism is a\nsubject of another topic.\n\n\nI understand Stephen's concerns about the difficulties of incremental backup\nmanagement.\nEven with an assumption that the user is ready to manage backup chains, \nretention,\nand other stuff, we must consider the format of backup metadata that \nwill allow\nus to perform some primitive commands:\n\n1) Tell whether this backup is full or incremental.\n\n2) Tell what backup is a parent of this incremental backup.\nProbably, we can limit it to just returning \"start_lsn\", which later can be\ncompared to \"stop_lsn\" of parent backup.\n\n3) Take an incremental backup based on this backup.\nHere we must help a backup manager to retrieve the LSN to pass it to\npg_basebackup.\n\n4) Restore an incremental backup into a directory (on top of already \nrestored\nfull backup).\nOne may use it to perform \"merge\" or \"restore\" of the incremental backup,\ndepending on the destination directory.\nI wonder if it is possible to integrate it into any existing tool, or we \nend up\nwith something like pg_basebackup/pg_baserestore as in the case of\npg_dump/pg_restore.\n\nHave you designed these? I may only recall \"pg_combinebackup\" from the very\nfirst message in this thread, which looks more like a sketch to explain the\nidea, rather than a thought-out feature design. 
I also found a page\nhttps://wiki.postgresql.org/wiki/Incremental_backup that raises the same\nquestions.\nI'm volunteering to write a draft patch or, more likely, set of patches, \nwhich\nwill allow us to discuss the subject in more detail.\nAnd to do that I wish we could agree on the API and data format (at least \nbroadly).\nLooking forward to hearing your thoughts.\n\n\nAs I see it, ideally the backup management tools should concentrate more on\nmanaging multiple backups, while all the logic of taking a single backup \n(of any\nkind) should be integrated into the core. It means that any out-of-core \nclient\nwon't have to walk the PGDATA directory and care about all the postgres \nspecific\nknowledge of data files consisting of blocks with headers and LSNs and \nso on. It\nsimply requests data and gets it.\nUnderstandably, it won't be implemented in one take and, what is more,\nit probably is not fully reachable.\nStill, it will be great to do our best to provide such tools (both \nexisting and\nfuture) with conveniently formatted data and API to get it.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Tue, 23 Apr 2019 14:08:12 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "I hope it's alright to throw in my $0.02 as a user. I've been following\nthis (and the other thread on reading WAL to find modified blocks,\nprefaulting, whatever else) since the start with great excitement and would\nlove to see the built-in backup capabilities in Postgres greatly improved.\nI know this is not completely on-topic for just incremental backups, so I\napologize in advance. It just seemed like the most apt place to chime in.\n\n\nJust to preface where I am coming from, I have been using pgBackRest for\nthe past couple years and used wal-e prior to that. I am not a big *nix\nuser other than all my servers, do all my development on Windows / use\nprimarily Java. The command line is not where I feel most comfortable\ndespite my best efforts over the last 5-6 years. Prior to Postgres, I used\nSQL Server for quite a few years at previous companies but was more a\njunior / intermediate skill set back then. I just wanted to put that out\nthere so you can see where my biases are.\n\n\n\n\nWith all that said, I would not be comfortable using pg_basebackup as my\nmain backup tool simply because I’d have to cobble together numerous tools\nto get backups stored in a safe (not on the same server) location, I’d have\nto manage expiring backups and the WAL which is no longer needed, along\nwith the rest of the stuff that makes these backup management tools useful.\n\n\nThe command line scares me, and even if I was able to get all that working,\nI would not feel warm and fuzzy I didn’t mess something up horribly and I\nmay hit an edge case which destroys backups, silently corrupts data, etc.\n\nI love that there are tools that manage all of it; backups, wal archiving,\nremote storage, integrate with cloud storage (S3 and the like), manages the\nretention of these backups with all their dependencies for me, and has all\nthe restore options necessary built in as well.\n\n\nBlock level incremental backup would be amazing for my use case. 
I have\nsmall updates / deletes that happen to data all over some of my largest\ntables. With pgBackRest, since the diff/incremental backups are at the file\nlevel, I can have a single update / delete which touched a random spot in a\ntable and now requires that whole 1gb file to be backed up again. That\nsaid, even if pg_basebackup was the only tool that did incremental block\nlevel backup tomorrow, I still wouldn’t start using it directly. I went\ninto the issues I’d have to deal with if I used pg_basebackup above, and\nincremental backups without a management tool make me think using it\ncorrectly would be much harder.\n\n\nI know this thread is just about incremental backup, and that pretty much\neverything in core is built up from small features into larger more complex\nones. I understand that and am not trying to dump on any efforts, I am\nsuper excited to see work being done in this area! I just wanted to share\nmy perspective on how crucial good backup management is to me (and I’m sure\na few others may share my sentiment considering how popular all the\nexternal tools are).\n\nI would never put a system in production unless I have some backup\nmanagement in place. If core builds a backup management tool which uses\npg_basebackup as building blocks for its solution…awesome! That may be\nsomething I’d use. If pg_basebackup can be improved so it can be used as\nthe basis most external backup management tools can build on top of, that’s\nalso great. All the external tools which practically every Postgres company\nhave built show that it’s obviously a need for a lot of users. Core will\nnever solve every single problem for all users, I know that. 
It would just\nbe great to see some of the fundamental features of backup management baked\ninto core in an extensible way.\n\nWith that, there could be a recommended way to set up backups\n(full/incremental, parallel, compressed), point in time recovery, backup\nretention, and perform restores (to a point in time, on a replica server,\netc) with just the tooling within core with a nice and simple user\ninterface, and great performance.\n\nIf those features core supports in the internal tooling are built in an\nextensible way (as has been discussed), there could be much less\nduplication of work implementing the same base features over and over for\neach external tool. Those companies can focus on more value-added features\nto their own products that core would never support, or on improving the\ntooling/performance/features core provides.\n\n\nWell, this is way longer and a lot less coherent than I was hoping, so I\napologize for that. Hopefully my stream of thoughts made a little bit of\nsense to someone.\n\n\n-Adam",
"msg_date": "Tue, 23 Apr 2019 15:12:27 -0400",
"msg_from": "Adam Brusselback <adambrusselback@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Apr 22, 2019 at 2:26 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > There was basically zero discussion about what things would look like at\n> > a protocol level (I went back and skimmed over the thread before sending\n> > my last email to specifically see if I was going to get this response\n> > back..). I get the idea behind the diff file, the contents of which I\n> > wasn't getting into above.\n> \n> Well, I wrote:\n> \n> \"There should be a way to tell pg_basebackup to request from the\n> server only those blocks where LSN >= threshold_value.\"\n> \n> I guess I assumed that people interested in the details would take\n> that to mean \"and therefore the protocol would grow an option for this\n> type of request in whatever way is the most straightforward possible\n> extension of the current functionality is,\" which is indeed how you\n> eventually interpreted it when you said we could \"extend BASE_BACKUP\n> is by adding LSN as an optional parameter.\"\n\nLooking at it from where I'm sitting, I brought up two ways that we\ncould extend the protocol to \"request from the server only those blocks\nwhere LSN >= threshold_value\" with one being the modification to\nBASE_BACKUP and the other being a new set of commands that could be\nparallelized. If I had assumed that you'd be thinking the same way I am\nabout extending the backup protocol, I wouldn't have said anything now\nand then would have complained after you wrote a patch that just\nextended the BASE_BACKUP command, at which point I likely would have\nbeen told that it's now been done and that I should have mentioned it\nearlier.\n\n> > external tools to leverage that. 
It sounds like what you're suggesting\n> > now is that you're happy to implement the backend code, expose it in a\n> > way that works just for pg_basebackup, and that if someone else wants to\n> > add things to the protocol to make it easier for external tools to\n> > leverage, great.\n> \n> Yep, that's more or less it, although I am potentially willing to do\n> some modest amount of that other work along the way. I just don't\n> want to prioritize it higher than getting the actual thing I want to\n> build built, which I think is a pretty fair position for me to take.\n\nAt least in part then it seems like we're viewing the level of effort\naround what I'm talking about quite differently, and I feel like that's\nlargely because every time I mention parallel anything there's this\nassumption that I'm asking you to parallelize pg_basebackup or write a\nwhole bunch more code to provide a fully optimized server-side parallel\nimplementation for backups. That really wasn't what I was going for. I\nwas thinking it would be a modest amount of additional work to add\nincremental backup via a few new commands, instead of through the\nBASE_BACKUP protocol command, that would make parallelization possible.\n\nNow, through this discussion, you've brought up some really good points\nabout how the initial thoughts I had around how we could add some\nrelatively simple commands, as part of this work, to make it easier for\nsomeone to later add parallel support to pg_basebackup (either full or\nincremental), or for external tools to leverage, might not be the best\nsolution when it comes to having parallel backup in core, and therefore\nwouldn't actually end up being useful towards that end. That's\ncertainly a fair point and possibly enough to justify not spending even\nthe modest time I was thinking it'd need, but I'm not convinced. 
Now,\nthat said, if you are convinced that's the case, and you're doing the\nwork, then it's certainly your prerogative to go in the direction you're\nconvinced of. I don't mean any of this discussion to imply that I'd\nobject to a commit that extended BASE_BACKUP in the way outlined above,\nbut I understood the question to be \"what do people think of this idea?\"\nand to that I'm still of the opinion that spending a modest amount of\ntime to provide a way to parallelize an incremental backup is worth it,\neven if it isn't optimal and isn't the direct goal of this effort.\n\nThere's a tangent on all of this that's pretty key though, which is the\nquestion around just how the blocks are identified. If the WAL scanning\nis done to figure out the blocks, then that's quite a bit different from\nthe other idea of \"open this relation and scan it, but only give me the\nblocks after this LSN\". It's the latter case that I've been mostly\nthinking about in this thread, which is part of why I was thinking it'd\nbe a modest amount of work to have protocol commands that accepted a\nfile (or perhaps a relation..) to scan and return blocks from instead of\nbaking this into BASE_BACKUP which by definition just serially scans the\ndata directory and returns things as it finds them. 
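To make the latter idea a little more concrete, here is a rough sketch of my own (not from this thread) of scanning a relation segment and keeping only the blocks whose page LSN is at or past a threshold. It assumes the standard 8 KB block size and the usual little-endian on-disk layout of the page header LSN, and it ignores everything a real server-side implementation would have to care about, such as locking and torn pages:

```python
# Illustrative only: walk one relation segment page by page and yield the
# block numbers (and pages) whose page header LSN is at or past a
# threshold. Assumes BLCKSZ = 8192 and pd_lsn stored as two little-endian
# uint32s, high word first.
import struct

BLCKSZ = 8192

def blocks_after_lsn(path, threshold_lsn):
    with open(path, 'rb') as f:
        blkno = 0
        while True:
            page = f.read(BLCKSZ)
            if len(page) < BLCKSZ:
                break
            hi, lo = struct.unpack_from('<II', page, 0)
            page_lsn = (hi << 32) | lo
            if page_lsn >= threshold_lsn:
                yield blkno, page
            blkno += 1
```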
For the case where\nwe have WAL scanning happening and modfiles which are being read and\nused to figure out the blocks to send, it seems like it might be more\ncomplicated and therefore potentially quite a bit more work to have a\nparallel version of that.\n\n> > All I can say is that that's basically how we ended up\n> > in the situation we're in today where pg_basebackup doesn't support\n> > parallel backup but a bunch of external tools do and they don't go\n> > through the backend to get there, even though they'd probably prefer to.\n> \n> I certainly agree that core should try to do things in a way that is\n> useful to external tools when that can be done without undue effort,\n> but only if it can actually be done without undo effort. Let's see\n> whether that's the case here:\n> \n> - Anastasia wants a command added that dumps out whatever the server\n> knows about what files have changed, which I already agreed was a\n> reasonable extension of my initial proposal.\n\nThat seems like a useful thing to have, I agree.\n\n> - You said that for this to be useful to pgbackrest, it'd have to use\n> a whole different mechanism that includes commands to request\n> individual files and blocks within those files, which would be a\n> significant rewrite of pg_basebackup that you agreed is more closely\n> related to parallel backup than to the project under discussion on\n> this thread. And that even then pgbackrest probably wouldn't use it\n> because it also does server-side compression and encryption which are\n> not included in this proposal.\n\nYes, having thought about it a bit more, without adding in the other\nfeatures that we already support in pgBackRest, it's unlikely we'd use\nit in the form that I was contemplating. 
That said, it'd at least be\ncloser to something we could use and adding those other features, such\nas compression and encryption, would almost certainly be simpler and\neasier if there were already protocol commands like those we discussed\nfor parallel work.\n\n> > Thanks for sharing your thoughts on that, certainly having the backend\n> > able to be more intelligent about streaming files to avoid latency is\n> > good and possibly the best approach. Another alternative to reducing\n> > the latency would be to have a way for the client to request a set of\n> > files, but I don't know that it'd be better.\n> \n> I don't know either. This is an area that needs more thought, I\n> think, although as discussed, it's more related to parallel backup\n> than $SUBJECT.\n\nYes, I agree with that.\n\n> > I'm not really sure why the above is extremely inconvenient for\n> > third-party tools, beyond just that they've already been written to work\n> > with an assumption that the server-side of things isn't as intelligent\n> > as PG is.\n> \n> Well, one thing you might want to do is have a tool that connects to\n> the server, enters backup mode, requests information on what blocks\n> have changed, copies those blocks via direct filesystem access, and\n> then exits backup mode. Such a tool would really benefit from a\n> START_BACKUP / SEND_FILE_LIST / SEND_FILE_CONTENTS / STOP_BACKUP\n> command language, because it would just skip ever issuing the\n> SEND_FILE_CONTENTS command in favor of doing that part of the work via\n> other means. On the other hand, a START_PARALLEL_BACKUP LSN '1/234'\n> command is useless to such a tool.\n\nThat's true, but I hardly ever hear people talking about how wonderful\nit is that pgBackRest uses SSH to grab the data. What I hear, often, is\nthat people would really like backups to be done over the PG protocol on\nthe same port that replication is done on. 
A possible compromise is\nhaving a dedicated port for the backup agent to use, but it's definitely\nnot the preference.\n\n> Contrariwise, a tool that has its own magic - perhaps based on\n> WAL-scanning or something like ptrack - to know which files currently\n> exist and which blocks are modified could use SEND_FILE_CONTENTS but\n> not SEND_FILE_LIST. And a filesystem-snapshot based technique might\n> use START_BACKUP and STOP_BACKUP but nothing else.\n> \n> In short, providing granular commands like this lets the client be\n> really intelligent even if the server isn't, and lets the client have\n> fine-grained control of the process. This is very good if you're an\n> out-of-core tool maintainer and your tool is trying to be smarter than\n> - or even just differently-designed than - core.\n> \n> But if what you really want is just a maximally-efficient parallel\n> backup, you don't need the commands to be fine-grained like this. You\n> don't even really *want* the commands to be fine-grained like this,\n> because it's better if the server works it all out so as to avoid\n> unnecessary network round-trips. You just want to tell the server\n> \"hey, I want to do a parallel backup with 5 participants - hit me!\"\n> and have it do that in the most efficient way that it knows how,\n> without forcing the client to make any decisions that can be made just\n> as well, and perhaps more efficiently, on the server.\n> \n> On the third hand, one advantage of having the fine-grained commands\n> is that it would not only make it easier for out-of-core tools to do\n> cool things, but also in-core tools. For instance, you can imagine\n> being able to do something like:\n> \n> pg_basebackup -D outputdir -d conninfo --copy-files-from=$PGDATA\n> \n> If the client is using what I'm calling fine-grained commands, this is\n> easy to implement. 
If it's just calling a piece of server side\n> functionality that sends back a tarball as a blob, it's not.\n> \n> So each approach has some pros and cons.\n\nI agree that each has some pros and cons. Certainly one of the big\n'cons' here is that it'd be a lot more backend work to implement the\n'maximally-efficient parallel backup', while the fine-grained commands\nwouldn't require nearly as much but would still allow a great deal of\nthe benefit for both in-core and out-of-core tools, potentially.\n\n> > I'm disappointed that the concerns about the trouble that end users are\n> > likely to have with this didn't garner more discussion.\n> \n> Well, we can keep discussing things. I've tried to reply to as many\n> of your concerns as I can, but I believe you've written more email on\n> this thread than everyone else combined, so perhaps I haven't entirely\n> been able to keep up.\n>\n> That being said, as far as I can tell, those concerns were not\n> seconded by anyone else. Also, if I understand correctly, when I\n> asked how we could avoid that problem, you said that you didn't know. And\n> I said it seemed like we would need to do a very expensive operation at\n> server startup, or magic. So I feel that perhaps it is a problem that\n> (1) is not of great general concern and (2) to which no really\n> superior engineering solution is possible.\n\nThe comments that Anastasia had around the issues with being able to\nidentify the full backup that goes with a given incremental backup, et\nal, certainly echoed some of my concerns regarding this part of the\ndiscussion.\n\nAs for the concerns about trying to avoid corruption from starting up an\ninvalid cluster, I didn't see much discussion about the idea of some\nkind of cross-check between pg_control and backup_label. 
That was all\nvery hand-wavy, so I'm not too surprised, but I don't think it's\ncompletely impossible to have something better than \"well, if you just\nremove this one file, then you get a non-obviously corrupt cluster that\nyou can happily start up\". I'll certainly accept that it requires more\nthought though and if we're willing to continue a discussion around\nthat, great.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 24 Apr 2019 09:28:15 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 9:28 AM Stephen Frost <sfrost@snowman.net> wrote:\n> Looking at it from where I'm sitting, I brought up two ways that we\n> could extend the protocol to \"request from the server only those blocks\n> where LSN >= threshold_value\" with one being the modification to\n> BASE_BACKUP and the other being a new set of commands that could be\n> parallelized. If I had assumed that you'd be thinking the same way I am\n> about extending the backup protocol, I wouldn't have said anything now\n> and then would have complained after you wrote a patch that just\n> extended the BASE_BACKUP command, at which point I likely would have\n> been told that it's now been done and that I should have mentioned it\n> earlier.\n\nFair enough.\n\n> At least in part then it seems like we're viewing the level of effort\n> around what I'm talking about quite differently, and I feel like that's\n> largely because every time I mention parallel anything there's this\n> assumption that I'm asking you to parallelize pg_basebackup or write a\n> whole bunch more code to provide a fully optimized server-side parallel\n> implementation for backups. That really wasn't what I was going for. I\n> was thinking it would be a modest amount of additional work to add\n> incremental backup via a few new commands, instead of through the\n> BASE_BACKUP protocol command, that would make parallelization possible.\n\nI'm not sure about that. It doesn't seem crazy difficult, but there\nare a few wrinkles. One is that if the client is requesting files one\nat a time, it's got to have a list of all the files that it needs to\nrequest, and that means that it has to ask the server to make a\npreparatory pass over the whole PGDATA directory to get a list of all\nthe files that exist. That overhead is not otherwise needed. 
Another\nis that the list of files might be really large, and that means that\nthe client would either use a lot of memory to hold that great big\nlist, or need to deal with spilling the list to a spool file\nsomeplace, or else have a server protocol that lets the list be\nfetched incrementally in chunks. A third is that, as you mention\nfurther on, it means that the client has to care a lot more about\nexactly how the server is figuring out which blocks have been\nmodified. If it just says BASE_BACKUP ..., the server can be\ninternally reading each block and checking the LSN, or using\nWAL-scanning or ptrack or whatever and the client doesn't need to know\nor care. But if the client is asking for a list of modified files or\nblocks, then that presumes the information is available, and not too\nexpensively, without actually reading the files. Fourth, MAX_RATE\nprobably won't actually limit to the correct rate overall if the limit\nis applied separately to each file.\n\nI'd be afraid that a patch that tried to handle all that as part of\nthis project would get rejected on the grounds that it was trying to\nsolve too many unrelated problems. Also, though not everybody has to\nagree on what constitutes a \"modest amount of additional work,\" I\nwould not describe solving all of those problems as a modest effort,\nbut rather a pretty substantial one.\n\n> There's a tangent on all of this that's pretty key though, which is the\n> question around just how the blocks are identified. If the WAL scanning\n> is done to figure out the blocks, then that's quite a bit different from\n> the other idea of \"open this relation and scan it, but only give me the\n> blocks after this LSN\". It's the latter case that I've been mostly\n> thinking about in this thread, which is part of why I was thinking it'd\n> be a modest amount of work to have protocol commands that accepted a\n> file (or perhaps a relation..) 
to scan and return blocks from instead of\n> baking this into BASE_BACKUP which by definition just serially scans the\n> data directory and returns things as it finds them. For the case where\n> we have WAL scanning happening and modfiles which are being read and\n> used to figure out the blocks to send, it seems like it might be more\n> complicated and therefore potentially quite a bit more work to have a\n> parallel version of that.\n\nYeah. I don't entirely agree that the first one is simple, as per the\nabove, but I definitely agree that the second one is more complicated\nthan the first one.\n\n> > Well, one thing you might want to do is have a tool that connects to\n> > the server, enters backup mode, requests information on what blocks\n> > have changed, copies those blocks via direct filesystem access, and\n> > then exits backup mode. Such a tool would really benefit from a\n> > START_BACKUP / SEND_FILE_LIST / SEND_FILE_CONTENTS / STOP_BACKUP\n> > command language, because it would just skip ever issuing the\n> > SEND_FILE_CONTENTS command in favor of doing that part of the work via\n> > other means. On the other hand, a START_PARALLEL_BACKUP LSN '1/234'\n> > command is useless to such a tool.\n>\n> That's true, but I hardly ever hear people talking about how wonderful\n> it is that pgBackRest uses SSH to grab the data. What I hear, often, is\n> that people would really like backups to be done over the PG protocol on\n> the same port that replication is done on. A possible compromise is\n> having a dedicated port for the backup agent to use, but it's definitely\n> not the preference.\n\nIf you happen to be on the same system where the backup is running,\nreading straight from the data directory might be a lot faster.\nOtherwise, I tend to agree with you that using libpq is probably best.\n\n> I agree that each has some pros and cons. 
Certainly one of the big\n> 'cons' here is that it'd be a lot more backend work to implement the\n> 'maximally-efficient parallel backup', while the fine-grained commands\n> wouldn't require nearly as much but would still allow a great deal of\n> the benefit for both in-core and out-of-core tools, potentially.\n\nI agree.\n\n> The comments that Anastasia had around the issues with being able to\n> identify the full backup that goes with a given incremental backup, et\n> al, certainly echoed some my concerns regarding this part of the\n> discussion.\n>\n> As for the concerns about trying to avoid corruption from starting up an\n> invalid cluster, I didn't see much discussion about the idea of some\n> kind of cross-check between pg_control and backup_label. That was all\n> very hand-wavy, so I'm not too surprised, but I don't think it's\n> completely impossible to have something better than \"well, if you just\n> remove this one file, then you get a non-obviously corrupt cluster that\n> you can happily start up\". I'll certainly accept that it requires more\n> thought though and if we're willing to continue a discussion around\n> that, great.\n\nI think there are three different issues here that need to be\nconsidered separately.\n\nIssue #1: If you manually add files to your backup, remove files from\nyour backup, or change files in your backup, bad things will happen.\nThere is fundamentally nothing we can do to prevent this completely,\nbut it may be possible to make the system more resilient against\nham-handed modifications, at least to the extent of detecting them.\nThat's maybe a topic for another thread, but it's an interesting one:\nAndres and I were brainstorming about it at some point.\n\nIssue #2: You can only restore an LSN-based incremental backup\ncorrectly if you have a base backup whose start-of-backup LSN is\ngreater than or equal to the threshold LSN used to take the\nincremental backup. 
If #1 is not in play, this is just a simple\ncross-check at restoration time: retrieve the 'START WAL LOCATION'\nfrom the prior backup's backup_label file and the threshold LSN for\nthe incremental backup from wherever you decide to store it and\ncompare them; if they do not have the right relationship, ERROR. As\nto whether #1 might end up in play here, anything's possible, but\nwouldn't manually editing LSNs in backup metadata files be pretty\nobviously a bad idea? (Then again, I didn't really think the whole\nbackup_label thing was that confusing either, and obviously I was\nwrong about that. Still, editing a file requires a little more work\nthan removing it... you have to not only lie to the system, you have\nto decide which lie to tell!)\n\nIssue #3: Even if you clearly understand the rule articulated in #2,\nyou might find it hard to follow in practice. If you take a full\nbackup on Sunday and an incremental against Sunday's backup or against\nthe previous day's backup on each subsequent day, it's not really that\nhard to understand. But in more complex scenarios it could be hard to\nget right. For example if you've been removing your backups when they\nare a month old and then you start doing the same thing once you\nadd incrementals to the picture you might easily remove a full backup\nupon which a newer incremental depends. I see the need for good tools\nto manage this kind of complexity, but have no plan as part of this\nproject to provide them. I think that just requires too many\nassumptions about where those backups are being stored and how they\nare being catalogued and managed; I don't believe I currently am\nknowledgeable enough to design something that would be good enough to\nmeet core standards for inclusion, and I don't want to waste energy\ntrying. 
If someone else wants to try, that's OK with me, but I think\nit's probably better to let this be a thing that people experiment\nwith outside of core for a while until we see what ends up being a\nwinner. I realize that this is a debatable position, but as I'm sure\nyou realize by now, I have a strong desire to limit the scope of this\nproject in such a way that I can get it done, 'cuz a bird in the hand\nis worth two in the bush.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 24 Apr 2019 11:58:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Apr 24, 2019 at 9:28 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > At least in part then it seems like we're viewing the level of effort\n> > around what I'm talking about quite differently, and I feel like that's\n> > largely because every time I mention parallel anything there's this\n> > assumption that I'm asking you to parallelize pg_basebackup or write a\n> > whole bunch more code to provide a fully optimized server-side parallel\n> > implementation for backups. That really wasn't what I was going for. I\n> > was thinking it would be a modest amount of additional work to add\n> > incremental backup via a few new commands, instead of through the\n> > BASE_BACKUP protocol command, that would make parallelization possible.\n> \n> I'm not sure about that. It doesn't seem crazy difficult, but there\n> are a few wrinkles. One is that if the client is requesting files one\n> at a time, it's got to have a list of all the files that it needs to\n> request, and that means that it has to ask the server to make a\n> preparatory pass over the whole PGDATA directory to get a list of all\n> the files that exist. That overhead is not otherwise needed. 
Another\n> is that the list of files might be really large, and that means that\n> the client would either use a lot of memory to hold that great big\n> list, or need to deal with spilling the list to a spool file\n> someplace, or else have a server protocol that lets the list be\n> fetched incrementally in chunks.\n\nSo, I had a thought about that when I was composing the last email and\nwhile I'm still unsure about it, maybe it'd be useful to mention it\nhere- do we really need a list of every *file*, or could we reduce that\ndown to a list of relations + forks for the main data directory, and\nthen always include whatever other directories/files are appropriate?\n\nWhen it comes to operating in chunks, well, if we're getting a list of\nrelations instead of files, we do have this thing called cursors..\n\n> A third is that, as you mention\n> further on, it means that the client has to care a lot more about\n> exactly how the server is figuring out which blocks have been\n> modified. If it just says BASE_BACKUP ..., the server can be\n> internally reading each block and checking the LSN, or using\n> WAL-scanning or ptrack or whatever and the client doesn't need to know\n> or care. But if the client is asking for a list of modified files or\n> blocks, then that presumes the information is available, and not too\n> expensively, without actually reading the files.\n\nI would think the client would be able to just ask for the list of\nmodified files, when it comes to building up the list of files to ask\nfor, which could potentially be done based on mtime instead of by WAL\nscanning or by scanning the files themselves. 
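A minimal sketch of such an mtime-based listing (illustrative only, and with the usual caveats about trusting mtime at all):

```python
# Illustrative sketch of the cheap mtime-based approximation some
# existing backup tools use: list every file under the data directory
# whose modification time is at or after a cutoff.
import os

def files_modified_since(datadir, since_epoch):
    changed = []
    for root, dirs, names in os.walk(datadir):
        for name in names:
            path = os.path.join(root, name)
            if os.path.getmtime(path) >= since_epoch:
                changed.append(os.path.relpath(path, datadir))
    return sorted(changed)
```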
Don't get me wrong, I'd\nprefer that we work based on the WAL, since I have more confidence in\nthat, but certainly quite a few of the tools do work off mtime these\ndays and while it's not perfect, the risk/reward there is pretty\npalatable to a lot of people.\n\n> Fourth, MAX_RATE\n> probably won't actually limit to the correct rate overall if the limit\n> is applied separately to each file.\n\nSure, I hadn't been thinking about MAX_RATE and that would certainly\ncomplicate things if we're offering to provide MAX_RATE-type\ncapabilities as part of this new set of commands.\n\n> I'd be afraid that a patch that tried to handle all that as part of\n> this project would get rejected on the grounds that it was trying to\n> solve too many unrelated problems. Also, though not everybody has to\n> agree on what constitutes a \"modest amount of additional work,\" I\n> would not describe solving all of those problems as a modest effort,\n> but rather a pretty substantial one.\n\nI suspect some of that's driven by how they get solved and if we decide\nwe have to solve all of them. With things like MAX_RATE + incremental\nbackups, I wonder how that's going to end up working, when you have the\noption to apply the limit to the network, or to the disk I/O. You might\nhave addressed that elsewhere, I've not looked, and I'm not too\nparticular about it personally either, but a definition could be \"max\nrate at which we'll read the file you asked for on this connection\" and\nthat would be pretty straight-forward, I'd think.\n\n> > > Well, one thing you might want to do is have a tool that connects to\n> > > the server, enters backup mode, requests information on what blocks\n> > > have changed, copies those blocks via direct filesystem access, and\n> > > then exits backup mode. 
Such a tool would really benefit from a\n> > > START_BACKUP / SEND_FILE_LIST / SEND_FILE_CONTENTS / STOP_BACKUP\n> > > command language, because it would just skip ever issuing the\n> > > SEND_FILE_CONTENTS command in favor of doing that part of the work via\n> > > other means. On the other hand, a START_PARALLEL_BACKUP LSN '1/234'\n> > > command is useless to such a tool.\n> >\n> > That's true, but I hardly ever hear people talking about how wonderful\n> > it is that pgBackRest uses SSH to grab the data. What I hear, often, is\n> > that people would really like backups to be done over the PG protocol on\n> > the same port that replication is done on. A possible compromise is\n> > having a dedicated port for the backup agent to use, but it's definitely\n> > not the preference.\n> \n> If you happen to be on the same system where the backup is running,\n> reading straight from the data directory might be a lot faster.\n\nYes, that's certainly true.\n\n> > The comments that Anastasia had around the issues with being able to\n> > identify the full backup that goes with a given incremental backup, et\n> > al, certainly echoed some of my concerns regarding this part of the\n> > discussion.\n> >\n> > As for the concerns about trying to avoid corruption from starting up an\n> > invalid cluster, I didn't see much discussion about the idea of some\n> > kind of cross-check between pg_control and backup_label. That was all\n> > very hand-wavy, so I'm not too surprised, but I don't think it's\n> > completely impossible to have something better than \"well, if you just\n> > remove this one file, then you get a non-obviously corrupt cluster that\n> > you can happily start up\". 
I'll certainly accept that it requires more\n> > thought though and if we're willing to continue a discussion around\n> > that, great.\n> \n> I think there are three different issues here that need to be\n> considered separately.\n> \n> Issue #1: If you manually add files to your backup, remove files from\n> your backup, or change files in your backup, bad things will happen.\n> There is fundamentally nothing we can do to prevent this completely,\n> but it may be possible to make the system more resilient against\n> ham-handed modifications, at least to the extent of detecting them.\n> That's maybe a topic for another thread, but it's an interesting one:\n> Andres and I were brainstorming about it at some point.\n\nI'd certainly be interested in hearing about ways we can improve on\nthat. I'm alright with it being on another thread as it's a broader\nconcern than just what we're talking about here.\n\n> Issue #2: You can only restore an LSN-based incremental backup\n> correctly if you have a base backup whose start-of-backup LSN is\n> greater than or equal to the threshold LSN used to take the\n> incremental backup. If #1 is not in play, this is just a simple\n> cross-check at restoration time: retrieve the 'START WAL LOCATION'\n> from the prior backup's backup_label file and the threshold LSN for\n> the incremental backup from wherever you decide to store it and\n> compare them; if they do not have the right relationship, ERROR. As\n> to whether #1 might end up in play here, anything's possible, but\n> wouldn't manually editing LSNs in backup metadata files be pretty\n> obviously a bad idea? (Then again, I didn't really think the whole\n> backup_label thing was that confusing either, and obviously I was\n> wrong about that. Still, editing a file requires a little more work\n> than removing it... 
you have to not only lie to the system, you have\n> to decide which lie to tell!)\n\nYes, that'd certainly be at least one cross-check, but what if you've\ngot an incremental backup based on a prior incremental backup that's\nbased on a prior full, and you skip the incremental backup in between\nsomehow? Or are we just going to state outright that we don't support\nincremental-on-incremental (in which case, all backups would actually be\neither 'full' or 'differential' in the pgBackRest parlance, anyway, and\nthat parlance comes from my recollection of how other tools describe the\ndifferent backup types, but that was from many moons ago and might be\nentirely wrong)?\n\n> Issue #3: Even if you clearly understand the rule articulated in #2,\n> you might find it hard to follow in practice. If you take a full\n> backup on Sunday and an incremental against Sunday's backup or against\n> the previous day's backup on each subsequent day, it's not really that\n> hard to understand. But in more complex scenarios it could be hard to\n> get right. For example if you've been removing your backups when they\n> are a month old and then you start doing the same thing once you\n> add incrementals to the picture you might easily remove a full backup\n> upon which a newer incremental depends. I see the need for good tools\n> to manage this kind of complexity, but have no plan as part of this\n> project to provide them. I think that just requires too many\n> assumptions about where those backups are being stored and how they\n> are being catalogued and managed; I don't believe I currently am\n> knowledgeable enough to design something that would be good enough to\n> meet core standards for inclusion, and I don't want to waste energy\n> trying. If someone else wants to try, that's OK with me, but I think\n> it's probably better to let this be a thing that people experiment\n> with outside of core for a while until we see what ends up being a\n> winner. 
I realize that this is a debatable position, but as I'm sure\n> you realize by now, I have a strong desire to limit the scope of this\n> project in such a way that I can get it done, 'cuz a bird in the hand\n> is worth two in the bush.\n\nEven if what we're talking about here is really only \"differentials\", or\nbackups where the incremental contains all the changes from a prior full\nbackup, if the only check is \"full LSN is greater than or equal to the\nincremental backup LSN\", then you have a potential problem that's larger\nthan just the incrementals no longer being valid because you removed the\nfull backup on which they were taken- you might think that an *earlier*\nfull backup is the one for a given incremental and perform a restore\nwith the wrong full/incremental matchup and end up with a corrupted\ndatabase.\n\nThese are exactly the kind of issues that make me really wonder if this\nis the right natural progression for pg_basebackup or any backup tool to\ngo in. Maybe there's some additional things we can do to make it harder\nfor someone to end up with a corrupted database when they restore, but\nit's really hard to get things like expiration correct. We see users\nalready ending up with problems because they don't manage expiration of\ntheir WAL correctly, and now we're adding another level of serious\ncomplication to the expiration requirements that, as we've seen even on\nthis thread, some users are just not going to ever feel comfortable\nwith doing on their own.\n\nPerhaps it's not relevant and I get that you want to build this cool\nincremental backup capability into pg_basebackup and I'm not going to\nstop you from doing it, but if I was going to build a backup tool,\nadding support for block-level incremental backup wouldn't be where I'd\nstart, and, in fact, I might not even get to it even after investing\nover 5 years in the project and even after building in proper backup\nmanagement. 
The idea of implementing block-level incrementals while\npushing the backup management, expiration, and dependency between\nincrementals and fulls on to the user to figure out just strikes me as\nentirely backwards and, frankly, to be gratuitously 'itch scratching' at\nthe expense of what users really want and need here.\n\nOne of the great things about pg_basebackup is its simplicity and\nability to be a one-time \"give me a snapshot of the database\" and this\nis building in a complicated feature to it that *requires* users to\nbuild their own basic capabilities externally in order to be able to use\nit. I've tried to avoid getting into that here and I won't go on about\nit, since it's your time to do with as you feel appropriate, but I do\nworry that it makes us, as a project, look a bit more cavalier about\nwhat users are asking for vs. what cool new thing we want to play with\nthan I, at least, would like us to be (so, I'll caveat that with \"in\nthis area anyway\", since I suspect saying this will probably come back\nto bite me in some other discussion later ;).\n\nThanks,\n\nStephen",
"msg_date": "Wed, 24 Apr 2019 12:57:36 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 12:57 PM Stephen Frost <sfrost@snowman.net> wrote:\n> So, I had a thought about that when I was composing the last email and\n> while I'm still unsure about it, maybe it'd be useful to mention it\n> here- do we really need a list of every *file*, or could we reduce that\n> down to a list of relations + forks for the main data directory, and\n> then always include whatever other directories/files are appropriate?\n\nI'm not quite sure what the difference is here. I agree that we could\ntry to compact the list of file names by saying 16384 (24 segments)\ninstead of 16384, 16384.1, ..., 16384.23, but I doubt that saves\nanything meaningful. I don't see how we can leave anything out\naltogether. If there's a filename called boaty.mcboatface in the\nserver directory, I think we've got to back it up, and that won't\nhappen unless the client knows that it is there, and it won't know\nunless we include it in a list.\n\n> When it comes to operating in chunks, well, if we're getting a list of\n> relations instead of files, we do have this thing called cursors..\n\nSure... but they don't work for replication commands and I am\ndefinitely not volunteering to change that.\n\n> I would think the client would be able to just ask for the list of\n> modified files, when it comes to building up the list of files to ask\n> for, which could potentially be done based on mtime instead of by WAL\n> scanning or by scanning the files themselves. Don't get me wrong, I'd\n> prefer that we work based on the WAL, since I have more confidence in\n> that, but certainly quite a few of the tools do work off mtime these\n> days and while it's not perfect, the risk/reward there is pretty\n> palatable to a lot of people.\n\nThat approach, as with a few others that have been suggested, requires\nthat the client have access to the previous backup, which makes me\nuninterested in implementing it. 
I want a version of incremental\nbackup where the client needs to know the LSN of the previous backup\nand nothing else. That way, if you store your actual backups on a\ntape drive in an airless vault at the bottom of the Pacific Ocean, you\ncan still take incremental backup against them, as long as you\nremember to note the LSNs before you ship the backups to the vault.\nWoohoo! It also allows for the wire protocol to be very simple and\nthe client to be very simple; neither of those things is essential,\nbut both are nice.\n\nAlso, I think using mtimes is just asking to get burned. Yeah, almost\nnobody will, but an LSN-based approach is more granular (block level)\nand more reliable (can't be fooled by resetting a clock backward, or\nby a filesystem being careless with file metadata), so I think it\nmakes sense to focus on getting that to work. It's worth keeping in\nmind that there may be somewhat different expectations for an external\ntool vs. a core feature. Stupid as it may sound, I think people using\nan external tool are more likely to do things read the directions, and\nthose directions can say things like \"use a reasonable filesystem and\ndon't set your clock backward.\" When stuff goes into core, people\nassume that they should be able to run it on any filesystem on any\nhardware where they can get it to work and it should just work. And\nyou also get a lot more users, so even if the percentage of people not\nreading the directions were to stay constant, the actual number of\nsuch people will go up a lot. So picking what we seem to both agree to\nbe the most robust way of detecting changes seems like the way to go\nfrom here.\n\n> I suspect some of that's driven by how they get solved and if we decide\n> we have to solve all of them. With things like MAX_RATE + incremental\n> backups, I wonder how that's going to end up working, when you have the\n> option to apply the limit to the network, or to the disk I/O. 
You might\n> have addressed that elsewhere, I've not looked, and I'm not too\n> particular about it personally either, but a definition could be \"max\n> rate at which we'll read the file you asked for on this connection\" and\n> that would be pretty straight-forward, I'd think.\n\nI mean, it's just so people can tell pg_basebackup what rate they want\nvia a command-line option and have it happen like that. They don't\ncare about the rates for individual files.\n\n> > Issue #1: If you manually add files to your backup, remove files from\n> > your backup, or change files in your backup, bad things will happen.\n> > There is fundamentally nothing we can do to prevent this completely,\n> > but it may be possible to make the system more resilient against\n> > ham-handed modifications, at least to the extent of detecting them.\n> > That's maybe a topic for another thread, but it's an interesting one:\n> > Andres and I were brainstorming about it at some point.\n>\n> I'd certainly be interested in hearing about ways we can improve on\n> that. I'm alright with it being on another thread as it's a broader\n> concern than just what we're talking about here.\n\nMight be a good topic to chat about at PGCon.\n\n> > Issue #2: You can only restore an LSN-based incremental backup\n> > correctly if you have a base backup whose start-of-backup LSN is\n> > greater than or equal to the threshold LSN used to take the\n> > incremental backup. If #1 is not in play, this is just a simple\n> > cross-check at restoration time: retrieve the 'START WAL LOCATION'\n> > from the prior backup's backup_label file and the threshold LSN for\n> > the incremental backup from wherever you decide to store it and\n> > compare them; if they do not have the right relationship, ERROR. As\n> > to whether #1 might end up in play here, anything's possible, but\n> > wouldn't manually editing LSNs in backup metadata files be pretty\n> > obviously a bad idea? 
(Then again, I didn't really think the whole\n> > backup_label thing was that confusing either, and obviously I was\n> > wrong about that. Still, editing a file requires a little more work\n> > than removing it... you have to not only lie to the system, you have\n> > to decide which lie to tell!)\n>\n> Yes, that'd certainly be at least one cross-check, but what if you've\n> got an incremental backup based on a prior incremental backup that's\n> based on a prior full, and you skip the incremental backup inbetween\n> somehow? Or are we just going to state outright that we don't support\n> incremental-on-incremental (in which case, all backups would actually be\n> either 'full' or 'differential' in the pgBackRest parlance, anyway, and\n> that parlance comes from my recollection of how other tools describe the\n> different backup types, but that was from many moons ago and might be\n> entirely wrong)?\n\nI have every intention of supporting that case, just as I described in\nmy original email, and the algorithm that I just described handles it.\nYou just have to repeat the checks for every backup in the chain. If\nyou have a backup A, and a backup B intended as an incremental vs. A,\nand a backup C intended as an incremental vs. B, then the threshold\nLSN for C is presumably the starting LSN for B, and the threshold LSN\nfor B is presumably the starting LSN for A. If you try to restore\nA-B-C you'll check C vs. B and find that all is well and similarly for\nB vs. A. 
If you try to restore A-C, you'll find out that A's start\nLSN precedes C's threshold LSN and error out.\n\n> Even if what we're talking about here is really only \"differentials\", or\n> backups where the incremental contains all the changes from a prior full\n> backup, if the only check is \"full LSN is greater than or equal to the\n> incremental backup LSN\", then you have a potential problem that's larger\n> than just the incrementals no longer being valid because you removed the\n> full backup on which they were taken- you might think that an *earlier*\n> full backup is the one for a given incremental and perform a restore\n> with the wrong full/incremental matchup and end up with a corrupted\n> database.\n\nNo, the proposed check is explicitly designed to prevent that. You'd\nget a restore failure (which is not great either, of course).\n\n> management. The idea of implementing block-level incrementals while\n> pushing the backup management, expiration, and dependency between\n> incrementals and fulls on to the user to figure out just strikes me as\n> entirely backwards and, frankly, to be gratuitously 'itch scratching' at\n> the expense of what users really want and need here.\n\nWell, not everybody needs or wants the same thing. I wouldn't be\nproposing it if my employer didn't think it was gonna solve a real\nproblem...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 25 Apr 2019 07:32:13 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
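The restore-time cross-check and chain rule Robert describes above can be sketched as follows. This is an illustrative sketch only, not code from pg_basebackup; `parse_lsn` and `validate_chain` are hypothetical names, and backups are modeled as (start LSN, threshold LSN) pairs:

```python
# Sketch of the restore-time validation described above: an incremental
# backup may only be restored on top of a backup whose start-of-backup LSN
# is greater than or equal to the incremental's threshold LSN.

def parse_lsn(text):
    """Parse a PostgreSQL LSN such as '16/B374D848' into an integer."""
    hi, lo = text.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def validate_chain(backups):
    """backups: list of (start_lsn, threshold_lsn or None), oldest first.

    A full backup has threshold None; each incremental's threshold must be
    no newer than the start LSN of the backup it sits on top of.
    """
    prev_start = None
    for start, threshold in backups:
        if threshold is not None:
            if prev_start is None or prev_start < threshold:
                raise ValueError("no suitable base backup for incremental")
        prev_start = start

# A -> B -> C restores fine; skipping B (restoring A then C) errors out,
# exactly as in the A-B-C example above.
A = (parse_lsn("0/2000028"), None)                    # full backup
B = (parse_lsn("0/3000060"), parse_lsn("0/2000028"))  # incremental vs A
C = (parse_lsn("0/4000100"), parse_lsn("0/3000060"))  # incremental vs B
validate_chain([A, B, C])
```

Repeating the pairwise check for every link in the chain is what makes the A-C case fail: A's start LSN precedes C's threshold LSN, so the restore errors out instead of silently producing a corrupt cluster.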
{
"msg_contents": "23.04.2019 14:08, Anastasia Lubennikova wrote:\n> I'm volunteering to write a draft patch or, more likely, set of \n> patches, which\n> will allow us to discuss the subject in more detail.\n> And to do that I wish we agree on the API and data format (at least \n> broadly).\n> Looking forward to hearing your thoughts. \n\nThough the previous discussion stalled,\nI still hope that we could agree on basic points such as a map file \nformat and protocol extension,\nwhich is necessary to start implementing the feature.\n\n--------- Proof Of Concept patch ---------\n\nIn attachments, you can find a prototype of incremental pg_basebackup, \nwhich consists of 2 features:\n\n1) To perform incremental backup one should call pg_basebackup with a \nnew argument:\n\npg_basebackup -D 'basedir' --prev-backup-start-lsn 'lsn'\n\nwhere lsn is a start_lsn of parent backup (can be found in \n\"backup_label\" file)\n\nIt calls BASE_BACKUP replication command with a new argument \nPREV_BACKUP_START_LSN 'lsn'.\n\nFor datafiles, only pages with LSN > prev_backup_start_lsn will be \nincluded in the backup.\nThey are saved into 'filename.partial' file, 'filename.blockmap' file \ncontains an array of BlockNumbers.\nFor example, if we backuped blocks 1,3,5, filename.partial will contain \n3 blocks, and 'filename.blockmap' will contain array {1,3,5}.\n\nNon-datafiles use the same format as before.\n\n2) To merge incremental backup into a full backup call\n\npg_basebackup -D 'basedir' --incremental-pgdata 'incremental_basedir' \n--merge-backups\n\nIt will move all files from 'incremental_basedir' to 'basedir' handling \n'.partial' files correctly.\n\n\n--------- Questions to discuss ---------\n\nPlease note that it is just a proof-of-concept patch and it can be \noptimized in many ways.\nLet's concentrate on issues that affect the protocol or data format.\n\n1) Whether we collect block maps using simple \"read everything page by \npage\" approach\nor WAL scanning or any other 
page tracking algorithm, we must choose a \nmap format.\nI implemented the simplest one, while there are more ideas:\n\n- We can have a map not per file, but per relation or maybe per tablespace,\nwhich will make implementation more complex, but probably more optimal.\nThe only problem I see with existing implementation is that even if only \na few blocks changed,\nwe still must pad it to 512 bytes per tar format requirements.\n\n- We can save LSNs into the block map.\n\ntypedef struct BlockMapItem {\n BlockNumber blkno;\n XLogRecPtr lsn;\n} BlockMapItem;\n\nIn my implementation, invalid prev_backup_start_lsn means fallback to \nregular basebackup\nwithout any block maps. Alternatively, we can define another meaning of \nthis value and send a block map for all files.\nBackup utilities can use these maps to speed up backup merge or restore.\n\n2) We can implement BASE_BACKUP SEND_FILELIST replication command,\nwhich will return a list of filenames with file sizes and block maps if \nlsn was provided.\n\nTo avoid changing format, we can simply send tar headers for each file:\n- tarHeader(\"filename.blockmap\") followed by blockmap for relation files \nif prev_backup_start_lsn is provided;\n- tarHeader(\"filename\") without actual file content for non relation \nfiles or for all files in \"FULL\" backup\n\nThe caller can parse messages and use them for any purpose, for example, \nto perform a parallel backup.\n\nThoughts?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 10 Jul 2019 21:16:59 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
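Anastasia's paired 'filename.partial' / 'filename.blockmap' files can be illustrated roughly like this. The on-disk details here (little-endian 4-byte BlockNumbers, an 8192-byte block size, blocks stored in blockmap order) are assumptions for illustration and need not match the PoC patch exactly:

```python
# Sketch of the PoC's paired files: 'X.blockmap' is an array of 4-byte
# BlockNumbers and 'X.partial' holds exactly those blocks, in the same
# order.  For blocks {1, 3, 5}, X.partial holds 3 blocks and X.blockmap
# holds the array {1, 3, 5}, as described above.
import struct

BLCKSZ = 8192  # assumed block size

def write_incremental(changed, prefix):
    """changed: {blkno: BLCKSZ bytes}; writes prefix.blockmap/.partial."""
    blknos = sorted(changed)
    with open(prefix + ".blockmap", "wb") as f:
        f.write(struct.pack("<%dI" % len(blknos), *blknos))
    with open(prefix + ".partial", "wb") as f:
        for blkno in blknos:
            f.write(changed[blkno])

def merge_into_base(base_path, prefix):
    """Overlay the partial blocks onto a full copy of the relation file,
    roughly what the --merge-backups mode must do per data file."""
    with open(prefix + ".blockmap", "rb") as f:
        raw = f.read()
    blknos = struct.unpack("<%dI" % (len(raw) // 4), raw)
    with open(prefix + ".partial", "rb") as pf, open(base_path, "r+b") as bf:
        for blkno in blknos:
            bf.seek(blkno * BLCKSZ)
            bf.write(pf.read(BLCKSZ))
```

Note the padding concern raised above: even a one-block blockmap still costs a padded 512-byte tar member, which is why a per-relation or per-tablespace map might be more compact.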
{
"msg_contents": "Hi Anastasia,\n\nOn Wed, Jul 10, 2019 at 11:47 PM Anastasia Lubennikova <\na.lubennikova@postgrespro.ru> wrote:\n\n> 23.04.2019 14:08, Anastasia Lubennikova wrote:\n> > I'm volunteering to write a draft patch or, more likely, set of\n> > patches, which\n> > will allow us to discuss the subject in more detail.\n> > And to do that I wish we agree on the API and data format (at least\n> > broadly).\n> > Looking forward to hearing your thoughts.\n>\n> Though the previous discussion stalled,\n> I still hope that we could agree on basic points such as a map file\n> format and protocol extension,\n> which is necessary to start implementing the feature.\n>\n\nIt's great that you too came up with a PoC patch. I didn't look at your\nchanges in much detail, but we at EnterpriseDB are also working on this feature\nand have started implementing it.\n\nAttached is the series of patches I have so far... (which needs further\noptimization and adjustment, though)\n\nHere is the overall design (as proposed by Robert) we are trying to\nimplement:\n\n1. Extend the BASE_BACKUP command that can be used with replication\nconnections. Add a new [ LSN 'lsn' ] option.\n\n2. Extend pg_basebackup with a new --lsn=LSN option that causes it to send\nthe option added to the server in #1.\n\nHere are the implementation details for when we have a valid LSN:\n\nsendFile() in basebackup.c is the function that does most of the work for\nus. If the filename looks like a relation file, then we'll need to consider\nsending only a partial file. The way to do that is probably:\n\nA. Read the whole file into memory.\n\nB. Check the LSN of each block. Build a bitmap indicating which blocks have\nan LSN greater than or equal to the threshold LSN.\n\nC. If more than 90% of the bits in the bitmap are set, send the whole file\njust as if this were a full backup. This 90% is a constant now; we might\nmake it a GUC later.\n\nD. Otherwise, send a file with .partial added to the name. The .partial\nfile contains an indication of which blocks were changed at the beginning,\nfollowed by the data blocks. It also includes a checksum/CRC.\nCurrently, a .partial file format looks like:\n - start with a 4-byte magic number\n - then store a 4-byte CRC covering the header\n - then a 4-byte count of the number of blocks included in the file\n - then the block numbers, each as a 4-byte quantity\n - then the data blocks\n\n\nWe are also working on combining these incremental backups with the full\nbackup and for that, we are planning to add a new utility called\npg_combinebackup. Will post the details on that later once we are on the\nsame page about taking backups.\n\nThanks\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation",
"msg_date": "Thu, 11 Jul 2019 17:00:22 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
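The .partial header layout Jeevan lists (4-byte magic, 4-byte CRC covering the header, 4-byte block count, then the block numbers) could be packed and verified along these lines. The magic constant, the use of CRC-32, and exactly which bytes the CRC covers are assumptions here, since the message leaves those details unspecified:

```python
# Sketch of packing/validating the .partial header listed above.  The
# PARTIAL_MAGIC value is hypothetical, and the CRC is assumed to cover the
# block count plus the block-number array.
import struct
import zlib

PARTIAL_MAGIC = 0x50475041  # hypothetical magic number

def build_partial_header(blknos):
    # body = 4-byte count followed by each block number as 4 bytes
    body = struct.pack("<I%dI" % len(blknos), len(blknos), *blknos)
    return struct.pack("<II", PARTIAL_MAGIC, zlib.crc32(body)) + body

def parse_partial_header(buf):
    magic, crc = struct.unpack_from("<II", buf, 0)
    if magic != PARTIAL_MAGIC or zlib.crc32(bytes(buf[8:])) != crc:
        raise ValueError("corrupt .partial header")
    (nblocks,) = struct.unpack_from("<I", buf, 8)
    return list(struct.unpack_from("<%dI" % nblocks, buf, 12))
```

The CRC check is what lets a restore tool detect the ham-handed backup edits discussed earlier in the thread, at least for the header portion of a .partial file.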
{
"msg_contents": "On Thu, Jul 11, 2019 at 5:00 PM Jeevan Chalke <\njeevan.chalke@enterprisedb.com> wrote:\n\n> Hi Anastasia,\n>\n> On Wed, Jul 10, 2019 at 11:47 PM Anastasia Lubennikova <\n> a.lubennikova@postgrespro.ru> wrote:\n>\n>> 23.04.2019 14:08, Anastasia Lubennikova wrote:\n>> > I'm volunteering to write a draft patch or, more likely, set of\n>> > patches, which\n>> > will allow us to discuss the subject in more detail.\n>> > And to do that I wish we agree on the API and data format (at least\n>> > broadly).\n>> > Looking forward to hearing your thoughts.\n>>\n>> Though the previous discussion stalled,\n>> I still hope that we could agree on basic points such as a map file\n>> format and protocol extension,\n>> which is necessary to start implementing the feature.\n>>\n>\n> It's great that you too come up with the PoC patch. I didn't look at your\n> changes in much details but we at EnterpriseDB too working on this feature\n> and started implementing it.\n>\n> Attached series of patches I had so far... (which needed further\n> optimization and adjustments though)\n>\n> Here is the overall design (as proposed by Robert) we are trying to\n> implement:\n>\n> 1. Extend the BASE_BACKUP command that can be used with replication\n> connections. Add a new [ LSN 'lsn' ] option.\n>\n> 2. Extend pg_basebackup with a new --lsn=LSN option that causes it to send\n> the option added to the server in #1.\n>\n> Here are the implementation details when we have a valid LSN\n>\n> sendFile() in basebackup.c is the function which mostly does the thing for\n> us. If the filename looks like a relation file, then we'll need to consider\n> sending only a partial file. The way to do that is probably:\n>\n> A. Read the whole file into memory.\n>\n> B. Check the LSN of each block. Build a bitmap indicating which blocks\n> have an LSN greater than or equal to the threshold LSN.\n>\n> C. 
If more than 90% of the bits in the bitmap are set, send the whole file\n> just as if this were a full backup. This 90% is a constant now; we might\n> make it a GUC later.\n>\n> D. Otherwise, send a file with .partial added to the name. The .partial\n> file contains an indication of which blocks were changed at the beginning,\n> followed by the data blocks. It also includes a checksum/CRC.\n> Currently, a .partial file format looks like:\n> - start with a 4-byte magic number\n> - then store a 4-byte CRC covering the header\n> - then a 4-byte count of the number of blocks included in the file\n> - then the block numbers, each as a 4-byte quantity\n> - then the data blocks\n>\n>\n> We are also working on combining these incremental back-ups with the full\n> backup and for that, we are planning to add a new utility called\n> pg_combinebackup. Will post the details on that later once we have on the\n> same page for taking backup.\n>\n\nFor combining a full backup with one or more incremental backup, we are\nadding\na new utility called pg_combinebackup in src/bin.\n\nHere is the overall design as proposed by Robert.\n\npg_combinebackup starts from the LAST backup specified and work backward. It\nmust NOT start with the full backup and work forward. This is important both\nfor reasons of efficiency and of correctness. For example, if you start by\ncopying over the full backup and then later apply the incremental backups on\ntop of it then you'll copy data and later end up overwriting it or removing\nit. Any files that are leftover at the end that aren't in the final\nincremental backup even as .partial files need to be removed, or the result\nis\nwrong. We should aim for a system where every block in the output directory\nis\nwritten exactly once and nothing ever has to be created and then removed.\n\nTo make that work, we should start by examining the final incremental\nbackup.\nWe should proceed with one file at a time. For each file:\n\n1. 
If the complete file is present in the incremental backup, then just\ncopy it\nto the output directory - and move on to the next file.\n\n2. Otherwise, we have a .partial file. Work backward through the backup\nchain\nuntil we find a complete version of the file. That might happen when we get\n\\back to the full backup at the start of the chain, but it might also happen\nsooner - at which point we do not need to and should not look at earlier\nbackups for that file. During this phase, we should read only the HEADER of\neach .partial file, building a map of which blocks we're ultimately going to\nneed to read from each backup. We can also compute the offset within each\nfile\nwhere that block is stored at this stage, again using the header\ninformation.\n\n3. Now, we can write the output file - reading each block in turn from the\ncorrect backup and writing it to the write output file, using the map we\nconstructed in the previous step. We should probably keep all of the input\nfiles open over steps 2 and 3 and then close them at the end because\nrepeatedly closing and opening them is going to be expensive. When that's\ndone,\ngo on to the next file and start over at step 1.\n\n\nWe are already started working on this design.\n\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation",
"msg_date": "Wed, 17 Jul 2019 10:51:51 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
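The newest-to-oldest reconstruction plan for pg_combinebackup above can be sketched with in-memory stand-ins for the backups. The data model below ({"full": bool, "blocks": {blkno: data}}) is purely illustrative; the real tool would read .partial headers from disk:

```python
# Sketch of the plan above: walk the chain from the LAST backup backward,
# record the newest backup that contains each block, stop at the first
# complete copy of the file, then write each output block exactly once.

def plan_blocks(chain_newest_first, nblocks):
    """Map each block number to the index of the backup it comes from."""
    source = {}
    for idx, backup in enumerate(chain_newest_first):
        if backup["full"]:
            # Complete file found: all still-unassigned blocks come from
            # here, and older backups need not be opened for this file.
            for blkno in range(nblocks):
                source.setdefault(blkno, idx)
            break
        # .partial file: claim only blocks not already provided by a
        # newer backup (setdefault keeps the newest copy).
        for blkno in backup["blocks"]:
            source.setdefault(blkno, idx)
    return source

def combine(chain_newest_first, nblocks):
    """Emit each output block once, from the backup chosen in the plan."""
    source = plan_blocks(chain_newest_first, nblocks)
    return [chain_newest_first[source[b]]["blocks"][b] for b in range(nblocks)]
```

The two-phase split mirrors the design: `plan_blocks` corresponds to reading only the .partial headers, and `combine` to the single writing pass, so no block is ever written and later overwritten or removed.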
{
"msg_contents": "On Wed, Jul 17, 2019 at 10:22 AM Jeevan Chalke <\njeevan.chalke@enterprisedb.com> wrote:\n\n>\n>\n> On Thu, Jul 11, 2019 at 5:00 PM Jeevan Chalke <\n> jeevan.chalke@enterprisedb.com> wrote:\n>\n>> Hi Anastasia,\n>>\n>> On Wed, Jul 10, 2019 at 11:47 PM Anastasia Lubennikova <\n>> a.lubennikova@postgrespro.ru> wrote:\n>>\n>>> 23.04.2019 14:08, Anastasia Lubennikova wrote:\n>>> > I'm volunteering to write a draft patch or, more likely, set of\n>>> > patches, which\n>>> > will allow us to discuss the subject in more detail.\n>>> > And to do that I wish we agree on the API and data format (at least\n>>> > broadly).\n>>> > Looking forward to hearing your thoughts.\n>>>\n>>> Though the previous discussion stalled,\n>>> I still hope that we could agree on basic points such as a map file\n>>> format and protocol extension,\n>>> which is necessary to start implementing the feature.\n>>>\n>>\n>> It's great that you too come up with the PoC patch. I didn't look at your\n>> changes in much details but we at EnterpriseDB too working on this feature\n>> and started implementing it.\n>>\n>> Attached series of patches I had so far... (which needed further\n>> optimization and adjustments though)\n>>\n>> Here is the overall design (as proposed by Robert) we are trying to\n>> implement:\n>>\n>> 1. Extend the BASE_BACKUP command that can be used with replication\n>> connections. Add a new [ LSN 'lsn' ] option.\n>>\n>> 2. Extend pg_basebackup with a new --lsn=LSN option that causes it to\n>> send the option added to the server in #1.\n>>\n>> Here are the implementation details when we have a valid LSN\n>>\n>> sendFile() in basebackup.c is the function which mostly does the thing\n>> for us. If the filename looks like a relation file, then we'll need to\n>> consider sending only a partial file. The way to do that is probably:\n>>\n>> A. Read the whole file into memory.\n>>\n>> B. Check the LSN of each block. 
Build a bitmap indicating which blocks\n>> have an LSN greater than or equal to the threshold LSN.\n>>\n>> C. If more than 90% of the bits in the bitmap are set, send the whole\n>> file just as if this were a full backup. This 90% is a constant now; we\n>> might make it a GUC later.\n>>\n>> D. Otherwise, send a file with .partial added to the name. The .partial\n>> file contains an indication of which blocks were changed at the beginning,\n>> followed by the data blocks. It also includes a checksum/CRC.\n>> Currently, a .partial file format looks like:\n>> - start with a 4-byte magic number\n>> - then store a 4-byte CRC covering the header\n>> - then a 4-byte count of the number of blocks included in the file\n>> - then the block numbers, each as a 4-byte quantity\n>> - then the data blocks\n>>\n>>\n>> We are also working on combining these incremental back-ups with the full\n>> backup and for that, we are planning to add a new utility called\n>> pg_combinebackup. Will post the details on that later once we have on the\n>> same page for taking backup.\n>>\n>\n> For combining a full backup with one or more incremental backup, we are\n> adding\n> a new utility called pg_combinebackup in src/bin.\n>\n> Here is the overall design as proposed by Robert.\n>\n> pg_combinebackup starts from the LAST backup specified and work backward.\n> It\n> must NOT start with the full backup and work forward. This is important\n> both\n> for reasons of efficiency and of correctness. For example, if you start by\n> copying over the full backup and then later apply the incremental backups\n> on\n> top of it then you'll copy data and later end up overwriting it or removing\n> it. Any files that are leftover at the end that aren't in the final\n> incremental backup even as .partial files need to be removed, or the\n> result is\n> wrong. 
We should aim for a system where every block in the output\n> directory is\n> written exactly once and nothing ever has to be created and then removed.\n>\n> To make that work, we should start by examining the final incremental\n> backup.\n> We should proceed with one file at a time. For each file:\n>\n> 1. If the complete file is present in the incremental backup, then just\n> copy it\n> to the output directory - and move on to the next file.\n>\n> 2. Otherwise, we have a .partial file. Work backward through the backup\n> chain\n> until we find a complete version of the file. That might happen when we get\n> back to the full backup at the start of the chain, but it might also\n> happen\n> sooner - at which point we do not need to and should not look at earlier\n> backups for that file. During this phase, we should read only the HEADER of\n> each .partial file, building a map of which blocks we're ultimately going\n> to\n> need to read from each backup. We can also compute the offset within each\n> file\n> where that block is stored at this stage, again using the header\n> information.\n>\n> 3. Now, we can write the output file - reading each block in turn from the\n> correct backup and writing it to the write output file, using the map we\n> constructed in the previous step. We should probably keep all of the input\n> files open over steps 2 and 3 and then close them at the end because\n> repeatedly closing and opening them is going to be expensive. 
When that's\n> done,\n> go on to the next file and start over at step 1.\n>\n>\n> At what stage you will apply the WAL generated in between the START/STOP\nbackup.\n\n\n> We are already started working on this design.\n>\n> --\n> Jeevan Chalke\n> Technical Architect, Product Development\n> EnterpriseDB Corporation\n>\n>\n\n-- \nIbrar Ahmed",
"msg_date": "Wed, 17 Jul 2019 13:44:44 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
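[Annotation] The .partial layout described in the message above — a 4-byte magic number, a 4-byte CRC covering the header, a 4-byte block count, the block numbers as 4-byte quantities, then the data blocks — can be modeled in a few lines. This is an illustrative sketch only, not the patch code: the magic value, the little-endian packing, and the use of CRC-32 for the checksum are assumptions made for the example.

```python
import struct
import zlib

BLOCK_SIZE = 8192
PARTIAL_MAGIC = 0x50475042  # hypothetical magic value, for illustration only


def write_partial(block_map):
    """Serialize changed blocks {blkno: bytes} into the sketched .partial
    layout: magic, header CRC, block count, block numbers, data blocks."""
    blknos = sorted(block_map)
    # Header fields after the CRC: count + block numbers (CRC scope is an
    # assumption here; the thread only says the CRC "covers the header").
    header_tail = struct.pack("<I", len(blknos)) + b"".join(
        struct.pack("<I", b) for b in blknos)
    crc = zlib.crc32(header_tail) & 0xFFFFFFFF
    body = b"".join(block_map[b] for b in blknos)
    return struct.pack("<II", PARTIAL_MAGIC, crc) + header_tail + body


def read_partial(data):
    """Parse a .partial image back into {blkno: block bytes}."""
    magic, crc = struct.unpack_from("<II", data, 0)
    assert magic == PARTIAL_MAGIC, "not a .partial file"
    nblocks, = struct.unpack_from("<I", data, 8)
    header_tail = data[8:12 + 4 * nblocks]
    assert zlib.crc32(header_tail) & 0xFFFFFFFF == crc, "header CRC mismatch"
    blknos = struct.unpack_from("<%dI" % nblocks, data, 12)
    off = 12 + 4 * nblocks
    return {b: data[off + i * BLOCK_SIZE: off + (i + 1) * BLOCK_SIZE]
            for i, b in enumerate(blknos)}
```

Reading only the fixed header plus the block-number array is what lets the combine step build its block map without touching the data blocks.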
{
"msg_contents": "On Wed, Jul 17, 2019 at 2:15 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n>\n> At what stage you will apply the WAL generated in between the START/STOP\n> backup.\n>\n\nIn this design, we are not touching any WAL related code. The WAL files will\nget copied with each backup either full or incremental. And thus, the last\nincremental backup will have the final WAL files which will be copied as-is\nin the combined full-backup and they will get apply automatically if that\nthe data directory is used to start the server.\n\n\n> --\n> Ibrar Ahmed\n>\n\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation",
"msg_date": "Wed, 17 Jul 2019 19:13:36 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 6:43 PM Jeevan Chalke <\njeevan.chalke@enterprisedb.com> wrote:\n\n> On Wed, Jul 17, 2019 at 2:15 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>\n>>\n>> At what stage you will apply the WAL generated in between the START/STOP\n>> backup.\n>>\n>\n> In this design, we are not touching any WAL related code. The WAL files\n> will\n> get copied with each backup either full or incremental. And thus, the last\n> incremental backup will have the final WAL files which will be copied as-is\n> in the combined full-backup and they will get apply automatically if that\n> the data directory is used to start the server.\n>\n\nOk, so you keep all the WAL files since the first backup, right?\n\n>\n>\n>> --\n>> Ibrar Ahmed\n>>\n>\n> --\n> Jeevan Chalke\n> Technical Architect, Product Development\n> EnterpriseDB Corporation\n>\n>\n\n-- \nIbrar Ahmed",
"msg_date": "Wed, 17 Jul 2019 19:08:07 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 7:38 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n>\n>\n> On Wed, Jul 17, 2019 at 6:43 PM Jeevan Chalke <\n> jeevan.chalke@enterprisedb.com> wrote:\n>\n>> On Wed, Jul 17, 2019 at 2:15 PM Ibrar Ahmed <ibrar.ahmad@gmail.com>\n>> wrote:\n>>\n>>>\n>>> At what stage you will apply the WAL generated in between the START/STOP\n>>> backup.\n>>>\n>>\n>> In this design, we are not touching any WAL related code. The WAL files\n>> will\n>> get copied with each backup either full or incremental. And thus, the last\n>> incremental backup will have the final WAL files which will be copied\n>> as-is\n>> in the combined full-backup and they will get apply automatically if that\n>> the data directory is used to start the server.\n>>\n>\n> Ok, so you keep all the WAL files since the first backup, right?\n>\n\nThe WAL files will anyway be copied while taking a backup (full or\nincremental),\nbut only last incremental backup's WAL files are copied to the combined\nsynthetic full backup.\n\n\n>>\n>>> --\n>>> Ibrar Ahmed\n>>>\n>>\n>> --\n>> Jeevan Chalke\n>> Technical Architect, Product Development\n>> EnterpriseDB Corporation\n>>\n>>\n>\n> --\n> Ibrar Ahmed\n>\n\n\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation",
"msg_date": "Wed, 17 Jul 2019 20:11:53 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Hi Jeevan,\n\nThe idea is very nice.\nWhen Insert/update/delete and truncate/drop happens at various\ncombinations, How the incremental backup handles the copying of the\nblocks?\n\n\nOn Wed, Jul 17, 2019 at 8:12 PM Jeevan Chalke\n<jeevan.chalke@enterprisedb.com> wrote:\n>\n>\n>\n> On Wed, Jul 17, 2019 at 7:38 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>>\n>>\n>>\n>> On Wed, Jul 17, 2019 at 6:43 PM Jeevan Chalke <jeevan.chalke@enterprisedb.com> wrote:\n>>>\n>>> On Wed, Jul 17, 2019 at 2:15 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>>>>\n>>>>\n>>>> At what stage you will apply the WAL generated in between the START/STOP backup.\n>>>\n>>>\n>>> In this design, we are not touching any WAL related code. The WAL files will\n>>> get copied with each backup either full or incremental. And thus, the last\n>>> incremental backup will have the final WAL files which will be copied as-is\n>>> in the combined full-backup and they will get apply automatically if that\n>>> the data directory is used to start the server.\n>>\n>>\n>> Ok, so you keep all the WAL files since the first backup, right?\n>\n>\n> The WAL files will anyway be copied while taking a backup (full or incremental),\n> but only last incremental backup's WAL files are copied to the combined\n> synthetic full backup.\n>\n>>>\n>>>>\n>>>> --\n>>>> Ibrar Ahmed\n>>>\n>>>\n>>> --\n>>> Jeevan Chalke\n>>> Technical Architect, Product Development\n>>> EnterpriseDB Corporation\n>>>\n>>\n>>\n>> --\n>> Ibrar Ahmed\n>\n>\n>\n> --\n> Jeevan Chalke\n> Technical Architect, Product Development\n> EnterpriseDB Corporation\n>\n\n\n--\nRegards,\nvignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 20 Jul 2019 23:22:34 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Hi Vignesh,\n\nThis backup technology is extending the pg_basebackup itself, which means\nwe can\nstill take online backups. This is internally done using pg_start_backup and\npg_stop_backup. pg_start_backup performs a checkpoint, and this checkpoint\nis\nused in the recovery process while starting the cluster from a backup\nimage. What\nincremental backup will just modify (as compared to traditional\npg_basebackup)\nis - After doing the checkpoint, instead of copying the entire relation\nfiles,\nit takes an input LSN and scan all the blocks in all relation files, and\nstore\nthe blocks having LSN >= InputLSN. This means it considers all the changes\nthat are already written into relation files including insert/update/delete\netc\nup to the checkpoint performed by pg_start_backup internally, and as Jeevan\nChalke\nmentioned upthread the incremental backup will also contain copy of WAL\nfiles.\nOnce this incremental backup is combined with the parent backup by means of\nnew\ncombine process (that will be introduced as part of this feature itself)\nshould\nideally look like a full pg_basebackup. 
Note that any changes done by these\ninsert/delete/update operations while the incremental backup was being taken\nwill be still available via WAL files and as normal restore process, will be\nreplayed from the checkpoint onwards up to a consistent point.\n\nMy two cents!\n\nRegards,\nJeevan Ladhe\n\nOn Sat, Jul 20, 2019 at 11:22 PM vignesh C <vignesh21@gmail.com> wrote:\n\n> Hi Jeevan,\n>\n> The idea is very nice.\n> When Insert/update/delete and truncate/drop happens at various\n> combinations, How the incremental backup handles the copying of the\n> blocks?\n>\n>\n> On Wed, Jul 17, 2019 at 8:12 PM Jeevan Chalke\n> <jeevan.chalke@enterprisedb.com> wrote:\n> >\n> >\n> >\n> > On Wed, Jul 17, 2019 at 7:38 PM Ibrar Ahmed <ibrar.ahmad@gmail.com>\n> wrote:\n> >>\n> >>\n> >>\n> >> On Wed, Jul 17, 2019 at 6:43 PM Jeevan Chalke <\n> jeevan.chalke@enterprisedb.com> wrote:\n> >>>\n> >>> On Wed, Jul 17, 2019 at 2:15 PM Ibrar Ahmed <ibrar.ahmad@gmail.com>\n> wrote:\n> >>>>\n> >>>>\n> >>>> At what stage you will apply the WAL generated in between the\n> START/STOP backup.\n> >>>\n> >>>\n> >>> In this design, we are not touching any WAL related code. The WAL\n> files will\n> >>> get copied with each backup either full or incremental. 
And thus, the\n> last\n> >>> incremental backup will have the final WAL files which will be copied\n> as-is\n> >>> in the combined full-backup and they will get apply automatically if\n> that\n> >>> the data directory is used to start the server.\n> >>\n> >>\n> >> Ok, so you keep all the WAL files since the first backup, right?\n> >\n> >\n> > The WAL files will anyway be copied while taking a backup (full or\n> incremental),\n> > but only last incremental backup's WAL files are copied to the combined\n> > synthetic full backup.\n> >\n> >>>\n> >>>>\n> >>>> --\n> >>>> Ibrar Ahmed\n> >>>\n> >>>\n> >>> --\n> >>> Jeevan Chalke\n> >>> Technical Architect, Product Development\n> >>> EnterpriseDB Corporation\n> >>>\n> >>\n> >>\n> >> --\n> >> Ibrar Ahmed\n> >\n> >\n> >\n> > --\n> > Jeevan Chalke\n> > Technical Architect, Product Development\n> > EnterpriseDB Corporation\n> >\n>\n>\n> --\n> Regards,\n> vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n>",
"msg_date": "Tue, 23 Jul 2019 23:18:50 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
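[Annotation] The block-selection rule described in the message above — keep the blocks whose page LSN is greater than or equal to the input LSN, falling back to sending the whole file when most of it changed — can be sketched as follows. Assumptions made for this illustration: the page LSN is read from the first 8 bytes of each 8192-byte page as two little-endian 32-bit halves (high word first), and the 90% cutoff mirrors the constant mentioned earlier in the thread; the real scan of course works on relation files on disk, not in-memory bytes.

```python
import struct

BLOCK_SIZE = 8192
SEND_WHOLE_FILE_THRESHOLD = 0.90  # the "90% constant" from the thread


def page_lsn(page):
    # pd_lsn sits at the start of the page header; stored here as two
    # 32-bit halves, high word first (a simplification for the sketch).
    hi, lo = struct.unpack_from("<II", page, 0)
    return (hi << 32) | lo


def changed_blocks(data, threshold_lsn):
    """Return (send_whole_file, changed_block_numbers) for one relation
    file image, per the LSN >= threshold rule described above."""
    nblocks = len(data) // BLOCK_SIZE
    changed = {b for b in range(nblocks)
               if page_lsn(data[b * BLOCK_SIZE:(b + 1) * BLOCK_SIZE])
               >= threshold_lsn}
    # If more than 90% of blocks changed, sending the whole file beats
    # writing a .partial file.
    send_whole = nblocks > 0 and len(changed) / nblocks > SEND_WHOLE_FILE_THRESHOLD
    return send_whole, changed
```

Because every page carries its last-modification LSN in its header, no extra server-side bookkeeping is needed to find the changed blocks — only a sequential scan of the file.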
{
"msg_contents": "Thanks Jeevan.\n\n1) If relation file has changed due to truncate or vacuum.\n During incremental backup the new files will be copied.\n There are chances that both the old file and new file\n will be present. I'm not sure if cleaning up of the\n old file is handled.\n2) Just a small thought on building the bitmap,\n can the bitmap be built and maintained as\n and when the changes are happening in the system.\n If we are building the bitmap while doing the incremental backup,\n Scanning through each file might take more time.\n This can be a configurable parameter, the system can run\n without capturing this information by default, but if there are some\n of them who will be taking incremental backup frequently this\n configuration can be enabled which should track the modified blocks.\n\n What is your thought on this?\n-- \nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\nOn Tue, Jul 23, 2019 at 11:19 PM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n>\n> Hi Vignesh,\n>\n> This backup technology is extending the pg_basebackup itself, which means we can\n> still take online backups. This is internally done using pg_start_backup and\n> pg_stop_backup. pg_start_backup performs a checkpoint, and this checkpoint is\n> used in the recovery process while starting the cluster from a backup image. What\n> incremental backup will just modify (as compared to traditional pg_basebackup)\n> is - After doing the checkpoint, instead of copying the entire relation files,\n> it takes an input LSN and scan all the blocks in all relation files, and store\n> the blocks having LSN >= InputLSN. 
This means it considers all the changes\n> that are already written into relation files including insert/update/delete etc\n> up to the checkpoint performed by pg_start_backup internally, and as Jeevan Chalke\n> mentioned upthread the incremental backup will also contain copy of WAL files.\n> Once this incremental backup is combined with the parent backup by means of new\n> combine process (that will be introduced as part of this feature itself) should\n> ideally look like a full pg_basebackup. Note that any changes done by these\n> insert/delete/update operations while the incremental backup was being taken\n> will be still available via WAL files and as normal restore process, will be\n> replayed from the checkpoint onwards up to a consistent point.\n>\n> My two cents!\n>\n> Regards,\n> Jeevan Ladhe\n>\n> On Sat, Jul 20, 2019 at 11:22 PM vignesh C <vignesh21@gmail.com> wrote:\n>>\n>> Hi Jeevan,\n>>\n>> The idea is very nice.\n>> When Insert/update/delete and truncate/drop happens at various\n>> combinations, How the incremental backup handles the copying of the\n>> blocks?\n>>\n>>\n>> On Wed, Jul 17, 2019 at 8:12 PM Jeevan Chalke\n>> <jeevan.chalke@enterprisedb.com> wrote:\n>> >\n>> >\n>> >\n>> > On Wed, Jul 17, 2019 at 7:38 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>> >>\n>> >>\n>> >>\n>> >> On Wed, Jul 17, 2019 at 6:43 PM Jeevan Chalke <jeevan.chalke@enterprisedb.com> wrote:\n>> >>>\n>> >>> On Wed, Jul 17, 2019 at 2:15 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>> >>>>\n>> >>>>\n>> >>>> At what stage you will apply the WAL generated in between the START/STOP backup.\n>> >>>\n>> >>>\n>> >>> In this design, we are not touching any WAL related code. The WAL files will\n>> >>> get copied with each backup either full or incremental. 
And thus, the last\n>> >>> incremental backup will have the final WAL files which will be copied as-is\n>> >>> in the combined full-backup and they will get apply automatically if that\n>> >>> the data directory is used to start the server.\n>> >>\n>> >>\n>> >> Ok, so you keep all the WAL files since the first backup, right?\n>> >\n>> >\n>> > The WAL files will anyway be copied while taking a backup (full or incremental),\n>> > but only last incremental backup's WAL files are copied to the combined\n>> > synthetic full backup.\n>> >\n>> >>>\n>> >>>>\n>> >>>> --\n>> >>>> Ibrar Ahmed\n>> >>>\n>> >>>\n>> >>> --\n>> >>> Jeevan Chalke\n>> >>> Technical Architect, Product Development\n>> >>> EnterpriseDB Corporation\n>> >>>\n>> >>\n>> >>\n>> >> --\n>> >> Ibrar Ahmed\n>> >\n>> >\n>> >\n>> > --\n>> > Jeevan Chalke\n>> > Technical Architect, Product Development\n>> > EnterpriseDB Corporation\n>> >\n>>\n>>\n>> --\n>> Regards,\n>> vignesh\n>>\n>>\n>>\n\n\n",
"msg_date": "Wed, 24 Jul 2019 09:33:34 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Hi Vignesh,\n\nPlease find my comments inline below:\n\n1) If relation file has changed due to truncate or vacuum.\n> During incremental backup the new files will be copied.\n> There are chances that both the old file and new file\n> will be present. I'm not sure if cleaning up of the\n> old file is handled.\n>\n\nWhen an incremental backup is taken it either copies the file in its\nentirety if\na file is changed more than 90%, or writes .partial with changed blocks\nbitmap\nand actual data. For the files that are unchanged, it writes 0 bytes and\nstill\ncreates a .partial file for unchanged files too. This means there is a\n.partial\nfile for all the files that are to be looked up in full backup.\nWhile composing a synthetic backup from incremental backup the\npg_combinebackup\ntool will only look for those relation files in full(parent) backup which\nare\nhaving .partial files in the incremental backup. So, if vacuum/truncate\nhappened\nbetween full and incremental backup, then the incremental backup image will\nnot\nhave a 0-length .partial file for that relation, and so the synthetic backup\nthat is restored using pg_combinebackup will not have that file as well.\n\n\n> 2) Just a small thought on building the bitmap,\n> can the bitmap be built and maintained as\n> and when the changes are happening in the system.\n> If we are building the bitmap while doing the incremental backup,\n> Scanning through each file might take more time.\n> This can be a configurable parameter, the system can run\n> without capturing this information by default, but if there are some\n> of them who will be taking incremental backup frequently this\n> configuration can be enabled which should track the modified blocks.\n\n\nIIUC, this will need changes in the backend. Honestly, I think backup is a\nmaintenance task and hampering the backend for this does not look like a\ngood\nidea. 
But, having said that even if we have to provide this as a switch for\nsome\nof the users, it will need a different infrastructure than what we are\nbuilding\nhere for constructing bitmap, where we scan all the files one by one. Maybe\nfor\nthe initial version, we can go with the current proposal that Robert has\nsuggested,\nand add this switch at a later point as an enhancement.\n- My thoughts.\n\nRegards,\nJeevan Ladhe",
"msg_date": "Fri, 26 Jul 2019 11:21:43 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
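[Annotation] The pg_combinebackup lookup described in the message above — work backward from the newest backup, take each block from the most recent backup that has it, and stop at the first complete copy of the file — can be modeled with plain dicts. This is a toy sketch of the algorithm only; the real tool operates on files and .partial headers, and the names here are invented for illustration.

```python
def combine_file(chain):
    """Reconstruct one relation file from a newest-first backup chain.

    chain: list of (kind, blocks) tuples, newest backup first, where kind
    is 'full' for a complete copy of the file or 'partial' for just the
    changed blocks, and blocks maps block number -> block contents.
    """
    result = {}
    for kind, blocks in chain:
        for blkno, block in blocks.items():
            # setdefault: a newer backup's version of a block always wins
            result.setdefault(blkno, block)
        if kind == "full":
            # Found a complete copy; never look at older backups for
            # this file (they could only resurrect stale data).
            break
    return result
```

Walking newest-to-oldest is what makes every output block written exactly once, and stopping at the first complete copy is what keeps files dropped by vacuum/truncate from reappearing in the synthetic full backup.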
{
"msg_contents": "On Fri, Jul 26, 2019 at 11:21 AM Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>\nwrote:\n\n> Hi Vignesh,\n>\n> Please find my comments inline below:\n>\n> 1) If relation file has changed due to truncate or vacuum.\n>> During incremental backup the new files will be copied.\n>> There are chances that both the old file and new file\n>> will be present. I'm not sure if cleaning up of the\n>> old file is handled.\n>>\n>\n> When an incremental backup is taken it either copies the file in its\n> entirety if\n> a file is changed more than 90%, or writes .partial with changed blocks\n> bitmap\n> and actual data. For the files that are unchanged, it writes 0 bytes and\n> still\n> creates a .partial file for unchanged files too. This means there is a\n> .partitial\n> file for all the files that are to be looked up in full backup.\n> While composing a synthetic backup from incremental backup the\n> pg_combinebackup\n> tool will only look for those relation files in full(parent) backup which\n> are\n> having .partial files in the incremental backup. 
So, if vacuum/truncate\n> happened\n> between full and incremental backup, then the incremental backup image\n> will not\n> have a 0-length .partial file for that relation, and so the synthetic\n> backup\n> that is restored using pg_combinebackup will not have that file as well.\n>\n>\nThanks Jeevan for the update, I feel this logic is good.\nIt will handle the case of deleting the old relation files.\n\n>\n>\n>> 2) Just a small thought on building the bitmap,\n>> can the bitmap be built and maintained as\n>> and when the changes are happening in the system.\n>> If we are building the bitmap while doing the incremental backup,\n>> Scanning through each file might take more time.\n>> This can be a configurable parameter, the system can run\n>> without capturing this information by default, but if there are some\n>> of them who will be taking incremental backup frequently this\n>> configuration can be enabled which should track the modified blocks.\n>\n>\n> IIUC, this will need changes in the backend. Honestly, I think backup is a\n> maintenance task and hampering the backend for this does not look like a\n> good\n> idea. But, having said that even if we have to provide this as a switch\n> for some\n> of the users, it will need a different infrastructure than what we are\n> building\n> here for constructing bitmap, where we scan all the files one by one.\n> Maybe for\n> the initial version, we can go with the current proposal that Robert has\n> suggested,\n> and add this switch at a later point as an enhancement.\n>\n>\nThat sounds fair to me.\n\n\nRegards,\nvignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 26 Jul 2019 13:23:57 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Wed, Jul 10, 2019 at 2:17 PM Anastasia Lubennikova\n<a.lubennikova@postgrespro.ru> wrote:\n> In attachments, you can find a prototype of incremental pg_basebackup,\n> which consists of 2 features:\n>\n> 1) To perform incremental backup one should call pg_basebackup with a\n> new argument:\n>\n> pg_basebackup -D 'basedir' --prev-backup-start-lsn 'lsn'\n>\n> where lsn is a start_lsn of parent backup (can be found in\n> \"backup_label\" file)\n>\n> It calls BASE_BACKUP replication command with a new argument\n> PREV_BACKUP_START_LSN 'lsn'.\n>\n> For datafiles, only pages with LSN > prev_backup_start_lsn will be\n> included in the backup.\n> They are saved into 'filename.partial' file, 'filename.blockmap' file\n> contains an array of BlockNumbers.\n> For example, if we backuped blocks 1,3,5, filename.partial will contain\n> 3 blocks, and 'filename.blockmap' will contain array {1,3,5}.\n\nI think it's better to keep both the information about changed blocks\nand the contents of the changed blocks in a single file. The list of\nchanged blocks is probably quite short, and I don't really want to\ndouble the number of files in the backup if there's no real need. I\nsuspect it's just overall a bit simpler to keep everything together.\nI don't think this is a make-or-break thing, and welcome contrary\narguments, but that's my preference.\n\n> 2) To merge incremental backup into a full backup call\n>\n> pg_basebackup -D 'basedir' --incremental-pgdata 'incremental_basedir'\n> --merge-backups\n>\n> It will move all files from 'incremental_basedir' to 'basedir' handling\n> '.partial' files correctly.\n\nThis, to me, looks like it's much worse than the design that I\nproposed originally. It means that:\n\n1. You can't take an incremental backup without having the full backup\navailable at the time you want to take the incremental backup.\n\n2. 
You're always storing a full backup, which means that you need more\ndisk space, and potentially much more I/O while taking the backup.\nYou save on transfer bandwidth, but you add a lot of disk reads and\nwrites, costs which have to be paid even if the backup is never\nrestored.\n\n> 1) Whether we collect block maps using simple \"read everything page by\n> page\" approach\n> or WAL scanning or any other page tracking algorithm, we must choose a\n> map format.\n> I implemented the simplest one, while there are more ideas:\n\nI think we should start simple.\n\nI haven't had a chance to look at Jeevan's patch at all, or yours in\nany detail, as yet, so these are just some very preliminary comments.\nIt will be good, however, if we can agree on who is going to do what\npart of this as we try to drive this forward together. I'm sorry that\nI didn't communicate EDB's plans to work on this more clearly;\nduplicated effort serves nobody well.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 29 Jul 2019 16:28:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Hi Jeevan\n\n\nI reviewed first two patches -\n\n\n0001-Add-support-for-command-line-option-to-pass-LSN.patch and\n\n0002-Add-TAP-test-to-test-LSN-option.patch\n\n\nfrom the set of incremental backup patches, and the changes look good to me.\n\n\nI had some concerns around the way we are working around with the fact that\n\npg_lsn_in() accepts the lsn with 0 as a valid lsn and I think that itself is\n\ncontradictory to the definition of InvalidXLogRecPtr. I have started a\nseparate\n\nnew thread[1] for the same.\n\n\nAlso, I observe that now commit 21f428eb, has already moved the lsn decoding\n\nlogic to a separate function pg_lsn_in_internal(), so the function\n\ndecode_lsn_internal() from patch 0001 will go away and the dependent code\nneeds\n\nto be modified.\n\n\nI shall review the rest of the patches, and post the comments.\n\n\nRegards,\n\nJeevan Ladhe\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAOgcT0NOM9oR0Hag_3VpyW0uF3iCU=BDUFSPfk9JrWXRcWQHqw@mail.gmail.com\n\nOn Thu, Jul 11, 2019 at 5:00 PM Jeevan Chalke <\njeevan.chalke@enterprisedb.com> wrote:\n\n> Hi Anastasia,\n>\n> On Wed, Jul 10, 2019 at 11:47 PM Anastasia Lubennikova <\n> a.lubennikova@postgrespro.ru> wrote:\n>\n>> 23.04.2019 14:08, Anastasia Lubennikova wrote:\n>> > I'm volunteering to write a draft patch or, more likely, set of\n>> > patches, which\n>> > will allow us to discuss the subject in more detail.\n>> > And to do that I wish we agree on the API and data format (at least\n>> > broadly).\n>> > Looking forward to hearing your thoughts.\n>>\n>> Though the previous discussion stalled,\n>> I still hope that we could agree on basic points such as a map file\n>> format and protocol extension,\n>> which is necessary to start implementing the feature.\n>>\n>\n> It's great that you too come up with the PoC patch. 
I didn't look at your\n> changes in much details but we at EnterpriseDB too working on this feature\n> and started implementing it.\n>\n> Attached series of patches I had so far... (which needed further\n> optimization and adjustments though)\n>\n> Here is the overall design (as proposed by Robert) we are trying to\n> implement:\n>\n> 1. Extend the BASE_BACKUP command that can be used with replication\n> connections. Add a new [ LSN 'lsn' ] option.\n>\n> 2. Extend pg_basebackup with a new --lsn=LSN option that causes it to send\n> the option added to the server in #1.\n>\n> Here are the implementation details when we have a valid LSN\n>\n> sendFile() in basebackup.c is the function which mostly does the thing for\n> us. If the filename looks like a relation file, then we'll need to consider\n> sending only a partial file. The way to do that is probably:\n>\n> A. Read the whole file into memory.\n>\n> B. Check the LSN of each block. Build a bitmap indicating which blocks\n> have an LSN greater than or equal to the threshold LSN.\n>\n> C. If more than 90% of the bits in the bitmap are set, send the whole file\n> just as if this were a full backup. This 90% is a constant now; we might\n> make it a GUC later.\n>\n> D. Otherwise, send a file with .partial added to the name. The .partial\n> file contains an indication of which blocks were changed at the beginning,\n> followed by the data blocks. It also includes a checksum/CRC.\n> Currently, a .partial file format looks like:\n> - start with a 4-byte magic number\n> - then store a 4-byte CRC covering the header\n> - then a 4-byte count of the number of blocks included in the file\n> - then the block numbers, each as a 4-byte quantity\n> - then the data blocks\n>\n>\n> We are also working on combining these incremental back-ups with the full\n> backup and for that, we are planning to add a new utility called\n> pg_combinebackup. 
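For illustration, the .partial layout described in the steps above (magic number, CRC covering the header, block count, block numbers, then the data blocks) could be sketched as follows. Python is used only for brevity (the actual patch is C in basebackup.c), and the magic value, byte order, and zlib CRC-32 are illustrative assumptions, not the patch's exact choices:

```python
# Illustrative sketch (not the patch itself) of the described .partial
# layout: magic number, CRC covering the header, block count, block
# numbers, then the data blocks. The magic value, byte order, and zlib
# CRC-32 are stand-ins for whatever the C implementation actually uses.
import io
import struct
import zlib

BLCKSZ = 8192                 # PostgreSQL's default block size
MAGIC = 0x50415254            # hypothetical magic value
SEND_WHOLE_THRESHOLD = 0.90   # step C: send the whole file above 90%

def write_partial(out, changed_blocks):
    '''changed_blocks maps block number -> BLCKSZ bytes of page data.'''
    nums = sorted(changed_blocks)
    header = struct.pack('<I', len(nums))
    header += b''.join(struct.pack('<I', n) for n in nums)
    crc = zlib.crc32(header) & 0xFFFFFFFF
    out.write(struct.pack('<II', MAGIC, crc))
    out.write(header)
    for n in nums:
        out.write(changed_blocks[n])

def read_partial(inp):
    '''Parse a .partial stream back into {block number: page data}.
    (CRC verification is omitted in this sketch.)'''
    magic, crc = struct.unpack('<II', inp.read(8))
    assert magic == MAGIC, 'not a .partial file'
    (count,) = struct.unpack('<I', inp.read(4))
    nums = struct.unpack('<%dI' % count, inp.read(4 * count))
    return {n: inp.read(BLCKSZ) for n in nums}
```

With this layout, the 90% decision in step C reduces to comparing the number of set bits in the bitmap (here, len(changed_blocks)) against SEND_WHOLE_THRESHOLD times the file's total block count.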
Will post the details on that later once we have on the\n> same page for taking backup.\n>\n> Thanks\n> --\n> Jeevan Chalke\n> Technical Architect, Product Development\n> EnterpriseDB Corporation\n>\n>",
"msg_date": "Tue, 30 Jul 2019 07:09:06 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 1:58 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> I haven't had a chance to look at Jeevan's patch at all, or yours in\n> any detail, as yet, so these are just some very preliminary comments.\n> It will be good, however, if we can agree on who is going to do what\n> part of this as we try to drive this forward together. I'm sorry that\n> I didn't communicate EDB's plans to work on this more clearly;\n> duplicated effort serves nobody well.\n>\n\nI had a look over Anastasia's PoC patch to understand the approach she has\ntaken and here are my observations.\n\n1.\nThe patch first creates a .blockmap file for each relation file containing\nan array of all modified block numbers. This is done by reading all blocks\n(in a chunk of 4 (32kb in total) in a loop) from a file and checking the\npage\nLSN with given LSN. Later, to create .partial file, a relation file is\nopened\nagain and all blocks are read in a chunk of 4 in a loop. If found modified,\nit is copied into another memory and after scanning all 4 blocks, all copied\nblocks are sent to the .partial file.\n\nIn this approach, each file is opened and read twice which looks more\nexpensive\nto me. Whereas in my patch, I do that just once. However, I read the entire\nfile in memory to check which blocks are modified but in Anastasia's design\nmax TAR_SEND_SIZE (32kb) will be read at a time but, in a loop. I need to do\nthat as we wanted to know how heavily the file got modified so that we can\nsend the entire file if it was modified beyond the threshold (currently\n90%).\n\n2.\nAlso, while sending modified blocks, they are copied in another buffer,\ninstead\nthey can be just sent from the read files contents (in BLCKSZ block size).\nHere, the .blockmap created earlier was not used. 
In my implementation, we\nare\nsending just a .partial file with a header containing all required details\nlike\nthe number of blocks changes along with the block numbers including CRC\nfollowed by the blocks itself.\n\n3.\nI tried compiling Anastasia's patch, but getting an error. So could not see\nor\ntest how it goes. Also, like a normal backup option, the incremental backup\noption needs to verify the checksum if requested.\n\n4.\nWhile combining full and incremental backup, files from the incremental\nbackup\nare just copied into the full backup directory. While the design I posted\nearlier, we are trying another way round to avoid over-writing and other\nissues\nas I explained earlier.\n\nI am almost done writing the patch for pg_combinebackup and will post soon.\n\n\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\nThanks\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 30 Jul 2019 09:39:37 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 1:28 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Jul 10, 2019 at 2:17 PM Anastasia Lubennikova\n> <a.lubennikova@postgrespro.ru> wrote:\n> > In attachments, you can find a prototype of incremental pg_basebackup,\n> > which consists of 2 features:\n> >\n> > 1) To perform incremental backup one should call pg_basebackup with a\n> > new argument:\n> >\n> > pg_basebackup -D 'basedir' --prev-backup-start-lsn 'lsn'\n> >\n> > where lsn is a start_lsn of parent backup (can be found in\n> > \"backup_label\" file)\n> >\n> > It calls BASE_BACKUP replication command with a new argument\n> > PREV_BACKUP_START_LSN 'lsn'.\n> >\n> > For datafiles, only pages with LSN > prev_backup_start_lsn will be\n> > included in the backup.\n> > They are saved into 'filename.partial' file, 'filename.blockmap' file\n> > contains an array of BlockNumbers.\n> > For example, if we backuped blocks 1,3,5, filename.partial will contain\n> > 3 blocks, and 'filename.blockmap' will contain array {1,3,5}.\n>\n> I think it's better to keep both the information about changed blocks\n> and the contents of the changed blocks in a single file. The list of\n> changed blocks is probably quite short, and I don't really want to\n> double the number of files in the backup if there's no real need. 
I\n> suspect it's just overall a bit simpler to keep everything together.\n> I don't think this is a make-or-break thing, and welcome contrary\n> arguments, but that's my preference.\n>\n\nI had experience working on a similar product and I agree with Robert to\nkeep\nthe changed block info and the changed block in a single file make more\nsense.\n+1\n\n>\n> > 2) To merge incremental backup into a full backup call\n> >\n> > pg_basebackup -D 'basedir' --incremental-pgdata 'incremental_basedir'\n> > --merge-backups\n> >\n> > It will move all files from 'incremental_basedir' to 'basedir' handling\n> > '.partial' files correctly.\n>\n> This, to me, looks like it's much worse than the design that I\n> proposed originally. It means that:\n>\n> 1. You can't take an incremental backup without having the full backup\n> available at the time you want to take the incremental backup.\n>\n> 2. You're always storing a full backup, which means that you need more\n> disk space, and potentially much more I/O while taking the backup.\n> You save on transfer bandwidth, but you add a lot of disk reads and\n> writes, costs which have to be paid even if the backup is never\n> restored.\n>\n> > 1) Whether we collect block maps using simple \"read everything page by\n> > page\" approach\n> > or WAL scanning or any other page tracking algorithm, we must choose a\n> > map format.\n> > I implemented the simplest one, while there are more ideas:\n>\n> I think we should start simple.\n>\n> I haven't had a chance to look at Jeevan's patch at all, or yours in\n> any detail, as yet, so these are just some very preliminary comments.\n> It will be good, however, if we can agree on who is going to do what\n> part of this as we try to drive this forward together. 
I'm sorry that\n> I didn't communicate EDB's plans to work on this more clearly;\n> duplicated effort serves nobody well.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n\n-- \nIbrar Ahmed",
"msg_date": "Tue, 30 Jul 2019 18:27:07 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 1:58 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jul 10, 2019 at 2:17 PM Anastasia Lubennikova\n> <a.lubennikova@postgrespro.ru> wrote:\n> > In attachments, you can find a prototype of incremental pg_basebackup,\n> > which consists of 2 features:\n> >\n> > 1) To perform incremental backup one should call pg_basebackup with a\n> > new argument:\n> >\n> > pg_basebackup -D 'basedir' --prev-backup-start-lsn 'lsn'\n> >\n> > where lsn is a start_lsn of parent backup (can be found in\n> > \"backup_label\" file)\n> >\n> > It calls BASE_BACKUP replication command with a new argument\n> > PREV_BACKUP_START_LSN 'lsn'.\n> >\n> > For datafiles, only pages with LSN > prev_backup_start_lsn will be\n> > included in the backup.\n>>\nOne thought, if the file is not modified no need to check the lsn.\n>>\n> > They are saved into 'filename.partial' file, 'filename.blockmap' file\n> > contains an array of BlockNumbers.\n> > For example, if we backuped blocks 1,3,5, filename.partial will contain\n> > 3 blocks, and 'filename.blockmap' will contain array {1,3,5}.\n>\n> I think it's better to keep both the information about changed blocks\n> and the contents of the changed blocks in a single file. The list of\n> changed blocks is probably quite short, and I don't really want to\n> double the number of files in the backup if there's no real need. 
I\n> suspect it's just overall a bit simpler to keep everything together.\n> I don't think this is a make-or-break thing, and welcome contrary\n> arguments, but that's my preference.\n>\nI feel Robert's suggestion is good.\nWe can probably keep one meta file for each backup with some basic information\nof all the files being backed up, this metadata file will be useful in the\nbelow case:\nTable dropped before incremental backup\nTable truncated and Insert/Update/Delete operations before incremental backup\n\nI feel if we have the metadata, we can add some optimization to decide the\nabove scenario with the metadata information to identify the file deletion\nand avoiding write and delete for pg_combinebackup which Jeevan has told in\nhis previous mail.\n\nProbably it can also help us to decide which work the worker needs to do\nif we are planning to backup in parallel.\n\nRegards,\nvignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Jul 2019 23:29:30 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 1:59 PM vignesh C <vignesh21@gmail.com> wrote:\n> I feel Robert's suggestion is good.\n> We can probably keep one meta file for each backup with some basic information\n> of all the files being backed up, this metadata file will be useful in the\n> below case:\n> Table dropped before incremental backup\n> Table truncated and Insert/Update/Delete operations before incremental backup\n\nThere's really no need for this with the design I proposed. The files\nthat should exist when you restore in incremental backup are exactly\nthe set of files that exist in the final incremental backup, except\nthat any .partial files need to be replaced with a correct\nreconstruction of the underlying file. You don't need to know what\ngot dropped or truncated; you only need to know what's supposed to be\nthere at the end.\n\nYou may be thinking, as I once did, that restoring an incremental\nbackup would consist of restoring the full backup first and then\nlayering the incrementals over it, but if you read what I proposed, it\nactually works the other way around: you restore the files that are\npresent in the incremental, and as needed, pull pieces of them from\nearlier incremental and/or full backups. I think this is a *much*\nbetter design than doing it the other way; it avoids any risk of\ngetting the wrong answer due to truncations or drops, and it also is\nfaster, because you only read older backups to the extent that you\nactually need their contents.\n\nI think it's a good idea to try to keep all the information about a\nsingle file being backup in one place. It's just less confusing. If,\nfor example, you have a metadata file that tells you which files are\ndropped - that is, which files you DON'T have - then what happen if\none of those files is present in the data directory after all? Well,\nthen you have inconsistent information and are confused, and maybe\nyour code won't even notice the inconsistency. 
Similarly, if the\nmetadata file is separate from the block data, then what happens if\none file is missing, or isn't from the same backup as the other file?\nThat shouldn't happen, of course, but if it does, you'll get confused.\nThere's no perfect solution to these kinds of problems: if we suppose\nthat the backup can be corrupted by having missing or extra files, why\nnot also corruption within a single file? Still, on balance I tend to\nthink that keeping related stuff together minimizes the surface area\nfor bugs. I realize that's arguable, though.\n\nOne consideration that goes the other way: if you have a manifest file\nthat says what files are supposed to be present in the backup, then\nyou can detect a disappearing file, which is impossible with the\ndesign I've proposed (and with the current full backup machinery).\nThat might be worth fixing, but it's a separate feature that has\nlittle to do with incremental backup.\n\n> Probably it can also help us to decide which work the worker needs to do\n> if we are planning to backup in parallel.\n\nI don't think we need a manifest file for parallel backup. One\nprocess or thread can scan the directory tree, make a list of which\nfiles are present, and then hand individual files off to other\nprocesses or threads. In short, the directory listing serves as the\nmanifest.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 31 Jul 2019 16:03:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 9:39 AM Jeevan Chalke <\njeevan.chalke@enterprisedb.com> wrote:\n\n>\n>\n>\n> I am almost done writing the patch for pg_combinebackup and will post soon.\n>\n\nAttached patch which implements the pg_combinebackup utility used to combine\nfull basebackup with one or more incremental backups.\n\nI have tested it manually and it works for all best cases.\n\nLet me know if you have any inputs/suggestions/review comments?\n\nThanks\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 1 Aug 2019 17:06:25 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 5:06 PM Jeevan Chalke\n<jeevan.chalke@enterprisedb.com> wrote:\n>\n> On Tue, Jul 30, 2019 at 9:39 AM Jeevan Chalke <jeevan.chalke@enterprisedb.com> wrote:\n>>\n>> I am almost done writing the patch for pg_combinebackup and will post soon.\n>\n>\n> Attached patch which implements the pg_combinebackup utility used to combine\n> full basebackup with one or more incremental backups.\n>\n> I have tested it manually and it works for all best cases.\n>\n> Let me know if you have any inputs/suggestions/review comments?\n>\nSome comments:\n1) There will be some link files created for tablespace, we might\nrequire some special handling for it\n\n2)\n+ while (numretries <= maxretries)\n+ {\n+ rc = system(copycmd);\n+ if (rc == 0)\n+ return;\n+\n+ pg_log_info(\"could not copy, retrying after %d seconds\",\n+ sleeptime);\n+ pg_usleep(numretries++ * sleeptime * 1000000L);\n+ }\nRetry functionality is hanlded only for copying of full files, should\nwe handle retry for copying of partial files\n\n3)\n+ maxretries = atoi(optarg);\n+ if (maxretries < 0)\n+ {\n+ pg_log_error(\"invalid value for maxretries\");\n+ fprintf(stderr, _(\"%s: -r maxretries must be >= 0\\n\"), progname);\n+ exit(1);\n+ }\n+ break;\n+ case 's':\n+ sleeptime = atoi(optarg);\n+ if (sleeptime <= 0 || sleeptime > 60)\n+ {\n+ pg_log_error(\"invalid value for sleeptime\");\n+ fprintf(stderr, _(\"%s: -s sleeptime must be between 1 and 60\\n\"), progname);\n+ exit(1);\n+ }\n+ break;\nwe can have some range for maxretries similar to sleeptime\n\n4)\n+ fp = fopen(filename, \"r\");\n+ if (fp == NULL)\n+ {\n+ pg_log_error(\"could not read file \\\"%s\\\": %m\", filename);\n+ exit(1);\n+ }\n+\n+ labelfile = malloc(statbuf.st_size + 1);\n+ if (fread(labelfile, 1, statbuf.st_size, fp) != statbuf.st_size)\n+ {\n+ pg_log_error(\"corrupted file \\\"%s\\\": %m\", filename);\n+ free(labelfile);\n+ exit(1);\n+ }\nShould we check for malloc failure\n\n5) Should we add display of progress as 
backup may take some time,\nthis can be added as enhancement. We can get other's opinion on this.\n\n6)\n+ if (nIncrDir == MAX_INCR_BK_COUNT)\n+ {\n+ pg_log_error(\"too many incremental backups to combine\");\n+ fprintf(stderr, _(\"Try \\\"%s --help\\\" for more information.\\n\"), progname);\n+ exit(1);\n+ }\n+\n+ IncrDirs[nIncrDir] = optarg;\n+ nIncrDir++;\n+ break;\n\nIf the backup count increases providing the input may be difficult,\nShall user provide all the incremental backups from a parent folder\nand can we handle the ordering of incremental backup internally\n\n7)\n+ if (isPartialFile)\n+ {\n+ if (verbose)\n+ pg_log_info(\"combining partial file \\\"%s.partial\\\"\", fn);\n+\n+ combine_partial_files(fn, IncrDirs, nIncrDir, subdirpath, outfn);\n+ }\n+ else\n+ copy_whole_file(infn, outfn);\n\nAdd verbose for copying whole file\n\n8) We can also check if approximate space is available in disk before\nstarting combine backup, this can be added as enhancement. We can get\nother's opinion on this.\n\n9)\n+ printf(_(\" -i, --incr-backup=DIRECTORY incremental backup directory\n(maximum %d)\\n\"), MAX_INCR_BK_COUNT);\n+ printf(_(\" -o, --output-dir=DIRECTORY combine backup into directory\\n\"));\n+ printf(_(\"\\nGeneral options:\\n\"));\n+ printf(_(\" -n, --no-clean do not clean up after errors\\n\"));\n\nCombine backup into directory can be combine backup directory\n\n10)\n+/* Max number of incremental backups to be combined. */\n+#define MAX_INCR_BK_COUNT 10\n+\n+/* magic number in incremental backup's .partial file */\n\nMAX_INCR_BK_COUNT can be increased little, some applications use 1\nfull backup at the beginning of the month and use 30 incremental\nbackups rest of the days in the month\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 2 Aug 2019 18:42:49 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Fri, Aug 2, 2019 at 9:13 AM vignesh C <vignesh21@gmail.com> wrote:\n> + rc = system(copycmd);\n\nI don't think this patch should be calling system() in the first place.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 5 Aug 2019 09:42:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Fri, Aug 2, 2019 at 9:13 AM vignesh C <vignesh21@gmail.com> wrote:\n> > + rc = system(copycmd);\n> \n> I don't think this patch should be calling system() in the first place.\n\n+1.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 5 Aug 2019 10:22:32 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "I have not looked at the patch in detail, but just some nits from my side.\n\nOn Fri, Aug 2, 2019 at 6:13 PM vignesh C <vignesh21@gmail.com> wrote:\n\n> On Thu, Aug 1, 2019 at 5:06 PM Jeevan Chalke\n> <jeevan.chalke@enterprisedb.com> wrote:\n> >\n> > On Tue, Jul 30, 2019 at 9:39 AM Jeevan Chalke <\n> jeevan.chalke@enterprisedb.com> wrote:\n> >>\n> >> I am almost done writing the patch for pg_combinebackup and will post\n> soon.\n> >\n> >\n> > Attached patch which implements the pg_combinebackup utility used to\n> combine\n> > full basebackup with one or more incremental backups.\n> >\n> > I have tested it manually and it works for all best cases.\n> >\n> > Let me know if you have any inputs/suggestions/review comments?\n> >\n> Some comments:\n> 1) There will be some link files created for tablespace, we might\n> require some special handling for it\n>\n> 2)\n> + while (numretries <= maxretries)\n> + {\n> + rc = system(copycmd);\n> + if (rc == 0)\n> + return;\n>\n> Use API to copy the file instead of \"system\", better to use the secure\ncopy.\n\n\n> + pg_log_info(\"could not copy, retrying after %d seconds\",\n> + sleeptime);\n> + pg_usleep(numretries++ * sleeptime * 1000000L);\n> + }\n> Retry functionality is hanlded only for copying of full files, should\n> we handle retry for copying of partial files\n>\n> The log and the sleep time does not match, you are multiplying sleeptime\nwith numretries++ and logging only \"sleeptime\"\n\nWhy we are retiring here, capture proper copy error and act accordingly.\nBlindly retiring does not make sense.\n\n3)\n> + maxretries = atoi(optarg);\n> + if (maxretries < 0)\n> + {\n> + pg_log_error(\"invalid value for maxretries\");\n> + fprintf(stderr, _(\"%s: -r maxretries must be >= 0\\n\"), progname);\n> + exit(1);\n> + }\n> + break;\n> + case 's':\n> + sleeptime = atoi(optarg);\n> + if (sleeptime <= 0 || sleeptime > 60)\n> + {\n> + pg_log_error(\"invalid value for sleeptime\");\n> + fprintf(stderr, _(\"%s: -s 
sleeptime must be between 1 and 60\\n\"),\n> progname);\n> + exit(1);\n> + }\n> + break;\n> we can have some range for maxretries similar to sleeptime\n>\n> 4)\n> + fp = fopen(filename, \"r\");\n> + if (fp == NULL)\n> + {\n> + pg_log_error(\"could not read file \\\"%s\\\": %m\", filename);\n> + exit(1);\n> + }\n> +\n> + labelfile = malloc(statbuf.st_size + 1);\n> + if (fread(labelfile, 1, statbuf.st_size, fp) != statbuf.st_size)\n> + {\n> + pg_log_error(\"corrupted file \\\"%s\\\": %m\", filename);\n> + free(labelfile);\n> + exit(1);\n> + }\n> Should we check for malloc failure\n>\n> Use pg_malloc instead of malloc\n\n\n> 5) Should we add display of progress as backup may take some time,\n> this can be added as enhancement. We can get other's opinion on this.\n>\n> Yes, we should, but this is not the right time to do that.\n\n\n> 6)\n> + if (nIncrDir == MAX_INCR_BK_COUNT)\n> + {\n> + pg_log_error(\"too many incremental backups to combine\");\n> + fprintf(stderr, _(\"Try \\\"%s --help\\\" for more information.\\n\"),\n> progname);\n> + exit(1);\n> + }\n> +\n> + IncrDirs[nIncrDir] = optarg;\n> + nIncrDir++;\n> + break;\n>\n> If the backup count increases providing the input may be difficult,\n> Shall user provide all the incremental backups from a parent folder\n> and can we handle the ordering of incremental backup internally\n>\n> Why we have that limit at first place?\n\n\n> 7)\n> + if (isPartialFile)\n> + {\n> + if (verbose)\n> + pg_log_info(\"combining partial file \\\"%s.partial\\\"\", fn);\n> +\n> + combine_partial_files(fn, IncrDirs, nIncrDir, subdirpath, outfn);\n> + }\n> + else\n> + copy_whole_file(infn, outfn);\n>\n> Add verbose for copying whole file\n>\n> 8) We can also check if approximate space is available in disk before\n> starting combine backup, this can be added as enhancement. 
We can get\n> other's opinion on this.\n>\n> 9)\n> + printf(_(\" -i, --incr-backup=DIRECTORY incremental backup directory\n> (maximum %d)\\n\"), MAX_INCR_BK_COUNT);\n> + printf(_(\" -o, --output-dir=DIRECTORY combine backup into\n> directory\\n\"));\n> + printf(_(\"\\nGeneral options:\\n\"));\n> + printf(_(\" -n, --no-clean do not clean up after\n> errors\\n\"));\n>\n> Combine backup into directory can be combine backup directory\n>\n> 10)\n> +/* Max number of incremental backups to be combined. */\n> +#define MAX_INCR_BK_COUNT 10\n> +\n> +/* magic number in incremental backup's .partial file */\n>\n> MAX_INCR_BK_COUNT can be increased little, some applications use 1\n> full backup at the beginning of the month and use 30 incremental\n> backups rest of the days in the month\n>\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n>\n\n-- \nIbrar Ahmed",
"msg_date": "Tue, 6 Aug 2019 23:31:50 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Tue, Aug 6, 2019 at 11:31 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n>\n> I have not looked at the patch in detail, but just some nits from my side.\n>\n> On Fri, Aug 2, 2019 at 6:13 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n>> On Thu, Aug 1, 2019 at 5:06 PM Jeevan Chalke\n>> <jeevan.chalke@enterprisedb.com> wrote:\n>> >\n>> > On Tue, Jul 30, 2019 at 9:39 AM Jeevan Chalke <\n>> jeevan.chalke@enterprisedb.com> wrote:\n>> >>\n>> >> I am almost done writing the patch for pg_combinebackup and will post\n>> soon.\n>> >\n>> >\n>> > Attached patch which implements the pg_combinebackup utility used to\n>> combine\n>> > full basebackup with one or more incremental backups.\n>> >\n>> > I have tested it manually and it works for all best cases.\n>> >\n>> > Let me know if you have any inputs/suggestions/review comments?\n>> >\n>> Some comments:\n>> 1) There will be some link files created for tablespace, we might\n>> require some special handling for it\n>>\n>> 2)\n>> + while (numretries <= maxretries)\n>> + {\n>> + rc = system(copycmd);\n>> + if (rc == 0)\n>> + return;\n>>\n>> Use API to copy the file instead of \"system\", better to use the secure\n> copy.\n>\nAh, it is a local copy, simple copy API is enough.\n\n>\n>\n>> + pg_log_info(\"could not copy, retrying after %d seconds\",\n>> + sleeptime);\n>> + pg_usleep(numretries++ * sleeptime * 1000000L);\n>> + }\n>> Retry functionality is hanlded only for copying of full files, should\n>> we handle retry for copying of partial files\n>>\n>> The log and the sleep time does not match, you are multiplying sleeptime\n> with numretries++ and logging only \"sleeptime\"\n>\n> Why we are retiring here, capture proper copy error and act accordingly.\n> Blindly retiring does not make sense.\n>\n> 3)\n>> + maxretries = atoi(optarg);\n>> + if (maxretries < 0)\n>> + {\n>> + pg_log_error(\"invalid value for maxretries\");\n>> + fprintf(stderr, _(\"%s: -r maxretries must be >= 0\\n\"), progname);\n>> + 
exit(1);\n>> + }\n>> + break;\n>> + case 's':\n>> + sleeptime = atoi(optarg);\n>> + if (sleeptime <= 0 || sleeptime > 60)\n>> + {\n>> + pg_log_error(\"invalid value for sleeptime\");\n>> + fprintf(stderr, _(\"%s: -s sleeptime must be between 1 and 60\\n\"),\n>> progname);\n>> + exit(1);\n>> + }\n>> + break;\n>> we can have some range for maxretries similar to sleeptime\n>>\n>> 4)\n>> + fp = fopen(filename, \"r\");\n>> + if (fp == NULL)\n>> + {\n>> + pg_log_error(\"could not read file \\\"%s\\\": %m\", filename);\n>> + exit(1);\n>> + }\n>> +\n>> + labelfile = malloc(statbuf.st_size + 1);\n>> + if (fread(labelfile, 1, statbuf.st_size, fp) != statbuf.st_size)\n>> + {\n>> + pg_log_error(\"corrupted file \\\"%s\\\": %m\", filename);\n>> + free(labelfile);\n>> + exit(1);\n>> + }\n>> Should we check for malloc failure\n>>\n>> Use pg_malloc instead of malloc\n>\n>\n>> 5) Should we add display of progress as backup may take some time,\n>> this can be added as enhancement. We can get other's opinion on this.\n>>\n>> Yes, we should, but this is not the right time to do that.\n>\n>\n>> 6)\n>> + if (nIncrDir == MAX_INCR_BK_COUNT)\n>> + {\n>> + pg_log_error(\"too many incremental backups to combine\");\n>> + fprintf(stderr, _(\"Try \\\"%s --help\\\" for more information.\\n\"),\n>> progname);\n>> + exit(1);\n>> + }\n>> +\n>> + IncrDirs[nIncrDir] = optarg;\n>> + nIncrDir++;\n>> + break;\n>>\n>> If the backup count increases providing the input may be difficult,\n>> Shall user provide all the incremental backups from a parent folder\n>> and can we handle the ordering of incremental backup internally\n>>\n>> Why we have that limit at first place?\n>\n>\n>> 7)\n>> + if (isPartialFile)\n>> + {\n>> + if (verbose)\n>> + pg_log_info(\"combining partial file \\\"%s.partial\\\"\", fn);\n>> +\n>> + combine_partial_files(fn, IncrDirs, nIncrDir, subdirpath, outfn);\n>> + }\n>> + else\n>> + copy_whole_file(infn, outfn);\n>>\n>> Add verbose for copying whole file\n>>\n>> 8) We can also check 
if approximate space is available in disk before\n>> starting combine backup, this can be added as enhancement. We can get\n>> other's opinion on this.\n>>\n>> 9)\n>> + printf(_(\" -i, --incr-backup=DIRECTORY incremental backup directory\n>> (maximum %d)\\n\"), MAX_INCR_BK_COUNT);\n>> + printf(_(\" -o, --output-dir=DIRECTORY combine backup into\n>> directory\\n\"));\n>> + printf(_(\"\\nGeneral options:\\n\"));\n>> + printf(_(\" -n, --no-clean do not clean up after\n>> errors\\n\"));\n>>\n>> Combine backup into directory can be combine backup directory\n>>\n>> 10)\n>> +/* Max number of incremental backups to be combined. */\n>> +#define MAX_INCR_BK_COUNT 10\n>> +\n>> +/* magic number in incremental backup's .partial file */\n>>\n>> MAX_INCR_BK_COUNT can be increased little, some applications use 1\n>> full backup at the beginning of the month and use 30 incremental\n>> backups rest of the days in the month\n>>\n>> Regards,\n>> Vignesh\n>> EnterpriseDB: http://www.enterprisedb.com\n>>\n>>\n>>\n>\n> --\n> Ibrar Ahmed\n>\n\n\n-- \nIbrar Ahmed",
"msg_date": "Tue, 6 Aug 2019 23:37:21 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Mon, Aug 5, 2019 at 7:13 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Aug 2, 2019 at 9:13 AM vignesh C <vignesh21@gmail.com> wrote:\n> > + rc = system(copycmd);\n>\n> I don't think this patch should be calling system() in the first place.\n>\n\nSo, do you mean we should just do fread() and fwrite() for the whole file?\n\nI thought it is better if it was done by the OS itself instead of reading\n1GB\ninto the memory and writing the same to the file.\n\n\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\n\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company\n\nOn Mon, Aug 5, 2019 at 7:13 PM Robert Haas <robertmhaas@gmail.com> wrote:On Fri, Aug 2, 2019 at 9:13 AM vignesh C <vignesh21@gmail.com> wrote:> + rc = system(copycmd);\nI don't think this patch should be calling system() in the first place.So, do you mean we should just do fread() and fwrite() for the whole file? I thought it is better if it was done by the OS itself instead of reading 1GBinto the memory and writing the same to the file. \n-- Robert HaasEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company\n-- Jeevan ChalkeTechnical Architect, Product DevelopmentEnterpriseDB CorporationThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 7 Aug 2019 15:16:43 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Wed, Aug 7, 2019 at 2:47 PM Jeevan Chalke <jeevan.chalke@enterprisedb.com>\nwrote:\n\n>\n>\n> On Mon, Aug 5, 2019 at 7:13 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>> On Fri, Aug 2, 2019 at 9:13 AM vignesh C <vignesh21@gmail.com> wrote:\n>> > + rc = system(copycmd);\n>>\n>> I don't think this patch should be calling system() in the first place.\n>>\n>\n> So, do you mean we should just do fread() and fwrite() for the whole file?\n>\n> I thought it is better if it was done by the OS itself instead of reading\n> 1GB\n> into the memory and writing the same to the file.\n>\n> It is not necessary to read the whole 1GB into Ram.\n\n\n>\n>> --\n>> Robert Haas\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>>\n>\n>\n> --\n> Jeevan Chalke\n> Technical Architect, Product Development\n> EnterpriseDB Corporation\n> The Enterprise PostgreSQL Company\n>\n>\n\n-- \nIbrar Ahmed\n\nOn Wed, Aug 7, 2019 at 2:47 PM Jeevan Chalke <jeevan.chalke@enterprisedb.com> wrote:On Mon, Aug 5, 2019 at 7:13 PM Robert Haas <robertmhaas@gmail.com> wrote:On Fri, Aug 2, 2019 at 9:13 AM vignesh C <vignesh21@gmail.com> wrote:> + rc = system(copycmd);\nI don't think this patch should be calling system() in the first place.So, do you mean we should just do fread() and fwrite() for the whole file? I thought it is better if it was done by the OS itself instead of reading 1GBinto the memory and writing the same to the file. It is not necessary to read the whole 1GB into Ram. \n-- Robert HaasEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company\n-- Jeevan ChalkeTechnical Architect, Product DevelopmentEnterpriseDB CorporationThe Enterprise PostgreSQL Company\n-- Ibrar Ahmed",
"msg_date": "Wed, 7 Aug 2019 14:52:12 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Hi Jeevan,\n\nI have reviewed the backup part at code level and still looking into the\nrestore(combine) and functional part of it. But, here are my comments so\nfar:\n\nThe patches need rebase.\n----------------------------------------------------\n+ if (!XLogRecPtrIsInvalid(previous_lsn))\n+ appendStringInfo(labelfile, \"PREVIOUS WAL LOCATION: %X/%X\\n\",\n+ (uint32) (previous_lsn >> 32), (uint32)\nprevious_lsn);\n\nMay be we should rename to something like:\n\"INCREMENTAL BACKUP START WAL LOCATION\" or simply \"INCREMENTAL BACKUP START\nLOCATION\"\nto make it more intuitive?\n\n----------------------------------------------------\n\n+typedef struct\n\n+{\n\n+ uint32 magic;\n\n+ pg_crc32c checksum;\n\n+ uint32 nblocks;\n\n+ uint32 blocknumbers[FLEXIBLE_ARRAY_MEMBER];\n\n+} partial_file_header;\n\n\nFile header structure is defined in both the files basebackup.c and\npg_combinebackup.c. I think it is better to move this to\nreplication/basebackup.h.\n\n----------------------------------------------------\n\n+ bool isrelfile = false;\n\nI think we can avoid having flag isrelfile in sendFile().\nSomething like this:\n\nif (startincrptr && OidIsValid(dboid) && looks_like_rel_name(filename))\n{\n//include the code here that is under \"if (isrelfile)\" block.\n}\nelse\n{\n_tarWriteHeader(tarfilename, NULL, statbuf, false);\nwhile ((cnt = fread(buf, 1, Min(sizeof(buf), statbuf->st_size - len), fp))\n> 0)\n{\n...\n}\n}\n\n----------------------------------------------------\n\nAlso, having isrelfile as part of following condition:\n{code}\n+ while (!isrelfile &&\n+ (cnt = fread(buf, 1, Min(sizeof(buf), statbuf->st_size - len),\nfp)) > 0)\n{code}\n\nis confusing, because even the relation files in full backup are going to be\nbacked up by this loop only, but still, the condition reads '(!isrelfile\n&&...)'.\n\n----------------------------------------------------\n\nverify_page_checksum()\n{\nwhile(1)\n{\n....\nbreak;\n}\n}\n\nIMHO, while labels are not 
advisable in general, it may be better to use a\nlabel\nhere rather than a while(1) loop, so that we can move to the label in case\nwe\nwant to retry once. I think here it opens doors for future bugs if someone\nhappens to add code here, ending up adding some condition and then the\nbreak becomes conditional. That will leave us in an infinite loop.\n\n----------------------------------------------------\n\n+/* magic number in incremental backup's .partial file */\n+#define INCREMENTAL_BACKUP_MAGIC 0x494E4352\n\nSimilar to structure partial_file_header, I think above macro can also be\nmoved\nto basebackup.h instead of defining it twice.\n\n----------------------------------------------------\n\nIn sendFile():\n\n+ buf = (char *) malloc(RELSEG_SIZE * BLCKSZ);\n\nI think this is a huge memory request (1GB) and may fail on busy/loaded\nserver at\ntimes. We should check for failures of malloc, maybe throw some error on\ngetting ENOMEM as errno.\n\n----------------------------------------------------\n\n+ /* Perform incremenatl backup stuff here. */\n+ if ((cnt = fread(buf, 1, Min(RELSEG_SIZE * BLCKSZ,\nstatbuf->st_size), fp)) > 0)\n+ {\n\nHere, should not we expect statbuf->st_size < (RELSEG_SIZE * BLCKSZ), and it\nshould be safe to read just statbuf_st_size always I guess? But, I am ok\nwith\nhaving this extra guard here.\n\n----------------------------------------------------\n\nIn sendFile(), I am sorry if I am missing something, but I am not able to\nunderstand why 'cnt' and 'i' should have different values when they are\nbeing\npassed to verify_page_checksum(). I think passing only one of them should be\nsufficient.\n\n----------------------------------------------------\n\n+ XLogRecPtr pglsn;\n+\n+ for (i = 0; i < cnt / BLCKSZ; i++)\n+ {\n\nMaybe we should just have a variable no_of_blocks to store a number of\nblocks,\nrather than calculating this say RELSEG_SIZE(i.e. 
131072) times in the worst\ncase.\n\n----------------------------------------------------\n+ len += cnt;\n+ throttle(cnt);\n+ }\n\nSorry if I am missing something, but, should not it be just:\n\nlen = cnt;\n\n----------------------------------------------------\n\nAs I said earlier in my previous email, we now do not need\n+decode_lsn_internal()\nas it is already taken care by the introduction of function\npg_lsn_in_internal().\n\nRegards,\nJeevan Ladhe",
"msg_date": "Fri, 9 Aug 2019 06:07:14 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
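The `.partial` file header discussed in the review above can be sketched as follows. This is an illustration only, not the patch's code: the struct mirrors the one quoted in the review (with `pg_crc32c` written as a plain `uint32_t`, which matches its width), and `magic_as_ascii` is a hypothetical helper added just to show that the proposed magic value 0x494E4352 is the ASCII string "INCR".

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the header quoted in the review; pg_crc32c is a 32-bit CRC
 * in PostgreSQL, shown here as uint32_t.  blocknumbers stands in for the
 * patch's FLEXIBLE_ARRAY_MEMBER. */
typedef struct
{
    uint32_t    magic;          /* INCREMENTAL_BACKUP_MAGIC */
    uint32_t    checksum;       /* pg_crc32c in the real patch */
    uint32_t    nblocks;        /* number of block numbers that follow */
    uint32_t    blocknumbers[]; /* FLEXIBLE_ARRAY_MEMBER */
} partial_file_header;

/* Magic number from the patch: the four bytes spell "INCR" in ASCII. */
#define INCREMENTAL_BACKUP_MAGIC 0x494E4352

/* Hypothetical helper: decode the magic's four bytes as characters. */
static void
magic_as_ascii(char out[5])
{
    out[0] = (char) ((INCREMENTAL_BACKUP_MAGIC >> 24) & 0xFF); /* 0x49 = 'I' */
    out[1] = (char) ((INCREMENTAL_BACKUP_MAGIC >> 16) & 0xFF); /* 0x4E = 'N' */
    out[2] = (char) ((INCREMENTAL_BACKUP_MAGIC >> 8) & 0xFF);  /* 0x43 = 'C' */
    out[3] = (char) (INCREMENTAL_BACKUP_MAGIC & 0xFF);         /* 0x52 = 'R' */
    out[4] = '\0';
}
```

Defining both the struct and the macro in a shared header (as the review suggests) keeps the backup and combine sides from drifting apart.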
{
"msg_contents": "On Wed, Aug 7, 2019 at 5:46 AM Jeevan Chalke\n<jeevan.chalke@enterprisedb.com> wrote:\n> So, do you mean we should just do fread() and fwrite() for the whole file?\n>\n> I thought it is better if it was done by the OS itself instead of reading 1GB\n> into the memory and writing the same to the file.\n\nWell, 'cp' is just a C program. If they can write code to copy a\nfile, so can we, and then we're not dependent on 'cp' being installed,\nworking properly, being in the user's path or at the hard-coded\npathname we expect, etc. There's an existing copy_file() function in\nsrc/backend/storage/file/copydir.c which I'd probably look into\nadapting for frontend use. I'm not sure whether it would be important\nto adapt the data-flushing code that's present in that routine or\nwhether we could get by with just the loop to read() and write() data.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 9 Aug 2019 09:06:26 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
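Robert's suggested read/write loop can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the actual copy_file() from copydir.c: `copy_file_simple` is a hypothetical name, it uses the stdio calls the thread later settles on, uses a small fixed buffer rather than the 1GB allocation flagged earlier, and reduces error handling to a return code.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define COPY_BUF_SIZE 8192      /* modest buffer; avoids the 1GB malloc */

/* Minimal frontend-style file copy: fread/fwrite in a loop.
 * Returns 0 on success, -1 on any I/O error. */
static int
copy_file_simple(const char *src, const char *dst)
{
    FILE       *in;
    FILE       *out;
    char        buf[COPY_BUF_SIZE];
    size_t      nread;
    int         rc = 0;

    if ((in = fopen(src, "rb")) == NULL)
        return -1;
    if ((out = fopen(dst, "wb")) == NULL)
    {
        fclose(in);
        return -1;
    }
    while ((nread = fread(buf, 1, sizeof(buf), in)) > 0)
    {
        if (fwrite(buf, 1, nread, out) != nread)
        {
            rc = -1;
            break;
        }
    }
    if (ferror(in))
        rc = -1;
    fclose(in);
    if (fclose(out) != 0)        /* catches deferred write errors */
        rc = -1;
    return rc;
}
```

A real frontend tool would add proper error reporting and possibly flush the result to disk, both of which are omitted here; the point is only that no external 'cp' (or system() call) is needed.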
{
"msg_contents": "On Thu, Aug 8, 2019 at 8:37 PM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n> + if (!XLogRecPtrIsInvalid(previous_lsn))\n> + appendStringInfo(labelfile, \"PREVIOUS WAL LOCATION: %X/%X\\n\",\n> + (uint32) (previous_lsn >> 32), (uint32) previous_lsn);\n>\n> May be we should rename to something like:\n> \"INCREMENTAL BACKUP START WAL LOCATION\" or simply \"INCREMENTAL BACKUP START LOCATION\"\n> to make it more intuitive?\n\nSo, I think that you are right that PREVIOUS WAL LOCATION might not be\nentirely clear, but at least in my view, INCREMENTAL BACKUP START WAL\nLOCATION is definitely not clear. This backup is an incremental\nbackup, and it has a start WAL location, so you'd end up with START\nWAL LOCATION and INCREMENTAL BACKUP START WAL LOCATION and those sound\nlike they ought to both be the same thing, but they're not. Perhaps\nsomething like REFERENCE WAL LOCATION or REFERENCE WAL LOCATION FOR\nINCREMENTAL BACKUP would be clearer.\n\n> File header structure is defined in both the files basebackup.c and\n> pg_combinebackup.c. I think it is better to move this to replication/basebackup.h.\n\nOr some other header, but yeah, definitely don't duplicate the struct\ndefinition (or any other kind of definition).\n\n> IMHO, while labels are not advisable in general, it may be better to use a label\n> here rather than a while(1) loop, so that we can move to the label in case we\n> want to retry once. I think here it opens doors for future bugs if someone\n> happens to add code here, ending up adding some condition and then the\n> break becomes conditional. That will leave us in an infinite loop.\n\nI'm not sure which style is better here, but I don't really buy this argument.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 9 Aug 2019 09:10:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
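The `%X/%X` idiom quoted throughout this subthread splits a 64-bit LSN into its two 32-bit halves for display. A small sketch of that formatting — the `XLogRecPtr` typedef and the `format_lsn` helper are stand-ins for illustration, not PostgreSQL code:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for PostgreSQL's XLogRecPtr (a 64-bit WAL position). */
typedef uint64_t XLogRecPtr;

/* Format an LSN the way the quoted label line does: high and low
 * 32-bit halves in uppercase hex, separated by a slash. */
static void
format_lsn(XLogRecPtr lsn, char *out, size_t outlen)
{
    snprintf(out, outlen, "%X/%X",
             (uint32_t) (lsn >> 32), (uint32_t) lsn);
}
```

Whatever the label line is finally named, the value itself is just such a split 64-bit position, which is why the naming debate is only about the prefix text.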
{
"msg_contents": "Hi Robert,\n\nOn Fri, Aug 9, 2019 at 6:40 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Aug 8, 2019 at 8:37 PM Jeevan Ladhe\n> <jeevan.ladhe@enterprisedb.com> wrote:\n> > + if (!XLogRecPtrIsInvalid(previous_lsn))\n> > + appendStringInfo(labelfile, \"PREVIOUS WAL LOCATION: %X/%X\\n\",\n> > + (uint32) (previous_lsn >> 32), (uint32)\n> previous_lsn);\n> >\n> > May be we should rename to something like:\n> > \"INCREMENTAL BACKUP START WAL LOCATION\" or simply \"INCREMENTAL BACKUP\n> START LOCATION\"\n> > to make it more intuitive?\n>\n> So, I think that you are right that PREVIOUS WAL LOCATION might not be\n> entirely clear, but at least in my view, INCREMENTAL BACKUP START WAL\n> LOCATION is definitely not clear. This backup is an incremental\n> backup, and it has a start WAL location, so you'd end up with START\n> WAL LOCATION and INCREMENTAL BACKUP START WAL LOCATION and those sound\n> like they ought to both be the same thing, but they're not. Perhaps\n> something like REFERENCE WAL LOCATION or REFERENCE WAL LOCATION FOR\n> INCREMENTAL BACKUP would be clearer.\n>\n\nAgree, how about INCREMENTAL BACKUP REFERENCE WAL LOCATION ?\n\n\n> > File header structure is defined in both the files basebackup.c and\n> > pg_combinebackup.c. I think it is better to move this to\n> replication/basebackup.h.\n>\n> Or some other header, but yeah, definitely don't duplicate the struct\n> definition (or any other kind of definition).\n>\n\nThanks.\n\n\n> > IMHO, while labels are not advisable in general, it may be better to use\n> a label\n> > here rather than a while(1) loop, so that we can move to the label in\n> case we\n> > want to retry once. I think here it opens doors for future bugs if\n> someone\n> > happens to add code here, ending up adding some condition and then the\n> > break becomes conditional. That will leave us in an infinite loop.\n>\n> I'm not sure which style is better here, but I don't really buy this\n> argument.\n\n\nNo issues. 
I am ok either way.\n\nRegards,\nJeevan Ladhe",
"msg_date": "Fri, 9 Aug 2019 23:55:47 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Fri, Aug 9, 2019 at 6:36 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Aug 7, 2019 at 5:46 AM Jeevan Chalke\n> <jeevan.chalke@enterprisedb.com> wrote:\n> > So, do you mean we should just do fread() and fwrite() for the whole\n> file?\n> >\n> > I thought it is better if it was done by the OS itself instead of\n> reading 1GB\n> > into the memory and writing the same to the file.\n>\n> Well, 'cp' is just a C program. If they can write code to copy a\n> file, so can we, and then we're not dependent on 'cp' being installed,\n> working properly, being in the user's path or at the hard-coded\n> pathname we expect, etc. There's an existing copy_file() function in\n> src/backend/storage/file/copydir.c which I'd probably look into\n> adapting for frontend use. I'm not sure whether it would be important\n> to adapt the data-flushing code that's present in that routine or\n> whether we could get by with just the loop to read() and write() data.\n>\n\nAgree that we can certainly use open(), read(), write(), and close() here,\nbut\ngiven that pg_basebackup.c and basebackup.c are using file operations, I\nthink\nusing fopen(), fread(), fwrite(), and fclose() will be better here, at least\nfor consistency.\n\nLet me know if we still want to go with native OS calls.\n\n\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\n\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 12 Aug 2019 17:27:29 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Fri, Aug 9, 2019 at 11:56 PM Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>\nwrote:\n\n> Hi Robert,\n>\n> On Fri, Aug 9, 2019 at 6:40 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>> On Thu, Aug 8, 2019 at 8:37 PM Jeevan Ladhe\n>> <jeevan.ladhe@enterprisedb.com> wrote:\n>> > + if (!XLogRecPtrIsInvalid(previous_lsn))\n>> > + appendStringInfo(labelfile, \"PREVIOUS WAL LOCATION:\n>> %X/%X\\n\",\n>> > + (uint32) (previous_lsn >> 32), (uint32)\n>> previous_lsn);\n>> >\n>> > May be we should rename to something like:\n>> > \"INCREMENTAL BACKUP START WAL LOCATION\" or simply \"INCREMENTAL BACKUP\n>> START LOCATION\"\n>> > to make it more intuitive?\n>>\n>> So, I think that you are right that PREVIOUS WAL LOCATION might not be\n>> entirely clear, but at least in my view, INCREMENTAL BACKUP START WAL\n>> LOCATION is definitely not clear. This backup is an incremental\n>> backup, and it has a start WAL location, so you'd end up with START\n>> WAL LOCATION and INCREMENTAL BACKUP START WAL LOCATION and those sound\n>> like they ought to both be the same thing, but they're not. 
Perhaps\n>> something like REFERENCE WAL LOCATION or REFERENCE WAL LOCATION FOR\n>> INCREMENTAL BACKUP would be clearer.\n>>\n>\n> Agree, how about INCREMENTAL BACKUP REFERENCE WAL LOCATION ?\n>\n\n+1 for INCREMENTAL BACKUP REFERENCE WA.\n\n\n>\n\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 12 Aug 2019 17:29:50 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Mon, Aug 12, 2019 at 5:29 PM Jeevan Chalke <\njeevan.chalke@enterprisedb.com> wrote:\n\n>\n>\n> On Fri, Aug 9, 2019 at 11:56 PM Jeevan Ladhe <\n> jeevan.ladhe@enterprisedb.com> wrote:\n>\n>> Hi Robert,\n>>\n>> On Fri, Aug 9, 2019 at 6:40 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>>\n>>> On Thu, Aug 8, 2019 at 8:37 PM Jeevan Ladhe\n>>> <jeevan.ladhe@enterprisedb.com> wrote:\n>>> > + if (!XLogRecPtrIsInvalid(previous_lsn))\n>>> > + appendStringInfo(labelfile, \"PREVIOUS WAL LOCATION:\n>>> %X/%X\\n\",\n>>> > + (uint32) (previous_lsn >> 32), (uint32)\n>>> previous_lsn);\n>>> >\n>>> > May be we should rename to something like:\n>>> > \"INCREMENTAL BACKUP START WAL LOCATION\" or simply \"INCREMENTAL BACKUP\n>>> START LOCATION\"\n>>> > to make it more intuitive?\n>>>\n>>> So, I think that you are right that PREVIOUS WAL LOCATION might not be\n>>> entirely clear, but at least in my view, INCREMENTAL BACKUP START WAL\n>>> LOCATION is definitely not clear. This backup is an incremental\n>>> backup, and it has a start WAL location, so you'd end up with START\n>>> WAL LOCATION and INCREMENTAL BACKUP START WAL LOCATION and those sound\n>>> like they ought to both be the same thing, but they're not. 
Perhaps\n>>> something like REFERENCE WAL LOCATION or REFERENCE WAL LOCATION FOR\n>>> INCREMENTAL BACKUP would be clearer.\n>>>\n>>\n>> Agree, how about INCREMENTAL BACKUP REFERENCE WAL LOCATION ?\n>>\n>\n> +1 for INCREMENTAL BACKUP REFERENCE WA.\n>\n\nSorry for the typo:\n+1 for the INCREMENTAL BACKUP REFERENCE WAL LOCATION.\n\n\n>\n>>\n>\n> --\n> Jeevan Chalke\n> Technical Architect, Product Development\n> EnterpriseDB Corporation\n> The Enterprise PostgreSQL Company\n>\n>\n\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 12 Aug 2019 17:33:21 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Mon, Aug 12, 2019 at 7:57 AM Jeevan Chalke\n<jeevan.chalke@enterprisedb.com> wrote:\n> Agree that we can certainly use open(), read(), write(), and close() here, but\n> given that pg_basebackup.c and basbackup.c are using file operations, I think\n> using fopen(), fread(), fwrite(), and fclose() will be better here, at-least\n> for consistetncy.\n\nOh, that's fine. Whatever's more consistent with the pre-existing\ncode. Just, let's not use system().\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 12 Aug 2019 08:11:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Mon, Aug 12, 2019 at 4:57 PM Jeevan Chalke <\njeevan.chalke@enterprisedb.com> wrote:\n\n>\n>\n> On Fri, Aug 9, 2019 at 6:36 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>> On Wed, Aug 7, 2019 at 5:46 AM Jeevan Chalke\n>> <jeevan.chalke@enterprisedb.com> wrote:\n>> > So, do you mean we should just do fread() and fwrite() for the whole\n>> file?\n>> >\n>> > I thought it is better if it was done by the OS itself instead of\n>> reading 1GB\n>> > into the memory and writing the same to the file.\n>>\n>> Well, 'cp' is just a C program. If they can write code to copy a\n>> file, so can we, and then we're not dependent on 'cp' being installed,\n>> working properly, being in the user's path or at the hard-coded\n>> pathname we expect, etc. There's an existing copy_file() function in\n>> src/backend/storage/file/copydir.c which I'd probably look into\n>> adapting for frontend use. I'm not sure whether it would be important\n>> to adapt the data-flushing code that's present in that routine or\n>> whether we could get by with just the loop to read() and write() data.\n>>\n>\n> Agree that we can certainly use open(), read(), write(), and close() here,\n> but\n> given that pg_basebackup.c and basebackup.c are using file operations, I\n> think\n> using fopen(), fread(), fwrite(), and fclose() will be better here,\n> at least\n> for consistency.\n>\n\n+1 for using fopen(), fread(), fwrite(), and fclose()\n\n\n> Let me know if we still want to go with native OS calls.\n>\n>\n\n-1 for OS call\n\n\n>\n>> --\n>> Robert Haas\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>>\n>\n>\n> --\n> Jeevan Chalke\n> Technical Architect, Product Development\n> EnterpriseDB Corporation\n> The Enterprise PostgreSQL Company\n>\n>\n\n-- \nIbrar Ahmed",
"msg_date": "Wed, 14 Aug 2019 01:47:26 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Fri, Aug 2, 2019 at 6:43 PM vignesh C <vignesh21@gmail.com> wrote:\n\n> Some comments:\n> 1) There will be some link files created for tablespace, we might\n> require some special handling for it\n>\n\nYep. I have that in my ToDo.\nWill start working on that soon.\n\n\n> 2)\n> Retry functionality is hanlded only for copying of full files, should\n> we handle retry for copying of partial files\n> 3)\n> we can have some range for maxretries similar to sleeptime\n>\n\nI took help from pg_standby code related to maxentries and sleeptime.\n\nHowever, as we don't want to use system() call now, I have\nremoved all this kludge and just used fread/fwrite as discussed.\n\n\n> 4)\n> Should we check for malloc failure\n>\n\nUsed pg_malloc() instead. Same is also suggested by Ibrar.\n\n\n>\n> 5) Should we add display of progress as backup may take some time,\n> this can be added as enhancement. We can get other's opinion on this.\n>\n\nCan be done afterward once we have the functionality in place.\n\n\n>\n> 6)\n> If the backup count increases providing the input may be difficult,\n> Shall user provide all the incremental backups from a parent folder\n> and can we handle the ordering of incremental backup internally\n>\n\nI am not sure of this yet. We need to provide the tablespace mapping too.\nBut thanks for putting a point here. Will keep that in mind when I revisit\nthis.\n\n\n>\n> 7)\n> Add verbose for copying whole file\n>\nDone\n\n\n>\n> 8) We can also check if approximate space is available in disk before\n> starting combine backup, this can be added as enhancement. We can get\n> other's opinion on this.\n>\n\nHmm... will leave it for now. 
User will get an error anyway.\n\n\n>\n> 9)\n> Combine backup into directory can be combine backup directory\n>\nDone\n\n\n>\n> 10)\n> MAX_INCR_BK_COUNT can be increased little, some applications use 1\n> full backup at the beginning of the month and use 30 incremental\n> backups rest of the days in the month\n>\n\nYeah, agree. But using any number here is debatable.\nLet's see others opinion too.\n\n\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\n\nAttached new sets of patches with refactoring done separately.\nIncremental backup patch became small now and hopefully more\nreadable than the first version.\n\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 16 Aug 2019 15:53:35 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
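The per-block scan discussed in the review (computing `no_of_blocks = cnt / BLCKSZ` once instead of in the loop condition, then comparing each page's LSN against the incremental backup reference WAL location) can be sketched as follows. This is an illustration only: real pages keep their LSN in the page header's pd_lsn field, while this sketch simply stores a uint64 LSN in each page's first 8 bytes, and `pick_modified_blocks` is a hypothetical helper, not the patch's sendFile() code.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BLCKSZ 8192             /* PostgreSQL's default block size */

/* For this sketch, a page's LSN is a uint64 in its first 8 bytes. */
static uint64_t
page_lsn(const char *page)
{
    uint64_t    lsn;

    memcpy(&lsn, page, sizeof(lsn));
    return lsn;
}

/* Hypothetical helper mirroring the loop under review: hoist the block
 * count out of the loop and collect blocks newer than ref_lsn, i.e. the
 * blocks an incremental backup would ship. */
static uint32_t
pick_modified_blocks(const char *buf, size_t cnt,
                     uint64_t ref_lsn, uint32_t *blocknos)
{
    uint32_t    no_of_blocks = (uint32_t) (cnt / BLCKSZ);
    uint32_t    picked = 0;
    uint32_t    i;

    for (i = 0; i < no_of_blocks; i++)
    {
        if (page_lsn(buf + (size_t) i * BLCKSZ) > ref_lsn)
            blocknos[picked++] = i;
    }
    return picked;
}
```

The collected block numbers are what would populate the `blocknumbers[]` array of the `.partial` file header, with unselected blocks omitted from the backup entirely.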
{
"msg_contents": "On Fri, Aug 9, 2019 at 6:07 AM Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>\nwrote:\n\n> Hi Jeevan,\n>\n> I have reviewed the backup part at code level and still looking into the\n> restore(combine) and functional part of it. But, here are my comments so\n> far:\n>\n\nThank you Jeevan Ladhe for reviewing the changes.\n\n\n>\n> The patches need rebase.\n>\n\nDone.\n\n\n> May be we should rename to something like:\n> \"INCREMENTAL BACKUP START WAL LOCATION\" or simply \"INCREMENTAL BACKUP\n> START LOCATION\"\n> to make it more intuitive?\n>\n\nAs discussed, used \"INCREMENTAL BACKUP REFERENCE WAL LOCATION\".\n\nFile header structure is defined in both the files basebackup.c and\n> pg_combinebackup.c. I think it is better to move this to\n> replication/basebackup.h.\n>\n\nYep. Was that in my cleanup list. Done now.\n\n\n> I think we can avoid having flag isrelfile in sendFile().\n> Something like this:\n>\nAlso, having isrelfile as part of following condition:\n> is confusing, because even the relation files in full backup are going to\n> be\n> backed up by this loop only, but still, the condition reads '(!isrelfile\n> &&...)'.\n>\n\nIn the refactored patch I have moved full backup code in a separate\nfunction.\nAnd now all incremental backup code is also done in its own function.\nHopefully, the code is now more readable.\n\n\n>\n> IMHO, while labels are not advisable in general, it may be better to use a\n> label\n> here rather than a while(1) loop, so that we can move to the label in case\n> we\n> want to retry once. I think here it opens doors for future bugs if someone\n> happens to add code here, ending up adding some condition and then the\n> break becomes conditional. That will leave us in an infinite loop.\n>\n\nI kept it as is as I don't see any correctness issue here.\n\nSimilar to structure partial_file_header, I think above macro can also be\n> moved\n> to basebackup.h instead of defining it twice.\n>\n\nYes. 
Done.\n\n\n> I think this is a huge memory request (1GB) and may fail on busy/loaded\n> server at\n> times. We should check for failures of malloc, maybe throw some error on\n> getting ENOMEM as errno.\n>\n\nAgree. Done.\n\n\n> Here, should not we expect statbuf->st_size < (RELSEG_SIZE * BLCKSZ), and\n> it\n> should be safe to read just statbuf_st_size always I guess? But, I am ok\n> with\n> having this extra guard here.\n>\n\nYes, we can do this way. Added an Assert() before that and used just\nstatbuf->st_size.\n\nIn sendFile(), I am sorry if I am missing something, but I am not able to\n> understand why 'cnt' and 'i' should have different values when they are\n> being\n> passed to verify_page_checksum(). I think passing only one of them should\n> be\n> sufficient.\n>\n\nAs discussed offline, you meant to say i and blkno.\nThese two are different. i represent the current block offset from the read\nbuffer whereas blkno is the offset from the start of the page. For\nincremental\nbackup, they are same as we read the whole file but they are different in\ncase\nof regular full backup where we read 4 blocks at a time. i value there will\nbe\nbetween 0 and 3.\n\n\n> Maybe we should just have a variable no_of_blocks to store a number of\n> blocks,\n> rather than calculating this say RELSEG_SIZE(i.e. 131072) times in the\n> worst\n> case.\n>\n\nOK. Done.\n\n\n> Sorry if I am missing something, but, should not it be just:\n>\n> len = cnt;\n>\n\nYeah. Done.\n\n\n> As I said earlier in my previous email, we now do not need\n> +decode_lsn_internal()\n> as it is already taken care by the introduction of function\n> pg_lsn_in_internal().\n>\n\nYes. 
Done that and rebased on latest HEAD.\n\n\n>\n>\n> Regards,\n> Jeevan Ladhe\n>\n\nPatches attached in the previous reply.\n\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 16 Aug 2019 16:12:52 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Fri, Aug 16, 2019 at 3:24 PM Jeevan Chalke <\njeevan.chalke@enterprisedb.com> wrote:\n\n>\n>\n> On Fri, Aug 2, 2019 at 6:43 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n>> Some comments:\n>> 1) There will be some link files created for tablespace, we might\n>> require some special handling for it\n>>\n>\n> Yep. I have that in my ToDo.\n> Will start working on that soon.\n>\n>\n>> 2)\n>> Retry functionality is hanlded only for copying of full files, should\n>> we handle retry for copying of partial files\n>> 3)\n>> we can have some range for maxretries similar to sleeptime\n>>\n>\n> I took help from pg_standby code related to maxentries and sleeptime.\n>\n> However, as we don't want to use system() call now, I have\n> removed all this kludge and just used fread/fwrite as discussed.\n>\n>\n>> 4)\n>> Should we check for malloc failure\n>>\n>\n> Used pg_malloc() instead. Same is also suggested by Ibrar.\n>\n>\n>>\n>> 5) Should we add display of progress as backup may take some time,\n>> this can be added as enhancement. We can get other's opinion on this.\n>>\n>\n> Can be done afterward once we have the functionality in place.\n>\n>\n>>\n>> 6)\n>> If the backup count increases providing the input may be difficult,\n>> Shall user provide all the incremental backups from a parent folder\n>> and can we handle the ordering of incremental backup internally\n>>\n>\n> I am not sure of this yet. We need to provide the tablespace mapping too.\n> But thanks for putting a point here. Will keep that in mind when I revisit\n> this.\n>\n>\n>>\n>> 7)\n>> Add verbose for copying whole file\n>>\n> Done\n>\n>\n>>\n>> 8) We can also check if approximate space is available in disk before\n>> starting combine backup, this can be added as enhancement. We can get\n>> other's opinion on this.\n>>\n>\n> Hmm... will leave it for now. 
User will get an error anyway.\n>\n>\n>>\n>> 9)\n>> Combine backup into directory can be combine backup directory\n>>\n> Done\n>\n>\n>>\n>> 10)\n>> MAX_INCR_BK_COUNT can be increased little, some applications use 1\n>> full backup at the beginning of the month and use 30 incremental\n>> backups rest of the days in the month\n>>\n>\n> Yeah, agree. But using any number here is debatable.\n> Let's see others opinion too.\n>\nWhy not use a list?\n\n\n>\n>\n>> Regards,\n>> Vignesh\n>> EnterpriseDB: http://www.enterprisedb.com\n>>\n>\n>\n> Attached new sets of patches with refactoring done separately.\n> Incremental backup patch became small now and hopefully more\n> readable than the first version.\n>\n> --\n> Jeevan Chalke\n> Technical Architect, Product Development\n> EnterpriseDB Corporation\n> The Enterprise PostgreSQL Company\n>\n>\n\n+ buf = (char *) malloc(statbuf->st_size);\n\n+ if (buf == NULL)\n\n+ ereport(ERROR,\n\n+ (errcode(ERRCODE_OUT_OF_MEMORY),\n\n+ errmsg(\"out of memory\")));\n\nWhy are you using malloc, you can use palloc here.\n\n\n\n\n-- \nIbrar Ahmed",
"msg_date": "Fri, 16 Aug 2019 16:12:32 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Fri, Aug 16, 2019 at 4:12 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n>\n>\n>\n>\n> On Fri, Aug 16, 2019 at 3:24 PM Jeevan Chalke <\n> jeevan.chalke@enterprisedb.com> wrote:\n>\n>>\n>>\n>> On Fri, Aug 2, 2019 at 6:43 PM vignesh C <vignesh21@gmail.com> wrote:\n>>\n>>> Some comments:\n>>> 1) There will be some link files created for tablespace, we might\n>>> require some special handling for it\n>>>\n>>\n>> Yep. I have that in my ToDo.\n>> Will start working on that soon.\n>>\n>>\n>>> 2)\n>>> Retry functionality is hanlded only for copying of full files, should\n>>> we handle retry for copying of partial files\n>>> 3)\n>>> we can have some range for maxretries similar to sleeptime\n>>>\n>>\n>> I took help from pg_standby code related to maxentries and sleeptime.\n>>\n>> However, as we don't want to use system() call now, I have\n>> removed all this kludge and just used fread/fwrite as discussed.\n>>\n>>\n>>> 4)\n>>> Should we check for malloc failure\n>>>\n>>\n>> Used pg_malloc() instead. Same is also suggested by Ibrar.\n>>\n>>\n>>>\n>>> 5) Should we add display of progress as backup may take some time,\n>>> this can be added as enhancement. We can get other's opinion on this.\n>>>\n>>\n>> Can be done afterward once we have the functionality in place.\n>>\n>>\n>>>\n>>> 6)\n>>> If the backup count increases providing the input may be difficult,\n>>> Shall user provide all the incremental backups from a parent folder\n>>> and can we handle the ordering of incremental backup internally\n>>>\n>>\n>> I am not sure of this yet. We need to provide the tablespace mapping too.\n>> But thanks for putting a point here. Will keep that in mind when I\n>> revisit this.\n>>\n>>\n>>>\n>>> 7)\n>>> Add verbose for copying whole file\n>>>\n>> Done\n>>\n>>\n>>>\n>>> 8) We can also check if approximate space is available in disk before\n>>> starting combine backup, this can be added as enhancement. We can get\n>>> other's opinion on this.\n>>>\n>>\n>> Hmm... 
will leave it for now. User will get an error anyway.\n>>\n>>\n>>>\n>>> 9)\n>>> Combine backup into directory can be combine backup directory\n>>>\n>> Done\n>>\n>>\n>>>\n>>> 10)\n>>> MAX_INCR_BK_COUNT can be increased little, some applications use 1\n>>> full backup at the beginning of the month and use 30 incremental\n>>> backups rest of the days in the month\n>>>\n>>\n>> Yeah, agree. But using any number here is debatable.\n>> Let's see others opinion too.\n>>\n> Why not use a list?\n>\n>\n>>\n>>\n>>> Regards,\n>>> Vignesh\n>>> EnterpriseDB: http://www.enterprisedb.com\n>>>\n>>\n>>\n>> Attached new sets of patches with refactoring done separately.\n>> Incremental backup patch became small now and hopefully more\n>> readable than the first version.\n>>\n>> --\n>> Jeevan Chalke\n>> Technical Architect, Product Development\n>> EnterpriseDB Corporation\n>> The Enterprise PostgreSQL Company\n>>\n>>\n>\n> + buf = (char *) malloc(statbuf->st_size);\n>\n> + if (buf == NULL)\n>\n> + ereport(ERROR,\n>\n> + (errcode(ERRCODE_OUT_OF_MEMORY),\n>\n> + errmsg(\"out of memory\")));\n>\n> Why are you using malloc, you can use palloc here.\n>\n>\n>\n> Hi, I gave another look at the patch and have some quick comments.\n\n\n-\n> char *extptr = strstr(fn, \".partial\");\n\nI think there should be a better and strict way to check the file\nextension.\n\n-\n> + extptr = strstr(outfn, \".partial\");\n> + Assert (extptr != NULL);\n\nWhy are you checking that again, you just appended that in the above\nstatement?\n\n-\n> + if (verbose && statbuf.st_size > (RELSEG_SIZE * BLCKSZ))\n> + pg_log_info(\"found big file \\\"%s\\\" (size: %.2lfGB): %m\",\nfromfn,\n> + (double) statbuf.st_size /\n(RELSEG_SIZE * BLCKSZ));\n\nThis is not just a log, you find a file which is bigger which surely has\nsome problem.\n\n-\n> + * We do read entire 1GB file in memory while taking incremental\nbackup; so\n> + * I don't see any reason why can't we do that here. 
Also,\ncopying data in\n> + * chunks is expensive. However, for bigger files, we still\nslice at 1GB\n> + * border.\n\n\nWhat do you mean by bigger file, a file greater than 1GB? In which case you\nget file > 1GB?\n\n\n-- \nIbrar Ahmed",
"msg_date": "Fri, 16 Aug 2019 19:36:50 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Fri, Aug 16, 2019 at 8:07 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>\n> What do you mean by bigger file, a file greater than 1GB? In which case you get file > 1GB?\n>\n>\n>\nFew comments:\nComment:\n+ buf = (char *) malloc(statbuf->st_size);\n+ if (buf == NULL)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OUT_OF_MEMORY),\n+ errmsg(\"out of memory\")));\n+\n+ if ((cnt = fread(buf, 1, statbuf->st_size, fp)) > 0)\n+ {\n+ Bitmapset *mod_blocks = NULL;\n+ int nmodblocks = 0;\n+\n+ if (cnt % BLCKSZ != 0)\n+ {\n\nWe can use same size as full page size.\nAfter pg start backup full page write will be enabled.\nWe can use the same file size to maintain data consistency.\n\nComment:\n/* Validate given LSN and convert it into XLogRecPtr. */\n+ opt->lsn = pg_lsn_in_internal(strVal(defel->arg), &have_error);\n+ if (XLogRecPtrIsInvalid(opt->lsn))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n+ errmsg(\"invalid value for LSN\")));\n\nValidate input lsn is less than current system lsn.\n\nComment:\n/* Validate given LSN and convert it into XLogRecPtr. 
*/\n+ opt->lsn = pg_lsn_in_internal(strVal(defel->arg), &have_error);\n+ if (XLogRecPtrIsInvalid(opt->lsn))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n+ errmsg(\"invalid value for LSN\")));\n\nShould we check if it is same timeline as the system's timeline.\n\nComment:\n+ if (fread(blkdata, 1, BLCKSZ, infp) != BLCKSZ)\n+ {\n+ pg_log_error(\"could not read from file \\\"%s\\\": %m\", outfn);\n+ cleanup_filemaps(filemaps, fmindex + 1);\n+ exit(1);\n+ }\n+\n+ /* Finally write one block to the output file */\n+ if (fwrite(blkdata, 1, BLCKSZ, outfp) != BLCKSZ)\n+ {\n+ pg_log_error(\"could not write to file \\\"%s\\\": %m\", outfn);\n+ cleanup_filemaps(filemaps, fmindex + 1);\n+ exit(1);\n+ }\n\nShould we support compression formats supported by pg_basebackup.\nThis can be an enhancement after the functionality is completed.\n\nComment:\nWe should provide some mechanism to validate the backup. To identify\nif some backup is corrupt or some file is missing(deleted) in a\nbackup.\n\nComment:\n+ ofp = fopen(tofn, \"wb\");\n+ if (ofp == NULL)\n+ {\n+ pg_log_error(\"could not create file \\\"%s\\\": %m\", tofn);\n+ exit(1);\n+ }\n\nifp should be closed in the error flow.\n\nComment:\n+ fp = fopen(filename, \"r\");\n+ if (fp == NULL)\n+ {\n+ pg_log_error(\"could not read file \\\"%s\\\": %m\", filename);\n+ exit(1);\n+ }\n+\n+ labelfile = pg_malloc(statbuf.st_size + 1);\n+ if (fread(labelfile, 1, statbuf.st_size, fp) != statbuf.st_size)\n+ {\n+ pg_log_error(\"corrupted file \\\"%s\\\": %m\", filename);\n+ pg_free(labelfile);\n+ exit(1);\n+ }\n\nfclose can be moved above.\n\nComment:\n+ if (!modifiedblockfound)\n+ {\n+ copy_whole_file(fm->filename, outfn);\n+ cleanup_filemaps(filemaps, fmindex + 1);\n+ return;\n+ }\n+\n+ /* Write all blocks to the output file */\n+\n+ if (fstat(fileno(fm->fp), &statbuf) != 0)\n+ {\n+ pg_log_error(\"could not stat file \\\"%s\\\": %m\", fm->filename);\n+ pg_free(filemaps);\n+ exit(1);\n+ }\n\nSome error flow, 
cleanup_filemaps need to be called to close the file\ndescriptors that are opened.\n\nComment:\n+/*\n+ * When to send the whole file, % blocks modified (90%)\n+ */\n+#define WHOLE_FILE_THRESHOLD 0.9\n+\n\nThis can be user configured value.\nThis can be an enhancement after the functionality is completed.\n\n\nComment:\nWe can add a readme file with all the details regarding incremental\nbackup and combine backup.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Aug 2019 16:46:32 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Fri, Aug 16, 2019 at 6:23 AM Jeevan Chalke\n<jeevan.chalke@enterprisedb.com> wrote:\n> [ patches ]\n\nReviewing 0002 and 0003:\n\n- Commit message for 0003 claims magic number and checksum are 0, but\nthat (fortunately) doesn't seem to be the case.\n\n- looks_like_rel_name actually checks whether it looks like a\n*non-temporary* relation name; suggest adjusting the function name.\n\n- The names do_full_backup and do_incremental_backup are quite\nconfusing because you're really talking about what to do with one\nfile. I suggest sendCompleteFile() and sendPartialFile().\n\n- Is there any good reason to have 'refptr' as a global variable, or\ncould we just pass the LSN around via function arguments? I know it's\njust mimicking startptr, but storing startptr in a global variable\ndoesn't seem like a great idea either, so if it's not too annoying,\nlet's pass it down via function arguments instead. Also, refptr is a\ncrappy name (even worse than startptr); whether we end up with a\nglobal variable or a bunch of local variables, let's make the name(s)\nclear and unambiguous, like incremental_reference_lsn. Yeah, I know\nthat's long, but I still think it's better than being unclear.\n\n- do_incremental_backup looks like it can never report an error from\nfread(), which is bad. But I see that this is just copied from the\nexisting code which has the same problem, so I started a separate\nthread about that.\n\n- I think that passing cnt and blkindex to verify_page_checksum()\ndoesn't look very good from an abstraction point of view. Granted,\nthe existing code isn't great either, but I think this makes the\nproblem worse. I suggest passing \"int backup_distance\" to this\nfunction, computed as cnt - BLCKSZ * blkindex. 
Then, you can\nfseek(-backup_distance), fread(BLCKSZ), and then fseek(backup_distance\n- BLCKSZ).\n\n- While I generally support the use of while and for loops rather than\ngoto for flow control, a while (1) loop that ends with a break is\nfunctionally a goto anyway. I think there are several ways this could\nbe revised. The most obvious one is probably to use goto, but I vote\nfor inverting the sense of the test: if (PageIsNew(page) ||\nPageGetLSN(page) >= startptr) break; This approach also saves a level\nof indentation for more than half of the function.\n\n- I am not sure that it's a good idea for sendwholefile = true to\nresult in dumping the entire file onto the wire in a single CopyData\nmessage. I don't know of a concrete problem in typical\nconfigurations, but someone who increases RELSEG_SIZE might be able to\noverflow CopyData's length word. At 2GB the length word would be\nnegative, which might break, and at 4GB it would wrap around, which\nwould certainly break. See CopyData in\nhttps://www.postgresql.org/docs/12/protocol-message-formats.html To\navoid this issue, and maybe some others, I suggest defining a\nreasonably large chunk size, say 1MB as a constant in this file\nsomeplace, and sending the data as a series of chunks of that size.\n\n- I don't think that the way concurrent truncation is handled is\ncorrect for partial files. Right now it just falls through to code\nwhich appends blocks of zeroes in either the complete-file or\npartial-file case. I think that logic should be moved into the\nfunction that handles the complete-file case. In the partial-file\ncase, the blocks that we actually send need to match the list of block\nnumbers we promised to send. We can't just send the promised blocks\nand then tack a bunch of zero-filled blocks onto the end that the file\nheader doesn't know about.\n\n- For reviewer convenience, please use the -v option to git\nformat-patch when posting and reposting a patch series. Using -v2,\n-v3, etc. 
on successive versions really helps.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 27 Aug 2019 14:29:34 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Due to the inherent nature of pg_basebackup, the incremental backup also\nallows taking backup in tar and compressed format. But, pg_combinebackup\ndoes not understand how to restore this. I think we should either make\npg_combinebackup support restoration of tar incremental backup or restrict\ntaking the incremental backup in tar format until pg_combinebackup\nsupports the restoration by making option '--lsn' and '-Ft' exclusive.\n\nIt is arguable that one can take the incremental backup in tar format,\nextract\nthat manually and then give the resultant directory as input to the\npg_combinebackup, but I think that kills the purpose of having\npg_combinebackup utility.\n\nThoughts?\n\nRegards,\nJeevan Ladhe",
"msg_date": "Thu, 29 Aug 2019 20:11:04 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Hi,\n\nI am doing some testing on pg_basebackup and pg_combinebackup patches. I\nhave also tried to create tap test for pg_combinebackup by taking\nreference from pg_basebackup tap cases.\nAttaching first draft test patch.\n\nI have done some testing with compression options, both -z and -Z level is\nworking with incremental backup.\n\nA minor comment : It is mentioned in pg_combinebackup help that maximum 10\nincremental backup can be given with -i option, but I found maximum 9\nincremental backup directories can be given at a time.\n\nThanks & Regards,\nRajkumar Raghuwanshi\nQMG, EnterpriseDB Corporation\n\n\nOn Thu, Aug 29, 2019 at 10:06 PM Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>\nwrote:\n\n> Due to the inherent nature of pg_basebackup, the incremental backup also\n> allows taking backup in tar and compressed format. But, pg_combinebackup\n> does not understand how to restore this. I think we should either make\n> pg_combinebackup support restoration of tar incremental backup or restrict\n> taking the incremental backup in tar format until pg_combinebackup\n> supports the restoration by making option '--lsn' and '-Ft' exclusive.\n>\n> It is arguable that one can take the incremental backup in tar format,\n> extract\n> that manually and then give the resultant directory as input to the\n> pg_combinebackup, but I think that kills the purpose of having\n> pg_combinebackup utility.\n>\n> Thoughts?\n>\n> Regards,\n> Jeevan Ladhe\n>",
"msg_date": "Fri, 30 Aug 2019 18:26:31 +0530",
"msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Here are some comments:\n\n\n+/* The reference XLOG position for the incremental backup. */\n\n+static XLogRecPtr refptr;\n\nAs Robert already pointed we may want to pass this as parameter around\ninstead\nof a global variable. Also, can be renamed to something like:\nincr_backup_refptr.\nI see in your earlier version of patch this was named startincrptr, which I\nthink was more meaningful.\n\n---------\n\n /*\n\n * If incremental backup, see whether the filename is a relation\nfilename\n * or not.\n\n */\n\nCan be reworded something like:\n\"If incremental backup, check if it is relation file and can be sent\npartially.\"\n\n---------\n\n+ if (verify_checksum)\n+ {\n+ ereport(WARNING,\n+ (errmsg(\"cannot verify checksum in file \\\"%s\\\",\nblock \"\n+ \"%d: read buffer size %d and page size %d \"\n+ \"differ\",\n+ readfilename, blkno, (int) cnt, BLCKSZ)));\n+ verify_checksum = false;\n+ }\n\nFor do_incremental_backup() it does not make sense to show the block number\nin\nwarning as it is always going to be 0 when we throw this warning.\nFurther, I think this can be rephrased as:\n\"cannot verify checksum in file \\\"%s\\\", read file size %d is not multiple of\npage size %d\".\n\nOr maybe we can just say:\n\"cannot verify checksum in file \\\"%s\\\"\" if checksum requested, disable the\nchecksum and leave it to the following message:\n\n+ ereport(WARNING,\n+ (errmsg(\"file size (%d) not in multiple of page size\n(%d), sending whole file\",\n+ (int) cnt, BLCKSZ)));\n\n---------\n\nIf you agree on the above comment for blkno, then we can shift declaration\nof blkno\ninside the condition \" if (!sendwholefile)\" in\ndo_incremental_backup(), or\navoid it altogether, and just pass \"i\" as blkindex, as well as blkno to\nverify_page_checksum(). 
May be add a comment why they are same in case of\nincremental backup.\n\n---------\n\nI think we should give the user hint from where he should be reading the\ninput\nlsn for incremental backup in the --help option as well as documentation?\nSomething like - \"To take an incremental backup, please provide value of\n\"--lsn\"\nas the \"START WAL LOCATION\" of previously taken full backup or incremental\nbackup from backup_lable file.\n\n---------\n\npg_combinebackup:\n\n+static bool made_new_outputdata = false;\n+static bool found_existing_outputdata = false;\n\nBoth of these are global, I understand that we need them global so that\nthey are\naccessible in cleanup_directories_atexit(). But they are passed to\nverify_dir_is_empty_or_create() as parameters, which I think is not needed.\nInstead verify_dir_is_empty_or_create() can directly change the globals.\n\n---------\n\nI see that checksum_failure is never set and always remains as false. May be\nit is something that you wanted to set in combine_partial_files() when a\nthe corrupted partial file is detected?\n\n---------\n\nI think the logic for verifying the backup chain should be moved out from\nmain()\nfunction to a separate function.\n\n---------\n\n+ /*\n+ * Verify the backup chain. INCREMENTAL BACKUP REFERENCE WAL LOCATION of\n+ * the incremental backup must match with the START WAL LOCATION of the\n+ * previous backup, until we reach a full backup in which there is no\n+ * INCREMENTAL BACKUP REFERENCE WAL LOCATION.\n+ */\n\nThe current logic assumes the incremental backup directories are to be\nprovided\nas input in the serial order the backups were taken. This is bit confusing\nunless clarified in pg_combinebackup help menu or documentation. 
I think we\nshould clarify it at both the places.\n\n---------\n\nI think scan_directory() should be rather renamed as do_combinebackup().\n\nRegards,\nJeevan Ladhe\n\nOn Thu, Aug 29, 2019 at 8:11 PM Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>\nwrote:\n\n> Due to the inherent nature of pg_basebackup, the incremental backup also\n> allows taking backup in tar and compressed format. But, pg_combinebackup\n> does not understand how to restore this. I think we should either make\n> pg_combinebackup support restoration of tar incremental backup or restrict\n> taking the incremental backup in tar format until pg_combinebackup\n> supports the restoration by making option '--lsn' and '-Ft' exclusive.\n>\n> It is arguable that one can take the incremental backup in tar format,\n> extract\n> that manually and then give the resultant directory as input to the\n> pg_combinebackup, but I think that kills the purpose of having\n> pg_combinebackup utility.\n>\n> Thoughts?\n>\n> Regards,\n> Jeevan Ladhe\n>\n\nHere are some comments:+/* The reference XLOG position for the incremental backup. */ +static XLogRecPtr refptr; As Robert already pointed we may want to pass this as parameter around insteadof a global variable. Also, can be renamed to something like: incr_backup_refptr.I see in your earlier version of patch this was named startincrptr, which Ithink was more meaningful.--------- /* * If incremental backup, see whether the filename is a relation filename * or not. 
*/Can be reworded something like:\"If incremental backup, check if it is relation file and can be sent partially.\"---------+ if (verify_checksum)+ {+ ereport(WARNING,+ (errmsg(\"cannot verify checksum in file \\\"%s\\\", block \"+ \"%d: read buffer size %d and page size %d \"+ \"differ\",+ readfilename, blkno, (int) cnt, BLCKSZ)));+ verify_checksum = false;+ }For do_incremental_backup() it does not make sense to show the block number inwarning as it is always going to be 0 when we throw this warning.Further, I think this can be rephrased as:\"cannot verify checksum in file \\\"%s\\\", read file size %d is not multiple ofpage size %d\".Or maybe we can just say:\"cannot verify checksum in file \\\"%s\\\"\" if checksum requested, disable thechecksum and leave it to the following message:+ ereport(WARNING,+ (errmsg(\"file size (%d) not in multiple of page size (%d), sending whole file\",+ (int) cnt, BLCKSZ))); ---------If you agree on the above comment for blkno, then we can shift declaration of blknoinside the condition \" if (!sendwholefile)\" in do_incremental_backup(), oravoid it altogether, and just pass \"i\" as blkindex, as well as blkno toverify_page_checksum(). May be add a comment why they are same in case ofincremental backup.---------I think we should give the user hint from where he should be reading the inputlsn for incremental backup in the --help option as well as documentation?Something like - \"To take an incremental backup, please provide value of \"--lsn\"as the \"START WAL LOCATION\" of previously taken full backup or incrementalbackup from backup_lable file. ---------pg_combinebackup:+static bool made_new_outputdata = false;+static bool found_existing_outputdata = false;Both of these are global, I understand that we need them global so that they areaccessible in cleanup_directories_atexit(). 
But they are passed toverify_dir_is_empty_or_create() as parameters, which I think is not needed.Instead verify_dir_is_empty_or_create() can directly change the globals.---------I see that checksum_failure is never set and always remains as false. May beit is something that you wanted to set in combine_partial_files() when athe corrupted partial file is detected?---------I think the logic for verifying the backup chain should be moved out from main()function to a separate function.---------+ /*+ * Verify the backup chain. INCREMENTAL BACKUP REFERENCE WAL LOCATION of+ * the incremental backup must match with the START WAL LOCATION of the+ * previous backup, until we reach a full backup in which there is no+ * INCREMENTAL BACKUP REFERENCE WAL LOCATION.+ */The current logic assumes the incremental backup directories are to be providedas input in the serial order the backups were taken. This is bit confusingunless clarified in pg_combinebackup help menu or documentation. I think weshould clarify it at both the places.---------I think scan_directory() should be rather renamed as do_combinebackup().Regards,Jeevan LadheOn Thu, Aug 29, 2019 at 8:11 PM Jeevan Ladhe <jeevan.ladhe@enterprisedb.com> wrote:Due to the inherent nature of pg_basebackup, the incremental backup alsoallows taking backup in tar and compressed format. But, pg_combinebackupdoes not understand how to restore this. I think we should either makepg_combinebackup support restoration of tar incremental backup or restricttaking the incremental backup in tar format until pg_combinebackupsupports the restoration by making option '--lsn' and '-Ft' exclusive.It is arguable that one can take the incremental backup in tar format, extractthat manually and then give the resultant directory as input to thepg_combinebackup, but I think that kills the purpose of havingpg_combinebackup utility.Thoughts?Regards,Jeevan Ladhe",
"msg_date": "Fri, 30 Aug 2019 18:51:50 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
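The backup-chain rule described in the review above (each incremental backup's INCREMENTAL BACKUP REFERENCE WAL LOCATION must equal the START WAL LOCATION of the immediately preceding backup, back to a full backup) can be sketched as a standalone check. This is an illustration only: `BackupInfo` and its fields are hypothetical stand-ins for what pg_combinebackup would parse out of each backup_label, not the patch's actual code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;
#define InvalidXLogRecPtr ((XLogRecPtr) 0)

/* Hypothetical per-backup summary, as parsed from backup_label. */
typedef struct BackupInfo
{
    XLogRecPtr  start_lsn;  /* START WAL LOCATION */
    XLogRecPtr  ref_lsn;    /* INCREMENTAL BACKUP REFERENCE WAL LOCATION,
                             * or InvalidXLogRecPtr for a full backup */
} BackupInfo;

/*
 * Verify a chain given oldest-first: backups[0] must be a full backup,
 * and every later backup must reference the START WAL LOCATION of the
 * backup immediately preceding it.
 */
static bool
verify_backup_chain(const BackupInfo *backups, int n)
{
    int     i;

    if (n <= 0 || backups[0].ref_lsn != InvalidXLogRecPtr)
        return false;           /* chain must start with a full backup */

    for (i = 1; i < n; i++)
    {
        if (backups[i].ref_lsn == InvalidXLogRecPtr ||
            backups[i].ref_lsn != backups[i - 1].start_lsn)
            return false;       /* gap, reordering, or stray full backup */
    }
    return true;
}
```

As the review notes, such a check only works if the directories are supplied in the order the backups were taken, which is why documenting that ordering matters.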
{
"msg_contents": "On Thu, Aug 29, 2019 at 10:41 AM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n> Due to the inherent nature of pg_basebackup, the incremental backup also\n> allows taking backup in tar and compressed format. But, pg_combinebackup\n> does not understand how to restore this. I think we should either make\n> pg_combinebackup support restoration of tar incremental backup or restrict\n> taking the incremental backup in tar format until pg_combinebackup\n> supports the restoration by making option '--lsn' and '-Ft' exclusive.\n>\n> It is arguable that one can take the incremental backup in tar format, extract\n> that manually and then give the resultant directory as input to the\n> pg_combinebackup, but I think that kills the purpose of having\n> pg_combinebackup utility.\n\nI don't agree. You're right that you would have to untar (and\nuncompress) the backup to run pg_combinebackup, but you would also\nhave to do that to restore a non-incremental backup, so it doesn't\nseem much different. It's true that for an incremental backup you\nmight need to untar and uncompress multiple prior backups rather than\njust one, but that's just the nature of an incremental backup. And,\non a practical level, if you want compression, which is pretty likely\nif you're thinking about incremental backups, the way to get that is\nto use tar format with -z or -Z.\n\nIt might be interesting to teach pg_combinebackup to be able to read\ntar-format backups, but I think that there are several variants of the\ntar format, and I suspect it would need to read them all. If someone\nun-tars and re-tars a backup with a different tar tool, we don't want\nit to become unreadable. So we'd either have to write our own\nde-tarring library or add an external dependency on one. 
I don't\nthink it's worth doing that at this point; I definitely don't think it\nneeds to be part of the first patch.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 30 Aug 2019 22:58:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Sat, Aug 31, 2019 at 7:59 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Aug 29, 2019 at 10:41 AM Jeevan Ladhe\n> <jeevan.ladhe@enterprisedb.com> wrote:\n> > Due to the inherent nature of pg_basebackup, the incremental backup also\n> > allows taking backup in tar and compressed format. But, pg_combinebackup\n> > does not understand how to restore this. I think we should either make\n> > pg_combinebackup support restoration of tar incremental backup or\n> restrict\n> > taking the incremental backup in tar format until pg_combinebackup\n> > supports the restoration by making option '--lsn' and '-Ft' exclusive.\n> >\n> > It is arguable that one can take the incremental backup in tar format,\n> extract\n> > that manually and then give the resultant directory as input to the\n> > pg_combinebackup, but I think that kills the purpose of having\n> > pg_combinebackup utility.\n>\n> I don't agree. You're right that you would have to untar (and\n> uncompress) the backup to run pg_combinebackup, but you would also\n> have to do that to restore a non-incremental backup, so it doesn't\n> seem much different. It's true that for an incremental backup you\n> might need to untar and uncompress multiple prior backups rather than\n> just one, but that's just the nature of an incremental backup. And,\n> on a practical level, if you want compression, which is pretty likely\n> if you're thinking about incremental backups, the way to get that is\n> to use tar format with -z or -Z.\n>\n> It might be interesting to teach pg_combinebackup to be able to read\n> tar-format backups, but I think that there are several variants of the\n> tar format, and I suspect it would need to read them all. If someone\n> un-tars and re-tars a backup with a different tar tool, we don't want\n> it to become unreadable. So we'd either have to write our own\n> de-tarring library or add an external dependency on one.\n\n\nAre we using any tar library in pg_basebackup.c? 
We already have the\ncapability\nin pg_basebackup to do that.\n\n\n\n> I don't\n> think it's worth doing that at this point; I definitely don't think it\n> needs to be part of the first patch.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n\n-- \nIbrar Ahmed",
"msg_date": "Sun, 1 Sep 2019 00:40:28 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Hi Robert,\n\nOn Sat, Aug 31, 2019 at 8:29 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Aug 29, 2019 at 10:41 AM Jeevan Ladhe\n> <jeevan.ladhe@enterprisedb.com> wrote:\n> > Due to the inherent nature of pg_basebackup, the incremental backup also\n> > allows taking backup in tar and compressed format. But, pg_combinebackup\n> > does not understand how to restore this. I think we should either make\n> > pg_combinebackup support restoration of tar incremental backup or\n> restrict\n> > taking the incremental backup in tar format until pg_combinebackup\n> > supports the restoration by making option '--lsn' and '-Ft' exclusive.\n> >\n> > It is arguable that one can take the incremental backup in tar format,\n> extract\n> > that manually and then give the resultant directory as input to the\n> > pg_combinebackup, but I think that kills the purpose of having\n> > pg_combinebackup utility.\n>\n> I don't agree. You're right that you would have to untar (and\n> uncompress) the backup to run pg_combinebackup, but you would also\n> have to do that to restore a non-incremental backup, so it doesn't\n> seem much different.\n>\n\nThanks. Yes I agree about the similarity between restoring non-incremental\nand incremental backup in this case.\n\n\n> I don't think it's worth doing that at this point; I definitely don't\n> think it\n> needs to be part of the first patch.\n>\n\nMakes sense.\n\nRegards,\nJeevan Ladhe",
"msg_date": "Mon, 2 Sep 2019 13:09:39 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Fri, Aug 16, 2019 at 3:54 PM Jeevan Chalke\n<jeevan.chalke@enterprisedb.com> wrote:\n>\n0003:\n+/*\n+ * When to send the whole file, % blocks modified (90%)\n+ */\n+#define WHOLE_FILE_THRESHOLD 0.9\n\nHow this threshold is selected. Is it by some test?\n\n\n- magic number, currently 0 (4 bytes)\nI think in the patch we are using (#define INCREMENTAL_BACKUP_MAGIC\n0x494E4352) as a magic number, not 0\n\n\n+ Assert(statbuf->st_size <= (RELSEG_SIZE * BLCKSZ));\n+\n+ buf = (char *) malloc(statbuf->st_size);\n+ if (buf == NULL)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OUT_OF_MEMORY),\n+ errmsg(\"out of memory\")));\n+\n+ if ((cnt = fread(buf, 1, statbuf->st_size, fp)) > 0)\n+ {\n+ Bitmapset *mod_blocks = NULL;\n+ int nmodblocks = 0;\n+\n+ if (cnt % BLCKSZ != 0)\n+ {\n\nIt will be good to add some comments for the if block and also for the\nassert. Actully, it's not very clear from the code.\n\n0004:\n+#include <time.h>\n+#include <sys/stat.h>\n+#include <unistd.h>\nHeader file include order (sys/state.h should be before time.h)\n\n\n\n+ printf(_(\"%s combines full backup with incremental backup.\\n\\n\"), progname);\n/backup/backups\n\n\n+ * scan_file\n+ *\n+ * Checks whether given file is partial file or not. If partial, then combines\n+ * it into a full backup file, else copies as is to the output directory.\n+ */\n\n/If partial, then combines/ If partial, then combine\n\n\n\n+static void\n+combine_partial_files(const char *fn, char **IncrDirs, int nIncrDir,\n+ const char *subdirpath, const char *outfn)\n+ /*\n+ * Open all files from all incremental backup directories and create a file\n+ * map.\n+ */\n+ basefilefound = false;\n+ for (i = (nIncrDir - 1), fmindex = 0; i >= 0; i--, fmindex++)\n+ {\n+ fm = &filemaps[fmindex];\n+\n.....\n+ }\n+\n+\n+ /* Process all opened files. 
*/\n+ lastblkno = 0;\n+ modifiedblockfound = false;\n+ for (i = 0; i < fmindex; i++)\n+ {\n+ char *buf;\n+ int hsize;\n+ int k;\n+ int blkstartoffset;\n......\n+ }\n+\n+ for (i = 0; i <= lastblkno; i++)\n+ {\n+ char blkdata[BLCKSZ];\n+ FILE *infp;\n+ int offset;\n...\n+ }\n}\n\nCan we breakdown this function in 2-3 functions. At least creating a\nfile map can directly go to a separate function.\n\nI have read 0003 and 0004 patch and there are few cosmetic comments.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 3 Sep 2019 12:11:38 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
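Dilip's question about WHOLE_FILE_THRESHOLD and the `cnt % BLCKSZ` check both feed the same decision: whether a segment is sent whole or as a partial file. A minimal sketch of that decision follows; the 0.9 constant mirrors the patch, but the function itself is illustrative, not the patch's code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define BLCKSZ 8192
#define WHOLE_FILE_THRESHOLD 0.9    /* as in the 0003 patch */

/*
 * Send the file whole when its size is not a multiple of the page size
 * (it cannot be split into pages, and per-page checksums cannot be
 * verified), or when at least 90% of its pages were modified, at which
 * point per-block bookkeeping costs more than it saves.
 */
static bool
send_whole_file(size_t file_size, int nmodblocks)
{
    size_t  nblocks;

    if (file_size % BLCKSZ != 0)
        return true;
    nblocks = file_size / BLCKSZ;
    if (nblocks == 0)
        return true;                /* empty file: nothing to split */
    return (double) nmodblocks / nblocks >= WHOLE_FILE_THRESHOLD;
}
```

Making the cutoff user-configurable, as suggested elsewhere in the thread, would only change the constant, not the shape of this decision.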
{
"msg_contents": "On Sat, Aug 31, 2019 at 3:41 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> Are we using any tar library in pg_basebackup.c? We already have the capability\n> in pg_basebackup to do that.\n\nI think pg_basebackup is using homebrew code to generate tar files,\nbut I'm reluctant to do that for reading tar files. For generating a\nfile, you can always emit the newest and \"best\" tar format, but for\nreading a file, you probably want to be prepared for older or cruftier\nvariants. Maybe not -- I'm not super-familiar with the tar on-disk\nformat. But I think there must be a reason why tar libraries exist,\nand I don't want to write a new one.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 3 Sep 2019 08:59:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Tue, Sep 3, 2019 at 6:00 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sat, Aug 31, 2019 at 3:41 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> > Are we using any tar library in pg_basebackup.c? We already have the\n> capability\n> > in pg_basebackup to do that.\n>\n> I think pg_basebackup is using homebrew code to generate tar files,\n> but I'm reluctant to do that for reading tar files. For generating a\n> file, you can always emit the newest and \"best\" tar format, but for\n> reading a file, you probably want to be prepared for older or cruftier\n> variants. Maybe not -- I'm not super-familiar with the tar on-disk\n> format. But I think there must be a reason why tar libraries exist,\n> and I don't want to write a new one.\n>\n+1 using the library to tar. But I think reason not using tar library is\nTAR is\none of the most simple file format. What is the best/newest format of TAR?\n\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\n\n-- \nIbrar Ahmed",
"msg_date": "Tue, 3 Sep 2019 19:04:59 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Tue, Sep 3, 2019 at 10:05 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> +1 using the library to tar. But I think reason not using tar library is TAR is\n> one of the most simple file format. What is the best/newest format of TAR?\n\nSo, I don't really want to go down this path at all, as I already\nsaid. You can certainly do your own research on this topic if you\nwish.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 3 Sep 2019 10:39:11 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Ibrar Ahmed <ibrar.ahmad@gmail.com> writes:\n> +1 using the library to tar.\n\nUh, *what* library?\n\npg_dump's pg_backup_tar.c is about 1300 lines, a very large fraction\nof which is boilerplate for interfacing to pg_backup_archiver's APIs.\nThe stuff that actually knows specifically about tar looks to be maybe\na couple hundred lines, plus there's another couple hundred lines of\n(rather duplicative?) code in src/port/tar.c. None of it is rocket\nscience.\n\nI can't believe that it'd be a good tradeoff to create a new external\ndependency to replace that amount of code. In case you haven't noticed,\nour luck with depending on external libraries has been abysmal.\n\nPossibly there's an argument for refactoring things so that there's\nmore stuff in tar.c and less elsewhere, but let's not go looking\nfor external code to depend on.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Sep 2019 11:00:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Tue, Sep 3, 2019 at 8:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Ibrar Ahmed <ibrar.ahmad@gmail.com> writes:\n> > +1 using the library to tar.\n>\n> Uh, *what* library?\n>\n\nI was just replying the Robert that he said\n\n\"But I think there must be a reason why tar libraries exist,\nand I don't want to write a new one.\"\n\nI said I am ok to use a library \"what he is proposing/thinking\",\nbut explained to him that TAR is the most simpler format that\nwhy PG has its own code.\n\n\n> pg_dump's pg_backup_tar.c is about 1300 lines, a very large fraction\n> of which is boilerplate for interfacing to pg_backup_archiver's APIs.\n> The stuff that actually knows specifically about tar looks to be maybe\n> a couple hundred lines, plus there's another couple hundred lines of\n> (rather duplicative?) code in src/port/tar.c. None of it is rocket\n> science.\n>\n> I can't believe that it'd be a good tradeoff to create a new external\n> dependency to replace that amount of code. 
In case you haven't noticed,\n> our luck with depending on external libraries has been abysmal.\n>\n> Possibly there's an argument for refactoring things so that there's\n> more stuff in tar.c and less elsewhere, but let's not go looking\n> for external code to depend on.\n>\n> regards, tom lane\n>\n\n\n-- \nIbrar Ahmed",
"msg_date": "Tue, 3 Sep 2019 21:44:25 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Tue, Sep 3, 2019 at 7:39 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Sep 3, 2019 at 10:05 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> > +1 using the library to tar. But I think reason not using tar library is\n> TAR is\n> > one of the most simple file format. What is the best/newest format of\n> TAR?\n>\n> So, I don't really want to go down this path at all, as I already\n> said. You can certainly do your own research on this topic if you\n> wish.\n>\n> I did that and have experience working on the TAR format. I was curious\nabout what\n\"best/newest\" you are talking.\n\n\n\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\n\n-- \nIbrar Ahmed",
"msg_date": "Tue, 3 Sep 2019 21:46:13 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Tue, Sep 3, 2019 at 12:11 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Aug 16, 2019 at 3:54 PM Jeevan Chalke\n> <jeevan.chalke@enterprisedb.com> wrote:\n> >\n> 0003:\n> +/*\n> + * When to send the whole file, % blocks modified (90%)\n> + */\n> +#define WHOLE_FILE_THRESHOLD 0.9\n>\n> How this threshold is selected. Is it by some test?\n>\n>\n> - magic number, currently 0 (4 bytes)\n> I think in the patch we are using (#define INCREMENTAL_BACKUP_MAGIC\n> 0x494E4352) as a magic number, not 0\n>\n>\n> + Assert(statbuf->st_size <= (RELSEG_SIZE * BLCKSZ));\n> +\n> + buf = (char *) malloc(statbuf->st_size);\n> + if (buf == NULL)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_OUT_OF_MEMORY),\n> + errmsg(\"out of memory\")));\n> +\n> + if ((cnt = fread(buf, 1, statbuf->st_size, fp)) > 0)\n> + {\n> + Bitmapset *mod_blocks = NULL;\n> + int nmodblocks = 0;\n> +\n> + if (cnt % BLCKSZ != 0)\n> + {\n>\n> It will be good to add some comments for the if block and also for the\n> assert. Actully, it's not very clear from the code.\n>\n> 0004:\n> +#include <time.h>\n> +#include <sys/stat.h>\n> +#include <unistd.h>\n> Header file include order (sys/state.h should be before time.h)\n>\n>\n>\n> + printf(_(\"%s combines full backup with incremental backup.\\n\\n\"), progname);\n> /backup/backups\n>\n>\n> + * scan_file\n> + *\n> + * Checks whether given file is partial file or not. 
If partial, then combines\n> + * it into a full backup file, else copies as is to the output directory.\n> + */\n>\n> /If partial, then combines/ If partial, then combine\n>\n>\n>\n> +static void\n> +combine_partial_files(const char *fn, char **IncrDirs, int nIncrDir,\n> + const char *subdirpath, const char *outfn)\n> + /*\n> + * Open all files from all incremental backup directories and create a file\n> + * map.\n> + */\n> + basefilefound = false;\n> + for (i = (nIncrDir - 1), fmindex = 0; i >= 0; i--, fmindex++)\n> + {\n> + fm = &filemaps[fmindex];\n> +\n> .....\n> + }\n> +\n> +\n> + /* Process all opened files. */\n> + lastblkno = 0;\n> + modifiedblockfound = false;\n> + for (i = 0; i < fmindex; i++)\n> + {\n> + char *buf;\n> + int hsize;\n> + int k;\n> + int blkstartoffset;\n> ......\n> + }\n> +\n> + for (i = 0; i <= lastblkno; i++)\n> + {\n> + char blkdata[BLCKSZ];\n> + FILE *infp;\n> + int offset;\n> ...\n> + }\n> }\n>\n> Can we breakdown this function in 2-3 functions. At least creating a\n> file map can directly go to a separate function.\n>\n> I have read 0003 and 0004 patch and there are few cosmetic comments.\n>\n I have not yet completed the review for 0004, but I have few more\ncomments. Tomorrow I will try to complete the review and some testing\nas well.\n\n1. It seems that the output full backup generated with\npg_combinebackup also contains the \"INCREMENTAL BACKUP REFERENCE WAL\nLOCATION\". 
It seems confusing\nbecause now this is a full backup, not the incremental backup.\n\n2.\n+ FILE *outfp;\n+ FileOffset outblocks[RELSEG_SIZE];\n+ int i;\n+ FileMap *filemaps;\n+ int fmindex;\n+ bool basefilefound;\n+ bool modifiedblockfound;\n+ uint32 lastblkno;\n+ FileMap *fm;\n+ struct stat statbuf;\n+ uint32 nblocks;\n+\n+ memset(outblocks, 0, sizeof(FileOffset) * RELSEG_SIZE);\n\nI don't think you need to memset this explicitly as you can initialize\nthe array itself no?\nFileOffset outblocks[RELSEG_SIZE] = {{0}}\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 4 Sep 2019 17:21:36 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
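Dilip's last point — replacing the memset() with an initializer — works because C zero-initializes the remaining elements of a partially initialized aggregate, so both forms yield an all-zero array. A small self-contained illustration; the `FileOffset` layout here is a stand-in, not the patch's definition, and the array is kept small rather than RELSEG_SIZE.

```c
#include <assert.h>
#include <string.h>

/* Stand-in for the patch's FileOffset; the real fields differ. */
typedef struct FileOffset
{
    int     fmindex;
    long    offset;
} FileOffset;

#define NELEMS 8            /* RELSEG_SIZE in the patch; kept small here */

/* Count elements that are not all-zero. */
static int
count_nonzero(const FileOffset *arr, int n)
{
    int     i,
            cnt = 0;

    for (i = 0; i < n; i++)
        if (arr[i].fmindex != 0 || arr[i].offset != 0)
            cnt++;
    return cnt;
}
```

With `FileOffset outblocks[NELEMS] = {{0}};` every element starts zeroed, so the explicit `memset(outblocks, 0, sizeof(outblocks))` in the patch becomes redundant.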
{
"msg_contents": "On Tue, Sep 3, 2019 at 12:46 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> I did that and have experience working on the TAR format. I was curious about what\n> \"best/newest\" you are talking.\n\nWell, why not go look it up?\n\nOn my MacBook, tar is documented to understand three different tar\nformats: gnutar, ustar, and v7, and two sets of extensions to the tar\nformat: numeric extensions required by POSIX, and Solaris extensions.\nIt also understands the pax and restricted-pax formats which are\nderived from the ustar format. I don't know what your system\nsupports, but it's probably not hugely different; the fact that there\nare multiple tar formats has been documented in the tar man page on\nevery machine I've checked for the past 20 years. Here, 'man tar'\nrefers the reader to 'man libarchive-formats', which contains the\ndetails mentioned above.\n\nA quick Google search for 'multiple tar formats' also finds\nhttps://en.wikipedia.org/wiki/Tar_(computing)#File_format and\nhttps://www.gnu.org/software/tar/manual/html_chapter/tar_8.html each\nof which explains a good deal of the complexity in this area.\n\nI don't really understand why I have to explain to you what I mean\nwhen I say there are multiple tar formats when you can look it up on\nGoogle and find that there are multiple tar formats. Again, the point\nis that the current code only generates tar archives and therefore\nonly needs to generate one format, but if we add code that reads a tar\narchive, it probably needs to read several formats, because there are\nseveral formats that are popular enough to be widely-supported.\n\nIt's possible that somebody else here knows more about this topic and\ncould make better judgements than I can, but my view at present is\nthat if we want to read tar archives, we probably would want to do it\nby depending on libarchive. 
And I don't think we should do that for\nthis project because I don't think it would provide much value.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 4 Sep 2019 09:31:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
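The variants Robert lists can be told apart from a single 512-byte header block: the magic/version field at offset 257 is blank in pre-POSIX v7 archives, "ustar" NUL plus version "00" in POSIX ustar (and pax) archives, and "ustar" followed by two spaces and a NUL in GNU tar's old format. The toy classifier below is a sketch of why a reader must know the variants, not a substitute for a real library such as libarchive.

```c
#include <assert.h>
#include <string.h>

typedef enum
{
    TAR_V7,         /* pre-POSIX: magic field left blank */
    TAR_USTAR,      /* POSIX ustar; also used by pax archives */
    TAR_GNU,        /* GNU tar's "ustar  " variant */
    TAR_UNKNOWN
} TarVariant;

/* Classify a 512-byte tar header block by its magic/version field. */
static TarVariant
classify_tar_header(const unsigned char *hdr)
{
    const char *magic = (const char *) hdr + 257;

    if (memcmp(magic, "ustar\0" "00", 8) == 0)
        return TAR_USTAR;
    if (memcmp(magic, "ustar  ", 8) == 0)   /* trailing NUL comes from the literal */
        return TAR_GNU;
    if (magic[0] == '\0')
        return TAR_V7;
    return TAR_UNKNOWN;
}
```

Even this ignores the harder differences — GNU long-name extension records, pax extended headers, base-256 size encoding — which is the real argument for leaning on an existing library rather than writing a reader.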
{
"msg_contents": "On Tue, Sep 03, 2019 at 08:59:53AM -0400, Robert Haas wrote:\n> I think pg_basebackup is using homebrew code to generate tar files,\n> but I'm reluctant to do that for reading tar files.\n\nYes. This code has not actually changed since its introduction.\nPlease note that we also have code which reads directly data from a\ntarball in pg_basebackup.c when appending the recovery parameters to\npostgresql.auto.conf for -R. There could be some consolidation here\nwith what you are doing.\n\n> For generating a\n> file, you can always emit the newest and \"best\" tar format, but for\n> reading a file, you probably want to be prepared for older or cruftier\n> variants. Maybe not -- I'm not super-familiar with the tar on-disk\n> format. But I think there must be a reason why tar libraries exist,\n> and I don't want to write a new one.\n\nWe need to be sure as well that the library chosen does not block\naccess to a feature in all the various platforms we have.\n--\nMichael",
"msg_date": "Thu, 5 Sep 2019 11:07:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Wed, Sep 4, 2019 at 10:08 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > For generating a\n> > file, you can always emit the newest and \"best\" tar format, but for\n> > reading a file, you probably want to be prepared for older or cruftier\n> > variants. Maybe not -- I'm not super-familiar with the tar on-disk\n> > format. But I think there must be a reason why tar libraries exist,\n> > and I don't want to write a new one.\n>\n> We need to be sure as well that the library chosen does not block\n> access to a feature in all the various platforms we have.\n\nWell, again, my preference is to just not make this particular feature\nwork natively with tar files. Then I don't need to choose a library,\nso the question is moot.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 4 Sep 2019 23:25:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Hi,\n\nAttached new set of patches adding support for the tablespace handling.\n\nThis patchset also fixes the issues reported by Vignesh, Robert, Jeevan\nLadhe,\nand Dilip Kumar.\n\nPlease have a look and let me know if I missed any comments to account.\n\nThanks\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 9 Sep 2019 16:38:15 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Tue, Aug 27, 2019 at 4:46 PM vignesh C <vignesh21@gmail.com> wrote:\n\n> Few comments:\n> Comment:\n> + buf = (char *) malloc(statbuf->st_size);\n> + if (buf == NULL)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_OUT_OF_MEMORY),\n> + errmsg(\"out of memory\")));\n> +\n> + if ((cnt = fread(buf, 1, statbuf->st_size, fp)) > 0)\n> + {\n> + Bitmapset *mod_blocks = NULL;\n> + int nmodblocks = 0;\n> +\n> + if (cnt % BLCKSZ != 0)\n> + {\n>\n> We can use same size as full page size.\n> After pg start backup full page write will be enabled.\n> We can use the same file size to maintain data consistency.\n>\n\nCan you please explain which size?\nThe aim here is to read entire file in-memory and thus used\nstatbuf->st_size.\n\nComment:\n> Should we check if it is same timeline as the system's timeline.\n>\n\nAt the time of taking the incremental backup, we can't check that.\nHowever, while combining, I made sure that the timeline is the same for all\nbackups.\n\n\n>\n> Comment:\n>\n> Should we support compression formats supported by pg_basebackup.\n> This can be an enhancement after the functionality is completed.\n>\n\nFor the incremental backup, it just works out of the box.\nFor combining backup, as discussed up-thread, the user has to\nuncompress first, combine them, compress if required.\n\n\n> Comment:\n> We should provide some mechanism to validate the backup. 
To identify\n> if some backup is corrupt or some file is missing(deleted) in a\n> backup.\n>\n\nMaybe, but not for the first version.\n\n\n> Comment:\n> +/*\n> + * When to send the whole file, % blocks modified (90%)\n> + */\n> +#define WHOLE_FILE_THRESHOLD 0.9\n> +\n> This can be user configured value.\n> This can be an enhancement after the functionality is completed.\n>\n\nYes.\n\n\n> Comment:\n> We can add a readme file with all the details regarding incremental\n> backup and combine backup.\n>\n\nWill have a look.\n\n\n>\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\nThanks\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company\n\nOn Tue, Aug 27, 2019 at 4:46 PM vignesh C <vignesh21@gmail.com> wrote:Few comments:Comment:+ buf = (char *) malloc(statbuf->st_size);+ if (buf == NULL)+ ereport(ERROR,+ (errcode(ERRCODE_OUT_OF_MEMORY),+ errmsg(\"out of memory\")));++ if ((cnt = fread(buf, 1, statbuf->st_size, fp)) > 0)+ {+ Bitmapset *mod_blocks = NULL;+ int nmodblocks = 0;++ if (cnt % BLCKSZ != 0)+ {\nWe can use same size as full page size.After pg start backup full page write will be enabled.We can use the same file size to maintain data consistency.Can you please explain which size?The aim here is to read entire file in-memory and thus used statbuf->st_size.Comment:Should we check if it is same timeline as the system's timeline.At the time of taking the incremental backup, we can't check that.However, while combining, I made sure that the timeline is the same for all backups. \nComment:Should we support compression formats supported by pg_basebackup.This can be an enhancement after the functionality is completed.For the incremental backup, it just works out of the box.For combining backup, as discussed up-thread, the user has touncompress first, combine them, compress if required. \nComment:We should provide some mechanism to validate the backup. 
To identifyif some backup is corrupt or some file is missing(deleted) in abackup.Maybe, but not for the first version. Comment:+/*+ * When to send the whole file, % blocks modified (90%)+ */+#define WHOLE_FILE_THRESHOLD 0.9+This can be user configured value.This can be an enhancement after the functionality is completed.Yes. Comment:We can add a readme file with all the details regarding incrementalbackup and combine backup.Will have a look. \nRegards,VigneshEnterpriseDB: http://www.enterprisedb.com\nThanks-- Jeevan ChalkeTechnical Architect, Product DevelopmentEnterpriseDB CorporationThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 9 Sep 2019 16:51:34 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Tue, Aug 27, 2019 at 11:59 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Aug 16, 2019 at 6:23 AM Jeevan Chalke\n> <jeevan.chalke@enterprisedb.com> wrote:\n> > [ patches ]\n>\n> Reviewing 0002 and 0003:\n>\n> - Commit message for 0003 claims magic number and checksum are 0, but\n> that (fortunately) doesn't seem to be the case.\n>\n\nOops, updated commit message.\n\n\n>\n> - looks_like_rel_name actually checks whether it looks like a\n> *non-temporary* relation name; suggest adjusting the function name.\n>\n> - The names do_full_backup and do_incremental_backup are quite\n> confusing because you're really talking about what to do with one\n> file. I suggest sendCompleteFile() and sendPartialFile().\n>\n\nChanged function names.\n\n\n>\n> - Is there any good reason to have 'refptr' as a global variable, or\n> could we just pass the LSN around via function arguments? I know it's\n> just mimicking startptr, but storing startptr in a global variable\n> doesn't seem like a great idea either, so if it's not too annoying,\n> let's pass it down via function arguments instead. Also, refptr is a\n> crappy name (even worse than startptr); whether we end up with a\n> global variable or a bunch of local variables, let's make the name(s)\n> clear and unambiguous, like incremental_reference_lsn. Yeah, I know\n> that's long, but I still think it's better than being unclear.\n>\n\nRenamed variable.\nHowever, I have kept that as global only as it needs many functions to\nchange their signature, like, sendFile(), sendDir(), sendTablspeace() etc.\n\n\n> - do_incremental_backup looks like it can never report an error from\n> fread(), which is bad. But I see that this is just copied from the\n> existing code which has the same problem, so I started a separate\n> thread about that.\n>\n> - I think that passing cnt and blkindex to verify_page_checksum()\n> doesn't look very good from an abstraction point of view. 
Granted,\n> the existing code isn't great either, but I think this makes the\n> problem worse. I suggest passing \"int backup_distance\" to this\n> function, computed as cnt - BLCKSZ * blkindex. Then, you can\n> fseek(-backup_distance), fread(BLCKSZ), and then fseek(backup_distance\n> - BLCKSZ).\n>\n\nYep. Done these changes in the refactoring patch.\n\n\n>\n> - While I generally support the use of while and for loops rather than\n> goto for flow control, a while (1) loop that ends with a break is\n> functionally a goto anyway. I think there are several ways this could\n> be revised. The most obvious one is probably to use goto, but I vote\n> for inverting the sense of the test: if (PageIsNew(page) ||\n> PageGetLSN(page) >= startptr) break; This approach also saves a level\n> of indentation for more than half of the function.\n>\n\nI have used this new inverted condition, but we still need a while(1) loop.\n\n\n> - I am not sure that it's a good idea for sendwholefile = true to\n> result in dumping the entire file onto the wire in a single CopyData\n> message. I don't know of a concrete problem in typical\n> configurations, but someone who increases RELSEG_SIZE might be able to\n> overflow CopyData's length word. At 2GB the length word would be\n> negative, which might break, and at 4GB it would wrap around, which\n> would certainly break. See CopyData in\n> https://www.postgresql.org/docs/12/protocol-message-formats.html To\n> avoid this issue, and maybe some others, I suggest defining a\n> reasonably large chunk size, say 1MB as a constant in this file\n> someplace, and sending the data as a series of chunks of that size.\n>\n\nOK. Done as per the suggestions.\n\n\n>\n> - I don't think that the way concurrent truncation is handled is\n> correct for partial files. Right now it just falls through to code\n> which appends blocks of zeroes in either the complete-file or\n> partial-file case. 
I think that logic should be moved into the\n> function that handles the complete-file case. In the partial-file\n> case, the blocks that we actually send need to match the list of block\n> numbers we promised to send. We can't just send the promised blocks\n> and then tack a bunch of zero-filled blocks onto the end that the file\n> header doesn't know about.\n>\n\nWell, in partial file case we won't end up inside that block. So we are\nnever sending zeroes at the end in case of partial file.\n\n\n> - For reviewer convenience, please use the -v option to git\n> format-patch when posting and reposting a patch series. Using -v2,\n> -v3, etc. on successive versions really helps.\n>\n\nSure. Thanks for letting me know about this option.\n\n\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nThanks\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company\n\nOn Tue, Aug 27, 2019 at 11:59 PM Robert Haas <robertmhaas@gmail.com> wrote:On Fri, Aug 16, 2019 at 6:23 AM Jeevan Chalke<jeevan.chalke@enterprisedb.com> wrote:> [ patches ]\nReviewing 0002 and 0003:\n- Commit message for 0003 claims magic number and checksum are 0, butthat (fortunately) doesn't seem to be the case.Oops, updated commit message. \n- looks_like_rel_name actually checks whether it looks like a*non-temporary* relation name; suggest adjusting the function name.\n- The names do_full_backup and do_incremental_backup are quiteconfusing because you're really talking about what to do with onefile. I suggest sendCompleteFile() and sendPartialFile().Changed function names. \n- Is there any good reason to have 'refptr' as a global variable, orcould we just pass the LSN around via function arguments? 
I know it'sjust mimicking startptr, but storing startptr in a global variabledoesn't seem like a great idea either, so if it's not too annoying,let's pass it down via function arguments instead. Also, refptr is acrappy name (even worse than startptr); whether we end up with aglobal variable or a bunch of local variables, let's make the name(s)clear and unambiguous, like incremental_reference_lsn. Yeah, I knowthat's long, but I still think it's better than being unclear.Renamed variable.However, I have kept that as global only as it needs many functions tochange their signature, like, sendFile(), sendDir(), sendTablspeace() etc.\n- do_incremental_backup looks like it can never report an error fromfread(), which is bad. But I see that this is just copied from theexisting code which has the same problem, so I started a separatethread about that.\n- I think that passing cnt and blkindex to verify_page_checksum()doesn't look very good from an abstraction point of view. Granted,the existing code isn't great either, but I think this makes theproblem worse. I suggest passing \"int backup_distance\" to thisfunction, computed as cnt - BLCKSZ * blkindex. Then, you canfseek(-backup_distance), fread(BLCKSZ), and then fseek(backup_distance- BLCKSZ).Yep. Done these changes in the refactoring patch. \n- While I generally support the use of while and for loops rather thangoto for flow control, a while (1) loop that ends with a break isfunctionally a goto anyway. I think there are several ways this couldbe revised. The most obvious one is probably to use goto, but I votefor inverting the sense of the test: if (PageIsNew(page) ||PageGetLSN(page) >= startptr) break; This approach also saves a levelof indentation for more than half of the function.I have used this new inverted condition, but we still need a while(1) loop.\n- I am not sure that it's a good idea for sendwholefile = true toresult in dumping the entire file onto the wire in a single CopyDatamessage. 
I don't know of a concrete problem in typicalconfigurations, but someone who increases RELSEG_SIZE might be able tooverflow CopyData's length word. At 2GB the length word would benegative, which might break, and at 4GB it would wrap around, whichwould certainly break. See CopyData in\nhttps://www.postgresql.org/docs/12/protocol-message-formats.html Toavoid this issue, and maybe some others, I suggest defining areasonably large chunk size, say 1MB as a constant in this filesomeplace, and sending the data as a series of chunks of that size.OK. Done as per the suggestions. \n- I don't think that the way concurrent truncation is handled iscorrect for partial files. Right now it just falls through to codewhich appends blocks of zeroes in either the complete-file orpartial-file case. I think that logic should be moved into thefunction that handles the complete-file case. In the partial-filecase, the blocks that we actually send need to match the list of blocknumbers we promised to send. We can't just send the promised blocksand then tack a bunch of zero-filled blocks onto the end that the fileheader doesn't know about.Well, in partial file case we won't end up inside that block. So we arenever sending zeroes at the end in case of partial file.\n- For reviewer convenience, please use the -v option to gitformat-patch when posting and reposting a patch series. Using -v2,-v3, etc. on successive versions really helps.Sure. Thanks for letting me know about this option. \n-- Robert HaasEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company\nThanks-- Jeevan ChalkeTechnical Architect, Product DevelopmentEnterpriseDB CorporationThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 9 Sep 2019 17:00:33 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
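The chunked-transmission fix discussed in the message above (splitting a file into 1MB pieces so that no single CopyData payload approaches the signed 32-bit length limit) can be sketched as follows. This is an editor's illustrative sketch in Python, not the patch's C code; the function name `iter_chunks` is hypothetical.

```python
# Sketch: split a file's contents into fixed-size chunks so that no
# single CopyData-style message payload approaches the 2GB limit of a
# signed 32-bit length word. Illustrative only; the actual patch is C.

CHUNK_SIZE = 1024 * 1024  # 1MB, the chunk size suggested in the review

def iter_chunks(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Yield successive chunks of at most chunk_size bytes."""
    for off in range(0, len(data), chunk_size):
        yield data[off:off + chunk_size]

if __name__ == "__main__":
    payload = b"x" * (3 * CHUNK_SIZE + 123)   # 3 full chunks + remainder
    chunks = list(iter_chunks(payload))
    assert len(chunks) == 4
    assert all(len(c) <= CHUNK_SIZE for c in chunks)
    assert b"".join(chunks) == payload
```

Each yielded chunk would then be sent as its own message, so even a segment larger than 2GB never produces an oversized length word.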
{
"msg_contents": "On Fri, Aug 30, 2019 at 6:52 PM Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>\nwrote:\n\n> Here are some comments:\n> Or maybe we can just say:\n> \"cannot verify checksum in file \\\"%s\\\"\" if checksum requested, disable the\n> checksum and leave it to the following message:\n>\n> + ereport(WARNING,\n> + (errmsg(\"file size (%d) not in multiple of page size\n> (%d), sending whole file\",\n> + (int) cnt, BLCKSZ)));\n>\n>\nOpted for the above suggestion.\n\n\n>\n> I think we should give the user hint from where he should be reading the\n> input\n> lsn for incremental backup in the --help option as well as documentation?\n> Something like - \"To take an incremental backup, please provide value of\n> \"--lsn\"\n> as the \"START WAL LOCATION\" of previously taken full backup or incremental\n> backup from backup_lable file.\n>\n\nAdded this in the documentation. In help, it will be too crowdy.\n\n\n> pg_combinebackup:\n>\n> +static bool made_new_outputdata = false;\n> +static bool found_existing_outputdata = false;\n>\n> Both of these are global, I understand that we need them global so that\n> they are\n> accessible in cleanup_directories_atexit(). But they are passed to\n> verify_dir_is_empty_or_create() as parameters, which I think is not needed.\n> Instead verify_dir_is_empty_or_create() can directly change the globals.\n>\n\nAfter adding support for a tablespace, these two functions take different\nvalues depending upon the context.\n\n\n> The current logic assumes the incremental backup directories are to be\n> provided\n> as input in the serial order the backups were taken. This is bit confusing\n> unless clarified in pg_combinebackup help menu or documentation. I think we\n> should clarify it at both the places.\n>\n\nAdded in doc.\n\n\n>\n> I think scan_directory() should be rather renamed as do_combinebackup().\n>\n\nI am not sure about this renaming. scan_directory() is called recursively\nto scan each sub-directories too. 
If we rename it then it is not actually\nrecursively doing a combinebackup. Combine backup is a single whole\nprocess.\n\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company\n\nOn Fri, Aug 30, 2019 at 6:52 PM Jeevan Ladhe <jeevan.ladhe@enterprisedb.com> wrote:Here are some comments:Or maybe we can just say:\"cannot verify checksum in file \\\"%s\\\"\" if checksum requested, disable thechecksum and leave it to the following message:+ ereport(WARNING,+ (errmsg(\"file size (%d) not in multiple of page size (%d), sending whole file\",+ (int) cnt, BLCKSZ))); Opted for the above suggestion. I think we should give the user hint from where he should be reading the inputlsn for incremental backup in the --help option as well as documentation?Something like - \"To take an incremental backup, please provide value of \"--lsn\"as the \"START WAL LOCATION\" of previously taken full backup or incrementalbackup from backup_lable file. Added this in the documentation. In help, it will be too crowdy. pg_combinebackup:+static bool made_new_outputdata = false;+static bool found_existing_outputdata = false;Both of these are global, I understand that we need them global so that they areaccessible in cleanup_directories_atexit(). But they are passed toverify_dir_is_empty_or_create() as parameters, which I think is not needed.Instead verify_dir_is_empty_or_create() can directly change the globals.After adding support for a tablespace, these two functions take different values depending upon the context.The current logic assumes the incremental backup directories are to be providedas input in the serial order the backups were taken. This is bit confusingunless clarified in pg_combinebackup help menu or documentation. I think weshould clarify it at both the places.Added in doc. I think scan_directory() should be rather renamed as do_combinebackup().I am not sure about this renaming. 
scan_directory() is called recursivelyto scan each sub-directories too. If we rename it then it is not actuallyrecursively doing a combinebackup. Combine backup is a single whole process. -- Jeevan ChalkeTechnical Architect, Product DevelopmentEnterpriseDB CorporationThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 9 Sep 2019 17:12:39 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Tue, Sep 3, 2019 at 12:11 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Fri, Aug 16, 2019 at 3:54 PM Jeevan Chalke\n> <jeevan.chalke@enterprisedb.com> wrote:\n> >\n> 0003:\n> +/*\n> + * When to send the whole file, % blocks modified (90%)\n> + */\n> +#define WHOLE_FILE_THRESHOLD 0.9\n>\n> How this threshold is selected. Is it by some test?\n>\n\nCurrently, it is set arbitrarily. If required, we will make it a GUC.\n\n\n>\n> - magic number, currently 0 (4 bytes)\n> I think in the patch we are using (#define INCREMENTAL_BACKUP_MAGIC\n> 0x494E4352) as a magic number, not 0\n>\n\nYes. Robert too reported this. Updated the commit message.\n\n\n>\n> Can we breakdown this function in 2-3 functions. At least creating a\n> file map can directly go to a separate function.\n>\n\nSeparated out filemap changes to separate function. Rest kept as is to have\nan easy followup.\n\n\n>\n> I have read 0003 and 0004 patch and there are few cosmetic comments.\n>\n\nCan you please post those too?\n\nOther comments are fixed.\n\n\n>\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\nThanks\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company\n\nOn Tue, Sep 3, 2019 at 12:11 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:On Fri, Aug 16, 2019 at 3:54 PM Jeevan Chalke<jeevan.chalke@enterprisedb.com> wrote:>0003:+/*+ * When to send the whole file, % blocks modified (90%)+ */+#define WHOLE_FILE_THRESHOLD 0.9\nHow this threshold is selected. Is it by some test?Currently, it is set arbitrarily. If required, we will make it a GUC.\n\n- magic number, currently 0 (4 bytes)I think in the patch we are using (#define INCREMENTAL_BACKUP_MAGIC0x494E4352) as a magic number, not 0Yes. Robert too reported this. Updated the commit message. \nCan we breakdown this function in 2-3 functions. 
At least creating afile map can directly go to a separate function.Separated out filemap changes to separate function. Rest kept as is to have an easy followup. \nI have read 0003 and 0004 patch and there are few cosmetic comments.Can you please post those too?Other comments are fixed. \n\n-- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com\nThanks-- Jeevan ChalkeTechnical Architect, Product DevelopmentEnterpriseDB CorporationThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 9 Sep 2019 17:17:39 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
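The WHOLE_FILE_THRESHOLD decision debated in the exchange above can be sketched as a small predicate: if at least 90% of a file's blocks changed since the reference LSN, sending the whole file is cheaper than a per-block partial file. This is an editor's illustrative sketch in Python; the function name `send_whole_file` is hypothetical and the real logic lives in the C patch.

```python
# Sketch of the whole-file-vs-partial decision: send the complete file
# when the fraction of modified blocks reaches WHOLE_FILE_THRESHOLD,
# otherwise send only the modified blocks in a partial-file format.

WHOLE_FILE_THRESHOLD = 0.9  # the (currently arbitrary) 90% cutoff

def send_whole_file(n_modified_blocks: int, n_total_blocks: int) -> bool:
    if n_total_blocks == 0:
        return True  # empty file: a partial-file format gains nothing
    return n_modified_blocks / n_total_blocks >= WHOLE_FILE_THRESHOLD

assert send_whole_file(90, 100) is True    # exactly at the threshold
assert send_whole_file(89, 100) is False   # just under it: partial file
```

Making the cutoff a GUC, as suggested, would simply replace the constant with a configurable setting.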
{
"msg_contents": "On Wed, Sep 4, 2019 at 5:21 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n>\n> I have not yet completed the review for 0004, but I have few more\n> comments. Tomorrow I will try to complete the review and some testing\n> as well.\n>\n> 1. It seems that the output full backup generated with\n> pg_combinebackup also contains the \"INCREMENTAL BACKUP REFERENCE WAL\n> LOCATION\". It seems confusing\n> because now this is a full backup, not the incremental backup.\n>\n\nYes, that was remaining and was in my TODO.\nDone in the new patchset. Also, taking --label as an input like\npg_basebackup.\n\n\n>\n> 2.\n> + memset(outblocks, 0, sizeof(FileOffset) * RELSEG_SIZE);\n>\n> I don't think you need to memset this explicitly as you can initialize\n> the array itself no?\n> FileOffset outblocks[RELSEG_SIZE] = {{0}}\n>\n\nI didn't see any issue with memset either but changed this per your\nsuggestion.\n\n\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\n\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company\n\nOn Wed, Sep 4, 2019 at 5:21 PM Dilip Kumar <dilipbalaut@gmail.com> wrote: I have not yet completed the review for 0004, but I have few morecomments. Tomorrow I will try to complete the review and some testingas well.\n1. It seems that the output full backup generated withpg_combinebackup also contains the \"INCREMENTAL BACKUP REFERENCE WALLOCATION\". It seems confusingbecause now this is a full backup, not the incremental backup.Yes, that was remaining and was in my TODO.Done in the new patchset. Also, taking --label as an input like pg_basebackup. \n2.+ memset(outblocks, 0, sizeof(FileOffset) * RELSEG_SIZE);\nI don't think you need to memset this explicitly as you can initializethe array itself no?FileOffset outblocks[RELSEG_SIZE] = {{0}}I didn't see any issue with memset either but changed this per your suggestion. 
\n-- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com\n-- Jeevan ChalkeTechnical Architect, Product DevelopmentEnterpriseDB CorporationThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 9 Sep 2019 17:21:34 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Hi,\n\nOne of my colleague at EDB, Rajkumar Raghuwanshi, while testing this\nfeature reported an issue. He reported that if a full base-backup is\ntaken, and then created a database, and then took an incremental backup,\ncombining full backup with incremental backup is then failing.\n\nI had a look over this issue and observed that when the new database is\ncreated, the catalog files are copied as-is into the new directory\ncorresponding to a newly created database. And as they are just copied,\nthe LSN on those pages are not changed. Due to this incremental backup\nthinks that its an existing file and thus do not copy the blocks from\nthese new files, leading to the failure.\n\nI have surprised to know that even though we are creating new files from\nold files, we kept the LSN unmodified. I didn't see any other parameter\nin basebackup which tells that this is a new file from last LSN or\nsomething.\n\nI tried looking for any other DDL doing similar stuff like creating a new\npage with existing LSN. But I could not find any other commands than\nCREATE DATABASE and ALTER DATABASE .. SET TABLESPACE.\n\nSuggestions/thoughts?\n\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company\n\nHi,One of my colleague at EDB, Rajkumar Raghuwanshi, while testing thisfeature reported an issue. He reported that if a full base-backup istaken, and then created a database, and then took an incremental backup,combining full backup with incremental backup is then failing.I had a look over this issue and observed that when the new database iscreated, the catalog files are copied as-is into the new directorycorresponding to a newly created database. And as they are just copied,the LSN on those pages are not changed. 
Due to this incremental backupthinks that its an existing file and thus do not copy the blocks fromthese new files, leading to the failure.I have surprised to know that even though we are creating new files fromold files, we kept the LSN unmodified. I didn't see any other parameterin basebackup which tells that this is a new file from last LSN orsomething.I tried looking for any other DDL doing similar stuff like creating a newpage with existing LSN. But I could not find any other commands thanCREATE DATABASE and ALTER DATABASE .. SET TABLESPACE.Suggestions/thoughts?-- Jeevan ChalkeTechnical Architect, Product DevelopmentEnterpriseDB CorporationThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 12 Sep 2019 18:43:18 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Mon, Sep 9, 2019 at 4:51 PM Jeevan Chalke <jeevan.chalke@enterprisedb.com>\nwrote:\n>\n>\n>\n> On Tue, Aug 27, 2019 at 4:46 PM vignesh C <vignesh21@gmail.com> wrote:\n>>\n>> Few comments:\n>> Comment:\n>> + buf = (char *) malloc(statbuf->st_size);\n>> + if (buf == NULL)\n>> + ereport(ERROR,\n>> + (errcode(ERRCODE_OUT_OF_MEMORY),\n>> + errmsg(\"out of memory\")));\n>> +\n>> + if ((cnt = fread(buf, 1, statbuf->st_size, fp)) > 0)\n>> + {\n>> + Bitmapset *mod_blocks = NULL;\n>> + int nmodblocks = 0;\n>> +\n>> + if (cnt % BLCKSZ != 0)\n>> + {\n>>\n>> We can use same size as full page size.\n>> After pg start backup full page write will be enabled.\n>> We can use the same file size to maintain data consistency.\n>\n>\n> Can you please explain which size?\n> The aim here is to read entire file in-memory and thus used\nstatbuf->st_size.\n>\nInstead of reading the whole file here, we can read the file page by page.\nThere is a possibility of data inconsistency if data is not read page by\npage, data will be consistent if read page by page as full page write will\nbe enabled at this time.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Mon, Sep 9, 2019 at 4:51 PM Jeevan Chalke <jeevan.chalke@enterprisedb.com> wrote:>>>> On Tue, Aug 27, 2019 at 4:46 PM vignesh C <vignesh21@gmail.com> wrote:>>>> Few comments:>> Comment:>> + buf = (char *) malloc(statbuf->st_size);>> + if (buf == NULL)>> + ereport(ERROR,>> + (errcode(ERRCODE_OUT_OF_MEMORY),>> + errmsg(\"out of memory\")));>> +>> + if ((cnt = fread(buf, 1, statbuf->st_size, fp)) > 0)>> + {>> + Bitmapset *mod_blocks = NULL;>> + int nmodblocks = 0;>> +>> + if (cnt % BLCKSZ != 0)>> + {>>>> We can use same size as full page size.>> After pg start backup full page write will be enabled.>> We can use the same file size to maintain data consistency.>>> Can you please explain which size?> The aim here is to read entire file in-memory and thus used statbuf->st_size.>Instead of reading the whole 
file here, we can read the file page by page. There is a possibility of data inconsistency if data is not read page by page, data will be consistent if read page by page as full page write will be enabled at this time.Regards,VigneshEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 13 Sep 2019 22:38:12 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
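The page-by-page reading proposed in the message above can be sketched as follows. This is an editor's illustrative sketch in Python (the server code in question is C); `read_pages` is a hypothetical name, and BLCKSZ matches PostgreSQL's default page size.

```python
# Sketch: read a relation file one BLCKSZ page at a time instead of
# slurping the whole file with a single fread(). A trailing short read
# corresponds to the cnt % BLCKSZ != 0 case handled in the patch.
import io

BLCKSZ = 8192  # PostgreSQL's default page size

def read_pages(f):
    """Yield (block_number, page_bytes) for each full or partial page."""
    blkno = 0
    while True:
        page = f.read(BLCKSZ)
        if not page:
            break
        yield blkno, page
        blkno += 1

data = b"a" * BLCKSZ + b"b" * BLCKSZ + b"c" * 100   # 2 pages + partial tail
pages = list(read_pages(io.BytesIO(data)))
assert [blkno for blkno, _ in pages] == [0, 1, 2]
assert len(pages[-1][1]) == 100   # the trailing partial page
```

As Robert notes in the follow-up, the read granularity does not by itself guarantee consistency; recovery at restore time fixes up torn pages regardless.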
{
"msg_contents": "On Fri, Sep 13, 2019 at 1:08 PM vignesh C <vignesh21@gmail.com> wrote:\n> Instead of reading the whole file here, we can read the file page by page. There is a possibility of data inconsistency if data is not read page by page, data will be consistent if read page by page as full page write will be enabled at this time.\n\nI think you are confused about what \"full page writes\" means. It has\nto do what gets written to the write-ahead log, not the way that the\npages themselves are written. There is no portable way to ensure that\nan 8kB read or write is atomic, and generally it isn't.\n\nIt shouldn't matter whether the file is read all at once, page by\npage, or byte by byte, except for performance. Recovery is going to\nrun when that backup is restored, and any inconsistencies should get\nfixed up at that time.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 15 Sep 2019 21:36:39 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Thu, Sep 12, 2019 at 9:13 AM Jeevan Chalke\n<jeevan.chalke@enterprisedb.com> wrote:\n> I had a look over this issue and observed that when the new database is\n> created, the catalog files are copied as-is into the new directory\n> corresponding to a newly created database. And as they are just copied,\n> the LSN on those pages are not changed. Due to this incremental backup\n> thinks that its an existing file and thus do not copy the blocks from\n> these new files, leading to the failure.\n\n*facepalm*\n\nWell, this shoots a pretty big hole in my design for this feature. I\ndon't know why I didn't think of this when I wrote out that design\noriginally. Ugh.\n\nUnless we change the way that CREATE DATABASE and any similar\noperations work so that they always stamp pages with new LSNs, I think\nwe have to give up on the idea of being able to take an incremental\nbackup by just specifying an LSN. We'll instead need to get a list of\nfiles from the server first, and then request the entirety of any that\nwe don't have, plus the changed blocks from the ones that we do have.\nI guess that will make Stephen happy, since it's more like the design\nhe wanted originally (and should generalize more simply to parallel\nbackup).\n\nOne question I have is: is there any scenario in which an existing\npage gets modified after the full backup and before the incremental\nbackup but does not end up with an LSN that follows the full backup's\nstart LSN? If there is, then the whole concept of using LSNs to tell\nwhich blocks have been modified doesn't really work. I can't think of\na way that can happen off-hand, but then, I thought my last design was\ngood, too.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 15 Sep 2019 21:44:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
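The hole described above can be made concrete with a small sketch: LSN-based change detection includes a block only when its page LSN follows the reference LSN (mirroring the patch's quoted `PageGetLSN(page) >= startptr` test), so a catalog file that CREATE DATABASE byte-copied keeps its stale LSNs and looks completely unmodified. This is an editor's illustrative sketch in Python with made-up LSN values; `modified_blocks` is a hypothetical name.

```python
# Sketch: why LSN filtering misses files copied by CREATE DATABASE.
# A block counts as modified only if its page LSN follows the backup's
# reference LSN; byte-copied files retain their old page LSNs.

def modified_blocks(page_lsns, reference_lsn):
    """Return block numbers whose page LSN follows the reference LSN."""
    return [blkno for blkno, lsn in enumerate(page_lsns)
            if lsn >= reference_lsn]

reference_lsn = 1000                      # start LSN of the full backup
ordinary_table = [900, 1200, 950, 1500]   # blocks 1 and 3 changed since then
copied_catalog = [400, 500, 450]          # copied as-is: stale LSNs throughout

assert modified_blocks(ordinary_table, reference_lsn) == [1, 3]
# The copied file is brand new, yet appears to need no blocks at all:
assert modified_blocks(copied_catalog, reference_lsn) == []
```

This is why the thread moves toward first fetching a file list from the server, so newly appeared files are sent in full regardless of their page LSNs.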
{
"msg_contents": "On Mon, Sep 16, 2019 at 7:22 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Sep 12, 2019 at 9:13 AM Jeevan Chalke\n> <jeevan.chalke@enterprisedb.com> wrote:\n> > I had a look over this issue and observed that when the new database is\n> > created, the catalog files are copied as-is into the new directory\n> > corresponding to a newly created database. And as they are just copied,\n> > the LSN on those pages are not changed. Due to this incremental backup\n> > thinks that its an existing file and thus do not copy the blocks from\n> > these new files, leading to the failure.\n>\n> *facepalm*\n>\n> Well, this shoots a pretty big hole in my design for this feature. I\n> don't know why I didn't think of this when I wrote out that design\n> originally. Ugh.\n>\n> Unless we change the way that CREATE DATABASE and any similar\n> operations work so that they always stamp pages with new LSNs, I think\n> we have to give up on the idea of being able to take an incremental\n> backup by just specifying an LSN.\n>\n\nThis seems to be a blocking problem for the LSN based design. Can we\nthink of using creation time for file? Basically, if the file\ncreation time is later than backup-labels \"START TIME:\", then include\nthat file entirely. I think one big point against this is clock skew\nlike what if somebody tinkers with the clock. 
And also, this can\ncover cases like\nwhat Jeevan has pointed out but might not cover other cases which we found\nproblematic.\n\n> We'll instead need to get a list of\n> files from the server first, and then request the entirety of any that\n> we don't have, plus the changed blocks from the ones that we do have.\n> I guess that will make Stephen happy, since it's more like the design\n> he wanted originally (and should generalize more simply to parallel\n> backup).\n>\n> One question I have is: is there any scenario in which an existing\n> page gets modified after the full backup and before the incremental\n> backup but does not end up with an LSN that follows the full backup's\n> start LSN?\n>\n\nI think the operations covered by WAL flag XLR_SPECIAL_REL_UPDATE will\nhave similar problems.\n\nOne related point is how do incremental backups handle the case where\nvacuum truncates the relation partially? Basically, with current\npatch/design, it doesn't appear that such information can be passed\nvia incremental backup. I am not sure if this is a problem, but it\nwould be good if we can somehow handle this.\n\nWon't some operations where at the end we directly call heap_sync\nwithout writing WAL have a similar problem as well? Similarly,\nit is not very clear if unlogged relations are handled in some way; if\nnot, the same could be documented.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 16 Sep 2019 14:01:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Mon, Sep 16, 2019 at 4:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> This seems to be a blocking problem for the LSN based design.\n\nWell, only the simplest version of it, I think.\n\n> Can we think of using creation time for file? Basically, if the file\n> creation time is later than backup-labels \"START TIME:\", then include\n> that file entirely. I think one big point against this is clock skew\n> like what if somebody tinkers with the clock. And also, this can\n> cover cases like\n> what Jeevan has pointed but might not cover other cases which we found\n> problematic.\n\nWell that would mean, for example, that if you copied the data\ndirectory from one machine to another, the next \"incremental\" backup\nwould turn into a full backup. That sucks. And in other situations,\nlike resetting the clock, it could mean that you end up with a corrupt\nbackup without any real ability for PostgreSQL to detect it. I'm not\nsaying that it is impossible to create a practically useful system\nbased on file time stamps, but I really don't like it.\n\n> I think the operations covered by WAL flag XLR_SPECIAL_REL_UPDATE will\n> have similar problems.\n\nI'm not sure quite what you mean by that. Can you elaborate? It\nappears to me that the XLR_SPECIAL_REL_UPDATE operations are all\nthings that create files, remove files, or truncate files, and the\nsketch in my previous email would handle the first two of those cases\ncorrectly. See below for the third.\n\n> One related point is how do incremental backups handle the case where\n> vacuum truncates the relation partially? Basically, with current\n> patch/design, it doesn't appear that such information can be passed\n> via incremental backup. I am not sure if this is a problem, but it\n> would be good if we can somehow handle this.\n\nAs to this, if you're taking a full backup of a particular file,\nthere's no problem. 
If you're taking a partial backup of a particular\nfile, you need to include the current length of the file and the\nidentity and contents of each modified block. Then you're fine.\n\n> Isn't some operations where at the end we directly call heap_sync\n> without writing WAL will have a similar problem as well?\n\nMaybe. Can you give an example?\n\n> Similarly,\n> it is not very clear if unlogged relations are handled in some way if\n> not, the same could be documented.\n\nI think that we don't need to back up the contents of unlogged\nrelations at all, right? Restoration from an online backup always\ninvolves running recovery, and so unlogged relations will anyway get\nzapped.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 16 Sep 2019 09:30:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Sep 16, 2019 at 4:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Can we think of using creation time for file? Basically, if the file\n> > creation time is later than backup-labels \"START TIME:\", then include\n> > that file entirely. I think one big point against this is clock skew\n> > like what if somebody tinkers with the clock. And also, this can\n> > cover cases like\n> > what Jeevan has pointed but might not cover other cases which we found\n> > problematic.\n> \n> Well that would mean, for example, that if you copied the data\n> directory from one machine to another, the next \"incremental\" backup\n> would turn into a full backup. That sucks. And in other situations,\n> like resetting the clock, it could mean that you end up with a corrupt\n> backup without any real ability for PostgreSQL to detect it. I'm not\n> saying that it is impossible to create a practically useful system\n> based on file time stamps, but I really don't like it.\n\nIn a number of cases, trying to make sure that on a failover or copy of\nthe backup the next 'incremental' is really an 'incremental' is\ndangerous. 
A better strategy to address this, and the other issues\nrealized on this thread recently, is to:\n\n- Have a manifest of every file in each backup\n- Always back up new files that weren't in the prior backup\n- Keep a checksum of each file\n- Track the timestamp of each file as of when it was backed up\n- Track the file size of each file\n- Track the starting timestamp of each backup\n- Always include files with a modification time after the starting\n timestamp of the prior backup, or if the file size has changed\n- In the event of any anomalies (which includes things like a timeline\n switch), use checksum matching (aka 'delta checksum backup') to\n perform the backup instead of using timestamps (or just always do that\n if you want to be particularly careful- having an option for it is\n great)\n- Probably other things I'm not thinking of off-hand, but this is at\n least a good start. Make sure to checksum this information too.\n\nI agree entirely that it is dangerous to simply rely on creation time as\ncompared to some other time, or to rely on modification time of a given\nfile across multiple backups (which has been shown to reliably cause\ncorruption, at least with rsync and its 1-second granularity on\nmodification time).\n\nBy having a manifest for each backed up file for each backup, you also\ngain the ability to validate that a backup in the repository hasn't been\ncorrupted post-backup, a feature that at least some other database\nbackup and restore systems have (referring specifically to the big O in\nthis particular case, but I bet others do too).\n\nHaving a system of keeping track of which backups are full and which are\ndifferential in an overall system also gives you the ability to do\nthings like expiration in a sensible way, including handling WAL\nexpiration.\n\nAs also mentioned up-thread, this likely also allows you to have a\nsimpler approach to parallelizing the overall backup.\n\nI'd like to clarify that while I would like to have an 
easier way to\nparallelize backups, that's a relatively minor complaint- the much\nbigger issue that I have with this feature is that trying to address\neverything correctly while having only the amount of information that\ncould be passed on the command-line about the prior full/incremental is\ngoing to be extremely difficult, complicated, and likely to lead to\nsubtle bugs in the actual code, and probably less than subtle bugs in\nhow users end up using it, since they'll have to implement the\nexpiration and tracking of information between backups themselves\n(unless something's changed in that part during this discussion- I admit\nthat I've not read every email in this thread).\n\n> > One related point is how do incremental backups handle the case where\n> > vacuum truncates the relation partially? Basically, with current\n> > patch/design, it doesn't appear that such information can be passed\n> > via incremental backup. I am not sure if this is a problem, but it\n> > would be good if we can somehow handle this.\n> \n> As to this, if you're taking a full backup of a particular file,\n> there's no problem. If you're taking a partial backup of a particular\n> file, you need to include the current length of the file and the\n> identity and contents of each modified block. Then you're fine.\n\nI would also expect this to be fine but if there's an example of where\nthis is an issue, please share. The only issue that I can think of\noff-hand is orphaned-file risk, whereby you have something like CREATE\nDATABASE or perhaps ALTER TABLE .. SET TABLESPACE or such, take a\nbackup while that's happening, but that doesn't complete during the\nbackup (or recovery, or perhaps even in some other scenarios, it's\nunfortunately quite complicated). 
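The quoted partial-file scheme (current file length plus the identity and contents of each modified block) can be pictured like this — an illustrative sketch, not the patch's actual wire format:

```python
from dataclasses import dataclass, field
from typing import Dict

BLCKSZ = 8192  # PostgreSQL's default block size


@dataclass
class PartialFile:
    path: str
    current_length: int  # file length at backup time
    blocks: Dict[int, bytes] = field(default_factory=dict)  # block no -> new contents


def apply_partial(prior: bytes, p: PartialFile) -> bytes:
    '''Rebuild a file from the prior copy plus a partial backup: honor the
    recorded length first (which is how a vacuum truncation gets carried
    over), then overlay each modified block.'''
    data = bytearray(prior[:p.current_length].ljust(p.current_length, b'\0'))
    for blkno, contents in p.blocks.items():
        data[blkno * BLCKSZ:blkno * BLCKSZ + len(contents)] = contents
    return bytes(data)
```

Because the recorded length always wins over the old file's length, a relation that vacuum shortened between backups is restored at its new, shorter size.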
This orphaned file risk isn't newly\ndiscovered but fixing it is pretty complicated- would love to discuss\nideas around how to handle it.\n\n> > Isn't some operations where at the end we directly call heap_sync\n> > without writing WAL will have a similar problem as well?\n> \n> Maybe. Can you give an example?\n\nI'd be curious to hear what the concern is here also.\n\n> > Similarly,\n> > it is not very clear if unlogged relations are handled in some way if\n> > not, the same could be documented.\n> \n> I think that we don't need to back up the contents of unlogged\n> relations at all, right? Restoration from an online backup always\n> involves running recovery, and so unlogged relations will anyway get\n> zapped.\n\nUnlogged relations shouldn't be in the backup at all, since, yes, they\nget zapped at the start of recovery. We recently taught pg_basebackup\nhow to avoid backing them up so this shouldn't be an issue, as they\nshould be skipped for incrementals as well as fulls. I expect the\norphaned file problem also exists for UNLOGGED->LOGGED transitions.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 16 Sep 2019 10:38:17 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Mon, Sep 16, 2019 at 9:30 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Isn't some operations where at the end we directly call heap_sync\n> > without writing WAL will have a similar problem as well?\n>\n> Maybe. Can you give an example?\n\nLooking through the code, I found two cases where we do this. One is\na bulk insert operation with wal_level = minimal, and the other is\nCLUSTER or VACUUM FULL with wal_level = minimal. In both of these\ncases we are generating new blocks whose LSNs will be 0/0. So, I think\nwe need a rule that if the server is asked to back up all blocks in a\nfile with LSNs > some threshold LSN, it must also include any blocks\nwhose LSN is 0/0. Those blocks are either uninitialized or are\npopulated without WAL logging, so they always need to be copied.\n\nOutside of unlogged and temporary tables, I don't know of any case\nwhere we make a critical modification to an already-existing block\nwithout bumping the LSN. I hope there is no such case.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 16 Sep 2019 11:52:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Mon, Sep 16, 2019 at 10:38 AM Stephen Frost <sfrost@snowman.net> wrote:\n> In a number of cases, trying to make sure that on a failover or copy of\n> the backup the next 'incremental' is really an 'incremental' is\n> dangerous. A better strategy to address this, and the other issues\n> realized on this thread recently, is to:\n>\n> - Have a manifest of every file in each backup\n> - Always back up new files that weren't in the prior backup\n> - Keep a checksum of each file\n> - Track the timestamp of each file as of when it was backed up\n> - Track the file size of each file\n> - Track the starting timestamp of each backup\n> - Always include files with a modification time after the starting\n> timestamp of the prior backup, or if the file size has changed\n> - In the event of any anomolies (which includes things like a timeline\n> switch), use checksum matching (aka 'delta checksum backup') to\n> perform the backup instead of using timestamps (or just always do that\n> if you want to be particularly careful- having an option for it is\n> great)\n> - Probably other things I'm not thinking of off-hand, but this is at\n> least a good start. Make sure to checksum this information too.\n\nI agree with some of these ideas but not all of them. I think having\na backup manifest is a good idea; that would allow taking a new\nincremental backup to work from the manifest rather than the data\ndirectory, which could be extremely useful, because it might be a lot\nfaster and the manifest could also be copied to a machine other than\nthe one where the entire backup is stored. If the backup itself has\nbeen pushed off to S3 or whatever, you can't access it quickly, but\nyou could keep the manifest around.\n\nI also agree that backing up all files that weren't in the previous\nbackup is a good strategy. I proposed that fairly explicitly a few\nemails back; but also, the contrary is obviously nonsense. 
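The kind of per-file manifest both messages agree on might contain records along these lines (a sketch with illustrative field names and SHA-256 chosen arbitrarily, not any actual proposal):

```python
import hashlib
import json
import os


def manifest_entry(path: str) -> dict:
    '''One manifest record per backed-up file: identity, size, timestamp,
    and a content checksum.'''
    st = os.stat(path)
    with open(path, 'rb') as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {'path': path, 'size': st.st_size,
            'mtime': st.st_mtime, 'sha256': digest}


def write_manifest(entries: list, out_path: str) -> None:
    # Checksum the manifest itself as well, so corruption of the
    # manifest is as detectable as corruption of the data files.
    body = json.dumps(entries, sort_keys=True)
    with open(out_path, 'w') as f:
        f.write(body + '\n' + hashlib.sha256(body.encode()).hexdigest() + '\n')
```

A manifest like this is small enough to keep locally even when the backup itself has been pushed to cheap remote storage.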
And I also\nagree with, and proposed, that we record the size along with the file.\n\nI don't really agree with your comments about checksums and\ntimestamps. I think that, if possible, there should be ONE method of\ndetermining whether a block has changed in some important way, and I\nthink if we can make LSN work, that would be for the best. If you use\nmultiple methods of detecting changes without any clearly-defined\nreason for so doing, maybe what you're saying is that you don't really\nbelieve that any of the methods are reliable but if we throw the\nkitchen sink at the problem it should come out OK. Any bugs in one\nmechanism are likely to be masked by one of the others, but that's not\nas good as one method that is known to be altogether reliable.\n\n> By having a manifest for each backed up file for each backup, you also\n> gain the ability to validate that a backup in the repository hasn't been\n> corrupted post-backup, a feature that at least some other database\n> backup and restore systems have (referring specifically to the big O in\n> this particular case, but I bet others do too).\n\nAgreed. The manifest only lets you validate to a limited extent, but\nthat's still useful.\n\n> Having a system of keeping track of which backups are full and which are\n> differential in an overall system also gives you the ability to do\n> things like expiration in a sensible way, including handling WAL\n> expiration.\n\nTrue, but I'm not sure that functionality belongs in core. It\ncertainly needs to be possible for out-of-core code to do this part of\nthe work if desired, because people want to integrate with enterprise\nbackup systems, and we can't come in and say, well, you back up\neverything else using Netbackup or Tivoli, but for PostgreSQL you have\nto use pg_backrest. 
I mean, maybe you can win that argument, but I\nknow I can't.\n\n> I'd like to clarify that while I would like to have an easier way to\n> parallelize backups, that's a relatively minor complaint- the much\n> bigger issue that I have with this feature is that trying to address\n> everything correctly while having only the amount of information that\n> could be passed on the command-line about the prior full/incremental is\n> going to be extremely difficult, complicated, and likely to lead to\n> subtle bugs in the actual code, and probably less than subtle bugs in\n> how users end up using it, since they'll have to implement the\n> expiration and tracking of information between backups themselves\n> (unless something's changed in that part during this discussion- I admit\n> that I've not read every email in this thread).\n\nWell, the evidence seems to show that you are right, at least to some\nextent. I consider it a positive good if the client needs to give the\nserver only a limited amount of information. After all, you could\nalways take an incremental backup by shipping every byte of the\nprevious backup to the server, having it compare everything to the\ncurrent contents, and having it then send you back the stuff that is\nnew or different. But that would be dumb, because most of the point of\nan incremental backup is to save on sending lots of data over the\nnetwork unnecessarily. Now, it seems that I took that goal to an\nunhealthy extreme, because as we've now realized, sending only an LSN\nand nothing else isn't enough to get a correct backup. So we need to\nsend more, and it doesn't have to be the absolutely most\nstripped-down, bare-bones version of what could be sent. But it should\nbe fairly minimal, I think; that's kinda the point of the feature.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 16 Sep 2019 12:23:28 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Sep 16, 2019 at 10:38 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > In a number of cases, trying to make sure that on a failover or copy of\n> > the backup the next 'incremental' is really an 'incremental' is\n> > dangerous. A better strategy to address this, and the other issues\n> > realized on this thread recently, is to:\n> >\n> > - Have a manifest of every file in each backup\n> > - Always back up new files that weren't in the prior backup\n> > - Keep a checksum of each file\n> > - Track the timestamp of each file as of when it was backed up\n> > - Track the file size of each file\n> > - Track the starting timestamp of each backup\n> > - Always include files with a modification time after the starting\n> > timestamp of the prior backup, or if the file size has changed\n> > - In the event of any anomolies (which includes things like a timeline\n> > switch), use checksum matching (aka 'delta checksum backup') to\n> > perform the backup instead of using timestamps (or just always do that\n> > if you want to be particularly careful- having an option for it is\n> > great)\n> > - Probably other things I'm not thinking of off-hand, but this is at\n> > least a good start. Make sure to checksum this information too.\n> \n> I agree with some of these ideas but not all of them. I think having\n> a backup manifest is a good idea; that would allow taking a new\n> incremental backup to work from the manifest rather than the data\n> directory, which could be extremely useful, because it might be a lot\n> faster and the manifest could also be copied to a machine other than\n> the one where the entire backup is stored. 
If the backup itself has\n> been pushed off to S3 or whatever, you can't access it quickly, but\n> you could keep the manifest around.\n\nYes, those are also good reasons for having a manifest.\n\n> I also agree that backing up all files that weren't in the previous\n> backup is a good strategy. I proposed that fairly explicitly a few\n> emails back; but also, the contrary is obviously nonsense. And I also\n> agree with, and proposed, that we record the size along with the file.\n\nSure, I didn't mean to imply that there was something wrong with that.\nIncluding the checksum and other metadata is also valuable, both for\nhelping to identify corruption in the backup archive and for forensics,\nif not for other reasons.\n\n> I don't really agree with your comments about checksums and\n> timestamps. I think that, if possible, there should be ONE method of\n> determining whether a block has changed in some important way, and I\n> think if we can make LSN work, that would be for the best. If you use\n> multiple methods of detecting changes without any clearly-defined\n> reason for so doing, maybe what you're saying is that you don't really\n> believe that any of the methods are reliable but if we throw the\n> kitchen sink at the problem it should come out OK. Any bugs in one\n> mechanism are likely to be masked by one of the others, but that's not\n> as as good as one method that is known to be altogether reliable.\n\nI disagree with this on a couple of levels. The first is pretty simple-\nwe don't have all of the information. The user may have some reason to\nbelieve that timestamp-based is a bad idea, for example, and therefore\nhaving an option to perform a checksum-based backup makes sense. rsync\nis a pretty good tool in my view and it has a very similar option-\nbecause there are trade-offs to be made. 
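The rsync-like trade-off being described — decide per file from cheap metadata, with an optional checksum ('delta') mode — could be sketched as follows (hypothetical names, assuming a prior-backup manifest entry holding size, mtime, and sha256):

```python
import hashlib
import os


def file_changed(path: str, prior: dict, delta_mode: bool) -> bool:
    '''Decide whether an incremental backup must re-copy a file.  In
    delta_mode we ignore timestamps and compare content checksums,
    trading a full read of the file for immunity to clock problems;
    otherwise a size or mtime difference is taken as changed.'''
    st = os.stat(path)
    if delta_mode:
        with open(path, 'rb') as f:
            return hashlib.sha256(f.read()).hexdigest() != prior['sha256']
    return st.st_size != prior['size'] or st.st_mtime > prior['mtime']
```

The non-delta path never reads file contents, which is exactly the scan-cost saving at issue; the delta path reads everything but cannot be fooled by a tinkered clock.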
LSN is great, if you don't\nmind reading every file of your database start-to-finish every time, but\nin a running system which hasn't suffered from clock skew or other odd\nissues (some of which we can also detect), it's pretty painful to scan\nabsolutely everything like that for an incremental.\n\nPerhaps the discussion has already moved on to having some way of our\nown to track if a given file has changed without having to scan all of\nit- if so, that's a discussion I'd be interested in. I'm not against\nother approaches here besides timestamps if there's a solid reason why\nthey're better and they're also able to avoid scanning the entire\ndatabase.\n\n> > By having a manifest for each backed up file for each backup, you also\n> > gain the ability to validate that a backup in the repository hasn't been\n> > corrupted post-backup, a feature that at least some other database\n> > backup and restore systems have (referring specifically to the big O in\n> > this particular case, but I bet others do too).\n> \n> Agreed. The manifest only lets you validate to a limited extent, but\n> that's still useful.\n\nIf you track the checksum of the file in the manifest then it's a pretty\nstrong validation that the backup repo hasn't been corrupted between the\nbackup and the restore. Of course, the database could have been\ncorrupted at the source, and perhaps that's what you were getting at\nwith your 'limited extent' but that isn't what I was referring to.\n\nClaiming that the backup has been 'validated' by only looking at file\nsizes certainly wouldn't be acceptable. 
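A post-backup validation pass of the kind described might look like this (sketch; assumes a manifest mapping each path to its recorded size and SHA-256, which is not any agreed-on format):

```python
import hashlib
import os


def verify_backup(manifest: dict, backup_dir: str) -> list:
    '''Return a list of (path, problem) tuples.  The size comparison is a
    cheap first check, but only the checksum comparison actually detects
    post-backup corruption of the repository.'''
    problems = []
    for rel_path, meta in manifest.items():
        full = os.path.join(backup_dir, rel_path)
        if not os.path.exists(full):
            problems.append((rel_path, 'missing'))
            continue
        if os.path.getsize(full) != meta['size']:
            problems.append((rel_path, 'size mismatch'))
            continue
        with open(full, 'rb') as f:
            if hashlib.sha256(f.read()).hexdigest() != meta['sha256']:
                problems.append((rel_path, 'checksum mismatch'))
    return problems
```

A single flipped bit leaves the size untouched, so it only shows up in the checksum pass.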
I can't imagine you were\nsuggesting that as you're certainly capable of realizing that, but I got\nthe feeling you weren't agreeing that having the checksum of the file\nmade sense to include in the manifest, so I feel like I'm missing\nsomething here.\n\n> > Having a system of keeping track of which backups are full and which are\n> > differential in an overall system also gives you the ability to do\n> > things like expiration in a sensible way, including handling WAL\n> > expiration.\n> \n> True, but I'm not sure that functionality belongs in core. It\n> certainly needs to be possible for out-of-core code to do this part of\n> the work if desired, because people want to integrate with enterprise\n> backup systems, and we can't come in and say, well, you back up\n> everything else using Netbackup or Tivoli, but for PostgreSQL you have\n> to use pg_backrest. I mean, maybe you can win that argument, but I\n> know I can't.\n\nI'm pretty baffled by this argument, particularly in this context. We\nalready have tooling around trying to manage WAL archives in core- see\npg_archivecleanup. Further, we're talking about pg_basebackup here, not\nabout Netbackup or Tivoli, and the results of a pg_basebackup (that is,\na set of tar files, or a data directory) could happily be backed up\nusing whatever Enterprise tool folks want to use- in much the same way\nthat a pgbackrest repo is also able to be backed up using whatever\nEnterprise tools someone wishes to use. We designed it quite carefully\nto work with exactly that use-case, so the distinction here is quite\nlost on me. 
Perhaps you could clarify what use-case these changes to\npg_basebackup solve, when working with a Netbackup or Tivoli system,\nthat pgbackrest doesn't, since you bring it up here?\n\n> > I'd like to clarify that while I would like to have an easier way to\n> > parallelize backups, that's a relatively minor complaint- the much\n> > bigger issue that I have with this feature is that trying to address\n> > everything correctly while having only the amount of information that\n> > could be passed on the command-line about the prior full/incremental is\n> > going to be extremely difficult, complicated, and likely to lead to\n> > subtle bugs in the actual code, and probably less than subtle bugs in\n> > how users end up using it, since they'll have to implement the\n> > expiration and tracking of information between backups themselves\n> > (unless something's changed in that part during this discussion- I admit\n> > that I've not read every email in this thread).\n> \n> Well, the evidence seems to show that you are right, at least to some\n> extent. I consider it a positive good if the client needs to give the\n> server only a limited amount of information. After all, you could\n> always take an incremental backup by shipping every byte of the\n> previous backup to the server, having it compare everything to the\n> current contents, and having it then send you back the stuff that is\n> new or different. But that would be dumb, because most of the point of\n> an incremental backup is to save on sending lots of data over the\n> network unnecessarily. Now, it seems that I took that goal to an\n> unhealthy extreme, because as we've now realized, sending only an LSN\n> and nothing else isn't enough to get a correct backup. So we need to\n> send more, and it doesn't have to be the absolutely most\n> stripped-down, bear-bones version of what could be sent. 
But it should\n> be fairly minimal, I think; that's kinda the point of the feature.\n\nRight- much of the point of an incremental backup feature is to try and\nminimize the amount of work that's done while still getting a good\nbackup. I don't agree that we should focus solely on network bandwidth\nas there are also trade-offs to be made around disk bandwidth to\nconsider, see above discussion regarding timestamps vs. checksum'ing\nevery file.\n\nAs for if we should be sending more to the server, or asking the server\nto send more to us, I don't really have a good feel for what's \"best\".\nAt least one implementation I'm familiar with builds a manifest on the\nPG server side and then compares the results of that to the manifest\nstored with the backup (where that comparison is actually done is on\nwhatever system the \"backup\" was started from, typically a backup\nserver). Perhaps there's an argument for sending the manifest from the\nbackup repository to PostgreSQL for it to then compare against the data\ndirectory but I'm not really sure how it could possibly do that more\nefficiently and that's moving work to the PG server that it doesn't\nreally need to do.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 16 Sep 2019 13:10:50 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Sep 16, 2019 at 9:30 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > Isn't some operations where at the end we directly call heap_sync\n> > > without writing WAL will have a similar problem as well?\n> >\n> > Maybe. Can you give an example?\n> \n> Looking through the code, I found two cases where we do this. One is\n> a bulk insert operation with wal_level = minimal, and the other is\n> CLUSTER or VACUUM FULL with wal_level = minimal. In both of these\n> cases we are generating new blocks whose LSNs will be 0/0. So, I think\n> we need a rule that if the server is asked to back up all blocks in a\n> file with LSNs > some threshold LSN, it must also include any blocks\n> whose LSN is 0/0. Those blocks are either uninitialized or are\n> populated without WAL logging, so they always need to be copied.\n\nI'm not sure I see a way around it but this seems pretty unfortunate-\nevery single incremental backup will have all of those included even\nthough the full backup likely also does (I say likely since someone\ncould do a full backup, set the WAL to minimal, load a bunch of data,\nand then restart back to a WAL level where we can do a new backup, and\nthen do an incremental, so we don't *know* that the full includes those\nblocks unless we also track a block-level checksum or similar). Then\nagain, doing these kinds of server bounces to change the WAL level\naround is, hopefully, relatively rare..\n\n> Outside of unlogged and temporary tables, I don't know of any case\n> where make a critical modification to an already-existing block\n> without bumping the LSN. I hope there is no such case.\n\nI believe we all do. :)\n\nThanks,\n\nStephen",
"msg_date": "Mon, 16 Sep 2019 13:39:33 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "On Mon, Sep 16, 2019 at 1:10 PM Stephen Frost <sfrost@snowman.net> wrote:\n> I disagree with this on a couple of levels. The first is pretty simple-\n> we don't have all of the information. The user may have some reason to\n> believe that timestamp-based is a bad idea, for example, and therefore\n> having an option to perform a checksum-based backup makes sense. rsync\n> is a pretty good tool in my view and it has a very similar option-\n> because there are trade-offs to be made. LSN is great, if you don't\n> mind reading every file of your database start-to-finish every time, but\n> in a running system which hasn't suffered from clock skew or other odd\n> issues (some of which we can also detect), it's pretty painful to scan\n> absolutely everything like that for an incremental.\n\nThere's a separate thread on using WAL-scanning to avoid having to\nscan all the data every time. I pointed it out to you early in this\nthread, too.\n\n> If you track the checksum of the file in the manifest then it's a pretty\n> strong validation that the backup repo hasn't been corrupted between the\n> backup and the restore. Of course, the database could have been\n> corrupted at the source, and perhaps that's what you were getting at\n> with your 'limited extent' but that isn't what I was referring to.\n\nYeah, that all seems fair. Without the checksum, you can only validate\nthat you have the right files and that they are the right sizes, which\nis not bad, but the checksums certainly make it stronger. But,\nwouldn't having to checksum all of the files add significantly to the\ncost of taking the backup? If so, I can imagine that some people might\nwant to pay that cost but others might not. If it's basically free to\nchecksum the data while we have it in memory anyway, then I guess\nthere's little to be lost.\n\n> I'm pretty baffled by this argument, particularly in this context. 
We\n> already have tooling around trying to manage WAL archives in core- see\n> pg_archivecleanup. Further, we're talking about pg_basebackup here, not\n> about Netbackup or Tivoli, and the results of a pg_basebackup (that is,\n> a set of tar files, or a data directory) could happily be backed up\n> using whatever Enterprise tool folks want to use- in much the same way\n> that a pgbackrest repo is also able to be backed up using whatever\n> Enterprise tools someone wishes to use. We designed it quite carefully\n> to work with exactly that use-case, so the distinction here is quite\n> lost on me. Perhaps you could clarify what use-case these changes to\n> pg_basebackup solve, when working with a Netbackup or Tivoli system,\n> that pgbackrest doesn't, since you bring it up here?\n\nI'm not an expert on any of those systems, but I doubt that\neverybody's OK with backing everything up to a pgbackrest repository\nand then separately backing up that repository to some other system.\nThat sounds like a pretty large storage cost.\n\n> As for if we should be sending more to the server, or asking the server\n> to send more to us, I don't really have a good feel for what's \"best\".\n> At least one implementation I'm familiar with builds a manifest on the\n> PG server side and then compares the results of that to the manifest\n> stored with the backup (where that comparison is actually done is on\n> whatever system the \"backup\" was started from, typically a backup\n> server). Perhaps there's an argument for sending the manifest from the\n> backup repository to PostgreSQL for it to then compare against the data\n> directory but I'm not really sure how it could possibly do that more\n> efficiently and that's moving work to the PG server that it doesn't\n> really need to do.\n\nI agree with all that, but... if the server builds a manifest on the\nPG server that is to be compared with the backup's manifest, the one\nthe PG server builds can't really include checksums, I think. 
To get\nthe checksums, it would have to read the entire cluster while building\nthe manifest, which sounds insane. Presumably it would have to build a\nchecksum-free version of the manifest, and then the client could\nchecksum the files as they're streamed down and write out a revised\nmanifest that adds the checksums.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 16 Sep 2019 15:02:41 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Sep 16, 2019 at 1:10 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > I disagree with this on a couple of levels. The first is pretty simple-\n> > we don't have all of the information. The user may have some reason to\n> > believe that timestamp-based is a bad idea, for example, and therefore\n> > having an option to perform a checksum-based backup makes sense. rsync\n> > is a pretty good tool in my view and it has a very similar option-\n> > because there are trade-offs to be made. LSN is great, if you don't\n> > mind reading every file of your database start-to-finish every time, but\n> > in a running system which hasn't suffered from clock skew or other odd\n> > issues (some of which we can also detect), it's pretty painful to scan\n> > absolutely everything like that for an incremental.\n> \n> There's a separate thread on using WAL-scanning to avoid having to\n> scan all the data every time. I pointed it out to you early in this\n> thread, too.\n\nAs discussed nearby, not everything that needs to be included in the\nbackup is actually going to be in the WAL though, right? How would that\never be able to handle the case where someone starts the server under\nwal_level = logical, takes a full backup, then restarts with wal_level =\nminimal, writes out a bunch of new data, and then restarts back to\nwal_level = logical and takes an incremental?\n\nHow would we even detect that such a thing happened?\n\n> > If you track the checksum of the file in the manifest then it's a pretty\n> > strong validation that the backup repo hasn't been corrupted between the\n> > backup and the restore. Of course, the database could have been\n> > corrupted at the source, and perhaps that's what you were getting at\n> > with your 'limited extent' but that isn't what I was referring to.\n> \n> Yeah, that all seems fair. 
Without the checksum, you can only validate\n> that you have the right files and that they are the right sizes, which\n> is not bad, but the checksums certainly make it stronger. But,\n> wouldn't having to checksum all of the files add significantly to the\n> cost of taking the backup? If so, I can imagine that some people might\n> want to pay that cost but others might not. If it's basically free to\n> checksum the data while we have it in memory anyway, then I guess\n> there's little to be lost.\n\nOn larger systems, so many of the files are 1GB in size that checking\nthe file size is quite close to meaningless. Yes, having to checksum\nall of the files definitely adds to the cost of taking the backup, but\nto avoid it we need strong assurances that a given file hasn't been\nchanged since our last full backup. WAL, today at least, isn't quite\nthat, and timestamps can possibly be fooled with, so if you'd like to be\nparticularly careful, there doesn't seem to be a lot of alternatives.\n\n> > I'm pretty baffled by this argument, particularly in this context. We\n> > already have tooling around trying to manage WAL archives in core- see\n> > pg_archivecleanup. Further, we're talking about pg_basebackup here, not\n> > about Netbackup or Tivoli, and the results of a pg_basebackup (that is,\n> > a set of tar files, or a data directory) could happily be backed up\n> > using whatever Enterprise tool folks want to use- in much the same way\n> > that a pgbackrest repo is also able to be backed up using whatever\n> > Enterprise tools someone wishes to use. We designed it quite carefully\n> > to work with exactly that use-case, so the distinction here is quite\n> > lost on me. 
Perhaps you could clarify what use-case these changes to\n> > pg_basebackup solve, when working with a Netbackup or Tivoli system,\n> > that pgbackrest doesn't, since you bring it up here?\n> \n> I'm not an expert on any of those systems, but I doubt that\n> everybody's OK with backing everything up to a pgbackrest repository\n> and then separately backing up that repository to some other system.\n> That sounds like a pretty large storage cost.\n\nI'm not asking you to be an expert on those systems, just to help me\nunderstand the statements you're making. How is backing up to a\npgbackrest repo different than running a pg_basebackup in the context of\nusing some other Enterprise backup system? In both cases, you'll have a\nfull copy of the backup (presumably compressed) somewhere out on a disk\nor filesystem which is then backed up by the Enterprise tool.\n\n> > As for if we should be sending more to the server, or asking the server\n> > to send more to us, I don't really have a good feel for what's \"best\".\n> > At least one implementation I'm familiar with builds a manifest on the\n> > PG server side and then compares the results of that to the manifest\n> > stored with the backup (where that comparison is actually done is on\n> > whatever system the \"backup\" was started from, typically a backup\n> > server). Perhaps there's an argument for sending the manifest from the\n> > backup repository to PostgreSQL for it to then compare against the data\n> > directory but I'm not really sure how it could possibly do that more\n> > efficiently and that's moving work to the PG server that it doesn't\n> > really need to do.\n> \n> I agree with all that, but... if the server builds a manifest on the\n> PG server that is to be compared with the backup's manifest, the one\n> the PG server builds can't really include checksums, I think. To get\n> the checksums, it would have to read the entire cluster while building\n> the manifest, which sounds insane. 
Presumably it would have to build a\n> checksum-free version of the manifest, and then the client could\n> checksum the files as they're streamed down and write out a revised\n> manifest that adds the checksums.\n\nUnless files can be excluded based on some relatively strong criteria,\nthen yes, the approach would be to use checksums of the files and would\nnecessarily include all files, meaning that you'd have to read them all.\n\nThat's not great, of course, which is why there are trade-offs to be\nmade, one of which typically involves using timestamps, but doing so\nquite carefully, to perform the file exclusion. Other ideas are great\nbut it seems like WAL isn't really a great idea unless we make some\nchanges there and we, as in PG, haven't got a robust \"we know this file\nchanged as of this point\" to work from. I worry that we're putting too\nmuch faith into a system to do something independent of what it was\nactually built and designed to do, and thinking that because we could\ntrust it for X, we can trust it for Y.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 16 Sep 2019 15:38:47 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
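The manifest scheme discussed in the messages above — checksum each file while its data is already in hand, then record size and digest per file so a later comparison can detect repository corruption — can be sketched as follows. This is a hypothetical illustration in Python; the manifest layout and field names are invented here, not pg_basebackup's actual format.

```python
import hashlib
import os


def build_manifest(paths, chunk_size=8192):
    """Build a toy backup manifest mapping each file to its size and SHA-256.

    The checksum is computed incrementally, chunk by chunk, as it would be
    while streaming file contents down to a client, so each file is read
    only once."""
    manifest = {}
    for path in paths:
        digest = hashlib.sha256()
        size = 0
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
                size += len(chunk)
        manifest[path] = {"size": size, "sha256": digest.hexdigest()}
    return manifest


def verify_manifest(manifest):
    """Re-checksum files on disk against a stored manifest; return a list
    of (path, problem) tuples for anything missing or mismatched."""
    problems = []
    for path, entry in manifest.items():
        if not os.path.exists(path):
            problems.append((path, "missing"))
            continue
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        if digest.hexdigest() != entry["sha256"]:
            problems.append((path, "checksum mismatch"))
    return problems
```

As the thread notes, checksumming is close to free when the data is being read anyway, which is why folding it into the streaming path is attractive.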
{
"msg_contents": "On Mon, Sep 16, 2019 at 11:09 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Robert Haas (robertmhaas@gmail.com) wrote:\n> > On Mon, Sep 16, 2019 at 9:30 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > Isn't some operations where at the end we directly call heap_sync\n> > > > without writing WAL will have a similar problem as well?\n> > >\n> > > Maybe. Can you give an example?\n> >\n> > Looking through the code, I found two cases where we do this. One is\n> > a bulk insert operation with wal_level = minimal, and the other is\n> > CLUSTER or VACUUM FULL with wal_level = minimal. In both of these\n> > cases we are generating new blocks whose LSNs will be 0/0. So, I think\n> > we need a rule that if the server is asked to back up all blocks in a\n> > file with LSNs > some threshold LSN, it must also include any blocks\n> > whose LSN is 0/0. Those blocks are either uninitialized or are\n> > populated without WAL logging, so they always need to be copied.\n>\n> I'm not sure I see a way around it but this seems pretty unfortunate-\n> every single incremental backup will have all of those included even\n> though the full backup likely also does\n>\n\nYeah, this is quite unfortunate. One more thing to note is that the\nsame is true for other operation like 'create index' (ex. nbtree\nbypasses buffer manager while creating the index, doesn't write wal\nfor wal_level=minimal and then syncs at the end).\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Sep 2019 14:51:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
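The rule Robert and Amit converge on above — an LSN-based incremental must copy every block whose LSN exceeds the threshold, plus every block whose LSN is 0/0, since those were populated without WAL logging — could look roughly like this. A speculative sketch, not the proposed implementation: it assumes a little-endian platform and that pd_lsn occupies the first 8 bytes of each page header (two 32-bit words, xlogid then xrecoff).

```python
import struct

BLCKSZ = 8192  # PostgreSQL's default block size


def page_lsn(page: bytes) -> int:
    """Read pd_lsn from the start of a page header, assuming little-endian
    storage of the (xlogid, xrecoff) pair."""
    xlogid, xrecoff = struct.unpack_from("<II", page, 0)
    return (xlogid << 32) | xrecoff


def blocks_to_copy(relfile: bytes, threshold_lsn: int):
    """Yield the block numbers an incremental backup must include: blocks
    changed since the threshold LSN, plus blocks whose LSN is 0/0
    (uninitialized, or written without WAL under wal_level = minimal)."""
    for blkno in range(len(relfile) // BLCKSZ):
        page = relfile[blkno * BLCKSZ:(blkno + 1) * BLCKSZ]
        lsn = page_lsn(page)
        if lsn == 0 or lsn > threshold_lsn:
            yield blkno
```

The `lsn == 0` branch is the fix being discussed: without it, blocks created by WAL-skipping bulk operations would silently be left out of every incremental.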
{
"msg_contents": "On Mon, Sep 16, 2019 at 7:00 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Sep 16, 2019 at 4:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > This seems to be a blocking problem for the LSN based design.\n>\n> Well, only the simplest version of it, I think.\n>\n> > Can we think of using creation time for file? Basically, if the file\n> > creation time is later than backup-labels \"START TIME:\", then include\n> > that file entirely. I think one big point against this is clock skew\n> > like what if somebody tinkers with the clock. And also, this can\n> > cover cases like\n> > what Jeevan has pointed but might not cover other cases which we found\n> > problematic.\n>\n> Well that would mean, for example, that if you copied the data\n> directory from one machine to another, the next \"incremental\" backup\n> would turn into a full backup. That sucks. And in other situations,\n> like resetting the clock, it could mean that you end up with a corrupt\n> backup without any real ability for PostgreSQL to detect it. I'm not\n> saying that it is impossible to create a practically useful system\n> based on file time stamps, but I really don't like it.\n>\n> > I think the operations covered by WAL flag XLR_SPECIAL_REL_UPDATE will\n> > have similar problems.\n>\n> I'm not sure quite what you mean by that. Can you elaborate? It\n> appears to me that the XLR_SPECIAL_REL_UPDATE operations are all\n> things that create files, remove files, or truncate files, and the\n> sketch in my previous email would handle the first two of those cases\n> correctly. See below for the third.\n>\n> > One related point is how do incremental backups handle the case where\n> > vacuum truncates the relation partially? Basically, with current\n> > patch/design, it doesn't appear that such information can be passed\n> > via incremental backup. 
I am not sure if this is a problem, but it\n> > would be good if we can somehow handle this.\n>\n> As to this, if you're taking a full backup of a particular file,\n> there's no problem. If you're taking a partial backup of a particular\n> file, you need to include the current length of the file and the\n> identity and contents of each modified block. Then you're fine.\n>\n\nRight, this should address that point.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Sep 2019 14:54:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
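Amit's truncation question and Robert's answer — a partial backup of a file must record the file's current length along with the identity and contents of each modified block — can be made concrete with a toy incremental format. This representation is invented for illustration and is not the on-disk format proposed in the patch:

```python
def make_incremental(relfile: bytes, changed: set, blcksz: int = 8192):
    """Toy incremental file: the file's current length plus the contents of
    each changed block. Recording the length is what captures a truncation
    by VACUUM, even when none of the surviving blocks were modified."""
    blocks = {b: relfile[b * blcksz:(b + 1) * blcksz] for b in changed}
    return {"length": len(relfile), "blocks": blocks}


def apply_incremental(prior: bytes, inc, blcksz: int = 8192) -> bytes:
    """Reconstruct the new file from the prior full copy: cut (or
    zero-extend) it to the recorded length, then overlay each changed
    block at its offset."""
    out = bytearray(prior[:inc["length"]].ljust(inc["length"], b"\x00"))
    for blkno, page in inc["blocks"].items():
        out[blkno * blcksz:(blkno + 1) * blcksz] = page
    return bytes(out)
```

With the length stored explicitly, a relation truncated from three blocks to two restores correctly even though the incremental carries no data for the removed block.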
{
"msg_contents": "On Mon, Sep 16, 2019 at 3:38 PM Stephen Frost <sfrost@snowman.net> wrote:\n> As discussed nearby, not everything that needs to be included in the\n> backup is actually going to be in the WAL though, right? How would that\n> ever be able to handle the case where someone starts the server under\n> wal_level = logical, takes a full backup, then restarts with wal_level =\n> minimal, writes out a bunch of new data, and then restarts back to\n> wal_level = logical and takes an incremental?\n\nFair point. I think the WAL-scanning approach can only work if\nwal_level > minimal. But, I also think that few people run with\nwal_level = minimal in this era where the default has been changed to\nreplica; and I think we can detect the WAL level in use while scanning\nWAL. It can only change at a checkpoint.\n\n> On larger systems, so many of the files are 1GB in size that checking\n> the file size is quite close to meaningless. Yes, having to checksum\n> all of the files definitely adds to the cost of taking the backup, but\n> to avoid it we need strong assurances that a given file hasn't been\n> changed since our last full backup. WAL, today at least, isn't quite\n> that, and timestamps can possibly be fooled with, so if you'd like to be\n> particularly careful, there doesn't seem to be a lot of alternatives.\n\nI see your points, but it feels like you're trying to talk down the\nWAL-based approach over what seem to me to be fairly manageable corner\ncases.\n\n> I'm not asking you to be an expert on those systems, just to help me\n> understand the statements you're making. How is backing up to a\n> pgbackrest repo different than running a pg_basebackup in the context of\n> using some other Enterprise backup system? 
In both cases, you'll have a\n> full copy of the backup (presumably compressed) somewhere out on a disk\n> or filesystem which is then backed up by the Enterprise tool.\n\nWell, I think that what people really want is to be able to backup\nstraight into the enterprise tool, without an intermediate step.\n\nMy basic point here is: As with practically all PostgreSQL\ndevelopment, I think we should try to expose capabilities and avoid\nmaking policy on behalf of users.\n\nI'm not objecting to the idea of having tools that can help users\nfigure out how much WAL they need to retain -- but insofar as we can\ndo it, such tools should work regardless of where that WAL is actually\nstored. I dislike the idea that PostgreSQL would provide something\nakin to a \"pgbackrest repository\" in core, or I at least I think it\nwould be important that we're careful about how much functionality\ngets tied to the presence and use of such a thing, because, at least\nbased on my experience working at EnterpriseDB, larger customers often\ndon't want to do it that way.\n\n> That's not great, of course, which is why there are trade-offs to be\n> made, one of which typically involves using timestamps, but doing so\n> quite carefully, to perform the file exclusion. Other ideas are great\n> but it seems like WAL isn't really a great idea unless we make some\n> changes there and we, as in PG, haven't got a robust \"we know this file\n> changed as of this point\" to work from. I worry that we're putting too\n> much faith into a system to do something independent of what it was\n> actually built and designed to do, and thinking that because we could\n> trust it for X, we can trust it for Y.\n\nThat seems like a considerable overreaction to me based on the\nproblems reported thus far. 
The fact is, WAL was originally intended\nfor crash recovery and has subsequently been generalized to be usable\nfor point-in-time recovery, standby servers, and logical decoding.\nIt's clearly established at this point as the canonical way that you\nknow what in the database has changed, which is the same need that we\nhave for incremental backup.\n\nAt any rate, the same criticism can be leveled - IMHO with a lot more\nvalidity - at timestamps. Last-modification timestamps are completely\noutside of our control; they are owned by the OS and various operating\nsystems can and do have varying behavior. They can go backwards when\nthings have changed; they can go forwards when things have not\nchanged. They were clearly not intended to meet this kind of\nrequirement. Even, they were intended for that purpose much less so\nthan WAL, which was actually designed for a requirement in this\ngeneral ballpark, if not this thing precisely.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 17 Sep 2019 10:55:04 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Sep 16, 2019 at 3:38 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > As discussed nearby, not everything that needs to be included in the\n> > backup is actually going to be in the WAL though, right? How would that\n> > ever be able to handle the case where someone starts the server under\n> > wal_level = logical, takes a full backup, then restarts with wal_level =\n> > minimal, writes out a bunch of new data, and then restarts back to\n> > wal_level = logical and takes an incremental?\n> \n> Fair point. I think the WAL-scanning approach can only work if\n> wal_level > minimal. But, I also think that few people run with\n> wal_level = minimal in this era where the default has been changed to\n> replica; and I think we can detect the WAL level in use while scanning\n> WAL. It can only change at a checkpoint.\n\nWe need to be sure that we can detect if the WAL level has ever been set\nto minimal between a full and an incremental and, if so, either refuse\nto run the incremental, or promote it to a full, or make it a\nchecksum-based incremental instead of trusting the WAL stream.\n\nI'm also glad that we ended up changing the default though and I do hope\nthat there's relatively few people running with minimal and that there's\neven fewer who play around with flipping it back and forth.\n\n> > On larger systems, so many of the files are 1GB in size that checking\n> > the file size is quite close to meaningless. Yes, having to checksum\n> > all of the files definitely adds to the cost of taking the backup, but\n> > to avoid it we need strong assurances that a given file hasn't been\n> > changed since our last full backup. 
WAL, today at least, isn't quite\n> > that, and timestamps can possibly be fooled with, so if you'd like to be\n> > particularly careful, there doesn't seem to be a lot of alternatives.\n> \n> I see your points, but it feels like you're trying to talk down the\n> WAL-based approach over what seem to me to be fairly manageable corner\n> cases.\n\nJust to be clear, I see your points and I like the general idea of\nfinding solutions, but it seems like the issues are likely to be pretty\ncomplex and I'm not sure that's being appreciated very well.\n\n> > I'm not asking you to be an expert on those systems, just to help me\n> > understand the statements you're making. How is backing up to a\n> > pgbackrest repo different than running a pg_basebackup in the context of\n> > using some other Enterprise backup system? In both cases, you'll have a\n> > full copy of the backup (presumably compressed) somewhere out on a disk\n> > or filesystem which is then backed up by the Enterprise tool.\n> \n> Well, I think that what people really want is to be able to backup\n> straight into the enterprise tool, without an intermediate step.\n\nOk.. I can understand that but I don't get how these changes to\npg_basebackup will help facilitate that. 
If they don't and what you're\ntalking about here is independent, then great, that clarifies things,\nbut if you're saying that these changes to pg_basebackup are to help\nwith backing up directly into those Enterprise systems then I'm just\nasking for some help in understanding how- what's the use-case here that\nwe're adding to pg_basebackup that makes it work with these Enterprise\nsystems?\n\nI'm not trying to be difficult here, I'm just trying to understand.\n\n> My basic point here is: As with practically all PostgreSQL\n> development, I think we should try to expose capabilities and avoid\n> making policy on behalf of users.\n> \n> I'm not objecting to the idea of having tools that can help users\n> figure out how much WAL they need to retain -- but insofar as we can\n> do it, such tools should work regardless of where that WAL is actually\n> stored. \n\nHow would that tool work, if it's to be able to work regardless of where\nthe WAL is actually stored..? Today, pg_archivecleanup just works\nagainst a POSIX filesystem- are you thinking that the tool would have a\npluggable storage system, so that it could work with, say, a POSIX\nfilesystem, or a CIFS mount, or a s3-like system?\n\n> I dislike the idea that PostgreSQL would provide something\n> akin to a \"pgbackrest repository\" in core, or I at least I think it\n> would be important that we're careful about how much functionality\n> gets tied to the presence and use of such a thing, because, at least\n> based on my experience working at EnterpriseDB, larger customers often\n> don't want to do it that way.\n\nThis seems largely independent of the above discussion, but since we're\ndiscussing it, I've certainly had various experiences in this area too-\nsome larger customers would like to use an s3-like store (which\npgbackrest already supports and will be supporting others going forward\nas it has a pluggable storage mechanism for the repo...), and then\nthere's customers who would like to point their 
Enterprise backup\nsolution at a directory on disk to back it up (which pgbackrest also\nsupports, as mentioned previously), and lastly there's customers who\nreally want to just backup the PG data directory and they'd like it to\n\"just work\", thank you, and no they don't have any thought or concern\nabout how to handle WAL, but surely it can't be that important, can it?\n\nThe last is tongue-in-cheek and I'm half-kidding there, but this is why\nI was trying to understand the comments above about what the use-case is\nhere that we're trying to solve for that answers the call for the\nEnterprise software crowd, and ideally what distinguishes that from\npgbackrest, but just the clear cut \"this is what this change will do to\nmake pg_basebackup work for Enterprise customers\" would be great, or\neven a \"well, pg_basebackup today works for them because it does X and\nit'll continue to be able to do X even after this change.\"\n\nI'll take a wild shot in the dark to try to help move us through this-\nis it that pg_basebackup can stream out to stdout in some cases..?\nThough that's quite limited since it means you can't have additional\ntablespaces and you can't stream the WAL, and how would that work with\nthe manifest idea that's being discussed..? If there's a directory\nthat's got manifest files in it for each backup, so we have the file\nsizes for them, those would need to be accessible when we go to do the\nincremental backup and couldn't be stored off somewhere else, I wouldn't\nthink..\n\n> > That's not great, of course, which is why there are trade-offs to be\n> > made, one of which typically involves using timestamps, but doing so\n> > quite carefully, to perform the file exclusion. Other ideas are great\n> > but it seems like WAL isn't really a great idea unless we make some\n> > changes there and we, as in PG, haven't got a robust \"we know this file\n> > changed as of this point\" to work from. 
I worry that we're putting too\n> > much faith into a system to do something independent of what it was\n> > actually built and designed to do, and thinking that because we could\n> > trust it for X, we can trust it for Y.\n> \n> That seems like a considerable overreaction to me based on the\n> problems reported thus far. The fact is, WAL was originally intended\n> for crash recovery and has subsequently been generalized to be usable\n> for point-in-time recovery, standby servers, and logical decoding.\n> It's clearly established at this point as the canonical way that you\n> know what in the database has changed, which is the same need that we\n> have for incremental backup.\n\nProvided the WAL level is at the level that you need it to be that will\nbe true for things which are actually supported with PITR, replication\nto standby servers, et al. I can see how it might come across as an\noverreaction but this strikes me as a pretty glaring issue and I worry\nthat if it was overlooked until now that there'll be other more subtle\nissues, and backups are just plain complicated to get right, just to\nbegin with already, something that I don't think people appreciate until\nthey've been dealing with them for quite a while.\n\nNot that this would be the first time we've had issues in this area, and\nwe'd likely work through them over time, but I'm sure we'd all prefer to\nget it as close to right as possible the first time around, and that's\ngoing to require some pretty in depth review.\n\n> At any rate, the same criticism can be leveled - IMHO with a lot more\n> validity - at timestamps. Last-modification timestamps are completely\n> outside of our control; they are owned by the OS and various operating\n> systems can and do have varying behavior. They can go backwards when\n> things have changed; they can go forwards when things have not\n> changed. They were clearly not intended to meet this kind of\n> requirement. 
Even, they were intended for that purpose much less so\n> than WAL, which was actually designed for a requirement in this\n> general ballpark, if not this thing precisely.\n\nWhile I understand that timestamps may be used for a lot of things and\nthat the time on a system could go forward or backward, the actual\nrequirement is:\n\n- If the file was modified after the backup was done, the timestamp (or\n the size) needs to be different. Doesn't actually matter if it's\n forwards, or backwards, different is all that's needed. The timestamp\n also needs to be before the backup started for it to be considered an\n option to skip it.\n\nIs it possible for that to be fooled? Yes, of course, but it isn't as\neasily fooled as your typical \"just copy files newer than X\" issue that\nother tools have, at least, if you're keeping a manifest of all of the\nfiles, et al, as discussed earlier.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 17 Sep 2019 12:09:08 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
},
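Stephen's skip criterion above — size and timestamp must both match the prior manifest, and the timestamp must predate the start of that backup — is simple to state in code. A hypothetical sketch; the manifest entry layout is invented here:

```python
import os


def can_skip(path: str, manifest_entry: dict, backup_start_time: float) -> bool:
    """A file may be excluded from an incremental only if its size and
    mtime both match the prior backup's manifest AND the mtime predates
    the start of that backup. Any difference, forwards or backwards,
    forces a re-copy; a mtime at or after the backup start is never
    trusted, since the file may have changed mid-backup."""
    st = os.stat(path)
    return (st.st_size == manifest_entry["size"]
            and st.st_mtime == manifest_entry["mtime"]
            and st.st_mtime < backup_start_time)
```

The final conjunct is the careful part: it closes the window where a file is modified while the prior backup was running yet keeps a timestamp that would otherwise look unchanged.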
{
"msg_contents": "On Tue, Sep 17, 2019 at 12:09 PM Stephen Frost <sfrost@snowman.net> wrote:\n> We need to be sure that we can detect if the WAL level has ever been set\n> to minimal between a full and an incremental and, if so, either refuse\n> to run the incremental, or promote it to a full, or make it a\n> checksum-based incremental instead of trusting the WAL stream.\n\nSure. What about checksum collisions?\n\n> Just to be clear, I see your points and I like the general idea of\n> finding solutions, but it seems like the issues are likely to be pretty\n> complex and I'm not sure that's being appreciated very well.\n\nDefinitely possible, but it's more helpful if you can point out the\nactual issues.\n\n> Ok.. I can understand that but I don't get how these changes to\n> pg_basebackup will help facilitate that. If they don't and what you're\n> talking about here is independent, then great, that clarifies things,\n> but if you're saying that these changes to pg_basebackup are to help\n> with backing up directly into those Enterprise systems then I'm just\n> asking for some help in understanding how- what's the use-case here that\n> we're adding to pg_basebackup that makes it work with these Enterprise\n> systems?\n>\n> I'm not trying to be difficult here, I'm just trying to understand.\n\nMan, I feel like we're totally drifting off into the weeds here. I'm\nnot arguing that these changes to pg_basebackup will help enterprise\nusers except insofar as those users want incremental backup. All of\nthis discussion started with this comment from you:\n\n\"Having a system of keeping track of which backups are full and which\nare differential in an overall system also gives you the ability to do\nthings like expiration in a sensible way, including handling WAL\nexpiration.\"\n\nAll I was doing was saying that for an enterprise user, the overall\nsystem might be something entirely outside of our control, like\nNetBackup or Tivoli. 
Therefore, whatever functionality we provide to\ndo that kind of thing should be able to be used in such contexts. That\nhardly seems like a controversial proposition.\n\n> How would that tool work, if it's to be able to work regardless of where\n> the WAL is actually stored..? Today, pg_archivecleanup just works\n> against a POSIX filesystem- are you thinking that the tool would have a\n> pluggable storage system, so that it could work with, say, a POSIX\n> filesystem, or a CIFS mount, or a s3-like system?\n\nAgain, I was making a general statement about design goals -- \"we\nshould try to work nicely with enterprise backup products\" -- not\nproposing a specific design for a specific thing. I don't think the\nidea of some pluggability in that area is a bad one, but it's not even\nslightly what this thread is about.\n\n> Provided the WAL level is at the level that you need it to be that will\n> be true for things which are actually supported with PITR, replication\n> to standby servers, et al. I can see how it might come across as an\n> overreaction but this strikes me as a pretty glaring issue and I worry\n> that if it was overlooked until now that there'll be other more subtle\n> issues, and backups are just plain complicated to get right, just to\n> begin with already, something that I don't think people appreciate until\n> they've been dealing with them for quite a while.\n\nPermit me to be unpersuaded. If it was such a glaring issue, and if\nexperience is the key to spotting such issues, then why didn't YOU\nspot it?\n\nI'm not arguing that this stuff isn't hard. It is. Nor am I arguing\nthat I didn't screw up. I did. But designs need to be accepted or\nrejected based on facts, not FUD. 
You've raised some good technical\npoints and if you've got more concerns, I'd like to hear them, but I\ndon't think arguing vaguely that a certain approach will probably run\ninto trouble gets us anywhere.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 17 Sep 2019 12:58:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: block-level incremental backup"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Tue, Sep 17, 2019 at 12:09 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > We need to be sure that we can detect if the WAL level has ever been set\n> > to minimal between a full and an incremental and, if so, either refuse\n> > to run the incremental, or promote it to a full, or make it a\n> > checksum-based incremental instead of trusting the WAL stream.\n> \n> Sure. What about checksum collisions?\n\nCertainly possible, of course, but a sha256 of each file is at least\nsomewhat better than, say, our page-level checksums. I do agree that\nhaving the option to just say \"promote it to a full\", or \"do a\nbyte-by-byte comparison against the prior backed up file\" would be\nuseful for those who are concerned about sha256 collision probabilities.\n\nHaving a cross-check of \"does this X% of files that we decided not to\nback up due to whatever really still match what we think is in the\nbackup?\" is definitely a valuable feature and one which I'd hope we get\nto at some point.\n\n> > Ok.. I can understand that but I don't get how these changes to\n> > pg_basebackup will help facilitate that. If they don't and what you're\n> > talking about here is independent, then great, that clarifies things,\n> > but if you're saying that these changes to pg_basebackup are to help\n> > with backing up directly into those Enterprise systems then I'm just\n> > asking for some help in understanding how- what's the use-case here that\n> > we're adding to pg_basebackup that makes it work with these Enterprise\n> > systems?\n> >\n> > I'm not trying to be difficult here, I'm just trying to understand.\n> \n> Man, I feel like we're totally drifting off into the weeds here. I'm\n> not arguing that these changes to pg_basebackup will help enterprise\n> users except insofar as those users want incremental backup. 
All of\n> this discussion started with this comment from you:\n> \n> \"Having a system of keeping track of which backups are full and which\n> are differential in an overall system also gives you the ability to do\n> things like expiration in a sensible way, including handling WAL\n> expiration.\"\n> \n> All I was doing was saying that for an enterprise user, the overall\n> system might be something entirely outside of our control, like\n> NetBackup or Tivoli. Therefore, whatever functionality we provide to\n> do that kind of thing should be able to be used in such contexts. That\n> hardly seems like a controversial proposition.\n\nAnd all I was trying to understand was how what pg_basebackup does in\nthis context is really different from what can be done with pgbackrest,\nsince you brought it up:\n\n\"True, but I'm not sure that functionality belongs in core. It\ncertainly needs to be possible for out-of-core code to do this part of\nthe work if desired, because people want to integrate with enterprise\nbackup systems, and we can't come in and say, well, you back up\neverything else using Netbackup or Tivoli, but for PostgreSQL you have\nto use pg_backrest. I mean, maybe you can win that argument, but I\nknow I can't.\"\n\nWhat it sounds like you're argueing here is that what pg_basebackup\n\"has\" in it is that it specifically doesn't include any kind of\nexpiration management of any kind, and that's somehow helpful to people\nwho want to use Enterprise backup solutions. Maybe that's what you were\ngetting at, in which case, I'm sorry for misunderstanding and dragging\nit out, and thanks for helping me understand.\n\n> > How would that tool work, if it's to be able to work regardless of where\n> > the WAL is actually stored..? 
Today, pg_archivecleanup just works\n> > against a POSIX filesystem- are you thinking that the tool would have a\n> > pluggable storage system, so that it could work with, say, a POSIX\n> > filesystem, or a CIFS mount, or a s3-like system?\n> \n> Again, I was making a general statement about design goals -- \"we\n> should try to work nicely with enterprise backup products\" -- not\n> proposing a specific design for a specific thing. I don't think the\n> idea of some pluggability in that area is a bad one, but it's not even\n> slightly what this thread is about.\n\nWell, I agree with you, as I said up-thread, that this seemed to be\ngoing in a different and perhaps not entirely relevant direction.\n\n> > Provided the WAL level is at the level that you need it to be that will\n> > be true for things which are actually supported with PITR, replication\n> > to standby servers, et al. I can see how it might come across as an\n> > overreaction but this strikes me as a pretty glaring issue and I worry\n> > that if it was overlooked until now that there'll be other more subtle\n> > issues, and backups are just plain complicated to get right, just to\n> > begin with already, something that I don't think people appreciate until\n> > they've been dealing with them for quite a while.\n> \n> Permit me to be unpersuaded. If it was such a glaring issue, and if\n> experience is the key to spotting such issues, then why didn't YOU\n> spot it?\n\nI'm not designing the feature..? Sure, I agreed earlier with the\ngeneral idea that we might be able to use WAL scanning and/or the LSN to\nfigure out if a page had changed, but the next step would have been, I\nwould have thought anyway, for someone to go do the analysis that has\nonly recently been started to look at the places when we write and the\ncases where we write the WAL and actually build up confidence that this\napproach isn't missing anything. 
Instead, we seem to have come a long\nway in the development of this without having done that, and that does\nshake my confidence in this effort.\n\n> I'm not arguing that this stuff isn't hard. It is. Nor am I arguing\n> that I didn't screw up. I did. But designs need to be accepted or\n> rejected based on facts, not FUD. You've raised some good technical\n> points and if you've got more concerns, I'd like to hear them, but I\n> don't think arguing vaguely that a certain approach will probably run\n> into trouble gets us anywhere.\n\nThis just gets back to what I was saying earlier. It seems like we're\npresuming this is going to 'just work' because, say, replication works\ngreat, or crash recovery works great, and those are based on WAL. I'm\nstill hopeful that we can do something based on WAL or LSN here, but it\nneeds a careful review of when we are, and when we aren't, writing out\nWAL for basically everything we do, an effort that I'm glad to see might\nbe starting to happen, but a quick \"oh, this is why in this one case\nwith this one thing, and we're all good now\" doesn't instill confidence\nin me, at least.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 17 Sep 2019 13:48:00 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: block-level incremental backup"
}
] |
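The checksum-based cross-check discussed in the thread above — hashing each file with SHA-256 and comparing against what the last backup recorded — can be sketched in a few lines. This is a minimal illustration of the idea only; `needs_backup` and the manifest format here are hypothetical, not pg_basebackup's or pgBackRest's actual mechanism:

```python
import hashlib

def file_sha256(path, chunk_size=65536):
    """Stream a file through SHA-256 so large relation files need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def needs_backup(path, manifest):
    """Return (changed, digest): include the file in the incremental backup
    when its digest differs from the one recorded at the last full backup."""
    digest = file_sha256(path)
    return manifest.get(path) != digest, digest
```

As the thread notes, a digest comparison trades a tiny collision probability for speed; a byte-by-byte comparison (or promoting to a full backup) removes even that residual risk at higher I/O cost.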
[
{
"msg_contents": "I've lost count of the number of gripes I've seen where somebody\ntried to write something like \"SELECT TIMESTAMP something\", modeling\nthis on what you can do if the something is a literal constant, but\nit didn't work because they were working with client infrastructure\nthat put a $n parameter symbol there instead.\n\n(I suspect that the last couple of doc comments that came through\ntoday boil down to this.)\n\nIt occurred to me that maybe we should just let this case work,\ninstead of insisting that it not work. The main stumbling block\nto that would be if substituting PARAM for Sconst in the grammar\nleads to ambiguities, but a quick test says that bison doesn't\nsee any. I did this:\n\n c_expr: columnref { $$ = $1; }\n | AexprConst { $$ = $1; }\n+ | func_name PARAM { ... }\n+ | func_name '(' func_arg_list opt_sort_clause ')' PARAM { ... }\n+ | ConstTypename PARAM { ... }\n+ | ConstInterval PARAM opt_interval { ... }\n+ | ConstInterval '(' Iconst ')' PARAM { ... }\n | PARAM opt_indirection\n {\n ParamRef *p = makeNode(ParamRef);\n p->number = $1;\n\n(where those correspond to all the AexprConst productions that allow a\ntype name of some form before Sconst), and bison is happy. I didn't\nbother to write the code to convert these into TypeCast-atop-ParamRef\nparse trees, but that seems like a pretty trivial addition.\n\nThoughts? I suppose the main hazard is that even if this doesn't\ncause ambiguities today, it might create issues down the road when\nwe wish we could support SQL20xx's latest bit of brain-damaged syntax.\n\nDocumenting it in any way that doesn't make it seem like a wart\nwould be tricky too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Apr 2019 12:28:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Possibly-crazy idea for getting rid of some user confusion"
},
{
"msg_contents": "At Tue, 09 Apr 2019 12:28:16 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <19970.1554827296@sss.pgh.pa.us>\n> I've lost count of the number of gripes I've seen where somebody\n> tried to write something like \"SELECT TIMESTAMP something\", modeling\n> this on what you can do if the something is a literal constant, but\n> it didn't work because they were working with client infrastructure\n> that put a $n parameter symbol there instead.\n> \n> (I suspect that the last couple of doc comments that came through\n> today boil down to this.)\n> \n> It occurred to me that maybe we should just let this case work,\n> instead of insisting that it not work. The main stumbling block\n> to that would be if substituting PARAM for Sconst in the grammar\n> leads to ambiguities, but a quick test says that bison doesn't\n> see any. I did this:\n> \n> c_expr: columnref { $$ = $1; }\n> | AexprConst { $$ = $1; }\n> + | func_name PARAM { ... }\n> + | func_name '(' func_arg_list opt_sort_clause ')' PARAM { ... }\n> + | ConstTypename PARAM { ... }\n> + | ConstInterval PARAM opt_interval { ... }\n> + | ConstInterval '(' Iconst ')' PARAM { ... }\n> | PARAM opt_indirection\n> {\n> ParamRef *p = makeNode(ParamRef);\n> p->number = $1;\n> \n> (where those correspond to all the AexprConst productions that allow a\n> type name of some form before Sconst), and bison is happy. I didn't\n> bother to write the code to convert these into TypeCast-atop-ParamRef\n> parse trees, but that seems like a pretty trivial addition.\n> \n> Thoughts? I suppose the main hazard is that even if this doesn't\n> cause ambiguities today, it might create issues down the road when\n> we wish we could support SQL20xx's latest bit of brain-damaged syntax.\n\nIf I understand that correctly, couldn't we move such form of\n\"constant\"s from AexprConst to c_expr? The following worked for\nme.\n\n+param_or_const: PARAM {... 
}\n+ | Sconst { $$ = makeStringConst($1, @1); }\n+ | Iconst { $$ = makeIntConst($1, @1); }\n...\nc_expr: columnref { $$ = $1; }\n | AexprConst { $$ = $1; }\n+ | ConstTypename param_or_const { ... }\n...\n- | ConstTypename Sconst { ... }\n\nAnd emits more reasonable error messages. Anyway that rules are\nthemselves warts no matter where they are placed, but no need to\nhave two distinct rules that are effectively identical.\n\n> Documenting it in any way that doesn't make it seem like a wart\n> would be tricky too.\n\nOnly invisible warts are good warts. We lost as it is already\nvisible:p\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Wed, 10 Apr 2019 10:53:17 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Possibly-crazy idea for getting rid of some user confusion"
}
] |
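The TypeCast-atop-ParamRef transformation Tom describes — treating `TIMESTAMP $1` the way `TIMESTAMP '...'` is treated, i.e. as a cast applied to the parameter — can be illustrated with a toy source-level rewrite. This is a sketch only: PostgreSQL would build the cast node inside gram.y rather than rewriting SQL text, and the keyword list below is illustrative, not the full ConstTypename set:

```python
import re

# Toy illustration: map `TYPE $n` to the equivalent `CAST($n AS type)` form.
TYPED_PARAM = re.compile(
    r"\b(TIMESTAMP|DATE|TIME|INTERVAL|NUMERIC)\s+(\$\d+)", re.IGNORECASE
)

def rewrite_typed_params(sql):
    """Rewrite a typed-parameter prefix into an explicit cast of the parameter."""
    return TYPED_PARAM.sub(
        lambda m: f"CAST({m.group(2)} AS {m.group(1).lower()})", sql
    )
```

This is exactly the workaround users can apply today on the client side: writing `CAST($1 AS timestamp)` already works, and the grammar change would simply make the shorthand spelling equivalent to it.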
[
{
"msg_contents": "Hi, hackers!When we create/alter object we already add dependency row to pg_depend (or, pg_shdepend) table for object that is depends (e.g. trigger depends on table which it created).But when I add comment on table, no one dependency row is added. Why?\n",
"msg_date": "Tue, 09 Apr 2019 22:06:10 +0300",
"msg_from": "=?utf-8?B?0JTQvNC40YLRgNC40Lkg0JLQvtGA0L7QvdC40L0=?=\n <carriingfate92@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Dependences records and comments"
},
{
"msg_contents": "=?utf-8?B?0JTQvNC40YLRgNC40Lkg0JLQvtGA0L7QvdC40L0=?= <carriingfate92@yandex.ru> writes:\n> When we create/alter object we already add dependency row to pg_depend\n> (or, pg_shdepend) table for object that is depends (e.g. trigger depends\n> on table which it created). But when I add comment on table, no one\n> dependency row is added. Why?\n\nComments aren't interesting for dependency purposes: nothing can depend\non a comment, nor does a comment depend on anything but its one owning\nobject, they aren't relevant for CASCADE/RESTRICT rules, etc. So\ntracking them in pg_depend would just bloat pg_depend for little gain.\nIt's simpler to have hard-wired logic to look for a comment and delete it\nwhen any object is deleted.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Apr 2019 00:57:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dependences records and comments"
}
] |
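Tom's point — a comment is keyed directly by its one owning object (as in pg_description), so object deletion can clean it up with hard-wired logic instead of pg_depend entries — can be modelled in miniature. The class below is a toy illustration of that design choice, not the catalog code:

```python
class Catalog:
    """Toy model of why comments need no pg_depend rows: a comment is keyed
    by its owning object, so dropping the object can drop the comment directly,
    with no dependency graph to walk."""

    def __init__(self):
        self.objects = set()
        self.comments = {}   # object name -> comment text (like pg_description)
        self.depends = {}    # dependent -> referenced object (like pg_depend)

    def comment_on(self, obj, text):
        self.comments[obj] = text

    def drop(self, obj):
        self.objects.discard(obj)
        # Hard-wired cleanup: no pg_depend row was ever needed for the comment.
        self.comments.pop(obj, None)
```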
[
{
"msg_contents": "Horiguchi-san,\n\n> -----Original Message-----\n> From: Kyotaro HORIGUCHI [mailto:horiguchi.kyotaro@lab.ntt.co.jp]\n> Sent: Tuesday, April 09, 2019 5:37 PM\n> To: hosoya.yuzuko@lab.ntt.co.jp\n> Cc: Langote_Amit_f8@lab.ntt.co.jp; thibaut.madelaine@dalibo.com; \n> imai.yoshikazu@jp.fujitsu.com; pgsql-hackers@lists.postgresql.org\n> Subject: Re: Problem with default partition pruning\n> \n> Hi.\n> \n> At Tue, 9 Apr 2019 16:41:47 +0900, \"Yuzuko Hosoya\" \n> <hosoya.yuzuko@lab.ntt.co.jp> wrote in \n> <00cf01d4eea7$afa43370$0eec9a50$@lab.ntt.co.jp>\n> > > So still it is wrong that the new code is added at the beginning \n> > > of the loop on clauses in gen_partprune_steps_internal.\n> > >\n> > > > If partqual results true and the \n> > > > clause is long, the partqual is evaluated uselessly at every recursion.\n> > > >\n> > > > Maybe we should do that when we find that the current clause \n> > > > doesn't match part attributes. Specifically just after the for \n> > > > loop \"for (i =\n> > > > 0 ; i < part_scheme->partnattrs; i++)\".\n> > >\n> > I think we should check whether WHERE clause contradicts partition \n> > constraint even when the clause matches part attributes. So I moved\n> \n> Why? If clauses contains a clause on a partition key, the clause is \n> involved in determination of whether a partition survives or not in \n> ordinary way. 
Could you show how or on what configuration (tables and\n> query) it happens that such a matching clause needs to be checked against partqual?\n> \nWe found that partition pruning didn't work as expect when we scanned a sub-partition using WHERE\nclause which contradicts the sub-partition's constraint by Thibaut tests.\nThe example discussed in this thread as follows.\n\npostgres=# \\d+ test2\n Partitioned table \"public.test2\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description \n--------+---------+-----------+----------+---------+----------+--------------+-------------\n id | integer | | | | plain | | \n val | text | | | | extended | | \nPartition key: RANGE (id)\nPartitions: test2_0_20 FOR VALUES FROM (0) TO (20), PARTITIONED,\n test2_20_plus_def DEFAULT\n\npostgres=# \\d+ test2_0_20\n Partitioned table \"public.test2_0_20\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description \n--------+---------+-----------+----------+---------+----------+--------------+-------------\n id | integer | | | | plain | | \n val | text | | | | extended | | \nPartition of: test2 FOR VALUES FROM (0) TO (20) Partition constraint: ((id IS NOT NULL) AND (id >=\n0) AND (id < 20)) Partition key: RANGE (id)\nPartitions: test2_0_10 FOR VALUES FROM (0) TO (10),\n test2_10_20_def DEFAULT\n\npostgres=# explain (costs off) select * from test2 where id=5 or id=20;\n QUERY PLAN \n-----------------------------------------\n Append\n -> Seq Scan on test2_0_10\n Filter: ((id = 5) OR (id = 20))\n -> Seq Scan on test2_10_20_def\n Filter: ((id = 5) OR (id = 20))\n -> Seq Scan on test2_20_plus_def\n Filter: ((id = 5) OR (id = 20))\n(7 rows)\n\npostgres=# explain (costs off) select * from test2_0_20 where id=25;\n QUERY PLAN \n-----------------------------\n Seq Scan on test2_10_20_def\n Filter: (id = 25)\n(2 rows)\n\nSo I think we have to check if WHERE clause contradicts sub-partition's constraint regardless of\nwhether the clause 
matches part attributes or not.\n\n> The \"if (partconstr)\" block uselessly runs for every clause in the clause tree other than\nBoolExpr.\n> If we want do that, isn't just doing predicate_refuted_by(partconstr, \n> clauses, false) sufficient before looping over clauses?\nYes, I tried doing that in the original patch.\n\n> \n> \n> > \"if (partqual)\" block to the beginning of the loop you mentioned.\n> >\n> > I'm attaching the latest version. Could you please check it again?\n> \n> regards.\n> \n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\nBest regards,\nYuzuko Hosoya\n\n\n\n\n",
"msg_date": "Wed, 10 Apr 2019 11:24:11 +0900",
"msg_from": "\"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi. (The thread seems broken for Thunderbird)\n\nAt Wed, 10 Apr 2019 11:24:11 +0900, \"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp> wrote in <00df01d4ef44$7bb79370$7326ba50$@lab.ntt.co.jp>\n> > Why? If clauses contains a clause on a partition key, the clause is \n> > involved in determination of whether a partition survives or not in \n> > ordinary way. Could you show how or on what configuration (tables and\n> > query) it happens that such a matching clause needs to be checked against partqual?\n> > \n> We found that partition pruning didn't work as expect when we scanned a sub-partition using WHERE\n> clause which contradicts the sub-partition's constraint by Thibaut tests.\n> The example discussed in this thread as follows.\n> \n> postgres=# \\d+ test2\n> Partitioned table \"public.test2\"\n> Column | Type | Collation | Nullable | Default | Storage | Stats target | Description \n> --------+---------+-----------+----------+---------+----------+--------------+-------------\n> id | integer | | | | plain | | \n> val | text | | | | extended | | \n> Partition key: RANGE (id)\n> Partitions: test2_0_20 FOR VALUES FROM (0) TO (20), PARTITIONED,\n> test2_20_plus_def DEFAULT\n> \n> postgres=# \\d+ test2_0_20\n> Partitioned table \"public.test2_0_20\"\n> Column | Type | Collation | Nullable | Default | Storage | Stats target | Description \n> --------+---------+-----------+----------+---------+----------+--------------+-------------\n> id | integer | | | | plain | | \n> val | text | | | | extended | | \n> Partition of: test2 FOR VALUES FROM (0) TO (20) Partition constraint: ((id IS NOT NULL) AND (id >=\n> 0) AND (id < 20)) Partition key: RANGE (id)\n> Partitions: test2_0_10 FOR VALUES FROM (0) TO (10),\n> test2_10_20_def DEFAULT\n> \n> postgres=# explain (costs off) select * from test2 where id=5 or id=20;\n> QUERY PLAN \n> -----------------------------------------\n> Append\n> -> Seq Scan on test2_0_10\n> Filter: ((id = 5) OR (id = 20))\n> -> Seq Scan on 
test2_10_20_def\n> Filter: ((id = 5) OR (id = 20))\n> -> Seq Scan on test2_20_plus_def\n> Filter: ((id = 5) OR (id = 20))\n> (7 rows)\n\nI think this is problematic.\n\n> postgres=# explain (costs off) select * from test2_0_20 where id=25;\n> QUERY PLAN \n> -----------------------------\n> Seq Scan on test2_10_20_def\n> Filter: (id = 25)\n> (2 rows)\n> \n> So I think we have to check if WHERE clause contradicts sub-partition's constraint regardless of\n> whether the clause matches part attributes or not.\n\nIf that is the only issue here, doesn't Amit's proposal work?\n\nAnd that doesn't seem to justify rechecking key clauses to\npartquals for every leaf node in an expression tree. I thought\nthat you are trying to resolve is the issue on non-key caluses\nthat contradicts to partition constraints?\n\n> > The \"if (partconstr)\" block uselessly runs for every clause in the clause tree other than\n> BoolExpr.\n> > If we want do that, isn't just doing predicate_refuted_by(partconstr, \n> > clauses, false) sufficient before looping over clauses?\n> Yes, I tried doing that in the original patch.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Wed, 10 Apr 2019 13:05:35 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
}
] |
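The contradiction check discussed in this thread — deciding that a clause like `id = 25` can never be satisfied under the `test2_0_20` constraint `(id >= 0) AND (id < 20)` — reduces, for this simple equality-versus-range case, to an interval test. The function below is a crude stand-in for what `predicate_refuted_by()` decides in the planner, handling only this one clause shape:

```python
def refuted_by_range(value, lower, upper):
    """Return True when an equality clause `col = value` contradicts a
    range partition constraint lower <= col < upper, i.e. when the
    sub-partition (and all of its children) can be pruned outright."""
    return not (lower <= value < upper)
```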
[
{
"msg_contents": "Hi all,\n\nI was wondering if there exists either a test suite of pathological failure\ncases for postgres, or a dataset of failure scenarios. I'm not exactly sure\nwhat such a dataset would look like, possibly a bunch of snapshots of test\ndatabases when undergoing a bunch of different failure scenarios?\n\nI'm experimenting with machine learning and I had an idea to build a\nclassifier to determine if a running postgres database is having issues.\nRight now \"issues\" is very ambiguously defined, but I'm thinking of\nproblems I've encountered at work, such as resource saturation, long\nrunning transactions, lock contention, etc. I know a lot of this is already\ncovered by existing monitoring solutions, but I'm specifically interested\nto see if a ML model can learn monitoring rules on its own.\n\nIf the classifier turns out to be feasible then my hope would to be to\nexpand the ML model to have some diagnostic capabilities -- I've had\ndifficulty in the past figuring out exactly what is going wrong with\npostgres when my workplace's production environment was having database\nissues.\n\nThanks,\n\nBen Simmons\n\nHi all,I was wondering if there exists either a test suite of pathological failure cases for postgres, or a dataset of failure scenarios. I'm not exactly sure what such a dataset would look like, possibly a bunch of snapshots of test databases when undergoing a bunch of different failure scenarios?I'm experimenting with machine learning and I had an idea to build a classifier to determine if a running postgres database is having issues. Right now \"issues\" is very ambiguously defined, but I'm thinking of problems I've encountered at work, such as resource saturation, long running transactions, lock contention, etc. I know a lot of this is already covered by existing monitoring solutions, but I'm specifically interested to see if a ML model can learn monitoring rules on its own. 
If the classifier turns out to be feasible then my hope would to be to expand the ML model to have some diagnostic capabilities -- I've had difficulty in the past figuring out exactly what is going wrong with postgres when my workplace's production environment was having database issues.Thanks,Ben Simmons",
"msg_date": "Wed, 10 Apr 2019 11:41:46 -0700",
"msg_from": "Ben Simmons <simmons.a.ben@gmail.com>",
"msg_from_op": true,
"msg_subject": "Postgres \"failures\" dataset for machine learning"
}
] |
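The hand-written monitoring rules Ben mentions (resource saturation, long-running transactions, lock contention) are the baseline such an ML classifier would try to learn on its own. A toy rule-based version, with illustrative metric names and thresholds — not real pg_stat column names:

```python
def is_unhealthy(metrics):
    """Hand-written baseline of the rules an ML model would try to learn.
    All keys and thresholds are illustrative placeholders."""
    return (
        metrics.get("longest_xact_seconds", 0) > 3600   # long-running transaction
        or metrics.get("waiting_on_locks", 0) > 50      # lock contention
        or metrics.get("cpu_utilization", 0.0) > 0.95   # resource saturation
    )
```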
[
{
"msg_contents": "Hi all,\n\nI understood that v11 includes predicate locking for gist indexes, as per\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3ad55863e9392bff73377911ebbf9760027ed405\n.\n\nI tried this in combination with an exclude constraint as following:\n\ndrop table if exists t;\ncreate table t(period tsrange);\nalter table t add constraint bla exclude using gist(period with &&);\n-- t1\nbegin transaction isolation level serializable;\nselect * from t where period && tsrange(now()::timestamp, now()::timestamp\n+ interval '1 hour');\ninsert into t(period) values(tsrange(now()::timestamp, now()::timestamp +\ninterval '1 hour'));\n-- t2\nbegin transaction isolation level serializable;\nselect * from t where period && tsrange(now()::timestamp, now()::timestamp\n+ interval '1 hour');\ninsert into t(period) values(tsrange(now()::timestamp, now()::timestamp +\ninterval '1 hour'));\n-- t1\ncommit;\n-- t2\nERROR: conflicting key value violates exclusion constraint \"bla\"\nDETAIL: Key (period)=([\"2019-04-10 20:59:20.6265\",\"2019-04-10\n21:59:20.6265\")) conflicts with existing key (period)=([\"2019-04-10\n20:59:13.332622\",\"2019-04-10 21:59:13.332622\")).\n\nI kinda expected/hoped that transaction t2 would get aborted by a\nserialization error, and not an exclude constraint violation. 
This makes\nthe application session bound to transaction t2 failing, as only\nserialization errors are retried.\n\nWe introduced the same kind of improvement/fix for btree indexes earlier,\nsee\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=fcff8a575198478023ada8a48e13b50f70054766.\nShould this also be applied for (exclude) constraints backed by a gist\nindex (as gist indexes now support predicate locking), or am I creating\nincorrect assumptions something here?\n\nThanks.",
"msg_date": "Wed, 10 Apr 2019 23:43:36 +0200",
"msg_from": "Peter Billen <peter.billen@gmail.com>",
"msg_from_op": true,
"msg_subject": "serializable transaction: exclude constraint violation (backed by\n GIST index) instead of ssi conflict"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 9:43 AM Peter Billen <peter.billen@gmail.com> wrote:\n> I understood that v11 includes predicate locking for gist indexes, as per https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3ad55863e9392bff73377911ebbf9760027ed405.\n>\n> I tried this in combination with an exclude constraint as following:\n>\n> drop table if exists t;\n> create table t(period tsrange);\n> alter table t add constraint bla exclude using gist(period with &&);\n> -- t1\n> begin transaction isolation level serializable;\n> select * from t where period && tsrange(now()::timestamp, now()::timestamp + interval '1 hour');\n> insert into t(period) values(tsrange(now()::timestamp, now()::timestamp + interval '1 hour'));\n> -- t2\n> begin transaction isolation level serializable;\n> select * from t where period && tsrange(now()::timestamp, now()::timestamp + interval '1 hour');\n> insert into t(period) values(tsrange(now()::timestamp, now()::timestamp + interval '1 hour'));\n> -- t1\n> commit;\n> -- t2\n> ERROR: conflicting key value violates exclusion constraint \"bla\"\n> DETAIL: Key (period)=([\"2019-04-10 20:59:20.6265\",\"2019-04-10 21:59:20.6265\")) conflicts with existing key (period)=([\"2019-04-10 20:59:13.332622\",\"2019-04-10 21:59:13.332622\")).\n>\n> I kinda expected/hoped that transaction t2 would get aborted by a serialization error, and not an exclude constraint violation. This makes the application session bound to transaction t2 failing, as only serialization errors are retried.\n>\n> We introduced the same kind of improvement/fix for btree indexes earlier, see https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=fcff8a575198478023ada8a48e13b50f70054766. 
Should this also be applied for (exclude) constraints backed by a gist index (as gist indexes now support predicate locking), or am I creating incorrect assumptions something here?\n\nHi Peter,\n\nYeah, I agree, the behaviour you are expecting is desirable and we\nshould figure out how to do that. The basic trick for btree unique\nconstraints was to figure out where the index *would* have written, to\ngive the SSI machinery a chance to object to that before raising the\nUCV. I wonder if we can use the same technique here... at first\nglance, check_exclusion_or_unique_constraint() is raising the error,\nbut is not index AM specific code, and it is somewhat removed from the\nGIST code that would do the equivalent\nCheckForSerializableConflictIn() call. I haven't looked into it\nproperly, but that certainly complicates matters somewhat... Perhaps\nthe index AM would actually need a new entrypoint that could be called\nbefore the error is raised, or perhaps there is an easier way.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Apr 2019 10:54:44 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: serializable transaction: exclude constraint violation (backed by\n GIST index) instead of ssi conflict"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 10:54 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Apr 11, 2019 at 9:43 AM Peter Billen <peter.billen@gmail.com> wrote:\n> > I kinda expected/hoped that transaction t2 would get aborted by a serialization error, and not an exclude constraint violation. This makes the application session bound to transaction t2 failing, as only serialization errors are retried.\n\n> Yeah, I agree, the behaviour you are expecting is desirable and we\n> should figure out how to do that. The basic trick for btree unique\n> constraints was to figure out where the index *would* have written, to\n> give the SSI machinery a chance to object to that before raising the\n> UCV. I wonder if we can use the same technique here... at first\n> glance, check_exclusion_or_unique_constraint() is raising the error,\n> but is not index AM specific code, and it is somewhat removed from the\n> GIST code that would do the equivalent\n> CheckForSerializableConflictIn() call. I haven't looked into it\n> properly, but that certainly complicates matters somewhat... Perhaps\n> the index AM would actually need a new entrypoint that could be called\n> before the error is raised, or perhaps there is an easier way.\n\nAdding Kevin (architect of SSI and reviewer/committer of my UCV\ninterception patch) and Shubham (author of GIST SSI support) to the CC\nlist in case they have thoughts on this.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Apr 2019 11:14:12 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: serializable transaction: exclude constraint violation (backed by\n GIST index) instead of ssi conflict"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 1:14 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Thu, Apr 11, 2019 at 10:54 AM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> > On Thu, Apr 11, 2019 at 9:43 AM Peter Billen <peter.billen@gmail.com>\n> wrote:\n> > > I kinda expected/hoped that transaction t2 would get aborted by a\n> serialization error, and not an exclude constraint violation. This makes\n> the application session bound to transaction t2 failing, as only\n> serialization errors are retried.\n>\n> > Yeah, I agree, the behaviour you are expecting is desirable and we\n> > should figure out how to do that. The basic trick for btree unique\n> > constraints was to figure out where the index *would* have written, to\n> > give the SSI machinery a chance to object to that before raising the\n> > UCV. I wonder if we can use the same technique here... at first\n> > glance, check_exclusion_or_unique_constraint() is raising the error,\n> > but is not index AM specific code, and it is somewhat removed from the\n> > GIST code that would do the equivalent\n> > CheckForSerializableConflictIn() call. I haven't looked into it\n> > properly, but that certainly complicates matters somewhat... Perhaps\n> > the index AM would actually need a new entrypoint that could be called\n> > before the error is raised, or perhaps there is an easier way.\n>\n> Adding Kevin (architect of SSI and reviewer/committer of my UCV\n> interception patch) and Shubham (author of GIST SSI support) to the CC\n> list in case they have thoughts on this.\n>\n\nThanks Thomas, appreciated!\n\nI was fiddling some more, and I am experiencing the same behavior with an\nexclude constraint backed by a btree index. 
"msg_contents": "On Thu, Apr 11, 2019 at 1:14 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Thu, Apr 11, 2019 at 10:54 AM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> > On Thu, Apr 11, 2019 at 9:43 AM Peter Billen <peter.billen@gmail.com>\n> wrote:\n> > > I kinda expected/hoped that transaction t2 would get aborted by a\n> serialization error, and not an exclude constraint violation. This makes\n> the application session bound to transaction t2 failing, as only\n> serialization errors are retried.\n>\n> > Yeah, I agree, the behaviour you are expecting is desirable and we\n> > should figure out how to do that. The basic trick for btree unique\n> > constraints was to figure out where the index *would* have written, to\n> > give the SSI machinery a chance to object to that before raising the\n> > UCV. I wonder if we can use the same technique here... at first\n> > glance, check_exclusion_or_unique_constraint() is raising the error,\n> > but is not index AM specific code, and it is somewhat removed from the\n> > GIST code that would do the equivalent\n> > CheckForSerializableConflictIn() call. I haven't looked into it\n> > properly, but that certainly complicates matters somewhat... Perhaps\n> > the index AM would actually need a new entrypoint that could be called\n> > before the error is raised, or perhaps there is an easier way.\n>\n> Adding Kevin (architect of SSI and reviewer/committer of my UCV\n> interception patch) and Shubham (author of GIST SSI support) to the CC\n> list in case they have thoughts on this.\n>\n\nThanks Thomas, appreciated!\n\nI was fiddling some more, and I am experiencing the same behavior with an\nexclude constraint backed by a btree index. I tried the following:\n\ndrop table if exists t;\ncreate table t(i int);\nalter table t add constraint bla exclude using btree(i with =);\n\n-- t1\nbegin transaction isolation level serializable;\nselect * from t where i = 1;\ninsert into t(i) values(1);\n\n-- t2\nbegin transaction isolation level serializable;\nselect * from t where i = 1;\ninsert into t(i) values(1);\n\n-- t1\ncommit;\n\n-- t2\nERROR: conflicting key value violates exclusion constraint \"bla\"\nDETAIL: Key (i)=(1) conflicts with existing key (i)=(1).\n\nLooking back, I now believe that\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=fcff8a575198478023ada8a48e13b50f70054766\nwas intended only for *unique* constraints, and not for *exclude*\nconstraints as well. This is not explicitly mentioned in the commit\nmessage, though only tests for unique constraints are added in that commit.\n\nI believe we are after multiple issues/improvements:\n\n1. Could we extend\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=fcff8a575198478023ada8a48e13b50f70054766\nby adding support for exclude constraints?\n2. Fully support gist & constraints in serializable transactions. I did not\nyet test a unique constraint backed by a gist index, which is also\ninteresting to test I assume. This test would tell us if there currently is\na status quo between btree and gist indexes.\n\nThanks.",
"msg_date": "Thu, 11 Apr 2019 18:12:13 +0200",
"msg_from": "Peter Billen <peter.billen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: serializable transaction: exclude constraint violation (backed by\n GIST index) instead of ssi conflict"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 6:12 PM Peter Billen <peter.billen@gmail.com> wrote:\n\n>\n> I believe we are after multiple issues/improvements:\n>\n> 1. Could we extend\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=fcff8a575198478023ada8a48e13b50f70054766\n> by adding support for exclude constraints?\n> 2. Fully support gist & constraints in serializable transactions. I did\n> not yet test a unique constraint backed by a gist constraint, which is also\n> interesting to test I assume. This test would tell us if there currently is\n> a status quo between btree and gist indexes.\n>\n\nRegarding the remark in (2), I forgot that a unique constraint cannot be\nbacked by a gist index, so forget the test I mentioned.",
"msg_date": "Thu, 11 Apr 2019 20:01:11 +0200",
"msg_from": "Peter Billen <peter.billen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: serializable transaction: exclude constraint violation (backed by\n GIST index) instead of ssi conflict"
},
{
"msg_contents": "On Fri, Apr 12, 2019 at 6:01 AM Peter Billen <peter.billen@gmail.com> wrote:\n> On Thu, Apr 11, 2019 at 6:12 PM Peter Billen <peter.billen@gmail.com> wrote:\n>> I believe we are after multiple issues/improvements:\n>>\n>> 1. Could we extend https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=fcff8a575198478023ada8a48e13b50f70054766 by adding support for exclude constraints?\n>> 2. Fully support gist & constraints in serializable transactions. I did not yet test a unique constraint backed by a gist constraint, which is also interesting to test I assume. This test would tell us if there currently is a status quo between btree and gist indexes.\n>\n> Regarding the remark in (2), I forgot that a unique constraint cannot be backed by a gist index, so forget the test I mentioned.\n\nYeah, well we can't directly extend the existing work because unique\nconstraints are *entirely* handled inside the btree code (in fact no\nother index types even support unique constraints, yet). This\nexclusion constraints stuff is handled differently: the error handling\nyou're seeing is raised by generic code in\nsrc/backend/executor/execIndexing.c , but the code that knows how to\nactually perform the necessary SSI checks is index-specific, in this\ncase in gist.c. To do the moral equivalent of the UCV change, we'll\nneed to get these two bits of code to communicate across the \"index\nAM\" boundary (the way that index implementations such as GIST are\nplugged into Postgres). The question is how.\n\nOne (bad) idea is that we could actually perform the (illegal)\naminsert just before we raise that error! We know we're going to roll\nback anyway, because that's either going to fail when gist.c calls\nCheckForSerializableConflictIn(), or if not, when we raise\n\"conflicting key value violates exclusion constraint ...\". That's a\nbit messy though, because it modifies the index unnecessarily and\npossibly breaks important invariants. 
An improved version of that\nidea is to add a new optional index AM interface \"amcheckinsert()\"\nthat shares whatever code it needs to share to do all the work that\ninsert would do except the actual modification. That way,\ncheck_exclusion_or_unique_constraint() would give every index AM a\nchance to raise an SSI error if it wants to. This seems like it\nshould work, but I don't want to propose messing around with the index\nAM interface lightly. It wouldn't usually get called, just in the\nerror path.\n\nAnyone got a better idea?\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sat, 13 Apr 2019 12:06:52 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: serializable transaction: exclude constraint violation (backed by\n GIST index) instead of ssi conflict"
}
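The ordering Thomas describes for the btree unique-constraint fix — give the SSI machinery a chance to object *before* raising the constraint violation, so the client sees a retryable 40001 instead of a hard 23P01 — can be illustrated with a toy in-memory model. This is a gross simplification (real SSI tracks rw-antidependency graphs, not single keys) and every name here is invented for illustration, not a PostgreSQL API:

```python
class SerializationFailure(Exception):
    """Stands in for SQLSTATE 40001 (retried by the application)."""

class ExclusionViolation(Exception):
    """Stands in for SQLSTATE 23P01 (not retried by the application)."""

class ToyIndex:
    """Toy stand-in for an index backing an exclusion constraint."""
    def __init__(self):
        self.entries = set()   # committed keys
        self.sireads = {}      # key -> set of txn ids that read it (SIREAD-ish)

    def read(self, txn, key):
        # A serializable SELECT leaves a predicate lock behind.
        self.sireads.setdefault(key, set()).add(txn)
        return key in self.entries

    def insert(self, txn, key):
        if key in self.entries:
            # Proposed ordering: before raising the exclusion violation,
            # check whether another serializable transaction read this key;
            # if so, report a retryable serialization failure instead.
            readers = self.sireads.get(key, set()) - {txn}
            if readers:
                raise SerializationFailure(
                    "rw-conflict with %s" % sorted(readers))
            raise ExclusionViolation("key %r already present" % (key,))
        self.entries.add(key)

idx = ToyIndex()
idx.read("t1", 1)       # t1: select * from t where i = 1
idx.read("t2", 1)       # t2: select * from t where i = 1
idx.insert("t1", 1)     # t1: insert into t(i) values(1); commit
try:
    idx.insert("t2", 1)  # t2 now gets the retryable error first
except SerializationFailure as e:
    print("serialization failure:", e)
```

The point of the sketch is only the order of the two checks in `insert()`: the thread's question is how to reach the equivalent of the serialization check across the index AM boundary before `check_exclusion_or_unique_constraint()` raises its error.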
] |
[
{
"msg_contents": "Over at https://www.postgresql.org/message-id/CA%2BTgmobFVe4J4AA7z9OMUzKnm09Tt%2BsybhxeL_Ddst3q3wqpzQ%40mail.gmail.com\nI mentioned parsing the WAL to extract block references so that\nincremental backup could efficiently determine which blocks needed to\nbe copied. Ashwin replied in\nhttp://postgr.es/m/CALfoeitO-vkfjubMFQRmgyXghL-uUnZLNxbr=obrQQsm8kFO4A@mail.gmail.com\nto mention that the same approach could be useful for pg_upgrade:\n\n# Currently, pg_rewind requires\n# all the WAL logs to be present on source side from point of divergence to\n# rewind. Instead just parse the wal and keep the changed blocks around on\n# source. Then don't need to retain the WAL but can still rewind using the\n# changed block map.\n\nSince there are at least two possible use cases for an efficient way to\nlearn when blocks last changed, and in each case the performance\nbenefits of being able to learn that are potentially quite large, I'm\nstarting this thread to brainstorm how such a system might work.\n\nIt seems to me that there are basically two ways of storing this kind\nof information, plus a bunch of variants. One way is to store files\nthat cover a range of LSNs, and basically contain a synopsis of the\nWAL for those LSNs. You omit all the actual data and just mention\nwhich blocks were changed by some record in that part of the WAL. In\nthis type of scheme, the storage required is roughly proportional to\nthe volume of WAL for which you wish to retain data. Pruning old data\nis easy; just remove the files that provide information about LSNs\nthat you don't care about any more. The other way is to store data\nabout each block, or each range of blocks, or all the blocks that hash\nonto a certain slot; for each, store the newest LSN that has modified\nthat block, or a block in that range, or a block that hashes onto\nthat slot. 
In this system, storage is roughly proportional to the\nsize of the database cluster, except maybe in the hashing case, but I\n*think* that case will degrade unless you basically expand the map to\nbe roughly proportional to the size of the cluster anyway. I may be\nwrong.\n\nOf these two variants, I am inclined to prefer the version where each\nfile is a summary of the block references within some range of LSNs.\nIt seems simpler to implement to me. You just read a bunch of WAL\nfiles and then when you get tired you stop and emit an output file.\nYou need to protect yourself against untimely crashes. One way is to\nstick a checksum into the output file. After you finish writing it,\nfsync() it before you start writing the next one. After a restart,\nread the latest such file and see if the checksum is OK. If not,\nregenerate it; if so, assume it's good and move on. Files other than\nthe last one can be assumed good. Another way is to create the file\nwith a temporary name, fsync() it, and then rename it into place and\nfsync() again. The background worker that generates the files can\nhave a GUC to remove them when they are older than some threshold\namount of time, or you can keep them forever and let the user manually\nremove stuff they no longer want based on LSN. That's pretty much it.\n\nThe version where you keep an LSN per block/range/hash bucket seems\nmore complex in terms of durability. Now you have to worry not only\nabout creating new files, but about modifying them. That seems to add\nsome complexity. I think it may be possible to make it work without\ndoing write-ahead logging for every change, but it certainly needs\ncareful thought to avoid torn page problems and checkpoint\nsynchronization issues. Moreover, it potentially uses lots and lots\nof inodes if there are many relations in the cluster. You can avoid\nthat by not creating maps for small files, if you like, or by\nswitching to the hash bucket approach. 
But either of those approaches\nis lossy. Omitting the maps for small files means you always have to\nassume everything in those files changed. The hash bucket approach is\nvulnerable to hash collisions; you have to assume that all blocks that\nhash to a given bucket have changed. Those are probably manageable\ndisadvantages, but I think they are real ones.\n\nThere is one thing that does worry me about the file-per-LSN-range\napproach, and that is memory consumption when trying to consume the\ninformation. Suppose you have a really high velocity system. I don't\nknow exactly what the busiest systems around are doing in terms of\ndata churn these days, but let's say just for kicks that we are\ndirtying 100GB/hour. That means, roughly 12.5 million block\nreferences per hour. If each block reference takes 12 bytes, that's\nmaybe 150MB/hour in block reference files. If you run a daily\nincremental backup, you've got to load all the block references for\nthe last 24 hours and deduplicate them, which means you're going to\nneed about 3.6GB of memory. If you run a weekly incremental backup,\nyou're going to need about 25GB of memory. That is not ideal. One\ncan keep the memory consumption to a more reasonable level by using\ntemporary files. For instance, say you realize you're going to need\n25GB of memory to store all the block references you have, but you\nonly have 1GB of memory that you're allowed to use. Well, just\nhash-partition the data 32 ways by dboid/tsoid/relfilenode/segno,\nwriting each batch to a separate temporary file, and then process each\nof those 32 files separately. That does add some additional I/O, but\nit's not crazily complicated and doesn't seem too terrible, at least\nto me. Still, it's something not to like.\n\nAnother problem to think about is whether the most recent data is\ngoing to be available when you need it. This concern applies to\neither approach. 
In the case of incremental backup, the server is\ngoing to be up and running, so if the WAL-scanner gets behind, you can\njust wait for it to catch up. In the case of pg_rewind, the server is\ngoing to be down, so that doesn't work. You probably need to figure\nout how new the data you have is, and then scan all the newer WAL to\npick up any additional block references. That's a bit of a pain, but\nI don't see any real alternative. In the case of the\nfile-per-LSN-range approach, it's easy to see what LSNs are covered by\nthe files. In the case of the LSN-per-block/range/hash bucket\napproach, you can presumably rely on the last checkpoint having\nflushed all the pending changes to disk, but the WAL scanner could've\nbeen arbitrarily far behind at that point, so you'd have to store a\npiece of metadata someplace that tells you exactly how far the WAL\nscanner had progressed and then have pg_rewind fish it out.\n\nYet another question is how to make sure WAL doesn't get removed\nbefore we finish scanning it. Peter mentioned on the other thread\nthat we could use a variant replication slot, which immediately made\nme wonder why we'd need a variant. Actually, the biggest problem I\nsee here is that if we use a replication slot, somebody might try to\ndrop it or use it for some other purpose, and that would mess things\nup. I guess we could pull the usual trick of reserving names that\nstart with 'pg_' for internal purposes. Or we could just hard-code\nthe LSN that was last scanned for this purpose as a bespoke constraint\non WAL discard. Not sure what is best.\n\nI think all of this should be optional functionality. It's going to\nbe really useful for people with large databases, I think, but people\nwith small databases may not care, and it won't be entirely free. 
If\nit's not enabled, then the functionality that would otherwise exploit\nit can fall back to doing things in a less efficient way; nothing\nneeds to break hard.\n\nOpinions?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 10 Apr 2019 17:49:37 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "finding changed blocks using WAL scanning"
},
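The two crash-safety recipes Robert sketches (checksum in the file, or write under a temporary name and rename into place) compose naturally, and the combination can be sketched in a few lines. The on-disk layout, file names, and the 12-byte (dboid, relfilenode, block) record are invented for illustration to match the sizing arithmetic in the message; the rename-is-atomic and fsync-the-directory behavior assumes a POSIX filesystem:

```python
import os
import struct
import zlib

def write_summary_atomically(dirname, fname, block_refs):
    """Write one block-reference summary file so that any file visible
    under its final name can be trusted: the payload CRC is stored in a
    4-byte header, and the file is fsync'd under a temporary name before
    being renamed into place (rename is atomic on POSIX)."""
    payload = b"".join(struct.pack("<III", db, rel, blk)
                       for db, rel, blk in sorted(set(block_refs)))
    data = struct.pack("<I", zlib.crc32(payload)) + payload
    tmp = os.path.join(dirname, fname + ".tmp")
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())        # durable under the temp name first
    os.rename(tmp, os.path.join(dirname, fname))
    dirfd = os.open(dirname, os.O_RDONLY)
    try:
        os.fsync(dirfd)             # persist the rename itself
    finally:
        os.close(dirfd)

def read_summary(path):
    """Return the deduplicated, sorted block refs, or None if the
    checksum does not match (the caller would then regenerate the file
    by rescanning the corresponding WAL range)."""
    with open(path, "rb") as f:
        data = f.read()
    crc, payload = struct.unpack("<I", data[:4])[0], data[4:]
    if zlib.crc32(payload) != crc:
        return None
    return [struct.unpack("<III", payload[i:i + 12])
            for i in range(0, len(payload), 12)]
```

Storing the entries pre-sorted costs nothing here and is what makes the later merge-pass consumption cheap.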
{
"msg_contents": "On Wed, Apr 10, 2019 at 5:49 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> There is one thing that does worry me about the file-per-LSN-range\n> approach, and that is memory consumption when trying to consume the\n> information. Suppose you have a really high velocity system. I don't\n> know exactly what the busiest systems around are doing in terms of\n> data churn these days, but let's say just for kicks that we are\n> dirtying 100GB/hour. That means, roughly 12.5 million block\n> references per hour. If each block reference takes 12 bytes, that's\n> maybe 150MB/hour in block reference files. If you run a daily\n> incremental backup, you've got to load all the block references for\n> the last 24 hours and deduplicate them, which means you're going to\n> need about 3.6GB of memory. If you run a weekly incremental backup,\n> you're going to need about 25GB of memory. That is not ideal. One\n> can keep the memory consumption to a more reasonable level by using\n> temporary files. For instance, say you realize you're going to need\n> 25GB of memory to store all the block references you have, but you\n> only have 1GB of memory that you're allowed to use. Well, just\n> hash-partition the data 32 ways by dboid/tsoid/relfilenode/segno,\n> writing each batch to a separate temporary file, and then process each\n> of those 32 files separately. That does add some additional I/O, but\n> it's not crazily complicated and doesn't seem too terrible, at least\n> to me. Still, it's something not to like.\n\nOh, I'm being dumb. We should just have the process that writes out\nthese files sort the records first. Then when we read them back in to\nuse them, we can just do a merge pass like MergeAppend would do. Then\nyou never need very much memory at all.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 10 Apr 2019 20:11:11 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
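The sort-then-merge refinement can be sketched directly: if each summary file is written with its block references pre-sorted, the consumer streams a merge over all of them, MergeAppend-style, and duplicates become adjacent, so deduplication needs memory proportional to the number of files rather than the number of references. The `(dboid, relfilenode, block)` tuples and the in-memory "files" below are illustrative stand-ins for the on-disk summaries:

```python
import heapq

def merge_block_refs(sorted_files):
    """Stream the deduplicated union of several individually-sorted
    sequences of block references. heapq.merge keeps only one cursor
    per input, so memory use is O(number of files)."""
    last = None
    for ref in heapq.merge(*sorted_files):
        if ref != last:          # duplicates are adjacent after the merge
            yield ref
            last = ref

# One "file" per WAL range; a block dirtied in several ranges appears
# in several files but must be copied only once by an incremental backup.
hour1 = [(1, 16384, 0), (1, 16384, 7), (1, 16385, 3)]
hour2 = [(1, 16384, 7), (1, 16386, 1)]
hour3 = [(1, 16384, 0), (1, 16386, 1), (1, 16386, 2)]
for ref in merge_block_refs([hour1, hour2, hour3]):
    print(ref)
```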
{
"msg_contents": "On 2019-04-10 23:49, Robert Haas wrote:\n> It seems to me that there are basically two ways of storing this kind\n> of information, plus a bunch of variants. One way is to store files\n> that cover a range of LSNs, and basically contain a synopsis of the\n> WAL for those LSNs. You omit all the actual data and just mention\n> which blocks were changed by some record in that part of the WAL.\n\nThat seems better than the other variant.\n\n> Yet another question is how to make sure WAL doesn't get removed\n> before we finish scanning it. Peter mentioned on the other thread\n> that we could use a variant replication slot, which immediately made\n> me wonder why we'd need a variant. Actually, the biggest problem I\n> see here is that if we use a replication slot, somebody might try to\n> drop it or use it for some other purpose, and that would mess things\n> up. I guess we could pull the usual trick of reserving names that\n> start with 'pg_' for internal purposes. Or we could just hard-code\n> the LSN that was last scanned for this purpose as a bespoke constraint\n> on WAL discard. Not sure what is best.\n\nThe word \"variant\" was used as a hedge ;-), but now that I think about\nit ...\n\nI had in mind that you could have different overlapping incremental\nbackup jobs in existence at the same time. Maybe a daily one to a\nnearby disk and a weekly one to a faraway cloud. Each one of these\nwould need a separate replication slot, so that the information that is\nrequired for *that* incremental backup series is preserved between runs.\n So just one reserved replication slot that feeds the block summaries\nwouldn't work. Perhaps what would work is a flag on the replication\nslot itself \"keep block summaries for this slot\". Then when all the\nslots with the block summary flag are past an LSN, you can clean up the\nsummaries before that LSN.\n\n> I think all of this should be optional functionality. 
It's going to\n> be really useful for people with large databases, I think, but people\n> with small databases may not care, and it won't be entirely free. If\n> it's not enabled, then the functionality that would otherwise exploit\n> it can fall back to doing things in a less efficient way; nothing\n> needs to break hard.\n\nWith the flag on the slot scheme you wouldn't need a separate knob to\nturn this on, because it's just enabled when a backup software has\ncreated an appropriate slot.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 11 Apr 2019 09:52:13 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
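The cleanup rule in Peter's flag-on-the-slot scheme — summaries below an LSN can be removed once every slot flagged for block summaries has advanced past it — reduces to a minimum over the flagged slots. The tuple representation of a slot below is invented for illustration (a stand-in for rows of `pg_replication_slots` plus the proposed flag):

```python
def summary_cleanup_horizon(slots):
    """slots: iterable of (name, restart_lsn, keep_block_summaries)
    tuples. Returns the LSN below which block summaries are no longer
    needed by any backup job, or None if no slot asks for summaries
    (with the opt-in scheme, none would be kept at all in that case)."""
    flagged = [lsn for _, lsn, keep in slots if keep]
    return min(flagged) if flagged else None

slots = [
    ("daily_disk_backup",   0x1_0000_0000, True),   # nearby-disk job
    ("weekly_cloud_backup", 0x0_8000_0000, True),   # lagging faraway job
    ("logical_replica",     0x0_4000_0000, False),  # unrelated slot, ignored
]
# The weekly job is furthest behind, so it pins the summaries,
# while the unflagged logical slot does not hold anything back.
horizon = summary_cleanup_horizon(slots)
```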
{
"msg_contents": "On Thu, Apr 11, 2019 at 3:52 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> I had in mind that you could have different overlapping incremental\n> backup jobs in existence at the same time. Maybe a daily one to a\n> nearby disk and a weekly one to a faraway cloud. Each one of these\n> would need a separate replication slot, so that the information that is\n> required for *that* incremental backup series is preserved between runs.\n> So just one reserved replication slot that feeds the block summaries\n> wouldn't work. Perhaps what would work is a flag on the replication\n> slot itself \"keep block summaries for this slot\". Then when all the\n> slots with the block summary flag are past an LSN, you can clean up the\n> summaries before that LSN.\n\nI don't think that quite works. There are two different LSNs. One is\nthe LSN of the oldest WAL archive that we need to keep around so that\nit can be summarized, and the other is the LSN of the oldest summary\nwe need to keep around so it can be used for incremental backup\npurposes. You can't keep both of those LSNs in the same slot.\nFurthermore, the LSN stored in the slot is defined as the amount of\nWAL we need to keep, not the amount of something else (summaries) that\nwe need to keep. Reusing that same field to mean something different\nsounds inadvisable.\n\nIn other words, I think there are two problems which we need to\nclearly separate: one is retaining WAL so we can generate summaries,\nand the other is retaining summaries so we can generate incremental\nbackups. Even if we solve the second problem by using some kind of\nreplication slot, we still need to solve the first problem somehow.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 11 Apr 2019 09:27:19 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 2:50 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> Over at\n> https://www.postgresql.org/message-id/CA%2BTgmobFVe4J4AA7z9OMUzKnm09Tt%2BsybhxeL_Ddst3q3wqpzQ%40mail.gmail.com\n> I mentioned parsing the WAL to extract block references so that\n> incremental backup could efficiently determine which blocks needed to\n> be copied. Ashwin replied in\n>\n> http://postgr.es/m/CALfoeitO-vkfjubMFQRmgyXghL-uUnZLNxbr=obrQQsm8kFO4A@mail.gmail.com\n> to mention that the same approach could be useful for pg_upgrade:\n>\n\nThank you for initiating this separate thread. Just a typo above: not\npg_upgrade but pg_rewind.\n\nLet me explain first the thought I have around how to leverage this for\npg_rewind, actually any type of incremental recovery to be exact. Would\nlove to hear thoughts on it.\n\nCurrently, incremental recovery of any form, if a replica goes down and comes\nup or when trying to bring back the primary after failover to a replica, requires\n*all* the WAL to be present from the point of disconnect. So, it's boolean in\nthose terms: if WAL is available we can incrementally recover, otherwise we have to\nperform a full basebackup. If we come up with this mechanism to find and\nstore changed blocks from WAL, we can provide an intermediate level of\nincremental recovery which will be better than full recovery.\n\nWAL allows tuple level granularity for recovery (if we ignore FPI for a\nmoment). 
Modified blocks from WAL, if WAL is not available will provide\nblock level incremental recovery.\n\nSo, pg_basebackup (or some other tool or just option to it) and pg_rewind\ncan leverage the changed blocks if WAL can't be retained due to space\nconstraints and perform the recovery.\n\npg_rewind can also be optimized as it currently copies blocks from src to\ntarget which were present in target WAL to rewind. So, such blocks can be\neasily skipped from copying again.\n\nDepending on pattern of changes in WAL and size, instead of replaying all\nthe WAL logs for incremental recovery, just copying over the changed blocks\ncould prove more efficient.\n\nIt seems to me that there are basically two ways of storing this kind\n> of information, plus a bunch of variants. One way is to store files\n> that cover a range of LSNs, and basically contain a synopsis of the\n> WAL for those LSNs. You omit all the actual data and just mention\n> which blocks were changed by some record in that part of the WAL. In\n> this type of scheme, the storage required is roughly proportional to\n> the volume of WAL for which you wish to retain data. Pruning old data\n> is easy; just remove the files that provide information about LSNs\n> that you don't care about any more. The other way is to store data\n> about each block, or each range of blocks, or all the blocks that hash\n> onto a certain slot; for each, store the newest LSN that has modified\n> that block, or a block in that range, or a block that hashes onto that\n> that slot. In this system, storage is roughly proportional to the\n> size of the database cluster, except maybe in the hashing case, but I\n> *think* that case will degrade unless you basically expand the map to\n> be roughly proportional to the size of the cluster anyway. 
I may be\n> wrong.\n>\n> Of these two variants, I am inclined to prefer the version where each\n> file is a summary of the block references within some range of LSNs.\n> It seems simpler to implement to me. You just read a bunch of WAL\n> files and then when you get tired you stop and emit an output file.\n> You need to protect yourself against untimely crashes. One way is to\n> stick a checksum into the output file. After you finish writing it,\n> fsync() it before you start writing the next one. After a restart,\n> read the latest such file and see if the checksum is OK. If not,\n> regenerate it; if not, assume it's good and move on. Files other than\n> the last one can be assumed good. Another way is to create the file\n> with a temporary name, fsync() it, and then rename it into place and\n> fsync() again. The background worker that generates the files can\n> have a GUC to remove them when they are older than some threshold\n> amount of time, or you can keep them forever and let the user manually\n> remove stuff they no longer want based on LSN. That's pretty much it.\n>\n\n+1 for first option. Seems simpler and straight-forward.",
"msg_date": "Thu, 11 Apr 2019 09:54:50 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 6:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Apr 11, 2019 at 3:52 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > I had in mind that you could have different overlapping incremental\n> > backup jobs in existence at the same time. Maybe a daily one to a\n> > nearby disk and a weekly one to a faraway cloud. Each one of these\n> > would need a separate replication slot, so that the information that is\n> > required for *that* incremental backup series is preserved between runs.\n> > So just one reserved replication slot that feeds the block summaries\n> > wouldn't work. Perhaps what would work is a flag on the replication\n> > slot itself \"keep block summaries for this slot\". Then when all the\n> > slots with the block summary flag are past an LSN, you can clean up the\n> > summaries before that LSN.\n>\n> I don't think that quite works. There are two different LSNs. One is\n> the LSN of the oldest WAL archive that we need to keep around so that\n> it can be summarized, and the other is the LSN of the oldest summary\n> we need to keep around so it can be used for incremental backup\n> purposes. You can't keep both of those LSNs in the same slot.\n> Furthermore, the LSN stored in the slot is defined as the amount of\n> WAL we need to keep, not the amount of something else (summaries) that\n> we need to keep. Reusing that same field to mean something different\n> sounds inadvisable.\n>\n> In other words, I think there are two problems which we need to\n> clearly separate: one is retaining WAL so we can generate summaries,\n> and the other is retaining summaries so we can generate incremental\n> backups. 
Even if we solve the second problem by using some kind of\n> replication slot, we still need to solve the first problem somehow.\n>\n\nJust a thought for first problem, may not to simpler, can replication slot\nbe enhanced to define X amount of WAL to retain, after reaching such limit\ncollect summary and let the WAL be deleted.",
"msg_date": "Thu, 11 Apr 2019 10:00:35 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "At Thu, 11 Apr 2019 10:00:35 -0700, Ashwin Agrawal <aagrawal@pivotal.io> wrote in <CALfoeis0qOyGk+KQ3AbkfRVv=XbsSecqHfKSag=i_SLWMT+B0A@mail.gmail.com>\n> On Thu, Apr 11, 2019 at 6:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> > On Thu, Apr 11, 2019 at 3:52 AM Peter Eisentraut\n> > <peter.eisentraut@2ndquadrant.com> wrote:\n> > > I had in mind that you could have different overlapping incremental\n> > > backup jobs in existence at the same time. Maybe a daily one to a\n> > > nearby disk and a weekly one to a faraway cloud. Each one of these\n> > > would need a separate replication slot, so that the information that is\n> > > required for *that* incremental backup series is preserved between runs.\n> > > So just one reserved replication slot that feeds the block summaries\n> > > wouldn't work. Perhaps what would work is a flag on the replication\n> > > slot itself \"keep block summaries for this slot\". Then when all the\n> > > slots with the block summary flag are past an LSN, you can clean up the\n> > > summaries before that LSN.\n> >\n> > I don't think that quite works. There are two different LSNs. One is\n> > the LSN of the oldest WAL archive that we need to keep around so that\n> > it can be summarized, and the other is the LSN of the oldest summary\n> > we need to keep around so it can be used for incremental backup\n> > purposes. You can't keep both of those LSNs in the same slot.\n> > Furthermore, the LSN stored in the slot is defined as the amount of\n> > WAL we need to keep, not the amount of something else (summaries) that\n> > we need to keep. Reusing that same field to mean something different\n> > sounds inadvisable.\n> >\n> > In other words, I think there are two problems which we need to\n> > clearly separate: one is retaining WAL so we can generate summaries,\n> > and the other is retaining summaries so we can generate incremental\n> > backups. 
Even if we solve the second problem by using some kind of\n> > replication slot, we still need to solve the first problem somehow.\n> \n> Just a thought for first problem, may not to simpler, can replication slot\n> be enhanced to define X amount of WAL to retain, after reaching such limit\n> collect summary and let the WAL be deleted.\n\nI think Peter is saying that a slot for block summary doesn't\nkeep WAL segments themselves, but keeps maybe segmented block\nsummaries. n block-summary-slots maintain n block summaries and\nthe newest block summary is \"active\", in other words,\ncontinuously updated by WAL records pass-by. When backup-tool\nrequests for block summary, for example, for the oldest slot, the\nactive summary is closed then a new summary is opened from the\nLSN at the time, which is the new LSN of the slot. Then the\nconcatenated block summary is sent. Finally the oldest summary is\nremoved.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Fri, 12 Apr 2019 09:59:33 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 08:11:11PM -0400, Robert Haas wrote:\n> On Wed, Apr 10, 2019 at 5:49 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > There is one thing that does worry me about the file-per-LSN-range\n> > approach, and that is memory consumption when trying to consume the\n> > information. Suppose you have a really high velocity system. I don't\n> > know exactly what the busiest systems around are doing in terms of\n> > data churn these days, but let's say just for kicks that we are\n> > dirtying 100GB/hour. That means, roughly 12.5 million block\n> > references per hour. If each block reference takes 12 bytes, that's\n> > maybe 150MB/hour in block reference files. If you run a daily\n> > incremental backup, you've got to load all the block references for\n> > the last 24 hours and deduplicate them, which means you're going to\n> > need about 3.6GB of memory. If you run a weekly incremental backup,\n> > you're going to need about 25GB of memory. That is not ideal. One\n> > can keep the memory consumption to a more reasonable level by using\n> > temporary files. For instance, say you realize you're going to need\n> > 25GB of memory to store all the block references you have, but you\n> > only have 1GB of memory that you're allowed to use. Well, just\n> > hash-partition the data 32 ways by dboid/tsoid/relfilenode/segno,\n> > writing each batch to a separate temporary file, and then process each\n> > of those 32 files separately. That does add some additional I/O, but\n> > it's not crazily complicated and doesn't seem too terrible, at least\n> > to me. Still, it's something not to like.\n> \n> Oh, I'm being dumb. We should just have the process that writes out\n> these files sort the records first. Then when we read them back in to\n> use them, we can just do a merge pass like MergeAppend would do. Then\n> you never need very much memory at all.\n\nCan I throw out a simple idea? 
What if, when we finish writing a WAL\nfile, we create a new file 000000010000000000000001.modblock which\nlists all the heap/index files and block numbers modified in that WAL\nfile? How much does that help with the list I posted earlier?\n\n\tI think there is some interesting complexity brought up in this thread.\n\tWhich options are going to minimize storage I/O, network I/O, have only\n\tbackground overhead, allow parallel operation, integrate with\n\tpg_basebackup. Eventually we will need to evaluate the incremental\n\tbackup options against these criteria.\n\nI am thinking tools could retain modblock files along with WAL, could\npull full-page-writes from WAL, or from PGDATA. It avoids the need to\nscan 16MB WAL files, and the WAL files and modblock files could be\nexpired independently.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 15 Apr 2019 16:31:14 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 4:31 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Can I throw out a simple idea? What if, when we finish writing a WAL\n> file, we create a new file 000000010000000000000001.modblock which\n> lists all the heap/index files and block numbers modified in that WAL\n> file? How much does that help with the list I posted earlier?\n>\n> I think there is some interesting complexity brought up in this thread.\n> Which options are going to minimize storage I/O, network I/O, have only\n> background overhead, allow parallel operation, integrate with\n> pg_basebackup. Eventually we will need to evaluate the incremental\n> backup options against these criteria.\n>\n> I am thinking tools could retain modblock files along with WAL, could\n> pull full-page-writes from WAL, or from PGDATA. It avoids the need to\n> scan 16MB WAL files, and the WAL files and modblock files could be\n> expired independently.\n\nThat is pretty much exactly what I was intending to propose.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 15 Apr 2019 21:04:13 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 09:04:13PM -0400, Robert Haas wrote:\n> On Mon, Apr 15, 2019 at 4:31 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Can I throw out a simple idea? What if, when we finish writing a WAL\n> > file, we create a new file 000000010000000000000001.modblock which\n> > lists all the heap/index files and block numbers modified in that WAL\n> > file? How much does that help with the list I posted earlier?\n> >\n> > I think there is some interesting complexity brought up in this thread.\n> > Which options are going to minimize storage I/O, network I/O, have only\n> > background overhead, allow parallel operation, integrate with\n> > pg_basebackup. Eventually we will need to evaluate the incremental\n> > backup options against these criteria.\n> >\n> > I am thinking tools could retain modblock files along with WAL, could\n> > pull full-page-writes from WAL, or from PGDATA. It avoids the need to\n> > scan 16MB WAL files, and the WAL files and modblock files could be\n> > expired independently.\n> \n> That is pretty much exactly what I was intending to propose.\n\nOK, good. Some of your wording was vague so I was unclear exactly what\nyou were suggesting.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 15 Apr 2019 22:22:34 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 09:04:13PM -0400, Robert Haas wrote:\n> That is pretty much exactly what I was intending to propose.\n\nAny caller of XLogWrite() could switch to a new segment once the\ncurrent one is done, and I am not sure that we would want some random\nbackend to potentially slow down to do that kind of operation.\n\nOr would a separate background worker do this work by itself? An\nexternal tool can do that easily already:\nhttps://github.com/michaelpq/pg_plugins/tree/master/pg_wal_blocks\n--\nMichael",
"msg_date": "Tue, 16 Apr 2019 12:45:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 10:22 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > I am thinking tools could retain modblock files along with WAL, could\n> > > pull full-page-writes from WAL, or from PGDATA. It avoids the need to\n> > > scan 16MB WAL files, and the WAL files and modblock files could be\n> > > expired independently.\n> >\n> > That is pretty much exactly what I was intending to propose.\n>\n> OK, good. Some of your wording was vague so I was unclear exactly what\n> you were suggesting.\n\nWell, I guess the part that isn't like what I was suggesting is the\nidea that there should be exactly one modified block file per segment.\nThe biggest problem with that idea is that a single WAL record can be\nsplit across two segments (or, in pathological cases, perhaps more).\nI think it makes sense to talk about the blocks modified by WAL\nbetween LSN A and LSN B, but it doesn't make much sense to talk about\nthe block modified by the WAL in segment XYZ.\n\nYou can make it kinda make sense by saying \"the blocks modified by\nrecords *beginning in* segment XYZ\" or alternatively \"the blocks\nmodified by records *ending in* segment XYZ\", but that seems confusing\nto me. For example, suppose you decide on the first one --\n000000010000000100000068.modblock will contain all blocks modified by\nrecords that begin in 000000010000000100000068. Well, that means that\nto generate the 000000010000000100000068.modblock, you will need\naccess to 000000010000000100000068 AND probably also\n000000010000000100000069 and in rare cases perhaps\n00000001000000010000006A or even later files. I think that's actually\npretty confusing.\n\nIt seems better to me to give the files names like\n${TLI}.${STARTLSN}.${ENDLSN}.modblock, e.g.\n00000001.0000000168000058.00000001687DBBB8.modblock, so that you can\nsee exactly which *records* are covered by that segment.\n\nAnd I suspect it may also be a good idea to bunch up the records from\nseveral WAL files. 
Especially if you are using 16MB WAL files,\ncollecting all of the block references from a single WAL file is going\nto produce a very small file. I suspect that the modified block files\nwill end up being 100x smaller than the WAL itself, perhaps more, and\nI don't think anybody will appreciate us adding another PostgreSQL\nsystem that spews out huge numbers of tiny little files. If, for\nexample, somebody's got a big cluster that is churning out a WAL\nsegment every second, they would probably still be happy to have a new\nmodified block file only, say, every 10 seconds.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Apr 2019 15:43:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
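The ${TLI}.${STARTLSN}.${ENDLSN}.modblock naming scheme proposed in the message above can be sketched as a pair of helpers. The fixed-width hex layout (8 digits for the timeline ID, 16 for each LSN) is inferred from the example name in the message and is an assumption, as are the function names.

```python
def modblock_name(tli, start_lsn, end_lsn):
    # Timeline ID as 8 hex digits, LSNs as 16 hex digits, matching the
    # example name 00000001.0000000168000058.00000001687DBBB8.modblock.
    return "%08X.%016X.%016X.modblock" % (tli, start_lsn, end_lsn)

def parse_modblock_name(name):
    # Recover (tli, start_lsn, end_lsn) so a backup tool can pick the
    # files whose record range overlaps the interval it cares about.
    tli, start, end, suffix = name.split(".")
    assert suffix == "modblock"
    return int(tli, 16), int(start, 16), int(end, 16)
```

Because the start and end LSNs are part of the name, a reader can see exactly which records a file covers without opening it.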
{
"msg_contents": "On Mon, Apr 15, 2019 at 11:45 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Apr 15, 2019 at 09:04:13PM -0400, Robert Haas wrote:\n> > That is pretty much exactly what I was intending to propose.\n>\n> Any caller of XLogWrite() could switch to a new segment once the\n> current one is done, and I am not sure that we would want some random\n> backend to potentially slow down to do that kind of operation.\n>\n> Or would a separate background worker do this work by itself? An\n> external tool can do that easily already:\n> https://github.com/michaelpq/pg_plugins/tree/master/pg_wal_blocks\n\nI was thinking that a dedicated background worker would be a good\noption, but Stephen Frost seems concerned (over on the other thread)\nabout how much load that would generate. That never really occurred\nto me as a serious issue and I suspect for many people it wouldn't be,\nbut there might be some.\n\nIt's cool that you have a command-line tool that does this as well.\nOver there, it was also discussed that we might want to have both a\ncommand-line tool and a background worker. I think, though, that we\nwould want to get the output in some kind of compressed binary format,\nrather than text. e.g.\n\n4-byte database OID\n4-byte tablespace OID\nany number of relation OID/block OID pairings for that\ndatabase/tablespace combination\n4-byte zero to mark the end of the relation OID/block OID list\nand then repeat all of the above any number of times\n\nThat might be too dumb and I suspect we want some headers and a\nchecksum, but we should try to somehow exploit the fact that there\naren't likely to be many distinct databases or many distinct\ntablespaces mentioned -- whereas relation OID and block number will\nprobably have a lot more entropy.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Apr 2019 15:51:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
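The straw-man byte layout in the message above (database OID, tablespace OID, a run of relation OID/block number pairings, then a 4-byte zero terminator, repeated) could be encoded along these lines. This is only a sketch of the proposal: the little-endian byte order, the grouping and sorting, and the function names are choices made for the example, and the headers and checksum the message mentions are left out.

```python
import struct
from collections import defaultdict

def encode_block_refs(refs):
    # refs: iterable of (dboid, tsoid, reloid, blockno) tuples. Group by
    # (dboid, tsoid) so each low-entropy combination is written once,
    # followed by reloid/blockno pairs and a 4-byte zero terminator.
    groups = defaultdict(list)
    for dboid, tsoid, reloid, blockno in refs:
        groups[(dboid, tsoid)].append((reloid, blockno))
    out = bytearray()
    for (dboid, tsoid), pairs in sorted(groups.items()):
        out += struct.pack("<II", dboid, tsoid)
        for reloid, blockno in sorted(set(pairs)):  # sorted and deduplicated
            out += struct.pack("<II", reloid, blockno)
        out += struct.pack("<I", 0)  # end-of-list marker (OIDs are never 0)
    return bytes(out)

def decode_block_refs(data):
    refs, off = [], 0
    while off < len(data):
        dboid, tsoid = struct.unpack_from("<II", data, off)
        off += 8
        while True:
            (reloid,) = struct.unpack_from("<I", data, off)
            off += 4
            if reloid == 0:
                break
            (blockno,) = struct.unpack_from("<I", data, off)
            off += 4
            refs.append((dboid, tsoid, reloid, blockno))
    return refs
```

Writing the pairs in sorted order also sets up the cheap merge pass discussed later in the thread.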
{
"msg_contents": "On Thu, Apr 18, 2019 at 03:43:30PM -0400, Robert Haas wrote:\n> You can make it kinda make sense by saying \"the blocks modified by\n> records *beginning in* segment XYZ\" or alternatively \"the blocks\n> modified by records *ending in* segment XYZ\", but that seems confusing\n> to me. For example, suppose you decide on the first one --\n> 000000010000000100000068.modblock will contain all blocks modified by\n> records that begin in 000000010000000100000068. Well, that means that\n> to generate the 000000010000000100000068.modblock, you will need\n> access to 000000010000000100000068 AND probably also\n> 000000010000000100000069 and in rare cases perhaps\n> 00000001000000010000006A or even later files. I think that's actually\n> pretty confusing.\n> \n> It seems better to me to give the files names like\n> ${TLI}.${STARTLSN}.${ENDLSN}.modblock, e.g.\n> 00000001.0000000168000058.00000001687DBBB8.modblock, so that you can\n> see exactly which *records* are covered by that segment.\n\nHow would you choose the STARTLSN/ENDLSN? If you could do it per\ncheckpoint, rather than per-WAL, I think that would be great.\n\n> And I suspect it may also be a good idea to bunch up the records from\n> several WAL files. Especially if you are using 16MB WAL files,\n> collecting all of the block references from a single WAL file is going\n> to produce a very small file. I suspect that the modified block files\n> will end up being 100x smaller than the WAL itself, perhaps more, and\n> I don't think anybody will appreciate us adding another PostgreSQL\n> systems that spews out huge numbers of tiny little files. If, for\n> example, somebody's got a big cluster that is churning out a WAL\n> segment every second, they would probably still be happy to have a new\n> modified block file only, say, every 10 seconds.\n\nAgreed.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. 
As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 18 Apr 2019 15:51:57 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Thu, Apr 18, 2019 at 3:51 PM Bruce Momjian <bruce@momjian.us> wrote:\n> How would you choose the STARTLSN/ENDLSN? If you could do it per\n> checkpoint, rather than per-WAL, I think that would be great.\n\nI thought of that too. It seems appealing, because you probably only\nreally care whether a particular block was modified between one\ncheckpoint and the next, not exactly when during that interval it was\nmodified. However, the simple algorithm of \"just stop when you get to\na checkpoint record\" does not work, because the checkpoint record\nitself points back to a much earlier LSN, and I think that it's that\nearlier LSN that is interesting. So if you want to make this work you\nhave to be more clever, and I'm not sure I'm clever enough.\n\nI think it's important that a .modblock file not be too large, because\nthen it will use too much memory, and that it not cover too much WAL,\nbecause then it will be too imprecise about when the blocks were\nmodified. Perhaps we should have a threshold for each -- e.g. emit\nthe next .modblock file after finding 2^20 distinct block references\nor scanning 1GB of WAL. Then individual files would probably be in\nthe single-digit numbers of megabytes in size, assuming we do a decent\njob with the compression, and you never need to scan more than 1GB of\nWAL to regenerate one. If the starting point for a backup falls in\nthe middle of such a file, and you include the whole file, at worst\nyou have ~8GB of extra blocks to read, but in most cases less, because\nyour writes probably have some locality and the file may not actually\ncontain the full 2^20 block references. You could also make it more\nfine-grained than that if you don't mind having more smaller files\nfloating around.\n\nIt would definitely be better if we could set things up so that we\ncould always switch to the next .modblock file when we cross a\npotential redo start point, but they're not noted in the WAL so I\ndon't see how to do that. 
I don't know if it would be possible to\ninsert some new kind of log record concurrently with fixing the redo\nlocation, so that redo always started at a record of this new type.\nThat would certainly be helpful for this kind of thing.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Apr 2019 16:25:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
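The dual-threshold rollover suggested in the message above (emit the next .modblock file after 2^20 distinct block references or 1GB of scanned WAL, whichever comes first) might be spooled like this; the class and its interface are invented for illustration.

```python
class SummarySpooler:
    """Accumulate block references from decoded WAL records and emit a
    sorted, deduplicated batch when either threshold is crossed."""

    def __init__(self, block_ref_limit=1 << 20, wal_byte_limit=1 << 30):
        self.block_ref_limit = block_ref_limit  # e.g. 2^20 distinct refs
        self.wal_byte_limit = wal_byte_limit    # e.g. 1GB of WAL scanned
        self.refs = set()
        self.wal_bytes = 0

    def add_record(self, block_refs, record_len):
        # Returns a finished batch when a limit is reached, else None.
        self.refs.update(block_refs)
        self.wal_bytes += record_len
        if (len(self.refs) >= self.block_ref_limit
                or self.wal_bytes >= self.wal_byte_limit):
            return self.flush()
        return None

    def flush(self):
        batch = sorted(self.refs)
        self.refs.clear()
        self.wal_bytes = 0
        return batch
```

Deduplicating in the set keeps each emitted file bounded even when the same blocks are modified repeatedly, which is what keeps the files to single-digit megabytes in the estimate above.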
{
"msg_contents": "On Thu, Apr 18, 2019 at 04:25:24PM -0400, Robert Haas wrote:\n> On Thu, Apr 18, 2019 at 3:51 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > How would you choose the STARTLSN/ENDLSN? If you could do it per\n> > checkpoint, rather than per-WAL, I think that would be great.\n> \n> I thought of that too. It seems appealing, because you probably only\n> really care whether a particular block was modified between one\n> checkpoint and the next, not exactly when during that interval it was\n> modified. However, the simple algorithm of \"just stop when you get to\n> a checkpoint record\" does not work, because the checkpoint record\n> itself points back to a much earlier LSN, and I think that it's that\n> earlier LSN that is interesting. So if you want to make this work you\n> have to be more clever, and I'm not sure I'm clever enough.\n\nOK, so let's back up and study how this will be used. Someone wanting\nto make a useful incremental backup will need the changed blocks from\nthe time of the start of the base backup. It is fine if they\nincrementally back up some blocks modified _before_ the base backup, but\nthey need all blocks after, until some marker. They will obviously\nstill use WAL to recover to a point after the incremental backup, so\nthere is no need to get every modified block up to current, just up to\nsome cut-off point where WAL can be discarded.\n\nI can see a 1GB marker being used for that. It would prevent an\nincremental backup from being done until the first 1G modblock file was\nwritten, since until then there is no record of modified blocks, but\nthat seems fine. A 1G marker would allow for consistent behavior\nindependent of server restarts and base backups.\n\nHow would the modblock file record all the modified blocks across\nrestarts and crashes? I assume that 1G of WAL would not be available\nfor scanning. 
I suppose that writing a modblock file to some PGDATA\nlocation when WAL is removed would work since during a crash the\nmodblock file could be updated with the contents of the existing pg_wal\nfiles.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 18 Apr 2019 17:47:56 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On 2019-04-18 17:47:56 -0400, Bruce Momjian wrote:\n> I can see a 1GB marker being used for that. It would prevent an\n> incremental backup from being done until the first 1G modblock files was\n> written, since until then there is no record of modified blocks, but\n> that seems fine. A 1G marker would allow for consistent behavior\n> independent of server restarts and base backups.\n\nThat doesn't seem like a good idea - it'd make writing regression tests\nfor this impractical.\n\n\n",
"msg_date": "Thu, 18 Apr 2019 14:52:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Thu, Apr 18, 2019 at 03:51:14PM -0400, Robert Haas wrote:\n> I was thinking that a dedicated background worker would be a good\n> option, but Stephen Frost seems concerned (over on the other thread)\n> about how much load that would generate. That never really occurred\n> to me as a serious issue and I suspect for many people it wouldn't be,\n> but there might be some.\n\nWAL segment size can go up to 1GB, and this does not depend on the\ncompilation anymore. So scanning a very large segment is not going to\nbe free. I think that the performance concerns of Stephen are legit\nas now we have on the WAL partitions sequential read and write\npatterns.\n\n> It's cool that you have a command-line tool that does this as well.\n> Over there, it was also discussed that we might want to have both a\n> command-line tool and a background worker. I think, though, that we\n> would want to get the output in some kind of compressed binary format,\n> rather than text. e.g.\n\nIf you want to tweak it the way you want, please feel free to reuse\nit for any patch submitted to upstream. Reshaping or redirecting the\ndata is not a big issue once the basics with the WAL reader are in\nplace.\n--\nMichael",
"msg_date": "Fri, 19 Apr 2019 09:38:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Apr 15, 2019 at 11:45 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > Any caller of XLogWrite() could switch to a new segment once the\n> > current one is done, and I am not sure that we would want some random\n> > backend to potentially slow down to do that kind of operation.\n> >\n> > Or would a separate background worker do this work by itself? An\n> > external tool can do that easily already:\n> > https://github.com/michaelpq/pg_plugins/tree/master/pg_wal_blocks\n> \n> I was thinking that a dedicated background worker would be a good\n> option, but Stephen Frost seems concerned (over on the other thread)\n> about how much load that would generate. That never really occurred\n> to me as a serious issue and I suspect for many people it wouldn't be,\n> but there might be some.\n\nWhile I do think we should at least be thinking about the load caused\nfrom scanning the WAL to generate a list of blocks that are changed, the\nload I was more concerned with in the other thread is the effort\nrequired to actually merge all of those changes together over a large\namount of WAL. I'm also not saying that we couldn't have either of\nthose pieces done as a background worker, just that it'd be really nice\nto have an external tool (or library) that can be used on an independent\nsystem to do that work.\n\n> It's cool that you have a command-line tool that does this as well.\n> Over there, it was also discussed that we might want to have both a\n> command-line tool and a background worker. I think, though, that we\n> would want to get the output in some kind of compressed binary format,\n> rather than text. 
e.g.\n> \n> 4-byte database OID\n> 4-byte tablespace OID\n> any number of relation OID/block OID pairings for that\n> database/tablespace combination\n> 4-byte zero to mark the end of the relation OID/block OID list\n> and then repeat all of the above any number of times\n\nI agree that we'd like to get the data in a binary format of some kind.\n\n> That might be too dumb and I suspect we want some headers and a\n> checksum, but we should try to somehow exploit the fact that there\n> aren't likely to be many distinct databases or many distinct\n> tablespaces mentioned -- whereas relation OID and block number will\n> probably have a lot more entropy.\n\nI'm not remembering exactly where this idea came from, but I don't\nbelieve it's my own (and I think there's some tool which already does\nthis.. maybe it's rsync?), but I certainly don't think we want to\nrepeat the relation OID for every block, and I don't think we really\nwant to store a block number for every block. Instead, something like:\n\n4-byte database OID\n4-byte tablespace OID\nrelation OID\n\nstarting-ending block numbers\nbitmap covering range of blocks\nstarting-ending block numbers\nbitmap covering range of blocks\n4-byte zero to mark the end of the relation\n...\n4-byte database OID\n4-byte tablespace OID\nrelation OID\n\nstarting-ending block numbers\nbitmap covering range of blocks\n4-byte zero to mark the end of the relation\n...\n\nOnly for relations which actually have changes though, of course.\n\nHaven't implemented it, so it's entirely possible there's reasons why it\nwouldn't work, but I do like the bitmap idea. I definitely think we\nneed a checksum, as you mentioned.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 19 Apr 2019 20:39:51 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
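Stephen's range-plus-bitmap layout from the message above could be sketched roughly as below. For brevity this emits a single (start, end, bitmap) entry per relation instead of splitting into multiple ranges when the block numbers are sparse, so it only hints at the proposed format; all names are invented.

```python
def blocks_to_range_bitmap(blocknos):
    # Collapse a sorted, deduplicated list of modified block numbers for
    # one relation into (start, end, bitmap) entries, one bit per block
    # in the covered range. A real encoder would break sparse inputs
    # into several ranges so the bitmaps stay dense.
    if not blocknos:
        return []
    start, end = blocknos[0], blocknos[-1]
    bitmap = bytearray((end - start) // 8 + 1)
    for b in blocknos:
        off = b - start
        bitmap[off // 8] |= 1 << (off % 8)
    return [(start, end, bytes(bitmap))]

def range_bitmap_to_blocks(entries):
    out = []
    for start, end, bitmap in entries:
        for off in range((end - start) + 1):
            if bitmap[off // 8] & (1 << (off % 8)):
                out.append(start + off)
    return out
```

Compared with storing a block number per block, the bitmap exploits the locality of writes: a run of modified neighboring blocks costs one bit each rather than four bytes each.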
{
"msg_contents": "On Thu, Apr 18, 2019 at 5:47 PM Bruce Momjian <bruce@momjian.us> wrote:\n> How would the modblock file record all the modified blocks across\n> restarts and crashes? I assume that 1G of WAL would not be available\n> for scanning. I suppose that writing a modblock file to some PGDATA\n> location when WAL is removed would work since during a crash the\n> modblock file could be updated with the contents of the existing pg_wal\n> files.\n\nI think you've got to prevent the WAL from being removed until a\n.modblock file has been written. In more detail, you should (a) scan\nall the WAL segments that will be summarized in the .modblock file,\n(b) write the file under a temporary name, (c) fsync it, (d) rename it\ninto place, (e) fsync it again, and (f) then allow those WAL segments\nto be removed, if they are otherwise eligible to be removed.\n\nIf 1GB of WAL is too much to keep around (which I doubt, except on\nsystems that are so small and low-activity that they don't need\nincremental backups anyway), then you'd have to scan less WAL at once\nand write smaller .modblock files.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 20 Apr 2019 00:09:42 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Fri, Apr 19, 2019 at 8:39 PM Stephen Frost <sfrost@snowman.net> wrote:\n> While I do think we should at least be thinking about the load caused\n> from scanning the WAL to generate a list of blocks that are changed, the\n> load I was more concerned with in the other thread is the effort\n> required to actually merge all of those changes together over a large\n> amount of WAL. I'm also not saying that we couldn't have either of\n> those pieces done as a background worker, just that it'd be really nice\n> to have an external tool (or library) that can be used on an independent\n> system to do that work.\n\nOh. Well, I already explained my algorithm for doing that upthread,\nwhich I believe would be quite cheap.\n\n1. When you generate the .modblock files, stick all the block\nreferences into a buffer. qsort(). Dedup. Write out in sorted\norder.\n\n2. When you want to use a bunch of .modblock files, do the same thing\nMergeAppend does, or what merge-sort does when it does a merge pass.\nRead the first 1MB of each file (or whatever amount). Repeatedly pull\nan item from whichever file has the lowest remaining value, using a\nbinary heap. When no buffered data remains for a particular file,\nread another chunk from that file.\n\nIf each .modblock file covers 1GB of WAL, you could merge the data from\nacross 1TB of WAL using only 1GB of memory, and that's assuming you\nhave a 1MB buffer for each .modblock file. You probably don't need\nsuch a large buffer. If you use, say, a 128kB buffer, you could merge\nthe data from across 8TB of WAL using 1GB of memory. And if you have\n8TB of WAL and you can't spare 1GB for the task of computing which\nblocks need to be included in your incremental backup, it's time for a\nhardware upgrade.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 20 Apr 2019 00:17:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
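The two-step scheme described above (sort and dedup each chunk's block references at generation time, then merge many pre-sorted .modblock files with a binary heap) can be sketched in a few lines. The `(relfilenode, fork, block)` tuple layout is an assumption for illustration; the thread does not fix a concrete on-disk format.

```python
import heapq

def merge_modblock_streams(streams):
    """Yield distinct block references from already-sorted iterables,
    pulling the smallest remaining entry via a binary heap (heapq.merge)."""
    last = None
    for ref in heapq.merge(*streams):
        if ref != last:          # inputs are sorted, so duplicates are adjacent
            yield ref
            last = ref

# Each "file" stands in for a sorted, deduped .modblock summary:
f1 = [(16384, 0, 1), (16384, 0, 7)]
f2 = [(16384, 0, 7), (16385, 0, 2)]
print(list(merge_modblock_streams([f1, f2])))
# [(16384, 0, 1), (16384, 0, 7), (16385, 0, 2)]
```

In the real design each stream would read the file in fixed-size chunks (the 1MB or 128kB buffers discussed below) rather than holding it in memory.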
{
"msg_contents": "On Thu, Apr 18, 2019 at 8:38 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, Apr 18, 2019 at 03:51:14PM -0400, Robert Haas wrote:\n> > I was thinking that a dedicated background worker would be a good\n> > option, but Stephen Frost seems concerned (over on the other thread)\n> > about how much load that would generate. That never really occurred\n> > to me as a serious issue and I suspect for many people it wouldn't be,\n> > but there might be some.\n>\n> WAL segment size can go up to 1GB, and this does not depend on the\n> compilation anymore. So scanning a very large segment is not going to\n> be free.\n\nThe segment size doesn't have much to do with it. If you make\nsegments bigger, you'll have to scan fewer larger ones; if you make\nthem smaller, you'll have more smaller ones. The only thing that\nreally matters is the amount of I/O and CPU required, and that doesn't\nchange very much as you vary the segment size.\n\n> I think that the performance concerns of Stephen are legit\n> as now we have on the WAL partitions sequential read and write\n> patterns.\n\nAs to that, what I'm proposing here is no different than what we are\nalready doing with physical and logical replication, except that it's\nprobably a bit cheaper. Physical replication reads all the WAL and\nsends it all out over the network. Logical replication reads all the\nWAL, does a bunch of computation, and then sends the results, possibly\nfiltered, out over the network. This would read the WAL and then\nwrite a relatively small file to your local disk.\n\nI think the impact will be about the same as having one additional\nstandby, give or take.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 20 Apr 2019 00:21:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Fri, Apr 19, 2019 at 8:39 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > While I do think we should at least be thinking about the load caused\n> > from scanning the WAL to generate a list of blocks that are changed, the\n> > load I was more concerned with in the other thread is the effort\n> > required to actually merge all of those changes together over a large\n> > amount of WAL. I'm also not saying that we couldn't have either of\n> > those pieces done as a background worker, just that it'd be really nice\n> > to have an external tool (or library) that can be used on an independent\n> > system to do that work.\n> \n> Oh. Well, I already explained my algorithm for doing that upthread,\n> which I believe would be quite cheap.\n> \n> 1. When you generate the .modblock files, stick all the block\n> references into a buffer. qsort(). Dedup. Write out in sorted\n> order.\n\nHaving all of the block references in a sorted order does seem like it\nwould help, but would also make those potentially quite a bit larger\nthan necessary (I had some thoughts about making them smaller elsewhere\nin this discussion). That might be worth it though. I suppose it might\nalso be possible to line up the bitmaps suggested elsewhere to do\nessentially a BitmapOr of them to identify the blocks changed (while\neffectively de-duping at the same time).\n\n> 2. When you want to use a bunch of .modblock files, do the same thing\n> MergeAppend does, or what merge-sort does when it does a merge pass.\n> Read the first 1MB of each file (or whatever amount). Repeatedly pull\n> an item from whichever file has the lowest remaining value, using a\n> binary heap. When no buffered data remains for a particular file,\n> read another chunk from that file.\n\nSure, this is essentially a MergeAppend/MergeSort+GroupAgg on top to get\nthe distinct set, if I'm following what you're suggesting here.\n\n> If each .modblock file covers 1GB of WAL, you could merge the data from\n> across 1TB of WAL using only 1GB of memory, and that's assuming you\n> have a 1MB buffer for each .modblock file. You probably don't need\n> such a large buffer. If you use, say, a 128kB buffer, you could merge\n> the data from across 8TB of WAL using 1GB of memory. And if you have\n> 8TB of WAL and you can't spare 1GB for the task of computing which\n> blocks need to be included in your incremental backup, it's time for a\n> hardware upgrade.\n\nHow much additional work is it going to be to sort/dedup for each 1GB of\nWAL though, along with the resulting size? I'm specifically thinking\nabout some of the very high WAL-generation rate systems that I've seen,\nwith TBs of WAL per day. I get that once you've got nicely sorted\ninputs that it isn't too bad to generate a distinct set from that, but\nit seems like that moves the problem to the sorting side then rather\nthan eliminating it, and may make other things less efficient,\nparticularly when there's workloads that have strings of modified\nbuffers together and we could capture that more efficiently.\n\nThanks,\n\nStephen",
"msg_date": "Sat, 20 Apr 2019 00:42:38 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
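The BitmapOr idea Stephen floats above can be sketched directly: represent each chunk's modified blocks for one relation fork as a bitmap and OR the bitmaps together, which unions and de-duplicates in one pass. The representation here (a Python int used as a bit set) is an illustration, not anything the thread specifies.

```python
def blocks_to_bitmap(blocks):
    """Pack block numbers into an integer used as a bit set."""
    bm = 0
    for b in blocks:
        bm |= 1 << b
    return bm

def bitmap_to_blocks(bm):
    """Unpack the bit set back into a sorted list of block numbers."""
    out, b = [], 0
    while bm:
        if bm & 1:
            out.append(b)
        bm >>= 1
        b += 1
    return out

# Two WAL chunks both touched block 7; the OR merges and dedups them.
combined = blocks_to_bitmap([1, 7]) | blocks_to_bitmap([7, 42])
print(bitmap_to_blocks(combined))  # [1, 7, 42]
```

Whether a bitmap beats a sorted block-number list depends, as noted later in the thread, on how dense and clustered the modifications are.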
{
"msg_contents": "On Sat, Apr 20, 2019 at 12:09:42AM -0400, Robert Haas wrote:\n> On Thu, Apr 18, 2019 at 5:47 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > How would the modblock file record all the modified blocks across\n> > restarts and crashes? I assume that 1G of WAL would not be available\n> > for scanning. I suppose that writing a modblock file to some PGDATA\n> > location when WAL is removed would work since during a crash the\n> > modblock file could be updated with the contents of the existing pg_wal\n> > files.\n> \n> I think you've got to prevent the WAL from being removed until a\n> .modblock file has been written. In more detail, you should (a) scan\n> all the WAL segments that will be summarized in the .modblock file,\n> (b) write the file under a temporary name, (c) fsync it, (d) rename it\n> into place, (e) fsync it again, and (f) then allow those WAL segments\n> to be removed, if they are otherwise eligible to be removed.\n\nMakes sense. So when you are about to remove WAL, you create the\n.modblock files for all complete WAL files and only create a new one\nwhen you are about to remove a WAL that was not in a previous .modblock\nfile.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 20 Apr 2019 09:18:15 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
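Steps (b) through (e) quoted above are the classic write-temp/fsync/rename/fsync dance. A minimal sketch follows; the path and the choice to fsync the parent directory after the rename are assumptions (one common reading of the final fsync step), not details spelled out in the message.

```python
import os

def write_modblock_durably(path, data):
    """Write a summary file so a crash leaves it either fully present or absent."""
    tmp = path + ".tmp"                      # (b) write under a temporary name
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())                 # (c) fsync the temp file
    os.rename(tmp, path)                     # (d) atomically rename into place
    dirfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dirfd)                      # (e) make the rename itself durable
    finally:
        os.close(dirfd)
    # (f) only now would the summarized WAL segments be eligible for removal
```

The point of the ordering is that no reader can ever observe a partially written .modblock file, and WAL is never removed before its summary is safely on disk.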
{
"msg_contents": "On Sat, Apr 20, 2019 at 12:21:36AM -0400, Robert Haas wrote:\n> As to that, what I'm proposing here is no different than what we are\n> already doing with physical and logical replication, except that it's\n> probably a bit cheaper. Physical replication reads all the WAL and\n> sends it all out over the network. Logical replication reads all the\n> WAL, does a bunch of computation, and then sends the results, possibly\n> filtered, out over the network. This would read the WAL and then\n> write a relatively small file to your local disk.\n> \n> I think the impact will be about the same as having one additional\n> standby, give or take.\n\nGood point.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 20 Apr 2019 09:18:32 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Sat, Apr 20, 2019 at 9:18 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > I think you've got to prevent the WAL from being removed until a\n> > .modblock file has been written. In more detail, you should (a) scan\n> > all the WAL segments that will be summarized in the .modblock file,\n> > (b) write the file under a temporary name, (c) fsync it, (d) rename it\n> > into place, (e) fsync it again, and (f) then allow those WAL segments\n> > to be removed, if they are otherwise eligible to be removed.\n>\n> Makes sense. So when you are about to remove WAL, you create the\n> .modblock files for all complete WAL files and only create a new one\n> when you are about to remove a WAL that was not in a previous .modblock\n> file.\n\nThere will often be a partial WAL record at the end of each file. So\nif you make a .modblock file for WAL files 1-10, you can probably\nremove files 1-9, but you probably have to keep WAL file 10 around\nuntil you generate the NEXT .modblock file, because otherwise you\nwouldn't be able to read and parse the WAL record that spans the end\nof file 10 and the beginning of file 11.\n\nThis is a detail that is very very very important to get right.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 20 Apr 2019 16:17:08 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Sat, Apr 20, 2019 at 12:42 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > Oh. Well, I already explained my algorithm for doing that upthread,\n> > which I believe would be quite cheap.\n> >\n> > 1. When you generate the .modblock files, stick all the block\n> > references into a buffer. qsort(). Dedup. Write out in sorted\n> > order.\n>\n> Having all of the block references in a sorted order does seem like it\n> would help, but would also make those potentially quite a bit larger\n> than necessary (I had some thoughts about making them smaller elsewhere\n> in this discussion). That might be worth it though. I suppose it might\n> also be possible to line up the bitmaps suggested elsewhere to do\n> essentially a BitmapOr of them to identify the blocks changed (while\n> effectively de-duping at the same time).\n\nI don't see why this would make them bigger than necessary. If you\nsort by relfilenode/fork/blocknumber and dedup, then references to\nnearby blocks will be adjacent in the file. You can then decide what\nformat will represent that most efficiently on output. Whether or not\na bitmap is a better idea than a list of block numbers or something else\ndepends on what percentage of blocks are modified and how clustered\nthey are.\n\n> > 2. When you want to use a bunch of .modblock files, do the same thing\n> > MergeAppend does, or what merge-sort does when it does a merge pass.\n> > Read the first 1MB of each file (or whatever amount). Repeatedly pull\n> > an item from whichever file has the lowest remaining value, using a\n> > binary heap. When no buffered data remains for a particular file,\n> > read another chunk from that file.\n>\n> Sure, this is essentially a MergeAppend/MergeSort+GroupAgg on top to get\n> the distinct set, if I'm following what you're suggesting here.\n\nYeah, something like that.\n\n> > If each .modblock file covers 1GB of WAL, you could merge the data from\n> > across 1TB of WAL using only 1GB of memory, and that's assuming you\n> > have a 1MB buffer for each .modblock file. You probably don't need\n> > such a large buffer. If you use, say, a 128kB buffer, you could merge\n> > the data from across 8TB of WAL using 1GB of memory. And if you have\n> > 8TB of WAL and you can't spare 1GB for the task of computing which\n> > blocks need to be included in your incremental backup, it's time for a\n> > hardware upgrade.\n>\n> How much additional work is it going to be to sort/dedup for each 1GB of\n> WAL though, along with the resulting size?\n\nWell all you have to do is quicksort an array with a million or so\nelements. I don't know off-hand how many CPU cycles that takes but I\ndoubt it's a whole lot. And for the amount of CPU time and memory\nthat it saves you when you actually go to use the files, I think it's\ngot to be worth it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 20 Apr 2019 16:21:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
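The memory arithmetic in the quoted paragraph can be checked in a few lines; the one-.modblock-file-per-1GB-of-WAL granularity is the assumption Robert states upthread.

```python
# Back-of-the-envelope check: per-file read buffer times the number of
# summary files (one per 1GB of WAL) gives the merge's working memory.
GB = 1024 ** 3
MB = 1024 ** 2
kB = 1024

files_for_1tb = (1024 * GB) // GB            # 1TB of WAL -> 1024 summary files
assert files_for_1tb * 1 * MB == 1 * GB      # 1MB buffers: 1GB of memory

files_for_8tb = (8 * 1024 * GB) // GB        # 8TB of WAL -> 8192 summary files
assert files_for_8tb * 128 * kB == 1 * GB    # 128kB buffers: still 1GB
```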
{
"msg_contents": "On Sat, Apr 20, 2019 at 04:17:08PM -0400, Robert Haas wrote:\n> On Sat, Apr 20, 2019 at 9:18 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > > I think you've got to prevent the WAL from being removed until a\n> > > .modblock file has been written. In more detail, you should (a) scan\n> > > all the WAL segments that will be summarized in the .modblock file,\n> > > (b) write the file under a temporary name, (c) fsync it, (d) rename it\n> > > into place, (e) fsync it again, and (f) then allow those WAL segments\n> > > to be removed, if they are otherwise eligible to be removed.\n> >\n> > Makes sense. So when you are about to remove WAL, you create the\n> > .modblock files for all complete WAL files and only create a new one\n> > when you are about to remove a WAL that was not in a previous .modblock\n> > file.\n> \n> There will often be a partial WAL record at the end of each file. So\n> if you make a .modblock file for WAL files 1-10, you can probably\n> remove files 1-9, but you probably have to keep WAL file 10 around\n> until you generate the NEXT .modblock file, because otherwise you\n> wouldn't be able to read and parse the WAL record that spans the end\n> of file 10 and the beginning of file 11.\n> \n> This is a detail that is very very very important to get right.\n\nGood point. You mentioned:\n\n\tIt seems better to me to give the files names like\n\t${TLI}.${STARTLSN}.${ENDLSN}.modblock, e.g.\n\t00000001.0000000168000058.00000001687DBBB8.modblock, so that you can\n\tsee exactly which *records* are covered by that segment.\n\nbut it seems like it should be ${TLI}.${ENDLSN}... (END first) because\nyou would not want to delete the modblock file until you are about to\ndelete the final WAL, not the first WAL, but as you mentioned, it might\nbe ENDLSN-1.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 20 Apr 2019 17:54:45 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Sat, Apr 20, 2019 at 5:54 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Good point. You mentioned:\n>\n> It seems better to me to give the files names like\n> ${TLI}.${STARTLSN}.${ENDLSN}.modblock, e.g.\n> 00000001.0000000168000058.00000001687DBBB8.modblock, so that you can\n> see exactly which *records* are covered by that segment.\n>\n> but it seems like it should be ${TLI}.${ENDLSN}... (END first) because\n> you would not want to delete the modblock file until you are about to\n> delete the final WAL, not the first WAL, but as you mentioned, it might\n> be ENDLSN-1.\n\nHmm. It seems to me that it is almost universally the convention to\nput the starting point prior to the ending point. If you are taking a\nbiology class, the teacher will not tell you to study chapters six\nthrough three.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 21 Apr 2019 18:24:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Sat, Apr 20, 2019 at 12:21:36AM -0400, Robert Haas wrote:\n> The segment size doesn't have much to do with it. If you make\n> segments bigger, you'll have to scan fewer larger ones; if you make\n> them smaller, you'll have more smaller ones. The only thing that\n> really matters is the amount of I/O and CPU required, and that doesn't\n> change very much as you vary the segment size.\n\nIf you create the extra file when a segment is finished and we switch\nto a new one, then the extra work would happen for a random backend,\nand it is going to be more costly to scan a 1GB segment than a 16MB\nsegment as a one-time operation, and less backends would see a\nslowdown at equal WAL data generated. From what I can see, you are\nnot planning to do such operations when a segment finishes being\nwritten, which would be much better.\n\n> As to that, what I'm proposing here is no different than what we are\n> already doing with physical and logical replication, except that it's\n> probably a bit cheaper. Physical replication reads all the WAL and\n> sends it all out over the network. Logical replication reads all the\n> WAL, does a bunch of computation, and then sends the results, possibly\n> filtered, out over the network. This would read the WAL and then\n> write a relatively small file to your local disk.\n> \n> I think the impact will be about the same as having one additional\n> standby, give or take.\n\nIf you put the load on an extra process, yeah I don't think that it\nwould be noticeable.\n--\nMichael",
"msg_date": "Mon, 22 Apr 2019 11:20:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Sun, Apr 21, 2019 at 06:24:50PM -0400, Robert Haas wrote:\n> On Sat, Apr 20, 2019 at 5:54 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Good point. You mentioned:\n> >\n> > It seems better to me to give the files names like\n> > ${TLI}.${STARTLSN}.${ENDLSN}.modblock, e.g.\n> > 00000001.0000000168000058.00000001687DBBB8.modblock, so that you can\n> > see exactly which *records* are covered by that segment.\n> >\n> > but it seems like it should be ${TLI}.${ENDLSN}... (END first) because\n> > you would not want to delete the modblock file until you are about to\n> > delete the final WAL, not the first WAL, but as you mentioned, it might\n> > be ENDLSN-1.\n> \n> Hmm. It seems to me that it is almost universally the convention to\n> put the starting point prior to the ending point. If you are taking a\n> biology class, the teacher will not tell you to study chapters six\n> through three.\n\nMy point is that most WAL archive tools will order and remove files\nbased on their lexical ordering, so if you put the start first, the file\nwill normally be removed when it should be kept, e.g., if you have WAL\nfiles like:\n\n\t000000010000000000000001\n\t000000010000000000000002\n\t000000010000000000000003\n\t000000010000000000000004\n\t000000010000000000000005\n\nputting the start first and archiving some wal would lead to:\n\n\t000000010000000000000001-000000010000000000000004.modblock\n\t000000010000000000000003\n\t000000010000000000000004\n\t000000010000000000000005\n\nWe removed 1 and 2, but kept the modblock file, which looks out of\norder. Having the end at the start would have:\n\n\t000000010000000000000003\n\t000000010000000000000004\n\t000000010000000000000004-000000010000000000000001.modblock\n\t000000010000000000000005\n\nMy point is that you would normally only remove the modblock file when 4\nis removed because this modblock file is useful for incremental backups\nfrom base backups that happened between 1 and 4.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 22 Apr 2019 11:48:21 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
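Bruce's ordering point can be seen with a plain lexical sort, which is effectively what most archive-cleanup tooling does. The segment names are the examples from the message; the .modblock naming scheme is the hypothetical one under discussion.

```python
# Lexical sort of the archive directory: start-first naming makes the
# summary file sort before segments it still covers, while end-first
# naming keeps it next to the last segment it summarizes.
segments = [
    "000000010000000000000003",
    "000000010000000000000004",
    "000000010000000000000005",
]
start_first = "000000010000000000000001-000000010000000000000004.modblock"
end_first = "000000010000000000000004-000000010000000000000001.modblock"

print(sorted(segments + [start_first]))  # .modblock sorts before every kept segment
print(sorted(segments + [end_first]))    # .modblock sorts right after segment 4
```

With end-first naming, a tool that prunes everything lexically below a cutoff would drop the summary only once segment 4 itself is dropped, which is the retention behavior Bruce wants.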
{
"msg_contents": "On Mon, Apr 22, 2019 at 11:20:43AM +0900, Michael Paquier wrote:\n> On Sat, Apr 20, 2019 at 12:21:36AM -0400, Robert Haas wrote:\n> > The segment size doesn't have much to do with it. If you make\n> > segments bigger, you'll have to scan fewer larger ones; if you make\n> > them smaller, you'll have more smaller ones. The only thing that\n> > really matters is the amount of I/O and CPU required, and that doesn't\n> > change very much as you vary the segment size.\n> \n> If you create the extra file when a segment is finished and we switch\n> to a new one, then the extra work would happen for a random backend,\n> and it is going to be more costly to scan a 1GB segment than a 16MB\n> segment as a one-time operation, and less backends would see a\n> slowdown at equal WAL data generated. From what I can see, you are\n> not planning to do such operations when a segment finishes being\n> written, which would be much better.\n\nI think your point is that the 16MB is more likely to be in memory,\nwhile the 1GB is less likely.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 22 Apr 2019 11:50:49 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Sun, Apr 21, 2019 at 10:21 PM Michael Paquier <michael@paquier.xyz> wrote:\n> If you create the extra file when a segment is finished and we switch\n> to a new one, then the extra work would happen for a random backend,\n> and it is going to be more costly to scan a 1GB segment than a 16MB\n> segment as a one-time operation, and less backends would see a\n> slowdown at equal WAL data generated. From what I can see, you are\n> not planning to do such operations when a segment finishes being\n> written, which would be much better.\n\nWell, my plan was to do it all from a background worker, so I do not\nthink a random backend would ever have to do any extra work.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 22 Apr 2019 12:04:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 11:48 AM Bruce Momjian <bruce@momjian.us> wrote:\n> My point is that most WAL archive tools will order and remove files\n> based on their lexical ordering, so if you put the start first, the file\n> will normally be removed when it should be kept, e.g., if you have WAL\n> files like:\n>\n> 000000010000000000000001\n> 000000010000000000000002\n> 000000010000000000000003\n> 000000010000000000000004\n> 000000010000000000000005\n>\n> putting the start first and archiving some wal would lead to:\n>\n> 000000010000000000000001-000000010000000000000004.modblock\n> 000000010000000000000003\n> 000000010000000000000004\n> 000000010000000000000005\n>\n> We removed 1 and 2, but kept the modblock file, which looks out of\n> order. Having the end at the start would have:\n>\n> 000000010000000000000003\n> 000000010000000000000004\n> 000000010000000000000004-000000010000000000000001.modblock\n> 000000010000000000000005\n>\n> My point is that you would normally only remove the modblock file when 4\n> is removed because this modblock files is useful for incremental backups\n> from base backups that happened between 1 and 4.\n\nThat's an interesting point. On the other hand, I think it would be\ntypical to want the master to retain .modblock files for much longer\nthan it retains WAL segments, and in my design, the WAL archive\nwouldn't see those files at all; they'd be stored on the master. I\nwas actually thinking that they should possibly be stored in a\nseparate directory to avoid confusion.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 22 Apr 2019 12:15:32 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 12:15:32PM -0400, Robert Haas wrote:\n> On Mon, Apr 22, 2019 at 11:48 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > My point is that you would normally only remove the modblock file when 4\n> > is removed because this modblock files is useful for incremental backups\n> > from base backups that happened between 1 and 4.\n> \n> That's an interesting point. On the other hand, I think it would be\n> typical to want the master to retain .modblock files for much longer\n> than it retains WAL segments, and in my design, the WAL archive\n> wouldn't see those files at all; they'd be stored on the master. I\n> was actually thinking that they should possibly be stored in a\n> separate directory to avoid confusion.\n\nI assumed the modblock files would be stored in the WAL archive so some\nexternal tools could generate incremental backups using just the WAL\nfiles. I assumed they would also be sent to standby servers so\nincremental backups could be done on standby servers too.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 22 Apr 2019 12:35:42 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 12:35 PM Bruce Momjian <bruce@momjian.us> wrote:\n> I assumed the modblock files would be stored in the WAL archive so some\n> external tools could generate incremental backups using just the WAL\n> files. I assumed they would also be sent to standby servers so\n> incremental backups could be done on standby servers too.\n\nYeah, that's another possible approach. I am not sure what is best.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 22 Apr 2019 13:11:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 01:11:22PM -0400, Robert Haas wrote:\n> On Mon, Apr 22, 2019 at 12:35 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > I assumed the modblock files would be stored in the WAL archive so some\n> > external tools could generate incremental backups using just the WAL\n> > files. I assumed they would also be sent to standby servers so\n> > incremental backups could be done on standby servers too.\n> \n> Yeah, that's another possible approach. I am not sure what is best.\n\nI am thinking you need to allow any of these, and putting the WAL files\nin pg_wal and having them streamed and archived gives that flexibility.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 22 Apr 2019 13:15:49 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Thu, Apr 18, 2019 at 04:25:24PM -0400, Robert Haas wrote:\n>On Thu, Apr 18, 2019 at 3:51 PM Bruce Momjian <bruce@momjian.us> wrote:\n>> How would you choose the STARTLSN/ENDLSN? If you could do it per\n>> checkpoint, rather than per-WAL, I think that would be great.\n>\n>I thought of that too. It seems appealing, because you probably only\n>really care whether a particular block was modified between one\n>checkpoint and the next, not exactly when during that interval it was\n>modified. \n\nThat's probably true for incremental backups, but there are other use\ncases that could leverage this information.\n\nSome time ago there was a discussion about prefetching blocks during\nrecovery on a standby, and that's a great example of a use case that\nbenefits from this - look which blocks were modified in the next chunk\nof WAL, prefetch them. But that requires fairly detailed information\nabout which blocks were modified in the next few megabytes of WAL.\n\nSo just doing it once per checkpoint (or even anything above a single\nWAL segment) and removing all the detailed LSN locations makes it useless\nfor this use case.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 23 Apr 2019 01:04:25 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 01:15:49PM -0400, Bruce Momjian wrote:\n>On Mon, Apr 22, 2019 at 01:11:22PM -0400, Robert Haas wrote:\n>> On Mon, Apr 22, 2019 at 12:35 PM Bruce Momjian <bruce@momjian.us> wrote:\n>> > I assumed the modblock files would be stored in the WAL archive so some\n>> > external tools could generate incremental backups using just the WAL\n>> > files. I assumed they would also be sent to standby servers so\n>> > incremental backups could be done on standby servers too.\n>>\n>> Yeah, that's another possible approach. I am not sure what is best.\n>\n>I am thinking you need to allow any of these, and putting the WAL files\n>in pg_wal and having them streamed and archived gives that flexibility.\n>\n\nI agree - this would be quite useful for the prefetching use case I've\nalready mentioned in my previous message.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 23 Apr 2019 01:07:18 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Sat, Apr 20, 2019 at 04:21:52PM -0400, Robert Haas wrote:\n>On Sat, Apr 20, 2019 at 12:42 AM Stephen Frost <sfrost@snowman.net> wrote:\n>> > Oh. Well, I already explained my algorithm for doing that upthread,\n>> > which I believe would be quite cheap.\n>> >\n>> > 1. When you generate the .modblock files, stick all the block\n>> > references into a buffer. qsort(). Dedup. Write out in sorted\n>> > order.\n>>\n>> Having all of the block references in a sorted order does seem like it\n>> would help, but would also make those potentially quite a bit larger\n>> than necessary (I had some thoughts about making them smaller elsewhere\n>> in this discussion). That might be worth it though. I suppose it might\n>> also be possible to line up the bitmaps suggested elsewhere to do\n>> essentially a BitmapOr of them to identify the blocks changed (while\n>> effectively de-duping at the same time).\n>\n>I don't see why this would make them bigger than necessary. If you\n>sort by relfilenode/fork/blocknumber and dedup, then references to\n>nearby blocks will be adjacent in the file. You can then decide what\n>format will represent that most efficiently on output. Whether or not\n>a bitmap is a better idea than a list of block numbers or something else\n>depends on what percentage of blocks are modified and how clustered\n>they are.\n>\n\nNot sure I understand correctly - do you suggest to deduplicate and sort\nthe data before writing them into the .modblock files? Because then the\nsorting would make this information mostly useless for the recovery\nprefetching use case I mentioned elsewhere. For that to work we need\ninformation about both the LSN and block, in the LSN order.\n\nSo if we want to allow that use case to leverage this infrastructure, we\nneed to write the .modfiles kinda \"raw\" and do this processing in some\nlater step.\n\nNow, maybe the incremental backup use case is so much more important the\nright thing to do is ignore this other use case, and I'm OK with that -\nas long as it's a conscious choice.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 23 Apr 2019 01:21:27 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
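Robert's recipe quoted in the message above — "stick all the block references into a buffer. qsort(). Dedup. Write out in sorted order." — is easy to mock up. The sketch below is purely illustrative Python: the (relfilenode, fork, blockno) tuple layout, the 10-byte record encoding, and the function name are assumptions for illustration, not the actual .modblock format under discussion.

```python
import io
import struct

def write_modblock(refs, out):
    # Sort by (relfilenode, fork, blockno) so references to nearby blocks
    # end up adjacent, then dedup by skipping repeats of the previous entry.
    prev = None
    for ref in sorted(refs):
        if ref != prev:
            out.write(struct.pack("=IHI", *ref))  # 4 + 2 + 4 = 10 bytes
            prev = ref

# duplicate (16384, 0, 7) should be written only once
refs = [(16384, 0, 7), (16384, 0, 7), (16385, 0, 1), (16384, 0, 3)]
buf = io.BytesIO()
write_modblock(refs, buf)
```

Because the output is sorted, a later pass is free to pick whatever on-disk representation is densest for the observed clustering, which is exactly the point Robert makes about deferring the format decision.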
{
"msg_contents": "On Tue, Apr 23, 2019 at 01:21:27AM +0200, Tomas Vondra wrote:\n> On Sat, Apr 20, 2019 at 04:21:52PM -0400, Robert Haas wrote:\n> > On Sat, Apr 20, 2019 at 12:42 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > > > Oh. Well, I already explained my algorithm for doing that upthread,\n> > > > which I believe would be quite cheap.\n> > > >\n> > > > 1. When you generate the .modblock files, stick all the block\n> > > > references into a buffer. qsort(). Dedup. Write out in sorted\n> > > > order.\n> > > \n> > > Having all of the block references in a sorted order does seem like it\n> > > would help, but would also make those potentially quite a bit larger\n> > > than necessary (I had some thoughts about making them smaller elsewhere\n> > > in this discussion). That might be worth it though. I suppose it might\n> > > also be possible to line up the bitmaps suggested elsewhere to do\n> > > essentially a BitmapOr of them to identify the blocks changed (while\n> > > effectively de-duping at the same time).\n> > \n> > I don't see why this would make them bigger than necessary. If you\n> > sort by relfilenode/fork/blocknumber and dedup, then references to\n> > nearby blocks will be adjacent in the file. You can then decide what\n> > format will represent that most efficiently on output. Whether or not\n> > a bitmap is better idea than a list of block numbers or something else\n> > depends on what percentage of blocks are modified and how clustered\n> > they are.\n> > \n> \n> Not sure I understand correctly - do you suggest to deduplicate and sort\n> the data before writing them into the .modblock files? Because that the\n> the sorting would make this information mostly useless for the recovery\n> prefetching use case I mentioned elsewhere. 
For that to work we need\n> information about both the LSN and block, in the LSN order.\n> \n> So if we want to allow that use case to leverage this infrastructure, we\n> need to write the .modfiles kinda \"raw\" and do this processing in some\n> later step.\n> \n> Now, maybe the incremental backup use case is so much more important the\n> right thing to do is ignore this other use case, and I'm OK with that -\n> as long as it's a conscious choice.\n\nI think the concern is that the more granular the modblock files are\n(with less de-duping), the larger they will be.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 22 Apr 2019 19:44:45 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
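Robert's remark quoted above — that "whether a bitmap is a better idea than a list of block numbers" depends on what percentage of blocks are modified — reduces to simple arithmetic: a bitmap costs one bit per block of the relation regardless of how many changed, while a list costs four bytes per modified block, so the bitmap wins once more than about 1/32 (~3%) of the blocks are touched. A back-of-envelope sketch (the sizes are illustrative, not anything proposed in the thread):

```python
def bitmap_bytes(relation_blocks):
    # one bit per block of the relation, regardless of how many changed
    return (relation_blocks + 7) // 8

def list_bytes(modified_blocks):
    # one 4-byte (uint32) block number per modified block
    return 4 * modified_blocks

rel_blocks = 131072            # a 1 GB relation at 8 kB per block
crossover = rel_blocks // 32   # ~3% of blocks modified: the break-even point
```

Below the crossover a sorted list of block numbers is smaller; above it the bitmap is, which is why a sorted, deduped stream (where the density is easy to measure) lets the writer pick per-relation.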
{
"msg_contents": "On Mon, Apr 22, 2019 at 07:44:45PM -0400, Bruce Momjian wrote:\n>On Tue, Apr 23, 2019 at 01:21:27AM +0200, Tomas Vondra wrote:\n>> On Sat, Apr 20, 2019 at 04:21:52PM -0400, Robert Haas wrote:\n>> > On Sat, Apr 20, 2019 at 12:42 AM Stephen Frost <sfrost@snowman.net> wrote:\n>> > > > Oh. Well, I already explained my algorithm for doing that upthread,\n>> > > > which I believe would be quite cheap.\n>> > > >\n>> > > > 1. When you generate the .modblock files, stick all the block\n>> > > > references into a buffer. qsort(). Dedup. Write out in sorted\n>> > > > order.\n>> > >\n>> > > Having all of the block references in a sorted order does seem like it\n>> > > would help, but would also make those potentially quite a bit larger\n>> > > than necessary (I had some thoughts about making them smaller elsewhere\n>> > > in this discussion). That might be worth it though. I suppose it might\n>> > > also be possible to line up the bitmaps suggested elsewhere to do\n>> > > essentially a BitmapOr of them to identify the blocks changed (while\n>> > > effectively de-duping at the same time).\n>> >\n>> > I don't see why this would make them bigger than necessary. If you\n>> > sort by relfilenode/fork/blocknumber and dedup, then references to\n>> > nearby blocks will be adjacent in the file. You can then decide what\n>> > format will represent that most efficiently on output. Whether or not\n>> > a bitmap is better idea than a list of block numbers or something else\n>> > depends on what percentage of blocks are modified and how clustered\n>> > they are.\n>> >\n>>\n>> Not sure I understand correctly - do you suggest to deduplicate and sort\n>> the data before writing them into the .modblock files? Because that the\n>> the sorting would make this information mostly useless for the recovery\n>> prefetching use case I mentioned elsewhere. 
For that to work we need\n>> information about both the LSN and block, in the LSN order.\n>>\n>> So if we want to allow that use case to leverage this infrastructure, we\n>> need to write the .modfiles kinda \"raw\" and do this processing in some\n>> later step.\n>>\n>> Now, maybe the incremental backup use case is so much more important the\n>> right thing to do is ignore this other use case, and I'm OK with that -\n>> as long as it's a conscious choice.\n>\n>I think the concern is that the more graunular the modblock files are\n>(with less de-duping), the larger they will be.\n>\n\nWell, I understand that concern - all I'm saying is that makes this\nuseless for some use cases (that may or may not be important enough).\n\nHowever, it seems to me those files are guaranteed to be much smaller\nthan the WAL segments, so I don't see how size alone could be an issue\nas long as we do the merging and deduplication when recycling the\nsegments. At that point the standby can't request the WAL from the\nprimary anyway, so it won't need the raw .mdblock files either.\n\nAnd we probably only care about the size of the data we need to keep for\na long time. And that we can deduplicate/reorder any way we want.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 23 Apr 2019 02:13:29 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
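The "merging and deduplication when recycling the segments" step Tomas describes above maps naturally onto a k-way merge of already-sorted per-segment files. A minimal Python sketch — the in-memory tuples stand in for on-disk records, and everything here (names, shapes) is an assumption about a format that does not exist yet:

```python
import heapq

def merge_modblocks(*sorted_ref_lists):
    # k-way merge of per-segment sorted block-reference streams,
    # dropping duplicate references as they stream past
    prev = None
    for ref in heapq.merge(*sorted_ref_lists):
        if ref != prev:
            yield ref
            prev = ref

seg1 = [(16384, 0, 3), (16384, 0, 7)]
seg2 = [(16384, 0, 7), (16385, 0, 1)]
merged = list(merge_modblocks(seg1, seg2))
```

Because the merge is streaming, it needs memory proportional to the number of input files rather than the total number of references, which matters if the raw per-segment files are only consolidated lazily at recycle time.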
{
"msg_contents": "On Tue, Apr 23, 2019 at 02:13:29AM +0200, Tomas Vondra wrote:\n> Well, I understand that concern - all I'm saying is that makes this\n> useless for some use cases (that may or may not be important enough).\n> \n> However, it seems to me those files are guaranteed to be much smaller\n> than the WAL segments, so I don't see how size alone could be an issue\n> as long as we do the merging and deduplication when recycling the\n> segments. At that point the standby can't request the WAL from the\n> primary anyway, so it won't need the raw .mdblock files either.\n> \n> And we probably only care about the size of the data we need to keep for\n> a long time. And that we can deduplicate/reorder any way we want.\n\nWell, the interesting question is whether the server will generate a\nsingle modblock file for all WAL in pg_wal only right before we are\nready to expire some WAL, or whether modblock files will be generated\noffline, perhaps independent of the server, and perhaps by aggregating\nsmaller modblock files.\n\nTo throw out an idea, what if we had an executable that could generate a\nmodblock file by scanning a set of WAL files? How far would that take\nus to meeting incremental backup needs? I can imagine db/relfilenode oid\nvolatility could be a problem, but might be fixable.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 22 Apr 2019 20:52:11 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 08:52:11PM -0400, Bruce Momjian wrote:\n> Well, the interesting question is whether the server will generate a\n> single modblock file for all WAL in pg_wal only right before we are\n> ready to expire some WAL, or whether modblock files will be generated\n> offline, perhaps independent of the server, and perhaps by aggregating\n> smaller modblock files.\n> \n> To throw out an idea, what if we had an executable that could generate a\n> modblock file by scanning a set of WAL files? How far would that take\n> us to meeing incremental backup needs? I can imagine db/relfilenode oid\n> volatility could be a problem, but might be fixable.\n\nWell, this actually brings up a bunch of questions:\n\n* How often do we create blockmod files? Per segment, per checkpoint,\n at WAL deletion time (1GB?)\n\n* What is the blockmod file format? dboid, relfilenode, blocknum?\n Use compression? Sorted?\n\n* How do we create incremental backups?\n\n* What is the incremental backup file format?\n\n* How do we apply incremental backups to base backups?\n\nAnd there are some secondary questions:\n\n* Can blockmod files be merged?\n\n* Can incremental backups be merged?\n\n* Can blockmod files be used for restore prefetching?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 22 Apr 2019 21:18:52 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 7:04 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> Some time ago there was a discussion about prefetching blocks during\n> recovery on a standby, and that's a great example of a use case that\n> benefits from this - look which blocks were modified in the next chunk\n> of WAL, prefetch them. But that requires fairly detailed information\n> about which blocks were modified in the next few megabytes of WAL.\n>\n> So just doing it once per checkpoint (or even anything above a single\n> WAL segment) and removing all the detailed LSN location makes it useless\n> for this use case.\n\nFor this particular use case, wouldn't you want to read the WAL itself\nand use that to issue prefetch requests? Because if you use the\n.modblock files, the data file blocks will end up in memory but the\nWAL blocks won't, and you'll still be waiting for I/O.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 22 Apr 2019 21:51:33 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> On Sat, Apr 20, 2019 at 04:21:52PM -0400, Robert Haas wrote:\n> >On Sat, Apr 20, 2019 at 12:42 AM Stephen Frost <sfrost@snowman.net> wrote:\n> >>> Oh. Well, I already explained my algorithm for doing that upthread,\n> >>> which I believe would be quite cheap.\n> >>>\n> >>> 1. When you generate the .modblock files, stick all the block\n> >>> references into a buffer. qsort(). Dedup. Write out in sorted\n> >>> order.\n> >>\n> >>Having all of the block references in a sorted order does seem like it\n> >>would help, but would also make those potentially quite a bit larger\n> >>than necessary (I had some thoughts about making them smaller elsewhere\n> >>in this discussion). That might be worth it though. I suppose it might\n> >>also be possible to line up the bitmaps suggested elsewhere to do\n> >>essentially a BitmapOr of them to identify the blocks changed (while\n> >>effectively de-duping at the same time).\n> >\n> >I don't see why this would make them bigger than necessary. If you\n> >sort by relfilenode/fork/blocknumber and dedup, then references to\n> >nearby blocks will be adjacent in the file. You can then decide what\n> >format will represent that most efficiently on output. Whether or not\n> >a bitmap is better idea than a list of block numbers or something else\n> >depends on what percentage of blocks are modified and how clustered\n> >they are.\n> \n> Not sure I understand correctly - do you suggest to deduplicate and sort\n> the data before writing them into the .modblock files? Because that the\n> the sorting would make this information mostly useless for the recovery\n> prefetching use case I mentioned elsewhere. For that to work we need\n> information about both the LSN and block, in the LSN order.\n\nI'm not sure I follow- why does the prefetching need to get the blocks\nin LSN order..? 
Once the blocks that we know are going to change in the\nnext segment have been identified, we could prefetch them all and have\nthem ready for when replay gets to them. I'm not sure that we\nspecifically need to have them pre-fetched in the same order that the\nreplay happens and it might even be better to fetch them in an order\nthat's as sequential as possible to get them in as quickly as possible.\n\n> So if we want to allow that use case to leverage this infrastructure, we\n> need to write the .modfiles kinda \"raw\" and do this processing in some\n> later step.\n\nIf we really need the LSN info for the blocks, then we could still\nde-dup, picking the 'first modified in this segment at LSN X', or keep\nboth first and last, or I suppose every LSN if we really want, and then\nhave that information included with the other information about the\nblock. Downstream clients could then sort based on the LSN info if they\nwant to have a list of blocks in sorted-by-LSN-order.\n\n> Now, maybe the incremental backup use case is so much more important the\n> right thing to do is ignore this other use case, and I'm OK with that -\n> as long as it's a conscious choice.\n\nI'd certainly like to have a way to prefetch, but I'm not entirely sure\nthat it makes sense to combine it with this, so while I sketched out\nsome ideas about how to do that above, I don't want it to come across as\nbeing a strong endorsement of the overall idea.\n\nFor pre-fetching purposes, for an async streaming replica, it seems like\nthe wal sender process could potentially just scan the WAL and have a\nlist of blocks ready to pass to the replica which are \"this is what's\ncoming soon\" or similar, rather than working with the modfiles at all.\nNot sure if we'd always send that or if we wait for the replica to ask\nfor it. 
Though for doing WAL replay from the archive, being able to ask\nfor the modfile first to do prefetching before replaying the WAL itself\ncould certainly be beneficial, so maybe it does make sense to have that\ninformation there too.. still not sure we really need it in LSN order\nor that we need to prefetch in LSN order though.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 23 Apr 2019 10:22:46 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Tue, Apr 23, 2019 at 10:22:46AM -0400, Stephen Frost wrote:\n>Greetings,\n>\n>* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n>> On Sat, Apr 20, 2019 at 04:21:52PM -0400, Robert Haas wrote:\n>> >On Sat, Apr 20, 2019 at 12:42 AM Stephen Frost <sfrost@snowman.net> wrote:\n>> >>> Oh. Well, I already explained my algorithm for doing that upthread,\n>> >>> which I believe would be quite cheap.\n>> >>>\n>> >>> 1. When you generate the .modblock files, stick all the block\n>> >>> references into a buffer. qsort(). Dedup. Write out in sorted\n>> >>> order.\n>> >>\n>> >>Having all of the block references in a sorted order does seem like it\n>> >>would help, but would also make those potentially quite a bit larger\n>> >>than necessary (I had some thoughts about making them smaller elsewhere\n>> >>in this discussion). That might be worth it though. I suppose it might\n>> >>also be possible to line up the bitmaps suggested elsewhere to do\n>> >>essentially a BitmapOr of them to identify the blocks changed (while\n>> >>effectively de-duping at the same time).\n>> >\n>> >I don't see why this would make them bigger than necessary. If you\n>> >sort by relfilenode/fork/blocknumber and dedup, then references to\n>> >nearby blocks will be adjacent in the file. You can then decide what\n>> >format will represent that most efficiently on output. Whether or not\n>> >a bitmap is better idea than a list of block numbers or something else\n>> >depends on what percentage of blocks are modified and how clustered\n>> >they are.\n>>\n>> Not sure I understand correctly - do you suggest to deduplicate and sort\n>> the data before writing them into the .modblock files? Because that the\n>> the sorting would make this information mostly useless for the recovery\n>> prefetching use case I mentioned elsewhere. 
For that to work we need\n>> information about both the LSN and block, in the LSN order.\n>\n>I'm not sure I follow- why does the prefetching need to get the blocks\n>in LSN order..? Once the blocks that we know are going to change in the\n>next segment have been identified, we could prefetch them all and have\n>them ready for when replay gets to them. I'm not sure that we\n>specifically need to have them pre-fetched in the same order that the\n>replay happens and it might even be better to fetch them in an order\n>that's as sequential as possible to get them in as quickly as possible.\n>\n\nThat means we'd have to prefetch all blocks for the whole WAL segment,\nwhich is pretty useless, IMO. A single INSERT (especially for indexes) is\noften just ~100B, so a single 16MB segment can fit ~160k of them. Surely\nwe don't want to prefetch all of that at once? And it's even worse for\nlarger WAL segments, which are likely to get more common now that it's an\ninitdb option.\n\nI'm pretty sure the prefetching needs to be more like \"prefetch the next\n1024 blocks we'll need\" or \"prefetch blocks from the next X megabytes of\nWAL\". That doesn't mean we can't do some additional optimization (like\nreordering them a bit), but there's a point where it gets detrimental\nbecause the kernel will just evict some of the prefetched blocks before we\nactually access them. And the larger amount of blocks you prefetch the\nmore likely that is.\n\n\n>> So if we want to allow that use case to leverage this infrastructure, we\n>> need to write the .modfiles kinda \"raw\" and do this processing in some\n>> later step.\n>\n>If we really need the LSN info for the blocks, then we could still\n>de-dup, picking the 'first modified in this segment at LSN X', or keep\n>both first and last, or I suppose every LSN if we really want, and then\n>have that information included with the other information about the\n>block. 
Downstream clients could then sort based on the LSN info if they\n>want to have a list of blocks in sorted-by-LSN-order.\n>\n\nPossibly. I don't think keeping just the first block occurence is enough,\nparticularly for large WAL segmnent sizes, but I agree we can reduce the\nstuff a bit (say, ignore references that are less than 1MB apart or so).\nWe just can't remove all the LSN information entirely.\n\n>> Now, maybe the incremental backup use case is so much more important the\n>> right thing to do is ignore this other use case, and I'm OK with that -\n>> as long as it's a conscious choice.\n>\n>I'd certainly like to have a way to prefetch, but I'm not entirely sure\n>that it makes sense to combine it with this, so while I sketched out\n>some ideas about how to do that above, I don't want it to come across as\n>being a strong endorsement of the overall idea.\n>\n>For pre-fetching purposes, for an async streaming replica, it seems like\n>the wal sender process could potentially just scan the WAL and have a\n>list of blocks ready to pass to the replica which are \"this is what's\n>coming soon\" or similar, rather than working with the modfiles at all.\n>Not sure if we'd always send that or if we wait for the replica to ask\n>for it. Though for doing WAL replay from the archive, being able to ask\n>for the modfile first to do prefetching before replaying the WAL itself\n>could certainly be beneficial, so maybe it does make sense to have that\n>information there too.. still not sure we really need it in LSN order\n>or that we need to prefetch in LSN order though.\n>\n\nWell, how exactly should the prefetching extract the block list is still\nan open question. But this seems to deal with pretty much the same stuff,\nso it might make sense to support the prefetching use case too. Or maybe\nnot, I'm not sure - but IMHO we should try.\n\nI don't think it makes sense to do prefetching when the standby is fully\ncaught up with the primary. 
It gets more important when it falls behind by\na significant amount of WAL, at which point it's likely to fetch already\nclosed WAL segments - either from primary or archive. So either it can get\nthe block list from there, or it may extract the info on its own (using\nsome of the infrastructure used to build the modblock files).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 23 Apr 2019 17:27:29 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
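On the page-cache side, the "prefetch the next 1024 blocks we'll need" policy Tomas describes above could be as simple as issuing posix_fadvise(WILLNEED) hints over a sliding window of upcoming block references. This is a hypothetical sketch only — the function, the window policy, and where the block numbers come from are all assumptions, and as discussed elsewhere in the thread a real implementation might pull pages into shared buffers under its own control instead:

```python
import os

def prefetch_window(path, block_numbers, blksz=8192, window=1024):
    # Hint the kernel about the next `window` data-file blocks that WAL
    # replay is expected to touch, rather than a whole segment's worth,
    # so prefetched pages aren't evicted before replay reaches them.
    fd = os.open(path, os.O_RDONLY)
    try:
        for blkno in block_numbers[:window]:
            os.posix_fadvise(fd, blkno * blksz, blksz,
                             os.POSIX_FADV_WILLNEED)
    finally:
        os.close(fd)
```

The caller would advance the window as replay consumes WAL, which is where the LSN ordering of the raw block references matters: without it there is no way to know which blocks belong to the "next X megabytes of WAL".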
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> On Tue, Apr 23, 2019 at 10:22:46AM -0400, Stephen Frost wrote:\n> >* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> >>On Sat, Apr 20, 2019 at 04:21:52PM -0400, Robert Haas wrote:\n> >>>On Sat, Apr 20, 2019 at 12:42 AM Stephen Frost <sfrost@snowman.net> wrote:\n> >>>>> Oh. Well, I already explained my algorithm for doing that upthread,\n> >>>>> which I believe would be quite cheap.\n> >>>>>\n> >>>>> 1. When you generate the .modblock files, stick all the block\n> >>>>> references into a buffer. qsort(). Dedup. Write out in sorted\n> >>>>> order.\n> >>>>\n> >>>>Having all of the block references in a sorted order does seem like it\n> >>>>would help, but would also make those potentially quite a bit larger\n> >>>>than necessary (I had some thoughts about making them smaller elsewhere\n> >>>>in this discussion). That might be worth it though. I suppose it might\n> >>>>also be possible to line up the bitmaps suggested elsewhere to do\n> >>>>essentially a BitmapOr of them to identify the blocks changed (while\n> >>>>effectively de-duping at the same time).\n> >>>\n> >>>I don't see why this would make them bigger than necessary. If you\n> >>>sort by relfilenode/fork/blocknumber and dedup, then references to\n> >>>nearby blocks will be adjacent in the file. You can then decide what\n> >>>format will represent that most efficiently on output. Whether or not\n> >>>a bitmap is better idea than a list of block numbers or something else\n> >>>depends on what percentage of blocks are modified and how clustered\n> >>>they are.\n> >>\n> >>Not sure I understand correctly - do you suggest to deduplicate and sort\n> >>the data before writing them into the .modblock files? Because that the\n> >>the sorting would make this information mostly useless for the recovery\n> >>prefetching use case I mentioned elsewhere. 
For that to work we need\n> >>information about both the LSN and block, in the LSN order.\n> >\n> >I'm not sure I follow- why does the prefetching need to get the blocks\n> >in LSN order..? Once the blocks that we know are going to change in the\n> >next segment have been identified, we could prefetch them all and have\n> >them ready for when replay gets to them. I'm not sure that we\n> >specifically need to have them pre-fetched in the same order that the\n> >replay happens and it might even be better to fetch them in an order\n> >that's as sequential as possible to get them in as quickly as possible.\n> \n> That means we'd have to prefetch all blocks for the whole WAL segment,\n> which is pretty useless, IMO. A single INSERT (especially for indexes) is\n> often just ~100B, so a single 16MB segment can fit ~160k of them. Surely\n> we don't want to prefetch all of that at once? And it's even worse for\n> larger WAL segments, which are likely to get more common now that it's an\n> initdb option.\n\nAh, yeah, I had been thinking about FPIs and blocks and figuring that\n16MB wasn't that bad but you're certainly right that we could end up\ntouching a lot more blocks in a given WAL segment.\n\n> I'm pretty sure the prefetching needs to be more like \"prefetch the next\n> 1024 blocks we'll need\" or \"prefetch blocks from the next X megabytes of\n> WAL\". That doesn't mean we can't do some additional optimization (like\n> reordering them a bit), but there's a point where it gets detrimental\n> because the kernel will just evict some of the prefetched blocks before we\n> actually access them. 
And the larger amount of blocks you prefetch the\n> more likely that is.\n\nSince we're prefetching, maybe we shouldn't be just pulling the blocks\ninto the filesystem cache but instead into something we have more\ncontrol over..?\n\n> >>So if we want to allow that use case to leverage this infrastructure, we\n> >>need to write the .modfiles kinda \"raw\" and do this processing in some\n> >>later step.\n> >\n> >If we really need the LSN info for the blocks, then we could still\n> >de-dup, picking the 'first modified in this segment at LSN X', or keep\n> >both first and last, or I suppose every LSN if we really want, and then\n> >have that information included with the other information about the\n> >block. Downstream clients could then sort based on the LSN info if they\n> >want to have a list of blocks in sorted-by-LSN-order.\n> \n> Possibly. I don't think keeping just the first block occurence is enough,\n> particularly for large WAL segmnent sizes, but I agree we can reduce the\n> stuff a bit (say, ignore references that are less than 1MB apart or so).\n> We just can't remove all the LSN information entirely.\n\nYeah, it depends on how much memory we want to use for prefetching and\nhow large the WAL segments are.\n\n> >>Now, maybe the incremental backup use case is so much more important the\n> >>right thing to do is ignore this other use case, and I'm OK with that -\n> >>as long as it's a conscious choice.\n> >\n> >I'd certainly like to have a way to prefetch, but I'm not entirely sure\n> >that it makes sense to combine it with this, so while I sketched out\n> >some ideas about how to do that above, I don't want it to come across as\n> >being a strong endorsement of the overall idea.\n> >\n> >For pre-fetching purposes, for an async streaming replica, it seems like\n> >the wal sender process could potentially just scan the WAL and have a\n> >list of blocks ready to pass to the replica which are \"this is what's\n> >coming soon\" or similar, rather than working 
with the modfiles at all.\n> >Not sure if we'd always send that or if we wait for the replica to ask\n> >for it. Though for doing WAL replay from the archive, being able to ask\n> >for the modfile first to do prefetching before replaying the WAL itself\n> >could certainly be beneficial, so maybe it does make sense to have that\n> >information there too.. still not sure we really need it in LSN order\n> >or that we need to prefetch in LSN order though.\n> \n> Well, how exactly should the prefetching extract the block list is still\n> an open question. But this seems to deal with pretty much the same stuff,\n> so it might make sense to support the prefetching use case too. Or maybe\n> not, I'm not sure - but IMHO we should try.\n\nYeah, makes sense to me to at least be thinking about it and seeing if\nthere's a way to make adding prefetching easier.\n\n> I don't think it makes sense to do prefetching when the standby is fully\n> caught up with the primary. It gets more important when it falls behind by\n> a significant amount of WAL, at which point it's likely to fetch already\n> closed WAL segments - either from primary or archive. So either it can get\n> the block list from there, or it may extract the info on it's own (using\n> some of the infrastructure used to build the mdblock files).\n\nI don't know.. if we're able to keep up with the primary because we did\na bunch of prefetching and that made WAL replay fast enough that we\ncatch up, aren't we going to possibly fall right back behind again\npretty quickly if we don't prefetch for the blocks that are about to be\nsent? If we are caught up and there's no data to be sent then we don't\nhave much choice, but as soon as there's WAL data to be sent our way,\nany more than just one block, then grabbing the list of blocks that we\nknow we're going to need during replay seems to make a lot of sense to\nme. 
In other words, I'd think we would just always do this and how much\nwe do would be dictated by both how far behind we are and how much\nmemory we want to allocate for this.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 23 Apr 2019 11:43:05 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Tue, Apr 23, 2019 at 11:43:05AM -0400, Stephen Frost wrote:\n>Greetings,\n>\n>* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n>> On Tue, Apr 23, 2019 at 10:22:46AM -0400, Stephen Frost wrote:\n>> >* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n>> >>On Sat, Apr 20, 2019 at 04:21:52PM -0400, Robert Haas wrote:\n>> >>>On Sat, Apr 20, 2019 at 12:42 AM Stephen Frost <sfrost@snowman.net> wrote:\n>> >>>>> Oh. Well, I already explained my algorithm for doing that upthread,\n>> >>>>> which I believe would be quite cheap.\n>> >>>>>\n>> >>>>> 1. When you generate the .modblock files, stick all the block\n>> >>>>> references into a buffer. qsort(). Dedup. Write out in sorted\n>> >>>>> order.\n>> >>>>\n>> >>>>Having all of the block references in a sorted order does seem like it\n>> >>>>would help, but would also make those potentially quite a bit larger\n>> >>>>than necessary (I had some thoughts about making them smaller elsewhere\n>> >>>>in this discussion). That might be worth it though. I suppose it might\n>> >>>>also be possible to line up the bitmaps suggested elsewhere to do\n>> >>>>essentially a BitmapOr of them to identify the blocks changed (while\n>> >>>>effectively de-duping at the same time).\n>> >>>\n>> >>>I don't see why this would make them bigger than necessary. If you\n>> >>>sort by relfilenode/fork/blocknumber and dedup, then references to\n>> >>>nearby blocks will be adjacent in the file. You can then decide what\n>> >>>format will represent that most efficiently on output. Whether or not\n>> >>>a bitmap is better idea than a list of block numbers or something else\n>> >>>depends on what percentage of blocks are modified and how clustered\n>> >>>they are.\n>> >>\n>> >>Not sure I understand correctly - do you suggest to deduplicate and sort\n>> >>the data before writing them into the .modblock files? 
Because that the\n>> >>the sorting would make this information mostly useless for the recovery\n>> >>prefetching use case I mentioned elsewhere. For that to work we need\n>> >>information about both the LSN and block, in the LSN order.\n>> >\n>> >I'm not sure I follow- why does the prefetching need to get the blocks\n>> >in LSN order..? Once the blocks that we know are going to change in the\n>> >next segment have been identified, we could prefetch them all and have\n>> >them ready for when replay gets to them. I'm not sure that we\n>> >specifically need to have them pre-fetched in the same order that the\n>> >replay happens and it might even be better to fetch them in an order\n>> >that's as sequential as possible to get them in as quickly as possible.\n>>\n>> That means we'd have to prefetch all blocks for the whole WAL segment,\n>> which is pretty useless, IMO. A single INSERT (especially for indexes) is\n>> often just ~100B, so a single 16MB segment can fit ~160k of them. Surely\n>> we don't want to prefetch all of that at once? And it's even worse for\n>> larger WAL segments, which are likely to get more common now that it's an\n>> initdb option.\n>\n>Ah, yeah, I had been thinking about FPIs and blocks and figuring that\n>16MB wasn't that bad but you're certainly right that we could end up\n>touching a lot more blocks in a given WAL segment.\n>\n>> I'm pretty sure the prefetching needs to be more like \"prefetch the next\n>> 1024 blocks we'll need\" or \"prefetch blocks from the next X megabytes of\n>> WAL\". That doesn't mean we can't do some additional optimization (like\n>> reordering them a bit), but there's a point where it gets detrimental\n>> because the kernel will just evict some of the prefetched blocks before we\n>> actually access them. 
And the larger amount of blocks you prefetch the\n>> more likely that is.\n>\n>Since we're prefetching, maybe we shouldn't be just pulling the blocks\n>into the filesystem cache but instead into something we have more\n>control over..?\n>\n\nWell, yeah, and there was a discussion about that in the prefetching\nthread IIRC. But in that case it's probably even more important to\nprefetch only a fairly small chunk of blocks (instead of the whole WAL\nsegment worth of blocks), because shared buffers are usually much smaller\ncompared to page cache. So we'd probably want some sort of small ring\nbuffer there, so the LSN information is even more important.\n\n>> >>So if we want to allow that use case to leverage this infrastructure, we\n>> >>need to write the .modfiles kinda \"raw\" and do this processing in some\n>> >>later step.\n>> >\n>> >If we really need the LSN info for the blocks, then we could still\n>> >de-dup, picking the 'first modified in this segment at LSN X', or keep\n>> >both first and last, or I suppose every LSN if we really want, and then\n>> >have that information included with the other information about the\n>> >block. Downstream clients could then sort based on the LSN info if they\n>> >want to have a list of blocks in sorted-by-LSN-order.\n>>\n>> Possibly. 
I don't think keeping just the first block occurrence is enough,\n>> particularly for large WAL segment sizes, but I agree we can reduce the\n>> stuff a bit (say, ignore references that are less than 1MB apart or so).\n>> We just can't remove all the LSN information entirely.\n>\n>Yeah, it depends on how much memory we want to use for prefetching and\n>how large the WAL segments are.\n>\n\nRight.\n\n>> >>Now, maybe the incremental backup use case is so much more important the\n>> >>right thing to do is ignore this other use case, and I'm OK with that -\n>> >>as long as it's a conscious choice.\n>> >\n>> >I'd certainly like to have a way to prefetch, but I'm not entirely sure\n>> >that it makes sense to combine it with this, so while I sketched out\n>> >some ideas about how to do that above, I don't want it to come across as\n>> >being a strong endorsement of the overall idea.\n>> >\n>> >For pre-fetching purposes, for an async streaming replica, it seems like\n>> >the wal sender process could potentially just scan the WAL and have a\n>> >list of blocks ready to pass to the replica which are \"this is what's\n>> >coming soon\" or similar, rather than working with the modfiles at all.\n>> >Not sure if we'd always send that or if we wait for the replica to ask\n>> >for it. Though for doing WAL replay from the archive, being able to ask\n>> >for the modfile first to do prefetching before replaying the WAL itself\n>> >could certainly be beneficial, so maybe it does make sense to have that\n>> >information there too.. still not sure we really need it in LSN order\n>> >or that we need to prefetch in LSN order though.\n>>\n>> Well, how exactly should the prefetching extract the block list is still\n>> an open question. But this seems to deal with pretty much the same stuff,\n>> so it might make sense to support the prefetching use case too. 
Or maybe\nnot, I'm not sure - but IMHO we should try.\n>\n>Yeah, makes sense to me to at least be thinking about it and seeing if\n>there's a way to make adding prefetching easier.\n>\n\nAgreed.\n\n>> I don't think it makes sense to do prefetching when the standby is fully\n>> caught up with the primary. It gets more important when it falls behind by\n>> a significant amount of WAL, at which point it's likely to fetch already\n>> closed WAL segments - either from primary or archive. So either it can get\n>> the block list from there, or it may extract the info on its own (using\n>> some of the infrastructure used to build the mdblock files).\n>\n>I don't know.. if we're able to keep up with the primary because we did\n>a bunch of prefetching and that made WAL replay fast enough that we\n>catch up, aren't we going to possibly fall right back behind again\n>pretty quickly if we don't prefetch for the blocks that are about to be\n>sent? If we are caught up and there's no data to be sent then we don't\n>have much choice, but as soon as there's WAL data to be sent our way,\n>any more than just one block, then grabbing the list of blocks that we\n>know we're going to need during replay seems to make a lot of sense to\n>me. In other words, I'd think we would just always do this and how much\n>we do would be dictated by both how far behind we are and how much\n>memory we want to allocate for this.\n>\n\nWell, the thing is that for prefetching to be possible you actually have\nto be a bit behind. Otherwise you can't really look forward which blocks\nwill be needed, right?\n\nIMHO the main use case for prefetching is when there's a spike of activity\non the primary, making the standby fall behind, and then it takes\nhours to catch up. 
I don't think the cases with just a couple of MBs of\nlag are the issue prefetching is meant to improve (if it does, great).\n\nAnyway, not sure if this is the right thread to discuss prefetching in\nmuch more detail.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 23 Apr 2019 18:07:40 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
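Robert's step 1 quoted above ("stick all the block references into a buffer. qsort(). Dedup. Write out in sorted order.") can be sketched roughly as follows. This is only an illustration, not the actual PostgreSQL code: the `BlockRef` struct and `sort_dedup_blockrefs` name are made up for the example.

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative block reference -- not the real WAL-scanner structures. */
typedef struct BlockRef
{
    unsigned int relfilenode;
    unsigned int forknum;
    unsigned int blkno;
} BlockRef;

static int
blockref_cmp(const void *a, const void *b)
{
    const BlockRef *x = (const BlockRef *) a;
    const BlockRef *y = (const BlockRef *) b;

    if (x->relfilenode != y->relfilenode)
        return (x->relfilenode < y->relfilenode) ? -1 : 1;
    if (x->forknum != y->forknum)
        return (x->forknum < y->forknum) ? -1 : 1;
    if (x->blkno != y->blkno)
        return (x->blkno < y->blkno) ? -1 : 1;
    return 0;
}

/* Sort the buffered references and squeeze out duplicates in place;
 * returns the deduplicated count, ready to be written out in order. */
size_t
sort_dedup_blockrefs(BlockRef *refs, size_t n)
{
    size_t out = 0;

    if (n == 0)
        return 0;
    qsort(refs, n, sizeof(BlockRef), blockref_cmp);
    for (size_t i = 1; i < n; i++)
        if (blockref_cmp(&refs[out], &refs[i]) != 0)
            refs[++out] = refs[i];
    return out + 1;
}
```

Sorting on the (relfilenode, fork, block) key is what makes references to nearby blocks adjacent in the output, so the on-disk representation (block list vs. bitmap) can then be chosen per run, as Robert notes; it is also exactly the step that destroys the LSN ordering Tomas wants for prefetching.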
{
"msg_contents": "Hi,\n\nOn 2019-04-23 18:07:40 +0200, Tomas Vondra wrote:\n> Well, the thing is that for prefetching to be possible you actually have\n> to be a bit behind. Otherwise you can't really look forward which blocks\n> will be needed, right?\n> \n> IMHO the main use case for prefetching is when there's a spike of activity\n> on the primary, making the standby to fall behind, and then hours takes\n> hours to catch up. I don't think the cases with just a couple of MBs of\n> lag are the issue prefetching is meant to improve (if it does, great).\n\nI'd be surprised if a good implementation didn't. Even just some smarter\nIO scheduling in the startup process could help a good bit. E.g. no need\nto sequentially read the first and then the second block for an update\nrecord, if you can issue both at the same time - just about every\nstorage system these days can do a number of IO requests in parallel,\nand it nearly halves latency effects. And reading a few records (as in a\nfew hundred bytes commonly) ahead, allows to do much more than that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Apr 2019 09:34:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
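The IO scheduling Andres describes, issuing the reads for both blocks of an update record at once instead of sequentially, is typically done by hinting the kernel with `posix_fadvise(POSIX_FADV_WILLNEED)`. A minimal sketch, where the 8 kB block size constant and the function name are assumptions for illustration, not PostgreSQL's actual prefetch code:

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

#define SKETCH_BLCKSZ 8192

/* Ask the kernel to start reading `nblocks` blocks at `blkno` in the
 * background, so a later read() finds them in cache. Advisory only;
 * returns 0 on success (or unconditionally where unsupported). */
int
prefetch_block_range(int fd, unsigned int blkno, unsigned int nblocks)
{
#ifdef POSIX_FADV_WILLNEED
    return posix_fadvise(fd,
                         (off_t) blkno * SKETCH_BLCKSZ,
                         (off_t) nblocks * SKETCH_BLCKSZ,
                         POSIX_FADV_WILLNEED);
#else
    (void) fd;
    (void) blkno;
    (void) nblocks;
    return 0;
#endif
}
```

For an update record touching two blocks, the startup process would issue this hint for both blocks before replaying the record, letting the storage system service the two IOs in parallel rather than back to back.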
{
"msg_contents": "On Tue, Apr 23, 2019 at 09:34:54AM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2019-04-23 18:07:40 +0200, Tomas Vondra wrote:\n>> Well, the thing is that for prefetching to be possible you actually have\n>> to be a bit behind. Otherwise you can't really look forward which blocks\n>> will be needed, right?\n>>\n>> IMHO the main use case for prefetching is when there's a spike of activity\n>> on the primary, making the standby to fall behind, and then hours takes\n>> hours to catch up. I don't think the cases with just a couple of MBs of\n>> lag are the issue prefetching is meant to improve (if it does, great).\n>\n>I'd be surprised if a good implementation didn't. Even just some smarter\n>IO scheduling in the startup process could help a good bit. E.g. no need\n>to sequentially read the first and then the second block for an update\n>record, if you can issue both at the same time - just about every\n>storage system these days can do a number of IO requests in parallel,\n>and it nearly halves latency effects. And reading a few records (as in a\n>few hundred bytes commonly) ahead, allows to do much more than that.\n>\n\nI don't disagree with that - prefetching certainly can improve utilization\nof the storage system. The question is whether it can meaningfully improve\nperformance of the recovery process in cases when it does not lag. And I\nthink it can't (perhaps with remote_apply being an exception).\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 23 Apr 2019 19:01:29 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-23 19:01:29 +0200, Tomas Vondra wrote:\n> On Tue, Apr 23, 2019 at 09:34:54AM -0700, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2019-04-23 18:07:40 +0200, Tomas Vondra wrote:\n> > > Well, the thing is that for prefetching to be possible you actually have\n> > > to be a bit behind. Otherwise you can't really look forward which blocks\n> > > will be needed, right?\n> > > \n> > > IMHO the main use case for prefetching is when there's a spike of activity\n> > > on the primary, making the standby to fall behind, and then hours takes\n> > > hours to catch up. I don't think the cases with just a couple of MBs of\n> > > lag are the issue prefetching is meant to improve (if it does, great).\n> > \n> > I'd be surprised if a good implementation didn't. Even just some smarter\n> > IO scheduling in the startup process could help a good bit. E.g. no need\n> > to sequentially read the first and then the second block for an update\n> > record, if you can issue both at the same time - just about every\n> > storage system these days can do a number of IO requests in parallel,\n> > and it nearly halves latency effects. And reading a few records (as in a\n> > few hundred bytes commonly) ahead, allows to do much more than that.\n> > \n> \n> I don't disagree with that - prefetching certainly can improve utilization\n> of the storage system. The question is whether it can meaningfully improve\n> performance of the recovery process in cases when it does not lag. And I\n> think it can't (perhaps with remote_apply being an exception).\n\nWell. I think a few dozen records behind doesn't really count as \"lag\",\nand I think that's where it'd start to help (and for some record types\nlike updates it'd start to help even for single records). 
It'd convert\nscenarios where we'd currently fall behind slowly into scenarios where\nwe can keep up - but where there's no meaningful lag while we keep up.\nWhat's your argument for me being wrong?\n\nAnd even if we'd keep up without any prefetching, issuing requests in a\nmore efficient manner allows for more efficient concurrent use of the\nstorage system. It'll often effectively reduce the amount of random\niops.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Apr 2019 10:09:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Tue, Apr 23, 2019 at 10:09:39AM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2019-04-23 19:01:29 +0200, Tomas Vondra wrote:\n>> On Tue, Apr 23, 2019 at 09:34:54AM -0700, Andres Freund wrote:\n>> > Hi,\n>> >\n>> > On 2019-04-23 18:07:40 +0200, Tomas Vondra wrote:\n>> > > Well, the thing is that for prefetching to be possible you actually have\n>> > > to be a bit behind. Otherwise you can't really look forward which blocks\n>> > > will be needed, right?\n>> > >\n>> > > IMHO the main use case for prefetching is when there's a spike of activity\n>> > > on the primary, making the standby to fall behind, and then hours takes\n>> > > hours to catch up. I don't think the cases with just a couple of MBs of\n>> > > lag are the issue prefetching is meant to improve (if it does, great).\n>> >\n>> > I'd be surprised if a good implementation didn't. Even just some smarter\n>> > IO scheduling in the startup process could help a good bit. E.g. no need\n>> > to sequentially read the first and then the second block for an update\n>> > record, if you can issue both at the same time - just about every\n>> > storage system these days can do a number of IO requests in parallel,\n>> > and it nearly halves latency effects. And reading a few records (as in a\n>> > few hundred bytes commonly) ahead, allows to do much more than that.\n>> >\n>>\n>> I don't disagree with that - prefetching certainly can improve utilization\n>> of the storage system. The question is whether it can meaningfully improve\n>> performance of the recovery process in cases when it does not lag. And I\n>> think it can't (perhaps with remote_apply being an exception).\n>\n>Well. I think a few dozen records behind doesn't really count as \"lag\",\n>and I think that's where it'd start to help (and for some record types\n>like updates it'd start to help even for single records). 
It'd convert\n>scenarios where we'd currently fall behind slowly into scenarios where\n>we can keep up - but where there's no meaningful lag while we keep up.\n>What's your argument for me being wrong?\n>\n\nI was not saying you are wrong. I think we actually agree on the main\npoints. My point is that prefetching is most valuable for cases when the\nstandby can't keep up and falls behind significantly - at which point we\nhave sufficient queue of blocks to prefetch. I don't care about the case\nwhen the standby can keep up even without prefetching, because the metric\nwe need to optimize (i.e. lag) is close to 0 even without prefetching.\n\n>And even if we'd keep up without any prefetching, issuing requests in a\n>more efficient manner allows for more efficient concurrent use of the\n>storage system. It'll often effectively reduce the amount of random\n>iops.\n\nMaybe, although the metric we (and users) care about the most is the\namount of lag. If the system keeps up even without prefetching, no one\nwill complain about I/O utilization.\n\nWhen the lag is close to 0, the average throughput/IOPS/... is bound to be\nthe same in both cases, because it does not affect how fast the standby\nreceives WAL from the primary. Except that it's somewhat \"spikier\" with\nprefetching, because we issue requests in bursts. Which may actually be a\nbad thing.\n\nOf course, maybe prefetching will make it much more efficient even in the\n\"no lag\" case, and while it won't improve the recovery, it'll leave more\nI/O bandwidth for the other processes (say, queries on hot standby).\n\nSo to be clear, I'm not against prefetching even in this case, but it's\nnot the primary reason why I think we need to do that.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 23 Apr 2019 20:01:14 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 9:51 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> For this particular use case, wouldn't you want to read the WAL itself\n> and use that to issue prefetch requests? Because if you use the\n> .modblock files, the data file blocks will end up in memory but the\n> WAL blocks won't, and you'll still be waiting for I/O.\n\nI'm still interested in the answer to this question, but I don't see a\nreply that specifically concerns it. Apologies if I have missed one.\n\nStepping back a bit, I think that the basic issue under discussion\nhere is how granular you want your .modblock files. At one extreme,\none can imagine an application that wants to know exactly which blocks\nwere accessed at exact which LSNs. At the other extreme, if you want\nto run a daily incremental backup, you just want to know which blocks\nhave been modified between the start of the previous backup and the\nstart of the current backup - i.e. sometime in the last ~24 hours.\nThese are quite different things. When you only want approximate\ninformation - is there a chance that this block was changed within\nthis LSN range, or not? - you can sort and deduplicate in advance;\nwhen you want exact information, you cannot do that. Furthermore, if\nyou want exact information, you must store an LSN for every record; if\nyou want approximate information, you emit a file for each LSN range\nand consider it sufficient to know that the change happened somewhere\nwithin the range of LSNs encompassed by that file.\n\nIt's pretty clear in my mind that what I want to do here is provide\napproximate information, not exact information. Being able to sort\nand deduplicate in advance seems critical to be able to make something\nlike this work on high-velocity systems. 
If you are generating a\nterabyte of WAL between incremental backups, and you don't do any\nsorting or deduplication prior to the point when you actually try to\ngenerate the modified block map, you are going to need a whole lot of\nmemory (and CPU time, though that's less critical, I think) to process\nall of that data. If you can read modblock files which are already\nsorted and deduplicated, you can generate results incrementally and\nsend them to the client incrementally and you never really need more\nthan some fixed amount of memory no matter how much data you are\nprocessing.\n\nWhile I'm convinced that this particular feature should provide\napproximate rather than exact information, the degree of approximation\nis up for debate, and maybe it's best to just make that configurable.\nSome applications might work best with small modblock files covering\nonly ~16MB of WAL each, or even less, while others might prefer larger\nquanta, say 1GB or even more. For incremental backup, I believe that\nthe quanta will depend on the system velocity. On a system that isn't\nvery busy, fine-grained modblock files will make incremental backup\nmore efficient. If each modblock file covers only 16MB of data, and\nthe backup manages to start someplace in the middle of that 16MB, then\nyou'll only be including 16MB or less of unnecessary block references\nin the backup so you won't incur much extra work. On the other hand,\non a busy system, you probably do not want such a small quantum,\nbecause you will then end up with gazillions of modblock files and that\nwill be hard to manage. It could also have performance problems,\nbecause merging data from a couple of hundred files is fine, but\nmerging data from a couple of hundred thousand files is going to be\ninefficient. 
My experience hacking on and testing tuplesort.c a few\nyears ago (with valuable tutelage by Peter Geoghegan) showed me that\nthere is a slow drop-off in efficiency as the merge order increases --\nand in this case, at some point you will blow out the size of the OS\nfile descriptor table and have to start opening and closing files\nevery time you access a different one, and that will be unpleasant.\nFinally, deduplication will tend to be more effective across larger\nnumbers of block references, at least on some access patterns.\n\nSo all of that is to say that if somebody wants modblock files each of\nwhich covers 1MB of WAL, I think that the same tools I'm proposing to\nbuild here for incremental backup could support that use case with\njust a configuration change. Moreover, the resulting files would\nstill be usable by the incremental backup engine. So that's good: the\nsame system can, at least to some extent, be reused for whatever other\npurposes people want to know about modified blocks. On the other\nhand, the incremental backup engine will likely not cope smoothly with\nhaving hundreds of thousands or millions of modblock files shoved down\nits gullet, so if there is a dramatic difference in the granularity\nrequirements of different consumers, another approach is likely\nindicated. Especially if some consumer wants to see block references\nin the exact order in which they appear in WAL, or wants to know the\nexact LSN of each reference, it's probably best to go for a different\napproach. For example, pg_waldump could grow a new option which spits\nout just the block references and in a format designed to be easily\nmachine-parseable; or a hypothetical background worker that does\nprefetching for recovery could just contain its own copy of the\nxlogreader machinery.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 24 Apr 2019 09:25:12 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
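The bounded-memory merge Robert relies on, combining many sorted, deduplicated modblock files while emitting results incrementally, can be illustrated like this. This is only a sketch: a real implementation would stream block references from files through a heap (which is where the merge-order limits he mentions come in), whereas this linear min-scan over in-memory lists just shows the idea; all names here are hypothetical.

```c
#include <assert.h>
#include <stdlib.h>

/* Merge k sorted, deduplicated block-number lists into `out`, dropping
 * duplicates across lists; returns the number of entries written.
 * Memory use is O(k), independent of the total number of references. */
size_t
merge_blockno_lists(const unsigned int **lists, const size_t *lens,
                    size_t k, unsigned int *out)
{
    size_t *pos = (size_t *) calloc(k, sizeof(size_t));
    size_t written = 0;

    for (;;)
    {
        int found = 0;
        unsigned int min = 0;

        /* find the smallest not-yet-consumed value across all lists */
        for (size_t i = 0; i < k; i++)
            if (pos[i] < lens[i] && (!found || lists[i][pos[i]] < min))
            {
                min = lists[i][pos[i]];
                found = 1;
            }
        if (!found)
            break;
        out[written++] = min;
        /* advance every list sitting on the emitted value (cross-list dedup) */
        for (size_t i = 0; i < k; i++)
            if (pos[i] < lens[i] && lists[i][pos[i]] == min)
                pos[i]++;
    }
    free(pos);
    return written;
}
```

Because each input is already sorted, the merged output is produced in order and can be sent to the client as it is generated, which is why pre-sorted modblock files keep memory usage fixed no matter how much WAL was generated.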
{
"msg_contents": "On Wed, Apr 24, 2019 at 09:25:12AM -0400, Robert Haas wrote:\n>On Mon, Apr 22, 2019 at 9:51 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> For this particular use case, wouldn't you want to read the WAL itself\n>> and use that to issue prefetch requests? Because if you use the\n>> .modblock files, the data file blocks will end up in memory but the\n>> WAL blocks won't, and you'll still be waiting for I/O.\n>\n>I'm still interested in the answer to this question, but I don't see a\n>reply that specifically concerns it. Apologies if I have missed one.\n>\n\nI don't think prefetching WAL blocks is all that important. The WAL\nsegment was probably received fairly recently (either from primary or\narchive) and so it's reasonable to assume it's still in page cache. And\neven if it's not, sequential reads are handled by readahead pretty well.\nWhich is a form of prefetching.\n\nBut even if WAL prefetching was useful in some cases, I think it's mostly\northogonal issue - it certainly does not make prefetching of data pages\nunnecessary.\n\n>Stepping back a bit, I think that the basic issue under discussion\n>here is how granular you want your .modblock files. At one extreme,\n>one can imagine an application that wants to know exactly which blocks\n>were accessed at exact which LSNs. At the other extreme, if you want\n>to run a daily incremental backup, you just want to know which blocks\n>have been modified between the start of the previous backup and the\n>start of the current backup - i.e. sometime in the last ~24 hours.\n>These are quite different things. When you only want approximate\n>information - is there a chance that this block was changed within\n>this LSN range, or not? - you can sort and deduplicate in advance;\n>when you want exact information, you cannot do that. 
Furthermore, if\n>you want exact information, you must store an LSN for every record; if\n>you want approximate information, you emit a file for each LSN range\n>and consider it sufficient to know that the change happened somewhere\n>within the range of LSNs encompassed by that file.\n>\n\nThose are the extreme design options, yes. But I think there may be a\nreasonable middle ground, that would allow using the modblock files for\nboth use cases.\n\n>It's pretty clear in my mind that what I want to do here is provide\n>approximate information, not exact information. Being able to sort\n>and deduplicate in advance seems critical to be able to make something\n>like this work on high-velocity systems.\n\nDo you have any analysis / data to support that claim? I mean, it's\nobvious that sorting and deduplicating the data right away makes\nsubsequent processing more efficient, but it's not clear to me that not\ndoing it would make it useless for high-velocity systems.\n\n> If you are generating a\n>terabyte of WAL between incremental backups, and you don't do any\n>sorting or deduplication prior to the point when you actually try to\n>generate the modified block map, you are going to need a whole lot of\n>memory (and CPU time, though that's less critical, I think) to process\n>all of that data. If you can read modblock files which are already\n>sorted and deduplicated, you can generate results incrementally and\n>send them to the client incrementally and you never really need more\n>than some fixed amount of memory no matter how much data you are\n>processing.\n>\n\nSure, but that's not what I proposed elsewhere in this thread. My proposal\nwas to keep mdblocks \"raw\" for WAL segments that were not recycled yet (so\n~3 last checkpoints), and deduplicate them after that. So vast majority of\nthe 1TB of WAL will have already deduplicated data.\n\nAlso, maybe we can do partial deduplication, in a way that would be useful\nfor prefetching. 
Say we only deduplicate 1MB windows - that would work at\nleast for cases that touch the same page frequently (say, by inserting to\nthe tail of an index, or so).\n\n>While I'm convinced that this particular feature should provide\n>approximate rather than exact information, the degree of approximation\n>is up for debate, and maybe it's best to just make that configurable.\n>Some applications might work best with small modblock files covering\n>only ~16MB of WAL each, or even less, while others might prefer larger\n>quanta, say 1GB or even more. For incremental backup, I believe that\n>the quanta will depend on the system velocity. On a system that isn't\n>very busy, fine-grained modblock files will make incremental backup\n>more efficient. If each modblock file covers only 16MB of data, and\n>the backup manages to start someplace in the middle of that 16MB, then\n>you'll only be including 16MB or less of unnecessary block references\n>in the backup so you won't incur much extra work. On the other hand,\n>on a busy system, you probably do not want such a small quantum,\n>because you will then up with gazillions of modblock files and that\n>will be hard to manage. It could also have performance problems,\n>because merging data from a couple of hundred files is fine, but\n>merging data from a couple of hundred thousand files is going to be\n>inefficient. 
My experience hacking on and testing tuplesort.c a few\n>years ago (with valuable tutelage by Peter Geoghegan) showed me that\n>there is a slow drop-off in efficiency as the merge order increases --\n>and in this case, at some point you will blow out the size of the OS\n>file descriptor table and have to start opening and closing files\n>every time you access a different one, and that will be unpleasant.\n>Finally, deduplication will tend to be more effective across larger\n>numbers of block references, at least on some access patterns.\n>\n\nI agree with those observations in general, but I don't think it somehow\nproves we have to deduplicate/sort the data.\n\nFWIW no one cares about low-velocity systems. While raw modblock files\nwould not be an issue on them, it's also mostly uninteresting from the\nprefetching perspective. It's the high-velocity systems that have lag.\n\n>So all of that is to say that if somebody wants modblock files each of\n>which covers 1MB of WAL, I think that the same tools I'm proposing to\n>build here for incremental backup could support that use case with\n>just a configuration change. Moreover, the resulting files would\n>still be usable by the incremental backup engine. So that's good: the\n>same system can, at least to some extent, be reused for whatever other\n>purposes people want to know about modified blocks.\n\n+1 to configuration change, at least during the development phase. It'll\nallow comfortable testing and benchmarking.\n\n>On the other hand, the incremental backup engine will likely not cope\n>smoothly with having hundreds of thousands or millions of modblock files\n>shoved down its gullet, so if there is a dramatic difference in the\n>granularity requirements of different consumers, another approach is\n>likely indicated. 
Especially if some consumer wants to see block\n>references in the exact order in which they appear in WAL, or wants to\n>know the exact LSN of each reference, it's probably best to go for a\n>different approach. For example, pg_waldump could grow a new option\n>which spits out just the block references and in a format designed to be\n>easily machine-parseable; or a hypothetical background worker that does\n>prefetching for recovery could just contain its own copy of the\n>xlogreader machinery.\n>\n\nAgain, I don't think we have to keep the raw modblock files forever. Send\nthem to the archive, remove/deduplicate/sort them after we recycle the WAL\nsegment, or something like that. That way the incremental backups don't\nneed to deal with excessive number of modblock files.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 24 Apr 2019 16:10:20 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: finding changed blocks using WAL scanning"
},
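Tomas's partial-deduplication idea upthread, ignoring repeat references to the same block that are less than ~1MB of WAL apart while preserving LSN order for prefetching, could look roughly like this. The types and names are assumptions for illustration, and it is written quadratically for clarity; a real version would keep a small hash of recently seen blocks.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Drop repeated references to the same block that fall within `window`
 * bytes of WAL of an already-kept reference, compacting both arrays in
 * place. Input is in LSN order and stays that way, so the result is
 * still usable for recovery prefetching; returns the new length. */
size_t
window_dedup(uint64_t *lsns, unsigned int *blknos, size_t n, uint64_t window)
{
    size_t out = 0;

    for (size_t i = 0; i < n; i++)
    {
        int dup = 0;

        for (size_t j = 0; j < out; j++)
            if (blknos[j] == blknos[i] && lsns[i] - lsns[j] < window)
            {
                dup = 1;
                break;
            }
        if (!dup)
        {
            lsns[out] = lsns[i];
            blknos[out] = blknos[i];
            out++;
        }
    }
    return out;
}
```

This trims the hot-block churn (say, repeated inserts into the tail of an index) without fully sorting, so a later pass can still do the full sort/dedup for incremental backup once the WAL segment is recycled.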
{
"msg_contents": "On Wed, Apr 24, 2019 at 10:10 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> >I'm still interested in the answer to this question, but I don't see a\n> >reply that specifically concerns it. Apologies if I have missed one.\n>\n> I don't think prefetching WAL blocks is all that important. The WAL\n> segment was probably received fairly recently (either from primary or\n> archive) and so it's reasonable to assume it's still in page cache. And\n> even if it's not, sequential reads are handled by readahead pretty well.\n> Which is a form of prefetching.\n\nTrue. But if you are going to need to read the WAL anyway to apply\nit, why shouldn't the prefetcher just read it first and use that to\ndrive prefetching, instead of using the modblock files? It's strictly\nless I/O, because you were going to read the WAL files anyway and now\nyou don't have to also read some other modblock file, and it doesn't\nreally seem to have any disadvantages.\n\n> >It's pretty clear in my mind that what I want to do here is provide\n> >approximate information, not exact information. Being able to sort\n> >and deduplicate in advance seems critical to be able to make something\n> >like this work on high-velocity systems.\n>\n> Do you have any analysis / data to support that claim? I mean, it's\n> obvious that sorting and deduplicating the data right away makes\n> subsequent processing more efficient, but it's not clear to me that not\n> doing it would make it useless for high-velocity systems.\n\nI did include some analysis of this point in my original post. It\ndoes depend on your assumptions. 
If you assume that users will be OK\nwith memory usage that runs into the tens of gigabytes when the amount\nof change since the last incremental backup is very large, then there\nis probably no big problem, but that assumption sounds shaky to me.\n\n(The customers I seem to end up on the phone with seem to be\ndisproportionately those running enormous systems on dramatically\nunderpowered hardware, which is not infrequently related to the reason\nI end up on the phone with them.)\n\n> Sure, but that's not what I proposed elsewhere in this thread. My proposal\n> was to keep mdblocks \"raw\" for WAL segments that were not recycled yet (so\n> ~3 last checkpoints), and deduplicate them after that. So vast majority of\n> the 1TB of WAL will have already deduplicated data.\n\nOK, I missed that proposal. My biggest concern about this is that I\ndon't see how to square this with the proposal elsewhere on this\nthread that these files should be put someplace that makes them\nsubject to archiving. If the files are managed by the master in a\nseparate directory it can easily do this sort of thing, but if they're\narchived then you can't. Now maybe that's just a reason not to adopt\nthat proposal, but I don't see how to adopt both that proposal and\nthis one, unless we just say that we're going to spew craptons of tiny\nlittle non-deduplicated modblock files into the archive.\n\n> Also, maybe we can do partial deduplication, in a way that would be useful\n> for prefetching. Say we only deduplicate 1MB windows - that would work at\n> least for cases that touch the same page frequently (say, by inserting to\n> the tail of an index, or so).\n\nMaybe, but I'm not sure that's really optimal for any use case.\n\n> FWIW no one cares about low-velocity systems. While raw modblock files\n> would not be an issue on them, it's also mostly uninteresting from the\n> prefetching perspective. It's the high-velocity sytems that have lag.\n\nI don't think that's particularly fair. 
Low-velocity systems are some\nof the best candidates for incremental backup, and people who are\nrunning such systems probably care about that.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 24 Apr 2019 13:56:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: finding changed blocks using WAL scanning"
}
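The sort-and-deduplicate tradeoff debated in the thread above can be sketched outside PostgreSQL. The following is illustrative Python only — none of it is PostgreSQL code, and the (relfilenode, blockno) record layout is invented for the example — contrasting raw per-WAL-record block references with references deduplicated as they arrive:

```python
# Illustrative sketch only: compares keeping one entry per block touch
# (raw modblock data) against sorting/deduplicating the references,
# the memory tradeoff discussed in the messages above.

def raw_block_refs(wal_records):
    """Keep one entry per block touch, as raw modblock data would."""
    refs = []
    for rec in wal_records:
        refs.extend(rec)
    return refs

def deduped_block_refs(wal_records):
    """Deduplicate references as they arrive, then emit sorted output."""
    refs = set()
    for rec in wal_records:
        refs.update(rec)
    return sorted(refs)

# A workload that touches the same few pages repeatedly (say, inserting
# at the tail of an index) dedupes extremely well:
records = [[(1663, 42), (1663, 43)] for _ in range(10000)]
print(len(raw_block_refs(records)))      # 20000 raw entries
print(len(deduped_block_refs(records)))  # 2 after deduplication
```

On a high-velocity system the raw list keeps growing with WAL volume, while the deduplicated set is bounded by the number of distinct blocks modified — which is the crux of the disagreement above.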
] |
[
{
"msg_contents": "Back in [1] I wrote\n\n> I've wondered for some time whether we couldn't make a useful\n> reduction in the run time of the PG regression tests by looking\n> for scripts that run significantly longer than others in their\n> parallel groups, and making an effort to trim the runtimes of\n> those particular scripts.\n\nI finally got some time to pursue that, and attached is a proposed patch\nthat moves some tests around and slightly adjusts some other ones.\nTo cut to the chase: on my workstation, this cuts the time for\n\"make installcheck-parallel\" from 21.9 sec to 13.9 sec, or almost 40%.\nI think that's a worthwhile improvement, considering how often all of us\nrun those tests.\n\nSaid workstation is an 8-core machine, so an objection could be made\nthat maybe I'm optimizing too much for multicore. But even laptops\nhave multiple cores these days. To check ostensibly-worse cases,\nI also tried this patch on dromedary's host (old dual-core Intel), and\nfound that installcheck-parallel went from about 92 seconds to about 82.\nOn gaur's host (single-core HPPA), the time went from 840 sec to 774.\nSo there's close to 10% savings even on very lame machines.\n\nIn no particular order, here's what I did:\n\n* Move the strings and numerology tests to be part of the second\nparallel test group; there is no reason to run them serially.\n\n* Move the insert and insert_conflict tests to be part of the \"copy\"\nparallel group. There is no reason to run them serially, plus they\nwere obviously placed with the aid of a dartboard, or at least without\nconcern for fixing comments one line away.\n\n* Move the select and errors tests into the preceding parallel group\ninstead of running them serially. 
(This required adjusting the\nconstraints test, which uses a table named \"tmp\" as select also does.\nI fixed that by making it a temp table in the constraints test.)\n\n* create_index.sql ran much longer than other tests in its parallel\ngroup, so I split out the SP-GiST-related tests into a new file\ncreate_index_spgist.sql, and moved the delete_test_table test case\nto btree_index.sql.\n\n* Likewise, join.sql needed to be split up, so I moved the \"exercises\nfor the hash join code\" portion into a new file join_hash.sql.\n\n* Likewise, I split up indexing.sql by moving the \"fastpath\" test into\na new file index_fastpath.sql.\n\n* psql and stats_ext both ran considerably longer than other tests\nin their group. I fixed that by moving them into the next parallel\ngroup, where the rules test has a similar runtime. (To make it\nsafe to run stats_ext in parallel with rules, I adjusted the latter\nto only dump views/rules from the pg_catalog and public schemas,\nwhich was what it was doing anyway. stats_ext makes some views in\na transient schema, which now will not affect rules.)\n\n* The plpgsql test ran much longer than others, which turns out to be\nlargely due to the 2-second timeout in its test of statement_timeout.\nIn view of the experience reflected in commit f1e671a0b, just\nreducing that timeout seems unsafe. What I did instead was to shove\nthat test case and some related ones into a new plpgsql test file,\nsrc/pl/plpgsql/src/sql/plpgsql_trap.sql, so that it's not part of the\ncore regression tests at all. (We've talked before about moving\nchunks of plpgsql.sql into the plpgsql module, so this is sort of a\ndown payment on that.) Now, if you think about the time to do \ncheck-world rather than just the core regression tests, this isn't\nobviously a win, and in fact it might be a loss because the plpgsql\ntests run serially not in parallel with anything else. 
However,\nby that same token, the parallel-testing overload we were concerned\nabout in f1e671a0b should be much less bad in the plpgsql context.\nI therefore took a chance on reducing the timeout down to 1 second.\nIf the buildfarm doesn't like that, we can change it back to 2 seconds\nagain. It should still be a net win because of the fact that\ncheck-world runs the core tests more than once.\n\n* Another thing I changed in the SP-GiST tests was to adjust the tests\nthat are trying to verify that KNN indexscan gives the same ordering\nas seqscan-and-sort. Those were using FULL JOIN to match up rank()\nresults, which is horribly inefficient on this data set, because there\nare 1000 duplicate entries in quad_point_tbl and hence 1000 rows with\nthe same rank; we proceed to form 1000000 join rows that we then have\nto filter away again. What I did about that was to replace rank()\nwith row_number() so that the primary join key is unique, shaving well\nover a second off the test's runtime. There is a small problem, namely\nthat the data set has two points that are different but yet have exactly\nthe same distance to the origin, causing their sort ordering to be\nunderdetermined. I think however that it's okay to simplify the queries\nso that they just verify that we get the same values and ordering of the\ndistance results. The purpose of this test is not to see whether <->\ngets the right answer, it is to see whether SP-GiST can return results\nin the correct order according to <->, so I think it's okay to compare\nonly the distances and not the underlying points.\n\n* Also, in polygon.sql, I removed quad_poly_tbl_ord_seq1 and\nquad_poly_tbl_ord_idx1; the related queries are very expensive and\nit's not clear what coverage they provide that isn't provided by\nthe near-duplicate tests involving quad_poly_tbl_ord_seq2 and\nquad_poly_tbl_ord_idx2. (Note: polygon.sql seems to run proportionally\nmuch slower on some machines than others. 
Unpatched, on my workstation\nit's 3x slower than timestamptz, whereas on say longfin it's a good bit\nfaster. It might be interesting to look into why that is. But anyway,\nthis part of the patch benefits machines where it's slower.)\n\nThere are still a few tests that seem like maybe it'd be worth trimming,\nbut I felt like I'd hit a point of diminishing returns, so I stopped\nhere.\n\nThoughts? Anyone object to making these sorts of changes\npost-feature-freeze?\n\n\t\t\tregards, tom lane\n\n[1] https://postgr.es/m/16646.1549770618@sss.pgh.pa.us",
"msg_date": "Wed, 10 Apr 2019 18:35:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Reducing the runtime of the core regression tests"
},
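Tom's point about the rank()-based FULL JOIN blowing up can be seen with a quick cardinality calculation. The sketch below is hypothetical Python, not the regression test itself: it shows why 1000 rows sharing one rank value form 1000000 join rows, while row_number() keys join one-to-one.

```python
from collections import Counter

# Invented data, not the actual quad_point_tbl test: an equi-join on a
# non-unique rank() key multiplies matching rows, while row_number()
# assigns each row a distinct ordinal.

def join_cardinality(keys_a, keys_b):
    """Number of rows an equi-join of the two key lists produces."""
    counts_b = Counter(keys_b)
    return sum(counts_b[k] for k in keys_a)

n_dups = 1000
rank_keys = [1] * n_dups           # rank(): all duplicates share one value
print(join_cardinality(rank_keys, rank_keys))      # 1000000 join rows

rownum_keys = list(range(n_dups))  # row_number(): every key is distinct
print(join_cardinality(rownum_keys, rownum_keys))  # 1000 join rows
```

That quadratic intermediate result is exactly what the regression test then had to filter away again, hence the second-plus of saved runtime.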
{
"msg_contents": "Hi,\n\nOn 2019-04-10 18:35:15 -0400, Tom Lane wrote:\n> on my workstation, this cuts the time for \"make installcheck-parallel\"\n> from 21.9 sec to 13.9 sec, or almost 40%. I think that's a worthwhile\n> improvement, considering how often all of us run those tests.\n\nAwesome.\n\n\n> * The plpgsql test ran much longer than others, which turns out to be\n> largely due to the 2-second timeout in its test of statement_timeout.\n> In view of the experience reflected in commit f1e671a0b, just\n> reducing that timeout seems unsafe. What I did instead was to shove\n> that test case and some related ones into a new plpgsql test file,\n> src/pl/plpgsql/src/sql/plpgsql_trap.sql, so that it's not part of the\n> core regression tests at all. (We've talked before about moving\n> chunks of plpgsql.sql into the plpgsql module, so this is sort of a\n> down payment on that.) Now, if you think about the time to do\n> check-world rather than just the core regression tests, this isn't\n> obviously a win, and in fact it might be a loss because the plpgsql\n> tests run serially not in parallel with anything else. However,\n> by that same token, the parallel-testing overload we were concerned\n> about in f1e671a0b should be much less bad in the plpgsql context.\n> I therefore took a chance on reducing the timeout down to 1 second.\n> If the buildfarm doesn't like that, we can change it back to 2 seconds\n> again. It should still be a net win because of the fact that\n> check-world runs the core tests more than once.\n\nHm, can't we \"just\" parallelize the plpgsql schedule instead?\n\n\n> Thoughts? Anyone object to making these sorts of changes\n> post-feature-freeze?\n\nHm. There's some advantage to doing so, because it won't break any large\npending changes. But it's also possible that it'll destabilize the\nbuildfarm some. In personal capacity I'm like +0.5.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 10 Apr 2019 15:48:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-10 18:35:15 -0400, Tom Lane wrote:\n>> ... What I did instead was to shove\n>> that test case and some related ones into a new plpgsql test file,\n>> src/pl/plpgsql/src/sql/plpgsql_trap.sql, so that it's not part of the\n>> core regression tests at all. (We've talked before about moving\n>> chunks of plpgsql.sql into the plpgsql module, so this is sort of a\n>> down payment on that.) Now, if you think about the time to do\n>> check-world rather than just the core regression tests, this isn't\n>> obviously a win, and in fact it might be a loss because the plpgsql\n>> tests run serially not in parallel with anything else.\n\n> Hm, can't we \"just\" parallelize the plpgsql schedule instead?\n\nIf somebody wants to work on that, I won't stand in the way, but\nit seems like material for a different patch.\n\n>> Thoughts? Anyone object to making these sorts of changes\n>> post-feature-freeze?\n\n> Hm. There's some advantage to doing so, because it won't break any large\n> pending changes. But it's also possible that it'll destabilize the\n> buildfarm some. In personal capacity I'm like +0.5.\n\nMy thought was that there is (hopefully) going to be a lot of testing\ngoing on over the next few months, so making that faster would be\na useful activity.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Apr 2019 18:54:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 3:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I finally got some time to pursue that, and attached is a proposed patch\n> that moves some tests around and slightly adjusts some other ones.\n> To cut to the chase: on my workstation, this cuts the time for\n> \"make installcheck-parallel\" from 21.9 sec to 13.9 sec, or almost 40%.\n> I think that's a worthwhile improvement, considering how often all of us\n> run those tests.\n\nGreat!\n\n> * create_index.sql ran much longer than other tests in its parallel\n> group, so I split out the SP-GiST-related tests into a new file\n> create_index_spgist.sql, and moved the delete_test_table test case\n> to btree_index.sql.\n\nPutting the delete_test_table test case in btree_index.sql make perfect sense.\n\n> * Likewise, I split up indexing.sql by moving the \"fastpath\" test into\n> a new file index_fastpath.sql.\n\nI just noticed that the \"fastpath\" test actually fails to test the\nfastpath optimization -- the coverage we do have comes from another\ntest in btree_index.sql, that I wrote back in December. While I did\nmake a point of ensuring that we had test coverage for the nbtree\nfastpath optimization that went into Postgres 11, I also didn't\nconsider the original fastpath test. I assumed that there were no\ntests to begin with, because gcov showed me that there was no test\ncoverage back in December.\n\nWhat happened here was that commit 074251db limited the fastpath to\nonly be applied to B-Trees with at least 3 levels. While the original\nfastpath optimization tests actually tested the fastpath optimization\nwhen it first went in in March 2018, that only lasted a few weeks,\nsince 074251db didn't adjust the test to still be effective.\n\nI'll come up with a patch to deal with this situation, by\nconsolidating the old and new tests in some way. I don't think that\nyour work needs to block on that, though.\n\n> Thoughts? 
Anyone object to making these sorts of changes\n> post-feature-freeze?\n\nIMV there should be no problem with pushing ahead with this after\nfeature freeze.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 10 Apr 2019 16:12:43 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Wed, Apr 10, 2019 at 3:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> * Likewise, I split up indexing.sql by moving the \"fastpath\" test into\n>> a new file index_fastpath.sql.\n\n> I just noticed that the \"fastpath\" test actually fails to test the\n> fastpath optimization -- the coverage we do have comes from another\n> test in btree_index.sql, that I wrote back in December.\n\nOh! Hmm.\n\n> I'll come up with a patch to deal with this situation, by\n> consolidating the old and new tests in some way. I don't think that\n> your work needs to block on that, though.\n\nShould I leave out the part of my patch that creates index_fastpath.sql?\nIf we're going to end up removing that version of the test, there's no\npoint in churning the related lines beforehand.\n\nOne way or the other I want to get that test out of where it is,\nbecause indexing.sql is currently the slowest test in its group.\nBut if you prefer to make btree_index run a bit longer rather than\ninventing a new test script, that's no problem from where I stand.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Apr 2019 19:19:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 4:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I'll come up with a patch to deal with this situation, by\n> > consolidating the old and new tests in some way. I don't think that\n> > your work needs to block on that, though.\n>\n> Should I leave out the part of my patch that creates index_fastpath.sql?\n> If we're going to end up removing that version of the test, there's no\n> point in churning the related lines beforehand.\n\nThe suffix truncation stuff made it tricky to force a B-Tree to be\ntall without also consisting of many blocks. Simply using large,\nrandom key values in suffix attributes didn't work anymore. The\nsolution I came up with for the new fastpath test that made it into\nbtree_index.sql was to have redundancy in leading keys, while avoiding\nTOAST compression by using plain storage in the table/index.\n\n> One way or the other I want to get that test out of where it is,\n> because indexing.sql is currently the slowest test in its group.\n> But if you prefer to make btree_index run a bit longer rather than\n> inventing a new test script, that's no problem from where I stand.\n\nThe original fastpath tests don't seem particularly effective to me,\neven without the oversight I mentioned. I suggest that you remove\nthem, since the minimal btree_index.sql fast path test is sufficient.\nIf there ever was a problem in this area, then amcheck would certainly\ndetect it -- there is precisely one place for every tuple on v4\nindexes. The original fastpath tests won't tickle the implementation\nin any interesting way in my opinion.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 10 Apr 2019 16:56:03 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 4:56 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> The original fastpath tests don't seem particularly effective to me,\n> even without the oversight I mentioned. I suggest that you remove\n> them, since the minimal btree_index.sql fast path test is sufficient.\n\nTo be clear: I propose that you remove the tests entirely, and we\nleave it at that. I don't intend to follow up with my own patch\nbecause I don't think that there is anything in the original test case\nthat is worth salvaging.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 10 Apr 2019 17:08:55 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
{
"msg_contents": "On Thu, 11 Apr 2019 at 10:35, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> In no particular order, here's what I did:\n\nI was surprised to see nothing mentioned about attempting to roughly\nsort the test order in each parallel group according to their runtime.\nShorter running tests coming last should reduce the chances of one\nprocess doing its last test when all other processes are already done\nand sitting idle. Of course, this won't be consistent over all\nhardware, but maybe it could be done as an average time for each test\nover the entire buildfarm.\n\n> Thoughts? Anyone object to making these sorts of changes\n> post-feature-freeze?\n\nI think it's a good time to do this sort of thing. It should be\neasier to separate tests destabilising due to this change from the\nnoise of other changes that are going in... since currently,\nthe rate of those other changes should not be very high. Doing it any\nlater in the freeze does not seem better since we might discover some\nthings that need to be fixed due to this.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 11 Apr 2019 12:27:20 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> I was surprised to see nothing mentioned about attempting to roughly\n> sort the test order in each parallel group according to their runtime.\n\nI'm confused about what you have in mind here? I'm pretty sure pg_regress\nlaunches all the scripts in a group at the same time, so that just\nrearranging the order they're listed in on the schedule line shouldn't\nmake any noticeable difference. If you meant changing the order of\noperations within each script, I don't really want to go there.\nIt'd require careful per-script analysis, and to the extent that the\nexisting tests have some meaningful ordering (admittedly, many don't),\nwe'd lose that.\n\n>> Thoughts? Anyone object to making these sorts of changes\n>> post-feature-freeze?\n\n> I think it's a good time to do this sort of thing. It should be\n> easier to differentiate tests destabilising due to this change out\n> from the noise of other changes that are going in.... since currently,\n> the rate of those other changes should not be very high. Doing it any\n> later in the freeze does not seem better since we might discover some\n> things that need to be fixed due to this.\n\nYeah. I wouldn't propose pushing this in shortly before beta, but\nif we do it now then we've probably got a month to sort out any\nproblems that may appear.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Apr 2019 00:44:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
{
"msg_contents": "On Thu, 11 Apr 2019 at 16:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > I was surprised to see nothing mentioned about attempting to roughly\n> > sort the test order in each parallel group according to their runtime.\n>\n> I'm confused about what you have in mind here? I'm pretty sure pg_regress\n> launches all the scripts in a group at the same time, so that just\n> rearranging the order they're listed in on the schedule line shouldn't\n> make any noticeable difference.\n\nI probably should have looked closer at how that is handled. If they're all\nlaunched at once then there's no point to what I mentioned. Please\ndisregard.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 11 Apr 2019 16:54:09 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Wed, Apr 10, 2019 at 4:56 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> The original fastpath tests don't seem particularly effective to me,\n>> even without the oversight I mentioned. I suggest that you remove\n>> them, since the minimal btree_index.sql fast path test is sufficient.\n\n> To be clear: I propose that you remove the tests entirely, and we\n> leave it at that. I don't intend to follow up with my own patch\n> because I don't think that there is anything in the original test case\n> that is worth salvaging.\n\nI checked into this by dint of comparing \"make coverage\" output for\n\"make check\" runs with and without indexing.sql's fastpath tests.\nThere were some differences that seem mostly to be down to whether\nor not autovacuum hit particular code paths during the test run.\nIn total, I found 29 lines that were hit in the first test but not\nin the second ... and 141 lines that were hit in the second test\nbut not the first. So I concur that indexing.sql's fastpath test\nisn't adding anything useful coverage-wise, and will just nuke it.\n\n(It'd be interesting perhaps to check whether the results shown\nby coverage.postgresql.org are similarly unstable. They might be\nless so, since I believe those are taken over the whole check-world\nsuite not just the core regression tests.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Apr 2019 12:55:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
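The coverage comparison Tom describes above boils down to set differences over the sets of executed lines from two "make coverage" runs. A minimal sketch (the line numbers below are made up for illustration, and this is not how gcov itself reports anything):

```python
# Given the sets of source lines each coverage run reports as executed,
# the interesting quantities are the two set differences: lines hit only
# with the fastpath test present, and lines hit only without it.

def coverage_diff(run_a, run_b):
    """Return (lines hit only in A, lines hit only in B)."""
    return sorted(run_a - run_b), sorted(run_b - run_a)

with_fastpath = {10, 11, 12, 20, 21}
without_fastpath = {10, 11, 20, 21, 30, 31}

only_a, only_b = coverage_diff(with_fastpath, without_fastpath)
print(only_a)  # [12]
print(only_b)  # [30, 31]
```

When both differences are dominated by autovacuum-timing noise, as in the 29-versus-141 result above, the test under comparison is contributing no stable coverage of its own.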
{
"msg_contents": "On Thu, Apr 11, 2019 at 9:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So I concur that indexing.sql's fastpath test\n> isn't adding anything useful coverage-wise, and will just nuke it.\n\nGood.\n\n> (It'd be interesting perhaps to check whether the results shown\n> by coverage.postgresql.org are similarly unstable. They might be\n> less so, since I believe those are taken over the whole check-world\n> suite not just the core regression tests.)\n\nI'm almost certain that they're at least slightly unstable. I mostly\nfind the report useful because it shows whether or not something gets\nhit at all. I don't trust it to be very accurate.\n\nI've noticed that the coverage reported on coverage.postgresql.org\nsometimes looks contradictory, which can happen due to compiler\noptimizations. I wonder if that could be addressed in some way,\nbecause I find the site to be a useful resource. I would at least like\nto know the settings used by its builds.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 11 Apr 2019 10:02:02 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
{
"msg_contents": "On 2019-Apr-11, Peter Geoghegan wrote:\n\n> I've noticed that the coverage reported on coverage.postgresql.org\n> sometimes looks contradictory, which can happen due to compiler\n> optimizations. I wonder if that could be addressed in some way,\n> because I find the site to be a useful resource. I would at least like\n> to know the settings used by its builds.\n\n./configure --enable-depend --enable-coverage --enable-tap-tests --enable-nls --with-python --with-perl --with-tcl --with-openssl --with-libxml --with-ldap --with-pam >> $LOG 2>&1\n\nmake -j4 >> $LOG 2>&1\nmake -j4 -C contrib >> $LOG 2>&1\nmake check-world PG_TEST_EXTRA=\"ssl ldap\" >> $LOG 2>&1\nmake coverage-html >> $LOG 2>&1\n\nThere are no environment variables that would affect it.\n\nIf you want to propose changes, feel free.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 11 Apr 2019 14:00:30 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 11:00 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> ./configure --enable-depend --enable-coverage --enable-tap-tests --enable-nls --with-python --with-perl --with-tcl --with-openssl --with-libxml --with-ldap --with-pam >> $LOG 2>&1\n>\n> make -j4 >> $LOG 2>&1\n> make -j4 -C contrib >> $LOG 2>&1\n> make check-world PG_TEST_EXTRA=\"ssl ldap\" >> $LOG 2>&1\n> make coverage-html >> $LOG 2>&1\n>\n> There are no environment variables that would affect it.\n\nCould we add \"CFLAGS=-O0\"? This should prevent the kind of\nmachine-wise line-counting described here:\n\nhttps://gcc.gnu.org/onlinedocs/gcc/Gcov-and-Optimization.html#Gcov-and-Optimization\n\nI think that it makes sense to prioritize making it clear which exact\nlines were executed in terms of the semantics of C. I might prefer to\nhave optimizations enabled if I was optimizing my code, but that's not\nwhat the web resource is for, really.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 11 Apr 2019 11:31:17 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
{
"msg_contents": "On 2019-Apr-11, Peter Geoghegan wrote:\n\n> On Thu, Apr 11, 2019 at 11:00 AM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> > ./configure --enable-depend --enable-coverage --enable-tap-tests --enable-nls --with-python --with-perl --with-tcl --with-openssl --with-libxml --with-ldap --with-pam >> $LOG 2>&1\n> >\n> > make -j4 >> $LOG 2>&1\n> > make -j4 -C contrib >> $LOG 2>&1\n> > make check-world PG_TEST_EXTRA=\"ssl ldap\" >> $LOG 2>&1\n> > make coverage-html >> $LOG 2>&1\n> >\n> > There are no environment variables that would affect it.\n> \n> Could we add \"CFLAGS=-O0\"?\n\nDone. Do you have a preferred spot where the counts were wrong?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 12 Apr 2019 12:31:33 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
{
"msg_contents": "On Fri, Apr 12, 2019 at 9:31 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Done. Do you have a preferred spot where the counts were wrong?\n\nNot really, but I can give you an example.\n\nLine counts for each of the two \"break\" statements within\n_bt_keep_natts_fast() are exactly the same. I don't think that this is\nbecause we actually hit each break exactly the same number of times\n(90,236 times). I think that we see this because the same instruction\nis associated with both break statements in the loop. All of the\nexamples I've noticed are a bit like that. Not a huge problem, but\nless useful than the alternative.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 12 Apr 2019 09:48:06 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
{
"msg_contents": "On 2019-Apr-12, Peter Geoghegan wrote:\n\n> On Fri, Apr 12, 2019 at 9:31 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > Done. Do you have a preferred spot where the counts were wrong?\n> \n> Not really, but I can give you an example.\n> \n> Line counts for each of the two \"break\" statements within\n> _bt_keep_natts_fast() are exactly the same. I don't think that this\n> because we actually hit each break exactly the same number of times\n> (90,236 times). I think that we see this because the same instruction\n> is associated with both break statements in the loop. All of the\n> examples I've noticed are a bit like that. Not a huge problem, but\n> less useful than the alternative.\n\nHmm, it's odd, because\nhttps://coverage.postgresql.org/src/backend/access/nbtree/nbtutils.c.gcov.html\nstill shows that function doing that. pg_config shows:\n\n$ ./pg_config --configure\n'--enable-depend' '--enable-coverage' '--enable-tap-tests' '--enable-nls' '--with-python' '--with-perl' '--with-tcl' '--with-openssl' '--with-libxml' '--with-ldap' '--with-pam' 'CFLAGS=-O0'\n\nsrc/Makefile.global says:\n\nCFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -fprofile-arcs -ftest-coverage -O0\n\nthe compile line for nbtutils is:\n\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -fprofile-arcs -ftest-coverage -O0 -I../../../../src/include -D_GNU_SOURCE -I/usr/include/libxml2 -c -o nbtutils.o nbtutils.c -MMD -MP -MF .deps/nbtutils.Po\n\nso I suppose there's something else that's affecting this.\n\nI wonder if it would be useful to add --enable-debug. 
I think I\npurposefully removed that, but I don't remember any details about it.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 12 Apr 2019 13:24:06 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
{
"msg_contents": "On Fri, Apr 12, 2019 at 10:24 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> I wonder if it would be useful to add --enable-debug. I think I\n> purposefully removed that, but I don't remember any details about it.\n\nAs usual, this stuff is horribly under-documented. I think it's\npossible that --enable-debug would help, since llvm-gcov requires it,\nbut that doesn't seem particularly likely.\n\nIt's definitely generally recommended that \"-O0\" be used, so I think\nthat we can agree that that was an improvement, even if it doesn't fix\nthe remaining problem that I noticed when I rechecked nbtutils.c.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 12 Apr 2019 10:49:27 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
{
"msg_contents": "On Fri, Apr 12, 2019 at 10:49 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> It's definitely generally recommended that \"-O0\" be used, so I think\n> that we can agree that that was an improvement, even if it doesn't fix\n> the remaining problem that I noticed when I rechecked nbtutils.c.\n\nI'm not sure that we can really assume that \"-O0\" avoids the behavior\nI pointed out. Perhaps this counts as \"semantic flattening\" or\nsomething, rather than an optimization. I could have easily written\nthe code in _bt_keep_natts_fast() in the way gcov/gcc/whatever thinks\nI ought to have, which would have obscured the distinction anyway.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 12 Apr 2019 10:57:22 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
{
"msg_contents": "On Fri, Apr 12, 2019 at 10:24 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> Hmm, it's odd, because\n> https://coverage.postgresql.org/src/backend/access/nbtree/nbtutils.c.gcov.html\n> still shows that function doing that. pg_config shows:\n>\n> $ ./pg_config --configure\n> '--enable-depend' '--enable-coverage' '--enable-tap-tests' '--enable-nls' '--with-python' '--with-perl' '--with-tcl' '--with-openssl' '--with-libxml' '--with-ldap' '--with-pam' 'CFLAGS=-O0'\n\nSo, we're currently using this on coverage.postgresql.org? We've switched?\n\nI noticed a better example of weird line counts today, this time\nwithin _bt_check_rowcompare():\n\n 1550 4 : cmpresult = 0;\n 1551 4 : if (subkey->sk_flags & SK_ROW_END)\n 1552 1292 : break;\n 1553 0 : subkey++;\n 1554 0 : continue;\n\nI would expect the \"break\" statement to have a line count that is no\ngreater than that of the first two lines that immediately precede, and\nyet it's far far greater (1292 is greater than 4). It looks like there\nhas been some kind of loop transformation.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 25 Apr 2019 18:44:30 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
{
"msg_contents": "On 2019-Apr-25, Peter Geoghegan wrote:\n\n> On Fri, Apr 12, 2019 at 10:24 AM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> > Hmm, it's odd, because\n> > https://coverage.postgresql.org/src/backend/access/nbtree/nbtutils.c.gcov.html\n> > still shows that function doing that. pg_config shows:\n> >\n> > $ ./pg_config --configure\n> > '--enable-depend' '--enable-coverage' '--enable-tap-tests' '--enable-nls' '--with-python' '--with-perl' '--with-tcl' '--with-openssl' '--with-libxml' '--with-ldap' '--with-pam' 'CFLAGS=-O0'\n> \n> So, we're currently using this on coverage.postgresql.org? We've switched?\n\nYes, I changed it the day you first suggested it.\n\n> I noticed a better example of weird line counts today, this time\n> within _bt_check_rowcompare():\n> \n> 1550 4 : cmpresult = 0;\n> 1551 4 : if (subkey->sk_flags & SK_ROW_END)\n> 1552 1292 : break;\n> 1553 0 : subkey++;\n> 1554 0 : continue;\n> \n> I would expect the \"break\" statement to have a line count that is no\n> greater than that of the first two lines that immediately precede, and\n> yet it's far far greater (1292 is greater than 4). It looks like there\n> has been some kind of loop transformation.\n\nMaybe it takes more than -O0 in cflags to disable those, but as I said,\nthe compile lines do show the -O0.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 25 Apr 2019 22:23:09 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 7:23 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Maybe it takes more than -O0 in cflags to disable those, but as I said,\n> the compile lines do show the -O0.\n\nApparently, GCC does perform some optimizations at -O0, which is\nbarely acknowledged by its documentation:\n\nhttp://www.complang.tuwien.ac.at/kps2015/proceedings/KPS_2015_submission_29.pdf\n\nSearch the PDF for \"-O0\" to see numerous references to this. It seems\nto be impossible to turn off all GCC optimizations.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 25 Apr 2019 19:29:41 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the runtime of the core regression tests"
}
] |
[
{
"msg_contents": "Hello,\n\nI have some questions about the different types of extended statistics\nthat were introduced in Postgres 10.\n- Which types of queries are each statistic type supposed to improve?\n- When should one type of statistic be used over the other? Should they\n both always be used?\n\nWe have a multi-tenant application and all of our tables have a denormalized\ntenant_id column. (Most tables actually use the tenant_id as part of a\ncomposite primary key on (tenant_id, id).)\n\nAs the docs suggest, we haven't created extended STATISTICS except for when\nwe observe the query planner making poor query plans.\n\nWe've seen poor query plans on queries involving filters on foreign keys:\n\n Table: fk_table\n--------------------\ntenant_id | integer\nid | integer\nfk_id | integer\n\nPRIMARY KEY (tenant_id, id)\nFOREIGN KEY (tenant_id, fk_id) REFERENCES left_table(tenant_id, id)\n\nThe id columns on these tables are unique, so there is a functional dependence\nbetween fk_id and tenant_id; if the fk_id columns are the same, then the\ntenant_id columns must also be the same.\n\nThis table has ~4.6 million rows, ~1300 distinct values for tenant_id, and\n~13000 distinct values for fk_id.\n\nA single SELECT query that filters on tenant_id and fk_id erroneously\nestimates that it will return a single row (4,600,000 / 1300 / 13,000 ~= 0.1):\n\n=> EXPLAIN ANALYZE SELECT * FROM fk_table WHERE tenant_id = 100 AND\nfk_id = 10000;\n QUERY PLAN\n----------------------------------------------------------------------------------\n Index Scan using fk_table_tenant_id_fk_id_index on fk_table\n (cost=0.43..4.45 rows=1 width=44) (actual time=0.016..1.547\nrows=3113 loops=1)\n Index Cond: ((tenant_id = 100) AND (fk_id = 10000))\n\nIn other places we've used a ndistinct statistic to solve this issue, but that\ndoesn't help in this case. 
Postgres still estimates that the query will return\na single row.\n\n=> CREATE STATISTICS ndistinct_stat (ndistinct) ON tenant_id, fk_id\nFROM fk_table;\n=> ANALYZE fk_table;\n=> SELECT stxname, stxndistinct FROM pg_statistic_ext;\n stxname | stxndistinct |\n----------------+-----------------+\n ndistinct_stat | {\"1, 3\": 3433} |\n=> EXPLAIN ANALYZE SELECT * FROM fk_table WHERE tenant_id = 100 AND\nfk_id = 10000;\n-- (unchanged)\n\nWhy doesn't the ndistinct statistic get used when planning this query? (We're\ncurrently on Postgre 10.6.) In contrast, if we create a functional dependency\nstatistic then Postgres will accurately predict the result size.\n\n=> CREATE STATISTICS dep_stat (dependencies) ON tenant_id, fk_id FROM fk_table;\n=> ANALYZE fk_table;\n=> SELECT stxname, stxdependencies FROM pg_statistic_ext;\n stxname | stxdependencies\n----------------+------------------------------------------\n dep_stat | {\"1 => 3\": 1.000000, \"3 => 1\": 0.060300}\n\n=> EXPLAIN ANALYZE SELECT * FROM fk_table WHERE tenant_id = 100 AND\nfk_id = 10000;\n QUERY PLAN\n----------------------------------------------------------------------------------\n Index Scan using fk_table_tenant_id_fk_id_index on fk_table\n (cost=0.43..1042.23 rows=612 width=44) (actual time=0.011..0.813\nrows=3056 loops=1)\n Index Cond: ((tenant_id = 100) AND (fk_id = 10000))\n\nSo, in general, which type of extended statistic should be used? Where do the\ndifferent kinds of statistics get used in the query planner? 
Is there an\nadvantage to using one type of statistic vs the other, or should we always\ncreate both?\n\nAnd in our specific example, with a schema designed for multi-tenancy, which\ntypes of statistics should we use for our foreign keys, where tenant_id is\nfunctionally dependent on the other foreign_id columns?\n\nTo explain where some of our confusion is coming from, here's the example where\nadding an ndistinct statistic helped: Postgres was adding a filter after an\nindex scan instead of including the filter as part of the index scan.\n\nbig_table had ~500,000,000 rows,\n~3000 distinct values for column a,\n~3000 distinct values for column b,\nbut just ~4500 distinct values for the (a, b) tuple,\nand column b was functionally dependent on column c.\n\nPostgres wanted to do:\n\n=> SELECT * FROM big_table WHERE a = 1 AND b = 10 AND c IN (100, 101, 102, ...);\nIndex Scan using big_table_a_b_c on big_table (cost=0.57..122.41\nrows=1 width=16)\n Index Cond: ((a = 1) AND (b = 10))\n Filter: c = ANY ('{100, 101, 102, 103, 104, 105, ...}')\n\nBut then we did:\n\n=> CREATE STATISTICS big_table_a_b_ndistinct (ndistinct) ON a, b FROM big_table;\n=> ANALYZE big_table;\n=> SELECT * FROM big_table WHERE a = 1 AND b = 10 AND c IN (100, 101, 102, ...);\nIndex Scan using big_table_a_b_c on big_table (cost=0.57..122.41\nrows=1 width=16)\n Index Cond: ((a = 1) AND (b = 10)) AND (c = ANY ('{100, 101, 102,\n103, 104, 105, ...}'))\n\n(This had very poor performance between Postgres thought it would have to\nfilter 500,000,000 / 3000 / 3000 ~= 55 rows, but actually it had to filter\n500,000,000 / 4500 ~= 110,000 rows.)\n\nBecause of the functional dependency on b and c, maybe a dependencies statistic\non b and c would have also had the desired effect, but at that point we didn't\nentirely understand how functional dependencies worked, so we didn't try them.\n\n\nIf anyone can give some insight about when one of these two statistic types is\nmore appropriate that would be extremely 
helpful!\n\n- Paul\n\n\n",
"msg_date": "Wed, 10 Apr 2019 16:52:27 -0700",
"msg_from": "Paul Martinez <hellopfm@gmail.com>",
"msg_from_op": true,
"msg_subject": "Proper usage of ndistinct vs. dependencies extended statistics"
},
{
"msg_contents": "On Thu, 11 Apr 2019 at 11:53, Paul Martinez <hellopfm@gmail.com> wrote:\n>\n> I have some questions about the different types of extended statistics\n> that were introduced in Postgres 10.\n> - Which types of queries are each statistic type supposed to improve?\n\nMultivariate ndistinct stats are aimed to improve distinct estimates\nover groups of columns. These can help in cases like GROUP BY a,b,\nSELECT DISTINCT a,b, SELECT a,b FROM x UNION SELECT a,b FROM y; They\nalso help in determining the number of times an index will be\nrescanned in cases like nested loops with a parameterised inner path.\n\nI see multivariate ndistinct estimates are not used for normal\nselectivity estimates for unknown values. e.g PREPARE q1 (int, int)\nAS SELECT * FROM t1 WHERE a = $1 and b = $2; still assumes a and b are\nindependent even when ndistinct stats exist on the two columns.\n\nThere are a few other usages too. See calls of estimate_num_groups()\n\ndependency stats just handle WHERE clauses (or more accurately,\nclauses containing a reference to a single relation. These only\nhandle equality OpExprs. e.g \"a = 10 and y = 3\", not \"a < 6 and y =\n3\". Further stat types (most common values) added in PG12 aim to\nallow inequality operators too.\n\n> - When should one type of statistic be used over the other? Should they\n> both always be used?\n\nIf they both always should be always used then we'd likely not have\nbothered making the types optional. Both ndistinct and dependency\nstats are fairly cheap to calculate and store, so it might not be too\nbig an issue adding both types if you're not sure. With these two\ntypes there's not really any choice for the planner to decide to use\none or the other, it just makes use of the ones it can use for the\ngiven situation. That won't be the case as more stats types get\nadded. In PG12, for example, we had to choose of MCV stats should be\napplied before dependencies stats. 
That might be a no-brainer, but\nperhaps the future there will be stats types where the order to apply\nthem is not so clear, although in those cases it might be questionable\nwhy you'd want to define more than one type of stats on the same set\nof columns.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 11 Apr 2019 16:26:11 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Proper usage of ndistinct vs. dependencies extended statistics"
}
] |
[
{
"msg_contents": "Hi, Hackers\n\nI noticed something strange. Does it cause nothing?\nI didn't detect anything, but feel restless.\n\nStep:\n- There are two standbys that connect to primary.\n- Kill primary and promote one standby.\n- Restart another standby that is reset primary_conninfo to connect new primary.\n\nI expected that the latest WAL segment file in old timeline is renamed with .partial suffix,\nbut it's not renamed in the restarted standby.\n\nxlog.c says the following, but I didn't understand the bad situation.\n\n * the archive. It's physically present in the new file with new TLI,\n * but recovery won't look there when it's recovering to the older\n--> * timeline. On the other hand, if we archive the partial segment, and\n--> * the original server on that timeline is still running and archives\n--> * the completed version of the same segment later, it will fail. (We\n * used to do that in 9.4 and below, and it caused such problems).\n *\n * As a compromise, we rename the last segment with the .partial\n * suffix, and archive it. Archive recovery will never try to read\n * .partial segments, so they will normally go unused. But in the odd\n * PITR case, the administrator can copy them manually to the pg_wal\n * directory (removing the suffix). They can be useful in debugging,\n * too.\n\nRegards\nRyo Matsumura\n\n\n\n",
"msg_date": "Thu, 11 Apr 2019 00:32:21 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Qestion about .partial WAL file"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 12:32:21AM +0000, Matsumura, Ryo wrote:\n> I expected that the latest WAL segment file in old timeline is renamed with .partial suffix,\n> but it's not renamed in the restarted standby.\n\nPlease note that the last partial segment is only generated on an\ninstance which has promoted. If you replug another standby into the\npromoted standby, then this replugged standby will not generate a\n.partial file, and it should not. What kind of behavior you think is\nright and what did you expect?\n\n> xlog.c says the following, but I didn't understand the bad situation.\n> \n> * the archive. It's physically present in the new file with new TLI,\n> * but recovery won't look there when it's recovering to the older\n> --> * timeline. On the other hand, if we archive the partial segment, and\n> --> * the original server on that timeline is still running and archives\n> --> * the completed version of the same segment later, it will fail. (We\n> * used to do that in 9.4 and below, and it caused such problems).\n\nIf using archive_mode = on, then a promoted standby which archives WAL\nsegments in the same location as the primary may finish by creating a\nconflict if the previous primary is still running after the standby\nhas been promoted, and that this previous primary is able to finish\nthe segment where WAL has forked.\n--\nMichael",
"msg_date": "Thu, 11 Apr 2019 12:55:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Qestion about .partial WAL file"
},
{
"msg_contents": "Michael-san\n\nThank for your advice.\n\n> then a promoted standby which archives WAL segments in the same\n> location as the primary\n\n> if the previous primary is still running after the standby\n\nI could not come up with the combination, but I understand now.\nSorry for bothering you.\n\nRegards\nRyo Matsumura\n\n\n\n",
"msg_date": "Thu, 11 Apr 2019 09:42:15 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Qestion about .partial WAL file"
}
] |
[
{
"msg_contents": "Hi all,\n\nRecent commit bfc80683 has added some documentation in pg_rewind about\nthe fact that it is possible to do the operation with a non-superuser,\nassuming that this role has sufficient grant rights to execute the\nfunctions used by pg_rewind.\n\nPeter Eisentraut has suggested to have some tests for this kind of\nuser here:\nhttps://www.postgresql.org/message-id/e1570ba6-4459-d9b2-1321-9449adaaef4c@2ndquadrant.com\n\nAttached is a patch which switches all the TAP tests of pg_rewind to\ndo that. As of now, the tests depend on a superuser for everything,\nand it seems to me that it makes little sense to make the tests more\npluggable by being able to switch the roles used on-the-fly (the\ninvocation of pg_rewind is stuck into RewindTest.pm) as a superuser\nhas no restrictions.\n\nAny thoughts?\n--\nMichael",
"msg_date": "Thu, 11 Apr 2019 13:13:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Switch TAP tests of pg_rewind to use role with only function\n permissions"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 6:13 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> Hi all,\n>\n> Recent commit bfc80683 has added some documentation in pg_rewind about\n> the fact that it is possible to do the operation with a non-superuser,\n> assuming that this role has sufficient grant rights to execute the\n> functions used by pg_rewind.\n>\n> Peter Eisentraut has suggested to have some tests for this kind of\n> user here:\n>\n> https://www.postgresql.org/message-id/e1570ba6-4459-d9b2-1321-9449adaaef4c@2ndquadrant.com\n>\n> Attached is a patch which switches all the TAP tests of pg_rewind to\n> do that. As of now, the tests depend on a superuser for everything,\n> and it seems to me that it makes little sense to make the tests more\n> pluggable by being able to switch the roles used on-the-fly (the\n> invocation of pg_rewind is stuck into RewindTest.pm) as a superuser\n> has no restrictions.\n>\n> Any thoughts?\n>\n\n+1.\n\nI definitely think having tests for this is good, otherwise we'll just end\nup making a change at some point that then suddenly breaks it and we won't\nnotice.\n\nIf we haven't already (and knowing you it wouldn't surprise me if you had\n:P), we should probably look through the rest of the tests to see if we\nhave other similar cases. In general I think any case where \"can be run by\nnon-superuser with specific permissions or a superuser\" is the case, we\nshould be testing it with the \"non-superuser with permissions\". Because,\nwell, superusers will never have permission problems (and they will both\ntest the functionality).\n\nI do think it's perfectly reasonable to have that hardcoded in the\nRewindTest.pm module. 
It doesn't have to be pluggable.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n",
"msg_date": "Thu, 11 Apr 2019 09:40:36 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Switch TAP tests of pg_rewind to use role with only function\n permissions"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 09:40:36AM +0200, Magnus Hagander wrote:\n> If we haven't already (and knowing you it wouldn't surprise me if you had\n> :P), we should probably look through the rest of the tests to see if we\n> have other similar cases. In general I think any case where \"can be run by\n> non-superuser with specific permissions or a superuser\" is the case, we\n> should be testing it with the \"non-superuser with permissions\". Because,\n> well, superusers will never have permission problems (and they will both\n> test the functionality).\n\nI am ready to bet that we have other problems lying around.\n\n> I do think it's perfectly reasonable to have that hardcoded in the\n> RewindTest.pm module. It doesn't have to be pluggable.\n\nThanks, I have committed the patch to do so (d4e2a84), after rewording\na bit the comments. And particularly thanks to Peter to mention that\nhaving more tests with such properties would be nicer.\n--\nMichael",
"msg_date": "Fri, 12 Apr 2019 10:58:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Switch TAP tests of pg_rewind to use role with only function\n permissions"
}
] |
[
{
"msg_contents": "Hi,\n\nIs it possible to have commit-message or at least git hash in\ncommitfest. It will be very easy to track commit against commitfest\nitem.\n\n-- \nIbrar Ahmed\n\n\n",
"msg_date": "Thu, 11 Apr 2019 14:36:22 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": true,
"msg_subject": "Commit message / hash in commitfest page."
},
{
"msg_contents": "On 2019-04-11 11:36, Ibrar Ahmed wrote:\n> Hi,\n> \n> Is it possible to have commit-message or at least git hash in\n> commitfest. It will be very easy to track commit against commitfest\n> item.\n> \n\nCommitfest items always point to discussion threads. These threads often \nend with a message that says that the patch is pushed. IMHO, that \nmessage would be the place to include the commithash. It would also be \neasily findable via the commitfest application.\n\nErik Rijkers\n\n\n> --\n> Ibrar Ahmed\n\n\n",
"msg_date": "Thu, 11 Apr 2019 11:44:22 +0200",
"msg_from": "Erikjan Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: Commit message / hash in commitfest page."
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 2:44 PM Erikjan Rijkers <er@xs4all.nl> wrote:\n>\n> On 2019-04-11 11:36, Ibrar Ahmed wrote:\n> > Hi,\n> >\n> > Is it possible to have commit-message or at least git hash in\n> > commitfest. It will be very easy to track commit against commitfest\n> > item.\n> >\n>\n> Commitfest items always point to discussion threads. These threads often\n> end with a message that says that the patch is pushed. IMHO, that\n> message would be the place to include the commithash. It would also be\n> easily findable via the commitfest application.\n>\n\n+1\n\n> Erik Rijkers\n>\n>\n> > --\n> > Ibrar Ahmed\n\n\n\n-- \nIbrar Ahmed\n\n\n",
"msg_date": "Thu, 11 Apr 2019 14:55:10 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Commit message / hash in commitfest page."
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 02:55:10PM +0500, Ibrar Ahmed wrote:\n>On Thu, Apr 11, 2019 at 2:44 PM Erikjan Rijkers <er@xs4all.nl> wrote:\n>>\n>> On 2019-04-11 11:36, Ibrar Ahmed wrote:\n>> > Hi,\n>> >\n>> > Is it possible to have commit-message or at least git hash in\n>> > commitfest. It will be very easy to track commit against commitfest\n>> > item.\n>> >\n>>\n>> Commitfest items always point to discussion threads. These threads often\n>> end with a message that says that the patch is pushed. IMHO, that\n>> message would be the place to include the commithash. It would also be\n>> easily findable via the commitfest application.\n>>\n>\n>+1\n>\n\nI think it might be useful to actually have that directly in the CF app,\nnot just in the thread. There would need to a way to enter multiple\nhashes, because patches often have multiple pieces.\n\nBut maybe that'd be too much unnecessary burden. I don't remember when I\nlast needed this information. And I'd probably try searching in git log\nfirst anyway.\n\n\nregard\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 13 Apr 2019 21:56:50 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit message / hash in commitfest page."
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Thu, Apr 11, 2019 at 02:55:10PM +0500, Ibrar Ahmed wrote:\n>> On Thu, Apr 11, 2019 at 2:44 PM Erikjan Rijkers <er@xs4all.nl> wrote:\n>>>> Is it possible to have commit-message or at least git hash in\n>>>> commitfest. It will be very easy to track commit against commitfest\n>>>> item.\n\n>>> Commitfest items always point to discussion threads. These threads often\n>>> end with a message that says that the patch is pushed. IMHO, that\n>>> message would be the place to include the commithash. It would also be\n>>> easily findable via the commitfest application.\n\n> I think it might be useful to actually have that directly in the CF app,\n> not just in the thread. There would need to a way to enter multiple\n> hashes, because patches often have multiple pieces.\n\n> But maybe that'd be too much unnecessary burden. I don't remember when I\n> last needed this information. And I'd probably try searching in git log\n> first anyway.\n\nYeah, I can't see committers bothering to do this. Including the\ndiscussion thread link in the commit message is already pretty\nsignificant hassle, and something not everybody remembers/bothers with.\n\nBut ... maybe it could be automated? A bot looking at the commit log\ncould probably suck out the thread links and try to match them up\nto CF entries. Likely you could get about 90% right even without that,\njust by matching the committer's name and the time of commit vs time\nof CF entry closure.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Apr 2019 16:27:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commit message / hash in commitfest page."
},
{
"msg_contents": "On 04/13/19 15:56, Tomas Vondra wrote:\n> I think it might be useful to actually have that directly in the CF app,\n> not just in the thread. There would need to a way to enter multiple\n> hashes, because patches often have multiple pieces.\n\nThe CF app already recognizes (some) attachments in the email thread\nand makes them directly clickable from the CF entry page. Could it do\nthat with commit hashes, if found in the body of an email thread?\nGitweb does that pretty successfully with commits mentioned in\ncommit messages, and github does it automagically for text in issues\nand so on.\n\nMaybe it could even recognize phrases like \"commit 01deadbeef closes\ncf entry\" and change the cf entry state, though that'd be gravy.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Sat, 13 Apr 2019 16:48:49 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Commit message / hash in commitfest page."
},
{
"msg_contents": "On Sat, Apr 13, 2019 at 04:27:56PM -0400, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > On Thu, Apr 11, 2019 at 02:55:10PM +0500, Ibrar Ahmed wrote:\n> >> On Thu, Apr 11, 2019 at 2:44 PM Erikjan Rijkers <er@xs4all.nl> wrote:\n> >>>> Is it possible to have commit-message or at least git hash in\n> >>>> commitfest. It will be very easy to track commit against commitfest\n> >>>> item.\n> \n> >>> Commitfest items always point to discussion threads. These threads often\n> >>> end with a message that says that the patch is pushed. IMHO, that\n> >>> message would be the place to include the commithash. It would also be\n> >>> easily findable via the commitfest application.\n> \n> > I think it might be useful to actually have that directly in the CF app,\n> > not just in the thread. There would need to a way to enter multiple\n> > hashes, because patches often have multiple pieces.\n> \n> > But maybe that'd be too much unnecessary burden. I don't remember when I\n> > last needed this information. And I'd probably try searching in git log\n> > first anyway.\n> \n> Yeah, I can't see committers bothering to do this. Including the\n> discussion thread link in the commit message is already pretty\n> significant hassle, and something not everybody remembers/bothers with.\n> \n> But ... maybe it could be automated? A bot looking at the commit log\n> could probably suck out the thread links and try to match them up\n> to CF entries. 
Likely you could get about 90% right even without that,\n> just by matching the committer's name and the time of commit vs time\n> of CF entry closure.\n\nI've been getting a lot of lift out of the git_fdw (well, out of\ncaching it, as performance isn't great yet) for constructing the\nPostgreSQL Weekly News section on things already committed.\n\nAbout 3.5% of commits (as of last week) on master are within a minute\nof each other, so grabbing a window two minutes wide would work even\nif we didn't have the committer's name in hand, it's unlikely to\nproduce more than one result.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sat, 13 Apr 2019 23:15:26 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Commit message / hash in commitfest page."
},
{
"msg_contents": "On Sat, Apr 13, 2019 at 10:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > On Thu, Apr 11, 2019 at 02:55:10PM +0500, Ibrar Ahmed wrote:\n> >> On Thu, Apr 11, 2019 at 2:44 PM Erikjan Rijkers <er@xs4all.nl> wrote:\n> >>>> Is it possible to have commit-message or at least git hash in\n> >>>> commitfest. It will be very easy to track commit against commitfest\n> >>>> item.\n>\n> >>> Commitfest items always point to discussion threads. These threads\n> often\n> >>> end with a message that says that the patch is pushed. IMHO, that\n> >>> message would be the place to include the commithash. It would also\n> be\n> >>> easily findable via the commitfest application.\n>\n> > I think it might be useful to actually have that directly in the CF app,\n> > not just in the thread. There would need to a way to enter multiple\n> > hashes, because patches often have multiple pieces.\n>\n> > But maybe that'd be too much unnecessary burden. I don't remember when I\n> > last needed this information. And I'd probably try searching in git log\n> > first anyway.\n>\n> Yeah, I can't see committers bothering to do this. Including the\n> discussion thread link in the commit message is already pretty\n> significant hassle, and something not everybody remembers/bothers with.\n>\n> But ... maybe it could be automated? A bot looking at the commit log\n> could probably suck out the thread links and try to match them up\n> to CF entries. Likely you could get about 90% right even without that,\n> just by matching the committer's name and the time of commit vs time\n> of CF entry closure.\n>\n\nWould you even need to match that? 
It would be easy enough to scan all git\ncommit messages for links to th earchives and populate any CF entry that\nattaches to that same thread.\n\nOf course, that would be async, so you'd end up closing the CF entry and\nthen have it populate with the git information a bit later (in the simple\ncase where there is just one commit and then it 's done).\n\nUnless we want to go all the way and have said bot actualy close the CF\nentry. But the question is, do we?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n",
"msg_date": "Tue, 16 Apr 2019 08:47:27 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Commit message / hash in commitfest page."
},
{
"msg_contents": "On 2019-04-16 08:47, Magnus Hagander wrote:\n> Unless we want to go all the way and have said bot actualy close the CF\n> entry. But the question is, do we?\n\nI don't think so. There are too many special cases that would make this\nunreliable, like one commit fest thread consisting of multiple patches.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 16 Apr 2019 08:55:05 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit message / hash in commitfest page."
},
{
"msg_contents": "On Tue, Apr 16, 2019 at 8:55 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-04-16 08:47, Magnus Hagander wrote:\n> > Unless we want to go all the way and have said bot actualy close the CF\n> > entry. But the question is, do we?\n>\n> I don't think so. There are too many special cases that would make this\n> unreliable, like one commit fest thread consisting of multiple patches.\n>\n\nI definitely don't think we should close just because they show up. It\nwould also require a keyword somewhere to indicate that it should be\nclosed. Of course, it can still lead to weird results when the same thread\nis attached to multiple CF entries etc. So I agree, I don't think we'd want\nthat. Which means we'd have the async/out-of-order issue.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Tue, 16 Apr 2019 09:14:48 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Commit message / hash in commitfest page."
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Tue, Apr 16, 2019 at 8:55 AM Peter Eisentraut <\n> peter.eisentraut@2ndquadrant.com> wrote:\n>> On 2019-04-16 08:47, Magnus Hagander wrote:\n>>> Unless we want to go all the way and have said bot actualy close the CF\n>>> entry. But the question is, do we?\n\n>> I don't think so. There are too many special cases that would make this\n>> unreliable, like one commit fest thread consisting of multiple patches.\n\n> I definitely don't think we should close just because they show up.\n\nAgreed.\n\n> ... Which means we'd have the async/out-of-order issue.\n\nI don't see that as much of a problem. The use-case for these links,\nas I understand it, is for retrospective examination of CF data anyway.\nThe mere fact of closing the CF entry is enough for real-time status.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Apr 2019 09:37:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commit message / hash in commitfest page."
}
] |
[
{
"msg_contents": "Hi,\n\nI have a table with two dates, timeframe_begin and timeframe_end.\n\nI'd like to use daterange operators on this table, and an easy way would be to set up an index using gist on daterange(timeframe_begin, timeframe_end, '[]');\n\nI noticed some bad data where end < begin, so I modified these first, and tried to create the index in the same transaction. The index creation does not notice the data changes. It seems creating the gist index is not transaction safe?\n\ndb=> begin; \nBEGIN\ndb=> update group_info set timeframe_begin = timeframe_end where timeframe_begin > timeframe_end;\nUPDATE 76\ndb=> create index group_info_timeframe_idx on group_info using gist (daterange(timeframe_begin, timeframe_end, '[]'));\nERROR: range lower bound must be less than or equal to range upper bound\ndb=> abort; \nROLLBACK\n\ndb=> begin; \nBEGIN\ndb=> update group_info set timeframe_begin = timeframe_end where timeframe_begin > timeframe_end;\nUPDATE 76\ndb=> commit;\nCOMMIT\ndb=> begin;\nBEGIN\ndb=> create index group_info_timeframe_idx on group_info using gist (daterange(timeframe_begin, timeframe_end, '[]'));\nCREATE INDEX\ndb=> commit;\nCOMMIT\ndb=>\n\n\nI cannot find anything about gist indexes not being transaction safe? It is reproducible on different machines with different datasets. Is this correct behaviour?\n\nThis is on PostgreSQL-9.6.\n\nCheers,\nPalle\n\n\n\n",
"msg_date": "Thu, 11 Apr 2019 14:41:25 +0200",
"msg_from": "Palle Girgensohn <girgen@pingpong.se>",
"msg_from_op": true,
"msg_subject": "creating gist index seems to look at data ignoring transaction?"
},
{
"msg_contents": "Palle Girgensohn <girgen@pingpong.se> writes:\n> I noticed some bad data where end < begin, so I modified these first, and tried to vcreate the index in the same transaction. The index creation does not notice the data changes. It seems creating the gist index this is not transaction safe?\n\nIndex creation has to include not-yet-dead tuples in case the index gets\nused by some transaction that can still see those tuples. So in this\ncase index entries get made for both the original and the updated versions\nof the tuples in question.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Apr 2019 10:15:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: creating gist index seems to look at data ignoring transaction?"
}
] |
[
{
"msg_contents": "Hi,\n\n(added Alvaro, Amit, and David)\n\nWhile working on an update-tuple-routing bug in postgres_fdw [1], I\nnoticed this change to ExecCleanupTupleRouting() made by commit\n3f2393edefa5ef2b6970a5a2fa2c7e9c55cc10cf:\n\n+ /*\n+ * Check if this result rel is one belonging to the node's subplans,\n+ * if so, let ExecEndPlan() clean it up.\n+ */\n+ if (htab)\n+ {\n+ Oid partoid;\n+ bool found;\n+\n+ partoid = RelationGetRelid(resultRelInfo->ri_RelationDesc);\n+\n+ (void) hash_search(htab, &partoid, HASH_FIND, &found);\n+ if (found)\n+ continue;\n+ }\n\n /* Allow any FDWs to shut down if they've been exercised */\n- if (resultRelInfo->ri_PartitionReadyForRouting &&\n- resultRelInfo->ri_FdwRoutine != NULL &&\n+ if (resultRelInfo->ri_FdwRoutine != NULL &&\n resultRelInfo->ri_FdwRoutine->EndForeignInsert != NULL)\n\nresultRelInfo->ri_FdwRoutine->EndForeignInsert(mtstate->ps.state,\n resultRelInfo);\n\nThis skips subplan resultrels before calling EndForeignInsert() if they\nare foreign tables, which I think causes an issue: the FDWs would fail\nto release resources for their foreign insert operations, because\nExecEndPlan() and ExecEndModifyTable() don't do anything to allow them\nto do that. So I think we should skip subplan resultrels after\nEndForeignInsert(). Attached is a small patch for that.\n\nBest regards,\nEtsuro Fujita\n\n[1]\nhttps://www.postgresql.org/message-id/21e7eaa4-0d4d-20c2-a1f7-c7e96f4ce440%40lab.ntt.co.jp",
"msg_date": "Thu, 11 Apr 2019 22:05:19 +0900",
"msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Issue in ExecCleanupTupleRouting()"
},
{
"msg_contents": "On Fri, 12 Apr 2019 at 01:06, Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp> wrote:\n> + /*\n> + * Check if this result rel is one belonging to the node's subplans,\n> + * if so, let ExecEndPlan() clean it up.\n> + */\n> + if (htab)\n> + {\n> + Oid partoid;\n> + bool found;\n> +\n> + partoid = RelationGetRelid(resultRelInfo->ri_RelationDesc);\n> +\n> + (void) hash_search(htab, &partoid, HASH_FIND, &found);\n> + if (found)\n> + continue;\n> + }\n>\n> /* Allow any FDWs to shut down if they've been exercised */\n> - if (resultRelInfo->ri_PartitionReadyForRouting &&\n> - resultRelInfo->ri_FdwRoutine != NULL &&\n> + if (resultRelInfo->ri_FdwRoutine != NULL &&\n> resultRelInfo->ri_FdwRoutine->EndForeignInsert != NULL)\n>\n> resultRelInfo->ri_FdwRoutine->EndForeignInsert(mtstate->ps.state,\n> resultRelInfo);\n>\n> This skips subplan resultrels before calling EndForeignInsert() if they\n> are foreign tables, which I think causes an issue: the FDWs would fail\n> to release resources for their foreign insert operations, because\n> ExecEndPlan() and ExecEndModifyTable() don't do anything to allow them\n> to do that. So I think we should skip subplan resultrels after\n> EndForeignInsert(). Attached is a small patch for that.\n\nOops. I had for some reason been under the impression that it was\nnodeModifyTable.c, or whatever the calling code happened to be that\nhandles these ones, but this is not the case as we call\nExecInitRoutingInfo() from ExecFindPartition() which makes the call to\nBeginForeignInsert. If that part is handled by the tuple routing code,\nthen the subsequent cleanup should be too, in which case your patch\nlooks fine.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 12 Apr 2019 01:28:04 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue in ExecCleanupTupleRouting()"
},
{
"msg_contents": "On 2019/04/11 22:28, David Rowley wrote:\n> On Fri, 12 Apr 2019 at 01:06, Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp> wrote:\n>> + /*\n>> + * Check if this result rel is one belonging to the node's subplans,\n>> + * if so, let ExecEndPlan() clean it up.\n>> + */\n>> + if (htab)\n>> + {\n>> + Oid partoid;\n>> + bool found;\n>> +\n>> + partoid = RelationGetRelid(resultRelInfo->ri_RelationDesc);\n>> +\n>> + (void) hash_search(htab, &partoid, HASH_FIND, &found);\n>> + if (found)\n>> + continue;\n>> + }\n>>\n>> /* Allow any FDWs to shut down if they've been exercised */\n>> - if (resultRelInfo->ri_PartitionReadyForRouting &&\n>> - resultRelInfo->ri_FdwRoutine != NULL &&\n>> + if (resultRelInfo->ri_FdwRoutine != NULL &&\n>> resultRelInfo->ri_FdwRoutine->EndForeignInsert != NULL)\n>>\n>> resultRelInfo->ri_FdwRoutine->EndForeignInsert(mtstate->ps.state,\n>> resultRelInfo);\n>>\n>> This skips subplan resultrels before calling EndForeignInsert() if they\n>> are foreign tables, which I think causes an issue: the FDWs would fail\n>> to release resources for their foreign insert operations, because\n>> ExecEndPlan() and ExecEndModifyTable() don't do anything to allow them\n>> to do that. So I think we should skip subplan resultrels after\n>> EndForeignInsert(). Attached is a small patch for that.\n> \n> Oops. I had for some reason been under the impression that it was\n> nodeModifyTable.c, or whatever the calling code happened to be that\n> handles these ones, but this is not the case as we call\n> ExecInitRoutingInfo() from ExecFindPartition() which makes the call to\n> BeginForeignInsert. If that part is handled by the tuple routing code,\n> then the subsequent cleanup should be too, in which case your patch\n> looks fine.\n\nThat sounds right.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Fri, 12 Apr 2019 10:48:06 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Issue in ExecCleanupTupleRouting()"
},
{
"msg_contents": "On Fri, 12 Apr 2019 at 01:06, Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp> wrote:\n> While working on an update-tuple-routing bug in postgres_fdw [1], I\n> noticed this change to ExecCleanupTupleRouting() made by commit\n> 3f2393edefa5ef2b6970a5a2fa2c7e9c55cc10cf:\n\nAdded to open items list.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 15 Apr 2019 16:32:53 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue in ExecCleanupTupleRouting()"
},
{
"msg_contents": "(2019/04/12 10:48), Amit Langote wrote:\n> On 2019/04/11 22:28, David Rowley wrote:\n>> On Fri, 12 Apr 2019 at 01:06, Etsuro Fujita<fujita.etsuro@lab.ntt.co.jp> wrote:\n>>> + /*\n>>> + * Check if this result rel is one belonging to the node's subplans,\n>>> + * if so, let ExecEndPlan() clean it up.\n>>> + */\n>>> + if (htab)\n>>> + {\n>>> + Oid partoid;\n>>> + bool found;\n>>> +\n>>> + partoid = RelationGetRelid(resultRelInfo->ri_RelationDesc);\n>>> +\n>>> + (void) hash_search(htab,&partoid, HASH_FIND,&found);\n>>> + if (found)\n>>> + continue;\n>>> + }\n>>>\n>>> /* Allow any FDWs to shut down if they've been exercised */\n>>> - if (resultRelInfo->ri_PartitionReadyForRouting&&\n>>> - resultRelInfo->ri_FdwRoutine != NULL&&\n>>> + if (resultRelInfo->ri_FdwRoutine != NULL&&\n>>> resultRelInfo->ri_FdwRoutine->EndForeignInsert != NULL)\n>>>\n>>> resultRelInfo->ri_FdwRoutine->EndForeignInsert(mtstate->ps.state,\n>>> resultRelInfo);\n>>>\n>>> This skips subplan resultrels before calling EndForeignInsert() if they\n>>> are foreign tables, which I think causes an issue: the FDWs would fail\n>>> to release resources for their foreign insert operations, because\n>>> ExecEndPlan() and ExecEndModifyTable() don't do anything to allow them\n>>> to do that. So I think we should skip subplan resultrels after\n>>> EndForeignInsert(). Attached is a small patch for that.\n>>\n>> Oops. I had for some reason been under the impression that it was\n>> nodeModifyTable.c, or whatever the calling code happened to be that\n>> handles these ones, but this is not the case as we call\n>> ExecInitRoutingInfo() from ExecFindPartition() which makes the call to\n>> BeginForeignInsert. If that part is handled by the tuple routing code,\n>> then the subsequent cleanup should be too, in which case your patch\n>> looks fine.\n>\n> That sounds right.\n\nPushed. Thanks for reviewing, David and Amit!\n\nBest regards,\nEtsuro Fujita\n\n\n\n",
"msg_date": "Mon, 15 Apr 2019 19:12:23 +0900",
"msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Issue in ExecCleanupTupleRouting()"
},
{
"msg_contents": "(2019/04/15 13:32), David Rowley wrote:\n> On Fri, 12 Apr 2019 at 01:06, Etsuro Fujita<fujita.etsuro@lab.ntt.co.jp> wrote:\n>> While working on an update-tuple-routing bug in postgres_fdw [1], I\n>> noticed this change to ExecCleanupTupleRouting() made by commit\n>> 3f2393edefa5ef2b6970a5a2fa2c7e9c55cc10cf:\n>\n> Added to open items list.\n\nThanks! I moved this item to resolved ones.\n\nBest regards,\nEtsuro Fujita\n\n\n\n",
"msg_date": "Mon, 15 Apr 2019 19:16:09 +0900",
"msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Issue in ExecCleanupTupleRouting()"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nThe following test sequence causes an error \"cache lookup failed for collation\n0\":\n\npostgres:5432 [42106]=# create table foobar(a bytea primary key, b int);\nCREATE TABLE\npostgres:5432 [42106]=# insert into foobar\nvalues('\\x4c835521685c46ee827ab83d376cf028', 1);\nINSERT 0 1\npostgres:5432 [42106]=# \\d+ foobar\n Table \"public.foobar\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n--------+---------+-----------+----------+---------+----------+--------------+-------------\n a | bytea | | not null | | extended | |\n b | integer | | | | plain | |\nIndexes:\n \"foobar_pkey\" PRIMARY KEY, btree (a)\nAccess method: heap\n\npostgres:5432 [42106]=# select * from foobar where a like '%1%';\nERROR: cache lookup failed for collation 0\n\n---\n\nAfter debugging it, I have observed that the code in question was added by\ncommit 5e1963fb764e9cc092e0f7b58b28985c311431d9 which added support for\ncollations with nondeterministic comparison.\n\nThe error is coming from get_collation_isdeterministic() when the colloid\npassed is 0. I think, like we do in get_collation_name(), we should return\nfalse here when such a collation oid does not exist.\n\nThe attached patch does that and re-arranges the code to look similar\nto get_collation_name(). It also adds a small test case.\n\n---\n\nHowever, I have not fully understood the code changes done by the said\ncommit, and thus the current behavior, i.e. the cache lookup error, might be the\nexpected one. But if that's the case, I kindly request an explanation of\nwhy it is expected.\n\nThanks\n\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 11 Apr 2019 20:34:37 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "cache lookup failed for collation 0"
},
{
"msg_contents": "Jeevan Chalke <jeevan.chalke@enterprisedb.com> writes:\n> Following test-sequence causing an error \"cache lookup failed for collation 0\";\n> postgres:5432 [42106]=# create table foobar(a bytea primary key, b int);\n> CREATE TABLE\n> postgres:5432 [42106]=# insert into foobar\n> values('\\x4c835521685c46ee827ab83d376cf028', 1);\n> INSERT 0 1\n> postgres:5432 [42106]=# select * from foobar where a like '%1%';\n> ERROR: cache lookup failed for collation 0\n\nGood catch!\n\n> The error is coming from get_collation_isdeterministic() when colloid\n> passed is 0. I think like we do in get_collation_name(), we should return\n> false here when such collation oid does not exist.\n\nConsidering that e.g. lc_ctype_is_c() doesn't fail for InvalidOid, I agree\nthat it's probably a bad idea for get_collation_isdeterministic to fail.\nThere's a lot of code that thinks it can check for InvalidOid only in slow\npaths. However, I'd kind of expect the default result to be \"true\" not\n\"false\". Doing what you suggest would make match_pattern_prefix fail\nentirely, unless we also put a special case there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Apr 2019 11:37:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cache lookup failed for collation 0"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 9:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Jeevan Chalke <jeevan.chalke@enterprisedb.com> writes:\n> > Following test-sequence causing an error \"cache lookup failed for\n> collation 0\";\n> > postgres:5432 [42106]=# create table foobar(a bytea primary key, b int);\n> > CREATE TABLE\n> > postgres:5432 [42106]=# insert into foobar\n> > values('\\x4c835521685c46ee827ab83d376cf028', 1);\n> > INSERT 0 1\n> > postgres:5432 [42106]=# select * from foobar where a like '%1%';\n> > ERROR: cache lookup failed for collation 0\n>\n> Good catch!\n>\n> > The error is coming from get_collation_isdeterministic() when colloid\n> > passed is 0. I think like we do in get_collation_name(), we should return\n> > false here when such collation oid does not exist.\n>\n> Considering that e.g. lc_ctype_is_c() doesn't fail for InvalidOid, I agree\n> that it's probably a bad idea for get_collation_isdeterministic to fail.\n> There's a lot of code that thinks it can check for InvalidOid only in slow\n> paths. However, I'd kind of expect the default result to be \"true\" not\n> \"false\". 
Doing what you suggest would make match_pattern_prefix fail\n> entirely, unless we also put a special case there.\n>\n\nDo you mean the code in get_collation_isdeterministic() should look\nsomething like below?\n\nIf colloid = InvalidOid then\n return TRUE\nELSE IF tuple is valid then\n return collisdeterministic from the tuple\nELSE\n return FALSE\n\nI think for non-zero colloid which is not valid we should return false, but\nI may be missing your point here.\n\n\n>\n> regards, tom lane\n>\n\n\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 11 Apr 2019 22:26:15 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cache lookup failed for collation 0"
},
{
"msg_contents": "Jeevan Chalke <jeevan.chalke@enterprisedb.com> writes:\n> Do you mean, the code in get_collation_isdeterministic() should look like\n> something like below?\n\n> If colloid = InvalidOid then\n> return TRUE\n> ELSE IF tuple is valid then\n> return collisdeterministic from the tuple\n> ELSE\n> return FALSE\n\nI think it's appropriate to fail if we don't find a tuple, for any\ncollation oid other than zero. Again, if you trace through the\nbehavior of the longstanding collation check functions like\nlc_ctype_is_c(), you'll see that that's what happens (except for\nsome hardwired OIDs that they have fast paths for).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Apr 2019 13:20:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cache lookup failed for collation 0"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 10:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Jeevan Chalke <jeevan.chalke@enterprisedb.com> writes:\n> > Do you mean, the code in get_collation_isdeterministic() should look like\n> > something like below?\n>\n> > If colloid = InvalidOid then\n> > return TRUE\n> > ELSE IF tuple is valid then\n> > return collisdeterministic from the tuple\n> > ELSE\n> > return FALSE\n>\n> I think it's appropriate to fail if we don't find a tuple, for any\n> collation oid other than zero. Again, if you trace through the\n> behavior of the longstanding collation check functions like\n> lc_ctype_is_c(), you'll see that that's what happens (except for\n> some hardwired OIDs that they have fast paths for).\n>\n\nOK.\n\nAttached is a patch which treats \"collation 0\" as deterministic in\nget_collation_isdeterministic() and returns true, keeping the rest of the code\nas is.\n\n\n> regards, tom lane\n>\n\n\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 12 Apr 2019 11:43:36 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cache lookup failed for collation 0"
},
{
"msg_contents": "On 2019-04-11 17:04, Jeevan Chalke wrote:\n> The error is coming from get_collation_isdeterministic() when colloid\n> passed is 0. I think like we do in get_collation_name(), we should\n> return false here when such collation oid does not exist.\n\nI'm not in favor of doing that. It would risk papering over errors of\nomission at other call sites.\n\nThe root cause is that the same code match_pattern_prefix() is being\nused for text and bytea, but bytea does not use collations, so having\nthe collation 0 is expected, and we shouldn't call\nget_collation_isdeterministic() in that case.\n\nProposed patch attached.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 12 Apr 2019 09:56:09 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: cache lookup failed for collation 0"
},
{
"msg_contents": "On Fri, Apr 12, 2019 at 1:26 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-04-11 17:04, Jeevan Chalke wrote:\n> > The error is coming from get_collation_isdeterministic() when colloid\n> > passed is 0. I think like we do in get_collation_name(), we should\n> > return false here when such collation oid does not exist.\n>\n> I'm not in favor of doing that. It would risk papering over errors of\n> omission at other call sites.\n>\n> The root cause is that the same code match_pattern_prefix() is being\n> used for text and bytea, but bytea does not use collations, so having\n> the collation 0 is expected, and we shouldn't call\n> get_collation_isdeterministic() in that case.\n>\n> Proposed patch attached.\n>\n\nLooks fine to me.\n\n\n>\n>\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 15 Apr 2019 11:14:04 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cache lookup failed for collation 0"
},
{
"msg_contents": "On 2019-04-15 07:44, Jeevan Chalke wrote:\n> The root cause is that the same code match_pattern_prefix() is being\n> used for text and bytea, but bytea does not use collations, so having\n> the collation 0 is expected, and we shouldn't call\n> get_collation_isdeterministic() in that case.\n> \n> Proposed patch attached.\n> \n> Looks fine to me.\n\nCommitted, thanks.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 15 Apr 2019 09:37:27 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: cache lookup failed for collation 0"
}
] |
[
{
"msg_contents": "While working on the instance encryption I found it annoying to apply\ndecryption of XLOG pages in three different functions. Attached is a patch that\ntries to merge them all into one function, XLogRead(). The existing\nimplementations differ in the way a new segment is opened, so I added a pointer\nto a callback function as a new argument. This callback handles the specific\nways to determine the segment file name and to open the file.\n\nI can split the patch into multiple diffs to make detailed review easier, but\nfirst I'd like to hear if anything is seriously wrong about this\ndesign. Thanks.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Thu, 11 Apr 2019 18:05:42 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Hello.\n\nAt Thu, 11 Apr 2019 18:05:42 +0200, Antonin Houska <ah@cybertec.at> wrote in <14984.1554998742@spoje.net>\n> While working on the instance encryption I found it annoying to apply\n> decyption of XLOG page to three different functions. Attached is a patch that\n> tries to merge them all into one function, XLogRead(). The existing\n> implementations differ in the way new segment is opened. So I added a pointer\n> to callback function as a new argument. This callback handles the specific\n> ways to determine segment file name and to open the file.\n> \n> I can split the patch into multiple diffs to make detailed review easier, but\n> first I'd like to hear if anything is seriously wrong about this\n> design. Thanks.\n\nThis patch changes XLogRead to allow using other than\nBasicOpenFile to open a segment, and uses XLogReaderState.private\nto hold a new struct XLogReadPos for the segment reader. The new\nstruct is heavily duplicated with XLogReaderState and I'm not\nsure of the reason why the XLogReadPos is needed.\n\nAnyway, in the first place, such two distinct-but-highly-related\ncallbacks make things too complex. Heikki said that the log\nreader stuff is better off not using callbacks and I agree with that. I\ndid that once on my own but the code is no longer\napplicable. But it seems to be the time to do that.\n\nhttps://www.postgresql.org/message-id/47215279-228d-f30d-35d1-16af695e53f3@iki.fi\n\nThat would look like the following. That refactoring separates the log\nreader and the page reader.\n\n\nfor(;;)\n{\n rc = XLogReadRecord(reader, startptr, errormsg);\n\n if (rc == XLREAD_SUCCESS)\n {\n /* great, got record */\n }\n if (rc == XLREAD_INVALID_PAGE || rc == XLREAD_INVALID_RECORD)\n {\n elog(ERROR, \"invalid record\");\n }\n if (rc == XLREAD_NEED_DATA)\n {\n /*\n * Read a page from disk, and place it into reader->readBuf\n */\n XLogPageRead(reader->readPagePtr, /* page to read */\n reader->reqLen /* # of bytes to read */ );\n /*\n * Now that we have read the data that XLogReadRecord()\n * requested, call it again.\n */\n continue;\n }\n}\n\nDecodingContextFindStartpoint(ctx)\n do\n {\n read_local_xlog_page(....);\n rc = XLogReadRecord(reader);\n } while (rc == XLREAD_NEED_DATA);\n\nI'm going to do that again.\n\n\nAny other opinions, or thoughts?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Fri, 12 Apr 2019 12:27:11 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
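The return-code loop sketched in the message above can be modeled with a tiny self-contained stand-in. Everything here (MiniXLogReader, mini_demo, the exact enum shape) is hypothetical illustration, not the real xlogreader API: the point is only that the reader returns XLREAD_NEED_DATA and unwinds the stack, leaving the page fetch to the caller.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical result codes mirroring the XLREAD_* values in the sketch. */
typedef enum
{
    XLREAD_SUCCESS,
    XLREAD_NEED_DATA,
    XLREAD_FAIL
} MiniXLogReadResult;

typedef struct
{
    int  pages_fed;     /* pages the caller has supplied so far */
    int  pages_needed;  /* pages required before the record is complete */
    char readBuf[64];   /* stand-in for reader->readBuf */
} MiniXLogReader;

/*
 * Stand-in for XLogReadRecord() in the proposed design: it never reads a
 * page itself; when it lacks data it returns XLREAD_NEED_DATA so the
 * caller can fetch the page from whatever source it controls.
 */
static MiniXLogReadResult
MiniXLogReadRecord(MiniXLogReader *reader)
{
    if (reader->pages_fed < reader->pages_needed)
        return XLREAD_NEED_DATA;
    return XLREAD_SUCCESS;
}

/* Caller-side loop, shaped like the pseudocode in the message above.
 * Returns how many page reads were needed, or -1 on failure. */
static int
mini_demo(int pages_needed)
{
    MiniXLogReader reader = {0};
    int reads = 0;

    reader.pages_needed = pages_needed;
    for (;;)
    {
        MiniXLogReadResult rc = MiniXLogReadRecord(&reader);

        if (rc == XLREAD_SUCCESS)
            return reads;                  /* great, got record */
        if (rc == XLREAD_NEED_DATA)
        {
            /* "read a page from disk" into reader.readBuf */
            memset(reader.readBuf, 0, sizeof(reader.readBuf));
            reader.pages_fed++;
            reads++;
            continue;
        }
        return -1;                         /* invalid record */
    }
}
```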
{
"msg_contents": "On Fri, Apr 12, 2019 at 12:27:11PM +0900, Kyotaro HORIGUCHI wrote:\n> This patch changes XLogRead to allow using other than\n> BasicOpenFile to open a segment, and use XLogReaderState.private\n> to hold a new struct XLogReadPos for the segment reader. The new\n> struct is heavily duplicated with XLogReaderState and I'm not\n> sure the rason why the XLogReadPos is needed.\n> Any other opinions, or thoughts?\n\nThe focus is on the stability of v12 for the next couple of months, so\nplease make sure to register it to the next CF if you want feedback.\n\nHere are some basic thoughts after a very quick lookup.\n\n+/*\n+ * Position in XLOG file while reading it.\n+ */\n+typedef struct XLogReadPos\n+{\n+ int segFile; /* segment file descriptor */\n+ XLogSegNo segNo; /* segment number */\n+ uint32 segOff; /* offset in the segment */\n+ TimeLineID tli; /* timeline ID of the currently open file */\n+\n+ char *dir; /* directory (only needed by\nfrontends) */\n+} XLogReadPos;\nNot sure if there is any point to split that from the XLOG reader\nstatus.\n\n+static void fatal_error(const char *fmt,...) pg_attribute_printf(1, 2);\n+\n+static void\n+fatal_error(const char *fmt,...)\nThis adds more confusion to something which already has enough\nwarts on HEAD when it comes to declaring local equivalents of\nelog() for src/common/.\n\n+/*\n+ * This is a front-end counterpart of XLogFileNameP.\n+ */\n+static char *\n+XLogFileNameFE(TimeLineID tli, XLogSegNo segno)\n+{\n+ char *result = palloc(MAXFNAMELEN);\n+\n+ XLogFileName(result, tli, segno, WalSegSz);\n+ return result;\n+}\nWe could use a pointer to an allocated area. Or even better, just a\nstatic variable, as this is only used in error messages to store\nthe segment name temporarily, in a routine part of perhaps\nxlogreader.c.\n--\nMichael",
"msg_date": "Fri, 12 Apr 2019 13:27:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
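Michael's static-variable suggestion can be sketched as below. The formatting formula mirrors XLogFileName()/XLogSegmentsPerXLogId() from xlog_internal.h, but the helper name and the exact formula should be treated as assumptions of this sketch rather than the committed code.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define MAXFNAMELEN 64

/* Mirrors XLogSegmentsPerXLogId(): segments per 4 GB "xlog id". */
static uint64_t
segments_per_xlogid(int segsize)
{
    return UINT64_C(0x100000000) / (uint64_t) segsize;
}

/*
 * Format a WAL segment file name into a static buffer, as suggested above
 * for transient use in error messages.  Not reentrant; that is the usual
 * trade-off of a static result buffer.
 */
static const char *
xlog_file_name_static(uint32_t tli, uint64_t segno, int segsize)
{
    static char buf[MAXFNAMELEN];

    snprintf(buf, sizeof(buf), "%08X%08X%08X",
             (unsigned) tli,
             (unsigned) (segno / segments_per_xlogid(segsize)),
             (unsigned) (segno % segments_per_xlogid(segsize)));
    return buf;
}
```

With the default 16 MB segment size this produces the familiar 24-character WAL file names.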
{
"msg_contents": "On Fri, Apr 12, 2019 at 2:06 AM Antonin Houska <ah@cybertec.at> wrote:\n\n> While working on the instance encryption I found it annoying to apply\n> decyption of XLOG page to three different functions. Attached is a patch\n> that\n> tries to merge them all into one function, XLogRead(). The existing\n> implementations differ in the way new segment is opened. So I added a\n> pointer\n> to callback function as a new argument. This callback handles the specific\n> ways to determine segment file name and to open the file.\n>\n> I can split the patch into multiple diffs to make detailed review easier,\n> but\n> first I'd like to hear if anything is seriously wrong about this\n> design. Thanks.\n>\n\nI didn't check the code, but it is good to combine all the 3 page read\nfunctions\ninto one instead of spreading the logic.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Fri, 12 Apr 2019 18:48:33 +1000",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On 2019-Apr-11, Antonin Houska wrote:\n\n> While working on the instance encryption I found it annoying to apply\n> decyption of XLOG page to three different functions. Attached is a patch that\n> tries to merge them all into one function, XLogRead(). The existing\n> implementations differ in the way new segment is opened. So I added a pointer\n> to callback function as a new argument. This callback handles the specific\n> ways to determine segment file name and to open the file.\n> \n> I can split the patch into multiple diffs to make detailed review easier, but\n> first I'd like to hear if anything is seriously wrong about this\n> design. Thanks.\n\nI agree that xlog reading is pretty messy.\n\nI think ifdef'ing the way XLogRead reports errors is not great. Maybe\nwe can pass a function pointer that is to be called in case of errors?\nNot sure about the walsize; maybe it can be a member in XLogReadPos, and\ngiven to XLogReadInitPos()? (Maybe rename XLogReadPos as\nXLogReadContext or something like that, indicating it's not just the\nread position.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 12 Apr 2019 12:46:56 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n\n> Hello.\n> \n> At Thu, 11 Apr 2019 18:05:42 +0200, Antonin Houska <ah@cybertec.at> wrote in <14984.1554998742@spoje.net>\n> > While working on the instance encryption I found it annoying to apply\n> > decyption of XLOG page to three different functions. Attached is a patch that\n> > tries to merge them all into one function, XLogRead(). The existing\n> > implementations differ in the way new segment is opened. So I added a pointer\n> > to callback function as a new argument. This callback handles the specific\n> > ways to determine segment file name and to open the file.\n> > \n> > I can split the patch into multiple diffs to make detailed review easier, but\n> > first I'd like to hear if anything is seriously wrong about this\n> > design. Thanks.\n> \n> This patch changes XLogRead to allow using other than\n> BasicOpenFile to open a segment,\n\nGood point. The acceptable ways to open file on both frontend and backend side\nneed to be documented.\n\n> and use XLogReaderState.private to hold a new struct XLogReadPos for the\n> segment reader. The new struct is heavily duplicated with XLogReaderState\n> and I'm not sure the rason why the XLogReadPos is needed.\n\nok, I missed the fact that XLogReaderState already contains most of the info\nthat I put into XLogReadPos. So XLogReadPos is not needed.\n\n> Anyway, in the first place, such two distinct-but-highly-related\n> callbacks makes things too complex. Heikki said that the log\n> reader stuff is better not using callbacks and I agree to that. I\n> did that once for my own but the code is no longer\n> applicable. But it seems to be the time to do that.\n> \n> https://www.postgresql.org/message-id/47215279-228d-f30d-35d1-16af695e53f3@iki.fi\n\nThanks for the link. 
My understanding is that the drawback of the\nXLogReaderState.read_page callback is that it cannot easily switch between\nXLOG sources in order to handle failure because the caller of XLogReadRecord()\nusually controls those sources too.\n\nHowever the callback I pass to XLogRead() is different: if it fails, it simply\nraises ERROR. Since this indicates rather low-level problem, there's no reason\nfor this callback to try to recover from the failure.\n\n> That would seems like follows. That refactoring separates log\n> reader and page reader.\n> \n> \n> for(;;)\n> {\n> rc = XLogReadRecord(reader, startptr, errormsg);\n> \n> if (rc == XLREAD_SUCCESS)\n> {\n> /* great, got record */\n> }\n> if (rc == XLREAD_INVALID_PAGE || XLREAD_INVALID_RECORD)\n> {\n> elog(ERROR, \"invalid record\");\n> }\n> if (rc == XLREAD_NEED_DATA)\n> {\n> /*\n> * Read a page from disk, and place it into reader->readBuf\n> */\n> XLogPageRead(reader->readPagePtr, /* page to read */\n> reader->reqLen /* # of bytes to read */ );\n> /*\n> * Now that we have read the data that XLogReadRecord()\n> * requested, call it again.\n> */\n> continue;\n> }\n> }\n> \n> DecodingContextFindStartpoint(ctx)\n> do\n> {\n> read_local_xlog_page(....);\n> rc = XLogReadRecord (reader);\n> while (rc == XLREAD_NEED_DATA);\n> \n> I'm going to do that again.\n> \n> \n> Any other opinions, or thoughts?\n\nI don't see an overlap between what you do and what I do. It seems that even\nif you change the XLOG reader API, you don't care what read_local_xlog_page()\ndoes internally. What I try to fix is XLogRead(), and that is actually a\nsubroutine of read_local_xlog_page().\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Mon, 15 Apr 2019 10:22:05 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Apr 12, 2019 at 12:27:11PM +0900, Kyotaro HORIGUCHI wrote:\n> > This patch changes XLogRead to allow using other than\n> > BasicOpenFile to open a segment, and use XLogReaderState.private\n> > to hold a new struct XLogReadPos for the segment reader. The new\n> > struct is heavily duplicated with XLogReaderState and I'm not\n> > sure the rason why the XLogReadPos is needed.\n> > Any other opinions, or thoughts?\n> \n> The focus is on the stability of v12 for the next couple of months, so\n> please make sure to register it to the next CF if you want feedback.\n\nok, will do. (A link to mailing list is needed for the CF entry, so I had to\npost something anyway :-) Since I don't introduce any kind of \"cool new\nfeature\" here, I believe it did not disturb much.)\n\n> Here are some basic thoughts after a very quick lookup.\n> ...\n\nThanks.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Mon, 15 Apr 2019 10:48:47 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> I agree that xlog reading is pretty messy.\n> \n> I think ifdef'ing the way XLogRead reports errors is not great. Maybe\n> we can pass a function pointer that is to be called in case of errors?\n\nI'll try a bit harder to evaluate the existing approaches to report the same\nerror on both backend and frontend side.\n\n> Not sure about the walsize; maybe it can be a member in XLogReadPos, and\n> given to XLogReadInitPos()? (Maybe rename XLogReadPos as\n> XLogReadContext or something like that, indicating it's not just the\n> read position.)\n\nAs pointed out by others, XLogReadPos is not necessary. So if XLogRead()\nreceives XLogReaderState instead, it can get the segment size from there.\n\nThanks.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Mon, 15 Apr 2019 11:27:36 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Antonin Houska <ah@cybertec.at> wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> > Not sure about the walsize; maybe it can be a member in XLogReadPos, and\n> > given to XLogReadInitPos()? (Maybe rename XLogReadPos as\n> > XLogReadContext or something like that, indicating it's not just the\n> > read position.)\n> \n> As pointed out by others, XLogReadPos is not necessary. So if XLogRead()\n> receives XLogReaderState instead, it can get the segment size from there.\n\nEventually I found out that it's good to have a separate structure for the\nread position because walsender calls the XLogRead() function directly, not\nvia the XLOG reader. Currently the structure name is XLogSegment (maybe\nsomeone can propose a better name) and it's a member of XLogReaderState. No\nfield of the new structure is duplicated now.\n\nThe next version of the patch is attached.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Thu, 02 May 2019 18:17:36 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On Thu, May 2, 2019 at 12:18 PM Antonin Houska <ah@cybertec.at> wrote:\n> The next version of the patch is attached.\n\nI don't think any of this looks acceptable:\n\n+#ifndef FRONTEND\n+/*\n+ * Backend should have wal_segment_size variable initialized, segsize is not\n+ * used.\n+ */\n+#define XLogFileNameCommon(tli, num, segsize) XLogFileNameP((tli), (num))\n+#define xlr_error(...) ereport(ERROR, (errcode_for_file_access(),\nerrmsg(__VA_ARGS__)))\n+#else\n+static char xlr_error_msg[MAXFNAMELEN];\n+#define XLogFileNameCommon(tli, num, segsize)\n(XLogFileName(xlr_error_msg, (tli), (num), (segsize)),\\\n+ xlr_error_msg)\n+#include \"fe_utils/logging.h\"\n+/*\n+ * Frontend application (currently only pg_waldump.c) cannot catch and further\n+ * process errors, so they simply treat them as fatal.\n+ */\n+#define xlr_error(...) do {pg_log_fatal(__VA_ARGS__);\nexit(EXIT_FAILURE); } while(0)\n+#endif\n\nThe backend part doesn't look OK because depending on the value of a\nglobal variable instead of getting the information via parameters\nseems like a step backward. The frontend part doesn't look OK because\nit locks every application that uses the xlogreader stuff into using\npg_log_fatal when an error occurs, which may not be what everybody\nwants to do.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 6 May 2019 14:03:13 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On 2019-May-06, Robert Haas wrote:\n\n> On Thu, May 2, 2019 at 12:18 PM Antonin Houska <ah@cybertec.at> wrote:\n> > The next version of the patch is attached.\n> \n> I don't think any of this looks acceptable:\n\nI agree. I intended to suggest upthread to pass an additional argument\nto XLogRead, which is a function that takes a message string and\nSQLSTATE; in backend, the function does errstart / errstate / errmsg /\nerrfinish, and in frontend programs it does pg_log_fatal (and ignores\nsqlstate). The message must be sprintf'ed and translated by XLogRead.\n(xlogreader.c could itself provide a default error reporting callback,\nat least for frontend, to avoid repeating the code). That way, if a\ndifferent frontend program wants to do something different, it's fairly\neasy to pass a different function pointer.\n\nBTW, having frontend's XLogFileNameCommon use a totally unrelated\nvariable for its printing is naughty.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 6 May 2019 14:21:34 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
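A minimal model of the error-reporting callback proposed in the message above: the reader formats the message and hands it, together with an SQLSTATE, to whatever callback the backend or frontend installed. Every name here (wal_read_error_cb, mini_xlog_read, the recording callback) is a hypothetical stand-in for illustration; a real frontend default would call pg_log_fatal() instead of recording the error.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical callback type: the reader formats/translates the message,
 * the callback decides what to do with it (ereport vs pg_log_fatal). */
typedef void (*wal_read_error_cb) (const char *sqlstate, const char *message);

static char last_sqlstate[6];
static char last_message[256];

/* A callback a frontend might install; here it just records the error so
 * the behavior can be observed. */
static void
recording_error_cb(const char *sqlstate, const char *message)
{
    snprintf(last_sqlstate, sizeof(last_sqlstate), "%s", sqlstate);
    snprintf(last_message, sizeof(last_message), "%s", message);
}

/* Stand-in for XLogRead(): on a short read it builds the message and
 * reports through the callback instead of ereport()/pg_log_fatal(). */
static int
mini_xlog_read(int bytes_read, int bytes_wanted, wal_read_error_cb report)
{
    if (bytes_read < bytes_wanted)
    {
        char msg[256];

        snprintf(msg, sizeof(msg),
                 "could not read from WAL segment: read %d of %d bytes",
                 bytes_read, bytes_wanted);
        report("XX000", msg);   /* sqlstate would be ignored by frontends */
        return -1;
    }
    return 0;
}
```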
{
"msg_contents": "On Mon, May 6, 2019 at 2:21 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-May-06, Robert Haas wrote:\n> > On Thu, May 2, 2019 at 12:18 PM Antonin Houska <ah@cybertec.at> wrote:\n> > > The next version of the patch is attached.\n> >\n> > I don't think any of this looks acceptable:\n>\n> I agree. I inteded to suggest upthread to pass an additional argument\n> to XLogRead, which is a function that takes a message string and\n> SQLSTATE; in backend, the function does errstart / errstate / errmsg /\n> errfinish, and in frontend programs it does pg_log_fatal (and ignores\n> sqlstate). The message must be sprintf'ed and translated by XLogRead.\n> (xlogreader.c could itself provide a default error reporting callback,\n> at least for frontend, to avoid repeating the code). That way, if a\n> different frontend program wants to do something different, it's fairly\n> easy to pass a different function pointer.\n\nIt seems to me that it's better to unwind the stack i.e. have the\nfunction return the error information to the caller and let the caller\ndo as it likes. The other thread to which Horiguchi-san referred\nearlier in this thread seems to me to have basically concluded that\nthe XLogPageReadCB callback to XLogReaderAllocate is a pain to use\nbecause it doesn't unwind the stack, and work is under way over there\nto get rid of that callback for just that reason. Adding a new\ncallback for error-reporting would just be creating a new instance of\nthe same issue.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 6 May 2019 15:58:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
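Robert's unwind-the-stack alternative, returning the error information to the caller instead of invoking a callback, might look like the following sketch. The names (MiniXLogReadError, mini_wal_read) are hypothetical; the point is only that the read routine fills in a caller-supplied error container and lets the caller choose ereport(), pg_log_fatal(), or anything else.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical error container returned to the caller on failure. */
typedef struct
{
    char message[256];  /* filled in on failure, empty on success */
} MiniXLogReadError;

/*
 * Stand-in for XLogRead(): returns false on failure and describes the
 * problem in *err, leaving the reporting policy entirely to the caller.
 */
static bool
mini_wal_read(int bytes_read, int bytes_wanted, MiniXLogReadError *err)
{
    if (bytes_read < bytes_wanted)
    {
        snprintf(err->message, sizeof(err->message),
                 "could not read WAL: read %d of %d bytes",
                 bytes_read, bytes_wanted);
        return false;
    }
    err->message[0] = '\0';
    return true;
}
```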
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> wrote:\n> It seems to me that it's better to unwind the stack i.e. have the\n> function return the error information to the caller and let the caller\n> do as it likes.\n\nThanks for a hint. The next version tries to do that.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Tue, 21 May 2019 11:11:37 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On Tue, May 21, 2019 at 9:12 PM Antonin Houska <ah@cybertec.at> wrote:\n> Robert Haas <robertmhaas@gmail.com> wrote:\n> > It seems to me that it's better to unwind the stack i.e. have the\n> > function return the error information to the caller and let the caller\n> > do as it likes.\n>\n> Thanks for a hint. The next version tries to do that.\n\nHi Antonin,\n\nCould you please send a fresh rebase for the new Commitfest?\n\nThanks,\n\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 1 Jul 2019 21:54:21 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Tue, May 21, 2019 at 9:12 PM Antonin Houska <ah@cybertec.at> wrote:\n> > Robert Haas <robertmhaas@gmail.com> wrote:\n> > > It seems to me that it's better to unwind the stack i.e. have the\n> > > function return the error information to the caller and let the caller\n> > > do as it likes.\n> >\n> > Thanks for a hint. The next version tries to do that.\n> \n> Hi Antonin,\n> \n> Could you please send a fresh rebase for the new Commitfest?\n\nRebased.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Tue, 09 Jul 2019 12:15:37 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Hi Antonin, could you please rebase again?\n\nThanks\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 3 Sep 2019 17:15:32 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Pushed 0001.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 3 Sep 2019 17:42:33 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> Hi Antonin, could you please rebase again?\n\nAttached.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Mon, 09 Sep 2019 12:20:30 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "I was confused by the struct name XLogSegment -- the struct is used to\nrepresent a WAL segment while it's kept open, rather than just a WAL\nsegment in abstract. Also, now that we've renamed everything to use the\nterm WAL, it seems wrong to use the name XLog for new structs. I\npropose the name WALOpenSegment for the struct, which solves both\nproblems. (Its initializer function would get the name\nWALOpenSegmentInit.)\n\nNow, the patch introduces a callback for XLogRead, the type of which is\ncalled XLogOpenSegment. If we rename it from XLog to WAL, both names\nend up the same. I propose to rename the function type to\nWALSegmentOpen, which in a \"noun-verb\" view of the world, represents the\naction of opening a WAL segment.\n\nI attach a patch for all this renaming, on top of your series.\n\nI wonder if each of those WALSegmentOpen callbacks should reset [at\nleast some members of] the struct; they're already in charge of setting\n->file, and apparently we're leaving the responsibility of setting the\nrest of the members to XLogRead. That seems weird. Maybe we should say\nthat the CB should only open the segment and not touch the struct at all\nand XLogRead is in charge of everything. Perhaps the other way around\n-- the CB should set everything correctly ... I'm not sure which is\nbest. But having half here and half there seems a recipe for confusion\nand bugs.\n\n\nAnother thing I didn't like much is that everything seems to assume that\nthe only error possible from XLogRead is a read error. Maybe that's\nokay, because it seems to be the current reality, but it seemed odd.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 17 Sep 2019 19:15:21 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
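The renaming proposed above can be sketched as below. The ws_* field names follow the convention that emerged in this thread, but the exact shapes, the callback parameter list, and the initializer are illustrative assumptions, not the committed code.

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-ins for the PostgreSQL typedefs. */
typedef uint32_t TimeLineID;
typedef uint64_t XLogSegNo;

/* Sketch of the renamed struct: state of the one currently open segment. */
typedef struct WALOpenSegment
{
    int        ws_file;    /* segment file descriptor */
    XLogSegNo  ws_segno;   /* segment number */
    TimeLineID ws_tli;     /* timeline ID of the currently open file */
} WALOpenSegment;

/* "Noun-verb": the action of opening a WAL segment.  The precise
 * parameter list is an assumption of this sketch. */
typedef void (*WALSegmentOpen) (WALOpenSegment *seg, XLogSegNo nextSegNo,
                                TimeLineID *tli_p);

/* Initializer, per the proposed WALOpenSegmentInit name. */
static void
WALOpenSegmentInit(WALOpenSegment *seg)
{
    seg->ws_file = -1;     /* no file open yet */
    seg->ws_segno = 0;
    seg->ws_tli = 0;
}
```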
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> I was confused by the struct name XLogSegment -- the struct is used to\n> represent a WAL segment while it's kept open, rather than just a WAL\n> segment in abstract. Also, now that we've renamed everything to use the\n> term WAL, it seems wrong to use the name XLog for new structs. I\n> propose the name WALOpenSegment for the struct, which solves both\n> problems. (Its initializer function would get the name\n> WALOpenSegmentInit.)\n> \n> Now, the patch introduces a callback for XLogRead, the type of which is\n> called XLogOpenSegment. If we rename it from XLog to WAL, both names\n> end up the same. I propose to rename the function type to\n> WALSegmentOpen, which in a \"noun-verb\" view of the world, represents the\n> action of opening a WAL segment.\n> \n> I attach a patch for all this renaming, on top of your series.\n\nok, thanks.\n\nIn addition I renamed WalSndOpenSegment() to WalSndSegmentOpen() and\nread_local_xlog_page_open_segment() to read_local_xlog_page_segment_open().\n\n> I wonder if each of those WALSegmentOpen callbacks should reset [at\n> least some members of] the struct; they're already in charge of setting\n> ->file, and apparently we're leaving the responsibility of setting the\n> rest of the members to XLogRead. That seems weird. Maybe we should say\n> that the CB should only open the segment and not touch the struct at all\n> and XLogRead is in charge of everything. Perhaps the other way around\n> -- the CB should set everything correctly ... I'm not sure which is\n> best. But having half here and half there seems a recipe for confusion\n> and bugs.\n\nok, I've changed the CB signature. Now it receives pointers to the two\nvariables that it can change while the \"seg\" argument is documented as\nread-only. 
To indicate that the CB should determine timeline itself, I\nintroduced a new constant InvalidTimeLineID, see the 0004 part.\n\n> Another thing I didn't like much is that everything seems to assume that\n> the only error possible from XLogRead is a read error. Maybe that's\n> okay, because it seems to be the current reality, but it seemed odd.\n\nIn this case I only moved the ereport() code from XLogRead() away (so that\nboth backend and frontend can call the function). Given that the code to open\nWAL segment is now in the callbacks, the only thing that XLogRead() can\nereport is that read() failed. BTW, I introduced one more structure,\nXLogReadError, in this patch version. I think it's better than adding\nerror-specific fields to the WALOpenSegment structure.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Mon, 23 Sep 2019 12:44:31 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "I spent a couple of hours on this patchset today. I merged 0001 and\n0002, and decided the result was still messier than I would have liked,\nso I played with it a bit more -- see attached. I think this is\ncommittable, but I'm afraid it'll cause quite a few conflicts with the\nrest of your series.\n\nI had two gripes, which I feel solved with my changes:\n\n1. I didn't like that \"dir\" and \"wal segment size\" were part of the\n\"currently open segment\" supporting struct. It seemed that those two\nwere slightly higher-level, since they apply to every segment that's\ngoing to be opened, not just the current one.\n\nMy first thought was to put those as members of XLogReaderState, but\nthat doesn't work because the physical walsender.c code does not use\nxlogreader at all, even though it is reading WAL. Anyway my solution\nwas to create yet another struct, which for everything that uses\nxlogreader is just part of that state struct; and for walsender, it's\njust a separate one alongside sendSeg. All in all, this seems pretty\nclean.\n\n2. Having the wal dir be #ifdef FRONTEND seemed out of place. I know\nthe backend code does not use that, but eliding it is more \"noisy\" than\njust setting it to NULL. Also, the \"Finalize the segment pointer\"\nthingy seemed out of place. So my code passes the dir as an argument to\nXLogReaderAllocate, and if it's null then we just don't allocate it.\nEverybody else can use it to guide things. This results in cleaner\ncode, because we don't have to handle it externally, which was causing\nquite some pain to pg_waldump. \n\nNote that ws_dir member is a char array in the struct, not just a\npointer. This saves trouble trying to allocate it (I mainly did it this\nway because we don't have pstrdup_extended(MCXT_ALLOC_NO_OOM) ... 
yes,\nthis could be made with palloc+snprintf, but eh, that doesn't seem worth\nthe trouble.)\n\n\nSeparately from those two API-wise points, there was one bug which meant\nthat with your 0002+0003 the recovery tests did not pass -- code\nplacement bug. I suppose the bug disappears with later patches in your\nseries, which probably is why you didn't notice. This is the fix for that:\n\n- XLogRead(cur_page, state->seg.size, state->seg.tli, targetPagePtr,\n- state->seg.tli = pageTLI;\n+ state->seg.ws_tli = pageTLI;\n+ XLogRead(cur_page, state->segcxt.ws_segsize, state->seg.ws_tli, targetPagePtr,\n XLOG_BLCKSZ);\n\n\n... Also, yes, I renamed all the struct members.\n\n\nIf you don't have any strong dislikes for these changes, I'll push this\npart and let you rebase the remains on top.\n\n\nRegarding the other patches:\n\n1. I think trying to do palloc(XLogReadError) is a bad idea ... for\nexample, if the read fails because of system pressure, we might return\n\"out of memory\" during that palloc instead of the real read error. This\nparticular problem you could forestall by changing to ErrorContext, but\nI have the impression that it might be better to have the error struct\nbe stack-allocated in the caller stack. This forces you to limit the\nmessage string to a maximum size (say 128 bytes or maybe even 1000 bytes\nlike MAX_ERRORMSG_LEN) but I don't have a problem with that.\n\n2. Not a fan of the InvalidTimeLineID stuff offhand. Maybe it's okay ...\nnot convinced yet either way.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 23 Sep 2019 19:00:10 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
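The "segment context" split described in gripe 1, with ws_dir as a char array inside the struct rather than a separate allocation, might look like this minimal sketch. WALSegmentContextInit is a hypothetical initializer standing in for the dir argument to XLogReaderAllocate; a NULL dir models the backend case that does not use a directory at all.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAXPGPATH 1024

/* Settings that apply to every segment the reader will open, kept apart
 * from the currently-open-segment state.  ws_dir is an in-struct array,
 * which sidesteps the pstrdup/no-OOM allocation issue noted above. */
typedef struct WALSegmentContext
{
    char ws_dir[MAXPGPATH];  /* directory; empty if unused (backend) */
    int  ws_segsize;         /* segment size, same for all segments */
} WALSegmentContext;

/* NULL dir means the caller does not read segments from a directory. */
static bool
WALSegmentContextInit(WALSegmentContext *segcxt, const char *dir, int segsize)
{
    if (dir != NULL && strlen(dir) >= MAXPGPATH)
        return false;                     /* path too long to store */
    segcxt->ws_dir[0] = '\0';
    if (dir != NULL)
        snprintf(segcxt->ws_dir, MAXPGPATH, "%s", dir);
    segcxt->ws_segsize = segsize;
    return true;
}
```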
{
"msg_contents": "On 2019-Sep-23, Alvaro Herrera wrote:\n\n> I spent a couple of hours on this patchset today. I merged 0001 and\n> 0002, and decided the result was still messier than I would have liked,\n> so I played with it a bit more -- see attached.\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 23 Sep 2019 19:00:38 -0300",
"msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> I spent a couple of hours on this patchset today. I merged 0001 and\n> 0002, and decided the result was still messier than I would have liked,\n> so I played with it a bit more -- see attached. I think this is\n> committable, but I'm afraid it'll cause quite a few conflicts with the\n> rest of your series.\n> \n> I had two gripes, which I feel solved with my changes:\n> \n> 1. I didn't like that \"dir\" and \"wal segment size\" were part of the\n> \"currently open segment\" supporting struct. It seemed that those two\n> were slightly higher-level, since they apply to every segment that's\n> going to be opened, not just the current one.\n\nok\n\n> My first thought was to put those as members of XLogReaderState, but\n> that doesn't work because the physical walsender.c code does not use\n> xlogreader at all, even though it is reading WAL.\n\nI don't remember clearly but I think that this was the reason I tried to move\n\"wal_segment_size\" away from XLogReaderState.\n\n \n> Separately from those two API-wise points, there was one bug which meant\n> that with your 0002+0003 the recovery tests did not pass -- code\n> placement bug. I suppose the bug disappears with later patches in your\n> series, which probably is why you didn't notice. This is the fix for that:\n> \n> - XLogRead(cur_page, state->seg.size, state->seg.tli, targetPagePtr,\n> - state->seg.tli = pageTLI;\n> + state->seg.ws_tli = pageTLI;\n> + XLogRead(cur_page, state->segcxt.ws_segsize, state->seg.ws_tli, targetPagePtr,\n> XLOG_BLCKSZ);\n> \n\nYes, it seems so - the following parts ensure that XLogRead() adjusts the\ntimeline itself. I only checked that each part of the series keeps the\nsource tree compilable. Thanks for fixing.\n\n> ... 
Also, yes, I renamed all the struct members.\n>\n> \n> If you don't have any strong dislikes for these changes, I'll push this\n> part and let you rebase the remains on top.\n\nNo objections here.\n\n> 2. Not a fan of the InvalidTimeLineID stuff offhand. Maybe it's okay ...\n> not convinced yet either way.\n\nWell, it seems that the individual callbacks only use this constant in\nAssert() statements. I'll consider if we really need it. The argument value\nshould not determine whether the callback derives the TLI or not.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Tue, 24 Sep 2019 17:33:24 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On 2019-Sep-24, Antonin Houska wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> > If you don't have any strong dislikes for these changes, I'll push this\n> > part and let you rebase the remains on top.\n> \n> No objections here.\n\noK, pushed. Please rebase the other parts.\n\nI made one small adjustment: in read_local_xlog_page() there was one\n*readTLI output parameter that was being changed to a local variable\nplus later assigment to the output struct member; I changed the code to\ncontinue to assign directly to the output variable instead. There was\nan error case in which the TLI was not assigned to; I suppose this\ndoesn't really change things (we don't examine the TLI in that case, do\nwe?), but it seemed dangerous to leave like that.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 24 Sep 2019 16:50:51 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> On 2019-Sep-24, Antonin Houska wrote:\n> \n> > Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> > > If you don't have any strong dislikes for these changes, I'll push this\n> > > part and let you rebase the remains on top.\n> > \n> > No objections here.\n> \n> oK, pushed. Please rebase the other parts.\n\nThanks!\n\n> I made one small adjustment: in read_local_xlog_page() there was one\n> *readTLI output parameter that was being changed to a local variable\n> plus later assigment to the output struct member; I changed the code to\n> continue to assign directly to the output variable instead. There was\n> an error case in which the TLI was not assigned to; I suppose this\n> doesn't really change things (we don't examine the TLI in that case, do\n> we?), but it seemed dangerous to leave like that.\n\nI used the local variable to make some expressions simpler, but missed the\nfact that this way I can leave the ws_tli field unassigned if the function\nreturns prematurely. Now that I look closer, I see that it can be a problem -\nin the case of ERROR, XLogReadRecord() does reset the state, but it does not\nreset the TLI:\n\nerr:\n\t/*\n\t * Invalidate the read state. We might read from a different source after\n\t * failure.\n\t */\n\tXLogReaderInvalReadState(state);\n\nThus the TLI appears to be important even on ERROR, and what you've done is\ncorrect. Thanks for fixing that.\n\nOne comment on the remaining part of the series:\n\nBefore this refactoring, the walsender.c:XLogRead() function contained these\nlines\n\n /*\n * After reading into the buffer, check that what we read was valid. We do\n * this after reading, because even though the segment was present when we\n * opened it, it might get recycled or removed while we read it. 
The\n * read() succeeds in that case, but the data we tried to read might\n * already have been overwritten with new WAL records.\n */\n XLByteToSeg(startptr, segno, segcxt->ws_segsize);\n CheckXLogRemoved(segno, ThisTimeLineID);\n\nbut they don't fit into the new, generic implementation, so I copied these\nlines to the two places right after the call of the new XLogRead(). However I\nwas not sure if ThisTimeLineID was ever correct here. It seems the original\nwalsender.c:XLogRead() implementation did not update ThisTimeLineID (and\ntherefore neither the new callback WalSndSegmentOpen() does), so both\nlogical_read_xlog_page() and XLogSendPhysical() could read the data from\nanother (historic) timeline. I think we should check the segment we really\nread data from:\n\n\tCheckXLogRemoved(segno, sendSeg->ws_tli);\n\nThe rebased code is attached.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Thu, 26 Sep 2019 14:08:33 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On 2019-Sep-26, Antonin Houska wrote:\n\n> One comment on the remaining part of the series:\n> \n> Before this refactoring, the walsender.c:XLogRead() function contained these\n> lines\n> \n> /*\n> * After reading into the buffer, check that what we read was valid. We do\n> * this after reading, because even though the segment was present when we\n> * opened it, it might get recycled or removed while we read it. The\n> * read() succeeds in that case, but the data we tried to read might\n> * already have been overwritten with new WAL records.\n> */\n> XLByteToSeg(startptr, segno, segcxt->ws_segsize);\n> CheckXLogRemoved(segno, ThisTimeLineID);\n> \n> but they don't fit into the new, generic implementation, so I copied these\n> lines to the two places right after the call of the new XLogRead(). However I\n> was not sure if ThisTimeLineID was ever correct here. It seems the original\n> walsender.c:XLogRead() implementation did not update ThisTimeLineID (and\n> therefore neither the new callback WalSndSegmentOpen() does), so both\n> logical_read_xlog_page() and XLogSendPhysical() could read the data from\n> another (historic) timeline. I think we should check the segment we really\n> read data from:\n> \n> \tCheckXLogRemoved(segno, sendSeg->ws_tli);\n\nHmm, okay. I hope we can get rid of ThisTimeLineID one day.\n\nYou placed the errinfo in XLogRead's stack rather than its callers' ...\nI don't think that works, because as soon as XLogRead returns that\nmemory is no longer guaranteed to exist. You need to allocate the\nstruct in the callers stacks and pass its address to XLogRead. XLogRead\ncan return NULL if everything's okay or the pointer to the errinfo\nstruct.\n\nI've been wondering if it's really necessary to pass 'seg' to the\nopenSegment() callback. Only walsender wants that, and it seems ...\nweird. 
Maybe that's not something for this patch series to fix, but it\nwould be good to find a more decent way to do the TLI switch at some\npoint.\n\n> +\t\t\t/*\n> +\t\t\t * If the function is called by the XLOG reader, the reader will\n> +\t\t\t * eventually set both \"ws_segno\" and \"ws_off\", however the XLOG\n> +\t\t\t * reader is not necessarily involved. Furthermore, we need to set\n> +\t\t\t * the current values for this function to work.\n> +\t\t\t */\n> +\t\t\tseg->ws_segno = nextSegNo;\n> +\t\t\tseg->ws_off = 0;\n\nWhy do we leave this responsibility to ReadPageInternal? Wouldn't it\nmake more sense to leave XLogRead be always responsible for setting\nthese correctly, and remove those lines from ReadPageInternal? (BTW \"is\ncalled by the XLOG reader\" is a bit strange in code that appears in\nxlogreader.c).\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Sep 2019 10:22:10 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> On 2019-Sep-26, Antonin Houska wrote:\n\n> You placed the errinfo in XLogRead's stack rather than its callers' ...\n> I don't think that works, because as soon as XLogRead returns that\n> memory is no longer guaranteed to exist.\n\nI was aware of this problem, therefore I defined the field as static:\n\n+XLogReadError *\n+XLogRead(char *buf, XLogRecPtr startptr, Size count, TimeLineID *tli_p,\n+ WALOpenSegment *seg, WALSegmentContext *segcxt,\n+ WALSegmentOpen openSegment)\n+{\n+ char *p;\n+ XLogRecPtr recptr;\n+ Size nbytes;\n+ static XLogReadError errinfo;\n\n> You need to allocate the struct in the callers stacks and pass its address\n> to XLogRead. XLogRead can return NULL if everything's okay or the pointer\n> to the errinfo struct.\n\nI didn't choose this approach because that would add one more argument to the\nfunction.\n\n> I've been wondering if it's really necessary to pass 'seg' to the\n> openSegment() callback. Only walsender wants that, and it seems ...\n> weird. Maybe that's not something for this patch series to fix, but it\n> would be good to find a more decent way to do the TLI switch at some\n> point.\n\nGood point. Since walsender.c already has the \"sendSeg\" global variable, maybe\nwe can let WalSndSegmentOpen() use this one, and remove the \"seg\" argument\nfrom the callback.\n\n> > +\t\t\t/*\n> > +\t\t\t * If the function is called by the XLOG reader, the reader will\n> > +\t\t\t * eventually set both \"ws_segno\" and \"ws_off\", however the XLOG\n> > +\t\t\t * reader is not necessarily involved. Furthermore, we need to set\n> > +\t\t\t * the current values for this function to work.\n> > +\t\t\t */\n> > +\t\t\tseg->ws_segno = nextSegNo;\n> > +\t\t\tseg->ws_off = 0;\n> \n> Why do we leave this responsibility to ReadPageInternal? 
Wouldn't it\n> make more sense to leave XLogRead be always responsible for setting\n> these correctly, and remove those lines from ReadPageInternal?\n\nI think there's no rule that ReadPageInternal() must use XLogRead(). If we do\nwhat you suggest, we need to make this responsibility documented. I'll consider\nthat.\n\n> (BTW \"is called by the XLOG reader\" is a bit strange in code that appears in\n> xlogreader.c).\n\nok, \"called by XLogPageReadCB callback\" would be more accurate. Not sure if\nwe'll eventually need this phrase in the comment at all.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Fri, 27 Sep 2019 19:54:25 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On 2019-Sep-27, Antonin Houska wrote:\n\n> > You placed the errinfo in XLogRead's stack rather than its callers' ...\n> > I don't think that works, because as soon as XLogRead returns that\n> > memory is no longer guaranteed to exist.\n> \n> I was aware of this problem, therefore I defined the field as static:\n> \n> +XLogReadError *\n> +XLogRead(char *buf, XLogRecPtr startptr, Size count, TimeLineID *tli_p,\n> + WALOpenSegment *seg, WALSegmentContext *segcxt,\n> + WALSegmentOpen openSegment)\n> +{\n> + char *p;\n> + XLogRecPtr recptr;\n> + Size nbytes;\n> + static XLogReadError errinfo;\n\nI see.\n\n> > You need to allocate the struct in the callers stacks and pass its address\n> > to XLogRead. XLogRead can return NULL if everything's okay or the pointer\n> > to the errinfo struct.\n> \n> I didn't choose this approach because that would add one more argument to the\n> function.\n\nYeah, the signature does seem a bit unwieldy. But I wonder if that's\ntoo terrible a problem, considering that this code is incurring a bunch\nof syscalls in the best case anyway.\n\nBTW that tli_p business to the openSegment callback is horribly\ninconsistent. Some callers accept a NULL tli_p, others will outright\ncrash, even though the API docs say that the callback must determine the\ntimeline. This is made more complicated by us having the TLI in \"seg\"\nalso. Unless I misread, the problem is again that the walsender code is\ndoing nasty stuff with globals (endSegNo). As a very minor stylistic\npoint, we prefer to have out params at the end of the signature.\n\n> > Why do we leave this responsibility to ReadPageInternal? Wouldn't it\n> > make more sense to leave XLogRead be always responsible for setting\n> > these correctly, and remove those lines from ReadPageInternal?\n> \n> I think there's no rule that ReadPageInternal() must use XLogRead(). If we do\n> what you suggest, we need make this responsibility documented. I'll consider\n> that.\n\nHmm. 
Thanks.\n\n> > (BTW \"is called by the XLOG reader\" is a bit strange in code that appears in\n> > xlogreader.c).\n> \n> ok, \"called by XLogPageReadCB callback\" would be more accurate. Not sure if\n> we'll eventually need this phrase in the comment at all.\n\nI think that would be slightly clearer. But if we can force this code\ninto actually making sense, that would be much better.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Sep 2019 16:17:36 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Sep-27, Antonin Houska wrote:\n>>> You placed the errinfo in XLogRead's stack rather than its callers' ...\n>>> I don't think that works, because as soon as XLogRead returns that\n>>> memory is no longer guaranteed to exist.\n\n>> I was aware of this problem, therefore I defined the field as static:\n>> \n>> +XLogReadError *\n>> +XLogRead(char *buf, XLogRecPtr startptr, Size count, TimeLineID *tli_p,\n>> + WALOpenSegment *seg, WALSegmentContext *segcxt,\n>> + WALSegmentOpen openSegment)\n>> +{\n>> + char *p;\n>> + XLogRecPtr recptr;\n>> + Size nbytes;\n>> + static XLogReadError errinfo;\n\n> I see.\n\nThat seems like an absolutely terrible \"fix\". We don't really want\nXLogRead to be defined in a way that forces it to be non-reentrant do we?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 Sep 2019 15:28:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> BTW that tli_p business to the openSegment callback is horribly\n> inconsistent. Some callers accept a NULL tli_p, others will outright\n> crash, even though the API docs say that the callback must determine the\n> timeline. This is made more complicated by us having the TLI in \"seg\"\n> also. Unless I misread, the problem is again that the walsender code is\n> doing nasty stuff with globals (endSegNo). As a very minor stylistic\n> point, we prefer to have out params at the end of the signature.\n\nXLogRead() tests for NULL so it should not crash but I don't insist on doing\nit this way. XLogRead() actually does not have to care whether the \"open\nsegment callback\" determines the TLI or not, so it (XLogRead) can always\nreceive a valid pointer to seg.ws_tli. However that in turn implies that\nXLogRead() does not need the \"tli\" argument at all.\n\n> > > Why do we leave this responsibility to ReadPageInternal? Wouldn't it\n> > > make more sense to leave XLogRead be always responsible for setting\n> > > these correctly, and remove those lines from ReadPageInternal?\n> > \n> > I think there's no rule that ReadPageInternal() must use XLogRead(). If we do\n> > what you suggest, we need make this responsibility documented. I'll consider\n> > that.\n\nI think now we should not add any responsibility to XLogPageReadCB or its\nsubroutines because some extensions might already have their implementation of\nXLogPageReadCB w/o XLogRead, and this change would break them.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Sat, 28 Sep 2019 10:14:25 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2019-Sep-27, Antonin Houska wrote:\n> >>> You placed the errinfo in XLogRead's stack rather than its callers' ...\n> >>> I don't think that works, because as soon as XLogRead returns that\n> >>> memory is no longer guaranteed to exist.\n> \n> >> I was aware of this problem, therefore I defined the field as static:\n> >> \n> >> +XLogReadError *\n> >> +XLogRead(char *buf, XLogRecPtr startptr, Size count, TimeLineID *tli_p,\n> >> + WALOpenSegment *seg, WALSegmentContext *segcxt,\n> >> + WALSegmentOpen openSegment)\n> >> +{\n> >> + char *p;\n> >> + XLogRecPtr recptr;\n> >> + Size nbytes;\n> >> + static XLogReadError errinfo;\n> \n> > I see.\n> \n> That seems like an absolutely terrible \"fix\". We don't really want\n> XLogRead to be defined in a way that forces it to be non-reentrant do we?\n\nGood point. I forgot that the XLOG reader can be used by frontends, so thread\nsafety is important here.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Sat, 28 Sep 2019 10:15:56 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Antonin Houska <ah@cybertec.at> wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> > BTW that tli_p business to the openSegment callback is horribly\n> > inconsistent. Some callers accept a NULL tli_p, others will outright\n> > crash, even though the API docs say that the callback must determine the\n> > timeline. This is made more complicated by us having the TLI in \"seg\"\n> > also. Unless I misread, the problem is again that the walsender code is\n> > doing nasty stuff with globals (endSegNo). As a very minor stylistic\n> > point, we prefer to have out params at the end of the signature.\n> \n> XLogRead() tests for NULL so it should not crash but I don't insist on doing\n> it this way. XLogRead() actually does not have to care whether the \"open\n> segment callback\" determines the TLI or not, so it (XLogRead) can always\n> receive a valid pointer to seg.ws_tli.\n\nThis is actually wrong - seg.ws_tli is not always the correct value to\npass. If seg.ws_tli refers to the segment from which data was read last time,\nthen XLogRead() still needs a separate argument to specify from which TLI the\ncurrent call should read. If these two differ, new file needs to be opened.\n\nThe problem of walsender.c is that its implementation of XLogRead() does not\ncare about the TLI of the previous read. If the behavior of the new, generic\nimplementation should be exactly the same, we need to tell XLogRead() that in\nsome cases it also should not compare the current TLI to the previous\none. That's why I tried to use the NULL pointer, or the InvalidTimeLineID\nearlier.\n\nAnother approach is to add a boolean argument \"check_tli\", but that still\nforces caller to pass some (random) value of the tli. The concept of\nInvalidTimeLineID seems to me less disturbing than this.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Sat, 28 Sep 2019 15:00:35 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Hello,\n\nAt Sat, 28 Sep 2019 15:00:35 +0200, Antonin Houska <ah@cybertec.at> wrote in <9236.1569675635@antos>\n> Antonin Houska <ah@cybertec.at> wrote:\n> \n> > Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > \n> > > BTW that tli_p business to the openSegment callback is horribly\n> > > inconsistent. Some callers accept a NULL tli_p, others will outright\n> > > crash, even though the API docs say that the callback must determine the\n> > > timeline. This is made more complicated by us having the TLI in \"seg\"\n> > > also. Unless I misread, the problem is again that the walsender code is\n> > > doing nasty stuff with globals (endSegNo). As a very minor stylistic\n> > > point, we prefer to have out params at the end of the signature.\n> > \n> > XLogRead() tests for NULL so it should not crash but I don't insist on doing\n> > it this way. XLogRead() actually does not have to care whether the \"open\n> > segment callback\" determines the TLI or not, so it (XLogRead) can always\n> > receive a valid pointer to seg.ws_tli.\n> \n> This is actually wrong - seg.ws_tli is not always the correct value to\n> pass. If seg.ws_tli refers to the segment from which data was read last time,\n> then XLogRead() still needs a separate argument to specify from which TLI the\n> current call should read. If these two differ, new file needs to be opened.\n\nopenSegment represents the file *currently* opened. XLogRead\nneeds the TLI *to be* opened. If they are different, as far as\nwal logical wal sender and pg_waldump is concerned, XLogRead\nswitches to the new TLI and the new TLI is set to\nopenSegment.ws_tli. So, it seems to me that the parameter\ndoesn't need to be inout? It is enough that it is an \"in\"\nparameter.\n\n> The problem of walsender.c is that its implementation of XLogRead() does not\n> care about the TLI of the previous read. 
If the behavior of the new, generic\n> implementation should be exactly the same, we need to tell XLogRead() that in\n> some cases it also should not compare the current TLI to the previous\n> one. That's why I tried to use the NULL pointer, or the InvalidTimeLineID\n> earlier.\n\nPhysical wal sender doesn't switch TLI. So I don't think the\nbehavior does any harm (or ever fires). openSegment holds the\nTLI set at the first call. (Even if a future wal sender switched\nTLI, the behavior would still be needed.)\n\n> Another approach is to add a boolean argument \"check_tli\", but that still\n> forces caller to pass some (random) value of the tli. The concept of\n> InvalidTimeLineID seems to me less disturbing than this.\n\nSo I think InvalidTimeLineID is not needed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 01 Oct 2019 12:22:27 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> At Sat, 28 Sep 2019 15:00:35 +0200, Antonin Houska <ah@cybertec.at> wrote in <9236.1569675635@antos>\n> > Antonin Houska <ah@cybertec.at> wrote:\n> > \n> > > Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > > \n> > > > BTW that tli_p business to the openSegment callback is horribly\n> > > > inconsistent. Some callers accept a NULL tli_p, others will outright\n> > > > crash, even though the API docs say that the callback must determine the\n> > > > timeline. This is made more complicated by us having the TLI in \"seg\"\n> > > > also. Unless I misread, the problem is again that the walsender code is\n> > > > doing nasty stuff with globals (endSegNo). As a very minor stylistic\n> > > > point, we prefer to have out params at the end of the signature.\n> > > \n> > > XLogRead() tests for NULL so it should not crash but I don't insist on doing\n> > > it this way. XLogRead() actually does not have to care whether the \"open\n> > > segment callback\" determines the TLI or not, so it (XLogRead) can always\n> > > receive a valid pointer to seg.ws_tli.\n> > \n> > This is actually wrong - seg.ws_tli is not always the correct value to\n> > pass. If seg.ws_tli refers to the segment from which data was read last time,\n> > then XLogRead() still needs a separate argument to specify from which TLI the\n> > current call should read. If these two differ, new file needs to be opened.\n> \n> openSegment represents the file *currently* opened.\n\nI suppose you mean the \"seg\" argument.\n\n> XLogRead needs the TLI *to be* opened. If they are different, as far as wal\n> logical wal sender and pg_waldump is concerned, XLogRead switches to the new\n> TLI and the new TLI is set to openSegment.ws_tli.\n\nYes, it works in these cases.\n\n> So, it seems to me that the parameter doesn't need to be inout? 
It is enough\n> that it is an \"in\" parameter.\n\nI did consider \"TimeLineID *tli_p\" to be \"in\" parameter in the last patch\nversion. The reason I used pointer was the special meaning of the NULL value:\nif NULL is passed, then the timeline should be ignored (because of the other\ncases, see below).\n\n> > The problem of walsender.c is that its implementation of XLogRead() does not\n> > care about the TLI of the previous read. If the behavior of the new, generic\n> > implementation should be exactly the same, we need to tell XLogRead() that in\n> > some cases it also should not compare the current TLI to the previous\n> > one. That's why I tried to use the NULL pointer, or the InvalidTimeLineID\n> > earlier.\n> \n> Physical wal sender doesn't switch TLI. So I don't think the\n> behavior doesn't harm (or doesn't fire). openSegment holds the\n> TLI set at the first call. (Even if future wal sender switches\n> TLI, the behavior should be needed.)\n\nNote that walsender.c:XLogRead() has no TLI argument, however the XLogRead()\nintroduced by the patch does have one. What should be passed for TLI to the\nnew implementation if it's called from walsender.c? If the check for a segment\nchange looks like this (here \"tli\" is the argument representing the desired\nTLI)\n\n\tif (seg->ws_file < 0 ||\n\t\t!XLByteInSeg(recptr, seg->ws_segno, segcxt->ws_segsize) ||\n\t\ttli != seg->ws_tli)\n\t{\n\t\tXLogSegNo\tnextSegNo;\n\n\t\t/* Switch to another logfile segment */\n\t\tif (seg->ws_file >= 0)\n\t\t\tclose(seg->ws_file);\n\nthen any valid TLI can result in accidental closing of the current segment\nfile. Since this is only refactoring patch, we should not allow such a change\nof behavior even if it seems that the same segment will be reopened\nimmediately.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Tue, 01 Oct 2019 08:28:03 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "At Tue, 01 Oct 2019 08:28:03 +0200, Antonin Houska <ah@cybertec.at> wrote in <2188.1569911283@antos>\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > > > XLogRead() tests for NULL so it should not crash but I don't insist on doing\n> > > > it this way. XLogRead() actually does not have to care whether the \"open\n> > > > segment callback\" determines the TLI or not, so it (XLogRead) can always\n> > > > receive a valid pointer to seg.ws_tli.\n> > > \n> > > This is actually wrong - seg.ws_tli is not always the correct value to\n> > > pass. If seg.ws_tli refers to the segment from which data was read last time,\n> > > then XLogRead() still needs a separate argument to specify from which TLI the\n> > > current call should read. If these two differ, new file needs to be opened.\n> > \n> > openSegment represents the file *currently* opened.\n> \n> I suppose you mean the \"seg\" argument.\n> \n> > XLogRead needs the TLI *to be* opened. If they are different, as far as wal\n> > logical wal sender and pg_waldump is concerned, XLogRead switches to the new\n> > TLI and the new TLI is set to openSegment.ws_tli.\n> \n> Yes, it works in these cases.\n> \n> > So, it seems to me that the parameter doesn't need to be inout? It is enough\n> > that it is an \"in\" parameter.\n> \n> I did consider \"TimeLineID *tli_p\" to be \"in\" parameter in the last patch\n> version. The reason I used pointer was the special meaning of the NULL value:\n> if NULL is passed, then the timeline should be ignored (because of the other\n> cases, see below).\n\nUnderstood.\n\n> > > The problem of walsender.c is that its implementation of XLogRead() does not\n> > > care about the TLI of the previous read. If the behavior of the new, generic\n> > > implementation should be exactly the same, we need to tell XLogRead() that in\n> > > some cases it also should not compare the current TLI to the previous\n> > > one. 
That's why I tried to use the NULL pointer, or the InvalidTimeLineID\n> > > earlier.\n> > \n> > Physical wal sender doesn't switch TLI. So I don't think the\n> > behavior doesn't harm (or doesn't fire). openSegment holds the\n> > TLI set at the first call. (Even if future wal sender switches\n> > TLI, the behavior should be needed.)\n> \n> Note that walsender.c:XLogRead() has no TLI argument, however the XLogRead()\n> introduced by the patch does have one. What should be passed for TLI to the\n> new implementation if it's called from walsender.c? If the check for a segment\n> change looks like this (here \"tli\" is the argument representing the desired\n> TLI)\n\nTLI is mandatory to generate a wal file name so it must be passed\nto the function anyways. In the current code it is sendTimeLine\nfor the walsender.c:XLogRead(). logical_read_xlog_page sets the\nvariable every time immediately before calling\nXLogRead(). CreateReplicationSlot and StartReplication set the\nvariable to desired TLI immediately before calling and once it is\nset by StartReplication, it is not changed by XLogSendPhysical\nand wal sender ends at the end of the current timeline. In the\nXLogRead, the value is copied to sendSeg->ws_tli when the file\nfor the new timeline is read.\n\n> \tif (seg->ws_file < 0 ||\n> \t\t!XLByteInSeg(recptr, seg->ws_segno, segcxt->ws_segsize) ||\n> \t\ttli != seg->ws_tli)\n> \t{\n> \t\tXLogSegNo\tnextSegNo;\n> \n> \t\t/* Switch to another logfile segment */\n> \t\tif (seg->ws_file >= 0)\n> \t\t\tclose(seg->ws_file);\n> \n> then any valid TLI can result in accidental closing of the current segment\n> file. Since this is only refactoring patch, we should not allow such a change\n> of behavior even if it seems that the same segment will be reopened\n> immediately.\n\nMmm. ws_file must be -1 in the case? tli != seg->ws_tli is true\nbut seg->ws_file < 0 is also always true at the time. 
In other\nwords, the \"tli != seg->ws_tli\" is not even evaluated.\n\nIf wal sender had an open file (ws_file >= 0) and the new TLI is\ndifferent from ws_tli, it would be the sign of a serious bug.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 01 Oct 2019 17:48:22 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> At Tue, 01 Oct 2019 08:28:03 +0200, Antonin Houska <ah@cybertec.at> wrote in <2188.1569911283@antos>\n> > Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> > > > The problem of walsender.c is that its implementation of XLogRead() does not\n> > > > care about the TLI of the previous read. If the behavior of the new, generic\n> > > > implementation should be exactly the same, we need to tell XLogRead() that in\n> > > > some cases it also should not compare the current TLI to the previous\n> > > > one. That's why I tried to use the NULL pointer, or the InvalidTimeLineID\n> > > > earlier.\n> > > \n> > > Physical wal sender doesn't switch TLI. So I don't think the\n> > > behavior doesn't harm (or doesn't fire). openSegment holds the\n> > > TLI set at the first call. (Even if future wal sender switches\n> > > TLI, the behavior should be needed.)\n> > \n> > Note that walsender.c:XLogRead() has no TLI argument, however the XLogRead()\n> > introduced by the patch does have one. What should be passed for TLI to the\n> > new implementation if it's called from walsender.c? I f the check for a segment\n> > change looks like this (here \"tli\" is the argument representing the desired\n> > TLI)\n> \n> TLI is mandatory to generate a wal file name so it must be passed\n> to the function anyways. In the current code it is sendTimeLine\n> for the walsender.c:XLogRead(). logical_read_xlog_page sets the\n> variable very time immediately before calling\n> XLogRead(). CreateReplicationSlot and StartReplication set the\n> variable to desired TLI immediately before calling and once it is\n> set by StartReplication, it is not changed by XLogSendPhysical\n> and wal sender ends at the end of the current timeline. In the\n> XLogRead, the value is copied to sendSeg->ws_tli when the file\n> for the new timeline is read.\n\nAre you saying that we should pass sendTimeLine to XLogRead()? 
I think it's\nnot always correct because sendSeg->ws_tli is sometimes assigned\nsendTimeLineNextTLI, so the test \"tli != seg->ws_tli\" in\n\n> > \tif (seg->ws_file < 0 ||\n> > \t\t!XLByteInSeg(recptr, seg->ws_segno, segcxt->ws_segsize) ||\n> > \t\ttli != seg->ws_tli)\n> > \t{\n> > \t\tXLogSegNo\tnextSegNo;\n\ncould pass occasionally.\n\n> Mmm. ws_file must be -1 in the case? tli != seg->ws_tli is true\n> but seg->ws_file < 0 is also always true at the time. In other\n> words, the \"tli != seg->ws_tli\" is not even evaluated.\n> \n> If wal sender had an open file (ws_file >= 0) and the new TLI is\n> different from ws_tli, it would be the sign of a serious bug.\n\nSo we can probably pass ws_tli as the \"new TLI\" when calling the new\nXLogRead() from walsender.c. Is that what you try to say? I need to think\nabout it more but it sounds like a good idea.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Wed, 02 Oct 2019 09:16:10 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
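The segment-switch test being debated in the message above can be sketched as a small pure function. All names below are hypothetical stand-ins for the patch's symbols (`ws_file`, `ws_segno`, `ws_tli` mirror the `WALOpenSegment` fields under discussion): a new segment must be opened when no file descriptor is cached, when the requested LSN falls outside the cached segment, or when the caller's desired timeline differs from the cached one.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch only; seg_size plays the role of segcxt->ws_segsize. */
static inline uint64_t
xlog_byte_to_seg(uint64_t recptr, uint64_t seg_size)
{
    return recptr / seg_size;
}

/*
 * Reopen when no file is cached, when the LSN left the cached segment,
 * or when the desired timeline changed -- the third condition is the
 * one whose walsender semantics are being discussed above.
 */
static inline bool
need_new_segment(int ws_file, uint64_t ws_segno, uint32_t ws_tli,
                 uint64_t recptr, uint32_t tli, uint64_t seg_size)
{
    return ws_file < 0 ||
        xlog_byte_to_seg(recptr, seg_size) != ws_segno ||
        tli != ws_tli;
}
```

As Kyotaro notes, when the timeline changes the file is normally already closed, so in practice the TLI comparison only fires together with `ws_file < 0`.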
{
"msg_contents": "This is the next version.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Fri, 04 Oct 2019 12:11:11 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On Fri, Oct 04, 2019 at 12:11:11PM +0200, Antonin Houska wrote:\n> This is the next version.\n\nSo... These are the two last bits to look at, reducing a bit the code\nsize:\n5 files changed, 396 insertions(+), 419 deletions(-)\n\nAnd what this patch set does is to refactor the routines we use now in\nxlogreader.c to read a page by having new callbacks to open a segment,\nas that's basically the only difference between the context of a WAL\nsender, pg_waldump and recovery.\n\nHere are some comments reading through the code.\n\n+ * Note that XLogRead(), if used, should have updated the \"seg\" too for\n+ * its own reasons, however we cannot rely on ->read_page() to call\n+ * XLogRead().\nWhy?\n\nYour patch removes all the three optional lseek() calls which can\nhappen in a segment. Am I missing something but isn't that plain\nwrong? You could reuse the error context for that as well if an error\nhappens as what's needed is basically the segment name and the LSN\noffset.\n\nAll the callers of XLogReadProcessError() are in src/backend/, so it\nseems to me that there is no point to keep that in xlogreader.c but it\nshould be instead in xlogutils.c, no? It seems to me that this is\nmore like XLogGenerateError, or just XLogError(). We have been using\nxlog as an acronym in many places of the code, so switching now to wal\njust for the past matter of the pg_xlog -> pg_wal switch does not seem\nworth bothering.\n\n+read_local_xlog_page_segment_open(XLogSegNo nextSegNo,\n+ WALSegmentContext *segcxt,\nCould you think about a more simple name here? It is a callback to\nopen a new segment, so it seems to me that we could call it just\nopen_segment_callback(). There is also no point in using a pointer to\nthe TLI, no?\n\n+ * Read 'count' bytes from WAL fetched from timeline 'tli' into 'buf',\n+ * starting at location 'startptr'. 
'seg' is the last segment used,\n+ * 'openSegment' is a callback to opens the next segment if needed and\n+ * 'segcxt' is additional segment info that does not fit into 'seg'.\nA typo and the last part of the last sentence could be better worded.\n--\nMichael",
"msg_date": "Thu, 7 Nov 2019 13:48:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Oct 04, 2019 at 12:11:11PM +0200, Antonin Houska wrote:\n> > This is the next version.\n> \n> So... These are the two last bits to look at, reducing a bit the code\n> size:\n> 5 files changed, 396 insertions(+), 419 deletions(-)\n> \n> And what this patch set does is to refactor the routines we use now in\n> xlogreader.c to read a page by having new callbacks to open a segment,\n> as that's basically the only difference between the context of a WAL\n> sender, pg_waldump and recovery.\n> \n> Here are some comments reading through the code.\n> \n> + * Note that XLogRead(), if used, should have updated the \"seg\" too for\n> + * its own reasons, however we cannot rely on ->read_page() to call\n> + * XLogRead().\n> Why?\n\nI've updated the comment:\n\n+ /*\n+ * Update read state information.\n+ *\n+ * If XLogRead() is was called by ->read_page, it should have updated the\n+ * ->seg fields accordingly (since we never request more than a single\n+ * page, neither ws_segno nor ws_off should have advanced beyond\n+ * targetSegNo and targetPageOff respectively). However it's not mandatory\n+ * for ->read_page to call XLogRead().\n+ */\n\nBesides what I say here, I'm not sure if we should impose additional\nrequirement on the existing callbacks (possibly those in extensions) to update\nthe XLogReaderState.seg structure.\n\n> Your patch removes all the three optional lseek() calls which can\n> happen in a segment. Am I missing something but isn't that plain\n> wrong? You could reuse the error context for that as well if an error\n> happens as what's needed is basically the segment name and the LSN\n> offset.\n\nExplicit call of lseek() is not used because XLogRead() uses pg_pread()\nnow. Nevertheless I found out that in the the last version of the patch I set\nws_off to 0 for a newly opened segment. 
This was wrong, fixed now.\n\n> All the callers of XLogReadProcessError() are in src/backend/, so it\n> seems to me that there is no point to keep that in xlogreader.c but it\n> should be instead in xlogutils.c, no? It seems to me that this is\n> more like XLogGenerateError, or just XLogError(). We have been using\n> xlog as an acronym in many places of the code, so switching now to wal\n> just for the past matter of the pg_xlog -> pg_wal switch does not seem\n> worth bothering.\n\nok, moved to xlogutils.c and renamed to XLogReadRaiseError(). I think the\n\"Read\" word should be there because many other error can happen during XLOG\nprocessing.\n\n> +read_local_xlog_page_segment_open(XLogSegNo nextSegNo,\n> + WALSegmentContext *segcxt,\n> Could you think about a more simple name here? It is a callback to\n> open a new segment, so it seems to me that we could call it just\n> open_segment_callback().\n\nok, the function is not exported to other modules, so there's no need to care\nabout uniqueness of the name. I chose wal_segment_open(), according to the\ncallback type WALSegmentOpen.\n\n> There is also no point in using a pointer to the TLI, no?\n\nThis particular callback makes no decision about the TLI, so it only uses\ntli_p as an input argument.\n\n> + * Read 'count' bytes from WAL fetched from timeline 'tli' into 'buf',\n> + * starting at location 'startptr'. 'seg' is the last segment used,\n> + * 'openSegment' is a callback to opens the next segment if needed and\n> + * 'segcxt' is additional segment info that does not fit into 'seg'.\n> A typo and the last part of the last sentence could be better worded.\n\nok, adjusted a bit.\n\nThanks for review.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Mon, 11 Nov 2019 16:25:56 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
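The segment-open callback convention settled on above (a `WALSegmentOpen` callback plus a `tli_p` in/out argument) can be sketched roughly as follows. The type and function names here are illustrative only, and the fd-as-return-value shape follows the later review comments rather than any committed API:

```c
#include <stdint.h>

/* Hypothetical stand-in for WALSegmentContext. */
typedef struct SegContextSketch
{
    uint64_t ws_segsize;
} SegContextSketch;

/*
 * Hypothetical callback shape: the TLI is passed by pointer because some
 * callers (e.g. walsender) may overwrite it, while others, like the
 * wal_segment_open() discussed above, treat it as pure input.
 */
typedef int (*SegmentOpenSketch) (uint64_t segNo, SegContextSketch *segcxt,
                                  uint32_t *tli_p);

/* Invoke the callback only when the cached fd does not match the target. */
static int
ensure_open(int cur_fd, uint64_t cur_segno, uint64_t want_segno,
            SegContextSketch *segcxt, uint32_t *tli_p,
            SegmentOpenSketch open_cb)
{
    if (cur_fd >= 0 && cur_segno == want_segno)
        return cur_fd;
    return open_cb(want_segno, segcxt, tli_p);
}

/* Test double: "opens" segment N as fd 100+N, ignoring context and TLI. */
static int
fake_open(uint64_t segNo, SegContextSketch *segcxt, uint32_t *tli_p)
{
    (void) segcxt;
    (void) tli_p;
    return (int) (100 + segNo);
}
```

The point of the indirection is exactly what the thread describes: walsender, pg_waldump, and recovery differ only in how a segment file gets opened, so everything else can be shared.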
{
"msg_contents": "On Mon, Nov 11, 2019 at 04:25:56PM +0100, Antonin Houska wrote:\n>> On Fri, Oct 04, 2019 at 12:11:11PM +0200, Antonin Houska wrote:\n> + /*\n> + * Update read state information.\n> + *\n> + * If XLogRead() is was called by ->read_page, it should have updated the\n> + * ->seg fields accordingly (since we never request more than a single\n> + * page, neither ws_segno nor ws_off should have advanced beyond\n> + * targetSegNo and targetPageOff respectively). However it's not mandatory\n> + * for ->read_page to call XLogRead().\n> + */\n> \n> Besides what I say here, I'm not sure if we should impose additional\n> requirement on the existing callbacks (possibly those in extensions) to update\n> the XLogReaderState.seg structure.\n\n\"is was called\" does not make sense in this sentence. Actually, I\nwould tend to just remove it completely.\n\n>> Your patch removes all the three optional lseek() calls which can\n>> happen in a segment. Am I missing something but isn't that plain\n>> wrong? You could reuse the error context for that as well if an error\n>> happens as what's needed is basically the segment name and the LSN\n>> offset.\n> \n> Explicit call of lseek() is not used because XLogRead() uses pg_pread()\n> now. Nevertheless I found out that in the the last version of the patch I set\n> ws_off to 0 for a newly opened segment. This was wrong, fixed now.\n\nMissed that part, thanks. This was actually not obvious after an\ninitial lookup of the patch. Wouldn't it make sense to split that\npart in a separate patch that we could review and get committed first\nthen? It would have the advantage to make the rest easier to review\nand follow. And using pread is actually better for performance\ncompared to read+lseek. Now there is also the argument that we don't\nalways seek into an opened WAL segment, and that a plain read() is\nactually better than pread() in some cases.\n\n> ok, moved to xlogutils.c and renamed to XLogReadRaiseError(). 
I think the\n> \"Read\" word should be there because many other error can happen during XLOG\n> processing.\n\nNo issue with this name.\n\n> ok, the function is not exported to other modules, so there's no need to care\n> about uniqueness of the name. I chose wal_segment_open(), according to the\n> callback type WALSegmentOpen.\n\nName is fine by me.\n\n>> There is also no point in using a pointer to the TLI, no?\n> \n> This particular callback makes no decision about the TLI, so it only uses\n> tli_p as an input argument.\n\nMissed that walsender.c can enforce the tli to a new value, objection\nwithdrawn.\n\n+ * BasicOpenFile() is the preferred way to open the segment file in backend\n+ * code, whereas open(2) should be used in frontend.\nI would remove that sentence.\n\n+#ifndef FRONTEND\n+/*\n+ * Backend-specific convenience code to handle read errors encountered by\n+ * XLogRead().\n+ */\n+void\n+XLogReadRaiseError(XLogReadError *errinfo)\nNo need for the FRONTEND ifndef's here as xlogutils.c is backend-only.\n\n+#ifndef FRONTEND\n+void XLogReadRaiseError(XLogReadError *errinfo);\n+#endif\nSame as above, and missing an extern declaration.\n\n+ fatal_error(\"could not read from log file %s, offset %u, length %zu: %s\",\n+ fname, seg->ws_off, (Size) errinfo.reqbytes,\n+ strerror(errinfo.read_errno));\nLet's use this occasion to make those error messages more generic to\nreduce the pain of translators as the file name lets us know that we\nhave to deal with a WAL segment. Here are some suggestions, taking\ninto account the offset:\n- If errno is set: \"could not read file \\\"%s\\\" at offset %u: %m\"\n- For partial read: \"could not read file \\\"%s\\\" at offset %u: read %d of %zu\"\n--\nMichael",
"msg_date": "Tue, 12 Nov 2019 13:31:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
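The two generic error messages suggested above hinge on distinguishing a short read (errno not set) from a failed read (errno set). A hedged sketch of that classification, with hypothetical names modeled loosely on the `WALReadError` idea in the patch:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical mirror of the error-info struct discussed in this review. */
typedef struct WALReadErrorSketch
{
    int     wre_errno;          /* saved errno, or 0 for a partial read */
    size_t  wre_read;           /* bytes actually read */
    size_t  wre_req;            /* bytes requested */
} WALReadErrorSketch;

/*
 * Classify a read result; returns true when the read fully succeeded,
 * otherwise fills in the error info so the caller can pick between
 * "could not read ...: %m" and "could not read ...: read %d of %zu".
 */
static bool
classify_read(long nread, size_t req, int saved_errno,
              WALReadErrorSketch *err)
{
    if (nread == (long) req)
        return true;
    err->wre_errno = nread < 0 ? saved_errno : 0;
    err->wre_read = nread < 0 ? 0 : (size_t) nread;
    err->wre_req = req;
    return false;
}
```

Keeping the raw numbers in a struct, rather than formatting the message at the read site, is what lets frontend code (pg_waldump) and backend code share the read loop while reporting errors their own way.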
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Nov 11, 2019 at 04:25:56PM +0100, Antonin Houska wrote:\n> >> On Fri, Oct 04, 2019 at 12:11:11PM +0200, Antonin Houska wrote:\n> >> Your patch removes all the three optional lseek() calls which can\n> >> happen in a segment. Am I missing something but isn't that plain\n> >> wrong? You could reuse the error context for that as well if an error\n> >> happens as what's needed is basically the segment name and the LSN\n> >> offset.\n> > \n> > Explicit call of lseek() is not used because XLogRead() uses pg_pread()\n> > now. Nevertheless I found out that in the the last version of the patch I set\n> > ws_off to 0 for a newly opened segment. This was wrong, fixed now.\n> \n> Missed that part, thanks. This was actually not obvious after an\n> initial lookup of the patch. Wouldn't it make sense to split that\n> part in a separate patch that we could review and get committed first\n> then? It would have the advantage to make the rest easier to review\n> and follow. And using pread is actually better for performance\n> compared to read+lseek. Now there is also the argument that we don't\n> always seek into an opened WAL segment, and that a plain read() is\n> actually better than pread() in some cases.\n\nok, the next version uses explicit lseek(). Maybe the fact that XLOG is mostly\nread sequentially (i.e. without frequent seeks) is the reason pread() has't\nbeen adopted so far.\n\nThe new version reflects your other suggestions too, except the one about not\nrenaming \"XLOG\" -> \"WAL\" (actually you mentioned that earlier in the\nthread). I recall that when working on the preliminary patch (709d003fbd),\nAlvaro suggested \"WAL\" for some structures because these are new. 
The rule\nseemed to be that \"XLOG...\" should be left for the existing symbols, while the\nnew ones should be \"WAL...\":\n\nhttps://www.postgresql.org/message-id/20190917221521.GA15733%40alvherre.pgsql\n\nSo I decided to rename the new symbols and to remove the related comment.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Tue, 12 Nov 2019 12:06:33 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On 2019-Nov-12, Antonin Houska wrote:\n\n> ok, the next version uses explicit lseek(). Maybe the fact that XLOG is mostly\n> read sequentially (i.e. without frequent seeks) is the reason pread() has't\n> been adopted so far.\n\nI don't quite understand why you backed off from switching to pread. It\nseemed a good change to me.\n\nHere's a few edits on top of your latest.\n\nThe new routine WALRead() is not at all the same as the previous\nXLogRead, so I don't see why we would keep the name. Hence renamed.\n\nI see no reason for the openSegment callback to return the FD in an out\nparam instead of straight return value. Changed that way.\n\nHaving seek/open be a boolean \"xlr_seek\" seems a bit weird. Changed to\nan \"operation\" enum. (Maybe if we go back to pg_pread we can get rid of\nthis.) Accordingly, change WALReadRaiseError and WALDumpReadPage.\n\nChange xlr_seg to be a struct rather than pointer to struct. It seems a\nbit dangerous to me to return a pointer that we don't know is going to\nbe valid at raise-error time. Struct assignment works fine for the\npurpose.\n\nRenamed XLogDumpReadPage to WALDumpReadPage, because what the heck is\nXLogDump anyway? That doesn't exist.\n\nI would only like to switch this back to pg_pread() (from seek/read) and\nI'd be happy to commit this.\n\nWhat is logical_read_local_xlog_page all about? Seems useless. Let's\nget rid of it.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 15 Nov 2019 18:41:02 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Attachment.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 15 Nov 2019 18:41:45 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "BTW ... contrib/test_decoding fails with this patch.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 17 Nov 2019 01:22:21 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On Fri, Nov 15, 2019 at 06:41:02PM -0300, Alvaro Herrera wrote:\n> I don't quite understand why you backed off from switching to pread. It\n> seemed a good change to me.\n>\n> [...]\n>\n> Having seek/open be a boolean \"xlr_seek\" seems a bit weird. Changed to\n> an \"operation\" enum. (Maybe if we go back to pg_pread we can get rid of\n> this.) Accordingly, change WALReadRaiseError and WALDumpReadPage.\n\nThis has been quickly mentioned on the thread which has introduced\npread():\nhttps://www.postgresql.org/message-id/c2f56d0a-cadd-3df1-ae48-b84dc8128c37@redhat.com\n\nNow, read() > pread() > read()+lseek(), and we don't actually need to\nseek into the file for all the cases where we read a WAL page. And on\na platform which uses the fallback implementation, this increases the\nnumber of lseek() calls. I can see as you say that using it directly\nin the refactoring can simplify the code.\n--\nMichael",
"msg_date": "Mon, 18 Nov 2019 21:29:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
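For context on the read() vs pread() vs read()+lseek() ranking above: on platforms lacking pread(2), PostgreSQL emulates it with lseek()+read() (the fallback implementation Michael mentions), which is where the extra seek syscalls come from. A simplified sketch of such an emulation follows; it is not the actual fallback source, just an illustration of the shape (note a real emulation is also not atomic with respect to the file offset):

```c
#include <sys/types.h>
#include <unistd.h>
#include <stdlib.h>             /* for the usage example below */
#include <string.h>             /* for the usage example below */

/* Illustrative pread emulation: seek, then read from the new offset. */
static ssize_t
my_pread(int fd, void *buf, size_t nbyte, off_t offset)
{
    if (lseek(fd, offset, SEEK_SET) < 0)
        return -1;
    return read(fd, buf, nbyte);
}
```

With native pread() the kernel never has to update the file position, which is why dropping the explicit seeks can simplify the WAL-read loop without costing anything on common platforms.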
{
"msg_contents": "On Mon, Nov 18, 2019 at 09:29:03PM +0900, Michael Paquier wrote:\n> Now, read() > pread() > read()+lseek(), and we don't actually need to\n> seek into the file for all the cases where we read a WAL page. And on\n> a platform which uses the fallback implementation, this increases the\n> number of lseek() calls. I can see as you say that using it directly\n> in the refactoring can simplify the code.\n\nPutting this point aside, here is the error coming from\ncontrib/test_decoding/, and this is independent of Alvaro's changes:\n+ERROR: invalid magic number 0000 in log segment\n000000010000000000000001, offset 6905856\n\nI don't think that this is just xlp_magic messed up, the full page\nread is full of zeros. But that's just a guess.\n\nLooking at the code, I am spotting one inconsistency in the way\nseg->ws_off is compiled after doing the read on the new version\ncompared to the three others. read() would move the offset of the\nfile, but the code is forgetting to increment it by the amount of\nreadbytes. Isn't that incorrect?\n\nA second thing is that wal_segment_open() definition is incorrect in\nxlogutils.c, generating a warning. The opened fd is the returned\nresult, and not an argument of the routine.\n\nI am switching the patch as waiting on author. Antonin, could you\nlook at those problems?\n--\nMichael",
"msg_date": "Wed, 20 Nov 2019 17:38:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> On 2019-Nov-12, Antonin Houska wrote:\n> \n> > ok, the next version uses explicit lseek(). Maybe the fact that XLOG is mostly\n> > read sequentially (i.e. without frequent seeks) is the reason pread() has't\n> > been adopted so far.\n> \n> I don't quite understand why you backed off from switching to pread. It\n> seemed a good change to me.\n\nI agreed with Michael that it makes comparison of the old and new code more\ndifficult, and I also thought that his arguments about performance might be\nworthwhile because WAL reading is mostly sequential and does not require many\nseeks. However things appear to be more complex, see below.\n\n> Here's a few edits on top of your latest.\n\n> ...\n\nI agree with your renamings.\n\n> Change xlr_seg to be a struct rather than pointer to struct. It seems a\n> bit dangerous to me to return a pointer that we don't know is going to\n> be valid at raise-error time. Struct assignment works fine for the\n> purpose.\n\nok\n\n> I would only like to switch this back to pg_pread() (from seek/read) and\n> I'd be happy to commit this.\n\nI realized that, starting from commit 709d003fbd98b975a4fbcb4c5750fa6efaf9ad87\nwe use the WALOpenSegment.ws_off field incorrectly in\nwalsender.c:XLogRead(). In that commit we used this field to replace\nXLogReaderState.readOff:\n\n@@ -156,10 +165,9 @@ struct XLogReaderState\n char *readBuf;\n uint32 readLen;\n \n- /* last read segment, segment offset, TLI for data currently in readBuf */\n- XLogSegNo readSegNo;\n- uint32 readOff;\n- TimeLineID readPageTLI;\n+ /* last read XLOG position for data currently in readBuf */\n+ WALSegmentContext segcxt;\n+ WALOpenSegment seg;\n \n /*\n * beginning of prior page read, and its TLI. Doesn't necessarily\n\nThus we cannot use it in XLogRead() to track the current position in the\nsegment file. 
Although walsender.c:XLogRead() misses this point, it's not\nbroken because walsender.c does not use XLogReaderState at all.\n\nSo if explicit lseek() should be used, another field should be added to\nWALOpenSegment. I failed to do so when removing the pg_pread() call from the\npatch, and that was the reason for the problem reported here:\n\nhttps://www.postgresql.org/message-id/20191117042221.GA16537%40alvherre.pgsql\nhttps://www.postgresql.org/message-id/20191120083802.GB47145@paquier.xyz\n\nThus the use of pg_pread() makes the code quite a bit simpler, so I\nre-introduced it. If you decide that an explicit lseek() should be used yet,\njust let me know.\n\n> What is logical_read_local_xlog_page all about? Seems useless. Let's\n> get rid of it.\n\nIt seems so. Should I post a patch for that?\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Wed, 20 Nov 2019 15:50:29 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 03:50:29PM +0100, Antonin Houska wrote:\n> Thus the use of pg_pread() makes the code quite a bit simpler, so I\n> re-introduced it. If you decide that an explicit lseek() should be used yet,\n> just let me know.\n\nSkimming through the code, it looks like in a good state. The\nfailures of test_deconding are fixed, and all the changes from Alvaro\nhave been added.\n\n+ fatal_error(\"could not read in file %s, offset %u, length %zu: %s\",\n+ fname, seg->ws_off, (Size) errinfo.wre_req,\n+ strerror(errinfo.wre_errno));\nYou should be able to use %m here instead of strerror().\n\nIt seems to me that it is always important to not do changes\ncompletely blindly either so as this does not become an issue for\nrecovery later on. FWIW, I ran a small set of tests with a WAL\nsegment sizes of 1MB and 1GB (fsync = off, max_wal_size/min_wal_size\nset very high, 1 billion rows in single-column table followed by a\nseries of updates):\n- Created a primary and a standby which archive_mode set.\n- Stopped the standby.\n- Produced close to 12GB worth of WAL.\n- Restarted the standby with restore_command and compared the time it\ntakes for recovery to complete all the segments with HEAD and your\nrefactoring:\n1GB + HEAD: 7min52s\n1GB + patch: 8min10s\n1MB + HEAD: 10min17s\n1MB + patch: 12min1s\n\nAnd with WAL segments at 1MB, I was seeing quite a slowdown with the\npatch... Then I have done an extra test with pg_waldump with the\nsegments generated previously with the output redirected to /dev/null.\nGoing through 512 segments takes 15.730s with HEAD (average of 3 runs)\nand 15.851s with the patch.\n--\nMichael",
"msg_date": "Thu, 21 Nov 2019 17:05:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On Thu, Nov 21, 2019 at 05:05:50PM +0900, Michael Paquier wrote:\n> And with WAL segments at 1MB, I was seeing quite a slowdown with the\n> patch... Then I have done an extra test with pg_waldump with the\n> segments generated previously with the output redirected to /dev/null.\n> Going through 512 segments takes 15.730s with HEAD (average of 3 runs)\n> and 15.851s with the patch.\n\nHere are more tests with pg_waldump and 1MB/1GB segment sizes with\nrecords generated from pgbench, (7 runs, eliminated the two highest\nand two lowest, these are the remaining 3 runs as real time):\n1) 1MB segment size, 512 segments:\ntime pg_waldump 000000010000000100000C00 000000010000000100000F00 > /dev/null\n- HEAD: 0m4.512s, 0m4.446s, 0m4.501s\n- Patch + system's pg_read: 0m4.495s, 0m4.502s, 0m4.486s\n- Patch + fallback pg_read: 0m4.505s, 0m4.527s, 0m4.495s\n2) 1GB segment size, 3 segments:\ntime pg_waldump 000000010000000200000001 000000010000000200000003 > /dev/null\n- HEAD: 0m11.802s, 0m11.834s, 0m11.846s\n- Patch + system's pg_read: 0m11.939s, 0m11.991s, 0m11.966s\n- Patch + fallback pg_read: 0m12.054s, 0m12.066s, 0m12.159s\nSo there is a tendency for a small slowdown here. Still it is not\nthat much, so I withdraw my concerns.\n\nAnother thing:\n+void WALReadRaiseError(WALReadError *errinfo);\nThis is missing an \"extern\" declaration.\n\nAlvaro, you are marked as a committer of this CF entry. Are you\nplanning to look at it again? Sorry for the delay from my side.\n--\nMichael",
"msg_date": "Fri, 22 Nov 2019 09:49:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On 2019-Nov-22, Michael Paquier wrote:\n\n> Alvaro, you are marked as a committer of this CF entry. Are you\n> planning to look at it again? Sorry for the delay from my side.\n\nYes :-) hopefully next week. Thanks for reviewing.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 22 Nov 2019 00:49:33 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 12:49:33AM -0300, Alvaro Herrera wrote:\n> Yes :-) hopefully next week. Thanks for reviewing.\n\nThanks, I am switching the entry as ready for committer then. Please\nnote that the latest patch series have a conflict at the top of\nwalsender.c easy enough to resolve, and that the function declaration\nin xlogutils.h misses an \"extern\". I personally find unnecessary the\nlast sentence in the new comment block of xlogreader.h to describe the\nnew callback to open a segment about BasicOpenFile() and open()\nbecause one could also use a transient file opened in the backend, but\nI'll be fine with anything you think is most fit. That's a minor\npoint.\n\nThanks Antonin for doing the refactoring effort.\n--\nMichael",
"msg_date": "Fri, 22 Nov 2019 16:11:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Nov 21, 2019 at 05:05:50PM +0900, Michael Paquier wrote:\n> > And with WAL segments at 1MB, I was seeing quite a slowdown with the\n> > patch... Then I have done an extra test with pg_waldump with the\n> > segments generated previously with the output redirected to /dev/null.\n> > Going through 512 segments takes 15.730s with HEAD (average of 3 runs)\n> > and 15.851s with the patch.\n> \n> Here are more tests with pg_waldump and 1MB/1GB segment sizes with\n> records generated from pgbench, (7 runs, eliminated the two highest\n> and two lowest, these are the remaining 3 runs as real time):\n> 1) 1MB segment size, 512 segments:\n> time pg_waldump 000000010000000100000C00 000000010000000100000F00 > /dev/null\n> - HEAD: 0m4.512s, 0m4.446s, 0m4.501s\n> - Patch + system's pg_read: 0m4.495s, 0m4.502s, 0m4.486s\n> - Patch + fallback pg_read: 0m4.505s, 0m4.527s, 0m4.495s\n> 2) 1GB segment size, 3 segments:\n> time pg_waldump 000000010000000200000001 000000010000000200000003 > /dev/null\n> - HEAD: 0m11.802s, 0m11.834s, 0m11.846s\n> - Patch + system's pg_read: 0m11.939s, 0m11.991s, 0m11.966s\n> - Patch + fallback pg_read: 0m12.054s, 0m12.066s, 0m12.159s\n> So there is a tendency for a small slowdown here. Still it is not\n> that much, so I withdraw my concerns.\n\nThanks for the testing!\n\nI thought that in [1] you try discourage me from using pg_pread(), but now it\nseems to be the opposite. 
Ideally I'd like to see no overhead added by my\npatch at all, but the code simplicity should matter too.\n\nAs a clue, we can perhaps consider the fact that commit c24dcd0c removed\nexplicit lseek() also from XLogWrite(), but I'm not sure how much we can\ncompare XLOG writing and reading (I'd expect writing to be a bit less\nsequential than reading because XLogWrite() may need to write the last page\nmore than once.)\n\nLet's wait for Alvaro's judgement.\n\n> Another thing:\n> +void WALReadRaiseError(WALReadError *errinfo);\n> This is missing an \"extern\" declaration.\n\nI'll fix this as well as the other problem reported in [1] as soon as I know\nwhether pg_pread() should be used or not.\n\n[1] https://www.postgresql.org/message-id/20191121080550.GG153437%40paquier.xyz\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Fri, 22 Nov 2019 08:28:32 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On 2019-Nov-22, Antonin Houska wrote:\n\n> I thought that in [1] you try discourage me from using pg_pread(), but now it\n> seems to be the opposite. Ideally I'd like to see no overhead added by my\n> patch at all, but the code simplicity should matter too.\n\nFWIW I think the new code is buggy because it doesn't seem to be setting\nws_off, so I suppose the optimization in ReadPageInternal to skip\nreading the page when it's already the page we have is not hit, except\nfor the first page in the segment. I didn't verify this, just my\nimpression while reading the code.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 22 Nov 2019 10:35:51 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 10:35:51AM -0300, Alvaro Herrera wrote:\n> FWIW I think the new code is buggy because it doesn't seem to be setting\n> ws_off, so I suppose the optimization in ReadPageInternal to skip\n> reading the page when it's already the page we have is not hit, except\n> for the first page in the segment. I didn't verify this, just my\n> impression while reading the code.\n\nFWIW, this matches with my impression here, third paragraph:\nhttps://www.postgresql.org/message-id/20191120083802.GB47145@paquier.xyz\n--\nMichael",
"msg_date": "Fri, 22 Nov 2019 22:40:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On 2019-Nov-22, Michael Paquier wrote:\n\n> On Fri, Nov 22, 2019 at 10:35:51AM -0300, Alvaro Herrera wrote:\n> > FWIW I think the new code is buggy because it doesn't seem to be setting\n> > ws_off, so I suppose the optimization in ReadPageInternal to skip\n> > reading the page when it's already the page we have is not hit, except\n> > for the first page in the segment. I didn't verify this, just my\n> > impression while reading the code.\n> \n> FWIW, this matches with my impression here, third paragraph:\n> https://www.postgresql.org/message-id/20191120083802.GB47145@paquier.xyz\n\nAh, right.\n\nI was wondering if we shouldn't do away with the concept of \"offset\" as\nsuch, since the offset there is always forcibly set to the start of a\npage. Why don't we count page numbers instead? It seems like the\ninterface is confusingly generic (measure in bytes) yet not offer any\nextra functionality that could not be obtained with a simpler struct\nrepr (measure in pages).\n\nBut then that's not something that we need to change in this patch.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 22 Nov 2019 11:25:28 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
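The bytes-versus-pages point above can be made concrete: because page reads are always forced to a page boundary, a byte offset within a segment carries no more information than a page number. A small illustrative sketch (`BLCKSZ_SKETCH` stands in for the real `XLOG_BLCKSZ`, which defaults to 8192):

```c
#include <stdint.h>

#define BLCKSZ_SKETCH 8192      /* stand-in for XLOG_BLCKSZ */

/* Convert a page-aligned byte offset within a segment to a page number. */
static inline uint32_t
offset_to_pageno(uint32_t ws_off)
{
    return ws_off / BLCKSZ_SKETCH;
}

/* And back: the offset of the start of a page. */
static inline uint32_t
pageno_to_offset(uint32_t pageno)
{
    return pageno * BLCKSZ_SKETCH;
}
```

The round trip is lossless exactly when the offset is page-aligned, which is the invariant `ReadPageInternal` maintains; a non-aligned offset collapses to its page start.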
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> On 2019-Nov-22, Michael Paquier wrote:\n> \n> > On Fri, Nov 22, 2019 at 10:35:51AM -0300, Alvaro Herrera wrote:\n> > > FWIW I think the new code is buggy because it doesn't seem to be setting\n> > > ws_off, so I suppose the optimization in ReadPageInternal to skip\n> > > reading the page when it's already the page we have is not hit, except\n> > > for the first page in the segment. I didn't verify this, just my\n> > > impression while reading the code.\n> > \n> > FWIW, this matches with my impression here, third paragraph:\n> > https://www.postgresql.org/message-id/20191120083802.GB47145@paquier.xyz\n> \n> Ah, right.\n\nAs I pointed out in\n\nhttps://www.postgresql.org/message-id/88183.1574261429%40antos\n\nseg.ws_off only replaced readOff in XLogReaderState. So we should only update\nws_off where readOff was updated before commit 709d003. This does happen in\nReadPageInternal (see HEAD) and I see no reason for the final patch to update\nws_off anywhere else.\n\n> I was wondering if we shouldn't do away with the concept of \"offset\" as\n> such, since the offset there is always forcibly set to the start of a\n> page. Why don't we count page numbers instead? It seems like the\n> interface is confusingly generic (measure in bytes) yet not offer any\n> extra functionality that could not be obtained with a simpler struct\n> repr (measure in pages).\n\nYes, I agree that page numbers would be sufficient.\n\n> But then that's not something that we need to change in this patch.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Fri, 22 Nov 2019 16:55:20 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On 2019-Nov-22, Antonin Houska wrote:\n\n> As I pointed out in\n> \n> https://www.postgresql.org/message-id/88183.1574261429%40antos\n> \n> seg.ws_off only replaced readOff in XLogReaderState. So we should only update\n> ws_off where readOff was updated before commit 709d003. This does happen in\n> ReadPageInternal (see HEAD) and I see no reason for the final patch to update\n> ws_off anywhere else.\n\nOh you're right.\n\nI see no reason to leave ws_off. We can move that to XLogReaderState; I\ndid that here. We also need the offset in WALReadError, though, so I\nadded it there too. Conceptually it seems clearer to me this way.\n\nWhat do you think of the attached?\n\nBTW I'm not clear what errors can pread()/pg_pread() report that do not\nset errno. I think lines 1083/1084 of WALRead are spurious now.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 22 Nov 2019 19:56:32 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 07:56:32PM -0300, Alvaro Herrera wrote:\n> I see no reason to leave ws_off. We can move that to XLogReaderState; I\n> did that here. We also need the offset in WALReadError, though, so I\n> added it there too. Conceptually it seems clearer to me this way.\n\nYeah, that seems cleaner.\n\n> What do you think of the attached?\n\nLooks rather fine to me.\n\n> BTW I'm not clear what errors can pread()/pg_pread() report that do not\n> set errno. I think lines 1083/1084 of WALRead are spurious now.\n\nBecause we have no guarantee that errno will be cleared if you do a\npartial read where errno is not set, so you may finish by reporting\nthe state of a previous failed read instead of the partially-failed\none depending on how WALReadError is treated? In short, I don't see\nany actual reason why it would be good to remove the reset of errno\neither before the calls to pread and pwrite().\n--\nMichael",
"msg_date": "Mon, 25 Nov 2019 12:30:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> On 2019-Nov-22, Antonin Houska wrote:\n> \n> > As I pointed out in\n> > \n> > https://www.postgresql.org/message-id/88183.1574261429%40antos\n> > \n> > seg.ws_off only replaced readOff in XLogReaderState. So we should only update\n> > ws_off where readOff was updated before commit 709d003. This does happen in\n> > ReadPageInternal (see HEAD) and I see no reason for the final patch to update\n> > ws_off anywhere else.\n> \n> Oh you're right.\n> \n> I see no reason to leave ws_off. We can move that to XLogReaderState; I\n> did that here. We also need the offset in WALReadError, though, so I\n> added it there too. Conceptually it seems clearer to me this way.\n> \n> What do you think of the attached?\n\nIt looks good to me. Attached is just a fix of a minor problem in error\nreporting that Michael pointed out earlier.\n\n> BTW I'm not clear what errors can pread()/pg_pread() report that do not\n> set errno. I think lines 1083/1084 of WALRead are spurious now.\n\nAll I can say is that the existing calls of pg_pread() do not clear errno, so\nyou may be right. I'd appreciate more background about the \"partial read\" that\nMichael mentions here:\n\nhttps://www.postgresql.org/message-id/20191125033048.GG37821%40paquier.xyz\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Mon, 25 Nov 2019 10:02:00 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On 2019-Nov-25, Antonin Houska wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> > I see no reason to leave ws_off. We can move that to XLogReaderState; I\n> > did that here. We also need the offset in WALReadError, though, so I\n> > added it there too. Conceptually it seems clearer to me this way.\n> > \n> > What do you think of the attached?\n> \n> It looks good to me. Attached is just a fix of a minor problem in error\n> reporting that Michael pointed out earlier.\n\nExcellent, I pushed it with this change included and some other cosmetic\nchanges.\n\nNow there's only XLogPageRead() ...\n\n> > BTW I'm not clear what errors can pread()/pg_pread() report that do not\n> > set errno. I think lines 1083/1084 of WALRead are spurious now.\n> \n> All I can say is that the existing calls of pg_pread() do not clear errno, so\n> you may be right.\n\nRight ... in this interface, we only report an error if pg_pread()\nreturns negative, which is documented to always set errno.\n\n> I'd appreciate more background about the \"partial read\" that\n> Michael mentions here:\n> \n> https://www.postgresql.org/message-id/20191125033048.GG37821%40paquier.xyz\n\nIn the current implementation, if pg_pread() does a partial read, we\njust loop one more time.\n\nI considered changing the \"if (readbytes <= 0)\" with \"if (readbytes <\nsegbytes)\", but that seemed pointless.\n\nHowever, writing this now makes me think that we should add a\nCHECK_FOR_INTERRUPTS in this loop. (I also wonder if we shouldn't limit\nthe number of times we retry if pg_pread returns zero (i.e. no error,\nbut no bytes read either). I don't know if this is a real-world\nconsideration.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 25 Nov 2019 15:15:34 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> On 2019-Nov-25, Antonin Houska wrote:\n> \n> > Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> > > I see no reason to leave ws_off. We can move that to XLogReaderState; I\n> > > did that here. We also need the offset in WALReadError, though, so I\n> > > added it there too. Conceptually it seems clearer to me this way.\n> > > \n> > > What do you think of the attached?\n> > \n> > It looks good to me. Attached is just a fix of a minor problem in error\n> > reporting that Michael pointed out earlier.\n> \n> Excellent, I pushed it with this change included and some other cosmetic\n> changes.\n\nThanks!\n\n> Now there's only XLogPageRead() ...\n\nHm, this seems rather specific, not sure it's worth trying to use WALRead()\nhere. Anyway, I notice that it uses pg_read() too.\n\n> > I'd appreciate more background about the \"partial read\" that\n> > Michael mentions here:\n> > \n> > https://www.postgresql.org/message-id/20191125033048.GG37821%40paquier.xyz\n> \n> In the current implementation, if pg_pread() does a partial read, we\n> just loop one more time.\n> \n> I considered changing the \"if (readbytes <= 0)\" with \"if (readbytes <\n> segbytes)\", but that seemed pointless.\n\nIn the pread() documentation I see \"Upon reading end-of-file, zero is\nreturned.\" but that does not tell whether zero can be returned without\nreaching EOF. However XLogPageRead() handles zero as an error, so WALRead() is\nconsistent with that.\n\n> However, writing this now makes me think that we should add a\n> CHECK_FOR_INTERRUPTS in this loop. (I also wonder if we shouldn't limit\n> the number of times we retry if pg_pread returns zero (i.e. no error,\n> but no bytes read either). I don't know if this is a real-world\n> consideration.)\n\nIf statement above is correct, then we shouldn't need this.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Tue, 26 Nov 2019 11:40:17 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
},
{
"msg_contents": "On 2019-Nov-20, Antonin Houska wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> > What is logical_read_local_xlog_page all about? Seems useless. Let's\n> > get rid of it.\n> \n> It seems so. Should I post a patch for that?\n\nNo need .. it was simple enough. Just pushed it.\n\nThanks\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 17 Mar 2020 18:22:41 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Attempt to consolidate reading of XLOG page"
}
] |
[
{
"msg_contents": "Currently pg_stat_replication view does not tell useful information\nregarding client connections if UNIX domain sockets are used for\ncommunication between sender and receiver. So it is not possible to\ntell which row corresponds to which standby server.\n\ntest=# select client_addr, client_hostname, client_port,sync_state client_port from pg_stat_replication;\n client_addr | client_hostname | client_port | client_port \n-------------+-----------------+-------------+-------------\n | | -1 | async\n | | -1 | async\n(2 rows)\n\nThis is due to that pg_stat_replication is created from\npg_stat_get_activity view. pg_stat_get_activity view calls\npg_stat_get_activity() which returns always NULL, NULL, -1 for\nclient_add, client_hostname and client_port.\n\n\t\t\t\telse if (beentry->st_clientaddr.addr.ss_family == AF_UNIX)\n\t\t\t\t{\n\t\t\t\t\t/*\n\t\t\t\t\t * Unix sockets always reports NULL for host and -1 for\n\t\t\t\t\t * port, so it's possible to tell the difference to\n\t\t\t\t\t * connections we have no permissions to view, or with\n\t\t\t\t\t * errors.\n\t\t\t\t\t */\n\nChanging this behavior would affect existing pg_stat_get_activity view\nusers and I hesitate to do so. I wonder if we could add receiver's\nUNIX domain socket path to from pg_stat_get_wal_senders() (which is\ncalled from pg_stat_replication view) so that the poor UNIX domain\nsocket users could make their own view or access\npg_stat_get_wal_senders() to get the UNIX domain socket path.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 12 Apr 2019 09:16:27 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Adding Unix domain socket path and port to\n pg_stat_get_wal_senders()"
},
{
"msg_contents": "Em qui, 11 de abr de 2019 às 21:16, Tatsuo Ishii <ishii@sraoss.co.jp> escreveu:\n>\n> Currently pg_stat_replication view does not tell useful information\n> regarding client connections if UNIX domain sockets are used for\n> communication between sender and receiver. So it is not possible to\n> tell which row corresponds to which standby server.\n>\napplication_name. I'm not sure if it solves your complain but Peter\ncommitted a patch [1] for v12 that distinguishes replicas in the same\nhost via cluster_name.\n\n> test=# select client_addr, client_hostname, client_port,sync_state client_port from pg_stat_replication;\n> client_addr | client_hostname | client_port | client_port\n> -------------+-----------------+-------------+-------------\n> | | -1 | async\n> | | -1 | async\n> (2 rows)\n>\n> This is due to that pg_stat_replication is created from\n> pg_stat_get_activity view. pg_stat_get_activity view calls\n> pg_stat_get_activity() which returns always NULL, NULL, -1 for\n> client_add, client_hostname and client_port.\n>\nSocket has different semantic from TCP/UDP. We can't add socket\ninformation into client_addr unless we are prepared to break this view\n(client_addr has type inet and it would be necessary to change it to\ntext). It could break a lot of applications.\n\n\n[1] https://www.postgresql.org/message-id/flat/1257eaee-4874-e791-e83a-46720c72cac7@2ndquadrant.com\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n",
"msg_date": "Thu, 11 Apr 2019 22:19:01 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: Adding Unix domain socket path and port to\n pg_stat_get_wal_senders()"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 10:19:01PM -0300, Euler Taveira wrote:\n> application_name. I'm not sure if it solves your complain but Peter\n> committed a patch [1] for v12 that distinguishes replicas in the same\n> host via cluster_name.\n\nLet's be honest, this is just a workaround.\n\n> Socket has different semantic from TCP/UDP. We can't add socket\n> information into client_addr unless we are prepared to break this view\n> (client_addr has type inet and it would be necessary to change it to\n> text). It could break a lot of applications.\n\nclient_addr does not seem the right place to store this information,\nand it is already documented for years that NULL is used when using a\nUnix socket. But I think that we could change *client_hostname* so as\nthe path name is reported instead of NULL when connecting through a\nUnix domain socket, and there is no need to switch the field type for\nthat.\n\nI agree with Ishii-san that it would be nice to close the gap here.\nFor pg_stat_wal_receiver, please note that sender_host reports\ncorrectly the domain path when connecting locally.\n--\nMichael",
"msg_date": "Fri, 12 Apr 2019 13:39:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Adding Unix domain socket path and port to\n pg_stat_get_wal_senders()"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Apr 11, 2019 at 10:19:01PM -0300, Euler Taveira wrote:\n>> Socket has different semantic from TCP/UDP. We can't add socket\n>> information into client_addr unless we are prepared to break this view\n>> (client_addr has type inet and it would be necessary to change it to\n>> text). It could break a lot of applications.\n\nAgreed.\n\n> client_addr does not seem the right place to store this information,\n> and it is already documented for years that NULL is used when using a\n> Unix socket. But I think that we could change *client_hostname* so as\n> the path name is reported instead of NULL when connecting through a\n> Unix domain socket, and there is no need to switch the field type for\n> that.\n\nThat seems like a hack, and I think it could still break apps that\nare expecting particular semantics for that field. Why not add a\nnew column instead?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Apr 2019 09:43:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding Unix domain socket path and port to\n pg_stat_get_wal_senders()"
},
{
"msg_contents": ">> client_addr does not seem the right place to store this information,\n>> and it is already documented for years that NULL is used when using a\n>> Unix socket. But I think that we could change *client_hostname* so as\n>> the path name is reported instead of NULL when connecting through a\n>> Unix domain socket, and there is no need to switch the field type for\n>> that.\n> \n> That seems like a hack, and I think it could still break apps that\n> are expecting particular semantics for that field. Why not add a\n> new column instead?\n\nActually I already proposed to add new column to pg_stat_get_wal_senders():\n\n> Changing this behavior would affect existing pg_stat_get_activity view\n> users and I hesitate to do so. I wonder if we could add receiver's\n> UNIX domain socket path to from pg_stat_get_wal_senders() (which is\n> called from pg_stat_replication view) so that the poor UNIX domain\n> socket users could make their own view or access\n> pg_stat_get_wal_senders() to get the UNIX domain socket path.\n\nIf we were ok to add a new column to pg_stat_activity view or\npg_stat_replication view as well, that will be great.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 12 Apr 2019 23:38:52 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Adding Unix domain socket path and port to\n pg_stat_get_wal_senders()"
},
{
"msg_contents": "Em sex, 12 de abr de 2019 às 01:39, Michael Paquier\n<michael@paquier.xyz> escreveu:\n>\n> On Thu, Apr 11, 2019 at 10:19:01PM -0300, Euler Taveira wrote:\n> > application_name. I'm not sure if it solves your complain but Peter\n> > committed a patch [1] for v12 that distinguishes replicas in the same\n> > host via cluster_name.\n>\n> Let's be honest, this is just a workaround.\n>\nThe question is: what is the problem we want to solve? Ishii-san asked\nfor a socket path. If we have already figured out the replica (via\napplication_name), use the replica PID to find the socket path. A new\ncolumn as suggested by Tom could show the desired info. Is it *really*\nuseful? I mean, how many setups have master and replica in the same\nserver? For a socket connection, directory is important and that\ninformation I can get from unix_socket_directories parameter (I've\nnever seen a setup with multiple socket directories).\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n",
"msg_date": "Fri, 12 Apr 2019 11:55:26 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: Adding Unix domain socket path and port to\n pg_stat_get_wal_senders()"
},
{
"msg_contents": "Euler Taveira <euler@timbira.com.br> writes:\n> The question is: what is the problem we want to solve? Ishii-san asked\n> for a socket path. If we have already figured out the replica (via\n> application_name), use the replica PID to find the socket path. A new\n> column as suggested by Tom could show the desired info. Is it *really*\n> useful? I mean, how many setups have master and replica in the same\n> server?\n\nYeah, I think that argument is why we didn't cover the case in the\noriginal view design. This additional column would be useless on\nWindows, too. Still, since Ishii-san is concerned about this,\nI suppose he has a plausible use-case in mind.\n\n> For a socket connection, directory is important and that\n> information I can get from unix_socket_directories parameter (I've\n> never seen a setup with multiple socket directories).\n\nThose are actually pretty common, for example if you use Red Hat's\npackaging you will have both /var/run/postgresql and /tmp as socket\ndirectories (since they consider use of /tmp deprecated, but getting\nrid of all clients' use of it turns out to be really hard). However,\nit's definitely fair to question whether anyone *cares* which of\nthe server's socket directories a given connection used. Aren't\nthey going to be pretty much all equivalent?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Apr 2019 11:57:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding Unix domain socket path and port to\n pg_stat_get_wal_senders()"
},
{
"msg_contents": "> The question is: what is the problem we want to solve?\n\nThe client_hostname is useful for TCP/IP connections because it\nindicates which row of the view is related to which standby server. I\nwould like to have the same for UNIX domain socket case as well.\n\n> Ishii-san asked\n> for a socket path. If we have already figured out the replica (via\n> application_name), use the replica PID to find the socket path.\n\nWell, I would like to avoid to use application_name if possible.\n\n> A new\n> column as suggested by Tom could show the desired info. Is it *really*\n> useful? I mean, how many setups have master and replica in the same\n> server?\n\nFor developing/testing purpose I often create master and some replicas\nin the same server. The same technique is used in a regression test\nfor Pgpool-II.\n\n> For a socket connection, directory is important and that\n> information I can get from unix_socket_directories parameter (I've\n> never seen a setup with multiple socket directories).\n\nYes, it could be a way to get the same information that\nsockaddr_un.sunpath used to provide. But now I realize that it's not\nwhat I want. What I actually wanted was, which row of the view is\nrelated to which standby server. So what I really need is the standby\nserver's accepting socket path, *not* primary server's. Currently it\nseems it's not possible except using the application_name\nhack. Probably cleaner way would be walreciver provides socket path\ninformation in startup packet and walsender keeps the info in shared\nmemory so that pg_stat_replication view can use it later on.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sun, 14 Apr 2019 21:16:41 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Adding Unix domain socket path and port to\n pg_stat_get_wal_senders()"
},
{
"msg_contents": "On Fri, Apr 12, 2019 at 11:38:52PM +0900, Tatsuo Ishii wrote:\n> If we were ok to add a new column to pg_stat_activity view or\n> pg_stat_replication view as well, that will be great.\n\nOkay, no objections with a separate, new, column if that's the\nconsensus.\n--\nMichael",
"msg_date": "Mon, 15 Apr 2019 12:52:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Adding Unix domain socket path and port to\n pg_stat_get_wal_senders()"
},
{
"msg_contents": "On 2019-04-12 17:57, Tom Lane wrote:\n>> For a socket connection, directory is important and that\n>> information I can get from unix_socket_directories parameter (I've\n>> never seen a setup with multiple socket directories).\n> Those are actually pretty common, for example if you use Red Hat's\n> packaging you will have both /var/run/postgresql and /tmp as socket\n> directories (since they consider use of /tmp deprecated, but getting\n> rid of all clients' use of it turns out to be really hard). However,\n> it's definitely fair to question whether anyone *cares* which of\n> the server's socket directories a given connection used. Aren't\n> they going to be pretty much all equivalent?\n\nSo what is being asked here is really information about which end point\non the server is being connected to. That is also information for the\nTCP/IP case that we don't currently provide. It's probably of marginal\nuse, as you also say.\n\nI don't get what this has to do with walsenders specifically. Do they\nhave each walsender connect to a different socket?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 15 Apr 2019 23:01:30 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding Unix domain socket path and port to\n pg_stat_get_wal_senders()"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-04-12 17:57, Tom Lane wrote:\n>> Those are actually pretty common, for example if you use Red Hat's\n>> packaging you will have both /var/run/postgresql and /tmp as socket\n>> directories (since they consider use of /tmp deprecated, but getting\n>> rid of all clients' use of it turns out to be really hard). However,\n>> it's definitely fair to question whether anyone *cares* which of\n>> the server's socket directories a given connection used. Aren't\n>> they going to be pretty much all equivalent?\n\n> So what is being asked here is really information about which end point\n> on the server is being connected to. That is also information for the\n> TCP/IP case that we don't currently provide.\n\nGood point.\n\n> It's probably of marginal use, as you also say.\n\nYeah. Per downthread discussion, what Tatsuo-san really wants to know\nis not that at all, but which client (slave server) is connecting.\nIt's not very clear how to identify the client, but knowing which socket\nit came through doesn't seem to help for that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Apr 2019 17:15:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding Unix domain socket path and port to\n pg_stat_get_wal_senders()"
}
] |
[
{
"msg_contents": "I find the dependency is complex among header files in PG. At the same\ntime, I find the existing code still can use the header file very\ncleanly/alphabetically. so I probably missed some knowledge here.\n\nfor example, when I want the LOCKTAG in .c file, which is defined in\n\"storage/lock.h\". then I wrote the code like this:\n\n#include \"storage/lock.h\"\n...\n\nLOCKTAG tag;\n\n\ncompile and get errors.\n\nIn file included from\n.../src/include/storage/lock.h:21:\n/../../../src/include/storage/lockdefs.h:50:2: error: unknown type name\n 'TransactionId'\n TransactionId xid; /* xid of holder of\nAccessExclusiveLock */\n\nso I HAVE TO\n1. include the header file which contains the TransactionId\n2. add it before the lock.h.\n\nnormally I think we can add the dependency in lock.h directly to resolve\nthis issue.\n\nso how can I include header file effectively ?\n\nThanks\n",
"msg_date": "Fri, 12 Apr 2019 10:08:52 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "How to include the header files effectively"
},
{
"msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> for example, when I want the LOCKTAG in .c file, which is defined in\n> \"storage/lock.h\". then I wrote the code like this:\n\n> #include \"storage/lock.h\"\n> ...\n> LOCKTAG tag;\n\n> compile and get errors.\n\n> In file included from\n> .../src/include/storage/lock.h:21:\n> /../../../src/include/storage/lockdefs.h:50:2: error: unknown type name\n> 'TransactionId'\n> TransactionId xid; /* xid of holder of\n> AccessExclusiveLock */\n\nThe reason that's failing is that you didn't include postgres.h first.\n\nThe general expectation --- and we do mechanically verify this,\nperiodically --- is that any Postgres header should have enough #include's\nthat you can include it without further work, so long as you included\npostgres.h (or postgres_fe.h, or c.h, depending on context) beforehand.\nOne of those three headers must be the first inclusion in every Postgres\n.c file. There are portability reasons behind that rule, which you\ndon't really want to know about ;-) ... just do it like that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Apr 2019 22:21:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: How to include the header files effectively"
},
{
"msg_contents": "On 2019-Apr-12, Andy Fan wrote:\n\n> for example, when I want the LOCKTAG in .c file, which is defined in\n> \"storage/lock.h\". then I wrote the code like this:\n> \n> #include \"storage/lock.h\"\n> ...\n> \n> /../../../src/include/storage/lockdefs.h:50:2: error: unknown type name\n> 'TransactionId'\n> TransactionId xid; /* xid of holder of AccessExclusiveLock */\n\nWhat are you trying to do? Your .c file must include \"postgres.h\"\nbefore any other header file. There should be no other dependencies.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 11 Apr 2019 22:22:56 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: How to include the header files effectively"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 10:22:56PM -0400, Alvaro Herrera wrote:\n> What are you trying to do? Your .c file must include \"postgres.h\"\n> before any other header file. There should be no other dependencies.\n\nThe usual rule when it comes to develop extensions or a patch is to\ninclude headers in the following order:\n1) postgres.h for backend code and postgres_fe.h for frontend (use\nifdef FRONTEND if a file is used in both context, see src/common/*.c).\n2) System-related headers, like <unistd.h> or such.\n3) Other PostgreSQL internal headers, in any patch posted to the lists\nthese in alphabetical order is warmly welcome.\n--\nMichael",
"msg_date": "Fri, 12 Apr 2019 13:31:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: How to include the header files effectively"
}
] |
[
{
"msg_contents": "Hi,\n\nI found some minor grammar mistake while reading reloptions.c code comments.\nAttached is the fix.\nI just changed \"affect\" to \"effect\", for both n_distinct and vacuum_truncate.\n - * values has no affect until the ...\n + * values has no effect until the ...\n\nRegards,\nKirk Jamison",
"msg_date": "Fri, 12 Apr 2019 02:41:37 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Minor fix in reloptions.c comments"
},
{
"msg_contents": "On Fri, Apr 12, 2019 at 02:41:37AM +0000, Jamison, Kirk wrote:\n> I found some minor grammar mistake while reading reloptions.c code comments.\n> Attached is the fix.\n> I just changed \"affect\" to \"effect\", for both n_distinct and vacuum_truncate.\n> - * values has no affect until the ...\n> + * values has no effect until the ...\n\nA lot of those parameter updates affect processing and still they have\nmany side effects, as per those paragraphs.\n\nFixed, thanks!\n--\nMichael",
"msg_date": "Fri, 12 Apr 2019 13:00:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Minor fix in reloptions.c comments"
},
{
"msg_contents": "On Fri, Apr 12, 2019 at 12:01 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Fri, Apr 12, 2019 at 02:41:37AM +0000, Jamison, Kirk wrote:\n> > I found some minor grammar mistake while reading reloptions.c code comments.\n> > Attached is the fix.\n> > I just changed \"affect\" to \"effect\", for both n_distinct and vacuum_truncate.\n> > - * values has no affect until the ...\n> > + * values has no effect until the ...\n>\n> A lot of those parameter updates affect processing and still they have\n> many side effects, as per those paragraphs.\n\nWell, \"has no affect\" is clearly wrong here, and Kirk's fix is clearly\nright. I don't know what your point here is.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 15 Apr 2019 13:47:16 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Minor fix in reloptions.c comments"
}
] |
[
{
"msg_contents": "Hello devs,\n\nI'm looking at psql's use of PQexec for implementing some feature.\n\nWhen running with multiple SQL commands, the doc is not very helpful.\n\n From the source code I gathered that PQexec returns the first COPY results \nif any, and if not the last non-empty results, unless all is empty in \nwhich case an empty result is returned. So * marks the returned result\nin the following examples:\n\n INSERT ... \\; * COPY ... \\; SELECT ... \\; \\;\n SELECT ... \\; UPDATE ... \\; * SELECT ... \\; \\;\n \\; \\; * ;\n\nThe attached patch tries to improve the documentation based on my \nunderstanding.\n\nIMVHO, psql's code is kind of a mess to work around this strange behavior, \nas there is a loop over results within PQexec, then another one after \nPQexec if there were some COPY.\n\n-- \nFabien.",
"msg_date": "Fri, 12 Apr 2019 10:38:49 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "improve PQexec documentation"
},
{
"msg_contents": "On 2019-Apr-12, Fabien COELHO wrote:\n\n> I'm looking at psql's use of PQexec for implementing some feature.\n> \n> When running with multiple SQL commands, the doc is not very helpful.\n> \n> From the source code I gathered that PQexec returns the first COPY results\n> if any, and if not the last non-empty results, unless all is empty in which\n> case an empty result is returned.\n\nI'm not sure we necessarily want to document this behavior.  If it was\nsuper helpful for some reason, or if we thought we would never change\nit, then it would make sense to document it in minute detail.  But\notherwise I think documenting it sets a promise that we would (try to)\nnever change it in the future, which I don't necessarily agree with --\nparticularly since it's somewhat awkward to use.\n\nI'm inclined to reject this patch.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 12 Apr 2019 09:12:11 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: improve PQexec documentation"
},
{
"msg_contents": "\nHello Alvaro,\n\n>> I'm looking at psql's use of PQexec for implementing some feature.\n>>\n>> When running with multiple SQL commands, the doc is not very helpful.\n>>\n>> From the source code I gathered that PQexec returns the first COPY results\n>> if any, and if not the last non-empty results, unless all is empty in which\n>> case an empty result is returned.\n>\n> I'm not sure we necessarily want to document this behavior.  If it was\n> super helpful for some reason, or if we thought we would never change\n> it, then it would make sense to document it in minute detail.  But\n> otherwise I think documenting it sets a promise that we would (try to)\n> never change it in the future, which I don't necessarily agree with --\n> particularly since it's somewhat awkward to use.\n>\n> I'm inclined to reject this patch.\n\nHmmm. I obviously agree that PQexec is beyond awkward.\n\nNow I'm not sure how anyone is expected to guess the actual function \nworking from the available documentation, and without this knowledge I \ncannot see how to write meaningful code for the multiple query case.\n\nBasically it seems to have been designed for simple queries, and then \naccommodated somehow for the multiple case but with a strange non \nsystematic approach.\n\nI think it would have been much simpler and straightforward to always \nreturn the first result and let the client do whatever it wants \nafterwards. However, as it has existed for quite some time, I'm unsure how \nlikely it is to change as it would break existing code, so documenting its \nbehavior seems logical. I'd be all in favor of changing the behavior, but \nI'm pessimistic that it could pass. Keeping the current status (not really \ndocumented & awkward behavior) seems rather strange.\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 12 Apr 2019 17:51:29 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: improve PQexec documentation"
},
{
"msg_contents": "On 2019-04-12 17:51, Fabien COELHO wrote:\n> Hmmm. I obviously agree that PQexec is beyond awkward.\n> \n> Now I'm not sure how anyone is expected to guess the actual function \n> working from the available documentation, and without this knowledge I \n> cannot see how to write meaningful code for the multiple query case.\n\nBut you're not really supposed to use it for multiple queries or\nmultiple result sets anyway. There are other functions for this.\n\nIf a source code comment in libpq or psql would help explaining some of\nthe current code, then we could add that. But I am also not sure that\nenshrining the current behavior on the API documentation is desirable.\n\n> Basically it seems to have been designed for simple queries, and then \n> accomodated somehow for the multiple case but with a strange non \n> systematic approach.\n\nprobably\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 5 Jul 2019 08:47:32 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: improve PQexec documentation"
},
{
"msg_contents": "On Sat, Apr 13, 2019 at 1:12 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I'm inclined to reject this patch.\n\nOn Fri, Jul 5, 2019 at 6:47 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> But you're not really supposed to use it for multiple queries or\n> multiple result sets anyway. There are other functions for this.\n>\n> If a source code comment in libpq or psql would help explaining some of\n> the current code, then we could add that. But I am also not sure that\n> enshrining the current behavior on the API documentation is desirable.\n\nHi Fabien,\n\nBased on the above, I have marked this as \"Returned with feedback\".\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Aug 2019 21:12:35 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: improve PQexec documentation"
}
] |
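The result-selection rule Fabien infers from the libpq sources ("first COPY result if any, else last non-empty result, else empty") can be restated as a tiny self-contained model. This is only an illustration of the behavior he describes, not libpq code; `ToyStatus` and `pqexec_result_index` are invented names standing in for libpq's `ExecStatusType` and the internal loop in `PQexec`:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Toy model of the result the thread says PQexec() hands back for a
 * multi-statement query string: given the statuses of the sub-results,
 * return the index of the chosen one.  Invented types/names, for
 * illustration only.
 */
typedef enum { RES_EMPTY, RES_COPY, RES_OTHER } ToyStatus;

static int pqexec_result_index(const ToyStatus *res, int n)
{
    int chosen = -1;

    for (int i = 0; i < n; i++)
    {
        if (res[i] == RES_COPY)
            return i;           /* first COPY result wins immediately */
        if (res[i] != RES_EMPTY)
            chosen = i;         /* otherwise remember the last non-empty one */
    }
    /* all results empty: fall back to the last (empty) one */
    return chosen >= 0 ? chosen : n - 1;
}
```

Running Fabien's three marked examples through this model reproduces each `*` position, which is what makes the rule awkward: the caller cannot tell which sub-statement's result it got without knowing this heuristic.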
[
{
"msg_contents": "Hi Hackers,\n\nI read many mail discussions in supporting data at rest encryption support\nin\nPostgreSQL.\n\nI checked the discussions around full instance encryption or tablespace or\ntable level encryption. In my observation, all the proposals are trying to\nmodify\nthe core code to support encryption.\n\nI am thinking of an approach of providing tablespace level encryption\nsupport\nincluding WAL using an extension instead of changing the core code by adding\nhooks in xlogwrite and xlogread flows, reorderbuffer flows and also by\nadding\nsmgr plugin routines to support encryption and decryption of other pages.\n\nDefinitely this approach doesn't work for full instance encryption.\n\nAny opinions/comments/problems in evaluating the encryption with an\nextension\napproach?\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Fri, 12 Apr 2019 19:34:13 +1000",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Transparent data encryption support as an extension"
},
{
"msg_contents": "On Fri, Apr 12, 2019 at 6:34 PM Haribabu Kommi <kommi.haribabu@gmail.com> wrote:\n>\n> Hi Hackers,\n>\n> I read many mail discussions in supporting data at rest encryption support in\n> PostgreSQL.\n>\n> I checked the discussions around full instance encryption or tablespace or\n> table level encryption. In my observation, all the proposals are trying to modify\n> the core code to support encryption.\n>\n> I am thinking of an approach of providing tablespace level encryption support\n> including WAL using an extension instead of changing the core code by adding\n> hooks in xlogwrite and xlogread flows, reorderbuffer flows and also by adding\n> smgr plugin routines to support encryption and decryption of other pages.\n>\n> Definitely this approach does't work for full instance encryption.\n>\n> Any opinions/comments/problems in evaluating the encryption with an extesnion\n> approach?\n>\n\nThe discussion[1] of similar proposal might be worth to read. The\nproposal was adding hook in BufferSync, although for differential\nbackup purpose.\n\n[1] https://www.postgresql.org/message-id/20051502087457@webcorp01e.yandex-team.ru\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 12 Apr 2019 19:04:16 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transparent data encryption support as an extension"
}
] |
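The proposal above relies on the save-and-chain hook idiom PostgreSQL extensions commonly use: an extension's `_PG_init()` saves the current value of a core function-pointer hook, installs its own function, and chains to the saved one. The sketch below models that idiom in a self-contained way; every name here (`page_write_hook`, `install_encryption_hook`, etc.) is invented for illustration and is not an actual smgr or xlog API, and the XOR "cipher" is only a dependency-free stand-in for real encryption:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* invented hook signature: transform a page buffer before it is written */
typedef void (*page_write_hook_type)(unsigned char *page, size_t len);

static page_write_hook_type page_write_hook = NULL;      /* core hook point */
static page_write_hook_type prev_page_write_hook = NULL; /* saved previous hook */

static void toy_encrypt_page(unsigned char *page, size_t len)
{
    for (size_t i = 0; i < len; i++)
        page[i] ^= 0xAB;             /* stand-in for a real cipher */
    if (prev_page_write_hook)        /* chain to any earlier hook */
        prev_page_write_hook(page, len);
}

/* what an extension's _PG_init() would do: save, then replace, the hook */
static void install_encryption_hook(void)
{
    prev_page_write_hook = page_write_hook;
    page_write_hook = toy_encrypt_page;
}

/* what the core write path would do at the hook point */
static void write_page(unsigned char *page, size_t len)
{
    if (page_write_hook)
        page_write_hook(page, len);
    /* ... hand the (now transformed) page to the storage layer ... */
}
```

The chaining step is what lets several extensions share one hook point, which is also why hook-based designs are attractive for optional features like the tablespace-level encryption floated here.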
[
{
"msg_contents": "Hi,\n\nThere is code like the following in many places in the PostgreSQL sources.\n\npgstat_report_wait_start(WAIT_EVENT_xxx);\nif (write(...) != len)\n{\n ereport(ERROR, ...);\n}\npgstat_report_wait_end();\n\nMost of these places don't call pgstat_report_wait_end() before\nereport(ERROR), but some do. Especially in RecreateTwoPhaseFile()\nwe have,\n\n /* Write content and CRC */\n errno = 0;\n pgstat_report_wait_start(WAIT_EVENT_TWOPHASE_FILE_WRITE);\n if (write(fd, content, len) != len)\n {\n int save_errno = errno;\n\n pgstat_report_wait_end();\n CloseTransientFile(fd);\n\n /* if write didn't set errno, assume problem is no disk space */\n errno = save_errno ? save_errno : ENOSPC;\n ereport(ERROR,\n (errcode_for_file_access(),\n errmsg(\"could not write file \\\"%s\\\": %m\", path)));\n }\n if (write(fd, &statefile_crc, sizeof(pg_crc32c)) != sizeof(pg_crc32c))\n {\n int save_errno = errno;\n\n pgstat_report_wait_end();\n CloseTransientFile(fd);\n\n /* if write didn't set errno, assume problem is no disk space */\n errno = save_errno ? save_errno : ENOSPC;\n ereport(ERROR,\n (errcode_for_file_access(),\n errmsg(\"could not write file \\\"%s\\\": %m\", path)));\n }\n pgstat_report_wait_end();\n\n /*\n * We must fsync the file because the end-of-replay checkpoint will not do\n * so, there being no GXACT in shared memory yet to tell it to.\n */\n pgstat_report_wait_start(WAIT_EVENT_TWOPHASE_FILE_SYNC);\n if (pg_fsync(fd) != 0)\n {\n int save_errno = errno;\n\n CloseTransientFile(fd);\n errno = save_errno;\n ereport(ERROR,\n (errcode_for_file_access(),\n errmsg(\"could not fsync file \\\"%s\\\": %m\", path)));\n }\n pgstat_report_wait_end();\n\nThe first two call pgstat_report_wait_end() but the third one doesn't.\n\nAs far as I know there are three places that call\npgstat_report_wait_end() before ereport(ERROR): two in twophase.c\nand another in copydir.c (at L199). Since we eventually call\npgstat_report_wait_end() in AbortTransaction(), I think that we don't\nneed to call pgstat_report_wait_end() if we're going to raise an error\njust after that. Is that right?\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 12 Apr 2019 19:27:44 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Calling pgstat_report_wait_end() before ereport(ERROR)"
},
{
"msg_contents": "On Fri, Apr 12, 2019 at 07:27:44PM +0900, Masahiko Sawada wrote:\n> As far as I know there are three places where call\n> pgstat_report_wait_end before ereport(ERROR): two in twophase.c\n> andanother in copydir.c(at L199). Since we eventually call\n> pgstat_report_wait_end() in AbortTransaction(). I think that we don't\n> need to call pgstat_report_wait_end() if we're going to raise an error\n> just after that. Is that right?\n\nRecreateTwoPhaseFile() gets called in the checkpointer or the startup\nprocess which do not have a transaction context so the wait event\nwould not get cleaned up, and we should call pgstat_report_wait_end()\nin the third elog(ERROR), no? It looks that 249cf070 has been rather\ninconsistent in its way of handling things.\n--\nMichael",
"msg_date": "Fri, 12 Apr 2019 21:07:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Calling pgstat_report_wait_end() before ereport(ERROR)"
},
{
"msg_contents": "On Fri, Apr 12, 2019 at 9:07 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Apr 12, 2019 at 07:27:44PM +0900, Masahiko Sawada wrote:\n> > As far as I know there are three places that call\n> > pgstat_report_wait_end() before ereport(ERROR): two in twophase.c\n> > and another in copydir.c (at L199). Since we eventually call\n> > pgstat_report_wait_end() in AbortTransaction(), I think that we don't\n> > need to call pgstat_report_wait_end() if we're going to raise an error\n> > just after that. Is that right?\n>\n> RecreateTwoPhaseFile() gets called in the checkpointer or the startup\n> process which do not have a transaction context\n\nYes.\n\n> so the wait event would not get cleaned up\n\nBut I think that's not right, I've checked the code. If the startup\nprocess failed in that function it raises a FATAL and recovery fails,\nand if checkpointer process does then it calls\npgstat_report_wait_end() in CheckpointerMain().\n\n> It looks that 249cf070 has been rather\n> inconsistent in its way of handling things.\n\nYeah, I think that at least handling of pgstat_report_wait_end() in\nRecreateTwoPhaseFile() is inconsistent in any case.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 12 Apr 2019 22:06:41 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Calling pgstat_report_wait_end() before ereport(ERROR)"
},
{
"msg_contents": "Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> There are something like the following code in many places in PostgreSQL code.\n> ...\n> Since we eventually call\n> pgstat_report_wait_end() in AbortTransaction(). I think that we don't\n> need to call pgstat_report_wait_end() if we're going to raise an error\n> just after that. Is that right?\n\nYes ... and those CloseTransientFile calls are unnecessary as well.\n\nTo a first approximation, *any* cleanup-type call occurring just before\nan ereport(ERROR) is probably unnecessary, or if it is necessary then\nthe code is broken in other ways. One should not assume that there is\nno other way for an error to be thrown while the resource is held, and\ntherefore it's generally better design to have enough infrastructure\nso that the error cleanup mechanisms can handle whatever cleanup is\nneeded. We certainly have such infrastructure for OpenTransientFile/\nCloseTransientFile, and according to what you say above (I didn't\ncheck it) pgstat wait reporting is handled similarly. So these\ncall sites could all be simplified substantially.\n\nThere are exceptions to this rule of thumb. In some places, for\ninstance, it's worth releasing a lock before ereport simply to shorten\nthe length of time that the lock might stay held. And there are places\nwhere a very low-level resource (such as a spinlock) is only held in\nstraight-line code so there's not really need for error cleanup\ninfrastructure for it. Perhaps there's an argument to be made that\npgstat wait reporting could be put in this second category, but\nI doubt it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Apr 2019 10:05:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Calling pgstat_report_wait_end() before ereport(ERROR)"
},
{
"msg_contents": "On Fri, Apr 12, 2019 at 10:06:41PM +0900, Masahiko Sawada wrote:\n> But I think that's not right, I've checked the code. If the startup\n> process failed in that function it raises a FATAL and recovery fails,\n> and if checkpointer process does then it calls\n> pgstat_report_wait_end() in CheckpointerMain().\n\nWell, the point is that the code raises an ERROR, then a FATAL because\nit gets upgraded by recovery. The take, at least it seems to me, is\nthat if any new caller of the function misses to clean up the event\nthen the routine gets cleared. So it seems to me that the current\ncoding is aimed to be more defensive than anything. I agree that\nthere is perhaps little point in doing so. In my experience a backend\nswitches very quickly back to ClientRead, cleaning up the previous\nevent. Looking around, we have also some code paths in slot.c and\norigin.c which close a transient file, clear the event flag... And\nthen PANIC, which makes even less sense.\n\nIn short, I tend to think that the attached is an acceptable cleanup.\nThoughts?\n--\nMichael",
"msg_date": "Tue, 16 Apr 2019 14:44:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Calling pgstat_report_wait_end() before ereport(ERROR)"
},
{
"msg_contents": "On Fri, Apr 12, 2019 at 11:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> > There are something like the following code in many places in PostgreSQL code.\n> > ...\n> > Since we eventually call\n> > pgstat_report_wait_end() in AbortTransaction(). I think that we don't\n> > need to call pgstat_report_wait_end() if we're going to raise an error\n> > just after that. Is that right?\n>\n> Yes ... and those CloseTransientFile calls are unnecessary as well.\n>\n> To a first approximation, *any* cleanup-type call occurring just before\n> an ereport(ERROR) is probably unnecessary, or if it is necessary then\n> the code is broken in other ways. One should not assume that there is\n> no other way for an error to be thrown while the resource is held, and\n> therefore it's generally better design to have enough infrastructure\n> so that the error cleanup mechanisms can handle whatever cleanup is\n> needed. We certainly have such infrastructure for OpenTransientFile/\n> CloseTransientFile, and according to what you say above (I didn't\n> check it) pgstat wait reporting is handled similarly. So these\n> call sites could all be simplified substantially.\n>\n> There are exceptions to this rule of thumb. In some places, for\n> instance, it's worth releasing a lock before ereport simply to shorten\n> the length of time that the lock might stay held. And there are places\n> where a very low-level resource (such as a spinlock) is only held in\n> straight-line code so there's not really need for error cleanup\n> infrastructure for it. Perhaps there's an argument to be made that\n> pgstat wait reporting could be put in this second category, but\n> I doubt it.\n>\n\nThank you for explanation! That's really helpful for me.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 16 Apr 2019 19:44:26 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Calling pgstat_report_wait_end() before ereport(ERROR)"
},
{
"msg_contents": "On Tue, Apr 16, 2019 at 2:45 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Apr 12, 2019 at 10:06:41PM +0900, Masahiko Sawada wrote:\n> > But I think that's not right, I've checked the code. If the startup\n> > process failed in that function it raises a FATAL and recovery fails,\n> > and if checkpointer process does then it calls\n> > pgstat_report_wait_end() in CheckpointerMain().\n>\n> Well, the point is that the code raises an ERROR, then a FATAL because\n> it gets upgraded by recovery. The take, at least it seems to me, is\n> that if any new caller of the function misses to clean up the event\n> then the routine gets cleared. So it seems to me that the current\n> coding is aimed to be more defensive than anything. I agree that\n> there is perhaps little point in doing so. In my experience a backend\n> switches very quickly back to ClientRead, cleaning up the previous\n> event. Looking around, we have also some code paths in slot.c and\n> origin.c which close a transient file, clear the event flag... And\n> then PANIC, which makes even less sense.\n>\n> In short, I tend to think that the attached is an acceptable cleanup.\n> Thoughts?\n\nAgreed. There are also some code which raise an ERROR after close a\ntransient file but I think it's a good idea to not include them for\nsafety. It looks to me that the patch you proposed cleans places as\nmuch as we can do.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 16 Apr 2019 20:03:22 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Calling pgstat_report_wait_end() before ereport(ERROR)"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> In short, I tend to think that the attached is an acceptable cleanup.\n> Thoughts?\n\nWFM.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Apr 2019 10:33:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Calling pgstat_report_wait_end() before ereport(ERROR)"
},
{
"msg_contents": "On Tue, Apr 16, 2019 at 08:03:22PM +0900, Masahiko Sawada wrote:\n> Agreed. There are also some code which raise an ERROR after close a\n> transient file but I think it's a good idea to not include them for\n> safety. It looks to me that the patch you proposed cleans places as\n> much as we can do.\n\nThanks for the lookup, committed.\n--\nMichael",
"msg_date": "Wed, 17 Apr 2019 09:57:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Calling pgstat_report_wait_end() before ereport(ERROR)"
}
] |
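The twophase.c excerpts quoted in this thread all revolve around one idiom worth isolating: save `errno` before cleanup, because cleanup calls such as `CloseTransientFile()`/`close()` may clobber it, then restore it before reporting. Here is that core as a self-contained sketch; `write_all` is our name for the helper, not PostgreSQL's, and the `return -1` stands in for the `ereport(ERROR, ... %m ...)` a backend would issue:

```c
#include <errno.h>
#include <stddef.h>
#include <unistd.h>

/*
 * Sketch of the errno-preservation idiom from the quoted twophase.c code.
 * On a short or failed write: save errno, do cleanup (which may change
 * errno), restore it, and let the caller report with %m.
 */
static int write_all(int fd, const void *buf, size_t len)
{
    errno = 0;
    if (write(fd, buf, len) != (ssize_t) len)
    {
        int save_errno = errno;

        close(fd);               /* cleanup may clobber errno */

        /* if write didn't set errno, assume the problem is no disk space */
        errno = save_errno ? save_errno : ENOSPC;
        return -1;               /* caller would ereport(ERROR) here */
    }
    return 0;
}
```

The thread's conclusion fits this shape: the `pgstat_report_wait_end()` call before the error report is the part that error-cleanup infrastructure can handle centrally, whereas the `errno` dance cannot be, so only the former is removable.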
[
{
"msg_contents": "Hackers,\n\nI will use as an example the code in the regression test\n'collate.linux.utf8'.\nThere you can find:\n\nSET lc_time TO 'tr_TR';\nSELECT to_char(date '2010-04-01', 'DD TMMON YYYY');\n to_char\n-------------\n 01 NIS 2010\n(1 row)\n\nThe problem is that the locale 'tr_TR' uses the encoding ISO-8859-9\n(LATIN5),\nwhile the test runs in UTF8. So the following code will raise an error:\n\nSET lc_time TO 'tr_TR';\nSELECT to_char(date '2010-02-01', 'DD TMMON YYYY');\nERROR: invalid byte sequence for encoding \"UTF8\": 0xde 0x75\n\nThe problem seems to be in the code touched in the attached patch.\n\nRegards,\n\nJuan Jose Santamaria Flecha",
"msg_date": "Fri, 12 Apr 2019 18:45:51 +0200",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": true,
"msg_subject": "TM format can mix encodings in to_char()"
},
{
"msg_contents": "Hello.\n\nAt Fri, 12 Apr 2019 18:45:51 +0200, Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> wrote in <CAC+AXB22So5aZm2vZe+MChYXec7gWfr-n-SK-iO091R0P_1Tew@mail.gmail.com>\n> Hackers,\n> \n> I will use as an example the code in the regression test\n> 'collate.linux.utf8'.\n> There you can find:\n> \n> SET lc_time TO 'tr_TR';\n> SELECT to_char(date '2010-04-01', 'DD TMMON YYYY');\n> to_char\n> -------------\n> 01 NIS 2010\n> (1 row)\n> \n> The problem is that the locale 'tr_TR' uses the encoding ISO-8859-9\n> (LATIN5),\n> while the test runs in UTF8. So the following code will raise an error:\n> \n> SET lc_time TO 'tr_TR';\n> SELECT to_char(date '2010-02-01', 'DD TMMON YYYY');\n> ERROR: invalid byte sequence for encoding \\\"UTF8\\\": 0xde 0x75\n\nThe same case is handled for lc_numeric. lc_time ought to be\ntreated the same way.\n\n> The problem seems to be in the code touched in the attached patch.\n\nIt seems basically correct, but cache_locale_time does an extra\nstrdup when pg_any_to_server did conversion. Maybe it would be\nbetter to be like this:\n\n> oldcxt = MemoryContextSwitchTo(TopMemoryContext);\n> ptr = pg_any_to_server(buf, strlen(buf), encoding);\n> \n> if (ptr == buf)\n> {\n> \t/* Conversion didn't pstrdup, so we must */\n> \tptr = pstrdup(buf);\n> }\n> MemoryContextSwitchTo(oldcxt);\n\n-\tint\t\t\ti;\n+\tint\t\t\ti,\n+\t\t\t\tencoding;\n\nIt is not strictly kept, but (I believe) we don't define multiple\nvariables in a single definition.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Fri, 19 Apr 2019 17:30:17 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: TM format can mix encodings in to_char()"
},
{
"msg_contents": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> writes:\n> The problem is that the locale 'tr_TR' uses the encoding ISO-8859-9 (LATIN5),\n> while the test runs in UTF8. So the following code will raise an error:\n\n> SET lc_time TO 'tr_TR';\n> SELECT to_char(date '2010-02-01', 'DD TMMON YYYY');\n> ERROR: invalid byte sequence for encoding \\\"UTF8\\\": 0xde 0x75\n\nUgh.\n\n> The problem seems to be in the code touched in the attached patch.\n\nHmm. I'd always imagined that the way that libc works is that LC_CTYPE\ndetermines the encoding (codeset) it's using across the board, so that\nfunctions like strftime would deliver data in that encoding. That's\nmainly based on the observation that nl_langinfo(CODESET) is specified\nto depend on LC_CTYPE, and it would be monumentally stupid for any libc\nfunctions to be operating according to a codeset that there's no way to\ndiscover.\n\nHowever, your example shows that at least glibc is indeed\nmonumentally stupid about this :-(.\n\nBut ... perhaps other implementations are not so silly? I went\nlooking into the POSIX spec to see if it says anything about this,\nand discovered (in Base Definitions section 7, Locale):\n\n If different character sets are used by the locale categories, the\n results achieved by an application utilizing these categories are\n undefined. Likewise, if different codesets are used for the data being\n processed by interfaces whose behavior is dependent on the current\n locale, or the codeset is different from the codeset assumed when the\n locale was created, the result is also undefined.\n\n\"Undefined\" is a term of art here: it means the library can misbehave\narbitrarily badly, up to and including abort() or halt-and-catch-fire.\nWe do *not* want to be invoking undefined behavior, even if particular\nimplementations seem to behave sanely. Your proposed patch isn't\ngetting us out of that, and what it is doing instead is embedding an\nassumption that the implementation handles this in a particular way.\n\nSo what I'm thinking really needs to be done here is to force it to work\naccording to the LC_CTYPE-determines-the-codeset-for-everything model.\nNote that that model is embedded into PG in quite a few ways besides the\none at stake here; for instance, pg_perm_setlocale thinks it should make\ngettext track the LC_CTYPE encoding, not anything else.\n\nIf we're willing to assume a lot about how locale names are spelled,\nwe could imagine fixing this in cache_locale_time by having it strip\nany encoding spec from the given LC_TIME string and then adding on the\ncodeset name from nl_langinfo(CODESET). Not sure about how well\nthat'd play on Windows, though. We'd also need to adjust check_locale\nso that it does the same dance.\n\nBTW, it seems very likely that we have similar issues with LC_MONETARY\nand LC_NUMERIC in PGLC_localeconv(). There's an interesting Windows-only\nhack in there now that seems to be addressing more or less the same issue;\nI wonder whether that would be rendered unnecessary if we approached it\nlike this?\n\nI'm also wondering why we have not noticed any comparable problem with\nLC_MESSAGES or LC_COLLATE. It's not so surprising that we haven't\nunderstood this hazard before with LC_TIME/LC_MONETARY/LC_NUMERIC given\ntheir limited usage in PG, but the same can't be said of LC_MESSAGES or\nLC_COLLATE.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Apr 2019 12:47:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TM format can mix encodings in to_char()"
},
{
"msg_contents": "I wrote:\n> Hmm. I'd always imagined that the way that libc works is that LC_CTYPE\n> determines the encoding (codeset) it's using across the board, so that\n> functions like strftime would deliver data in that encoding.\n> [ and much more based on that ]\n\nAfter further study of the code, the situation seems less dire than\nI feared yesterday. In the first place, we disallow settings of\nLC_COLLATE and LC_CTYPE that don't match the database encoding, see\ntests in dbcommands.c's check_encoding_locale_matches() and in initdb.\nSo that core functionality will be consistent in any case.\n\nAlso, I see that PGLC_localeconv() is effectively doing exactly what\nyou suggested for strings that are encoded according to LC_MONETARY\nand LC_NUMERIC:\n\n encoding = pg_get_encoding_from_locale(locale_monetary, true);\n\n db_encoding_convert(encoding, &worklconv.int_curr_symbol);\n db_encoding_convert(encoding, &worklconv.currency_symbol);\n ...\n\nThis is a little bit off, now that I look at it, because it's\nfailing to account for the possibility of getting -1 from\npg_get_encoding_from_locale. It should probably do what\npg_bind_textdomain_codeset does:\n\n\tif (encoding < 0)\n\t\tencoding = PG_SQL_ASCII;\n\nsince passing PG_SQL_ASCII to the conversion will have the effect of\nvalidating the data without any actual conversion.\n\nI remain wary of this idea because it's depending on something that's\nundefined per POSIX, but apparently it's working well enough for\nLC_MONETARY and LC_NUMERIC, so we can probably get away with it for\nLC_TIME as well. Anyway the current code clearly does not work on\nglibc, and I also verified that there's a problem on FreeBSD, so\nthis patch should make things better.\n\nAlso, experimentation suggests that LC_MESSAGES actually does work\nthe way I thought this stuff works, ie, its implied codeset isn't\nreally used. (I think this only matters for strerror(), since we\nforce the issue for gettext, but glibc's strerror() is clearly not\npaying attention to that.) Sigh, who needs consistency?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 20 Apr 2019 11:50:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TM format can mix encodings in to_char()"
},
{
"msg_contents": "I wrote:\n> This is a little bit off, now that I look at it, because it's\n> failing to account for the possibility of getting -1 from\n> pg_get_encoding_from_locale. It should probably do what\n> pg_bind_textdomain_codeset does:\n> \tif (encoding < 0)\n> \t\tencoding = PG_SQL_ASCII;\n\nActually, the more I looked at the existing code, the less happy I got\nwith its error handling. cache_locale_time() is an absolute failure\nin that respect, because it makes no attempt at all to avoid throwing\nerrors while LC_TIME or LC_CTYPE is set to a transient value. We\ncould maybe tolerate LC_TIME being weird, but continuing with a value\nof LC_CTYPE that doesn't match the database setting would almost\ncertainly be disastrous.\n\nPGLC_localeconv had at least thought about the issue, but it ends up\nwimping out:\n\n /*\n * Report it if we failed to restore anything. Perhaps this should be\n * FATAL, rather than continuing with bad locale settings?\n */\n if (trouble)\n elog(WARNING, \"failed to restore old locale\");\n\nAnd it's also oddly willing to keep going even if it couldn't get the\noriginal setlocale settings to begin with.\n\nI think that this code was written with only LC_MONETARY and LC_NUMERIC\nin mind, for which there's at least some small argument that continuing\nwith unwanted values wouldn't be fatal (though IIRC, LC_NUMERIC would\nstill change the behavior of float8out). Since we added code to also\nchange LC_CTYPE on Windows, I think that continuing on after a restore\nfailure would be disastrous. And we've not heard any field reports\nof this warning anyway, so there's no reason to think we have to support\nthe case in practice.\n\nHence, the attached revised patch changes the code to do elog(FATAL)\nif it can't restore any locale settings to their previous values,\nand it fixes cache_locale_time to not do anything risky while it's\ngot transient locale settings in place.\n\nI propose to apply and back-patch this; the code's basically the\nsame in all supported branches.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 20 Apr 2019 14:46:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TM format can mix encodings in to_char()"
},
{
"msg_contents": "I wrote:\n> [ fix-encoding-and-error-recovery-in-cache-locale-time.patch ]\n\nOn closer inspection, I'm pretty sure either version of this patch\nwill break things on Windows, because that platform already had code\nto convert the result of wcsftime() to the database encoding; we\nwere adding code to do a second conversion, which will not go well.\n\nThe attached revised patch deletes the no-longer-necessary\nplatform-specific recoding stanza, in favor of having cache_locale_time\nknow that it's getting UTF8 rather than something else. I also\nupdated a bunch of the related comments.\n\nI don't have any way to test this on Windows, so could somebody\ndo that? Manually running the Turkish test cases ought to be enough.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 21 Apr 2019 00:28:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TM format can mix encodings in to_char()"
},
{
"msg_contents": "\nOn 4/21/19 12:28 AM, Tom Lane wrote:\n> I wrote:\n>> [ fix-encoding-and-error-recovery-in-cache-locale-time.patch ]\n> On closer inspection, I'm pretty sure either version of this patch\n> will break things on Windows, because that platform already had code\n> to convert the result of wcsftime() to the database encoding; we\n> were adding code to do a second conversion, which will not go well.\n>\n> The attached revised patch deletes the no-longer-necessary\n> platform-specific recoding stanza, in favor of having cache_locale_time\n> know that it's getting UTF8 rather than something else. I also\n> updated a bunch of the related comments.\n>\n> I don't have any way to test this on Windows, so could somebody\n> do that? Manually running the Turkish test cases ought to be enough.\n>\n> \t\t\t\n\n\nHow does one do that? Just set a Turkish locale?\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sun, 21 Apr 2019 09:25:48 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TM format can mix encodings in to_char()"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 4/21/19 12:28 AM, Tom Lane wrote:\n>> I don't have any way to test this on Windows, so could somebody\n>> do that? Manually running the Turkish test cases ought to be enough.\n\n> How does one do that? Just set a Turkish locale?\n\nTry variants of the original test case. For instance, in a UTF8\ndatabase,\n\nregression=# show server_encoding ;\n server_encoding \n-----------------\n UTF8\n(1 row)\n\nregression=# SET lc_time TO 'tr_TR.iso88599';\nSET\nregression=# SELECT to_char(date '2010-02-01', 'DD TMMON YYYY');\n to_char \n--------------\n 01 ŞUB 2010\n(1 row)\n\nUnpatched, I get an error about invalid data. Now, this is in\na Linux machine, and you'll have to adapt it for Windows --- at\nleast change the LC_TIME setting. But the idea is to get out some\nnon-ASCII strings from an LC_TIME setting that names an encoding\ndifferent from the database's.\n\n(I suspect you'll find that the existing code works fine on\nWindows, it's only the first version(s) of this patch that fail.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Apr 2019 10:21:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TM format can mix encodings in to_char()"
},
{
"msg_contents": "On Sun, Apr 21, 2019 at 6:26 AM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n> How does one do that? Just set a Turkish locale?\n\ntr_TR is, in a sense, special among locales:\n\nhttp://blog.thetaphi.de/2012/07/default-locales-default-charsets-and.html\n\nThe Turkish dotless i has apparently been implicated in all kinds of\nbugs in quite a variety of contexts.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 22 Apr 2019 11:51:48 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: TM format can mix encodings in to_char()"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Sun, Apr 21, 2019 at 6:26 AM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com> wrote:\n>> How does one do that? Just set a Turkish locale?\n\n> tr_TR is, in a sense, special among locales:\n> http://blog.thetaphi.de/2012/07/default-locales-default-charsets-and.html\n> The Turkish dotless i has apparently been implicated in all kinds of\n> bugs in quite a variety of contexts.\n\nYeah, we've had our share of those :-(. But the dotless i is not the\nproblem here --- it happens to not trigger an encoding conversion\nissue, it seems. Amusingly, the existing test case for lc_time = tr_TR\nin collate.linux.utf8.sql is specifically coded to check what happens\nwith dotted/dotless i, and yet it manages to not trip over this problem.\n(I suspect the reason is that what comes out of strftime is \"Nis\" which\nis ASCII, and the non-ASCII characters only arise from subsequent case\nconversion within PG.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Apr 2019 15:16:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TM format can mix encodings in to_char()"
},
{
"msg_contents": "Actually, I tried to show my findings with the tr_TR regression test, but\nyou\ncan reproduce the same issue with other locales and non-ASCII characters, as\nTom has pointed out.\n\nFor exampe:\n\nde_DE ISO-8859-1: March\nes_ES ISO-8859-1: Wednesday\nfr_FR ISO-8859-1: February\n\nRegards,\n\nJuan José Santamaría Flecha\n\nActually, I tried to show my findings with the tr_TR regression test, but youcan reproduce the same issue with other locales and non-ASCII characters, asTom has pointed out.For exampe:de_DE ISO-8859-1: Marches_ES ISO-8859-1: Wednesdayfr_FR ISO-8859-1: FebruaryRegards,Juan José Santamaría Flecha",
"msg_date": "Mon, 22 Apr 2019 23:41:57 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: TM format can mix encodings in to_char()"
},
{
"msg_contents": "On 4/21/19 10:21 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 4/21/19 12:28 AM, Tom Lane wrote:\n>>> I don't have any way to test this on Windows, so could somebody\n>>> do that? Manually running the Turkish test cases ought to be enough.\n>> How does one do that? Just set a Turkish locale?\n> Try variants of the original test case. For instance, in a UTF8\n> database,\n>\n> regression=# show server_encoding ;\n> server_encoding \n> -----------------\n> UTF8\n> (1 row)\n>\n> regression=# SET lc_time TO 'tr_TR.iso88599';\n> SET\n> regression=# SELECT to_char(date '2010-02-01', 'DD TMMON YYYY');\n> to_char \n> --------------\n> 01 ŞUB 2010\n> (1 row)\n>\n> Unpatched, I get an error about invalid data. Now, this is in\n> a Linux machine, and you'll have to adapt it for Windows --- at\n> least change the LC_TIME setting. But the idea is to get out some\n> non-ASCII strings from an LC_TIME setting that names an encoding\n> different from the database's.\n>\n> (I suspect you'll find that the existing code works fine on\n> Windows, it's only the first version(s) of this patch that fail.)\n>\n> \t\t\t\n\n\n\nTest above works as expected with the patch, see attached. This is from\njacana.\n\n\nLMK if you want more tests run before I blow the test instance away\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 23 Apr 2019 18:07:25 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TM format can mix encodings in to_char()"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> Test above works as expected with the patch, see attached. This is from\n> jacana.\n\nGreat, thanks for checking!\n\n> LMK if you want more tests run before I blow the test instance away\n\nCan't think of anything else.\n\nIt'd be nice if we could cover stuff like this in the regression tests,\nbut I'm not sure how, seeing that the locale names are platform-dependent\nand the overall behavior will also depend on the database encoding ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Apr 2019 18:10:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TM format can mix encodings in to_char()"
},
{
"msg_contents": "It looks as if no work is left for this patch, so maybe updating the author to Tom Lane (I'm just a repoter at this point, which it's fine) and the status to ready for committer would better reflect its current status. Does anyone think otherwise?\r\n\r\nRegards,\r\n\r\nJuan José Santamaría Flecha",
"msg_date": "Sat, 29 Jun 2019 06:21:37 +0000",
"msg_from": "Juanjo Santamaria Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TM format can mix encodings in to_char()"
},
{
"msg_contents": "Juanjo Santamaria Flecha <juanjo.santamaria@gmail.com> writes:\n> It looks as if no work is left for this patch, so maybe updating the author to Tom Lane (I'm just a repoter at this point, which it's fine) and the status to ready for committer would better reflect its current status. Does anyone think otherwise?\n\nYeah, this was dealt with in 7ad1cd31b et al. I didn't realize there\nwas a CF entry for it, or I would have closed it then. I've done so now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 29 Jun 2019 09:58:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TM format can mix encodings in to_char()"
}
] |
[
{
"msg_contents": "Hi All,\n\nWe build Postgres on Power and x86 With the latest Postgres 11 release (11.2) we get error on\npower8 ppc64le (Redhat and CentOS). No error on SUSE on power8\n\nNo error on x86_64 (RH, Centos and SUSE)\n\nfrom the log file\n2019-04-09 12:30:10 UTC pid:203 xid:0 ip: LOG: listening on IPv4 address \"0.0.0.0\", port 5432\n2019-04-09 12:30:10 UTC pid:203 xid:0 ip: LOG: listening on IPv6 address \"::\", port 5432\n2019-04-09 12:30:10 UTC pid:203 xid:0 ip: LOG: listening on Unix socket \"/tmp/.s.PGSQL.5432\"\n2019-04-09 12:30:10 UTC pid:204 xid:0 ip: LOG: database system was shut down at 2019-04-09 12:27:09 UTC\n2019-04-09 12:30:10 UTC pid:203 xid:0 ip: LOG: database system is ready to accept connections\n2019-04-09 12:31:46 UTC pid:203 xid:0 ip: LOG: received SIGHUP, reloading configuration files\n2019-04-09 12:35:10 UTC pid:205 xid:0 ip: PANIC: could not flush dirty data: Operation not permitted\n2019-04-09 12:35:10 UTC pid:203 xid:0 ip: LOG: checkpointer process (PID 205) was terminated by signal 6: Aborted\n2019-04-09 12:35:10 UTC pid:203 xid:0 ip: LOG: terminating any other active server processes\n2019-04-09 12:35:10 UTC pid:208 xid:0 ip: WARNING: terminating connection because of crash of another server process\n2019-04-09 12:35:10 UTC pid:208 xid:0 ip: DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n2019-04-09 12:35:10 UTC pid:208 xid:0 ip: HINT: In a moment you should be able to reconnect to the database and repeat your command.\n2019-04-09 12:35:10 UTC pid:203 xid:0 ip: LOG: all server processes terminated; reinitializing\n2019-04-09 12:35:10 UTC pid:224 xid:0 ip: LOG: database system was interrupted; last known up at 2019-04-09 12:30:10 UTC\n2019-04-09 12:35:10 UTC pid:224 xid:0 ip: PANIC: could not flush dirty data: Operation not permitted\n2019-04-09 12:35:10 UTC pid:203 xid:0 ip: LOG: 
startup process (PID 224) was terminated by signal 6: Aborted\n2019-04-09 12:35:10 UTC pid:203 xid:0 ip: LOG: aborting startup due to startup process failure\n2019-04-09 12:35:10 UTC pid:203 xid:0 ip: LOG: database system is shut down\n\nfrom pg_config\n\npg_config output\n\nBINDIR = /usr/local/postgres/11/bin\nDOCDIR = /usr/local/postgres/11/share/doc\nHTMLDIR = /usr/local/postgres/11/share/doc\nINCLUDEDIR = /usr/local/postgres/11/include\nPKGINCLUDEDIR = /usr/local/postgres/11/include\nINCLUDEDIR-SERVER = /usr/local/postgres/11/include/server\nLIBDIR = /usr/local/postgres/11/lib\nPKGLIBDIR = /usr/local/postgres/11/lib\nLOCALEDIR = /usr/local/postgres/11/share/locale\nMANDIR = /usr/local/postgres/11/share/man\nSHAREDIR = /usr/local/postgres/11/share\nSYSCONFDIR = /usr/local/postgres/etc\nPGXS = /usr/local/postgres/11/lib/pgxs/src/makefiles/pgxs.mk\nCONFIGURE = '--with-tclconfig=/usr/lib64' '--with-perl' '--with-python' '--with-tcl' '--with-openssl' '--with-pam' '--with-gssapi' '--enable-nls' '--with-libxml' '--with-libxslt' '--with-ldap' '--prefix=/usr/local/postgres/11' 'CFLAGS=-O3 -g -pipe -Wall -D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -m64 -mcpu=power8 -mtune=power8 -DLINUX_OOM_SCORE_ADJ=0' '--with-libs=/usr/lib' '--with-includes=/usr/include' '--with-uuid=e2fs' '--sysconfdir=/usr/local/postgres/etc' '--with-llvm' 'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'\nCC = gcc\nCPPFLAGS = -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/include\nCFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O3 -g -pipe -Wall -D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -m64 -mcpu=power8 -mtune=power8 -DLINUX_OOM_SCORE_ADJ=0\nCFLAGS_SL = -fPIC\nLDFLAGS = -L/usr/local/lib -L/usr/lib -Wl,--as-needed 
-Wl,-rpath,'/usr/local/postgres/11/lib',--enable-new-dtags\nLDFLAGS_EX =\nLDFLAGS_SL =\nLIBS = -lpgcommon -lpgport -lpthread -lxslt -lxml2 -lpam -lssl -lcrypto -lgssapi_krb5 -lz -lreadline -lrt -lcrypt -ldl -lm\nVERSION = PostgreSQL 11.2\n\nI get the feeling this is related to the fsync() issue.\nWhy is it happening on Power RH and CentOS, but not on the other platforms?\n\nLet me know if I need to provide any more information.\n\nReiner",
"msg_date": "Fri, 12 Apr 2019 20:04:00 +0200",
"msg_from": "reiner peterke <zedaardv@gmail.com>",
"msg_from_op": true,
"msg_subject": "PANIC: could not flush dirty data: Operation not permitted power8,\n Redhat Centos"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-12 20:04:00 +0200, reiner peterke wrote:\n> We build Postgres on Power and x86 With the latest Postgres 11 release (11.2) we get error on\n> power8 ppc64le (Redhat and CentOS). No error on SUSE on power8\n\n> 2019-04-09 12:30:10 UTC pid:203 xid:0 ip: LOG: listening on IPv4 address \"0.0.0.0\", port 5432\n> 2019-04-09 12:30:10 UTC pid:203 xid:0 ip: LOG: listening on IPv6 address \"::\", port 5432\n> 2019-04-09 12:30:10 UTC pid:203 xid:0 ip: LOG: listening on Unix socket \"/tmp/.s.PGSQL.5432\"\n> 2019-04-09 12:30:10 UTC pid:204 xid:0 ip: LOG: database system was shut down at 2019-04-09 12:27:09 UTC\n> 2019-04-09 12:30:10 UTC pid:203 xid:0 ip: LOG: database system is ready to accept connections\n> 2019-04-09 12:31:46 UTC pid:203 xid:0 ip: LOG: received SIGHUP, reloading configuration files\n> 2019-04-09 12:35:10 UTC pid:205 xid:0 ip: PANIC: could not flush dirty data: Operation not permitted\n> 2019-04-09 12:35:10 UTC pid:203 xid:0 ip: LOG: checkpointer process (PID 205) was terminated by signal 6: Aborted\n\nAny chance you can strace this? Because I don't understand how you'd get\na permission error here.\n\n\n> I get the feeling this is related to the fsync() issue.\n> why is it happening on Power RH and CentOS, but not on the other platforms?\n\nYea, the PANIC is due to various OSs, including linux, basically feeling\nfree to discard any diryt data after any integrity related calls fail\n(we could narrow it down, but it's hard, given the variability between\nversions). That is, if they signal such issues at all :(\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 12 Apr 2019 12:22:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: could not flush dirty data: Operation not permitted\n power8, Redhat Centos"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-12 20:04:00 +0200, reiner peterke wrote:\n>> We build Postgres on Power and x86 With the latest Postgres 11 release (11.2) we get error on\n>> power8 ppc64le (Redhat and CentOS). No error on SUSE on power8\n\n> Any chance you can strace this? Because I don't understand how you'd get\n> a permission error here.\n\nWhat kind of filesystem are the database files on?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Apr 2019 15:33:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: could not flush dirty data: Operation not permitted\n power8, Redhat Centos"
},
{
"msg_contents": "On Sat, Apr 13, 2019 at 7:23 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-04-12 20:04:00 +0200, reiner peterke wrote:\n> > We build Postgres on Power and x86 With the latest Postgres 11 release (11.2) we get error on\n> > power8 ppc64le (Redhat and CentOS). No error on SUSE on power8\n\nHuh, I wonder what is different. I don't see this on EDB's CentOS\n7.1 POWER8 system with an XFS filesystem. I ran it under strace -f\nand saw this:\n\n[pid 51614] sync_file_range2(0x19, 0x2, 0x8000, 0x2000, 0x2, 0x8) = 0\n\n> > 2019-04-09 12:30:10 UTC pid:203 xid:0 ip: LOG: listening on IPv4 address \"0.0.0.0\", port 5432\n> > 2019-04-09 12:30:10 UTC pid:203 xid:0 ip: LOG: listening on IPv6 address \"::\", port 5432\n> > 2019-04-09 12:30:10 UTC pid:203 xid:0 ip: LOG: listening on Unix socket \"/tmp/.s.PGSQL.5432\"\n> > 2019-04-09 12:30:10 UTC pid:204 xid:0 ip: LOG: database system was shut down at 2019-04-09 12:27:09 UTC\n> > 2019-04-09 12:30:10 UTC pid:203 xid:0 ip: LOG: database system is ready to accept connections\n> > 2019-04-09 12:31:46 UTC pid:203 xid:0 ip: LOG: received SIGHUP, reloading configuration files\n> > 2019-04-09 12:35:10 UTC pid:205 xid:0 ip: PANIC: could not flush dirty data: Operation not permitted\n> > 2019-04-09 12:35:10 UTC pid:203 xid:0 ip: LOG: checkpointer process (PID 205) was terminated by signal 6: Aborted\n>\n> Any chance you can strace this? Because I don't understand how you'd get\n> a permission error here.\n\nMe neither. I hacked my tree so that it would use the msync() version\ninstead of the sync_file_range() version but that worked too.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sat, 13 Apr 2019 09:16:25 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: could not flush dirty data: Operation not permitted\n power8, Redhat Centos"
},
{
"msg_contents": "\n\nsent by smoke signals at great danger to my self. \n\n> On 12 Apr 2019, at 23:16, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n>> On Sat, Apr 13, 2019 at 7:23 AM Andres Freund <andres@anarazel.de> wrote:\n>>> On 2019-04-12 20:04:00 +0200, reiner peterke wrote:\n>>> We build Postgres on Power and x86 With the latest Postgres 11 release (11.2) we get error on\n>>> power8 ppc64le (Redhat and CentOS). No error on SUSE on power8\n> \n> Huh, I wonder what is different. I don't see this on EDB's CentOS\n> 7.1 POWER8 system with an XFS filesystem. I ran it under strace -f\n> and saw this:\n> \n> [pid 51614] sync_file_range2(0x19, 0x2, 0x8000, 0x2000, 0x2, 0x8) = 0\n> \n>>> 2019-04-09 12:30:10 UTC pid:203 xid:0 ip: LOG: listening on IPv4 address \"0.0.0.0\", port 5432\n>>> 2019-04-09 12:30:10 UTC pid:203 xid:0 ip: LOG: listening on IPv6 address \"::\", port 5432\n>>> 2019-04-09 12:30:10 UTC pid:203 xid:0 ip: LOG: listening on Unix socket \"/tmp/.s.PGSQL.5432\"\n>>> 2019-04-09 12:30:10 UTC pid:204 xid:0 ip: LOG: database system was shut down at 2019-04-09 12:27:09 UTC\n>>> 2019-04-09 12:30:10 UTC pid:203 xid:0 ip: LOG: database system is ready to accept connections\n>>> 2019-04-09 12:31:46 UTC pid:203 xid:0 ip: LOG: received SIGHUP, reloading configuration files\n>>> 2019-04-09 12:35:10 UTC pid:205 xid:0 ip: PANIC: could not flush dirty data: Operation not permitted\n>>> 2019-04-09 12:35:10 UTC pid:203 xid:0 ip: LOG: checkpointer process (PID 205) was terminated by signal 6: Aborted\n>> \n>> Any chance you can strace this? Because I don't understand how you'd get\n>> a permission error here.\n> \n> Me neither. I hacked my tree so that it would use the msync() version\n> instead of the sync_file_range() version but that worked too.\n> \n> -- \n> Thomas Munro\n> https://enterprisedb.com\n\nI forgot to mention that this is happening in a docker container. \nI want to test it on a VM to see if it is container related. 
I am sick at the moment, so I'm unable to do the test right now. \n\nReiner\n",
"msg_date": "Mon, 15 Apr 2019 09:57:43 +0200",
"msg_from": "zedaardv@gmail.com",
"msg_from_op": false,
"msg_subject": "Re: PANIC: could not flush dirty data: Operation not permitted\n power8, Redhat Centos"
},
{
"msg_contents": "On Fri, Apr 12, 2019 at 08:04:00PM +0200, reiner peterke wrote:\n> We build Postgres on Power and x86 With the latest Postgres 11 release (11.2) we get error on\n> power8 ppc64le (Redhat and CentOS). No error on SUSE on power8\n> \n> No error on x86_64 (RH, Centos and SUSE)\n\nSo there's an error on power8 with RH but not SUSE.\n\nWhat kernel versions are used for each of the successful and not successful ?\n\nJustin\n\n\n",
"msg_date": "Mon, 15 Apr 2019 07:44:18 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: could not flush dirty data: Operation not permitted\n power8, Redhat Centos"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 7:57 PM <zedaardv@gmail.com> wrote:\n> I forgot to mention that this is happening in a docker container.\n\nHuh, so there may be some configuration of Linux container that can\nfail here with EPERM, even though that error that does not appear in\nthe man page, and doesn't make much intuitive sense. Would be good to\nfigure out how that happens.\n\nIf we could somehow confirm* that sync_file_range() with the\nnon-waiting flags we are using is non-destructive of error state, as\nAndres speculated (that is, it cannot eat the only error report we're\never going to get to tell us that buffered dirty data may have been\ndropped), then I suppose we could just remove the data_sync_elevel()\npromotion here. As with the WSL case (before the PANIC commit and the\nsubsequent don't-repeat-the-warning-forever patch), a user of this\nposited EPERM-generating container configuration would then get\nrepeated warnings in the log forever (as they presumably did before).\nRepeated WARNING messages are probably OK here, I think... I mean, if,\nsay, someone complains that FlubOS's Linux emulation fails here with\nEIEIO, I'd say they should put up with the warnings and complain over\non the flub-hackers list, or whatever, and I'd say the same for\ncontainers that generate EPERM: either the man page or the containter\ntechnology needs work.\n\nBut... I still think we should try to avoid making decisions based on\nknowledge of kernel implementation details, if it can be avoided. I'd\nprobably rather treat EPERM explicitly differently (and eventually\nEIEIO too, if a report comes in) than drop the current paranoid coding\ncompletely.\n\n*I'm not looking at it myself. A sync_file_range() implementation is\non my list of potential FreeBSD projects for a rainy day, so I don't\nwant to study anything but the man page, even if it's wrong.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Apr 2019 13:04:13 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: could not flush dirty data: Operation not permitted\n power8, Redhat Centos"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 1:04 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Apr 15, 2019 at 7:57 PM <zedaardv@gmail.com> wrote:\n> > I forgot to mention that this is happening in a docker container.\n>\n> Huh, so there may be some configuration of Linux container that can\n> fail here with EPERM, even though that error that does not appear in\n> the man page, and doesn't make much intuitive sense. Would be good to\n> figure out how that happens.\n\nSteve Dodd ran into the same problem in Borg[1]. It looks like what's\nhappening here is that on PowerPC and ARM systems, there is a second\nsystem call sync_file_range2 that has the arguments arranged in a\nbetter order for their calling conventions (see Notes section of man\nsync_file_range), and glibc helpfully translates for you, but some\ncontainer technologies forgot to include sync_file_range2 in their\nsyscall forwarding table. Perhaps we should just handle this with the\nnot_implemented_by_kernel mechanism I added for WSL.\n\n[1] https://lists.freedesktop.org/archives/systemd-devel/2019-August/043276.html\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Aug 2019 07:32:25 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: could not flush dirty data: Operation not permitted\n power8, Redhat Centos"
},
{
"msg_contents": "On Mon, Aug 19, 2019 at 7:32 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Apr 17, 2019 at 1:04 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Mon, Apr 15, 2019 at 7:57 PM <zedaardv@gmail.com> wrote:\n> > > I forgot to mention that this is happening in a docker container.\n> >\n> > Huh, so there may be some configuration of Linux container that can\n> > fail here with EPERM, even though that error that does not appear in\n> > the man page, and doesn't make much intuitive sense. Would be good to\n> > figure out how that happens.\n>\n> Steve Dodd ran into the same problem in Borg[1]. It looks like what's\n> happening here is that on PowerPC and ARM systems, there is a second\n> system call sync_file_range2 that has the arguments arranged in a\n> better order for their calling conventions (see Notes section of man\n> sync_file_range), and glibc helpfully translates for you, but some\n> container technologies forgot to include sync_file_range2 in their\n> syscall forwarding table. Perhaps we should just handle this with the\n> not_implemented_by_kernel mechanism I added for WSL.\n\nI've just heard that it was fixed overnight in seccomp, which is\nprobably what Docker is using to give you EPERM for syscalls it\ndoesn't like the look of:\n\nhttps://github.com/systemd/systemd/pull/13352/commits/90ddac6087b5f8f3736364cfdf698e713f7e8869\n\nNot being a Docker user, I'm sure if/when that will flow into the\nright places in a timely fashion but if not it looks like you can\nalways configure your own profile or take one from somewhere else,\nprobably something like this:\n\nhttps://github.com/moby/moby/commit/52d8f582c331e35f7b841171a1c22e2d9bbfd0b8\n\nSo it looks like we don't need to do anything at all on our side,\nunless someone knows better.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Aug 2019 10:53:07 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: could not flush dirty data: Operation not permitted\n power8, Redhat Centos"
}
] |
[
{
"msg_contents": "While looking at the pending patch to clean up management of\nrd_partcheck, I noticed that RelationCacheInitializePhase3 has code that\npurports to reload rd_partkey and rd_partdesc, but none for rd_partcheck.\nHowever, that reload code is dead code, as is easily confirmed by\nchecking the code coverage report, because we have no partitioned system\ncatalogs.\n\nMoreover, if somebody tried to add such a catalog, I'd bet a good deal\nof money that this code would not work. It seems highly unlikely that\nwe could run RelationBuildPartitionKey or RelationBuildPartitionDesc\nsuccessfully when we haven't even finished bootstrapping the relcache.\n\nI don't think that this foolishness is entirely the fault of the\npartitioning work; it's evidently modeled on the adjacent code to reload\nrules, triggers, and row security code. But that code is all equally\ndead, equally unlikely to work if somebody tried to invoke it, and\nequally likely to be forever unused because there are many other\nproblems you'd have to surmount to support something like triggers or\nrow security on system catalogs.\n\nI'm inclined to remove almost everything below the comment\n\"Fix data that isn't saved in relcache cache file\", and replace\nit with either assertions or test-and-elogs that say that a\nrelation appearing in the cache file can't have triggers or rules\nor row security or be partitioned.\n\nI am less sure about whether the table-access-method stanza is as silly\nas the rest, but I do see that it's unreached in current testing.\nSo I wonder whether there is any thought that we'd realistically support\ncatalogs with nondefault AMs, and if there is, does anyone think that\nthis code would work?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Apr 2019 14:17:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Useless code in RelationCacheInitializePhase3"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-12 14:17:11 -0400, Tom Lane wrote:\n> While looking at the pending patch to clean up management of\n> rd_partcheck, I noticed that RelationCacheInitializePhase3 has code that\n> purports to reload rd_partkey and rd_partdesc, but none for rd_partcheck.\n> However, that reload code is dead code, as is easily confirmed by\n> checking the code coverage report, because we have no partitioned system\n> catalogs.\n> \n> Moreover, if somebody tried to add such a catalog, I'd bet a good deal\n> of money that this code would not work. It seems highly unlikely that\n> we could run RelationBuildPartitionKey or RelationBuildPartitionDesc\n> successfully when we haven't even finished bootstrapping the relcache.\n\nBut it sure would be nice if we made it work at some point. Having\ne.g. global, permanent + unlogged, and temporary tables attributes in a\nseparate pg_attribute would be quite an advantage (and much easier than\na separate pg_class). Obviously even that is *far* from trivial.\n\n\n> I don't think that this foolishness is entirely the fault of the\n> partitioning work; it's evidently modeled on the adjacent code to reload\n> rules, triggers, and row security code. But that code is all equally\n> dead, equally unlikely to work if somebody tried to invoke it, and\n> equally likely to be forever unused because there are many other\n> problems you'd have to surmount to support something like triggers or\n> row security on system catalogs.\n\nI don't see us wanting to go to supporting triggers, but I could see us\ndesiring RLS at some point. 
To hide rows a user doesn't have access to.\n\n\n> I am less sure about whether the table-access-method stanza is as silly\n> as the rest, but I do see that it's unreached in current testing.\n> So I wonder whether there is any thought that we'd realistically support\n> catalogs with nondefault AMs, and if there is, does anyone think that\n> this code would work?\n\nRight now it definitely won't work, most importantly because there's a\nfair bit of catalog related code that triggers direct\nheap_insert/update/delete, and expects systable_getnext() to not need\nmemory to allocate the result in the current context (hence the\n!shouldFree assert) and just generally because a lot of places just\nstraight up assume the catalog is heap.\n\nMost of that would be fairly easy to fix however. A lot of rote work,\nbut technically not hard. The hardest is probably a bunch of code that\nuses xmin for cache validation and such, but that seems solvable.\n\nI don't quite know however how we'd use the ability to technically be\nable to have a different AM for catalog tables. One possible thing would\nbe using different builtin AMs for different catalog tables, that seems\nlike it'd not be too hard. But after that it gets harder - e.g. doing\nan initdb with a different default AM sounds not impossible, but also\nfar from easy (we can't do pg_proc lookups before having initialized it,\nwhich is why formrdesc hardcodes GetHeapamTableAmRoutine()). And having\ndifferent AMs per database seems even harder.\n\nI think it probably would work for catalog tables, as it's coded right\nnow. There are no catalog lookups in RelationInitTableAccessMethod() for\ntables that return true for IsCatalogTable(). 
In fact, I think we should\napply something like:\n\ndiff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c\nindex 64f3c2e8870..7ff64b108c4 100644\n--- a/src/backend/utils/cache/relcache.c\n+++ b/src/backend/utils/cache/relcache.c\n@@ -1746,6 +1746,7 @@ RelationInitTableAccessMethod(Relation relation)\n * seem prudent to show that in the catalog. So just overwrite it\n * here.\n */\n+ Assert(relation->rd_rel->relam == InvalidOid);\n relation->rd_amhandler = HEAP_TABLE_AM_HANDLER_OID;\n }\n else if (IsCatalogRelation(relation))\n@@ -1935,8 +1936,7 @@ formrdesc(const char *relationName, Oid relationReltype,\n /*\n * initialize the table am handler\n */\n- relation->rd_rel->relam = HEAP_TABLE_AM_OID;\n- relation->rd_tableam = GetHeapamTableAmRoutine();\n+ RelationInitTableAccessMethod(relation);\n \n /*\n * initialize the rel-has-index flag, using hardwired knowledge\n\nTo a) ensure that that is and stays the case b) avoid having the\nnecessary information in multiple places. Not sure why we not ended up\ndoing the thing in the second hunk earlier. Just using\nRelationInitTableAccessMethod() seems cleaner to me.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 12 Apr 2019 13:13:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Useless code in RelationCacheInitializePhase3"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-12 14:17:11 -0400, Tom Lane wrote:\n>> While looking at the pending patch to clean up management of\n>> rd_partcheck, I noticed that RelationCacheInitializePhase3 has code that\n>> purports to reload rd_partkey and rd_partdesc, but none for rd_partcheck.\n>> However, that reload code is dead code, as is easily confirmed by\n>> checking the code coverage report, because we have no partitioned system\n>> catalogs.\n>> \n>> Moreover, if somebody tried to add such a catalog, I'd bet a good deal\n>> of money that this code would not work. It seems highly unlikely that\n>> we could run RelationBuildPartitionKey or RelationBuildPartitionDesc\n>> successfully when we haven't even finished bootstrapping the relcache.\n\n> But it sure would be nice if we made it work at some point.\n\nWhether it would be nice or not is irrelevant to my point: this code\ndoesn't work, and it's unlikely that it would ever be part of a working\nsolution. I don't think there's any way that it'd be sane to attempt\ncatalog accesses during RelationCacheInitializePhase3. If we want any\nof these features for system catalogs, I think the route to a real fix\nwould be to make them load-on-demand data so that they can be fetched\nlater on. Or, possibly, the easiest way is to include these data\nstructures in the dumped cache file. But what's here is a dead end.\nI'd even call it an attractive nuisance, because it encourages people\nto add yet more nonfunctional code, rather than pointing them in the\ndirection of doing something useful.\n\n>> I am less sure about whether the table-access-method stanza is as silly\n>> as the rest, but I do see that it's unreached in current testing.\n>> So I wonder whether there is any thought that we'd realistically support\n>> catalogs with nondefault AMs, and if there is, does anyone think that\n>> this code would work?\n\n> Right now it definitely won't work,\n\nSure, I wasn't expecting that. 
The question is the same as above:\nis it plausible that this code would appear in this form in a complete\nworking implementation? If not, I think we should rip it out rather\nthan leave the impression that we think it does something useful.\n\n> I think it probably would work for catalog tables, as it's coded right\n> now. There's no catalog lookups RelationInitTableAccessMethod() for\n> tables that return true for IsCatalogTable(). In fact, I think we should\n> apply something like:\n\nMakes sense, and I'd also add some comments pointing out that there had\nbetter not be any catalog lookups when this is called for a system\ncatalog.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Apr 2019 10:49:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Useless code in RelationCacheInitializePhase3"
},
{
"msg_contents": "I wrote:\n> Whether it would be nice or not is irrelevant to my point: this code\n> doesn't work, and it's unlikely that it would ever be part of a working\n> solution. I don't think there's any way that it'd be sane to attempt\n> catalog accesses during RelationCacheInitializePhase3.\n\nBTW, to clarify that: obviously, this loop *does* access pg_class, and\npg_class's indexes too. The issue here is that if any of these other\nstanzas ever really executed, we would be doing accesses to a bunch of\nother catalogs as well, meaning that their relcache entries would have to\nalready exist in a state valid enough to permit access. That would mean\nthat they'd have to be treated as bootstrap catalogs so that we could\ncreate hardwired entries with formrdesc. That's not a direction I want\nto go in. Bootstrap catalogs are a huge pain to maintain; we don't want\nany more than the absolute minimum of them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Apr 2019 11:09:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Useless code in RelationCacheInitializePhase3"
},
{
"msg_contents": "Hello,\n\nOn 2019-Apr-13, Tom Lane wrote:\n\n> Andres Freund <andres@anarazel.de> writes:\n\n> > I think it probably would work for catalog tables, as it's coded right\n> > now. There's no catalog lookups RelationInitTableAccessMethod() for\n> > tables that return true for IsCatalogTable(). In fact, I think we should\n> > apply something like:\n> \n> Makes sense, and I'd also add some comments pointing out that there had\n> better not be any catalog lookups when this is called for a system\n> catalog.\n\nI think this was forgotten ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 10 Sep 2019 13:34:17 -0300",
"msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Useless code in RelationCacheInitializePhase3"
}
] |
[
{
"msg_contents": "Hello devs,\n\nThe attached patch implements a new SHOW_ALL_RESULTS option for psql, \nwhich shows all results of a combined query (\\;) instead of only the last \none.\n\nThis solves a frustration that intermediate results were hidden from view \nfor no good reason that I could think of.\n\nFor that, call PQsendQuery instead of (mostly not documented) PQexec, and \nrework how results are processed afterwards.\n\nTiming is moved to ProcessQueryResults to keep the last result handling \nout of the measured time. I think it would not be a big deal to include \nit, but this is the previous behavior.\n\nIn passing, refactor a little and add comments. Make function names about \nresults plural or singular consistently with the fact that it processes one \nor several results. Change \"PrintQueryResult\" to \"HandleQueryResult\" \nbecause it was not always printing something. Also add a HandleCopyResult \nfunction, which makes the patch a little bigger by moving things around \nbut clarifies the code.\n\nCode in \"common.c\" is actually a little shorter than the previous version. \nFrom my point of view the code is clearer than before because there is \nonly one loop over results, not an implicit one within PQexec and another \none afterwards to handle copy.\n\nAdd a few tests for the new feature.\n\nIMHO this new setting should be on by default: few people know about \\; so \nit would not change anything for most, and I do not see why those who use \nit would not be interested in the results of all the queries they asked \nfor.\n\n-- \nFabien.",
"msg_date": "Sat, 13 Apr 2019 22:37:37 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hi Fabien, \n\nI reviewed your patch. \n\n> Add a few tests for the new feature.\n+++ b/src/test/regress/expected/psql.out\n@@ -4729,3 +4729,46 @@ drop schema testpart;\n set search_path to default;\n set role to default;\n drop role testrole_partitioning;\n+-- \nThere is space (+--' '). Please delete it. It is the cause of the regression test failure.\n\n> IMHO this new setting should be on by default: few people know about \\; so\n> it would not change anything for most, and I do not see why those who use\n> it would not be interested by the results of all the queries they asked for.\nI agree with your opinion.\n\nI tested some query combination cases. And I found that when a warning happens, the message is printed at the head of the results. I think it is not clear in which query the warning occurred.\nHow about printing the warning message before the query in which the warning occurred?\n\nFor example,\n-- divide by ';'\npostgres=# BEGIN; BEGIN; SELECT 1 AS one; COMMIT; BEGIN; BEGIN; SELECT 1 AS one; COMMIT;\nBEGIN\npsql: WARNING: there is already a transaction in progress\nBEGIN\n one \n-----\n 1\n(1 row)\n\nCOMMIT\nBEGIN\npsql: WARNING: there is already a transaction in progress\nBEGIN\n one \n-----\n 1\n(1 row)\n\nCOMMIT\n\n\n-- divide by '\\;' and set SHOW_ALL_RESULTS on\npostgres=# \\set SHOW_ALL_RESULTS on\npostgres=# BEGIN\\; BEGIN\\; SELECT 1 AS one\\; COMMIT\\; BEGIN\\; BEGIN\\; SELECT 1 AS one\\; COMMIT;\npsql: WARNING: there is already a transaction in progress\nBEGIN\nBEGIN\n one \n-----\n 1\n(1 row)\n\npsql: WARNING: there is already a transaction in progress\nCOMMIT\nBEGIN\nBEGIN\n one \n-----\n 1\n(1 row)\n\nCOMMIT\n\nI will check the code soon.\n\nRegards, \nAya Iwata\n\n\n\n",
"msg_date": "Wed, 24 Apr 2019 02:34:39 +0000",
"msg_from": "\"Iwata, Aya\" <iwata.aya@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hello Aya-san,\n\nThanks for this review.\n\n> There is space (+--' '). Please delete it. It is cause of regression test failed.\n\nIndeed, unsure how I could do that. Fixed.\n\n>> IMHO this new setting should be on by default: few people know about \\; so\n>> it would not change anything for most, and I do not see why those who use\n>> it would not be interested by the results of all the queries they asked for.\n> I agree with your opinion.\n\nOk. I did not yet change the default in the attached version, though.\n\n> I test some query combination case. And I found when warning happen, the \n> message is printed in head of results. I think it is not clear in which \n> query the warning occurred.\n\nIndeed.\n\n> How about print warning message before the query that warning occurred?\n\nSure. It happened to be trickier than I thought to achieve this, because \nthere is a callback hook to send notifications.\n\nThis attached version does:\n - ensure that warnings appear just before its\n - add the entry in psql's help\n - redefine the function boundary so that timing is cleaner\n - include somehow improved tests\n\n-- \nFabien.",
"msg_date": "Fri, 26 Apr 2019 18:38:47 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "RE: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\tFabien COELHO wrote:\n\n> >> IMHO this new setting should be on by default: few people know about \\; so\n> >> it would not change anything for most, and I do not see why those who use\n> >> it would not be interested by the results of all the queries they asked for.\n> > I agree with your opinion.\n> \n> Ok. I did not yet change the default in the attached version, though.\n\nI'd go further and suggest that there shouldn't be a variable\ncontrolling this. All results that come in should be processed, period.\nIt's not just about \\; If the ability of CALL to produce multiple\nresultsets gets implemented (it was posted as a POC during v11\ndevelopment), this will be needed too.\n\n> This attached version does:\n> - ensure that warnings appear just before its\n> - add the entry in psql's help\n> - redefine the function boundary so that timing is cleaner\n> - include somehow improved tests\n\n\\errverbose seems to no longer work with the patch:\n\ntest=> select 1/0;\npsql: ERROR: division by zero\n\ntest=> \\errverbose\nThere is no previous error.\n\nas opposed to this output with PG11:\n\ntest=> \\errverbose\nERROR:\t22012: division by zero\nLOCATION: int4div, int.c:820\n\n\\errverbose has probably no regression tests because its output\nincludes these ever-changing line numbers; hence `make check`\ncannot be used to find this regression.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Wed, 15 May 2019 18:41:32 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "RE: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\nBonjour Daniel,\n\n>>>> IMHO this new setting should be on by default: few people know about \\; so\n>>>> it would not change anything for most, and I do not see why those who use\n>>>> it would not be interested by the results of all the queries they asked for.\n>>> I agree with your opinion.\n>>\n>> Ok. I did not yet change the default in the attached version, though.\n>\n> I'd go further and suggest that there shouldn't be a variable\n> controlling this. All results that come in should be processed, period.\n> It's not just about \\; If the ability of CALL to produce multiple\n> resultsets gets implemented (it was posted as a POC during v11\n> development), this will be needed too.\n\nI do agree, but I'm afraid that if there is no opt-out it could be seen as \na regression by some.\n\n>> This attached version does:\n>> - ensure that warnings appear just before its\n>> - add the entry in psql's help\n>> - redefine the function boundary so that timing is cleaner\n>> - include somehow improved tests\n>\n> \\errverbose seems to no longer work with the patch:\n>\n> test=> select 1/0;\n> psql: ERROR: division by zero\n>\n> test=> \\errverbose\n> There is no previous error.\n>\n> as opposed to this output with PG11:\n>\n> test=> \\errverbose\n> ERROR:\t22012: division by zero\n> LOCATION: int4div, int.c:820\n\nThanks for the catch. I'll investigate.\n\n> \\errverbose has probably no regression tests because its output includes \n> these ever-changing line numbers; hence `make check` cannot be used to \n> find this regression.\n\nWhat is not tested does not work:-( The TAP infrastructure for psql \nincluded in some patch (https://commitfest.postgresql.org/23/2100/ I \nguess) would help testing such slightly varying features which cannot be \ntested with a hardcoded reference text.\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 16 May 2019 10:43:08 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "RE: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Re-bonjour Daniel,\n\n>> This attached version does:\n>> - ensure that warnings appear just before its\n>> - add the entry in psql's help\n>> - redefine the function boundary so that timing is cleaner\n>> - include somehow improved tests\n>\n> \\errverbose seems to no longer work with the patch:\n\nHere is a v3 which fixes \\errverbose, hopefully.\n\nThe feature is still an option which is not enabled by default.\n\n-- \nFabien.",
"msg_date": "Thu, 16 May 2019 17:15:49 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "RE: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "> Here is a v3 which fixes \\errverbose, hopefully.\n\nV5 is a rebase and slightly improved documentation.\n\n-- \nFabien.",
"msg_date": "Thu, 23 May 2019 17:11:20 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "RE: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": ">> Here is a v3 which fixes \\errverbose, hopefully.\n>\n> V5 is a rebase and an slightly improved documentation.\n\nIt was really v4. v5 is a rebase.\n\n-- \nFabien.",
"msg_date": "Wed, 26 Jun 2019 22:27:28 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "RE: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 2019-05-15 18:41, Daniel Verite wrote:\n> I'd go further and suggest that there shouldn't be a variable\n> controlling this. All results that come in should be processed, period.\n\nI agree with that.\n\n> It's not just about \\; If the ability of CALL to produce multiple\n> resultsets gets implemented (it was posted as a POC during v11\n> development), this will be needed too.\n\nSee previous patch here:\nhttps://www.postgresql.org/message-id/flat/4580ff7b-d610-eaeb-e06f-4d686896b93b%402ndquadrant.com\n\nIn that patch, I discussed the specific ways in which \\timing works in\npsql and how that conflicts with multiple result sets. What is the\nsolution to that in this patch?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 22 Jul 2019 13:35:45 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\nHello Peter,\n\n>> I'd go further and suggest that there shouldn't be a variable\n>> controlling this. All results that come in should be processed, period.\n>\n> I agree with that.\n\nI kind of agree as well, but I was pretty sure that someone would complain \nif the current behavior was changed.\n\nShould I produce a patch where the behavior is not an option, or turn the \noption on by default, or just keep it like that for the time being?\n\n>> It's not just about \\; If the ability of CALL to produce multiple\n>> resultsets gets implemented (it was posted as a POC during v11\n>> development), this will be needed too.\n>\n> See previous patch here:\n> https://www.postgresql.org/message-id/flat/4580ff7b-d610-eaeb-e06f-4d686896b93b%402ndquadrant.com\n>\n> In that patch, I discussed the specific ways in which \\timing works in\n> psql and how that conflicts with multiple result sets. What is the\n> solution to that in this patch?\n\n\\timing was kind of an ugly feature to work around. The good intention \nbehind \\timing is that it should reflect the time to perform the query \nfrom the client perspective, but should not include processing the \nresults.\n\nHowever, if a message results in several queries, they are processed as \nthey arrive, so that \\timing reports the time to perform all queries and \nthe time to process all but the last result.\n\nAlthough on paper we could try to get all results first, take the time, \nthen process them, this does not work in the general case because COPY \ntakes over the connection so you have to process its result before switching \nto the next result.\n\nThere is also some stuff to handle notices which are basically sent as \nevents when they occur, so that the notices shown are related to the \nresult being processed.\n\n-- \nFabien.",
"msg_date": "Mon, 22 Jul 2019 13:33:55 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\tFabien COELHO wrote:\n\n> >> I'd go further and suggest that there shouldn't be a variable\n> >> controlling this. All results that come in should be processed, period.\n> >\n> > I agree with that.\n> \n> I kind of agree as well, but I was pretty sure that someone would complain \n> if the current behavior was changed.\n\nIf queries in a compound statement must be kept silent,\nthey can be converted to CTEs or DO-blocks to produce the\nsame behavior without having to configure anything in psql.\nThat cost on users doesn't seem too bad, compared to introducing\na knob in psql, and presumably maintaining it forever.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Wed, 24 Jul 2019 14:59:05 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Bonjour Daniel,\n\n>> I kind of agree as well, but I was pretty sure that someone would complain\n>> if the current behavior was changed.\n>\n> If queries in a compound statement must be kept silent,\n> they can be converted to CTEs or DO-blocks to produce the\n> same behavior without having to configure anything in psql.\n> That cost on users doesn't seem too bad, compared to introducing\n> a knob in psql, and presumably maintaining it forever.\n\nOk.\n\nAttached a \"do it always\" version, which does the necessary refactoring. \nThere is little new code; it is rather moved around, and some functions are \ncreated for clarity.\n\n-- \nFabien.",
"msg_date": "Wed, 24 Jul 2019 19:41:24 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\tFabien COELHO wrote:\n\n> Attached a \"do it always version\", which does the necessary refactoring. \n> There is seldom new code, it is rather moved around, some functions are \n> created for clarity.\n\nThanks for the update!\nFYI you forgot to remove that bit:\n\n--- a/src/bin/psql/tab-complete.c\n+++ b/src/bin/psql/tab-complete.c\n@@ -3737,7 +3737,7 @@ psql_completion(const char *text, int start, int end)\n\telse if (TailMatchesCS(\"\\\\set\", MatchAny))\n\t{\n\t\tif (TailMatchesCS(\"AUTOCOMMIT|ON_ERROR_STOP|QUIET|\"\n-\t\t\t\t\t\t \"SINGLELINE|SINGLESTEP\"))\n+\t\t\t\t\t\t \n\"SINGLELINE|SINGLESTEP|SHOW_ALL_RESULTS\"))\n\nAlso copydml does not seem to be exercised with combined\nqueries, so do we need this chunk:\n\n--- a/src/test/regress/sql/copydml.sql\n+++ b/src/test/regress/sql/copydml.sql\n@@ -70,10 +70,10 @@ drop rule qqq on copydml_test;\n create function qqq_trig() returns trigger as $$\n begin\n if tg_op in ('INSERT', 'UPDATE') then\n- raise notice '% %', tg_op, new.id;\n+ raise notice '% % %', tg_when, tg_op, new.id;\n return new;\n else\n- raise notice '% %', tg_op, old.id;\n+ raise notice '% % %', tg_when, tg_op, old.id;\n return old;\n end if;\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Thu, 25 Jul 2019 23:02:28 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Bonsoir Daniel,\n\n> FYI you forgot to remove that bit:\n>\n> + \"SINGLELINE|SINGLESTEP|SHOW_ALL_RESULTS\"))\n\nIndeed. I found another such instance in \"help.c\".\n\n> Also copydml does not seem to be exercised with combined\n> queries, so do we need this chunk:\n\n> --- a/src/test/regress/sql/copydml.sql\n\nYep, because I reorganized the notice code significantly, and I wanted to \nbe sure that the right notices are displayed in the right order, which \ndoes not show if the trigger just says \"NOTICE: UPDATE 8\".\n\nAttached a v2 for the always-show-all-results variant. Thanks for the \ndebug!\n\n-- \nFabien.",
"msg_date": "Thu, 25 Jul 2019 21:42:11 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hello.\n\nAt Thu, 25 Jul 2019 21:42:11 +0000 (GMT), Fabien COELHO <coelho@cri.ensmp.fr> wrote in <alpine.DEB.2.21.1907252135060.21130@lancre>\n> \n> Bonsoir Daniel,\n> \n> > FYI you forgot to remove that bit:\n> >\n> > + \"SINGLELINE|SINGLESTEP|SHOW_ALL_RESULTS\"))\n> \n> Indeed. I found another such instance in \"help.c\".\n> \n> > Also copydml does not seem to be exercised with combined\n> > queries, so do we need this chunk:\n> \n> > --- a/src/test/regress/sql/copydml.sql\n> \n> Yep, because I reorganized the notice code significantly, and I wanted\n> to be sure that the right notices are displayed in the right order,\n> which does not show if the trigger just says \"NOTICE: UPDATE 8\".\n> \n> Attached a v2 for the always-show-all-results variant. Thanks for the\n> debug!\n\nI have some comments on this patch.\n\nI'm +1 for always outputting all results, without having knobs.\n\nDocumentation (psql-ref.sgml) has another place that needs the\nsame amendment.\n\nLooking at the output for -t, -0, -A or something like that, we might need\nto introduce a result-set separator.\n\n# -eH looks broken for me but it would be another issue.\n\nA valid setting of FETCH_COUNT disables this feature. I think it is\nunwanted behavior.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 26 Jul 2019 13:17:04 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hello Kyotaro-san,\n\n>> Attached a v2 for the always-show-all-results variant. Thanks for the \n>> debug!\n>\n> I have some comments on this patch.\n>\n> I'm +1 for always output all results without having knobs.\n\nThat makes 4 opinions expressed towards this change of behavior, and none \nagainst.\n\n> Documentation (psql-ref.sgml) has another place that needs the\n> same amendment.\n\nIndeed.\n\n> Looking the output for -t, -0, -A or something like, we might need\n> to introduce result-set separator.\n\nYep, possibly. I'm not sure this is material for this patch, though.\n\n> # -eH looks broken for me but it would be another issue.\n\nIt seems to work for me. Could you be more precise about how it is broken?\n\n> Valid setting of FETCH_COUNT disables this feature. I think it is\n> unwanted behavior.\n\nYes and no: this behavior (bug, really) is pre-existing, FETCH_COUNT does \nnot work with combined queries:\n\n sh> /usr/bin/psql\n psql (12beta2 ...)\n fabien=# \\set FETCH_COUNT 2\n fabien=# SELECT 1234 \\; SELECT 5432 ;\n fabien=#\n\n same thing with pg 11.4, and probably down to every version of postgres\n since the feature was implemented...\n\nI think that fixing this should be a separate bug report and patch. I'll \ntry to look at it.\n\nThanks for the feedback. Attached v3 with further documentation updates.\n\n-- \nFabien.",
"msg_date": "Fri, 26 Jul 2019 08:19:47 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\tFabien COELHO wrote:\n\n> sh> /usr/bin/psql\n> psql (12beta2 ...)\n> fabien=# \\set FETCH_COUNT 2\n> fabien=# SELECT 1234 \\; SELECT 5432 ;\n> fabien=#\n> \n> same thing with pg 11.4, and probably down to every version of postgres\n> since the feature was implemented...\n> \n> I think that fixing this should be a separate bug report and patch. I'll \n> try to look at it.\n\nThat reminds me that it was already discussed in [1]. I should add the\nproposed fix to the next commitfest.\n\n\n[1]\nhttps://www.postgresql.org/message-id/flat/a0a854b6-563c-4a11-bf1c-d6c6f924004d%40manitou-mail.org\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Fri, 26 Jul 2019 14:12:39 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "> Thanks for the feedback. Attached v3 with further documentation updates.\n\nAttached v4 also fixes pg_stat_statements non regression tests, per pg \npatch tester travis run.\n\n-- \nFabien.",
"msg_date": "Sun, 28 Jul 2019 23:36:22 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hello, Fabien.\n\nAt Fri, 26 Jul 2019 08:19:47 +0000 (GMT), Fabien COELHO <coelho@cri.ensmp.fr> wrote in <alpine.DEB.2.21.1907260738240.13195@lancre>\n> \n> Hello Kyotaro-san,\n> \n> >> Attached a v2 for the always-show-all-results variant. Thanks for the\n> >> debug!\n> >\n> > I have some comments on this patch.\n> >\n> > I'm +1 for always output all results without having knobs.\n> \n> That makes 4 opinions expressed towards this change of behavior, and\n> none against.\n> \n> > Documentation (psql-ref.sgml) has another place that needs the\n> > same amendment.\n> \n> Indeed.\n> \n> > Looking the output for -t, -0, -A or something like, we might need\n> > to introduce result-set separator.\n> \n> Yep, possibly. I'm not sure this is material for this patch, though.\n\nI'm fine with that.\n\n> > # -eH looks broken for me but it would be another issue.\n> \n> It seems to work for me. Could you be more precise about how it is\n> broken?\n\nIt emits the bare command string before the HTML result. It's not caused\nby this patch.\n\n\n> > Valid setting of FETCH_COUNT disables this feature. I think it is\n> > unwanted behavior.\n> \n> Yes and no: this behavior (bug, really) is pre-existing, FETCH_COUNT\n> does not work with combined queries:\n> \n> sh> /usr/bin/psql\n> psql (12beta2 ...)\n> fabien=# \\set FETCH_COUNT 2\n> fabien=# SELECT 1234 \\; SELECT 5432 ;\n> fabien=#\n> \n> same thing with pg 11.4, and probably down to every version of\n> postgres\n> since the feature was implemented...\n> \n> I think that fixing this should be a separate bug report and\n> patch. I'll try to look at it.\n\nAh, I didn't notice that. Thanks for the explanation.\n\n> Thanks for the feedback. Attached v3 with further documentation\n> updates.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 29 Jul 2019 11:59:38 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hello.\n\nOn 2019/07/29 6:36, Fabien COELHO wrote:> \n>> Thanks for the feedback. Attached v3 with further documentation updates.\n> \n> Attached v4 also fixes pg_stat_statements non regression tests, per pg \n> patch tester travis run.\n\nThanks. I looked at this more closely.\n\n\n+ * Marshal the COPY data. Either subroutine will get the\n+ * connection out of its COPY state, then call PQresultStatus()\n+ * once and report any error.\n\nThis comment doesn't explain what the result value means.\n\n+ * When our command string contained a COPY FROM STDIN or COPY TO STDOUT,\n+ * the PGresult associated with these commands must be processed. In that\n+ * event, we'll marshal data for the COPY.\n\nI think this is not needed. This phrase was needed to explain why\nwe need to loop over subsequent results after PQexec in the\ncurrent code, but in this patch PQsendQuery is used instead,\nwhich doesn't suffer from the somewhat confusing behavior. All results are\nhandled without needing any unusual processing.\n\n+ * Update result if further processing is necessary. (Returning NULL\n+ * prevents the command status from being printed, which we want in that\n+ * case so that the status line doesn't get taken as part of the COPY data.)\n\nIt seems that the purpose of the returned PGresult is only\nto print the status of this COPY. If that is true, I'd like to see\nsomething like the following example.\n\n| Returns result in the case where queryFout is safe to output\n| result status. That is, in the case of COPY IN, or in the case\n| where COPY OUT is written to other than pset.queryFout.\n\n\n+ if (!AcceptResult(result, false))\n+ {\n+ /* some error occured, record that */\n+ ShowNoticeMessage(&notes);\n\nThe comment in the original code was:\n\n- /*\n- * Failure at this point is always a server-side failure or a\n- * failure to submit the command string. Either way, we're\n- * finished with this command string.\n- */\n\nThe first half of the comment seems to be true for this\npatch. Don't we preserve that comment?\n\n\n+ success = handleCopyOut(pset.db,\n+ copystream,\n+ &copy_result)\n+ && success\n+ && (copystream != NULL);\n\nsuccess is always true at this point so \"&& success\" is no longer\nuseful. (The same is true for the COPY IN case.)\n\n\n+ /* must handle COPY before changing the current result */\n+ result_status = PQresultStatus(result);\n+ if (result_status == PGRES_COPY_IN ||\n+ result_status == PGRES_COPY_OUT)\n\nI didn't get \"before changing the current result\" in the\ncomment. Isn't \"handle COPY stream if any\" enough?\n\n+ if (result_status == PGRES_COPY_IN ||\n+ result_status == PGRES_COPY_OUT)\n+ {\n+ ShowNoticeMessage(&notes);\n+ HandleCopyResult(&result);\n+ }\n\nIt seems wrong that this ignores the return value of\nHandleCopyResult().\n\n\n+ /* timing measure before printing the last result */\n+ if (last && pset.timing)\n\nI'm not sure whether we reached any consensus on this\nbehavior. This means the timing includes the result-printing time of\nevery result other than the last one. If we don't want to include printing time\nat all, we can exclude it with a small amount of additional\ncomplexity.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 29 Jul 2019 14:55:48 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hello Kyotaro-san,\n\n> Thanks. I looked this more closely.\n\nIndeed! Thanks for this detailed review.\n\n> + * Marshal the COPY data. Either subroutine will get the\n> + * connection out of its COPY state, then call PQresultStatus()\n> + * once and report any error.\n>\n> This comment doesn't explain what the result value means.\n\nOk, added.\n\n> + * When our command string contained a COPY FROM STDIN or COPY TO STDOUT,\n> + * the PGresult associated with these commands must be processed. In that\n> + * event, we'll marshal data for the COPY.\n>\n> I think this is not needed. This phrase was needed to explain why\n> we need to loop over subsequent results after PQexec in the\n> current code, but in this patch PQsendQuery is used instead,\n> which doesn't suffer from that somewhat confusing behavior. All results are\n> handled without needing any unusual processing.\n\nHmmm. More or less. \"COPY\" commands have two results, one for taking over \nthe input or output streams more or less directly at the protocol level, \nand one for the final summary, which is quite special compared to other \ncommands, all of that managed in \"copy.c\". So ISTM that the comment is \nsomehow still appropriate.\n\nThe difference from the previous behavior is that other results could be \ndiscarded immediately but these ones could not, while now they are all \nprocessed.\n\nI've kept this comment and added another one to try to make that clear.\n\n> + * Update result if further processing is necessary. (Returning NULL\n> + * prevents the command status from being printed, which we want in that\n> + * case so that the status line doesn't get taken as part of the COPY data.)\n>\n> It seems that the purpose of the returned PGresult is only\n> printing the status of this COPY. If that is true, I'd like to see\n> something like the following example.\n>\n> | Returns result in the case where queryFout is safe to output\n> | result status. That is, in the case of COPY IN, or in the case\n> | where COPY OUT is written to other than pset.queryFout.\n\nI have tried to improve the comment based on your suggestion.\n\n> + if (!AcceptResult(result, false))\n> + {\n> + /* some error occurred, record that */\n> + ShowNoticeMessage(&notes);\n>\n> The comment in the original code was:\n>\n> - /*\n> - * Failure at this point is always a server-side failure or a\n> - * failure to submit the command string. Either way, we're\n> - * finished with this command string.\n> - */\n>\n> The first half of the comment seems to be true for this\n> patch. Don't we preserve that comment?\n\nOk. Some form put back.\n\n> + success = handleCopyOut(pset.db,\n> + copystream,\n> + &copy_result)\n> + && success\n> + && (copystream != NULL);\n>\n> success is always true at this point, so \"&& success\" is no longer\n> useful.\n\nOk.\n\n> (It is the same for the COPY IN case.)\n\nOk.\n\n> + /* must handle COPY before changing the current result */\n> + result_status = PQresultStatus(result);\n> + if (result_status == PGRES_COPY_IN ||\n> + result_status == PGRES_COPY_OUT)\n>\n> I didn't get \"before changing the current result\" in the comment. Isn't \n> \"handle COPY stream if any\" enough?\n\nAlas, I think not.\n\nThe issue is that I need to know whether this is the last result (eg \\gset \napplies only to the last result), so I'll call PQgetResult() to get that.\n\nHowever, on COPY, this is the second \"final\" result which says how much \nwas copied. If I have not sent/received the data, the count will not be \nright.\n\n> + if (result_status == PGRES_COPY_IN ||\n> + result_status == PGRES_COPY_OUT)\n> + {\n> + ShowNoticeMessage(&notes);\n> + HandleCopyResult(&result);\n> + }\n>\n> It seems wrong that this ignores the return value of\n> HandleCopyResult().\n\nYep. Fixed.\n\n> + /* timing measure before printing the last result */\n> + if (last && pset.timing)\n>\n> I'm not sure whether we reached any consensus on this\n> behavior. This means the timing includes the result-printing time of\n> every result other than the last one. If we don't want to include printing time\n> at all, we can exclude it with a small amount of additional\n> complexity.\n\nI think that this point is hopeless, because of the way timing is \nimplemented client-side.\n\nAlthough we could try to stop timing before each result processing, it \nwould not prevent the server from going on with other queries and sending back \nresults, nor psql from receiving further results (next_result), so the final \nfigures would be stupid anyway, just another form of stupid.\n\nBasically the approach cannot work with combined queries: it only worked \nbefore because the intermediate results were coldly discarded.\n\nMaybe the server could report its execution times for each query somehow, but \nthen the communication time would not be included.\n\nI have added a comment about why timing does not make much sense with \ncombined queries.\n\nAttached a v5.\n\n-- \nFabien.",
"msg_date": "Mon, 29 Jul 2019 23:44:43 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "This v6 is just Fabien's v5, rebased over a very minor conflict, and\npgindented. No further changes. I've marked this Ready for Committer.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 12 Sep 2019 16:31:11 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 1:01 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> This v6 is just Fabien's v5, rebased over a very minor conflict, and\n> pgindented. No further changes. I've marked this Ready for Committer.\n>\nShould we add function header for the below function to maintain the\ncommon standard of this file:\n+\n+static void\n+AppendNoticeMessage(void *arg, const char *msg)\n+{\n+ t_notice_messages *notes = (t_notice_messages *) arg;\n+\n+ appendPQExpBufferStr(notes->in_flip ? &notes->flip : &notes->flop, msg);\n+}\n+\n+static void\n+ShowNoticeMessage(t_notice_messages *notes)\n+{\n+ PQExpBufferData *current = notes->in_flip ? &notes->flip : &notes->flop;\n+\n+ if (current->data != NULL && *current->data != '\\0')\n+ pg_log_info(\"%s\", current->data);\n+ resetPQExpBuffer(current);\n+}\n+\n+/*\n+ * SendQueryAndProcessResults: utility function for use by SendQuery() only\n+ *\n\n+static void\n+ShowErrorMessage(const PGresult *result)\n+{\n+ const char *error = PQerrorMessage(pset.db);\n+\n+ if (strlen(error))\n+ pg_log_info(\"%s\", error);\n+\n+ CheckConnection();\n+}\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 20 Sep 2019 09:23:26 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "> Should we add function header for the below function to maintain the\n> common standard of this file:\n\nYes. Attached v6 does that.\n\n-- \nFabien.",
"msg_date": "Fri, 20 Sep 2019 10:10:58 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 1:41 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> > Should we add function header for the below function to maintain the\n> > common standard of this file:\n>\n> Yes. Attached v6 does that.\n>\nThanks for fixing it.\n\nThe below addition can be removed, it seems to be a duplicate:\n@@ -3734,6 +3734,11 @@ listTables(const char *tabtypes, const char\n*pattern, bool verbose, bool showSys\n translate_columns[cols_so_far] = true;\n }\n\n+ /*\n+ * We don't bother to count cols_so_far below here, as there's no need\n+ * to; this might change with future additions to the output columns.\n+ */\n+\n /*\n * We don't bother to count cols_so_far below here, as there's no need\n * to; this might change with future additions to the output columns.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 20 Sep 2019 14:33:55 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "> The below addition can be removed, it seems to be a duplicate:\n\nIndeed. I'm unsure how this got into the patch, probably some rebase \nmix-up. Attached v7 removes the duplicates.\n\n-- \nFabien.",
"msg_date": "Fri, 20 Sep 2019 16:25:40 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": ">> The below addition can be removed, it seems to be a duplicate:\n>\n> Indeed. I'm unsure how this got into the patch, probably some rebase mix-up. \n> Attached v7 removes the duplicates.\n\nAttached patch v8 is a rebase.\n\n\n-- \nFabien.",
"msg_date": "Mon, 2 Dec 2019 11:09:02 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hi,\n\nThis is one of the patches already marked as RFC (since September by\nAlvaro). Anyone interested in actually pushing it, so that it does not\nfall through to yet another commitfest?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 16 Jan 2020 18:48:21 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> This is one of the patches already marked as RFC (since September by\n> Alvaro). Anyone interested in actually pushing it, so that it does not\n> fall through to yet another commitfest?\n\nTBH, I think we'd be better off to reject it. This makes a nontrivial\nchange in a very long-standing psql behavior, with AFAICS no way to\nget back the old semantics. (The thread title is completely misleading\nabout that; there's no \"option\" in the patch as it stands.) Sure,\nin a green field this behavior would likely be more sensible ... but\nthat has to be weighed against the fact that it's behaved the way it\ndoes for a long time, and any existing scripts that are affected by\nthat behavior have presumably deliberately chosen to use it.\n\nI can't imagine that changing this will make very many people happier.\nIt seems much more likely that people who are affected will be unhappy.\n\nThe compatibility issue could be resolved by putting in the option\nthat I suppose was there at the beginning. But then we'd have to\nhave a debate about which behavior would be default, and there would\nstill be the question of who would find this to be an improvement.\nIf you're chaining together commands with \\; then it's likely that\nyou are happy with the way it behaves today. Certainly there's been\nno drumbeat of bug reports about it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Jan 2020 13:08:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 01:08:16PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> This is one of the patches already marked as RFC (since September by\n>> Alvaro). Anyone interested in actually pushing it, so that it does not\n>> fall through to yet another commitfest?\n>\n>TBH, I think we'd be better off to reject it. This makes a nontrivial\n>change in a very long-standing psql behavior, with AFAICS no way to\n>get back the old semantics. (The thread title is completely misleading\n>about that; there's no \"option\" in the patch as it stands.) Sure,\n>in a green field this behavior would likely be more sensible ... but\n>that has to be weighed against the fact that it's behaved the way it\n>does for a long time, and any existing scripts that are affected by\n>that behavior have presumably deliberately chosen to use it.\n>\n>I can't imagine that changing this will make very many people happier.\n>It seems much more likely that people who are affected will be unhappy.\n>\n>The compatibility issue could be resolved by putting in the option\n>that I suppose was there at the beginning. But then we'd have to\n>have a debate about which behavior would be default, and there would\n>still be the question of who would find this to be an improvement.\n>If you're chaining together commands with \\; then it's likely that\n>you are happy with the way it behaves today. Certainly there's been\n>no drumbeat of bug reports about it.\n>\n\nI don't know, really, I only pinged this as a CFM who sees a patch\nmarked as RFC for months ...\n\nThe current behavior certainly seems strange/wrong to me - if I send\nmultiple queries to psql, I'd certainly expect results for all of them,\nnot just the last one. So the current behavior seems pretty surprising.\n\nI'm unable to make any judgments about risks/benefits of this change. I\ncan't imagine anyone intentionally relying on the current behavior, so\nI'd say the patch is unlikely to break anything (which is not already\nbroken). But I don't have any data to support this ...\n\nEssentially, I'm just advocating to make a decision - we should either\ncommit or reject the patch, not just move it to the next commitfest over\nand over.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 16 Jan 2020 21:53:07 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
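To make the behavior being debated concrete, here is roughly what the difference looks like in a session; the output below is reconstructed and abbreviated from the thread's description, not copied from a real run:

```
-- historical behavior: only the last result of a compound statement is shown
postgres=# SELECT 1 AS a \; SELECT 2 AS b;
 b
---
 2
(1 row)

-- with the patch (SHOW_ALL_RESULTS on): every result is shown
postgres=# SELECT 1 AS a \; SELECT 2 AS b;
 a
---
 1
(1 row)

 b
---
 2
(1 row)
```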
{
"msg_contents": "On 2020-Jan-16, Tom Lane wrote:\n\n> The compatibility issue could be resolved by putting in the option\n> that I suppose was there at the beginning. But then we'd have to\n> have a debate about which behavior would be default, and there would\n> still be the question of who would find this to be an improvement.\n> If you're chaining together commands with \\; then it's likely that\n> you are happy with the way it behaves today. Certainly there's been\n> no drumbeat of bug reports about it.\n\nThe patch originally submitted did indeed have the option (defaulting to\n\"off\", that is, the original behavior), and it was removed at request of\nreviewers Daniel Vérité, Peter Eisentraut and Kyotaro Horiguchi.\n\nMy own opinion is that any scripts that rely heavily on the current\nbehavior are stepping on shaky ground anyway. I'm not saying we should\nbreak them on every chance we get -- just that keeping them unharmed is\nnot necessarily a priority, and that if this patch enables other psql\nfeatures, it might be a good step forward.\n\nMy own vote would be to use the initial patch (after applying any\nunrelated changes per later review), ie. add the feature with its\ndisable button.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 16 Jan 2020 18:05:52 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\nHello Tom,\n\n>> This is one of the patches already marked as RFC (since September by\n>> Alvaro). Anyone interested in actually pushing it, so that it does not\n>> fall through to yet another commitfest?\n>\n> TBH, I think we'd be better off to reject it. This makes a nontrivial\n> change in a very long-standing psql behavior, with AFAICS no way to\n> get back the old semantics. (The thread title is completely misleading\n> about that; there's no \"option\" in the patch as it stands.)\n\nThe thread title was not misleading: the initial version of the patch did \noffer an option. Then I was told \"the current behavior is stupid (with which I \nagree), let us change it to the sane behavior without an option\", and now I'm \ntold the contrary. Sigh.\n\nI still have the patch with the option, though.\n\n> Sure, in a green field this behavior would likely be more sensible ... \n> but that has to be weighed against the fact that it's behaved the way it \n> does for a long time, and any existing scripts that are affected by that \n> behavior have presumably deliberately chosen to use it.\n\nI cannot imagine many people actually relying on the current insane \nbehavior.\n\n> I can't imagine that changing this will make very many people happier.\n> It seems much more likely that people who are affected will be unhappy.\n>\n> The compatibility issue could be resolved by putting in the option\n> that I suppose was there at the beginning.\n\nIndeed.\n\n> But then we'd have to have a debate about which behavior would be \n> default,\n\nThe patch kept the current behavior as the default because people do \nnot like change, whatever it is.\n\n> and there would still be the question of who would find this to \n> be an improvement. If you're chaining together commands with \\; then \n> it's likely that you are happy with the way it behaves today. \n> Certainly there's been no drumbeat of bug reports about it.\n\nWhy would there be bug reports if this is a feature? :-)\n\nThe behavior has been irritating me for a long time. It is plain stupid to \nbe able to send queries but not see their results.\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 16 Jan 2020 22:33:48 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\n> My own vote would be to use the initial patch (after applying any\n> unrelated changes per later review), ie. add the feature with its\n> disable button.\n\nI can do that, but not if there is a veto from Tom on the feature.\n\nI wish definite negative opinions by senior committers would be expressed \nearlier, so that people do not spend time reviewing dead code and \ndeveloping even deader code.\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 16 Jan 2020 22:36:42 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\tAlvaro Herrera wrote:\n\n> if this patch enables other psql features, it might be a good step\n> forward.\n\nYes. For instance if the stored procedures support gets improved to\nproduce several result sets, how is psql going to benefit from it\nwhile sticking to the old way (PGresult *r = PQexec(query))\nof executing queries that discards N-1 out of N result sets?\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Fri, 17 Jan 2020 19:48:00 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\"Daniel Verite\" <daniel@manitou-mail.org> writes:\n> Yes. For instance if the stored procedures support gets improved to\n> produce several result sets, how is psql going to benefit from it\n> while sticking to the old way (PGresult *r = PQexec(query))\n> of executing queries that discards N-1 out of N result sets?\n\nI'm not really holding my breath for that to happen, considering\nit would involve fundamental breakage of the wire protocol.\n(For example, extended query protocol assumes that Describe\nPortal only needs to describe one result set. There might be\nmore issues, but that one's bad enough.)\n\nWhen and if we break all the things that would break, it'd be\ntime enough for incompatible changes in psql's behavior.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Jan 2020 14:10:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\tTom Lane wrote:\n\n> I'm not really holding my breath for that to happen, considering\n> it would involve fundamental breakage of the wire protocol.\n> (For example, extended query protocol assumes that Describe\n> Portal only needs to describe one result set. There might be\n> more issues, but that one's bad enough.)\n\nI'm not sure that CALL can be used at all with the extended protocol\ntoday (just like multiple queries per statement, except that for these,\nI'm sure).\nMy interpretation is that the extended protocol deliberately\nleaves out the possibility of multiple result sets because it doesn't fit\nwith how it's designed, and if you want to have this, you can just use\nthe old protocol's Query message. There is no need to break anything\nor invent anything; on the contrary, one can just use the older way.\n\nConsidering these 3 ways to use libpq to send queries:\n\n1. using old protocol with PQexec: only one resultset.\n\n2. using old protocol with PQsendQuery+looping on PQgetResult:\nsame as #1 except multiple result sets can be processed\n\n3. using extended protocol: not for multiple result sets, not for copy,\npossibly not for other things, but can use bind parameters, binary format,\npipelining,...\n\nThe current patch is about using #2 instead of #1.\nThere have been patches about doing bits of #3 in some cases\n(binary output, maybe parameters too?) and none eventually got in.\nISTM that the current situation is that psql has been stuck at #1 forever,\nso it's not fully using the capabilities of the protocol, both old and new.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Fri, 17 Jan 2020 21:39:07 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "This patch was marked as ready for committer, but clearly there's an\nongoing discussion about what should be the default behavior, if this\nbreaks existing apps etc. So I've marked it as \"needs review\" and moved\nit to the next CF.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 1 Feb 2020 13:01:36 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\nHello Tomas,\n\n> This patch was marked as ready for committer, but clearly there's an\n> ongoing discussion about what should be the default behavior, if this\n> breaks existing apps etc. So I've marked it as \"needs review\" and moved\n> it to the next CF.\n\nThe issue is that root (aka Tom) seems to be against the feature, and \nwould like to keep the current behavior. Although my opinion is that the \nprevious behavior is close to insane, I'm ready to resurrect the guc to \ncontrol the behavior so that it would be possible, or even the default.\n\nRight now I'm waiting for an \"I will not veto it on principle\" from Tom \n(I'm okay with a reject based on bad implementation) before spending more \ntime on it: although my time is given for free, it is not a good reason to \nsend it down the drain if there is a reject coming whatever I do.\n\nTom, would you consider the feature acceptable with a guc to control it?\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 2 Feb 2020 09:16:29 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hello,\n\n>> This patch was marked as ready for committer, but clearly there's an\n>> ongoing discussion about what should be the default behavior, if this\n>> breaks existing apps etc. So I've marked it as \"needs review\" and moved\n>> it to the next CF.\n>\n> The issue is that root (aka Tom) seems to be against the feature, and would \n> like to keep the current behavior. Although my opinion is that the previous \n> behavior is close to insane, I'm ready to resurrect the guc to control the \n> behavior so that it would be possible, or even the default.\n>\n> Right now I'm waiting for an \"I will not veto it on principle\" from Tom (I'm \n> okay with a reject based on bad implementation) before spending more time on \n> it: although my time is given for free, it is not a good reason to send it \n> down the drain if there is a reject coming whatever I do.\n>\n> Tom, would you consider the feature acceptable with a guc to control it?\n\nTom, I would appreciate it if you could answer this question.\n\nFor me, the current behavior is both stupid and irritating: why would pg \ndecide to drop the result of a query that I carefully typed? It makes the \nmulti-query feature basically useless from psql, so I did not resurrect \nthe guc, but I will if it would remove a veto.\n\nIn the meantime, here is a v9 which also fixes the behavior when using \n\\watch, so that now one can issue several \\;-separated queries and have \ntheir progress shown. I just needed that a few days ago and was \ndisappointed but unsurprised that it did not work.\n\nWatch does not seem to be tested anywhere, I kept it that way. Sigh.\n\n-- \nFabien.",
"msg_date": "Sat, 6 Jun 2020 16:36:41 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On Sun, Jun 7, 2020 at 2:36 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> In the meantime, here is a v9 which also fixes the behavior when using\n> \\watch, so that now one can issue several \\;-separated queries and have\n> their progress shown. I just needed that a few days ago and was\n> disappointed but unsurprised that it did not work.\n\nHi Fabien,\n\nThis seems to break the 013_crash_restart.pl test.\n\n\n",
"msg_date": "Mon, 20 Jul 2020 16:57:34 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\n>> In the meantime, here is a v9 which also fixes the behavior when using\n>> \\watch, so that now one can issue several \\;-separated queries and have\n>> their progress shown. I just needed that a few days ago and was\n>> disappointed but unsurprised that it did not work.\n>\n> This seems to break the 013_crash_restart.pl test.\n\nYes, indeed. I'm planning to investigate, hopefully this week.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 20 Jul 2020 07:48:42 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On Mon, Jul 20, 2020 at 07:48:42AM +0200, Fabien COELHO wrote:\n> Yes, indeed. I'm planning to investigate, hopefully this week.\n\nThis reply was two months ago, and nothing has happened, so I have\nmarked the patch as RwF.\n--\nMichael",
"msg_date": "Wed, 30 Sep 2020 15:21:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 30.09.20 08:21, Michael Paquier wrote:\n> On Mon, Jul 20, 2020 at 07:48:42AM +0200, Fabien COELHO wrote:\n>> Yes, indeed. I'm planning to investigate, hopefully this week.\n> \n> This reply was two months ago, and nothing has happened, so I have\n> marked the patch as RwF.\n\nGiven the ongoing work on returning multiple result sets from stored \nprocedures[0], I went to dust off this patch.\n\nBased on the feedback, I put back the titular SHOW_ALL_RESULTS option, \nbut set the default to on. I fixed the test failure in \n013_crash_restart.pl. I also trimmed back the test changes a bit so \nthat the resulting test output changes are visible better. (We could \nmake those stylistic changes separately in another patch.) I'll put \nthis back into the commitfest for another look.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/6e747f98-835f-2e05-cde5-86ee444a7140@2ndquadrant.com",
"msg_date": "Sat, 27 Feb 2021 16:19:30 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hello Peter,\n\n>> This reply was two months ago, and nothing has happened, so I have\n>> marked the patch as RwF.\n>\n> Given the ongoing work on returning multiple result sets from stored \n> procedures[0], I went to dust off this patch.\n>\n> Based on the feedback, I put back the titular SHOW_ALL_RESULTS option, \n> but set the default to on. I fixed the test failure in \n> 013_crash_restart.pl. I also trimmed back the test changes a bit so \n> that the resulting test output changes are visible better. (We could \n> make those stylistic changes separately in another patch.) I'll put \n> this back into the commitfest for another look.\n\nThanks a lot for the fixes and pushing it forward!\n\nMy 0.02€: I tested this updated version and do not have any comment on \nthis version. From my point of view it could be committed. I would not \nbother to separate the test style adjustments.\n\n-- \nFabien.",
"msg_date": "Sun, 14 Mar 2021 10:54:28 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 14.03.21 10:54, Fabien COELHO wrote:\n> \n> Hello Peter,\n> \n>>> This reply was two months ago, and nothing has happened, so I have\n>>> marked the patch as RwF.\n>>\n>> Given the ongoing work on returning multiple result sets from stored \n>> procedures[0], I went to dust off this patch.\n>>\n>> Based on the feedback, I put back the titular SHOW_ALL_RESULTS option, \n>> but set the default to on. I fixed the test failure in \n>> 013_crash_restart.pl. I also trimmed back the test changes a bit so \n>> that the resulting test output changes are visible better. (We could \n>> make those stylistic changes separately in another patch.) I'll put \n>> this back into the commitfest for another look.\n> \n> Thanks a lot for the fixes and pushing it forward!\n> \n> My 0.02€: I tested this updated version and do not have any comment on \n> this version. From my point of view it could be committed. I would not \n> bother to separate the test style adjustments.\n\nCommitted. The last thing I fixed was the diff in the copy2.out \nregression test. The order of the notices with respect to the error \nmessages was wrong. I fixed that by switching back to the regular \nnotice processor during COPY handling.\n\nBtw., not sure if that was mentioned before, but a cool use of this is \nto \\watch multiple queries at once, like\n\nselect current_date \\; select current_time \\watch\n\n\n",
"msg_date": "Tue, 6 Apr 2021 17:35:20 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hello Peter,\n\n>> My 0.02€: I tested this updated version and do not have any comment on this \n>> version. From my point of view it could be committed. I would not bother to \n>> separate the test style adjustments.\n>\n> Committed. The last thing I fixed was the diff in the copy2.out regression \n> test. The order of the notices with respect to the error messages was wrong. \n> I fixed that by switching back to the regular notice processor during COPY \n> handling.\n>\n> Btw., not sure if that was mentioned before, but a cool use of this is to \n> \\watch multiple queries at once, like\n>\n> select current_date \\; select current_time \\watch\n\nIndeed, that was one of the things I tested on the patch. I'm wondering \nwhether the documentation should point this out explicitly.\n\nAnyway, thanks for the push!\n\n-- \nFabien.",
"msg_date": "Tue, 6 Apr 2021 23:29:01 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hi\n\nI ran into a problem after commit 3a51306722.\n\nWhile executing a SQL statement with psql, I can't interrupt it by pressing ctrl+c.\n\nFor example:\npostgres=# insert into test select generate_series(1,10000000);\n^C^CINSERT 0 10000000\n\nPressing ctrl+c before the INSERT finishes has no effect, and psql still continues the INSERT.\n\nIs this the expected result? I think it would be better to allow users to interrupt the query by pressing ctrl+c.\n\nRegards,\nShi yu\n\n\n",
"msg_date": "Wed, 7 Apr 2021 07:40:57 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hello,\n\n> I met a problem after commit 3a51306722.\n>\n> While executing a SQL statement with psql, I can't interrupt it by pressing ctrl+c.\n>\n> For example:\n> postgres=# insert into test select generate_series(1,10000000);\n> ^C^CINSERT 0 10000000\n>\n> Press ctrl+c before finishing INSERT, and psql still continuing to INSERT.\n\nI can confirm this unexpected change of behavior on this commit. This is \nindeed a bug.\n\n> Is it the result expected?\n\nObviously not.\n\n> And I think maybe it is better to allow users to interrupt by pressing \n> ctrl+c.\n\nObviously yes.\n\nThe problem is that the cancellation stuff is cancelled too early after \nsending an asynchronous request.\n\nAttached a patch which attempts to fix this by moving the cancellation \ncancelling request after processing results.\n\n-- \nFabien.",
"msg_date": "Wed, 7 Apr 2021 16:02:10 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "RE: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "> Attached a patch which attempts to fix this by moving the cancellation\n> cancelling request after processing results.\n\nThank you for the fix. I tested it and the problem has been solved after applying your patch.\n\nRegards,\nShi yu\n\n\n",
"msg_date": "Thu, 8 Apr 2021 01:32:12 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On Thu, Apr 08, 2021 at 01:32:12AM +0000, shiy.fnst@fujitsu.com wrote:\n> > Attached a patch which attempts to fix this by moving the cancellation\n> > cancelling request after processing results.\n> \n> Thank you for your fixing. I tested and the problem has been solved after applying your patch.\n\nThanks for the patch, Fabien. I've hit this issue multiple times and this is\nindeed unwelcome. Should we add it as an open item?\n\n\n",
"msg_date": "Fri, 9 Apr 2021 00:30:50 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Bonjour Julien,\n\n>>> Attached a patch which attempts to fix this by moving the cancellation\n>>> cancelling request after processing results.\n>>\n>> Thank you for your fixing. I tested and the problem has been solved \n>> after applying your patch.\n>\n> Thanks for the patch Fabien. I've hit this issue multiple time and this is\n> indeed unwelcome. Should we add it as an open item?\n\nIt is definitely an open item. I'm not sure where you want to add it… \npossibly the \"Pg 14 Open Items\" wiki page? I tried but I do not have \nenough privileges; if you can do it, please proceed. I added an entry in \nthe next CF in the bugfix section.\n\n-- \nFabien.",
"msg_date": "Thu, 8 Apr 2021 19:04:01 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Bonjour Fabien,\n\nOn Thu, Apr 08, 2021 at 07:04:01PM +0200, Fabien COELHO wrote:\n> > \n> > Thanks for the patch Fabien. I've hit this issue multiple time and this is\n> > indeed unwelcome. Should we add it as an open item?\n> \n> It is definitely a open item. I'm not sure where you want to add it…\n> possibly the \"Pg 14 Open Items\" wiki page?\n\nCorrect.\n\n> I tried but I do not have enough\n> privileges, if you can do it please proceed. I added an entry in the next CF\n> in the bugfix section.\n\nThat's strange, I don't think you need special permission there. It's\nworking for me so I added an item with a link to the patch!\n\n\n",
"msg_date": "Fri, 9 Apr 2021 01:11:35 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On Fri, Apr 09, 2021 at 01:11:35AM +0800, Julien Rouhaud wrote:\n> On Thu, Apr 08, 2021 at 07:04:01PM +0200, Fabien COELHO wrote:\n>> It is definitely a open item. I'm not sure where you want to add it…\n>> possibly the \"Pg 14 Open Items\" wiki page?\n> \n> Correct.\n\nI was running a long query this morning and wondered why the\ncancellation was suddenly broken. So I am not alone, and here you are\nwith already a solution :)\n\nSo, studying through 3a51306, this stuff has changed the query\nexecution from a sync PQexec() to an async PQsendQuery(). And the\nproposed fix changes back to the behavior where the cancellation\nreset happens after getting a result, as there is no need to cancel\nanything.\n\nNo strong objections from here if the consensus is to make\nSendQueryAndProcessResults() handle the cancel reset properly though I\nam not sure if this is the cleanest way to do things, but let's make\nat least the whole business consistent in the code for all those code\npaths. For example, PSQLexecWatch() does an extra ResetCancelConn()\nthat would be useless once we are done with\nSendQueryAndProcessResults(). Also, I can see that\nSendQueryAndProcessResults() would not issue a cancel reset if the\nquery fails, for \\watch when cancel is pressed, and for \\watch with\nCOPY. So, my opinion here would be to keep ResetCancelConn() within\nPSQLexecWatch(), just add an extra one in SendQuery() to make all the\nthree code paths printing results consistent, and leave\nSendQueryAndProcessResults() out of the cancellation logic.\n\n>> I tried but I do not have enough\n>> privileges, if you can do it please proceed. I added an entry in the next CF\n>> in the bugfix section.\n> \n> That's strange, I don't think you need special permission there. It's\nworking for me so I added an item with a link to the patch!\n\nAs long as you have a community account, you should have the\npossibility to edit the page. So if you feel that any change is\nrequired, please feel free to do so, of course.\n--\nMichael",
"msg_date": "Fri, 9 Apr 2021 10:06:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 6:36 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Apr 09, 2021 at 01:11:35AM +0800, Julien Rouhaud wrote:\n> > On Thu, Apr 08, 2021 at 07:04:01PM +0200, Fabien COELHO wrote:\n> >> It is definitely a open item. I'm not sure where you want to add it…\n> >> possibly the \"Pg 14 Open Items\" wiki page?\n> >\n> > Correct.\n>\n> I was running a long query this morning and wondered why the\n> cancellation was suddenly broken. So I am not alone, and here you are\n> with already a solution :)\n>\n> So, studying through 3a51306, this stuff has changed the query\n> execution from a sync PQexec() to an async PQsendQuery(). And the\n> proposed fix changes back to the behavior where the cancellation\n> reset happens after getting a result, as there is no need to cancel\n> anything.\n>\n> No strong objections from here if the consensus is to make\n> SendQueryAndProcessResults() handle the cancel reset properly though I\n> am not sure if this is the cleanest way to do things, but let's make\n> at least the whole business consistent in the code for all those code\n> paths. For example, PSQLexecWatch() does an extra ResetCancelConn()\n> that would be useless once we are done with\n> SendQueryAndProcessResults(). Also, I can see that\n> SendQueryAndProcessResults() would not issue a cancel reset if the\n> query fails, for \\watch when cancel is pressed, and for \\watch with\n> COPY. So, my opinion here would be to keep ResetCancelConn() within\n> PSQLexecWatch(), just add an extra one in SendQuery() to make all the\n> three code paths printing results consistent, and leave\n> SendQueryAndProcessResults() out of the cancellation logic.\n\nHi, I'm also facing the query cancellation issue; I have to kill the\nbackend every time to cancel a query, which is becoming difficult.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 9 Apr 2021 08:50:29 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Bonjour Michaël,\n\n> I was running a long query this morning and wondered why the \n> cancellation was suddenly broken. So I am not alone, and here you are \n> with already a solution :)\n>\n> So, studying through 3a51306, this stuff has changed the query\n> execution from a sync PQexec() to an async PQsendQuery().\n\nYes, because we want to handle all results whereas PQexec jumps to the \nlast one.\n\n> And the proposed fix changes back to the behavior where the cancellation \n> reset happens after getting a result, as there is no need to cancel \n> anything.\n\nYep. ISTM that was what happens internally in PQexec.\n\n> No strong objections from here if the consensus is to make\n> SendQueryAndProcessResults() handle the cancel reset properly though I\n> am not sure if this is the cleanest way to do things,\n\nI was wondering as well, I did a quick fix because it can be irritating \nand put off looking at it more precisely over the week-end.\n\n> but let's make at least the whole business consistent in the code for \n> all those code paths.\n\nThere are quite a few of them, some which reset the stuff and some which \ndo not depending on various conditions, some with early exits, all of \nwhich required brain cells and a little time to investigate…\n\n> For example, PSQLexecWatch() does an extra ResetCancelConn() that would \n> be useless once we are done with SendQueryAndProcessResults(). Also, I \n> can see that SendQueryAndProcessResults() would not issue a cancel reset \n> if the query fails, for \\watch when cancel is pressed, and for \\watch \n> with COPY.\n\n> So, my opinion here would be to keep ResetCancelConn() within \n> PSQLexecWatch(), just add an extra one in SendQuery() to make all the \n> three code paths printing results consistent, and leave \n> SendQueryAndProcessResults() out of the cancellation logic.\n\nYep, it looks much better. I found it strange that the latter did a reset \nbut was not doing the set.\n\nAttached v2 does as you suggest.\n\n>> That's strange, I don't think you need special permission there. It's\n>> working for me so I added an item with a link to the patch!\n>\n> As long as you have a community account, you should have the\n> possibility to edit the page.\n\nAfter logging in as \"calvin\", I have \"Want to edit, but don't see an edit \nbutton when logged in? Click here.\".\n\n> So if you feel that any change is required, please feel free to do so, \n> of course.\n\n-- \nFabien.",
"msg_date": "Fri, 9 Apr 2021 08:47:07 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\n> Attached v2 does as you suggest.\n\nThere is not a single test of \"ctrl-c\" which would have caught this \ntrivial and irritating regression. ISTM that a TAP test is doable. Should \none be added?\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 9 Apr 2021 08:52:20 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On Fri, Apr 09, 2021 at 08:47:07AM +0200, Fabien COELHO wrote:\n> Yep, it looks much better. I found it strange that the later did a reset but\n> was not doing the set.\n> \n> Attached v2 does as you suggest.\n\nClose enough. I was thinking about this position of the attached,\nwhich is more consistent with the rest.\n--\nMichael",
"msg_date": "Fri, 9 Apr 2021 19:51:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 2021-Apr-08, Fabien COELHO wrote:\n\n> It is definitely a open item. I'm not sure where you want to add it…\n> possibly the \"Pg 14 Open Items\" wiki page? I tried but I do not have enough\n> privileges, if you can do it please proceed. I added an entry in the next CF\n> in the bugfix section.\n\nUser \"calvin\" has privs of wiki editor. If that's not your Wiki\nusername, please state what it is.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"Cuando mañana llegue pelearemos segun lo que mañana exija\" (Mowgli)\n\n\n",
"msg_date": "Fri, 9 Apr 2021 12:12:03 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": ">> Yep, it looks much better. I found it strange that the later did a reset but\n>> was not doing the set.\n>>\n>> Attached v2 does as you suggest.\n>\n> Close enough. I was thinking about this position of the attached,\n> which is more consistent with the rest.\n\nGiven the structural complexity of the function, the end of the file \nseemed like a good place to have an all-path-guaranteed reset.\n\nI find it a little bit strange to have the Set at the upper level and the \nReset in many… but not all branches, though.\n\nFor instance the on_error_rollback_savepoint/svptcmd branch includes a \nreset long after many other conditional resets, I cannot guess whether the \ninitial set is still active or has been long wiped out and this query is \njust not cancellable.\n\nAlso, ISTM that in the worst case a cancellation request is sent to a \nserver which is idle, in which case it will be ignored, so the code should \nbe in no hurry to clean it, at least not at the price of code clarity.\n\nAnyway, the place you suggest seems ok.\n\n-- \nFabien.",
"msg_date": "Fri, 9 Apr 2021 20:04:11 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Coverity has pointed out another problem with this patch:\n\n/srv/coverity/git/pgsql-git/postgresql/src/bin/psql/common.c: 1425 in SendQuery()\n1419 \t\t\t\t/*\n1420 \t\t\t\t * Do nothing if they are messing with savepoints themselves:\n1421 \t\t\t\t * If the user did COMMIT AND CHAIN, RELEASE or ROLLBACK, our\n1422 \t\t\t\t * savepoint is gone. If they issued a SAVEPOINT, releasing\n1423 \t\t\t\t * ours would remove theirs.\n1424 \t\t\t\t */\n>>> CID 1476042: Control flow issues (DEADCODE)\n>>> Execution cannot reach the expression \"strcmp(PQcmdStatus(results), \"COMMIT\") == 0\" inside this statement: \"if (results && (strcmp(PQcm...\".\n1425 \t\t\t\tif (results &&\n1426 \t\t\t\t\t(strcmp(PQcmdStatus(results), \"COMMIT\") == 0 ||\n1427 \t\t\t\t\t strcmp(PQcmdStatus(results), \"SAVEPOINT\") == 0 ||\n1428 \t\t\t\t\t strcmp(PQcmdStatus(results), \"RELEASE\") == 0 ||\n1429 \t\t\t\t\t strcmp(PQcmdStatus(results), \"ROLLBACK\") == 0))\n1430 \t\t\t\t\tsvptcmd = NULL;\n\nIt's right: this is dead code because all paths through the if-nest\nstarting at line 1373 now leave results = NULL. Hence, this patch\nhas broken the autocommit logic; it's no longer possible to tell\nwhether we should do anything with our savepoint.\n\nBetween this and the known breakage of control-C, it seems clear\nto me that this patch was nowhere near ready for prime time.\nI think shoving it in on the last day before feature freeze was\nill-advised, and it ought to be reverted. We can try again later.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Apr 2021 11:14:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "... btw, Coverity also doesn't like this fragment of the patch:\n\n/srv/coverity/git/pgsql-git/postgresql/src/bin/psql/common.c: 1084 in ShowNoticeMessage()\n1078 static void\n1079 ShowNoticeMessage(t_notice_messages *notes)\n1080 {\n1081 \tPQExpBufferData\t*current = notes->in_flip ? &notes->flip : &notes->flop;\n1082 \tif (current->data != NULL && *current->data != '\\0')\n1083 \t\tpg_log_info(\"%s\", current->data);\n>>> CID 1476041: Null pointer dereferences (FORWARD_NULL)\n>>> Passing \"current\" to \"resetPQExpBuffer\", which dereferences null \"current->data\".\n1084 \tresetPQExpBuffer(current);\n1085 }\n1086 \n1087 /*\n1088 * SendQueryAndProcessResults: utility function for use by SendQuery()\n1089 * and PSQLexecWatch().\n\nIts point here is that either the test of \"current->data != NULL\" is\nuseless, or resetPQExpBuffer needs such a test too. I'm inclined\nto guess the former.\n\n(Just as a matter of style, I don't care for the flip/flop terminology\nhere, not least because it's not clear why exactly two buffers suffice\nand will suffice forevermore. I'd be inclined to use an array of\ntwo buffers with an index variable.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Apr 2021 12:16:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On Fri, Apr 09, 2021 at 08:52:20AM +0200, Fabien COELHO wrote:\n> There is not a single test of \"ctrl-c\" which would have caught this trivial\n> and irritating regression. ISTM that a TAP test is doable. Should one be\n> added?\n\nIf you can design something reliable, I would welcome that.\n--\nMichael",
"msg_date": "Mon, 12 Apr 2021 12:10:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Apr 09, 2021 at 08:52:20AM +0200, Fabien COELHO wrote:\n>> There is not a single test of \"ctrl-c\" which would have caught this trivial\n>> and irritating regression. ISTM that a TAP test is doable. Should one be\n>> added?\n\n> If you can design something reliable, I would welcome that.\n\n+1, there's a lot of moving parts there.\n\nI think avoiding any timing issues wouldn't be hard; the\nquery-to-be-interrupted could be \"select pg_sleep(1000)\" or so.\nWhat's less clear is whether we can trigger the control-C\nresponse reliably across platforms.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Apr 2021 23:44:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On Sun, Apr 11, 2021 at 11:14:07AM -0400, Tom Lane wrote:\n> It's right: this is dead code because all paths through the if-nest\n> starting at line 1373 now leave results = NULL. Hence, this patch\n> has broken the autocommit logic; it's no longer possible to tell\n> whether we should do anything with our savepoint.\n\nUgh, that's a good catch from Coverity here.\n\n> Between this and the known breakage of control-C, it seems clear\n> to me that this patch was nowhere near ready for prime time.\n> I think shoving it in on the last day before feature freeze was\n> ill-advised, and it ought to be reverted. We can try again later.\n\nYes, I agree that a revert would be more adapted at this stage.\nPeter?\n--\nMichael",
"msg_date": "Mon, 12 Apr 2021 13:19:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\nHello Tom,\n\n> It's right: this is dead code because all paths through the if-nest\n> starting at line 1373 now leave results = NULL. Hence, this patch\n> has broken the autocommit logic;\n\nDo you mean yet another feature without a single non-regression test? :-(\n\nI tend to rely on non-regression tests to catch bugs in complex \nmulti-purpose hard-to-maintain functions when the code is modified.\n\nI have submitted a patch to improve psql coverage to about 90%, but given \nthe lack of enthusiasm, I simply dropped it. Not sure I was right not \nto insist.\n\n> it's no longer possible to tell whether we should do anything with our \n> savepoint.\n\n> Between this and the known breakage of control-C, it seems clear\n> to me that this patch was nowhere near ready for prime time.\n> I think shoving it in on the last day before feature freeze was\n> ill-advised, and it ought to be reverted. We can try again later.\n\nThe patch has been asleep for quite a while, and was resurrected, possibly \ntoo late in the process. ISTM that fixing it for 14 is manageable, \nbut this is not my call.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 12 Apr 2021 16:25:33 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> Between this and the known breakage of control-C, it seems clear\n>> to me that this patch was nowhere near ready for prime time.\n>> I think shoving it in on the last day before feature freeze was\n>> ill-advised, and it ought to be reverted. We can try again later.\n\n> The patch has been asleep for quite a while, and was resurrected, possibly \n> too late in the process. ISTM that fixing it for 14 is manageable, \n> but this is not my call.\n\nI just observed an additional issue that I assume was introduced by this\npatch, which is that psql's response to a server crash has gotten\nrepetitive:\n\nregression=# CREATE VIEW v1(c1) AS (SELECT ('4' COLLATE \"C\")::INT FROM generate_series(1, 10));\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\nThe connection to the server was lost. Attempting reset: Failed.\n!?> \n\nI've never seen that before, and it's not because I don't see\nserver crashes regularly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Apr 2021 12:23:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 4/12/21, 9:25 AM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> Fabien COELHO <coelho@cri.ensmp.fr> writes:\r\n>>> Between this and the known breakage of control-C, it seems clear\r\n>>> to me that this patch was nowhere near ready for prime time.\r\n>>> I think shoving it in on the last day before feature freeze was\r\n>>> ill-advised, and it ought to be reverted. We can try again later.\r\n>\r\n>> The patch has been asleep for quite a while, and was resurrected, possibly\r\n>> too late in the process. ISTM that fixing it for 14 is manageable,\r\n>> but this is not my call.\r\n>\r\n> I just observed an additional issue that I assume was introduced by this\r\n> patch, which is that psql's response to a server crash has gotten\r\n> repetitive:\r\n>\r\n> regression=# CREATE VIEW v1(c1) AS (SELECT ('4' COLLATE \"C\")::INT FROM generate_series(1, 10));\r\n> server closed the connection unexpectedly\r\n> This probably means the server terminated abnormally\r\n> before or while processing the request.\r\n> The connection to the server was lost. Attempting reset: Failed.\r\n> The connection to the server was lost. Attempting reset: Failed.\r\n> !?>\r\n>\r\n> I've never seen that before, and it's not because I don't see\r\n> server crashes regularly.\r\n\r\nI think I've found another issue with this patch. If AcceptResult()\r\nreturns false in SendQueryAndProcessResults(), it seems to result in\r\nan infinite loop of \"unexpected PQresultStatus\" messages. This can be\r\nreproduced by trying to run \"START_REPLICATION\" via psql.\r\n\r\nThe following patch seems to resolve the issue, although I'll admit I\r\nhaven't dug into this too deeply. In any case, +1 for reverting the\r\npatch for now.\r\n\r\ndiff --git a/src/bin/psql/common.c b/src/bin/psql/common.c\r\nindex 028a357991..abafd41763 100644\r\n--- a/src/bin/psql/common.c\r\n+++ b/src/bin/psql/common.c\r\n@@ -1176,7 +1176,7 @@ SendQueryAndProcessResults(const char *query, double *pelapsed_msec, bool is_wat\r\n\r\n /* and switch to next result */\r\n result = PQgetResult(pset.db);\r\n- continue;\r\n+ break;\r\n }\r\n\r\n /* must handle COPY before changing the current result */\r\n\r\nNathan\r\n\r\n",
"msg_date": "Mon, 12 Apr 2021 19:08:21 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 2021-Apr-12, Bossart, Nathan wrote:\n\n> The following patch seems to resolve the issue, although I'll admit I\n> haven't dug into this too deeply. In any case, +1 for reverting the\n> patch for now.\n\nPlease note that there's no \"for now\" about it -- if the patch is\nreverted, the only way to get it back is to wait for PG15. That's\nundesirable. A better approach is to collect all those bugs and get\nthem fixed. There's plenty of time to do that.\n\nI, for one, would prefer to see the feature repaired in this cycle.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Mon, 12 Apr 2021 15:33:01 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 07:08:21PM +0000, Bossart, Nathan wrote:\n> I think I've found another issue with this patch. If AcceptResult()\n> returns false in SendQueryAndProcessResults(), it seems to result in\n> an infinite loop of \"unexpected PQresultStatus\" messages. This can be\n> reproduced by trying to run \"START_REPLICATION\" via psql.\n\nYes, that's another problem, and this causes an infinite loop where\nwe would just report one error previously :/\n--\nMichael",
"msg_date": "Tue, 13 Apr 2021 08:31:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 03:33:01PM -0400, Alvaro Herrera wrote:\n> Please note that there's no \"for now\" about it -- if the patch is\n> reverted, the only way to get it back is to wait for PG15. That's\n> undesirable. A better approach is to collect all those bugs and get\n> them fixed. There's plenty of time to do that.\n> \n> I, for one, would prefer to see the feature repaired in this cycle.\n\nIf it is possible to get that fixed, I would not mind waiting a bit\nmore but it would be nice to see some actual proposals. There are\nalready three identified bugs in psql introduced by this commit,\nincluding the query cancellation.\n\nThat's a lot IMO, so my vote would be to discard this feature for now\nand revisit it properly in the 15 dev cycle, so that resources can be\nredirected to more urgent issues (13 open items as of the moment of\nwriting this email).\n--\nMichael",
"msg_date": "Thu, 15 Apr 2021 10:21:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Apr 12, 2021 at 03:33:01PM -0400, Alvaro Herrera wrote:\n>> I, for one, would prefer to see the feature repaired in this cycle.\n\n> If it is possible to get that fixed, I would not mind waiting a bit\n> more but it would be nice to see some actual proposals. There are\n> already three identified bugs in psql introduced by this commit,\n> including the query cancellation.\n\n> That's a lot IMO, so my vote would be to discard this feature for now\n> and revisit it properly in the 15 dev cycle, so as resources are\n> redirected into more urgent issues (13 open items as of the moment of\n> writing this email).\n\nI don't wish to tell people which open issues they ought to work on\n... but this patch seems like it could be quite a large can of worms,\nand I'm not detecting very much urgency about getting it fixed.\nIf it's not to be reverted then some significant effort needs to be\nput into it soon.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Apr 2021 21:51:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\nHello Tom,\n\n>> That's a lot IMO, so my vote would be to discard this feature for now\n>> and revisit it properly in the 15 dev cycle, so as resources are\n>> redirected into more urgent issues (13 open items as of the moment of\n>> writing this email).\n>\n> I don't wish to tell people which open issues they ought to work on\n> ... but this patch seems like it could be quite a large can of worms,\n> and I'm not detecting very much urgency about getting it fixed.\n> If it's not to be reverted then some significant effort needs to be\n> put into it soon.\n\nMy overly naive trust in non-regression tests to catch any issues has been \nlargely proven wrong. Three key features do not have a single test. Sigh.\n\nI'll have some time to look at it over next week-end, but not before.\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 15 Apr 2021 13:51:20 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 15.04.21 13:51, Fabien COELHO wrote:\n>>> That's a lot IMO, so my vote would be to discard this feature for now\n>>> and revisit it properly in the 15 dev cycle, so as resources are\n>>> redirected into more urgent issues (13 open items as of the moment of\n>>> writing this email).\n>>\n>> I don't wish to tell people which open issues they ought to work on\n>> ... but this patch seems like it could be quite a large can of worms,\n>> and I'm not detecting very much urgency about getting it fixed.\n>> If it's not to be reverted then some significant effort needs to be\n>> put into it soon.\n> \n> My overly naive trust in non regression test to catch any issues has \n> been largely proven wrong. Three key features do not have a single \n> tests. Sigh.\n> \n> I'll have some time to look at it over next week-end, but not before.\n\nI have reverted the patch and moved the commit fest entry to CF 2021-07.\n\n\n",
"msg_date": "Thu, 15 Apr 2021 20:06:59 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hello Peter,\n\n>> My overly naive trust in non regression test to catch any issues has been \n>> largely proven wrong. Three key features do not have a single tests. Sigh.\n>> \n>> I'll have some time to look at it over next week-end, but not before.\n>\n> I have reverted the patch and moved the commit fest entry to CF 2021-07.\n\nAttached a v7 which fixes known issues.\n\nI've tried to simplify the code and added a few comments. I've moved query \ncancellation reset in one place in SendQuery. I've switched to an array of \nbuffers for notices, as suggested by Tom.\n\nThe patch includes basic AUTOCOMMIT and ON_ERROR_ROLLBACK tests, which did \nnot exist before, at all. I tried cancelling queries manually, but did not \ndevelop a test for this, mostly because last time I submitted a TAP test \nabout psql to raise its coverage it was rejected.\n\nAs usual, what is not tested does not work…\n\n-- \nFabien.",
"msg_date": "Sat, 12 Jun 2021 11:41:24 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 12.06.21 11:41, Fabien COELHO wrote:\n> The patch includes basic AUTOCOMMIT and ON_ERROR_ROLLBACK tests, which \n> did not exist before, at all.\n\nI looked at these tests first. The tests are good, they increase \ncoverage. But they don't actually test the issue that was broken by the \nprevious patch, namely the situation where autocommit is off and the \nuser manually messes with the savepoints. I applied the tests against \nthe previous patch and there was no failure. So the tests are useful, \nbut they don't really help this patch. Would you like to enhance the \ntests a bit to cover this case? I think we could move forward with \nthese tests then.\n\n\n",
"msg_date": "Mon, 5 Jul 2021 12:17:16 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On Sat, Jun 12, 2021 at 3:11 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Peter,\n>\n> >> My overly naive trust in non regression test to catch any issues has been\n> >> largely proven wrong. Three key features do not have a single tests. Sigh.\n> >>\n> >> I'll have some time to look at it over next week-end, but not before.\n> >\n> > I have reverted the patch and moved the commit fest entry to CF 2021-07.\n>\n> Attached a v7 which fixes known issues.\n\nThe patch does not apply on Head anymore, could you rebase and post a\npatch. I'm changing the status to \"Waiting for Author\".\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 15 Jul 2021 17:44:38 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\n> The patch does not apply on Head anymore, could you rebase and post a\n> patch. I'm changing the status to \"Waiting for Author\".\n\nOk. I noticed. The patch got significantly broken by the watch pager \ncommit. I also have to enhance the added tests (per Peter request).\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 15 Jul 2021 17:46:08 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 15.07.21 17:46, Fabien COELHO wrote:\n>> The patch does not apply on Head anymore, could you rebase and post a\n>> patch. I'm changing the status to \"Waiting for Author\".\n> \n> Ok. I noticed. The patch got significantly broken by the watch pager \n> commit. I also have to enhance the added tests (per Peter request).\n\nI wrote a test to check psql query cancel support. I checked that it \nfails against the patch that was reverted. Maybe this is useful.",
"msg_date": "Wed, 21 Jul 2021 21:55:30 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": ">>> The patch does not apply on Head anymore, could you rebase and post a\n>>> patch. I'm changing the status to \"Waiting for Author\".\n>> \n>> Ok. I noticed. The patch got significantly broken by the watch pager \n>> commit. I also have to enhance the added tests (per Peter request).\n>\n> I wrote a test to check psql query cancel support. I checked that it fails \n> against the patch that was reverted. Maybe this is useful.\n\nThank you! The patch update is in progress…\n\nThe newly added PSQL_WATCH_PAGER feature which broke the patch does not \nseem to be tested anywhere, this is tiring:-(\n\n-- \nFabien.",
"msg_date": "Thu, 22 Jul 2021 07:52:02 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hi\n\nčt 22. 7. 2021 v 7:52 odesílatel Fabien COELHO <coelho@cri.ensmp.fr> napsal:\n\n>\n> >>> The patch does not apply on Head anymore, could you rebase and post a\n> >>> patch. I'm changing the status to \"Waiting for Author\".\n> >>\n> >> Ok. I noticed. The patch got significantly broken by the watch pager\n> >> commit. I also have to enhance the added tests (per Peter request).\n> >\n> > I wrote a test to check psql query cancel support. I checked that it\n> fails\n> > against the patch that was reverted. Maybe this is useful.\n>\n> Thank you! The patch update is in progress…\n>\n> The newly added PSQL_WATCH_PAGER feature which broke the patch does not\n> seem to be tested anywhere, this is tiring:-(\n>\n\nDo you have any idea how this can be tested? It requires some pager that\ndoesn't use blocking reading, and you need to do remote control of this\npager. So it requires a really especially written pager just for this\npurpose. It is solvable, but I am not sure if it is adequate to this\npatch.\n\nRegards\n\nPavel\n\n\n\n> --\n> Fabien.\n\nHičt 22. 7. 2021 v 7:52 odesílatel Fabien COELHO <coelho@cri.ensmp.fr> napsal:\n>>> The patch does not apply on Head anymore, could you rebase and post a\n>>> patch. I'm changing the status to \"Waiting for Author\".\n>> \n>> Ok. I noticed. The patch got significantly broken by the watch pager \n>> commit. I also have to enhance the added tests (per Peter request).\n>\n> I wrote a test to check psql query cancel support. I checked that it fails \n> against the patch that was reverted. Maybe this is useful.\n\nThank you! The patch update is in progress…\n\nThe newly added PSQL_WATCH_PAGER feature which broke the patch does not \nseem to be tested anywhere, this is tiring:-(Do you have any idea how this can be tested? It requires some pager that doesn't use blocking reading, and you need to do remote control of this pager. So it requires a really especially written pager just for this purpose. 
It is solvable, but I am not sure if it is adequate to this patch. RegardsPavel\n\n-- \nFabien.",
"msg_date": "Thu, 22 Jul 2021 08:33:09 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hello Pavel,\n\n>> The newly added PSQL_WATCH_PAGER feature which broke the patch does not\n>> seem to be tested anywhere, this is tiring:-(\n>\n> Do you have any idea how this can be tested?\n\nThe TAP patch sent by Peter on this thread is a very good start.\n\n> It requires some pager that doesn't use blocking reading, and you need \n> to do remote control of this pager. So it requires a really especially \n> written pager just for this purpose. It is solvable, but I am not sure \n> if it is adequate to this patch.\n\nNot really: The point would not be to test the pager itself (that's for \nthe people who develop the pager, not for psql), but just to test that the \npager is actually started or not started by psql depending on conditions \n(eg pset pager…) and that it does *something* when started. See for \ninstance the simplistic pager.pl script attached, the output of which \ncould be tested. Note that PSQL_PAGER is not tested at all either. \nBasically \"psql\" is not tested, which is a pain when developing a non \ntrivial patch.\n\n-- \nFabien.",
"msg_date": "Thu, 22 Jul 2021 11:00:41 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "čt 22. 7. 2021 v 11:00 odesílatel Fabien COELHO <coelho@cri.ensmp.fr>\nnapsal:\n\n>\n> Hello Pavel,\n>\n> >> The newly added PSQL_WATCH_PAGER feature which broke the patch does not\n> >> seem to be tested anywhere, this is tiring:-(\n> >\n> > Do you have any idea how this can be tested?\n>\n> The TAP patch sent by Peter on this thread is a very good start.\n>\n> > It requires some pager that doesn't use blocking reading, and you need\n> > to do remote control of this pager. So it requires a really especially\n> > written pager just for this purpose. It is solvable, but I am not sure\n> > if it is adequate to this patch.\n>\n> Not really: The point would not be to test the pager itself (that's for\n> the people who develop the pager, not for psql), but just to test that the\n> pager is actually started or not started by psql depending on conditions\n> (eg pset pager…) and that it does *something* when started. See for\n> instance the simplistic pager.pl script attached, the output of which\n> could be tested. Note that PSQL_PAGER is not tested at all either.\n> Basically \"psql\" is not tested, which is a pain when developing a non\n> trivial patch.\n>\n\nMinimally for PSQL_WATCH_PAGER, the pager should exit after some time, but\nbefore it has to repeat data reading. Elsewhere the psql will hang.\n\ncan be solution to use special mode for psql, when psql will do write to\nlogfile and redirect to file instead using any (simplified) pager?\nTheoretically, there is nothing special on usage of pager, and just you can\ntest redirecting to file. That is not tested too. In this mode, you can\nsend sigint to psql - and it can be emulation of sigint to pager in\nPSQL_WATCH_PAGER mode,\n\n\n\n\n> --\n> Fabien.\n\nčt 22. 7. 
2021 v 11:00 odesílatel Fabien COELHO <coelho@cri.ensmp.fr> napsal:\nHello Pavel,\n\n>> The newly added PSQL_WATCH_PAGER feature which broke the patch does not\n>> seem to be tested anywhere, this is tiring:-(\n>\n> Do you have any idea how this can be tested?\n\nThe TAP patch sent by Peter on this thread is a very good start.\n\n> It requires some pager that doesn't use blocking reading, and you need \n> to do remote control of this pager. So it requires a really especially \n> written pager just for this purpose. It is solvable, but I am not sure \n> if it is adequate to this patch.\n\nNot really: The point would not be to test the pager itself (that's for \nthe people who develop the pager, not for psql), but just to test that the \npager is actually started or not started by psql depending on conditions \n(eg pset pager…) and that it does *something* when started. See for \ninstance the simplistic pager.pl script attached, the output of which \ncould be tested. Note that PSQL_PAGER is not tested at all either. \nBasically \"psql\" is not tested, which is a pain when developing a non \ntrivial patch.Minimally for PSQL_WATCH_PAGER, the pager should exit after some time, but before it has to repeat data reading. Elsewhere the psql will hang.can be solution to use special mode for psql, when psql will do write to logfile and redirect to file instead using any (simplified) pager? Theoretically, there is nothing special on usage of pager, and just you can test redirecting to file. That is not tested too. In this mode, you can send sigint to psql - and it can be emulation of sigint to pager in PSQL_WATCH_PAGER mode, \n\n-- \nFabien.",
"msg_date": "Thu, 22 Jul 2021 11:28:49 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\nHello,\n\n> Minimally for PSQL_WATCH_PAGER, the pager should exit after some time, but\n> before it has to repeat data reading. Elsewhere the psql will hang.\n\nSure. The \"pager.pl\" script I sent exits after reading a few lines.\n\n> can be solution to use special mode for psql, when psql will do write to\n> logfile and redirect to file instead using any (simplified) pager?\n\nI do not want a special psql mode, I just would like \"make check\" to tell \nme if I broke the PSQL_WATCH_PAGER feature after reworking the \nmulti-results patch.\n\n> Theoretically, there is nothing special on usage of pager, and just you can\n> test redirecting to file.\n\nI do not follow. For what I seen the watch pager feature is somehow a \nlittle different, and I'd like to be sure I'm not breaking anything.\n\nFor your information, pspg does not seem to like being fed two results\n\n sh> PSQL_WATCH_PAGER=\"pspg --stream\"\n psql> SELECT NOW() \\; SELECT RANDOM() \\watch 1\n\nThe first table is shown, the second seems ignored.\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 22 Jul 2021 16:49:30 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": ">> Ok. I noticed. The patch got significantly broken by the watch pager \n>> commit. I also have to enhance the added tests (per Peter request).\n>\n> I wrote a test to check psql query cancel support. I checked that it fails \n> against the patch that was reverted. Maybe this is useful.\n\nHere is the updated version (v8? I'm not sure what the right count is), \nwhich works for me and for \"make check\", including some tests added for \nuncovered paths.\n\nI included your tap test (thanks again!) with some more comments and \ncleanup.\n\nI tested manually for the pager feature, which mostly work, althoug \n\"pspg --stream\" does not seem to expect two tables, or maybe there is a \nway to switch between these that I have not found.\n\n-- \nFabien.",
"msg_date": "Thu, 22 Jul 2021 16:58:40 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "čt 22. 7. 2021 v 16:49 odesílatel Fabien COELHO <coelho@cri.ensmp.fr>\nnapsal:\n\n>\n> Hello,\n>\n> > Minimally for PSQL_WATCH_PAGER, the pager should exit after some time,\n> but\n> > before it has to repeat data reading. Elsewhere the psql will hang.\n>\n> Sure. The \"pager.pl\" script I sent exits after reading a few lines.\n>\n> > can be solution to use special mode for psql, when psql will do write to\n> > logfile and redirect to file instead using any (simplified) pager?\n>\n> I do not want a special psql mode, I just would like \"make check\" to tell\n> me if I broke the PSQL_WATCH_PAGER feature after reworking the\n> multi-results patch.\n>\n> > Theoretically, there is nothing special on usage of pager, and just you\n> can\n> > test redirecting to file.\n>\n> I do not follow. For what I seen the watch pager feature is somehow a\n> little different, and I'd like to be sure I'm not breaking anything.\n>\n> For your information, pspg does not seem to like being fed two results\n>\n> sh> PSQL_WATCH_PAGER=\"pspg --stream\"\n> psql> SELECT NOW() \\; SELECT RANDOM() \\watch 1\n>\n> The first table is shown, the second seems ignored.\n>\n\npspg cannot show multitable results, so it is not surprising. And I don't\nthink about supporting this. Unfortunately I am not able to detect this\nsituation and show some warnings, just because psql doesn't send enough\ndata for it. Can be nice if psql sends some invisible characters, that\nallows synchronization. But there is nothing. I just detect the timestamp\nline and empty lines.\n\n\n\n> --\n> Fabien.\n>\n\nčt 22. 7. 2021 v 16:49 odesílatel Fabien COELHO <coelho@cri.ensmp.fr> napsal:\nHello,\n\n> Minimally for PSQL_WATCH_PAGER, the pager should exit after some time, but\n> before it has to repeat data reading. Elsewhere the psql will hang.\n\nSure. 
The \"pager.pl\" script I sent exits after reading a few lines.\n\n> can be solution to use special mode for psql, when psql will do write to\n> logfile and redirect to file instead using any (simplified) pager?\n\nI do not want a special psql mode, I just would like \"make check\" to tell \nme if I broke the PSQL_WATCH_PAGER feature after reworking the \nmulti-results patch.\n\n> Theoretically, there is nothing special on usage of pager, and just you can\n> test redirecting to file.\n\nI do not follow. For what I seen the watch pager feature is somehow a \nlittle different, and I'd like to be sure I'm not breaking anything.\n\nFor your information, pspg does not seem to like being fed two results\n\n sh> PSQL_WATCH_PAGER=\"pspg --stream\"\n psql> SELECT NOW() \\; SELECT RANDOM() \\watch 1\n\nThe first table is shown, the second seems ignored.pspg cannot show multitable results, so it is not surprising. And I don't think about supporting this. Unfortunately I am not able to detect this situation and show some warnings, just because psql doesn't send enough data for it. Can be nice if psql sends some invisible characters, that allows synchronization. But there is nothing. I just detect the timestamp line and empty lines.\n\n-- \nFabien.",
"msg_date": "Thu, 22 Jul 2021 17:13:45 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "čt 22. 7. 2021 v 16:58 odesílatel Fabien COELHO <coelho@cri.ensmp.fr>\nnapsal:\n\n>\n> >> Ok. I noticed. The patch got significantly broken by the watch pager\n> >> commit. I also have to enhance the added tests (per Peter request).\n> >\n> > I wrote a test to check psql query cancel support. I checked that it\n> fails\n> > against the patch that was reverted. Maybe this is useful.\n>\n> Here is the updated version (v8? I'm not sure what the right count is),\n> which works for me and for \"make check\", including some tests added for\n> uncovered paths.\n>\n> I included your tap test (thanks again!) with some more comments and\n> cleanup.\n>\n> I tested manually for the pager feature, which mostly work, althoug\n> \"pspg --stream\" does not seem to expect two tables, or maybe there is a\n> way to switch between these that I have not found.\n>\n\npspg doesn't support this feature. Theoretically it can be implementable (I\nam able to hold two datasets now), but without any help with\nsynchronization I don't want to implement any more complex parsing. On the\npspg side I am not able to detect what is the first result in the batch,\nwhat is the last result (without some hard heuristics - probably I can read\nsome information from timestamps). And if you need two or more results in\none terminal, then mode without pager is better.\n\n\n\n> --\n> Fabien.\n\nčt 22. 7. 2021 v 16:58 odesílatel Fabien COELHO <coelho@cri.ensmp.fr> napsal:\n>> Ok. I noticed. The patch got significantly broken by the watch pager \n>> commit. I also have to enhance the added tests (per Peter request).\n>\n> I wrote a test to check psql query cancel support. I checked that it fails \n> against the patch that was reverted. Maybe this is useful.\n\nHere is the updated version (v8? I'm not sure what the right count is), \nwhich works for me and for \"make check\", including some tests added for \nuncovered paths.\n\nI included your tap test (thanks again!) 
with some more comments and \ncleanup.\n\nI tested manually for the pager feature, which mostly work, althoug \n\"pspg --stream\" does not seem to expect two tables, or maybe there is a \nway to switch between these that I have not found.pspg doesn't support this feature. Theoretically it can be implementable (I am able to hold two datasets now), but without any help with synchronization I don't want to implement any more complex parsing. On the pspg side I am not able to detect what is the first result in the batch, what is the last result (without some hard heuristics - probably I can read some information from timestamps). And if you need two or more results in one terminal, then mode without pager is better. \n\n-- \nFabien.",
"msg_date": "Thu, 22 Jul 2021 17:23:30 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "čt 22. 7. 2021 v 17:23 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> čt 22. 7. 2021 v 16:58 odesílatel Fabien COELHO <coelho@cri.ensmp.fr>\n> napsal:\n>\n>>\n>> >> Ok. I noticed. The patch got significantly broken by the watch pager\n>> >> commit. I also have to enhance the added tests (per Peter request).\n>> >\n>> > I wrote a test to check psql query cancel support. I checked that it\n>> fails\n>> > against the patch that was reverted. Maybe this is useful.\n>>\n>> Here is the updated version (v8? I'm not sure what the right count is),\n>> which works for me and for \"make check\", including some tests added for\n>> uncovered paths.\n>>\n>> I included your tap test (thanks again!) with some more comments and\n>> cleanup.\n>>\n>> I tested manually for the pager feature, which mostly work, althoug\n>> \"pspg --stream\" does not seem to expect two tables, or maybe there is a\n>> way to switch between these that I have not found.\n>>\n>\n> pspg doesn't support this feature. Theoretically it can be implementable\n> (I am able to hold two datasets now), but without any help with\n> synchronization I don't want to implement any more complex parsing. On the\n> pspg side I am not able to detect what is the first result in the batch,\n> what is the last result (without some hard heuristics - probably I can read\n> some information from timestamps). And if you need two or more results in\n> one terminal, then mode without pager is better.\n>\n\nbut the timestamps are localized, and again I have not enough information\non the pspg side for correct parsing.\n\nSo until psql will use some tags that allow more simple detection of start\nand end batch or relation, this feature will not be supported by pspg :-/.\nThere are some invisible ascii codes that can be used for this purpose.\n\n\n>\n>\n>> --\n>> Fabien.\n>\n>\n\nčt 22. 7. 2021 v 17:23 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:čt 22. 7. 
2021 v 16:58 odesílatel Fabien COELHO <coelho@cri.ensmp.fr> napsal:\n>> Ok. I noticed. The patch got significantly broken by the watch pager \n>> commit. I also have to enhance the added tests (per Peter request).\n>\n> I wrote a test to check psql query cancel support. I checked that it fails \n> against the patch that was reverted. Maybe this is useful.\n\nHere is the updated version (v8? I'm not sure what the right count is), \nwhich works for me and for \"make check\", including some tests added for \nuncovered paths.\n\nI included your tap test (thanks again!) with some more comments and \ncleanup.\n\nI tested manually for the pager feature, which mostly work, althoug \n\"pspg --stream\" does not seem to expect two tables, or maybe there is a \nway to switch between these that I have not found.pspg doesn't support this feature. Theoretically it can be implementable (I am able to hold two datasets now), but without any help with synchronization I don't want to implement any more complex parsing. On the pspg side I am not able to detect what is the first result in the batch, what is the last result (without some hard heuristics - probably I can read some information from timestamps). And if you need two or more results in one terminal, then mode without pager is better. but the timestamps are localized, and again I have not enough information on the pspg side for correct parsing. So until psql will use some tags that allow more simple detection of start and end batch or relation, this feature will not be supported by pspg :-/. There are some invisible ascii codes that can be used for this purpose. \n\n-- \nFabien.",
"msg_date": "Thu, 22 Jul 2021 17:28:58 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\n>>> I tested manually for the pager feature, which mostly work, althoug \n>>> \"pspg --stream\" does not seem to expect two tables, or maybe there is \n>>> a way to switch between these that I have not found.\n>>\n>> pspg doesn't support this feature.\n\nSure. Note that it is not a feature yet:-)\n\nISTM that having some configurable pager-targetted marker would greatly \nhelp parsing on the pager side, so this might be the way to go, if this\nfinally becomes a feature.\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 23 Jul 2021 09:41:05 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "pá 23. 7. 2021 v 9:41 odesílatel Fabien COELHO <coelho@cri.ensmp.fr> napsal:\n\n>\n> >>> I tested manually for the pager feature, which mostly work, althoug\n> >>> \"pspg --stream\" does not seem to expect two tables, or maybe there is\n> >>> a way to switch between these that I have not found.\n> >>\n> >> pspg doesn't support this feature.\n>\n> Sure. Note that it is not a feature yet:-)\n>\n> ISTM that having some configurable pager-targetted marker would greatly\n> help parsing on the pager side, so this might be the way to go, if this\n> finally becomes a feature.\n>\n\nyes, It can help me lot of, and pspg can be less sensitive (or immune)\nagainst synchronization errors.\n\nPavel\n\n\n> --\n> Fabien.\n>\n\npá 23. 7. 2021 v 9:41 odesílatel Fabien COELHO <coelho@cri.ensmp.fr> napsal:\n>>> I tested manually for the pager feature, which mostly work, althoug \n>>> \"pspg --stream\" does not seem to expect two tables, or maybe there is \n>>> a way to switch between these that I have not found.\n>>\n>> pspg doesn't support this feature.\n\nSure. Note that it is not a feature yet:-)\n\nISTM that having some configurable pager-targetted marker would greatly \nhelp parsing on the pager side, so this might be the way to go, if this\nfinally becomes a feature.yes, It can help me lot of, and pspg can be less sensitive (or immune) against synchronization errors. Pavel\n\n-- \nFabien.",
"msg_date": "Fri, 23 Jul 2021 09:56:41 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 22.07.21 16:58, Fabien COELHO wrote:\n> Here is the updated version (v8? I'm not sure what the right count is), \n> which works for me and for \"make check\", including some tests added for \n> uncovered paths.\n> \n> I included your tap test (thanks again!) with some more comments and \n> cleanup.\n\nThe tap test had a merge conflict, so I fixed that and committed it \nseparately. I was wondering about its portability, so it's good to sort \nthat out separately from your main patch. There are already a few \nfailures on the build farm right now, so let's see where this is heading.\n\n\n",
"msg_date": "Fri, 20 Aug 2021 13:28:10 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 22.07.21 16:58, Fabien COELHO wrote:\n>>> Ok. I noticed. The patch got significantly broken by the watch pager \n>>> commit. I also have to enhance the added tests (per Peter request).\n>>\n>> I wrote a test to check psql query cancel support.� I checked that it \n>> fails against the patch that was reverted.� Maybe this is useful.\n> \n> Here is the updated version (v8? I'm not sure what the right count is), \n> which works for me and for \"make check\", including some tests added for \n> uncovered paths.\n\nI was looking at adding test coverage for the issue complained about in \n[0]. That message said that the autocommit logic was broken, but \nactually the issue was with the ON_ERROR_ROLLBACK logic. However, it \nturned out that neither feature had any test coverage, and they are \neasily testable using the pg_regress setup, so I wrote tests for both \nand another little thing I found nearby.\n\nIt turns out that your v8 patch still has the issue complained about in \n[0]. The issue is that after COMMIT AND CHAIN, the internal savepoint \nis gone, but the patched psql still thinks it should be there and tries \nto release it, which leads to errors.\n\n\n[0]: https://www.postgresql.org/message-id/2671235.1618154047@sss.pgh.pa.us",
"msg_date": "Fri, 24 Sep 2021 14:42:33 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hallo Peter,\n\n> It turns out that your v8 patch still has the issue complained about in [0]. \n> The issue is that after COMMIT AND CHAIN, the internal savepoint is gone, but \n> the patched psql still thinks it should be there and tries to release it, \n> which leads to errors.\n\nIndeed. Thanks for the catch.\n\nAttached v9 integrates your tests and makes them work.\n\n-- \nFabien.",
"msg_date": "Sat, 25 Sep 2021 11:28:52 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hello Peter,\n\n> Attached v9 integrates your tests and makes them work.\n\nAttached v11 is a rebase.\n\n-- \nFabien.",
"msg_date": "Sat, 2 Oct 2021 16:31:04 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 02.10.21 16:31, Fabien COELHO wrote:\n>> Attached v9 integrates your tests and makes them work.\n> \n> Attached v11 is a rebase.\n\nThis patch still has a few of the problems reported earlier this year.\n\nIn [0], it was reported that certain replication commands result in \ninfinite loops because of faulty error handling. This still happens. I \nwrote a test for it, attached here. (I threw in a few more basic tests, \njust to have some more coverage that was lacking, and to have a file to \nput the new test in.)\n\nIn [1], it was reported that server crashes produce duplicate error \nmessages. This also still happens. I didn't write a test for it, maybe \nyou have an idea. (Obviously, we could check whether the error message \nis literally there twice in the output, but that doesn't seem very \ngeneral.) But it's easy to test manually: just have psql connect, shut \ndown the server, then run a query.\n\nAdditionally, I looked into the Coverity issue reported in [2]. That \none is fixed, but I figured it would be good to be able to check your \npatches with a static analyzer in a similar way. I don't have the \nability to run Coverity locally, so I looked at scan-build and fixed a \nfew minor warnings, also attached as a patch. Your current patch \nappears to be okay in that regard.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/69C0B369-570C-4524-8EE4-BCCACECB6BEE@amazon.com\n\n[1]: https://www.postgresql.org/message-id/2902362.1618244606@sss.pgh.pa.us\n\n[2]: https://www.postgresql.org/message-id/2680034.1618157764@sss.pgh.pa.us",
"msg_date": "Fri, 8 Oct 2021 14:15:31 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "> On 8 Oct 2021, at 14:15, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 02.10.21 16:31, Fabien COELHO wrote:\n>>> Attached v9 integrates your tests and makes them work.\n>> Attached v11 is a rebase.\n> \n> This patch still has a few of the problems reported earlier this year.\n\nThe patch fails to apply and the thread seems to have taken a nap. You\nmentioned on the \"dynamic result sets support in extended query protocol\"\nthread [0] that you were going to work on this as a pre-requisite for that\npatch. Is that still the plan so we should keep this in the Commitfest?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] https://www.postgresql.org/message-id/6f038f18-0f2b-5271-a56f-1770577f246c%40enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 23 Nov 2021 10:17:23 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\nHello Daniel,\n\n>> This patch still has a few of the problems reported earlier this year.\n>\n> The patch fails to apply and the thread seems to have taken a nap.\n\nI'm not napping:-) I just do not have enough time available this month. I \nintend to work on the patch in the next CF (January). AFAICR there is one \nnecessary rebase and one bug to fix.\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 24 Nov 2021 17:21:19 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hello Peter,\n\nI finally took some time to look at this.\n\n>> Attached v11 is a rebase.\n>\n> This patch still has a few of the problems reported earlier this year.\n>\n> In [0], it was reported that certain replication commands result in infinite \n> loops because of faulty error handling. This still happens. I wrote a test \n> for it, attached here. (I threw in a few more basic tests, just to have some \n> more coverage that was lacking, and to have a file to put the new test in.)\n\nHmmm… For some unclear reason on errors on a PGRES_COPY_* state \nPQgetResult keeps on returning an empty result. PQexec manually ignores \nit, so I did the same with a comment, but for me the real bug is somehow \nin PQgetResult behavior…\n\n> In [1], it was reported that server crashes produce duplicate error messages. \n> This also still happens. I didn't write a test for it, maybe you have an \n> idea. (Obviously, we could check whether the error message is literally \n> there twice in the output, but that doesn't seem very general.) But it's \n> easy to test manually: just have psql connect, shut down the server, then run \n> a query.\n\nThis is also a feature/bug of libpq which happens to be hidden by PQexec: \nwhen one command crashes PQgetResult actually returns *2* results. First \none with the FATAL message, second one when libpq figures out that the \nconnection was lost with the second message appended to the first. PQexec \njust happen to silently ignore the first result. I added a manual reset of \nthe error message when first shown so that it is not shown twice. It is \nunclear to me whether the reset should be somewhere in libpq instead. I \nadded a voluntary crash at the end of the psql test.\n\nAttached v12 somehow fixes these issues in \"psql\" code rather than in \nlibpq.\n\n-- \nFabien.",
"msg_date": "Thu, 23 Dec 2021 07:40:37 -0400 (AST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 23.12.21 12:40, Fabien COELHO wrote:\n>> In [0], it was reported that certain replication commands result in \n>> infinite loops because of faulty error handling. This still happens. \n>> I wrote a test for it, attached here. (I threw in a few more basic \n>> tests, just to have some more coverage that was lacking, and to have a \n>> file to put the new test in.)\n> \n> Hmmm… For some unclear reason on errors on a PGRES_COPY_* state \n> PQgetResult keeps on returning an empty result. PQexec manually ignores \n> it, so I did the same with a comment, but for me the real bug is somehow \n> in PQgetResult behavior…\n> \n>> In [1], it was reported that server crashes produce duplicate error \n>> messages. This also still happens. I didn't write a test for it, \n>> maybe you have an idea. (Obviously, we could check whether the error \n>> message is literally there twice in the output, but that doesn't seem \n>> very general.) But it's easy to test manually: just have psql \n>> connect, shut down the server, then run a query.\n> \n> This is also a feature/bug of libpq which happens to be hidden by \n> PQexec: when one command crashes PQgetResult actually returns *2* \n> results. First one with the FATAL message, second one when libpq figures \n> out that the connection was lost with the second message appended to the \n> first. PQexec just happen to silently ignore the first result. I added a \n> manual reset of the error message when first shown so that it is not \n> shown twice. It is unclear to me whether the reset should be somewhere \n> in libpq instead. I added a voluntary crash at the end of the psql test.\n\nI agree that these two behaviors in libpq are dubious, especially the \nsecond one. I want to spend some time analyzing this more and see if \nfixes in libpq might be appropriate.\n\n\n",
"msg_date": "Mon, 27 Dec 2021 13:44:29 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\n>> [...]\n>\n> I agree that these two behaviors in libpq are dubious, especially the \n> second one. I want to spend some time analyzing this more and see if \n> fixes in libpq might be appropriate.\n\nOk.\n\nMy analysis is that fixing libpq behavior is not in the scope of a psql \npatch, and that if I was to do that it would surely delay the patch even \nfurther. Also these issues/features are corner cases that probably very \nfew people bumped into.\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 29 Dec 2021 08:42:53 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 23.12.21 12:40, Fabien COELHO wrote:\n> This is also a feature/bug of libpq which happens to be hidden by \n> PQexec: when one command crashes PQgetResult actually returns *2* \n> results. First one with the FATAL message, second one when libpq figures \n> out that the connection was lost with the second message appended to the \n> first. PQexec just happen to silently ignore the first result. I added a \n> manual reset of the error message when first shown so that it is not \n> shown twice. It is unclear to me whether the reset should be somewhere \n> in libpq instead. I added a voluntary crash at the end of the psql test.\n\nWith this \"voluntary crash\", the regression test output is now\n\n psql ... ok (test process exited with \nexit code 2) 281 ms\n\nNormally, I'd expect this during development if there was a crash \nsomewhere, but showing this during a normal run now, and moreover still \nsaying \"ok\", is quite weird and confusing. Maybe this type of test \nshould be done in the TAP framework instead.\n\n\n",
"msg_date": "Mon, 3 Jan 2022 17:35:31 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\nHello Peter,\n\n> With this \"voluntary crash\", the regression test output is now\n>\n> psql ... ok (test process exited with exit \n> code 2) 281 ms\n>\n> Normally, I'd expect this during development if there was a crash somewhere, \n> but showing this during a normal run now, and moreover still saying \"ok\",\n\nWell, from a testing perspective, the crash is voluntary and it is \nindeed ok:-)\n\n> is quite weird and confusing. Maybe this type of test should be done in \n> the TAP framework instead.\n\nIt could. Another simpler option: add a \"psql_voluntary_crash.sql\" with \njust that test instead of modifying the \"psql.sql\" test script? That would \nkeep the test exit code information, but the name of the script would make \nthings clearer?\n\nAlso, if non-zero statuses do not look so ok, should they be noted as bad?\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 4 Jan 2022 08:55:34 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hello Peter,\n\n>> quite weird and confusing. Maybe this type of test should be done in \n>> the TAP framework instead.\n\nAttached v13 where the crash test is moved to tap.\n\n-- \nFabien.",
"msg_date": "Sat, 8 Jan 2022 19:32:36 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-08 19:32:36 +0100, Fabien COELHO wrote:\n> Attached v13 where the crash test is moved to tap.\n\nThe reason this test constantly fails on cfbot windows is a use-after-free\nbug.\n\nI figured that out in the context of another thread, so the debugging is\nthere:\n\nhttps://postgr.es/m/20220113054123.ib4khtafgq34lv4z%40alap3.anarazel.de\n> Ah, I see the bug. It's a use-after-free introduced in the patch:\n>\n> SendQueryAndProcessResults(const char *query, double *pelapsed_msec,\n> \tbool is_watch, const printQueryOpt *opt, FILE *printQueryFout, bool *tx_ended)\n> \n> \n> ...\n> \t/* first result */\n> \tresult = PQgetResult(pset.db);\n> \n> \n> \twhile (result != NULL)\n> \n> \n> ...\n> \t\tif (!AcceptResult(result, false))\n> \t\t{\n> ...\n> \t\t\tClearOrSaveResult(result);\n> \t\t\tsuccess = false;\n> \n> \n> \t\t\t/* and switch to next result */\n> \t\t\tresult_status = PQresultStatus(result);\n> \t\t\tif (result_status == PGRES_COPY_BOTH ||\n> \t\t\t\tresult_status == PGRES_COPY_OUT ||\n> \t\t\t\tresult_status == PGRES_COPY_IN)\n> \n> \n> So we called ClearOrSaveResult() with did a PQclear(), and then we go and call\n> PQresultStatus().\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 12 Jan 2022 21:44:33 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hello Andres,\n\n> The reason this test constantly fails on cfbot windows is a use-after-free\n> bug.\n\nIndeed! Thanks a lot for the catch and the debug!\n\nThe ClearOrSaveResult function is quite annoying because it may or may not \nclear the result as a side effect.\n\nAttached v14 moves the status extraction before the possible clear. I've \nadded a couple of results = NULL after such calls in the code.\n\n-- \nFabien.",
"msg_date": "Sat, 15 Jan 2022 10:00:11 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 15.01.22 10:00, Fabien COELHO wrote:\n>> The reason this test constantly fails on cfbot windows is a \n>> use-after-free\n>> bug.\n> \n> Indeed! Thanks a lot for the catch and the debug!\n> \n> The ClearOrSaveResult function is quite annoying because it may or may \n> not clear the result as a side effect.\n> \n> Attached v14 moves the status extraction before the possible clear. I've \n> added a couple of results = NULL after such calls in the code.\n\nIn the psql.sql test file, the test I previously added concluded with \n\\set ECHO none, which was a mistake that I have now fixed. As a result, \nthe tests that you added after that point didn't show their input lines, \nwhich was weird and not intentional. So the tests will now show a \ndifferent output.\n\nI notice that this patch has recently gained a new libpq function. I \ngather that this is to work around the misbehaviors in libpq that we \nhave discussed. But I think if we are adding a libpq API function to \nwork around a misbehavior in libpq, we might as well fix the misbehavior \nin libpq to begin with. Adding a new public libpq function is a \nsignificant step, needs documentation, etc. It would be better to do \nwithout. Also, it makes one wonder how others are supposed to use this \nmultiple-results API properly, if even psql can't do it without \nextending libpq. Needs more thought.\n\n\n",
"msg_date": "Tue, 18 Jan 2022 17:36:59 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\nHello Peter,\n\n>> Attached v14 moves the status extraction before the possible clear. I've \n>> added a couple of results = NULL after such calls in the code.\n>\n> In the psql.sql test file, the test I previously added concluded with \\set \n> ECHO none, which was a mistake that I have now fixed. As a result, the tests \n> that you added after that point didn't show their input lines, which was \n> weird and not intentional. So the tests will now show a different output.\n\nOk.\n\n> I notice that this patch has recently gained a new libpq function. I gather \n> that this is to work around the misbehaviors in libpq that we have discussed.\n\nIndeed.\n\n> But I think if we are adding a libpq API function to work around a \n> misbehavior in libpq, we might as well fix the misbehavior in libpq to \n> begin with. Adding a new public libpq function is a significant step, \n> needs documentation, etc.\n\nI'm not so sure.\n\nThe choice is (1) change the behavior of an existing function or (2) add a \nnew function. Whatever the existing function does, the usual answer to API \nchanges is \"someone is going to complain because it breaks their code\", so \n\"Returned with feedback\", hence I did not even try. The advantage of (2) \nis that it does not harm anyone to have a new function that they just do \nnot need to use.\n\n> It would be better to do without. Also, it makes one wonder how others \n> are supposed to use this multiple-results API properly, if even psql \n> can't do it without extending libpq. Needs more thought.\n\nFine with me! Obviously I'm okay if libpq is repaired instead of writing \nstrange code on the client to deal with strange behavior.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 23 Jan 2022 18:17:37 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 23.01.22 18:17, Fabien COELHO wrote:\n>> But I think if we are adding a libpq API function to work around a \n>> misbehavior in libpq, we might as well fix the misbehavior in libpq to \n>> begin with. Adding a new public libpq function is a significant step, \n>> needs documentation, etc.\n> \n> I'm not so sure.\n> \n> The choice is (1) change the behavior of an existing function or (2) add \n> a new function. Whatever the existing function does, the usual anwer to \n> API changes is \"someone is going to complain because it breaks their \n> code\", so \"Returned with feedback\", hence I did not even try. The \n> advantage of (2) is that it does not harm anyone to have a new function \n> that they just do not need to use.\n> \n>> It would be better to do without. Also, it makes one wonder how \n>> others are supposed to use this multiple-results API properly, if even \n>> psql can't do it without extending libpq. Needs more thought.\n> \n> Fine with me! Obviously I'm okay if libpq is repaired instead of writing \n> strange code on the client to deal with strange behavior.\n\nI have a new thought on this, as long as we are looking into libpq. Why \ncan't libpq provide a variant of PQexec() that returns all results, \ninstead of just the last one. It has all the information, all it has to \ndo is return the results instead of throwing them away. Then the \nchanges in psql would be very limited, and we don't have to re-invent \nPQexec() from its pieces in psql. And this would also make it easier \nfor other clients and user code to make use of this functionality more \neasily.\n\nAttached is a rough draft of what this could look like. It basically \nworks. Thoughts?",
"msg_date": "Thu, 27 Jan 2022 14:30:56 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hello Peter,\n\n>>> It would be better to do without. Also, it makes one wonder how others \n>>> are supposed to use this multiple-results API properly, if even psql can't \n>>> do it without extending libpq. Needs more thought.\n>> \n>> Fine with me! Obviously I'm okay if libpq is repaired instead of writing \n>> strange code on the client to deal with strange behavior.\n>\n> I have a new thought on this, as long as we are looking into libpq. Why \n> can't libpq provide a variant of PQexec() that returns all results, instead \n> of just the last one. It has all the information, all it has to do is return \n> the results instead of throwing them away. Then the changes in psql would be \n> very limited, and we don't have to re-invent PQexec() from its pieces in \n> psql. And this would also make it easier for other clients and user code to \n> make use of this functionality more easily.\n>\n> Attached is a rough draft of what this could look like. It basically works. \n> Thoughts?\n\nMy 0.02€:\n\nWith this approach results are not available till the last one has been \nreturned? If so, it loses the nice asynchronous property of getting \nresults as they come when they come? This might or might not be desirable \ndepending on the use case. For \"psql\", ISTM that we should want \nimmediate and asynchronous anyway??\n\nI'm unclear about what happens wrt client-side data buffering if \nseveral large results are returned? COPY??\n\nAlso, I guess the user must free the returned array on top of closing all \nresults?\n\n-- \nFabien.",
"msg_date": "Sat, 29 Jan 2022 15:40:00 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 29.01.22 15:40, Fabien COELHO wrote:\n> With this approach results are not available till the last one has been \n> returned? If so, it loses the nice asynchronous property of getting \n> results as they come when they come? This might or might not be \n> desirable depending on the use case. For \"psql\", ISTM that we should \n> want immediate and asynchronous anyway??\n\nWell, I'm not sure. I'm thinking about this in terms of the dynamic \nresult sets from stored procedures feature. That is typically used for \nsmall result sets. The interesting feature there is that the result \nsets can have different shapes. But of course people can use it \ndifferently. What is your motivation for this feature, and what is your \nexperience how people would use it?\n\n\n\n",
"msg_date": "Thu, 3 Feb 2022 14:54:15 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "I wrote a few more small tests for psql to address uncovered territory \nin SendQuery() especially:\n\n- \\timing\n- client encoding handling\n- notifications\n\nWhat's still missing is:\n\n- \\watch\n- pagers\n\nFor \\watch, I think one would need something like the current cancel \ntest (since you need to get the psql pid to send a signal to stop the \nwatch). It would work in principle, but it will require more work to \nrefactor the cancel test.\n\nFor pagers, I don't know. It would be pretty easy to write a simple \nscript that acts as a pass-through pager and check that it is called. \nThere were some discussions earlier in the thread that some version of \nsome patch had broken some use of pagers. Does anyone remember details? \n Anything worth testing specifically?",
"msg_date": "Tue, 22 Feb 2022 15:31:16 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 15.01.22 10:00, Fabien COELHO wrote:\n>> The reason this test constantly fails on cfbot windows is a \n>> use-after-free\n>> bug.\n> \n> Indeed! Thanks a lot for the catch and the debug!\n> \n> The ClearOrSaveResult function is quite annoying because it may or may \n> not clear the result as a side effect.\n> \n> Attached v14 moves the status extraction before the possible clear. I've \n> added a couple of results = NULL after such calls in the code.\n\nAre you planning to send a rebased patch for this commit fest?\n\n\n",
"msg_date": "Fri, 4 Mar 2022 09:50:20 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "> Are you planning to send a rebased patch for this commit fest?\n\nArgh, I did it in a reply in another thread:-( Attached v15.\n\nSo as to help move things forward, I'd suggest that we should not care \ntoo much about corner case repetition of some error messages which are due \nto libpq internals, so I could remove the ugly buffer reset from the patch \nand have the repetition, and if/when the issue is fixed later in libpq \nthen the repetition will be removed, fine! The issue is that we just \nexpose the strange behavior of libpq, which is for libpq to solve, not psql.\n\nWhat do you think?\n\n-- \nFabien.",
"msg_date": "Fri, 4 Mar 2022 14:48:27 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": ">> Are you planning to send a rebased patch for this commit fest?\n>\n> Argh, I did it in a reply in another thread:-( Attached v15.\n>\n> So as to help moves things forward, I'd suggest that we should not to care \n> too much about corner case repetition of some error messages which are due to \n> libpq internals, so I could remove the ugly buffer reset from the patch and \n> have the repetition, and if/when the issue is fixed later in libpq then the \n> repetition will be removed, fine! The issue is that we just expose the \n> strange behavior of libpq, which is libpq to solve, not psql.\n\nSee attached v16 which removes the libpq workaround.\n\n-- \nFabien.",
"msg_date": "Sat, 12 Mar 2022 17:27:48 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 12.03.22 17:27, Fabien COELHO wrote:\n> \n>>> Are you planning to send a rebased patch for this commit fest?\n>>\n>> Argh, I did it in a reply in another thread:-( Attached v15.\n>>\n>> So as to help moves things forward, I'd suggest that we should not to \n>> care too much about corner case repetition of some error messages \n>> which are due to libpq internals, so I could remove the ugly buffer \n>> reset from the patch and have the repetition, and if/when the issue is \n>> fixed later in libpq then the repetition will be removed, fine! The \n>> issue is that we just expose the strange behavior of libpq, which is \n>> libpq to solve, not psql.\n> \n> See attached v16 which removes the libpq workaround.\n\nI suppose this depends on\n\nhttps://www.postgresql.org/message-id/flat/ab4288f8-be5c-57fb-2400-e3e857f53e46%40enterprisedb.com\n\ngetting committed, because right now this makes the psql TAP tests fail \nbecause of the duplicate error message.\n\nHow should we handle that?\n\n\nIn this part of the patch, there seems to be part of a sentence missing:\n\n+ * Marshal the COPY data. Either subroutine will get the\n+ * connection out of its COPY state, then\n+ * once and report any error. Return whether all was ok.\n\n\n",
"msg_date": "Thu, 17 Mar 2022 15:51:45 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I suppose this depends on\n> https://www.postgresql.org/message-id/flat/ab4288f8-be5c-57fb-2400-e3e857f53e46%40enterprisedb.com\n> getting committed, because right now this makes the psql TAP tests fail \n> because of the duplicate error message.\n\nUmm ... wasn't 618c16707 what you need here?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 17 Mar 2022 11:01:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hello Peter,\n\n>> See attached v16 which removes the libpq workaround.\n>\n> I suppose this depends on\n>\n> https://www.postgresql.org/message-id/flat/ab4288f8-be5c-57fb-2400-e3e857f53e46%40enterprisedb.com\n>\n> getting committed, because right now this makes the psql TAP tests fail \n> because of the duplicate error message.\n>\n> How should we handle that?\n\nOk, it seems I got the patch wrong.\n\nAttached v17 is another try. The point is to record the current status, \nwhatever it is, buggy or not, and to update the test when libpq fixes \nthings, whenever this is done.\n\n> In this part of the patch, there seems to be part of a sentence missing:\n\nIndeed! The missing part was put back in v17.\n\n-- \nFabien.",
"msg_date": "Thu, 17 Mar 2022 19:04:33 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 17.03.22 19:04, Fabien COELHO wrote:\n> \n> Hello Peter,\n> \n>>> See attached v16 which removes the libpq workaround.\n>>\n>> I suppose this depends on\n>>\n>> https://www.postgresql.org/message-id/flat/ab4288f8-be5c-57fb-2400-e3e857f53e46%40enterprisedb.com \n>>\n>>\n>> getting committed, because right now this makes the psql TAP tests \n>> fail because of the duplicate error message.\n>>\n>> How should we handle that?\n> \n> Ok, it seems I got the patch wrong.\n> \n> Attached v17 is another try. The point is to record the current status, \n> whatever it is, buggy or not, and to update the test when libpq fixes \n> things, whenever this is done.\n\nYour patch contains this test case:\n\n+# Test voluntary crash\n+my ($ret, $out, $err) = $node->psql(\n+ 'postgres',\n+ \"SELECT 'before' AS running;\\n\" .\n+ \"SELECT pg_terminate_backend(pg_backend_pid());\\n\" .\n+ \"SELECT 'AFTER' AS not_running;\\n\");\n+\n+is($ret, 2, \"server stopped\");\n+like($out, qr/before/, \"output before crash\");\n+ok($out !~ qr/AFTER/, \"no output after crash\");\n+is($err, 'psql:<stdin>:2: FATAL: terminating connection due to \nadministrator command\n+psql:<stdin>:2: FATAL: terminating connection due to administrator command\n+server closed the connection unexpectedly\n+ This probably means the server terminated abnormally\n+ before or while processing the request.\n+psql:<stdin>:2: fatal: connection to server was lost', \"expected error \nmessage\");\n\nThe expected output (which passes) contains this line twice:\n\npsql:<stdin>:2: FATAL: terminating connection due to administrator command\npsql:<stdin>:2: FATAL: terminating connection due to administrator command\n\nIf I paste this test case into current master without your patch, I only \nget this line once. So your patch is changing this output. The whole \npoint of the libpq fixes was to not have this duplicate output. So I \nthink something is still wrong somewhere.\n\n\n",
"msg_date": "Tue, 22 Mar 2022 15:52:21 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 17.03.22 19:04, Fabien COELHO wrote:\n> Indeed! The missing part was put back in v17.\n\nSome unrelated notes on this v17 patch:\n\n-use Test::More;\n+use Test::More tests => 41;\n\nThe test counts are not needed/wanted anymore.\n\n\n+\n+\\set ECHO none\n+\n\nThis seems inappropriate.\n\n\n+--\n+-- autocommit\n+--\n\n+--\n+-- test ON_ERROR_ROLLBACK\n+--\n\nThis test file already contains tests for autocommit and \nON_ERROR_ROLLBACK. If you want to change those, please add yours into \nthe existing sections, not make new ones. I'm not sure if your tests \nadd any new coverage, or if they are just duplicates.\n\n\n",
"msg_date": "Tue, 22 Mar 2022 16:01:17 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\nHello Peter,\n\n>> Attached v17 is another try. The point is to record the current status, \n>> whatever it is, buggy or not, and to update the test when libpq fixes \n>> things, whenever this is done.\n>\n> [...]\n>\n> The expected output (which passes) contains this line twice:\n>\n> psql:<stdin>:2: FATAL: terminating connection due to administrator command\n> psql:<stdin>:2: FATAL: terminating connection due to administrator command\n\n\n> If I paste this test case into current master without your patch, I only get \n> this line once. So your patch is changing this output. The whole point of \n> the libpq fixes was to not have this duplicate output. So I think something \n> is still wrong somewhere.\n\nHmmm. Yes and no:-)\n\nThe previous path inside libpq silently ignores intermediate results, it \nskips all results to keep only the last one. The new approach does not \ndiscard results silently, hence the duplicated output, because they are \nactually there and have always been there in the first place, they were \njust ignored: The previous \"good\" result is really a side effect of a bad \nimplementation in a corner case, which just becomes apparent when opening \nthe list of results.\n\nSo my opinion is still to dissociate the libpq \"bug/behavior\" fix from \nthis feature, as they are only loosely connected, because it is a very \nnarrow corner case anyway.\n\nAn alternative would be to remove the test case, but I'd prefer that it is \nkept.\n\nIf you want to wait for libpq to provide a solution for this corner case, \nI'm afraid that \"never\" is the likely result, especially as no test case \nexercises this path to show that there is a problem somewhere, so nobody \nshould care to fix it. I'm not sure it is even worth it given the highly \nspecial situation which triggers the issue, which is not such an actual \nproblem (ok, the user is told twice that there was a connection loss, no \nbig deal).\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 23 Mar 2022 13:58:36 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 23.03.22 13:58, Fabien COELHO wrote:\n> If you want to wait for libpq to provide a solution for this corner \n> case, I'm afraid that \"never\" is the likely result, especially as no \n> test case exercices this path to show that there is a problem somewhere, \n> so nobody should care to fix it. I'm not sure it is even worth it given \n> the highly special situation which triggers the issue, which is not such \n> an actual problem (ok, the user is told twice that there was a \n> connection loss, no big deal).\n\nAs Tom said earlier, wasn't this fixed by 618c16707? If not, is there \nany other discussion on the specifics of this issue? I'm not aware of one.\n\n\n",
"msg_date": "Fri, 25 Mar 2022 16:04:21 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "> As Tom said earlier, wasn't this fixed by 618c16707? If not, is there any \n> other discussion on the specifics of this issue? I'm not aware of one.\n\nHmmm… I'll try to understand why the doubled message seems to be still \nthere.\n\n-- \nFabien.",
"msg_date": "Fri, 25 Mar 2022 16:46:52 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hello Peter,\n\n>> As Tom said earlier, wasn't this fixed by 618c16707? If not, is there any \n>> other discussion on the specifics of this issue? I'm not aware of one.\n\nThe answer is that I had kept psql's calls to PQerrorMessage, which \nreports errors from the connection, whereas it needed to change to \nPQresultErrorMessage to benefit from the libpq improvement.\n\nI made the added autocommit/on_error_rollback tests at the end really \nfocus on multi-statement queries (\\;), as was more or less intended.\n\nI updated the tap test.\n\n-- \nFabien.",
"msg_date": "Sat, 26 Mar 2022 16:55:27 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Attached a rebase.\n\n-- \nFabien.",
"msg_date": "Thu, 31 Mar 2022 18:54:17 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "> Attached a rebase.\n\nAgain, after the SendQuery refactoring extraction.\n\n-- \nFabien.",
"msg_date": "Fri, 1 Apr 2022 07:46:39 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 01.04.22 07:46, Fabien COELHO wrote:\n>> Attached a rebase.\n> \n> Again, after the SendQuery refactoring extraction.\n\nI'm doing this locally, so don't feel obliged to send more of these. ;-)\n\nI've started committing this now, in pieces.\n\n\n",
"msg_date": "Fri, 1 Apr 2022 16:09:55 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "\n>> Again, after the SendQuery refactoring extraction.\n>\n> I'm doing this locally, so don't feel obliged to send more of these. ;-)\n\nGood for me :-)\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 2 Apr 2022 15:26:02 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "On 02.04.22 15:26, Fabien COELHO wrote:\n> \n>>> Again, after the SendQuery refactoring extraction.\n>>\n>> I'm doing this locally, so don't feel obliged to send more of these. ;-)\n> \n> Good for me :-)\n\nThis has been committed.\n\nI reduced some of your stylistic changes in order to keep the surface \narea of this complicated patch small. We can apply some of those later \nif you are interested. Right now, let's let it settle a bit.\n\n\n\n",
"msg_date": "Mon, 4 Apr 2022 23:32:50 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-04 23:32:50 +0200, Peter Eisentraut wrote:\n> This has been committed.\n\nIt's somewhat annoying that this made pg_regress even more verbose than before:\n\n============== removing existing temp instance ==============\n============== creating temporary instance ==============\n============== initializing database system ==============\n============== starting postmaster ==============\nrunning on port 51696 with PID 2203449\n============== creating database \"regression\" ==============\nCREATE DATABASE\nALTER DATABASE\nALTER DATABASE\nALTER DATABASE\nALTER DATABASE\nALTER DATABASE\nALTER DATABASE\n============== running regression test queries ==============\n\nUnfortunately it appears that neither can CREATE DATABASE set GUCs, nor can\nALTER DATABASE set multiple GUCs in one statement.\n\nPerhaps we can just set SHOW_ALL_RESULTS off for that psql command?\n\n- Andres\n\n\n",
"msg_date": "Tue, 5 Apr 2022 19:06:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option - pg_regress output"
},
{
"msg_contents": "On 06.04.22 04:06, Andres Freund wrote:\n> On 2022-04-04 23:32:50 +0200, Peter Eisentraut wrote:\n>> This has been committed.\n> \n> It's somewhat annoying that made pg_regress even more verbose than before:\n> \n> ============== removing existing temp instance ==============\n> ============== creating temporary instance ==============\n> ============== initializing database system ==============\n> ============== starting postmaster ==============\n> running on port 51696 with PID 2203449\n> ============== creating database \"regression\" ==============\n> CREATE DATABASE\n> ALTER DATABASE\n> ALTER DATABASE\n> ALTER DATABASE\n> ALTER DATABASE\n> ALTER DATABASE\n> ALTER DATABASE\n> ============== running regression test queries ==============\n> \n> Unfortunately it appears that neither can CREATE DATABASE set GUCs, nor can\n> ALTER DATABASE set multiple GUCs in one statement.\n> \n> Perhaps we can just set SHOW_ALL_RESULTS off for that psql command?\n\nDo you mean the extra \"ALTER DATABASE\" lines? Couldn't we just turn all \nof those off? AFAICT, no one likes them.\n\n\n",
"msg_date": "Wed, 6 Apr 2022 10:37:29 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option - pg_regress output"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-06 10:37:29 +0200, Peter Eisentraut wrote:\n> On 06.04.22 04:06, Andres Freund wrote:\n> > On 2022-04-04 23:32:50 +0200, Peter Eisentraut wrote:\n> > > This has been committed.\n> > \n> > It's somewhat annoying that made pg_regress even more verbose than before:\n> > \n> > ============== removing existing temp instance ==============\n> > ============== creating temporary instance ==============\n> > ============== initializing database system ==============\n> > ============== starting postmaster ==============\n> > running on port 51696 with PID 2203449\n> > ============== creating database \"regression\" ==============\n> > CREATE DATABASE\n> > ALTER DATABASE\n> > ALTER DATABASE\n> > ALTER DATABASE\n> > ALTER DATABASE\n> > ALTER DATABASE\n> > ALTER DATABASE\n> > ============== running regression test queries ==============\n> > \n> > Unfortunately it appears that neither can CREATE DATABASE set GUCs, nor can\n> > ALTER DATABASE set multiple GUCs in one statement.\n> > \n> > Perhaps we can just set SHOW_ALL_RESULTS off for that psql command?\n> \n> Do you mean the extra \"ALTER DATABASE\" lines? Couldn't we just turn all of\n> those off? AFAICT, no one likes them.\n\nYea. Previously there was just CREATE DATABASE. And yes, it seems like we\nshould use -q in psql_start_command().\n\nDaniel has a patch to shrink pg_regress output overall, but it came too late\nfor 15. It'd still be good to avoid further increasing the size till then IMO.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 6 Apr 2022 10:05:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: psql - add SHOW_ALL_RESULTS option - pg_regress output"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm trying to use the SPI to save the executed plans in the ExecutorEnd. When the plan involves multiple workers, the insert operations would trigger an error: cannot execute INSERT during a parallel operation.\n\nI wonder if there's a different hook I can use when there's a gather node? or other ways to get around?\n\nThank you,\nDonald Dong\n\n",
"msg_date": "Sat, 13 Apr 2019 19:11:09 -0700",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Execute INSERT during a parallel operation"
},
{
"msg_contents": "On Sun, 14 Apr 2019 at 04:11, Donald Dong <xdong@csumb.edu> wrote:\n>\n> Hi,\n>\n> I'm trying to use the SPI to save the executed plans in the ExecutorEnd. When the plan involves multiple workers, the insert operations would trigger an error: cannot execute INSERT during a parallel operation.\n>\n> I wonder if there's a different hook I can use when there's a gather node? or other ways to get around?\n\nA bit more detail on what exactly you are trying to do, such as the\nquery or code you are dealing with, would be helpful in providing\nsuggestions.\n\n\n-- \nRegards,\nRafia Sabih\n\n\n",
"msg_date": "Fri, 26 Apr 2019 16:51:26 +0200",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Execute INSERT during a parallel operation"
}
] |
[
{
"msg_contents": "Hi,\n\nBoth tables in $subject (in datatype.sgml and xfunc.sgml, respectively)\ncontain similar information (though the xfunc one mentions C structs and\nheader files, and the datatype one does not, but has a description column)\nand seem similarly out-of-date with respect to the currently supported\ntypes ... though not identically out-of-date; they have different numbers\nof rows, and different types that are missing.\n\nHow crazy an idea would it be to have include/catalog/pg_type.dat\naugmented with description, ctypename, and cheader fields, and let\nboth tables be generated, with their respective columns?\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Sun, 14 Apr 2019 02:49:57 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": true,
"msg_subject": "doc: datatype-table and xfunc-c-type-table"
},
{
"msg_contents": "On Sun, Apr 14, 2019 at 02:49:57AM -0400, Chapman Flack wrote:\n> Hi,\n> \n> Both tables in $subject (in datatype.sgml and xfunc.sgml, respectively)\n> contain similar information (though the xfunc one mentions C structs and\n> header files, and the datatype one does not, but has a description column)\n> and seem similarly out-of-date with respect to the currently supported\n> types ... though not identically out-of-date; they have different numbers\n> of rows, and different types that are missing.\n> \n> How crazy an idea would it be to have include/catalog/pg_type.dat\n> augmented with description, ctypename, and cheader fields, and let\n> both tables be generated, with their respective columns?\n\nNot at all. Although literate programming didn't catch on, having a\nsingle point of truth is generally good practice.\n\nThere are almost certainly other parts of the documentation that\nshould also be generated from the source code, but that's a matter for\nseparate threads for the cases where that would make sense.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 23 Apr 2019 17:57:32 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: doc: datatype-table and xfunc-c-type-table"
}
] |
[
{
"msg_contents": "Identity columns don't work if they own more than one sequence.\n\nSo if one tries to convert a \"serial\" column to an identity column,\nthe following can happen:\n\ntest=> CREATE TABLE ser(id serial);\nCREATE TABLE\ntest=> ALTER TABLE ser ALTER id ADD GENERATED ALWAYS AS IDENTITY;\nERROR: column \"id\" of relation \"ser\" already has a default value\n\nHm, ok, let's drop the column default value.\n\ntest=> ALTER TABLE ser ALTER id DROP DEFAULT;\nALTER TABLE\n\nNow it works:\n\ntest=> ALTER TABLE ser ALTER id ADD GENERATED ALWAYS AS IDENTITY;\nALTER TABLE\n\nBut not very much:\n\ntest=> INSERT INTO ser (id) VALUES (DEFAULT);\nERROR: more than one owned sequence found\n\n\nI propose that we check if there already is a dependent sequence\nbefore adding an identity column.\n\nThe attached patch does that, and also forbids setting the ownership\nof a sequence to an identity column.\n\nI think this should be backpatched.\n\nYours,\nLaurenz Albe",
"msg_date": "Sun, 14 Apr 2019 17:51:47 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Identity columns should own only one sequence"
},
{
"msg_contents": "I wrote:\n> Identity columns don't work if they own more than one sequence.\n> \n[...]\n> test=> INSERT INTO ser (id) VALUES (DEFAULT);\n> ERROR: more than one owned sequence found\n> \n> \n> I propose that we check if there already is a dependent sequence\n> before adding an identity column.\n> \n> The attached patch does that, and also forbids setting the ownership\n> of a sequence to an identity column.\n\nAlternatively, maybe getOwnedSequence should only consider sequences\nwith an \"internal\" dependency on the column. That would avoid the problem\nwithout forbidding anything, since normal OWNED BY dependencies are \"auto\".\n\nWhat do you think?\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Sun, 14 Apr 2019 20:15:09 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Identity columns should own only one sequence"
},
{
"msg_contents": "On Sun, 2019-04-14 at 20:15 +0200, I wrote:\n> I wrote:\n> > Identity columns don't work if they own more than one sequence.\n> \n> Alternatively, maybe getOwnedSequence should only consider sequences\n> with an \"internal\" dependency on the column. That would avoid the problem\n> without forbidding anything, since normal OWNED BY dependencies are \"auto\".\n> \n> What do you think?\n\nHere is a patch that illustrates the second approach.\n\nI'll add this thread to the next commitfest.\n\nYours,\nLaurenz Albe",
"msg_date": "Wed, 24 Apr 2019 16:48:25 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Identity columns should own only one sequence"
},
{
"msg_contents": "On Sun, Apr 14, 2019 at 05:51:47PM +0200, Laurenz Albe wrote:\n> test=> INSERT INTO ser (id) VALUES (DEFAULT);\n> ERROR: more than one owned sequence found\n\nYes, this should never be user-triggerable, so it seems that we need to\nfix and back-patch something if possible.\n\n> I propose that we check if there already is a dependent sequence\n> before adding an identity column.\n\nThat looks awkward. Shouldn't we make sure that when dropping the\ndefault associated with a serial column then the dependency between\nthe column and the sequence is removed instead? This implies more\ncomplication in ATExecColumnDefault().\n\n> The attached patch does that, and also forbids setting the ownership\n> of a sequence to an identity column.\n> \n> I think this should be backpatched.\n\nCould you add some test cases with what you think is adapted?\n--\nMichael",
"msg_date": "Thu, 25 Apr 2019 09:55:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Identity columns should own only one sequence"
},
{
"msg_contents": "On Thu, 2019-04-25 at 09:55 +0900, Michael Paquier wrote:\n> On Sun, Apr 14, 2019 at 05:51:47PM +0200, Laurenz Albe wrote:\n> > test=> INSERT INTO ser (id) VALUES (DEFAULT);\n> > ERROR: more than one owned sequence found\n> \n> Yes this should never be user-triggerable, so it seems that we need to\n> fix and back-patch something if possible.\n> \n> > I propose that we check if there already is a dependent sequence\n> > before adding an identity column.\n> \n> That looks awkward. Souldn't we make sure that when dropping the\n> default associated with a serial column then the dependency between\n> the column and the sequence is removed instead? This implies more\n> complication in ATExecColumnDefault().\n> \n> > The attached patch does that, and also forbids setting the ownership\n> > of a sequence to an identity column.\n> > \n> > I think this should be backpatched.\n> \n> Could you add some test cases with what you think is adapted?\n\nYou are right! Dropping the dependency with the DEFAULT is the\ncleanest approach.\n\nI have left the checks to prevent double sequence ownership from\nhappening.\n\nI added regression tests covering all added code.\n\nYours,\nLaurenz Albe",
"msg_date": "Fri, 26 Apr 2019 12:47:17 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Identity columns should own only one sequence"
},
{
"msg_contents": "On 2019-04-14 17:51, Laurenz Albe wrote:\n> Identity columns don't work if they own more than one sequence.\n\nWell, they shouldn't, because then how do they know which sequence they\nshould use?\n\n> So if one tries to convert a \"serial\" column to an identity column,\n> the following can happen:\n> \n> test=> CREATE TABLE ser(id serial);\n> CREATE TABLE\n> test=> ALTER TABLE ser ALTER id ADD GENERATED ALWAYS AS IDENTITY;\n> ERROR: column \"id\" of relation \"ser\" already has a default value\n> \n> Hm, ok, let's drop the column default value.\n> \n> test=> ALTER TABLE ser ALTER id DROP DEFAULT;\n> ALTER TABLE\n> \n> Now it works:\n> \n> test=> ALTER TABLE ser ALTER id ADD GENERATED ALWAYS AS IDENTITY;\n> ALTER TABLE\n> \n> But not very much:\n> \n> test=> INSERT INTO ser (id) VALUES (DEFAULT);\n> ERROR: more than one owned sequence found\n\nYou also need to run\n\nALTER SEQUENCE ser_id_seq OWNED BY NONE;\n\nbecause dropping the default doesn't release the linkage of the sequence\nwith the table. These are just weird artifacts of how serial is\nimplemented, but that's why identity columns were added to improve\nthings. I don't think we need to make things more complicated here.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 26 Apr 2019 15:23:14 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Identity columns should own only one sequence"
},
{
"msg_contents": "On Fri, 2019-04-26 at 15:23 +0200, Peter Eisentraut wrote:\n> > So if one tries to convert a \"serial\" column to an identity column,\n> > the following can happen:\n> > \n> > test=> CREATE TABLE ser(id serial);\n> > CREATE TABLE\n> > test=> ALTER TABLE ser ALTER id ADD GENERATED ALWAYS AS IDENTITY;\n> > ERROR: column \"id\" of relation \"ser\" already has a default value\n> > \n> > Hm, ok, let's drop the column default value.\n> > \n> > test=> ALTER TABLE ser ALTER id DROP DEFAULT;\n> > ALTER TABLE\n> > \n> > Now it works:\n> > \n> > test=> ALTER TABLE ser ALTER id ADD GENERATED ALWAYS AS IDENTITY;\n> > ALTER TABLE\n> > \n> > But not very much:\n> > \n> > test=> INSERT INTO ser (id) VALUES (DEFAULT);\n> > ERROR: more than one owned sequence found\n> \n> You also need to run\n> \n> ALTER SEQUENCE ser_id_seq OWNED BY NONE;\n> \n> because dropping the default doesn't release the linkage of the sequence\n> with the table. These are just weird artifacts of how serial is\n> implemented, but that's why identity columns were added to improve\n> things. I don't think we need to make things more complicated here.\n\nWhat do you think of the patch I just posted on this thread to\nremove ownership automatically when the default is dropped, as Michael\nsuggested? I think that would make things much more intuitive from\nthe user's perspective.\n\nCorrect me if I am wrong, but the sequence behind identity columns\nshould be an implementation detail that the user doesn't have to know about.\nSo the error message about \"owned sequences\" is likely to confuse users.\n\nI have had a report by a confused user, so I think the problem is real.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 26 Apr 2019 15:37:56 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Identity columns should own only one sequence"
},
{
"msg_contents": "On 2019-Apr-26, Laurenz Albe wrote:\n\n> What do you think of the patch I just posted on this thread to\n> remove ownership automatically when the default is dropped, as Michael\n> suggested? I think that would make things much more intuitive from\n> the user's perspective.\n\nI think a better overall fix is that when creating the generated\ncolumn (or altering a column to make it generated) we should look for\nan existing sequence and take ownership of that (update\npg_depend records), before deciding to create a new sequence.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 26 Apr 2019 11:55:34 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Identity columns should own only one sequence"
},
{
"msg_contents": "On Fri, Apr 26, 2019 at 11:55:34AM -0400, Alvaro Herrera wrote:\n> On 2019-Apr-26, Laurenz Albe wrote:\n> I think a better overall fix is that that when creating the generated\n> column (or altering a column to make it generated) we should look for\n> existing an existing sequence and take ownership of that (update\n> pg_depend records), before deciding to create a new sequence.\n\nWould that actually be right? The current value of the sequence would\nbe the one from the previous use, and max/min values may not be the\ndefaults associated with identity columns, which are in the range\n[-2^31,2^31-1] by default, but the sequence you may attempt (or not)\nto attach to could have completely different properties. It seems to\nme that it is much better to start afresh and not force the sequence\ninto something that the user may perhaps not want to use.\n\nMy 2c.\n--\nMichael",
"msg_date": "Sat, 27 Apr 2019 09:49:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Identity columns should own only one sequence"
},
{
"msg_contents": "On 2019-04-26 15:37, Laurenz Albe wrote:\n> What do you think of the patch I just posted on this thread to\n> remove ownership automatically when the default is dropped, as Michael\n> suggested? I think that would make things much more intuitive from\n> the user's perspective.\n\nI think that adds more nonstandard behavior on top of an already\nconfusing and obsolescent feature, so I can't get too excited about it.\n\nA more forward-looking fix would be your other idea of having\ngetOwnedSequence() only deal with identity sequences (not serial\nsequences). See attached patch for a draft.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 27 Apr 2019 14:16:01 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Identity columns should own only one sequence"
},
{
"msg_contents": "On Sat, 2019-04-27 at 14:16 +0200, Peter Eisentraut wrote:\n> On 2019-04-26 15:37, Laurenz Albe wrote:\n> > What do you think of the patch I just posted on this thread to\n> > remove ownership automatically when the default is dropped, as Michael\n> > suggested? I think that would make things much more intuitive from\n> > the user's perspective.\n> \n> I think that adds more nonstandard behavior on top of an already\n> confusing and obsolescent feature, so I can't get too excited about it.\n> \n> A more forward-looking fix would be your other idea of having\n> getOwnedSequence() only deal with identity sequences (not serial\n> sequences). See attached patch for a draft.\n\nThat looks good to me.\n\nI agree that slapping on black magic that appropriates a pre-existing\nowned sequence seems out of proportion.\n\nI still think that there is merit to Michael's idea of removing\nsequence \"ownership\" (which is just a dependency) when the DEFAULT\non the column is dropped, but this approach is possibly cleaner.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 29 Apr 2019 18:28:39 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Identity columns should own only one sequence"
},
{
"msg_contents": "On 2019-04-29 18:28, Laurenz Albe wrote:\n> I still think that there is merit to Michael's idea of removing\n> sequence \"ownership\" (which is just a dependency) when the DEFAULT\n> on the column is dropped, but this approach is possibly cleaner.\n\nI think the proper way to address this would be to create some kind of\ndependency between the sequence and the default.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 2 May 2019 22:43:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Identity columns should own only one sequence"
},
{
"msg_contents": "On Thu, 2019-05-02 at 22:43 +0200, Peter Eisentraut wrote:\n> On 2019-04-29 18:28, Laurenz Albe wrote:\n> > I still think that there is merit to Michael's idea of removing\n> > sequence \"ownership\" (which is just a dependency) when the DEFAULT\n> > on the column is dropped, but this approach is possibly cleaner.\n> \n> I think the proper way to address this would be to create some kind of\n> dependency between the sequence and the default.\n\nThat is certainly true. But that's hard to retrofit into existing databases,\nso it would probably be a modification that is not backpatchable.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 03 May 2019 08:14:35 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Identity columns should own only one sequence"
},
{
"msg_contents": "On Fri, May 03, 2019 at 08:14:35AM +0200, Laurenz Albe wrote:\n> On Thu, 2019-05-02 at 22:43 +0200, Peter Eisentraut wrote:\n>> I think the proper way to address this would be to create some kind of\n>> dependency between the sequence and the default.\n> \n> That is certainly true. But that's hard to retrofit into existing databases,\n> so it would probably be a modification that is not backpatchable.\n\nAnd this is basically already the dependency which exists between the\nsequence and the relation created with the serial column. So what's\nthe advantage of adding more dependencies if we already have what we\nneed? I still think that we should be more careful to drop the\ndependency between the sequence and the relation's column when dropping\nthe default using it. If a DDL defines first a sequence, and then a\ndefault expression using nextval() on a column, then no serial-related\ndependency exists.\n--\nMichael",
"msg_date": "Tue, 7 May 2019 13:06:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Identity columns should own only one sequence"
},
{
"msg_contents": "On Tue, 2019-05-07 at 13:06 +0900, Michael Paquier wrote:\n> On Fri, May 03, 2019 at 08:14:35AM +0200, Laurenz Albe wrote:\n> > On Thu, 2019-05-02 at 22:43 +0200, Peter Eisentraut wrote:\n> >> I think the proper way to address this would be to create some kind of\n> >> dependency between the sequence and the default.\n> > \n> > That is certainly true. But that's hard to retrofit into existing databases,\n> > so it would probably be a modification that is not backpatchable.\n> \n> And this is basically already the dependency which exists between the\n> sequence and the relation created with the serial column. So what's\n> the advantage of adding more dependencies if we already have what we\n> need? I still think that we should be more careful to drop the\n> dependency between the sequence and the relation's column if dropping\n> the default using it. If a DDL defines first a sequence, and then a\n> default expression using nextval() on a column, then no serial-related\n\nI believe we should have both:\n\n- Identity columns should only use sequences with an INTERNAL dependency,\n as in Peter's patch.\n\n- When a column default is dropped, remove all dependencies between the\n column and sequences.\n\nIn the spirit of moving this along, I have attached a patch which is\nPeter's patch from above with a regression test.\n\nYours,\nLaurenz Albe",
"msg_date": "Wed, 08 May 2019 16:49:23 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Identity columns should own only one sequence"
},
{
"msg_contents": "On 2019-05-08 16:49, Laurenz Albe wrote:\n> I believe we should have both:\n> \n> - Identity columns should only use sequences with an INTERNAL dependency,\n> as in Peter's patch.\n\nI have committed this.\n\n> - When a column default is dropped, remove all dependencies between the\n> column and sequences.\n\nThere is no proposed patch for this, AFAICT.\n\nSo I have closed this commit fest item for now.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 22 Jul 2019 12:17:32 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Identity columns should own only one sequence"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> On 2019-05-08 16:49, Laurenz Albe wrote:\n> > I believe we should have both:\n> > \n> > - Identity columns should only use sequences with an INTERNAL dependency,\n> > as in Peter's patch.\n> \n> I have committed this.\n\nThanks!\n\n> > - When a column default is dropped, remove all dependencies between the\n> > column and sequences.\n> \n> There is no proposed patch for this, AFAICT.\n\nThere was one in\nhttps://www.postgresql.org/message-id/3916586ef7f33948235fe60f54a3750046f5d940.camel%40cybertec.at\n\n> So I have closed this commit fest item for now.\n\nThat's fine with me.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 05 Aug 2019 13:24:32 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Identity columns should own only one sequence"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> On 2019-05-08 16:49, Laurenz Albe wrote:\n> > I believe we should have both:\n> > \n> > - Identity columns should only use sequences with an INTERNAL dependency,\n> > as in Peter's patch.\n> \n> I have committed this.\n\nSince this is a bug fix, shouldn't it be backpatched?\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 05 Aug 2019 13:30:43 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Identity columns should own only one sequence"
},
{
"msg_contents": "On 2019-08-05 13:30, Laurenz Albe wrote:\n> Peter Eisentraut wrote:\n>> On 2019-05-08 16:49, Laurenz Albe wrote:\n>>> I believe we should have both:\n>>>\n>>> - Identity columns should only use sequences with an INTERNAL dependency,\n>>> as in Peter's patch.\n>>\n>> I have committed this.\n> \n> Since this is a bug fix, shouldn't it be backpatched?\n\nIn cases where the workaround is \"don't do that then\", I'm inclined to\nleave it alone.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 6 Aug 2019 11:54:14 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Identity columns should own only one sequence"
}
] |
[
{
"msg_contents": "Hi all,\n\nAfter all the great work that was done on partitioning over the past months, I wanted to take a closer look at the performance. I prepared some performance tests for use cases that I often encounter on partitioned tables. My goal was to test the performance and stability of the recent changes that were implemented regarding query planning and run-time pruning between native partitioning on HEAD, PG11 and TimescaleDb partitioning on PG11. I'm posting the results here now. I'm sure many of these results are not new to most people here, but maybe there's some useful information in there. :-) Furthermore, I have a few questions about the results that I obtained.\n\nSeveral test cases were prepared. These test cases were run for cases with 16, 64, 256, 1024 and 4096 partitions using pgbench functionality for 60 seconds each. The exact commands that were run can be found in the attached output file. Note that most of the SELECT queries that are benchmarked are basically testing planning time, as this is generally the part that becomes much slower when more partitions are added. The queries are written such that only one or two partitions are interesting for that particular query and the rest should be discarded as early as possible. Execution time is very small for these queries as they just do a simple index scan on the remaining chunk.\n\nThe test cases were (see benchmark.sql for the SQL commands for setup and test cases):\n1. Insert batches of 1000 rows per transaction\n2. Simple SELECT query pruning on a static timestamp\n3. The same SELECT query with static timestamp but with an added 'ORDER BY a, updated_at DESC LIMIT 1', which matches the index defined on the table\n4. The same simple SELECT query, but now it's wrapped inside an inlineable SQL function, called with a static timestamp\n5. The same simple SELECT query, but now it's wrapped inside a non-inlineable SQL function, called with a static timestamp\n6. 
The same simple SELECT query, but now it's wrapped inside a plpgsql function, called with a static timestamp\n7. Simple SELECT query pruning on a timestamp now()\n8. The same SELECT query with dynamic timestamp but with an added 'ORDER BY a, updated_at DESC LIMIT 1', which matches the index defined on the table\n9. The same simple SELECT query, but now it's wrapped inside an inlineable SQL function, called with a dynamic timestamp\n10. The same simple SELECT query, but now it's wrapped inside a non-inlineable SQL function, called with a dynamic timestamp\n11. The same simple SELECT query, but now it's wrapped inside a plpgsql function, called with a dynamic timestamp\n12. The same query as 2) but then in an inlineable function\n13. The same query as 3) but then in an inlineable function\n14. A SELECT with nested loop (10 iterations) with opportunities for run-time pruning - some rows from a table are selected and the timestamp from rows in that table is used to join on another partitioned table\n\nThe full results can be found in the attached file (results.txt). I also produced graphs of the results, which can be found on TimescaleDb's Github page [1]. Please take a look at these figures for an easy overview of the results. In general performance of HEAD looks really good.\n\nWhile looking at these results, there were a few questions that I couldn't answer.\n1) It seems like the queries inside plpgsql functions (case 6 and 11) perform relatively well in PG11 compared to a non-inlineable SQL function (case 5 and 10), when a table consists of many partitions. As far as I know, both plpgsql and non-inlineable SQL functions are executed with generic plans. What can explain this difference? Are non-inlineable SQL function plans not reused between transactions, while plpgsql plans are?\n2) Is running non-inlined SQL functions with a generic plan even the best option all the time? 
Wouldn't it be better to adopt a similar approach to what plpgsql does, where it tries to test if using a generic plan is beneficial? The non-inlineable SQL functions suffer a big performance hit for a large number of partitions, because they cannot rely on static planning-time pruning.\n3) What could be causing the big performance difference between case 7 (simple SELECT) and 8 (simple SELECT with ORDER BY <index> LIMIT 1)? For 4096 partitions, TPS of 7) is around 5, while adding the ORDER BY <index> LIMIT 1 makes TPS drop well below 1. In theory, run-time pruning of the right chunk should take exactly the same amount of time in both cases, because both are pruning timestamp now() on the same number of partitions. The resulting plans are also identical with the exception of the top LIMIT-node (in PG11 they differ slightly as a MergeAppend is chosen for the ORDER BY instead of an Append, in HEAD with ordered append this is not necessary anymore). Am I missing something here?\n4) A more general question about run-time pruning in nested loops, like the one for case 14. I believe I read in one of the previous threads that run-time pruning only reoccurs if it determines that the value that determines which partitions must be excluded has changed in between iterations. How is this defined? Eg. let's say partitions are 1-day wide and the first iteration of the loop filters on the partitioned table for timestamp between 14-04-2019 12:00 and 14-04-2019 20:00 (dynamically determined). Then the second iteration comes along and now filters on values between 14-04-2019 12:00 and 14-04-2019 19:00. The partition that should be scanned hasn't changed, because both timestamps fall into the same partition. Is the full process of run-time pruning applied again, or is there some kind of shortcut that first checks if the previous pruning result is still valid even if the value has changed slightly? 
If not, would this be a possible optimization, as I think it's a case that occurs very often? I don't know the run-time pruning code very well though, so it may just be a crazy idea that can't be practically achieved.\n\nThere was one other thing I noticed, and I believe it was raised by Tom in a separate thread as well: the setup code itself is really slow. Creating of partitions is taking a long time (it's taking several minutes to create the 4096 partition table).\n\nThanks again for the great work on partitioning! Almost every case that I tested is way better than the comparable case in PG11.\n\n-Floris\n\n[1] https://github.com/timescale/timescaledb/issues/1154#issuecomment-482347314?",
"msg_date": "Sun, 14 Apr 2019 19:19:41 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "partitioning performance tests after recent patches"
},
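The planning-time and run-time pruning being benchmarked in this thread boils down to a search over the ordered partition bounds to keep only the partitions that can overlap the query's range. This is a minimal illustrative sketch in Python, not PostgreSQL's actual implementation (which works on PartitionBoundInfo structures in C); the function name and bound representation are invented for illustration:

```python
import bisect

def prune_range_partitions(upper_bounds, lo, hi):
    """Return indices of range partitions that may contain values in [lo, hi].

    upper_bounds lists the ascending, exclusive upper bounds of consecutive
    partitions; partition i covers [upper_bounds[i-1], upper_bounds[i]),
    with partition 0 unbounded below.  Values >= upper_bounds[-1] belong to
    no partition.
    """
    if hi < lo:
        return []
    # First partition whose upper bound exceeds lo is the one containing lo.
    first = bisect.bisect_right(upper_bounds, lo)
    # Likewise for hi; everything between them can overlap [lo, hi].
    last = bisect.bisect_right(upper_bounds, hi)
    return list(range(first, min(last + 1, len(upper_bounds))))
```

With a static timestamp this search can happen once at plan time; with now() it has to be repeated at executor startup, which is why cases 7 and 8 should in principle pay the same pruning cost.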
{
"msg_contents": "On Mon, 15 Apr 2019 at 07:19, Floris Van Nee <florisvannee@optiver.com> wrote:\n> 3) What could be causing the big performance difference between case 7 (simple SELECT) and 8 (simple SELECT with ORDER BY <index> LIMIT 1)? For 4096 partitions, TPS of 7) is around 5, while adding the ORDER BY <index> LIMIT 1 makes TPS drop well below 1. In theory, run-time pruning of the right chunk should take exactly the same amount of time in both cases, because both are pruning timestamp now() on the same number of partitions. The resulting plans are also identical with the exception of the top LIMIT-node (in PG11 they differ slightly as a MergeAppend is chosen for the ORDER BY instead of an Append, in HEAD with ordered append this is not necessary anymore). Am I missing something here?\n\nWith the information provided, I don't really see any reason why the\nORDER BY LIMIT would slow it down if the plan is the same apart from\nthe LIMIT node. Please share the EXPLAIN ANALYZE output of each.\n\n> 4) A more general question about run-time pruning in nested loops, like the one for case 14. I believe I read in one of the previous threads that run-time pruning only reoccurs if it determines that the value that determines which partitions must be excluded has changed in between iterations. How is this defined? Eg. let's say partitions are 1-day wide and the first iteration of the loop filters on the partitioned table for timestamp between 14-04-2019 12:00 and 14-04-2019 20:00 (dynamically determined). Then the second iteration comes along and now filters on values between 14-04-2019 12:00 and 14-04-2019 19:00. The partition that should be scanned hasn't changed, because both timestamps fall into the same partition. Is the full process of run-time pruning applied again, or is there some kind of shortcut that first checks if the previous pruning result is still valid even if the value has changed slightly? 
If not, would this be a possible optimization, as I think it's a case that occurs very often? I don't know the run-time pruning code very well though, so it may just be a crazy idea that can't be practically achieved.\n\nCurrently, there's no shortcut. It knows which parameters partition\npruning depends on and it reprunes whenever the value of one of these\nchanges.\n\nI'm not really sure how rechecking would work exactly. There are cases\nwhere it wouldn't be possible, say the condition was: partkey >= $1\nand there was no partition for $1 since it was beyond the range of the\ndefined range partitions. How could we tell if we can perform the\nshortcut if the next param value falls off the lower bound of the\ndefined partitions? The first would include no partitions and the\nsecond includes all partitions, but the actual value of $1 belongs to\nno partition in either case so we can't check to see if it matches the\nsame partition. Perhaps it could work for equality operators when\njust a single partition is matched in the first place; it might then\nbe possible to do a short-circuit recheck to see if the same partition\nmatches the next set of values. The problem with that is that\nrun-time pruning code in the executor does not care about which\noperators are used. It just passes those details off to the pruning\ncode to deal with it. Perhaps something can be decided in the planner\nin analyze_partkey_exprs() to have it set a \"do_recheck\" flag to tell\nthe executor to check before pruning again... Or maybe it's okay to\njust try a recheck when we match to just a single partition and just\nrecheck the new values are allowed in that partition when re-pruning.\nHowever, that might be just too overly dumb since for inequality\noperators the original values may never even have fallen inside the\npartition's bounds in the first place.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 15 Apr 2019 11:25:14 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: partitioning performance tests after recent patches"
},
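The "recheck when just a single partition matched" shortcut proposed above could look roughly like this. This is a hypothetical Python sketch of the idea only, not anything in the PostgreSQL source; the class and method names are invented, and the real executor would additionally need the planner to confirm the recheck is safe for the operators involved:

```python
import bisect

class RuntimePruner:
    """Toy model of run-time pruning for equality on a range-partitioned key.

    Partition i covers [upper_bounds[i-1], upper_bounds[i]).  The shortcut:
    if the previous pruning pass matched exactly one partition, first test
    whether the new parameter value still lies inside that partition's
    bounds, and only fall back to a full pruning pass if it does not.
    """

    def __init__(self, upper_bounds):
        self.upper_bounds = upper_bounds
        self.last_value = None
        self.last_result = None
        self.full_prunes = 0  # counts full pruning passes, for demonstration

    def _bounds_of(self, idx):
        lower = self.upper_bounds[idx - 1] if idx > 0 else float("-inf")
        return lower, self.upper_bounds[idx]

    def prune_eq(self, value):
        if self.last_result is not None:
            if value == self.last_value:
                return self.last_result            # parameter unchanged
            if len(self.last_result) == 1:
                lower, upper = self._bounds_of(self.last_result[0])
                if lower <= value < upper:         # recheck shortcut hits
                    self.last_value = value
                    return self.last_result
        self.full_prunes += 1                      # full pruning pass
        idx = bisect.bisect_right(self.upper_bounds, value)
        result = [idx] if idx < len(self.upper_bounds) else []
        self.last_value, self.last_result = value, result
        return result
```

In the nested-loop scenario from case 14, successive parameter values that land in the same 1-day partition would take the shortcut path instead of re-running pruning from scratch.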
{
"msg_contents": "Hi David,\n\nThanks for your reply. I really appreciate your work on run-time pruning!\nHere's the output of explain/analyze for HEAD. At run-time, technically all partitions could be pruned directly. However, one partition remains in the output of explain/analyze because of other difficulties with removing all of them, if I remember correctly? Still, that partition is never executed. The only difference I can see is the Limit node on top, as well as apparently another partition appearing in the analyze output (4096_4096, last partition, remains in the first plan. 4096_1, the first partition, remains the second plan).\n\n-- select_now.sql\nexplain(analyze, verbose, buffers on)\nselect * from :tbl where a='abc' and updated_at between now() and now()+interval '1d';\n\nAppend (cost=0.16..8949.61 rows=4096 width=112) (actual time=0.000..0.000 rows=0 loops=1)\n Subplans Removed: 4095\n -> Index Scan using p4096_4096_a_updated_at_idx on public.p4096_4096 (cost=0.16..2.18 rows=1 width=112) (never executed)\n Output: p4096_4096.a, p4096_4096.b, p4096_4096.c, p4096_4096.d, p4096_4096.updated_at\n Index Cond: ((p4096_4096.a = 'abc'::text) AND (p4096_4096.updated_at >= now()) AND (p4096_4096.updated_at <= (now() + '1 day'::interval)))\nPlanning Time: 237.603 ms\nExecution Time: 0.475 ms\n\n-- select_now_limit.sql\nexplain(analyze, verbose, buffers on)\nselect * from :tbl where a='abc' and updated_at between now() and now()+interval '1d'\norder by a, updated_at desc limit 1;\n\nLimit (cost=645.53..647.56 rows=1 width=112) (actual time=0.002..0.002 rows=0 loops=1)\n Output: p4096_1.a, p4096_1.b, p4096_1.c, p4096_1.d, p4096_1.updated_at\n -> Append (cost=645.53..8949.61 rows=4096 width=112) (actual time=0.000..0.000 rows=0 loops=1)\n Subplans Removed: 4095\n -> Index Scan using p4096_1_a_updated_at_idx on public.p4096_1 (cost=0.57..2.03 rows=1 width=54) (never executed)\n Output: p4096_1.a, p4096_1.b, p4096_1.c, p4096_1.d, p4096_1.updated_at\n Index Cond: 
((p4096_1.a = 'abc'::text) AND (p4096_1.updated_at >= now()) AND (p4096_1.updated_at <= (now() + '1 day'::interval)))\nPlanning Time: 3897.687 ms\nExecution Time: 0.491 ms\n\nRegarding the nested loops - thanks for your explanation. I can see this is more complicated than I initially thought. It may be doable to determine if your set of pruned partitions is still valid, but it's more difficult to determine if, on top of that, extra partitions must be included due to widening of the range. \n\n-Floris\n\n________________________________________\nFrom: David Rowley <david.rowley@2ndquadrant.com>\nSent: Monday, April 15, 2019 1:25 AM\nTo: Floris Van Nee\nCc: Pg Hackers\nSubject: Re: partitioning performance tests after recent patches [External]\n\nOn Mon, 15 Apr 2019 at 07:19, Floris Van Nee <florisvannee@optiver.com> wrote:\n> 3) What could be causing the big performance difference between case 7 (simple SELECT) and 8 (simple SELECT with ORDER BY <index> LIMIT 1)? For 4096 partitions, TPS of 7) is around 5, while adding the ORDER BY <index> LIMIT 1 makes TPS drop well below 1. In theory, run-time pruning of the right chunk should take exactly the same amount of time in both cases, because both are pruning timestamp now() on the same number of partitions. The resulting plans are also identical with the exception of the top LIMIT-node (in PG11 they differ slightly as a MergeAppend is chosen for the ORDER BY instead of an Append, in HEAD with ordered append this is not necessary anymore). Am I missing something here?\n\nWith the information provided, I don't really see any reason why the\nORDER BY LIMIT would slow it down if the plan is the same apart from\nthe LIMIT node. Please share the EXPLAIN ANALYZE output of each.\n\n> 4) A more general question about run-time pruning in nested loops, like the one for case 14. 
I believe I read in one of the previous threads that run-time pruning only reoccurs if it determines that the value that determines which partitions must be excluded has changed in between iterations. How is this defined? Eg. let's say partitions are 1-day wide and the first iteration of the loop filters on the partitioned table for timestamp between 14-04-2019 12:00 and 14-04-2019 20:00 (dynamically determined). Then the second iteration comes along and now filters on values between 14-04-2019 12:00 and 14-04-2019 19:00. The partition that should be scanned hasn't changed, because both timestamps fall into the same partition. Is the full process of run-time pruning applied again, or is there some kind of shortcut that first checks if the previous pruning result is still valid even if the value has changed slightly? If not, would this be a possible optimization, as I think it's a case that occurs very often? I don't know the run-time pruning code very well though, so it may just be a crazy idea that can't be practically achieved.\n\nCurrently, there's no shortcut. It knows which parameters partition\npruning depends on and it reprunes whenever the value of ones of these\nchanges.\n\nI'm not really sure how rechecking would work exactly. There are cases\nwhere it wouldn't be possible, say the condition was: partkey >= $1\nand there was no partition for $1 since it was beyond the range of the\ndefined range partitions. How could we tell if we can perform the\nshortcut if the next param value falls off the lower bound of the\ndefined partitions? The first would include no partitions and the\nsecond includes all partitions, but the actual value of $1 belongs to\nno partition in either case so we can't check to see if it matches the\nsame partition. Perhaps it could work for equality operators when\njust a single partition is matched in the first place, it might then\nbe possible to do a shortcircuit recheck to see if the same partition\nmatches the next set of values. 
The problem with that is that\nrun-time pruning code in the executor does not care about which\noperators are used. It just passes those details off to the pruning\ncode to deal with it. Perhaps something can be decided in the planner\nin analyze_partkey_exprs() to have it set a \"do_recheck\" flag to tell\nthe executor to check before pruning again... Or maybe it's okay to\njust try a recheck when we match to just a single partition and just\nrecheck the new values are allowed in that partition when re-pruning.\nHowever, that might be just too overly dumb since for inequality\noperators the original values may never even have falling inside the\npartition's bounds in the first place.\n\n--\n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 15 Apr 2019 07:33:29 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "Re: partitioning performance tests after recent patches"
},
{
"msg_contents": "Hi,\n\nThanks a lot for very exhaustive testing.\n\nDavid already replied to some points, but let me comment on a couple of\npoints.\n\nPlease be advised that these may not be the numbers (or scalability\npattern of these numbers) you'll see when PG 12 is actually released,\nbecause we may end up changing something that makes performance suffer a\nbit. In particular, we are contemplating some changes around the safety\nof planner's handling of cached partitioning metadata (in light of reduced\nlock levels for adding/removing partitions) that might reduce the TPS\nfigure, the impact of which would worsen as the number of partitions\nincreases. Although, nothing is final yet; if interested, you can follow\nthat discussion at [1].\n\nOn 2019/04/15 4:19, Floris Van Nee wrote:\n> The test cases were (see benchmark.sql for the SQL commands for setup and test cases):\n> 1. Insert batches of 1000 rows per transaction\n> 2. Simple SELECT query pruning on a static timestamp\n> 3. The same SELECT query with static timestamp but with an added 'ORDER BY a, updated_at DESC LIMIT 1', which matches the index defined on the table\n> 4. The same simple SELECT query, but now it's wrapped inside an inlineable SQL function, called with a static timestamp\n> 5. The same simple SELECT query, but now it's wrapped inside a non-inlineable SQL function, called with a static timestamp\n> 6. The same simple SELECT query, but now it's wrapped inside a plpgsql function, called with a static timestamp\n> 7. Simple SELECT query pruning on a timestamp now()\n> 8. The same SELECT query with dynamic timestamp but with an added 'ORDER BY a, updated_at DESC LIMIT 1', which matches the index defined on the table\n> 9. The same simple SELECT query, but now it's wrapped inside an inlineable SQL function, called with a dynamic timestamp\n> 10. The same simple SELECT query, but now it's wrapped inside a non-inlineable SQL function, called with a dynamic timestamp\n> 11. 
The same simple SELECT query, but now it's wrapped inside a plpgsql function, called with a dynamic timestamp\n> 12. The same query as 2) but then in an inlineable function\n> 13. The same query as 3) but then in an inlineable function\n> 14. A SELECT with nested loop (10 iterations) with opportunities for run-time pruning - some rows from a table are selected and the timestamp from rows in that table is used to join on another partitioned table\n> \n> The full results can be found in the attached file (results.txt). I also produced graphs of the results, which can be found on TimescaleDb's Github page [1]. Please take a look at these figures for an easy overview of the results. In general performance of HEAD looks really good.\n> \n> While looking at these results, there were a few questions that I couldn't answer.\n> 1) It seems like the queries inside plpgsql functions (case 6 and 11) perform relatively well in PG11 compared to a non-inlineable SQL function (case 5 and 10), when a table consists of many partitions. As far as I know, both plpgsql and non-inlineable SQL functions are executed with generic plans. What can explain this difference? Are non-inlineable SQL function plans not reused between transactions, while plpgsql plans are?\n> 2) Is running non-inlined SQL functions with a generic plan even the best option all the time? Wouldn't it be better to adopt a similar approach to what plpgsql does, where it tries to test if using a generic plan is beneficial? The non-inlineable SQL functions suffer a big performance hit for a large number of partitions, because they cannot rely on static planning-time pruning.\n\nI'd never noticed this before. 
It indeed seems to be the case that SQL\nfunctions and plpgsql functions are handled using completely different\ncode paths, of which only for the latter it's possible to use static\nplanning-time pruning.\n\n> There was one other thing I noticed, and I believe it was raised by Tom in a separate thread as well: the setup code itself is really slow. Creating of partitions is taking a long time (it's taking several minutes to create the 4096 partition table).\n\nYeah that's rather bad. Thinking of doing something about that for PG 13.\n\nThanks,\nAmit\n\n[1] https://www.postgresql.org/message-id/27380.1555270166%40sss.pgh.pa.us\n\n\n\n",
"msg_date": "Mon, 15 Apr 2019 19:28:30 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: partitioning performance tests after recent patches"
},
{
"msg_contents": "On Mon, 15 Apr 2019 at 19:33, Floris Van Nee <florisvannee@optiver.com> wrote:\n> Here's the output of explain/analyze for HEAD. At run-time, technically all partitions could be pruned directly. However, one partition remains in the output of explain/analyze because of other difficulties with removing all of them, if I remember correctly? Still, that partition is never executed. The only difference I can see is the Limit node on top, as well as apparently another partition appearing in the analyze output (4096_4096, last partition, remains in the first plan. 4096_1, the first partition, remains the second plan).\n>\n> -- select_now.sql\n> explain(analyze, verbose, buffers on)\n> select * from :tbl where a='abc' and updated_at between now() and now()+interval '1d';\n>\n> Append (cost=0.16..8949.61 rows=4096 width=112) (actual time=0.000..0.000 rows=0 loops=1)\n> Subplans Removed: 4095\n> -> Index Scan using p4096_4096_a_updated_at_idx on public.p4096_4096 (cost=0.16..2.18 rows=1 width=112) (never executed)\n> Output: p4096_4096.a, p4096_4096.b, p4096_4096.c, p4096_4096.d, p4096_4096.updated_at\n> Index Cond: ((p4096_4096.a = 'abc'::text) AND (p4096_4096.updated_at >= now()) AND (p4096_4096.updated_at <= (now() + '1 day'::interval)))\n> Planning Time: 237.603 ms\n> Execution Time: 0.475 ms\n>\n> -- select_now_limit.sql\n> explain(analyze, verbose, buffers on)\n> select * from :tbl where a='abc' and updated_at between now() and now()+interval '1d'\n> order by a, updated_at desc limit 1;\n>\n> Limit (cost=645.53..647.56 rows=1 width=112) (actual time=0.002..0.002 rows=0 loops=1)\n> Output: p4096_1.a, p4096_1.b, p4096_1.c, p4096_1.d, p4096_1.updated_at\n> -> Append (cost=645.53..8949.61 rows=4096 width=112) (actual time=0.000..0.000 rows=0 loops=1)\n> Subplans Removed: 4095\n> -> Index Scan using p4096_1_a_updated_at_idx on public.p4096_1 (cost=0.57..2.03 rows=1 width=54) (never executed)\n> Output: p4096_1.a, p4096_1.b, p4096_1.c, p4096_1.d, 
p4096_1.updated_at\n> Index Cond: ((p4096_1.a = 'abc'::text) AND (p4096_1.updated_at >= now()) AND (p4096_1.updated_at <= (now() + '1 day'::interval)))\n> Planning Time: 3897.687 ms\n> Execution Time: 0.491 ms\n\nI had a look at this and it's due to get_eclass_for_sort_expr() having\na hard time due to the EquivalenceClass having so many members. This\nmust be done for each partition, so search time is quadratic based on\nthe number of partitions. We only hit this in the 2nd plan due to\nbuild_index_paths() finding that there are useful pathkeys from\nquery_pathkeys. Of course, this does not happen for the first query\nsince it has no ORDER BY clause.\n\nTom and I were doing a bit of work in [1] to speed up cases when there\nare many EquivalenceClasses by storing a Bitmapset for each RelOptInfo\nto mark the indexes of each eq_classes they have members in. This\ndoes not really help this case since we're slow due to lots of members\nrather than lots of classes, but perhaps something similar can be done\nto allow members to be found more quickly. I'm not sure exactly how\nthat can be done without having something like an array of Lists\nindexed by relid in each EquivalenceClass. That does not sound great\nfrom a memory consumption point of view. Maybe having\nEquivalenceMember in some data structure that we don't have to perform\na linear search on would be a better fix. However, we don't\ncurrently have any means to hash or binary search node types.\nPerhaps it's time we did.\n\n[1] https://commitfest.postgresql.org/23/1984/\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 16 Apr 2019 01:04:14 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: partitioning performance tests after recent patches"
},
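The quadratic behaviour described above — a linear scan over an ever-growing member list, repeated once per partition — and the payoff of an indexed member lookup can be shown with a toy model. This is purely illustrative Python (the real code deals with EquivalenceMember nodes in C, and hashing arbitrary node types is exactly what is said to be missing); the class names are invented:

```python
class EquivalenceClassLinear:
    """Members kept in a list; every lookup is a linear scan, so probing
    once per partition costs O(n_partitions * n_members) comparisons."""

    def __init__(self):
        self.members = []
        self.comparisons = 0

    def add_member(self, expr):
        self.members.append(expr)

    def find_member(self, expr):
        for m in self.members:
            self.comparisons += 1
            if m == expr:
                return m
        return None


class EquivalenceClassIndexed:
    """Members additionally indexed by a hashable key (here the expression
    itself), making each per-partition lookup O(1)."""

    def __init__(self):
        self.by_expr = {}
        self.probes = 0

    def add_member(self, expr):
        self.by_expr[expr] = expr

    def find_member(self, expr):
        self.probes += 1
        return self.by_expr.get(expr)
```

With 100 "partitions" the linear variant does 5050 comparisons to the indexed variant's 100 probes — at 4096 partitions that gap is what shows up as multi-second planning time.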
{
"msg_contents": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> On 2019/04/15 4:19, Floris Van Nee wrote:\n>> 2) Is running non-inlined SQL functions with a generic plan even the best option all the time? Wouldn't it be better to adopt a similar approach to what plpgsql does, where it tries to test if using a generic plan is beneficial? The non-inlineable SQL functions suffer a big performance hit for a large number of partitions, because they cannot rely on static planning-time pruning.\n\n> I'd never noticed this before. It indeed seems to be the case that SQL\n> functions and plpgsql functions are handled using completely different\n> code paths, of which only for the latter it's possible to use static\n> planning-time pruning.\n\nYeah. Another big problem with the current implementation of SQL\nfunctions is that there's no possibility of cross-query plan caching.\nAt some point I'd like to throw away functions.c and start over\nwith an implementation more similar to how plpgsql does it (in\nparticular, with persistent state and use of the plan cache).\nIt hasn't gotten to the top of the to-do queue though, mostly because\nI think not many people use SQL-language functions except when they\nwant them to be inlined.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Apr 2019 11:00:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: partitioning performance tests after recent patches"
}
] |
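On questions 1 and 2 in this thread: plpgsql statements go through the plan cache, which keeps choosing custom (freshly planned) plans whenever they are estimated to be much cheaper than the cached generic plan — for instance when planning-time pruning removes most partitions — while non-inlined SQL functions (per Tom's reply) get no cross-query plan caching at all. A simplified sketch of that choice, modeled loosely on choose_custom_plan() in plancache.c; the threshold constant and the overhead term here are illustrative stand-ins, not the server's exact cost formula:

```python
def choose_custom_plan(num_custom_plans, avg_custom_cost, generic_cost,
                       replan_overhead=1000.0):
    """Return True to plan the query afresh (custom plan), False to reuse
    the cached generic plan.

    The first five executions always get custom plans so an average custom
    cost can be collected; after that, the generic plan is reused unless it
    is estimated to cost more than a custom plan plus the overhead of
    re-planning on every call.
    """
    if num_custom_plans < 5:
        return True
    return generic_cost > avg_custom_cost + replan_overhead
```

Under this heuristic a plpgsql query over 4096 partitions can keep winning with custom plans (one pruned subplan vs. 4096 in the generic plan), which is consistent with the gap Floris measured between cases 6/11 and 5/10.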
[
{
"msg_contents": "Hi\n\nIs there reason why following code should not to work?\n\ndo $$\ndeclare r record; result int;\nbegin\n select 10 as a, 20 as b into r;\n raise notice 'a: %', r.a;\n execute 'select $1.a + $1.b' into result using r;\n raise notice '%', result;\nend;\n$$\n\nbut it fails\n\nNOTICE: a: 10\nERROR: could not identify column \"a\" in record data type\nLINE 1: select $1.a + $1.b\n ^\nQUERY: select $1.a + $1.b\nCONTEXT: PL/pgSQL function inline_code_block line 6 at EXECUTE\n\nRegards\n\nPavel\n\nHiIs there reason why following code should not to work?do $$declare r record; result int;begin select 10 as a, 20 as b into r; raise notice 'a: %', r.a; execute 'select $1.a + $1.b' into result using r; raise notice '%', result;end;$$but it failsNOTICE: a: 10ERROR: could not identify column \"a\" in record data typeLINE 1: select $1.a + $1.b ^QUERY: select $1.a + $1.bCONTEXT: PL/pgSQL function inline_code_block line 6 at EXECUTERegardsPavel",
"msg_date": "Mon, 15 Apr 2019 06:38:32 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "plpgsql - execute - cannot use a reference to record field"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> Is there reason why following code should not to work?\n\n> do $$\n> declare r record; result int;\n> begin\n> select 10 as a, 20 as b into r;\n> raise notice 'a: %', r.a;\n> execute 'select $1.a + $1.b' into result using r;\n> raise notice '%', result;\n> end;\n> $$\n\nYou can't select fields by name out of an unspecified record.\nThe EXECUTE'd query is not particularly different from\n\nregression=# prepare foo(record) as select $1.a + $1.b;\npsql: ERROR: could not identify column \"a\" in record data type\nLINE 1: prepare foo(record) as select $1.a + $1.b;\n ^\n\nand surely you wouldn't expect that to work.\n(The fact that either of the previous lines work is\nthanks to plpgsql-specific hacking.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Apr 2019 12:07:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql - execute - cannot use a reference to record field"
},
{
"msg_contents": "po 15. 4. 2019 v 18:07 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > Is there reason why following code should not to work?\n>\n> > do $$\n> > declare r record; result int;\n> > begin\n> > select 10 as a, 20 as b into r;\n> > raise notice 'a: %', r.a;\n> > execute 'select $1.a + $1.b' into result using r;\n> > raise notice '%', result;\n> > end;\n> > $$\n>\n> You can't select fields by name out of an unspecified record.\n> The EXECUTE'd query is not particularly different from\n>\n> regression=# prepare foo(record) as select $1.a + $1.b;\n> psql: ERROR: could not identify column \"a\" in record data type\n> LINE 1: prepare foo(record) as select $1.a + $1.b;\n> ^\n>\n> and surely you wouldn't expect that to work.\n> (The fact that either of the previous lines work is\n> thanks to plpgsql-specific hacking.)\n>\n\nyes. I looking to the code and I see so SPI_execute_with_args doesn't allow\npush typmods there.\n\nRegards\n\nPavel\n\n\n\n\n> regards, tom lane\n>\n\npo 15. 4. 2019 v 18:07 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> Is there reason why following code should not to work?\n\n> do $$\n> declare r record; result int;\n> begin\n> select 10 as a, 20 as b into r;\n> raise notice 'a: %', r.a;\n> execute 'select $1.a + $1.b' into result using r;\n> raise notice '%', result;\n> end;\n> $$\n\nYou can't select fields by name out of an unspecified record.\nThe EXECUTE'd query is not particularly different from\n\nregression=# prepare foo(record) as select $1.a + $1.b;\npsql: ERROR: could not identify column \"a\" in record data type\nLINE 1: prepare foo(record) as select $1.a + $1.b;\n ^\n\nand surely you wouldn't expect that to work.\n(The fact that either of the previous lines work is\nthanks to plpgsql-specific hacking.)yes. 
I looking to the code and I see so SPI_execute_with_args doesn't allow push typmods there.RegardsPavel\n\n regards, tom lane",
"msg_date": "Mon, 15 Apr 2019 19:00:52 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: plpgsql - execute - cannot use a reference to record field"
}
] |
[
{
"msg_contents": "Hi,\n\nI am not able to access the mailing list archive. Is the mailing list\nserver down or something?\n\n-- \nCheers\nRam 4.0\n\nHi,I am not able to access the mailing list archive. Is the mailing list server down or something?-- CheersRam 4.0",
"msg_date": "Mon, 15 Apr 2019 10:36:19 +0530",
"msg_from": "Ramanarayana <raam.soft@gmail.com>",
"msg_from_op": true,
"msg_subject": "Mailing list not working"
},
{
"msg_contents": "> Hi,\n> \n> I am not able to access the mailing list archive. Is the mailing list\n> server down or something?\n\nWhat kind of problem do you have?\n\nI can see your posting in the archive.\nhttps://www.postgresql.org/message-id/CAKm4Xs5%2BD%2BgB4yCQtsfKmdTMqvido1k5Qz7iwPAQj8CM-ptiXw%40mail.gmail.com\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 15 Apr 2019 15:37:07 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Mailing list not working"
},
{
"msg_contents": "On 2019/04/15 15:37, Tatsuo Ishii wrote:\n>> Hi,\n>>\n>> I am not able to access the mailing list archive. Is the mailing list\n>> server down or something?\n> \n> What kind of problem do you have?\n> \n> I can see your posting in the archive.\n> https://www.postgresql.org/message-id/CAKm4Xs5%2BD%2BgB4yCQtsfKmdTMqvido1k5Qz7iwPAQj8CM-ptiXw%40mail.gmail.com\n\nThere may have been some glitch for limited time couple of hours ago. I\ntoo was facing issues accessing the ML archive in the browser, such as\ngetting \"Guru Meditation\" / \"Error 503 Service Unavailable\" error pages.\nAlso, errors like \"Timeout when talking to search server\" when doing\nkeyword search.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Mon, 15 Apr 2019 16:13:36 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Mailing list not working"
},
{
"msg_contents": "Hi,\n\nI also got same error couple of hours ago. Now it is working fine.\n\nIt would be great to have alerting tools like prometheus to notify users\nwhen the server is down\n\nRegards,\nRam.\n\nHi,I also got same error couple of hours ago. Now it is working fine.It would be great to have alerting tools like prometheus to notify users when the server is downRegards,Ram.",
"msg_date": "Mon, 15 Apr 2019 14:31:44 +0530",
"msg_from": "Ramanarayana <raam.soft@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Mailing list not working"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 11:02 AM Ramanarayana <raam.soft@gmail.com> wrote:\n\n> Hi,\n>\n> I also got same error couple of hours ago. Now it is working fine.\n>\n\n\nThere were network issues in one of the Cisco firewalls for one of our\ndatacenters. It's the only \"Enterprise technology\" we use for pginfra I\nthink, and it does this every now and then. It was rebooted early this\nmorning and then services recovered.\n\n\nIt would be great to have alerting tools like prometheus to notify users\n> when the server is down\n>\n>\nThere were hundreds of notifications. But PostgreSQL does not have 24/7\nstaff (since it's all volunteers), so it takes some times to get things\nfixed. The issues started just past midnight, and were fixed at about\n7:30AM.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Mon, Apr 15, 2019 at 11:02 AM Ramanarayana <raam.soft@gmail.com> wrote:Hi,I also got same error couple of hours ago. Now it is working fine.There were network issues in one of the Cisco firewalls for one of our datacenters. It's the only \"Enterprise technology\" we use for pginfra I think, and it does this every now and then. It was rebooted early this morning and then services recovered.It would be great to have alerting tools like prometheus to notify users when the server is downThere were hundreds of notifications. But PostgreSQL does not have 24/7 staff (since it's all volunteers), so it takes some times to get things fixed. The issues started just past midnight, and were fixed at about 7:30AM.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Mon, 15 Apr 2019 11:06:37 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Mailing list not working"
}
] |
[
{
"msg_contents": "StartupDecodingContext() initializes ctx->reader->private_data with ctx, and\nit even does so twice. I couldn't find a place in the code where the\n(LogicalDecodingContext *) pointer is retrieved from the reader, and a simple\ntest of logical replication works if the patch below is applied. Thus I assume\nthat assignment is a thinko, isn't it?\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Mon, 15 Apr 2019 13:51:34 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Accidental setting of XLogReaderState.private_data ?"
},
{
"msg_contents": "Antonin Houska <ah@cybertec.at> writes:\n> StartupDecodingContext() initializes ctx->reader->private_data with ctx, and\n> it even does so twice. I couldn't find a place in the code where the\n> (LogicalDecodingContext *) pointer is retrieved from the reader, and a simple\n> test of logical replication works if the patch below is applied. Thus I assume\n> that assignment is a thinko, isn't it?\n\nHmm. The second, duplicate assignment is surely pointless, but I think\nthat setting the ctx as the private_data is a good idea. It hardly seems\nout of the question that it might be needed in future.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Apr 2019 11:06:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Accidental setting of XLogReaderState.private_data ?"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 11:06:18AM -0400, Tom Lane wrote:\n> Hmm. The second, duplicate assignment is surely pointless, but I think\n> that setting the ctx as the private_data is a good idea. It hardly seems\n> out of the question that it might be needed in future.\n\nAgreed that we should keep the assignment done with\nXLogReaderAllocate(). I have committed the patch which removes the\nuseless assignment though.\n--\nMichael",
"msg_date": "Tue, 16 Apr 2019 15:13:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Accidental setting of XLogReaderState.private_data ?"
}
] |
[
{
"msg_contents": "I just noticed the following:\n\nCREATE TABLE foo (a int, b int);\nINSERT INTO foo SELECT x/10, x/100 FROM generate_series(1, 100) x;\nCREATE STATISTICS foo_s ON a,b FROM foo;\nANALYSE foo;\n\nSELECT pg_mcv_list_items(stxmcv) from pg_statistic_ext WHERE stxname = 'foo_s';\n\nwhich fails with\n\nERROR: cache lookup failed for type 0\n\nThat definitely used to work, so I'm guessing it got broken by the\nrecent reworking of the serialisation code, but I've not looked into\nit.\n\nThere should probably be regression test coverage of that function.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 15 Apr 2019 17:02:43 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Multivariate MCV lists -- pg_mcv_list_items() seems to be broken"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 05:02:43PM +0100, Dean Rasheed wrote:\n>I just noticed the following:\n>\n>CREATE TABLE foo (a int, b int);\n>INSERT INTO foo SELECT x/10, x/100 FROM generate_series(1, 100) x;\n>CREATE STATISTICS foo_s ON a,b FROM foo;\n>ANALYSE foo;\n>\n>SELECT pg_mcv_list_items(stxmcv) from pg_statistic_ext WHERE stxname = 'foo_s';\n>\n>which fails with\n>\n>ERROR: cache lookup failed for type 0\n>\n>That definitely used to work, so I'm guessing it got broken by the\n>recent reworking of the serialisation code, but I've not looked into\n>it.\n>\n\nYeah, that seems like a bug. I'll take a look.\n\n>There should probably be regression test coverage of that function.\n>\n\nAgreed. I plan to rework the existing tests to use the same approach as\nthe MCV, so I'll add a test for this function too.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 15 Apr 2019 18:21:44 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV lists -- pg_mcv_list_items() seems to be broken"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> SELECT pg_mcv_list_items(stxmcv) from pg_statistic_ext WHERE stxname = 'foo_s';\n> which fails with\n> ERROR: cache lookup failed for type 0\n\n> That definitely used to work, so I'm guessing it got broken by the\n> recent reworking of the serialisation code, but I've not looked into\n> it.\n\nYeah, looks like sloppy thinking about whether or not the types array\nparticipates in maxalign-forcing?\n\n> There should probably be regression test coverage of that function.\n\n+1\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Apr 2019 12:26:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV lists -- pg_mcv_list_items() seems to be broken"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 12:26:02PM -0400, Tom Lane wrote:\n>Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n>> SELECT pg_mcv_list_items(stxmcv) from pg_statistic_ext WHERE stxname = 'foo_s';\n>> which fails with\n>> ERROR: cache lookup failed for type 0\n>\n>> That definitely used to work, so I'm guessing it got broken by the\n>> recent reworking of the serialisation code, but I've not looked into\n>> it.\n>\n>Yeah, looks like sloppy thinking about whether or not the types array\n>participates in maxalign-forcing?\n>\n\nActually, no. It seems aligned just fine, AFAICS. The bug is a bit more\nembarrassing - the deserialization does\n\n memcpy(ptr, mcvlist->types, sizeof(Oid) * ndims);\n\nwhile it should be doing\n\n memcpy(mcvlist->types, ptr, sizeof(Oid) * ndims);\n\nWill fix.\n\n\ncheers\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 15 Apr 2019 18:55:33 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV lists -- pg_mcv_list_items() seems to be broken"
}
] |
[
{
"msg_contents": "In connection with the issue discussed at [1], I tried to run\nthe core regression tests with extremely aggressive autovacuuming\n(I set autovacuum_naptime = 1s, autovacuum_vacuum_threshold = 5,\nautovacuum_vacuum_cost_delay = 0). I found that the timestamp\ntest tends to fail with diffs caused by unstable row order in\ntimestamp_tbl. This is evidently because it does a couple of\nDELETEs before inserting the table's final contents; if autovac\ncomes along at the right time then some of those slots can get\nrecycled in between insertions. I'm thinking of committing the\nattached patch to prevent this, since in principle such failures\ncould occur even without hacking the autovac settings. Thoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/15751.1555256860%40sss.pgh.pa.us",
"msg_date": "Mon, 15 Apr 2019 13:22:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Autovacuum-induced regression test instability"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 01:22:30PM -0400, Tom Lane wrote:\n> In connection with the issue discussed at [1], I tried to run\n> the core regression tests with extremely aggressive autovacuuming\n> (I set autovacuum_naptime = 1s, autovacuum_vacuum_threshold = 5,\n> autovacuum_vacuum_cost_delay = 0). I found that the timestamp\n> test tends to fail with diffs caused by unstable row order in\n> timestamp_tbl. This is evidently because it does a couple of\n> DELETEs before inserting the table's final contents; if autovac\n> comes along at the right time then some of those slots can get\n> recycled in between insertions. I'm thinking of committing the\n> attached patch to prevent this, since in principle such failures\n> could occur even without hacking the autovac settings. Thoughts?\n\nAren't extra ORDER BY clauses the usual response to tuple ordering? I\nreally think that we should be more aggressive with that. For table\nAM, it can prove to be very useful to run the main regression test\nsuite with default_table_access_method enforced, and most likely AMs\nwill not ensure the same tuple ordering as heap.\n--\nMichael",
"msg_date": "Tue, 16 Apr 2019 14:53:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum-induced regression test instability"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Aren't extra ORDER BY clauses the usual response to tuple ordering? I\n> really think that we should be more aggressive with that.\n\nI'm not excited about that. The traditional argument against it\nis that if we start testing ORDER BY queries exclusively (and it\nwould have to be pretty nearly exclusively, if we were to take\nthis seriously) then we'll lack test coverage for queries without\nORDER BY. Also, regardless of whether you think that regression\ntest results can be kicked around at will, we are certainly going\nto hear complaints from users if traditional behaviors like\n\"inserting N rows into a new table, then selecting them, gives\nthose rows back in the same order\" go away. Recall that we had\nto provide a way to disable the syncscan optimization because\nsome users complained about the loss of row-ordering consistency.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Apr 2019 11:08:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum-induced regression test instability"
}
] |
[
{
"msg_contents": "In the wake of the discussion at [1] I went looking for structs that\nshould be using FLEXIBLE_ARRAY_MEMBER and are not, by dint of grepping\nfor size calculations of the form \"offsetof(struct,fld) + n * sizeof(...)\"\nand then seeing how \"fld\" is declared. I haven't yet found anything\nlike that that I want to change, but I did come across this bit in\nmvdistinct.c's statext_ndistinct_serialize():\n\n len = VARHDRSZ + SizeOfMVNDistinct +\n ndistinct->nitems * (offsetof(MVNDistinctItem, attrs) + sizeof(int));\n\nGiven the way that the subsequent code looks, I would argue that\noffsetof(MVNDistinctItem, attrs) has got basically nothing to do with\nthis calculation, and that the right way to phrase it is just\n\n len = VARHDRSZ + SizeOfMVNDistinct +\n ndistinct->nitems * (sizeof(double) + sizeof(int));\n\nConsider if there happened to be alignment padding in MVNDistinctItem:\nas the code stands it'd overestimate the space needed. (There won't be\npadding on any machine we support, I believe, so this isn't a live bug ---\nbut it's overcomplicated code, and could become buggy if any\nless-than-double-width fields get added to MVNDistinctItem.)\n\nFor largely the same reason, I do not think that SizeOfMVNDistinct is\na helpful way to compute the space needed for those fields --- any\nalignment padding that might be included is irrelevant for this purpose.\nIn short I'd be inclined to phrase this just as\n\n len = VARHDRSZ + 3 * sizeof(uint32) +\n ndistinct->nitems * (sizeof(double) + sizeof(int));\n\nIt looks to me actually like all the uses of both SizeOfMVNDistinctItem\nand SizeOfMVNDistinct are wrong, because the code using those symbols\nis really thinking about the size of this serialized representation,\nwhich is guaranteed not to have any inter-field padding, unlike the\nstructs.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://postgr.es/m/a620f85a-42ab-e0f3-3337-b04b97e2e2f5@redhat.com\n\n\n",
"msg_date": "Mon, 15 Apr 2019 18:00:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Strange coding in mvdistinct.c"
},
{
"msg_contents": "Oh, and as I continue to grep, I found this in dependencies.c:\n\n dependencies = (MVDependencies *) repalloc(dependencies,\n offsetof(MVDependencies, deps)\n + dependencies->ndeps * sizeof(MVDependency));\n\nI'm pretty sure this is an actual bug: the calculation should be\n\n offsetof(MVDependencies, deps)\n + dependencies->ndeps * sizeof(MVDependency *));\n\nbecause deps is an array of MVDependency* not MVDependency.\n\nThis would lead to an overallocation not underallocation, and it's\nprobably pretty harmless because ndeps can't get too large (I hope;\nif it could, this would have O(N^2) performance problems). Still,\nyou oughta fix it.\n\n(There's a similar calculation later in the file that gets it right.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Apr 2019 18:12:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Strange coding in mvdistinct.c"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 06:12:24PM -0400, Tom Lane wrote:\n>Oh, and as I continue to grep, I found this in dependencies.c:\n>\n> dependencies = (MVDependencies *) repalloc(dependencies,\n> offsetof(MVDependencies, deps)\n> + dependencies->ndeps * sizeof(MVDependency));\n>\n>I'm pretty sure this is an actual bug: the calculation should be\n>\n> offsetof(MVDependencies, deps)\n> + dependencies->ndeps * sizeof(MVDependency *));\n>\n>because deps is an array of MVDependency* not MVDependency.\n>\n>This would lead to an overallocation not underallocation, and it's\n>probably pretty harmless because ndeps can't get too large (I hope;\n>if it could, this would have O(N^2) performance problems). Still,\n>you oughta fix it.\n>\n>(There's a similar calculation later in the file that gets it right.)\n>\n\nThanks. I noticed some of the bugs while investigating the recent MCV\nserialization, and I plan to fix them soon. This week, hopefully.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 16 Apr 2019 00:35:41 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Strange coding in mvdistinct.c"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 06:12:24PM -0400, Tom Lane wrote:\n>Oh, and as I continue to grep, I found this in dependencies.c:\n>\n> dependencies = (MVDependencies *) repalloc(dependencies,\n> offsetof(MVDependencies, deps)\n> + dependencies->ndeps * sizeof(MVDependency));\n>\n>I'm pretty sure this is an actual bug: the calculation should be\n>\n> offsetof(MVDependencies, deps)\n> + dependencies->ndeps * sizeof(MVDependency *));\n>\n>because deps is an array of MVDependency* not MVDependency.\n>\n>This would lead to an overallocation not underallocation, and it's\n>probably pretty harmless because ndeps can't get too large (I hope;\n>if it could, this would have O(N^2) performance problems). Still,\n>you oughta fix it.\n>\n>(There's a similar calculation later in the file that gets it right.)\n>\n\nI've pushed a fix correcting those issues - both for mvndistinct and\nfunctional dependencies. I've reworked the macros used to compute the\nserialized sizes not to use offsetof(), which however made them pretty\nuseless for other purposes. So I've made them private by moving them to\nthe .c files where they are used.\n\nI don't think we need to backpatch this - we're not going to add new\nfields in backbranches, so there's little danger of introducing padding.\nAnd I'm not sure we want to remove the macros, although it's unlikely\nanyone else uses them.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 21 Apr 2019 20:34:08 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Strange coding in mvdistinct.c"
}
] |
[
{
"msg_contents": "Hi!\n\nCurrently amcheck supports lossy checking for missing parent\ndownlinks. It collects a bitmap of downlink hashes and uses it to check\nthe subsequent tree level. We've experienced some large corrupted indexes\nwhich pass this check due to its looseness.\n\nHowever, it seems to me we can implement this check in non-lossy\nmanner without making it significantly slower. We anyway traverse\ndownlinks from parent to children in order to verify that hikeys are\ncorresponding to downlink keys. We can also traverse from one\ndownlink to subsequent using rightlinks. So, if there are some\nintermediate pages between them, they are candidates to have missing\nparent downlinks. The patch is attached.\n\nWith this patch amcheck could successfully detect corruption for our\ncustomer, which unpatched amcheck couldn't find.\n\nOpinions?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 16 Apr 2019 05:30:05 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 7:30 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> Currently we amcheck supports lossy checking for missing parent\n> downlinks. It collects bitmap of downlink hashes and use it to check\n> subsequent tree level. We've experienced some large corrupted indexes\n> which pass this check due to its looseness.\n\nCan you be more specific? What was the cause of the corruption? I'm\nalways very interested in hearing about cases that amcheck could have\ndetected, but didn't.\n\nWas the issue that the Bloom filter was simply undersized/ineffective?\n\n> However, it seems to me we can implement this check in non-lossy\n> manner without making it significantly slower. We anyway traverse\n> downlinks from parent to children in order to verify that hikeys are\n> corresponding to downlink keys.\n\nActually, we don't check the high keys in children against the parent\n(all other items are checked, though). We probably *should* do\nsomething with the high key when verifying consistency across levels,\nbut currently we don't. (We only use the high key for the same-page\nhigh key check -- more on this below.)\n\n> We can also traverse from one\n> downlink to subsequent using rightlinks. So, if there are some\n> intermediate pages between them, they are candidates to have missing\n> parent downlinks. The patch is attached.\n>\n> With this patch amcheck could successfully detect corruption for our\n> customer, which unpatched amcheck couldn't find.\n\nMaybe we can be a lot less conservative about sizing the Bloom filter\ninstead? That would be an easier fix IMV -- we can probably change\nthat logic to be a lot more aggressive without anybody noticing, since\nthe Bloom filter is already usually tiny. We are already not very\ncareful about saving work within bt_index_parent_check(), but with\nthis patch we follow each downlink twice instead of once. That seems\nwasteful.\n\nThe reason I used a Bloom filter here is because I would eventually\nlike the missing downlink check to fingerprint entire tuples, not just\nblock numbers. In L&Y terms, the check could in the future fingerprint\nthe separator key and the downlink at the same time, not just the\ndownlink. That way, we could probe the Bloom filter on the next level\ndown for its high key (with the right sibling pointer set to be\nconsistent with the parent) iff we don't see that the page split was\ninterrupted (i.e. iff P_INCOMPLETE_SPLIT() bit is not set). Obviously\nthis would be a more effective form of verification, since we would\nnotice high key values that don't agree with the parent's values for\nthe same sibling/cousin/child block.\n\nI didn't do it that way for v11 because of \"minus infinity\" items on\ninternal pages, which don't store the original key (the key remains\nthe high key of the left sibling page, but we truncate the original to\n0 attributes in _bt_pgaddtup()). I think that we should eventually\nstop truncating minus infinity items, and actually store a \"low key\"\nat P_FIRSTDATAKEY() within internal pages instead. That would be\nuseful for other things anyway (e.g. prefix compression).\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 16 Apr 2019 12:00:26 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Tue, Apr 16, 2019 at 12:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Can you be more specific? What was the cause of the corruption? I'm\n> always very interested in hearing about cases that amcheck could have\n> detected, but didn't.\n\nFWIW, v4 indexes in Postgres 12 will support the new \"rootdescend\"\nverification option, which isn't lossy, and would certainly have\ndetected your customer issue in practice. Admittedly the new check is\nquite expensive, even compared to the other bt_index_parent_check()\nchecks, but it is nice that we now have a verification option that is\n*extremely* thorough, and uses _bt_search() directly.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 16 Apr 2019 12:04:28 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Tue, Apr 16, 2019 at 12:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, Apr 15, 2019 at 7:30 PM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > Currently we amcheck supports lossy checking for missing parent\n> > downlinks. It collects bitmap of downlink hashes and use it to check\n> > subsequent tree level. We've experienced some large corrupted indexes\n> > which pass this check due to its looseness.\n>\n> Can you be more specific? What was the cause of the corruption? I'm\n> always very interested in hearing about cases that amcheck could have\n> detected, but didn't.\n\nPing?\n\nI am interested in doing better here. The background information seems\nvery interesting.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 25 Apr 2019 16:46:08 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Tue, Apr 16, 2019 at 10:04 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Apr 16, 2019 at 12:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Can you be more specific? What was the cause of the corruption? I'm\n> > always very interested in hearing about cases that amcheck could have\n> > detected, but didn't.\n>\n> FWIW, v4 indexes in Postgres 12 will support the new \"rootdescend\"\n> verification option, which isn't lossy, and would certainly have\n> detected your customer issue in practice. Admittedly the new check is\n> quite expensive, even compared to the other bt_index_parent_check()\n> checks, but it is nice that we now have a verification option that is\n> *extremely* thorough, and uses _bt_search() directly.\n\n\"rootdescend\" is a cool type of check. Thank you for noticing, I wasn't\naware of it.\nBut can it detect the missing downlink in the following situation?\n\n A\n / \\\n B <-> C <-> D\n\nHere A has downlinks to B and D, but the downlink to C is missing,\nwhile B, C and D are correctly connected with leftlinks and rightlinks.\nI can see \"rootdescend\" calls _bt_search(), which would just step\nright from C to D as if it was a concurrent split.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sun, 28 Apr 2019 02:57:11 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Sat, Apr 27, 2019 at 4:57 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> \"rootdescend\" is cool type of check. Thank you for noticing, I wasn't aware of it.\n> But can it detect the missing downlink in following situation?\n>\n> A\n> / \\\n> B <-> C <-> D\n>\n> Here A has downlinks to B and D, which downlink to C is missing,\n> while B, C and D are correctly connected with leftlinks and rightlinks.\n> I can see \"rootdescend\" calls _bt_search(), which would just step\n> right from C to D as if it was concurrent split.\n\nThere is a comment about this scenario above bt_rootdescend() in amcheck.\n\nYou're right -- this is a kind of corruption that even the new\nrootdescend verification option would miss. We can imagine a version\nof rootdescend verification that tells the core code to only move\nright when there was an *interrupted* page split (i.e.\nP_INCOMPLETE_SPLIT() flag bit is set), but that isn't what happens\nright now.\n\nThat said, the lossy downlink check that you want to improve on\n*should* already catch this situation. Of course it might not because\nit is lossy (uses a Bloom filter), but I think that that's very\nunlikely. That's why I would like to understand the problem that you\nfound with the check.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 27 Apr 2019 17:03:39 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Tue, Apr 16, 2019 at 10:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Apr 15, 2019 at 7:30 PM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > Currently we amcheck supports lossy checking for missing parent\n> > downlinks. It collects bitmap of downlink hashes and use it to check\n> > subsequent tree level. We've experienced some large corrupted indexes\n> > which pass this check due to its looseness.\n>\n> Can you be more specific? What was the cause of the corruption? I'm\n> always very interested in hearing about cases that amcheck could have\n> detected, but didn't.\n\nAFAIR, the cause of corruption in this case was our (Postgres Pro)\nmodification. Not something really present in PostgreSQL core.\n\n>\n> Was the issue that the Bloom filter was simply undersized/ineffective?\n\nYes, increasing of Bloom filter size also helps. But my intention was\nto make non-lossy check here.\n\n>\n> > However, it seems to me we can implement this check in non-lossy\n> > manner without making it significantly slower. We anyway traverse\n> > downlinks from parent to children in order to verify that hikeys are\n> > corresponding to downlink keys.\n>\n> Actually, we don't check the high keys in children against the parent\n> (all other items are checked, though). We probably *should* do\n> something with the high key when verifying consistency across levels,\n> but currently we don't. (We only use the high key for the same-page\n> high key check -- more on this below.)\n\nNice idea.\n\n> > We can also traverse from one\n> > downlink to subsequent using rightlinks. So, if there are some\n> > intermediate pages between them, they are candidates to have missing\n> > parent downlinks. The patch is attached.\n> >\n> > With this patch amcheck could successfully detect corruption for our\n> > customer, which unpatched amcheck couldn't find.\n>\n> Maybe we can be a lot less conservative about sizing the Bloom filter\n> instead? That would be an easier fix IMV -- we can probably change\n> that logic to be a lot more aggressive without anybody noticing, since\n> the Bloom filter is already usually tiny. We are already not very\n> careful about saving work within bt_index_parent_check(), but with\n> this patch we follow each downlink twice instead of once. That seems\n> wasteful.\n\nIt wouldn't be really wasteful, because the same children were\naccessed recently. So, they are likely not yet evicted from\nshared_buffers. I think we can fit both checks into one loop over\ndownlinks instead of two.\n\n> The reason I used a Bloom filter here is because I would eventually\n> like the missing downlink check to fingerprint entire tuples, not just\n> block numbers. In L&Y terms, the check could in the future fingerprint\n> the separator key and the downlink at the same time, not just the\n> downlink. That way, we could probe the Bloom filter on the next level\n> down for its high key (with the right sibling pointer set to be\n> consistent with the parent) iff we don't see that the page split was\n> interrupted (i.e. iff P_INCOMPLETE_SPLIT() bit is not set). Obviously\n> this would be a more effective form of verification, since we would\n> notice high key values that don't agree with the parent's values for\n> the same sibling/cousin/child block.\n>\n> I didn't do it that way for v11 because of \"minus infinity\" items on\n> internal pages, which don't store the original key (the key remains\n> the high key of the left sibling page, but we truncate the original to\n> 0 attributes in _bt_pgaddtup()). I think that we should eventually\n> stop truncating minus infinity items, and actually store a \"low key\"\n> at P_FIRSTDATAKEY() within internal pages instead. That would be\n> useful for other things anyway (e.g. prefix compression).\n\nYes, we can do more checks with bloom filter. But assuming that we\nanyway iterate over children for each non-leaf page, can we do:\n1) All the checks, which bt_downlink_check() does for now\n2) Check there are no missing downlinks\n3) Check hikeys\nin one pass, can't we?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sun, 28 Apr 2019 03:12:54 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Sat, Apr 27, 2019 at 5:13 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> Yes, increasing of Bloom filter size also helps. But my intention was\n> to make non-lossy check here.\n\nI agree that that might be a good goal, but I am interested in knowing\nif there is something naive about how the downlinkfilter Bloom filter\nis sized. I think that we could probably do better than this without\nmuch work:\n\n /*\n * Extra readonly downlink check.\n *\n * In readonly case, we know that there cannot be a concurrent\n * page split or a concurrent page deletion, which gives us the\n * opportunity to verify that every non-ignorable page had a\n * downlink one level up. We must be tolerant of interrupted page\n * splits and page deletions, though. This is taken care of in\n * bt_downlink_missing_check().\n */\n total_pages = (int64) state->rel->rd_rel->relpages;\n state->downlinkfilter = bloom_create(total_pages, work_mem, seed);\n\nMaybe we could use \"smgrnblocks(index->rd_smgr, MAIN_FORKNUM))\"\ninstead of relpages, for example.\n\n> It wouldn't be really wasteful, because the same children were\n> accessed recently. So, they are likely not yet evicted from\n> shared_buffers. I think we can fit both checks into one loop over\n> downlinks instead of two.\n\nI see your point, but if we're going to treat this as a bug then I\nwould prefer a simple fix.\n\n> Yes, we can do more checks with bloom filter. But assuming that we\n> anyway iterate over children for each non-leaf page, can we do:\n> 1) All the checks, which bt_downlink_check() does for now\n> 2) Check there are no missing downlinks\n> 3) Check hikeys\n> in one pass, can't we?\n\nWe can expect every high key in a page to have a copy contained within\nits parent, either as one of the keys, or as parent's own high key\n(assuming no concurrent or interrupted page splits or page deletions).\nThis is true today, even though we truncate negative infinity items in\ninternal pages.\n\nI think that the simple answer to your question is yes. It would be\nmore complicated that way, and the only extra check would be the check\nof high keys against their parent, but overall this does seem\npossible.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 27 Apr 2019 17:36:56 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Sat, Apr 27, 2019 at 5:13 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> Yes, increasing of Bloom filter size also helps. But my intention was\n> to make non-lossy check here.\n\nWhy is that your intention? Do you want to do this as a feature for\nPostgres 13, or do you want to treat this as a bug that we need to\nbackpatch a fix for?\n\nCan we avoid the problem you saw with the Bloom filter approach by\nusing the real size of the index (i.e.\nsmgrnblocks()/RelationGetNumberOfBlocks()) to size the Bloom filter,\nand/or by rethinking the work_mem cap? Maybe we can have a WARNING\nthat advertises that work_mem is probably too low?\n\nThe state->downlinkfilter Bloom filter should be small in almost all\ncases, so I still don't fully understand your concern. With a 100GB\nindex, we'll have ~13 million blocks. We only need a Bloom filter that\nis ~250MB to have less than a 1% chance of missing inconsistencies\neven with such a large index. I admit that it's unfriendly that users\nare not warned about the shortage currently, but that is something we\ncan probably find a simple (backpatchable) fix for.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 27 Apr 2019 18:36:22 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Sun, Apr 28, 2019 at 4:36 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Sat, Apr 27, 2019 at 5:13 PM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > Yes, increasing of Bloom filter size also helps. But my intention was\n> > to make non-lossy check here.\n>\n> Why is that your intention? Do you want to do this as a feature for\n> Postgres 13, or do you want to treat this as a bug that we need to\n> backpatch a fix for?\n\nI think this is definitely not a bug fix. The Bloom filter was designed\nto be lossy, so no blaming it for that :)\n\n> Can we avoid the problem you saw with the Bloom filter approach by\n> using the real size of the index (i.e.\n> smgrnblocks()/RelationGetNumberOfBlocks()) to size the Bloom filter,\n> and/or by rethinking the work_mem cap? Maybe we can have a WARNING\n> that advertises that work_mem is probably too low?\n>\n> The state->downlinkfilter Bloom filter should be small in almost all\n> cases, so I still don't fully understand your concern. With a 100GB\n> index, we'll have ~13 million blocks. We only need a Bloom filter that\n> is ~250MB to have less than a 1% chance of missing inconsistencies\n> even with such a large index. I admit that its unfriendly that users\n> are not warned about the shortage currently, but that is something we\n> can probably find a simple (backpatchable) fix for.\n\nSounds reasonable. I'll think about proposing a backpatch of something like this.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sun, 28 Apr 2019 20:14:59 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Sun, Apr 28, 2019 at 10:15 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> I think this definitely not bug fix. Bloom filter was designed to be\n> lossy, no way blaming it for that :)\n\nI will think about a simple fix, but after the upcoming point release.\nThere is no hurry.\n\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 30 Apr 2019 17:58:35 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Wed, May 1, 2019 at 12:58 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Sun, Apr 28, 2019 at 10:15 AM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > I think this definitely not bug fix. Bloom filter was designed to be\n> > lossy, no way blaming it for that :)\n>\n> I will think about a simple fix, but after the upcoming point release.\n> There is no hurry.\n\nA bureaucratic question: What should the status be for this CF entry?\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Jul 2019 14:52:51 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Sun, Jul 7, 2019 at 7:53 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, May 1, 2019 at 12:58 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I will think about a simple fix, but after the upcoming point release.\n> > There is no hurry.\n>\n> A bureaucratic question: What should the status be for this CF entry?\n\nI have plans to use RelationGetNumberOfBlocks() to size amcheck's\ndownlink Bloom filter. I think that that will solve the problems with\nunreliable estimates of index size, which seemed to be the basis of\nAlexander's complaint.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 8 Jul 2019 17:33:01 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 5:58 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I will think about a simple fix, but after the upcoming point release.\n> There is no hurry.\n\nAttached draft patch uses RelationGetNumberOfBlocks() to size each of\nthe two Bloom filters that may be used by amcheck to perform\nverification.\n\nThe basic heapallindexed Bloom filter is now sized based on the\nconservative assumption that there must be *at least*\n\"RelationGetNumberOfBlocks() * 50\" elements to fingerprint (reltuples\nwill continue to be used to size the basic heapallindexed Bloom filter\nin most cases, though). The patch also uses the same\nRelationGetNumberOfBlocks() value to size the downlink Bloom filter.\nThis second change will fix your problem very non-invasively.\n\nI intend to backpatch this to v11 in the next few days.\n\n-- \nPeter Geoghegan",
"msg_date": "Thu, 18 Jul 2019 17:20:53 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Fri, Jul 19, 2019 at 3:21 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Tue, Apr 30, 2019 at 5:58 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I will think about a simple fix, but after the upcoming point release.\n> > There is no hurry.\n>\n> Attached draft patch uses RelationGetNumberOfBlocks() to size each of\n> the two Bloom filters that may be used by amcheck to perform\n> verification.\n>\n> The basic heapallindexed Bloom filter is now sized based on the\n> conservative assumption that there must be *at least*\n> \"RelationGetNumberOfBlocks() * 50\" elements to fingerprint (reltuples\n> will continue to be used to size the basic heapallindexed Bloom filter\n> in most cases, though). The patch also uses the same\n> RelationGetNumberOfBlocks() value to size the downlink Bloom filter.\n> This second change will fix your problem very non-invasively.\n>\n> I intend to backpatch this to v11 in the next few days.\n\nThank you for backpatching this!\n\nBTW, here is the next revision of the patch, which I'm proposing for v13.\n\nIn this revision the check for missing downlinks is combined with\nbt_downlink_check(), so the pages visited by bt_downlink_check() don't\ncause extra accesses. The patch only causes the following additional\npage accesses:\n1) Downlinks corresponding to \"negative infinity\" keys,\n2) Pages of the child level which are not referenced by downlinks.\n\nBut I think these two kinds are a small minority, and those accesses\ncould be traded off against a more precise missing-downlink check that\nneeds no Bloom filter. What do you think?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 12 Aug 2019 22:00:49 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Mon, Aug 12, 2019 at 12:01 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> BTW, there is next revision of patch I'm proposing for v13.\n\nCool.\n\n> In this revision check for missing downlinks is combined with\n> bt_downlink_check(). So, pages visited by bt_downlink_check() patch\n> doesn't cause extra accessed. It only causes following additional\n> page accesses:\n> 1) Downlinks corresponding to \"negative infinity\" keys,\n> 2) Pages of child level, which are not referenced by downlinks.\n>\n> But I think these two kinds are very minority, and those accesses\n> could be trade off with more precise missing downlink check without\n> bloom filter. What do you think?\n\nI am generally in favor of making the downlink check absolutely\nreliable, and am not too worried about the modest additional overhead.\nAfter all, bt_index_parent_check() is supposed to be thorough though\nexpensive. The only reason that I used a Bloom filter for\nfingerprinting downlink blocks was that it seemed important to quickly\nget amcheck coverage for subtle multi-level page deletion bugs just\nafter v11 feature freeze. We can now come up with a better design for\nthat.\n\nI was confused about how this patch worked at first. But then I\nremembered that Lehman and Yao consider downlinks to be distinct\nthings from separator keys in internal pages. The high key of an\ninternal page is the final separator key, so you have n downlinks and\nn + 1 separator keys per internal page -- two distinct things that\nappear in alternating order (the negative infinity item is not\nconsidered to have any separator key here). So, while internal page\nitems are explicitly \"(downlink, separator)\" pairs that are grouped\ninto a single tuple in nbtree, with a separate tuple just for the high\nkey, Lehman and Yao would find it just as natural to treat them as\n\"(separator, downlink)\" pairs. 
You have to skip the first downlink on\neach internal level to make that work, but this makes our\nbt_downlink_check() check have something from the target page (child's\nparent page) that is like the high key in the child.\n\nIt's already really confusing that we don't quite mean the same thing\nas Lehman and Yao when we say downlink (See also my long \"why a\nhighkey is never truly a copy of another item\" comment block within\nbt_target_page_check()), and that is not your patch's fault. But maybe\nwe need to fix that to make your patch easier to understand. (i.e.\nmaybe we need to go over every use of the word \"downlink\" in nbtree,\nand make it say something more precise, to make everything less\nconfusing.)\n\nOther feedback:\n\n* Did you miss the opportunity to verify that every high key matches\nits right sibling page's downlink tuple in parent page? We talked\nabout this already, but you don't seem to match the key (you only\nmatch the downlink block).\n\n* You are decoupling the new check from the bt_index_parent_check()\n\"heapallindexed\" option. That's probably a good thing, but you need to\nupdate the sgml docs.\n\n* Didn't you forget to remove the BtreeCheckState.rightsplit flag?\n\n* You've significantly changed the behavior of bt_downlink_check() --\nI would note this in its header comments. 
This is where ~99% of the\nnew work happens.\n\n* I don't like that you use the loaded term \"target\" here -- anything\ncalled \"target\" should be BtreeCheckState.target:\n\n> static void\n> -bt_downlink_missing_check(BtreeCheckState *state)\n> +bt_downlink_missing_check(BtreeCheckState *state, bool rightsplit,\n> + BlockNumber targetblock, Page target)\n> {\n\nIf it's unclear what I mean, this old comment should make it clearer:\n\n/*\n * State associated with verifying a B-Tree index\n *\n * target is the point of reference for a verification operation.\n *\n * Other B-Tree pages may be allocated, but those are always auxiliary (e.g.,\n * they are current target's child pages). Conceptually, problems are only\n * ever found in the current target page (or for a particular heap tuple during\n * heapallindexed verification). Each page found by verification's left/right,\n * top/bottom scan becomes the target exactly once.\n */\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 13 Aug 2019 13:43:57 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Tue, Aug 13, 2019 at 11:44 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > In this revision check for missing downlinks is combined with\n> > bt_downlink_check(). So, pages visited by bt_downlink_check() patch\n> > doesn't cause extra accessed. It only causes following additional\n> > page accesses:\n> > 1) Downlinks corresponding to \"negative infinity\" keys,\n> > 2) Pages of child level, which are not referenced by downlinks.\n> >\n> > But I think these two kinds are very minority, and those accesses\n> > could be trade off with more precise missing downlink check without\n> > bloom filter. What do you think?\n>\n> I am generally in favor of making the downlink check absolutely\n> reliable, and am not too worried about the modest additional overhead.\n> After all, bt_index_parent_check() is supposed to be thorough though\n> expensive. The only reason that I used a Bloom filter for\n> fingerprinting downlink blocks was that it seemed important to quickly\n> get amcheck coverage for subtle multi-level page deletion bugs just\n> after v11 feature freeze. We can now come up with a better design for\n> that.\n\nGreat!\n\n> I was confused about how this patch worked at first. But then I\n> remembered that Lehman and Yao consider downlinks to be distinct\n> things to separator keys in internal pages. The high key of an\n> internal page in the final separator key, so you have n downlinks and\n> n + 1 separator keys per internal page -- two distinct things that\n> appear in alternating order (the negative infinity item is not\n> considered to have any separator key here). So, while internal page\n> items are explicitly \"(downlink, separator)\" pairs that are grouped\n> into a single tuple in nbtree, with a separate tuple just for the high\n> key, Lehman and Yao would find it just as natural to treat them as\n> \"(separator, downlink)\" pairs. 
You have to skip the first downlink on\n> each internal level to make that work, but this makes our\n> bt_downlink_check() check have something from the target page (child's\n> parent page) that is like the high key in the child.\n>\n> It's already really confusing that we don't quite mean the same thing\n> as Lehman and Yao when we say downlink (See also my long \"why a\n> highkey is never truly a copy of another item\" comment block within\n> bt_target_page_check()), and that is not your patch's fault. But maybe\n> we need to fix that to make your patch easier to understand. (i.e.\n> maybe we need to go over every use of the word \"downlink\" in nbtree,\n> and make it say something more precise, to make everything less\n> confusing.)\n\nI agree that the current terms nbtree uses to describe downlinks and\nseparator keys may be confusing. I'll try to fix this and come up\nwith a patch if I succeed.\n\n> Other feedback:\n>\n> * Did you miss the opportunity to verify that every high key matches\n> its right sibling page's downlink tuple in parent page? We talked\n> about this already, but you don't seem to match the key (you only\n> match the downlink block).\n>\n> * You are decoupling the new check from the bt_index_parent_check()\n> \"heapallindexed\" option. That's probably a good thing, but you need to\n> update the sgml docs.\n>\n> * Didn't you forget to remove the BtreeCheckState.rightsplit flag?\n>\n> * You've significantly changed the behavior of bt_downlink_check() --\n> I would note this in its header comments. 
This is where ~99% of the\n> new work happens.\n>\n> * I don't like that you use the loaded term \"target\" here -- anything\n> called \"target\" should be BtreeCheckState.target:\n>\n> > static void\n> > -bt_downlink_missing_check(BtreeCheckState *state)\n> > +bt_downlink_missing_check(BtreeCheckState *state, bool rightsplit,\n> > + BlockNumber targetblock, Page target)\n> > {\n>\n> If it's unclear what I mean, this old comment should make it clearer:\n>\n> /*\n> * State associated with verifying a B-Tree index\n> *\n> * target is the point of reference for a verification operation.\n> *\n> * Other B-Tree pages may be allocated, but those are always auxiliary (e.g.,\n> * they are current target's child pages). Conceptually, problems are only\n> * ever found in the current target page (or for a particular heap tuple during\n> * heapallindexed verification). Each page found by verification's left/right,\n> * top/bottom scan becomes the target exactly once.\n> */\n\nThe revised patch seems to fix all of the above.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 19 Aug 2019 01:15:19 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Mon, Aug 19, 2019 at 01:15:19AM +0300, Alexander Korotkov wrote:\n> The revised patch seems to fix all of above.\n\nThe latest patch is failing to apply. Please provide a rebase.\n--\nMichael",
"msg_date": "Fri, 29 Nov 2019 15:03:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Fri, Nov 29, 2019 at 03:03:01PM +0900, Michael Paquier wrote:\n>On Mon, Aug 19, 2019 at 01:15:19AM +0300, Alexander Korotkov wrote:\n>> The revised patch seems to fix all of above.\n>\n>The latest patch is failing to apply. Please provide a rebase.\n\nThis still does not apply (per cputube). Can you provide a fixed patch?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 16 Jan 2020 23:05:30 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Fri, Jan 17, 2020 at 1:05 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On Fri, Nov 29, 2019 at 03:03:01PM +0900, Michael Paquier wrote:\n> >On Mon, Aug 19, 2019 at 01:15:19AM +0300, Alexander Korotkov wrote:\n> >> The revised patch seems to fix all of above.\n> >\n> >The latest patch is failing to apply. Please provide a rebase.\n>\n> This still does not apply (per cputube). Can you provide a fixed patch?\n\nRebased patch is attached. Sorry for such a huge delay.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 23 Jan 2020 05:41:37 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 6:41 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> Rebased patch is attached. Sorry for so huge delay.\n\nI really like this patch. Your interest in amcheck is something that\nmakes me feel good about having put so much work into it myself.\n\nHere are some review comments:\n\n> + /*\n> + * Rightlink and incomplete split flag of previous block referenced by\n> + * downlink.\n> + */\n> + BlockNumber prevrightlink;\n> + bool previncompletesplit;\n> +\n\nWhat downlink? What does this mean? Do you mean the most recently\nfollowed rightlink on the current level, or InvalidBlockNumber if\ntarget page is the leftmost page on the current level on the scan?\n\n(Thinks some more...)\n\nActually, these two new fields track the state *one level down* from\nthe target page level when !readonly (unless target page is on the\nleaf level). Right? Comments should be explicit about this. The\ncurrent comments about downlinks aren't clear.\n\n> if (offset_is_negative_infinity(topaque, offset))\n> + {\n> + /*\n> + * Initialize downlink connectivity check if needed.\n> + */\n> + if (!P_ISLEAF(topaque) && state->readonly)\n> + {\n> + bt_downlink_connectivity_check(state,\n> + offset,\n> + NULL,\n> + topaque->btpo.level);\n> + }\n> continue;\n> + }\n\nDon't need the \"!P_ISLEAF()\" here. 
Also, you should say something like\n\"we need to call this here because the usual callsite in\nbt_downlink_check() won't be reached\".\n\n> /*\n> - * * Check if page has a downlink in parent *\n> - *\n> - * This can only be checked in heapallindexed + readonly case.\n> + * If we traversed the whole level to the rightmost page, there might be\n> + * missing downlinks for the pages to the right of rightmost downlink.\n> + * Check for them.\n> */\n\nYou mean \"to the right of the child page pointed to by our rightmost downlink\"?\n\nI think that the final bt_downlink_connectivity_check() call within\nbt_target_page_check() should make it clear that it is kind of special\ncompared to the other two calls.\n\n> +/*\n> + * Check connectivity of downlinks. Traverse rightlinks from previous downlink\n> + * to the current one. Check that there are no intermediate pages with missing\n> + * downlinks.\n> + *\n> + * If 'loaded_page' is given, it's assumed to be contents of downlink\n> + * referenced by 'downlinkoffnum'.\n> + */\n\nSay \"assumed to be the page pointed to by the downlink\", perhaps?\n\n> +static void\n> +bt_downlink_connectivity_check(BtreeCheckState *state,\n> + OffsetNumber downlinkoffnum,\n> + Page loaded_page,\n> + uint32 parent_level)\n> +{\n\nIn amcheck, we always have a current target page. Every page gets to\nbe the target exactly once, though sometimes other subsidiary pages\nare accessed. We try to blame the target page, even with checks that\nare technically against its child/sibling/whatever. The target page is\nalways our conceptual point of reference. Sometimes this is a bit\nartificial, but it's still worth doing consistently. 
So I think you\nshould change these argument names with that in mind (see below).\n\n> + /*\n> + * If we visit page with high key, check that it should be equal to\n> + * the target key next to corresponding downlink.\n> + */\n\nI suggest \"...check that it is equal to the target key...\"\n\n> + /*\n> + * There might be two situations when we examine high key. If\n> + * current child page is referenced by given downlink, we should\n> + * look to the next offset number for matching key.\n\nYou mean \"the next offset number for the matching key from the target\npage\"? I find it much easier to keep this stuff in my head if\neverything is defined in terms of its relationship with the current\ntarget page. For example, bt_downlink_connectivity_check()'s\n\"parent_level\" argument should be called \"target_level\" instead, while\nits \"loaded_page\" should be called \"loaded_child\". Maybe\n\"downlinkoffnum\" should be \"target_downlinkoffnum\". And downlinkoffnum\nshould definitely be explained in comments at the top of\nbt_downlink_connectivity_check() (e.g., say what it means when it is\nInvalidOffsetNumber).\n\n> Alternatively\n> + * we might find child with high key while traversing from\n> + * previous downlink to current one. Then matching key resides\n> + * the same offset number as current downlink.\n> + */\n\nNot sure what \"traversing from previous downlink to current one\" means at all.\n\n> + if (!offset_is_negative_infinity(topaque, pivotkey_offset) &&\n> + pivotkey_offset <= PageGetMaxOffsetNumber(state->target))\n> + {\n> + uint32 cmp = _bt_compare(state->rel,\n> + skey,\n> + state->target,\n> + pivotkey_offset);\n\nThere is no need to bother with a _bt_compare() here. Why not just use\nmemcmp() with a pointer to itup->t_tid.ip_posid (i.e. memcmp() that\nskips the block number)? 
I think that it is better to expect the keys\nto be *identical* among pivot tuples, including within tuple alignment\npadding (only the downlink block number can be different here). If\nnon-pivot tuples were involved then you couldn't do it this way, but\nthey're never involved, so it makes sense. A memcmp() will be faster,\nobviously. More importantly, it has the advantage of not relying on\nopclass infrastructure in any way. It might be worth adding an\ninternal verify_nbtree.c static helper function to do the memcmp() for\nyou -- bt_pivot_tuple_identical(), or something like that.\n\nI think bt_downlink_check() and bt_downlink_connectivity_check()\nshould be renamed to something broader. In my mind, downlink is\nbasically a block number. We have been sloppy about using the term\ndownlink when we really mean \"pivot tuple with a downlink\" -- I am\nguilty of this myself. But it seems more important, now that you have\nthe new high key check.\n\nI particularly don't like the way you sometimes say \"downlink\" when\nyou mean \"child page\". You do that in this error message:\n\n> + (errcode(ERRCODE_INDEX_CORRUPTED),\n> + errmsg(\"block found while traversing rightlinks from downlink of index \\\"%s\\\" has invalid level\",\n> + RelationGetRelationName(state->rel)),\n\nTypo here:\n\n> + /*\n> + * If no previos rightlink is memorized, get it from current downlink for\n> + * future usage.\n> + */\n\nYou mean \"previous\". 
Also, I think that you should say \"memorized for\ncurrent level just below target page's level\".\n\n> * within bt_check_level_from_leftmost() won't reach the page either,\n> * since the leaf's live siblings should have their sibling links updated\n> - * to bypass the deletion target page when it is marked fully dead.)\n> + * to bypass the deletion page under check when it is marked fully dead.)\n> *\n\nThis change seems wrong or unnecessary -- \"deletion target\" means\n\"page undergoing deletion\" (not necessarily marked P_ISDELETED() just\nyet), and has nothing to do with the amcheck target. You can change\nthis if you want, but I don't get it.\n\nI tested this by using pg_hexedit to corrupt the least significant\nbyte of a text key in the root page:\n\npg@tpce:5432 [32610]=# select bt_index_parent_check('pk_holding');\nDEBUG: verifying level 2 (true root level)\nDEBUG: verifying 9 items on internal block 290\nDEBUG: verifying level 1\nDEBUG: verifying 285 items on internal block 3\nERROR: mismatch between parent key and child high key index \"pk_holding\"\nDETAIL: Parent block=3 child block=9 parent page lsn=998/EFA21550.\n\nHappy to see that this works, even though this is one of the subtlest\npossible forms of index corruption. Previously, we could sometimes\ncatch this with \"rootdescend\" verification, but only if there were\n*current* items that a scan couldn't find on lower levels (often just\nthe leaf level). But now it doesn't matter -- we'll always detect it.\n(I think.)\n\nShouldn't this error message read '...in index \"pk_holding\"'? You\nmissed the \"in\". Also, why not have the DETAIL message call the\n\"Parent block\" the target block?\n\nI think that bt_downlink_connectivity_check() should have some\nhigh-level comments about what it's supposed to do. Perhaps an example\nis the best way to explain the concepts. Maybe say something about a\nthree level B-Tree. 
Each of the separator keys in the grandparent/root\npage should also appear as high keys at the parent level. Each of the\nseparator keys in the parent level should also appear as high keys on\nthe leaf level, including the separators from the parent level high\nkeys. Since each separator defines which subtrees are <= and > of the\nseparator, there must be an identical seam of separators (in high\nkeys) on lower levels. bt_downlink_connectivity_check() verifies that\nseparator keys agree across a single level, which verifies the\nintegrity of the whole tree.\n\n(Thinks some more...)\n\nActually, this patch doesn't quite manage to verify that there is this\n\"unbroken seam\" of separator keys from the root to the leaf, so my\nsuggested wording is kind of wrong -- but I think we can fix this\nweakness. The specific weakness that I saw and verified exists is as\nfollows:\n\nIf I corrupt the high key of most of the leaf pages in a multi-level\nindex just a little bit (by once again corrupting the least\nsignificant byte of the key using pg_hexedit), then the new check\nalone will usually detect the problem, which is good. However, if I\ndeliberately pick a leaf page that happens to be the rightmost child\nof some internal page, then it is a different story -- even the new\ncheck won't detect the problem (the existing rootdescend check may or\nmay not detect the problem, depending on the current non-pivot tuples\nnear the leaf high key in question). There is no real reason why we\nshouldn't be able to detect this problem, though.\n\nThe solution is to recognize that we sometimes need to use the\ntarget/parent page high key separator -- not the separator key from\nsome pivot tuple in parent that also contains a downlink. Goetz Graefe\ncalls this \"the cousin problem\" [1]. 
To be more precise, I think that\nthe \"pivotkey_offset <= PageGetMaxOffsetNumber(state->target))\" part\nof this test can be improved:\n\n> + /*\n> + * There might be two situations when we examine high key. If\n> + * current child page is referenced by given downlink, we should\n> + * look to the next offset number for matching key. Alternatively\n> + * we might find child with high key while traversing from\n> + * previous downlink to current one. Then matching key resides\n> + * the same offset number as current downlink.\n> + */\n> + if (blkno == downlink)\n> + pivotkey_offset = OffsetNumberNext(downlinkoffnum);\n> + else\n> + pivotkey_offset = downlinkoffnum;\n> +\n> + topaque = (BTPageOpaque) PageGetSpecialPointer(state->target);\n> +\n> + if (!offset_is_negative_infinity(topaque, pivotkey_offset) &&\n> + pivotkey_offset <= PageGetMaxOffsetNumber(state->target))\n\nWhen OffsetNumberNext(downlinkoffnum) exceeds the\nPageGetMaxOffsetNumber(state->target), doesn't that actually mean that\nthe high key offset (i.e. P_HIKEY) should be used to get an item from\nthe next level up? We can still correctly detect the problem that way.\nRemember, the high key on an internal page is a tuple that contains a\nseparator but no downlink, which is really not that special within an\ninternal page -- if you think about it from the Lehman & Yao\nperspective. So we should take the pivot tuple from the parent at\noffset P_HIKEY, and everything can work just the same.\n\nThat said, I guess you really do need the\n\"!offset_is_negative_infinity(topaque, pivotkey_offset)\" part of the\ntest. The only other possibility is that you access the target/parent\npage's downlink + separator in its own parent page (i.e. the\ngrandparent of the page whose high key might be corrupt), which is\nsignificantly more complexity -- that may not be worth it. 
(If you did\nthis, you would have to teach the code the difference between\n\"absolute\" negative infinity in the leftmost leaf page on the leaf\nlevel, and \"subtree relative\" negative infinity for other leaf pages\nthat are merely leftmost within a subtree).\n\nIn summary: I suppose that we can also solve \"the cousin problem\"\nquite easily, but only for rightmost cousins within a subtree --\nleftmost cousins might be too messy to verify for it to be worth it.\nWe don't want to have to jump two or three levels up within\nbt_downlink_connectivity_check() just for leftmost cousin pages. But\nmaybe you're feeling ambitious! What do you think?\n\nNote: There is an existing comment about this exact negative infinity\nbusiness within bt_downlink_check(). It starts with \"Negative\ninifinity items can be thought of as a strict lower bound that works\ntransitively...\". There should probably be some comment updates to\nthis comment block as part of this patch.\n\n[1] https://pdfs.semanticscholar.org/fd45/15ab23c00231d96c95c1091459d0d1eebfae.pdf\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 23 Jan 2020 17:31:18 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Thu, Jan 23, 2020 at 5:31 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> In summary: I suppose that we can also solve \"the cousin problem\"\n> quite easily, but only for rightmost cousins within a subtree --\n> leftmost cousins might be too messy to verify for it to be worth it.\n> We don't want to have to jump two or three levels up within\n> bt_downlink_connectivity_check() just for leftmost cousin pages. But\n> maybe you're feeling ambitious! What do you think?\n\nI suppose the alternative is to get the high key of the parent's left\nsibling, rather than going to the parent's parent (i.e. our\ngrandparent). That would probably be the best way to get a separator\nkey to compare against the high key in the leftmost cousin page of a\nsubtree, if in fact we wanted to *fully* solve the \"cousin problem\".\nGoetz Graefe recommends keeping both a low key and a high key in every\npage for verification purposes. We don't actually have a low key (we\nonly have a truncated negative infinity item), but this approach isn't\nthat far off having a low key.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 23 Jan 2020 17:40:37 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Thu, Jan 23, 2020 at 5:40 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I suppose the alternative is to get the high key of the parent's left\n> sibling, rather than going to the parent's parent (i.e. our\n> grandparent). That would probably be the best way to get a separator\n> key to compare against the high key in the leftmost cousin page of a\n> subtree, if in fact we wanted to *fully* solve the \"cousin problem\".\n\nI think I've confused myself here. The\n\"!offset_is_negative_infinity(topaque, pivotkey_offset)\" part of the\nbt_downlink_connectivity_check() high key check test that I mentioned\nnow *also* seems unnecessary. Any high key in a page that isn't marked\n\"half-dead\" or marked \"deleted\" or marked \"has incomplete split\" can\nbe targeted by the check. Once the page meets those criteria, there\nmust be a pivot tuple in the parent page that contains an\nidentical-to-highkey separator key (this could be the parent's own\nhigh key).\n\nThe only thing that you need to do is be careful about rightmost\nparent pages of a non-rightmost page -- stuff like that. But, I think\nthat that's only needed because an amcheck segfault isn't a very nice\nway to detect corruption.\n\nThe existing comment about negative infinity items within\nbt_downlink_check() that I mentioned in my main e-mail (the \"Note:\"\npart) doesn't quite apply here. You're not verifying a pivot tuple that\nhas a downlink (which might be negative infinity) against the lower\nlevels -- you're doing *the opposite*. That is, you're verifying a\nhigh key using the parent. Which seems like the right way to do it --\nyou can test for the incomplete split flag and so on by doing it\nbottom up (not top down). This must have been why I was confused.\n\nPhew!\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 23 Jan 2020 18:27:49 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "Hi, Peter!\n\nOn Fri, Jan 24, 2020 at 4:31 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Wed, Jan 22, 2020 at 6:41 PM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > Rebased patch is attached. Sorry for so huge delay.\n>\n> I really like this patch. Your interest in amcheck is something that\n> makes me feel good about having put so much work into it myself.\n>\n> Here are some review comments:\n\nGreat, thank you very much!\n\n> > + /*\n> > + * Rightlink and incomplete split flag of previous block referenced by\n> > + * downlink.\n> > + */\n> > + BlockNumber prevrightlink;\n> > + bool previncompletesplit;\n> > +\n>\n> What downlink? What does this mean? Do you mean the most recently\n> followed rightlink on the current level, or InvalidBlockNumber if\n> target page is the leftmost page on the current level on the scan?\n>\n> (Thinks some more...)\n>\n> Actually, these two new fields track the state *one level down* from\n> the target page level when !readonly (unless target page is on the\n> leaf level). Right? Comments should be explicit about this. The\n> current comments about downlinks isn't clear.\n\nI agree. I've used very vague terms in comments. Revised now.\n\n> > if (offset_is_negative_infinity(topaque, offset))\n> > + {\n> > + /*\n> > + * Initialize downlink connectivity check if needed.\n> > + */\n> > + if (!P_ISLEAF(topaque) && state->readonly)\n> > + {\n> > + bt_downlink_connectivity_check(state,\n> > + offset,\n> > + NULL,\n> > + topaque->btpo.level);\n> > + }\n> > continue;\n> > + }\n>\n> Don't need the \"!P_ISLEAF()\" here.\n\nWhy wouldn't I need it? bt_downlink_connectivity_check() checks one level\ndown to the target level. 
But there is no one level down to leaf...\n\n> Also, you should say something like\n> \"we need to call this here because the usual callsite in\n> bt_downlink_check() won't be reached\".\n\nSure, fixed.\n\n> > /*\n> > - * * Check if page has a downlink in parent *\n> > - *\n> > - * This can only be checked in heapallindexed + readonly case.\n> > + * If we traversed the whole level to the rightmost page, there might be\n> > + * missing downlinks for the pages to the right of rightmost downlink.\n> > + * Check for them.\n> > */\n>\n> You mean \"to the right of the child page pointed to by our rightmost downlink\"?\n\nYep, fixed.\n\n> I think that the final bt_downlink_connectivity_check() call within\n> bt_target_page_check() should make it clear that it is kind of special\n> compared to the other two calls.\n\nYes, this is fixed too.\n\n> > +/*\n> > + * Check connectivity of downlinks. Traverse rightlinks from previous downlink\n> > + * to the current one. Check that there are no intermediate pages with missing\n> > + * downlinks.\n> > + *\n> > + * If 'loaded_page' is given, it's assumed to be contents of downlink\n> > + * referenced by 'downlinkoffnum'.\n> > + */\n>\n> Say \"assumed to be the page pointed to by the downlink\", perhaps?\n\nYes, fixed.\n\n> > +static void\n> > +bt_downlink_connectivity_check(BtreeCheckState *state,\n> > + OffsetNumber downlinkoffnum,\n> > + Page loaded_page,\n> > + uint32 parent_level)\n> > +{\n>\n> In amcheck, we always have a current target page. Every page gets to\n> be the target exactly once, though sometimes other subsidiary pages\n> are accessed. We try to blame the target page, even with checks that\n> are technically against its child/sibling/whatever. The target page is\n> always our conceptual point of reference. Sometimes this is a bit\n> artificial, but it's still worth doing consistently. 
So I think you\n> should change these argument names with that in mind (see below).\n\nYes, the arguments were changed as you proposed.\n\n> > + /*\n> > + * If we visit page with high key, check that it should be equal to\n> > + * the target key next to corresponding downlink.\n> > + */\n>\n> I suggest \"...check that it is equal to the target key...\"\n\nAgree, fixed.\n\n> > + /*\n> > + * There might be two situations when we examine high key. If\n> > + * current child page is referenced by given downlink, we should\n> > + * look to the next offset number for matching key.\n>\n> You mean \"the next offset number for the matching key from the target\n> page\"? I find it much easier to keep this stuff in my head if\n> everything is defined in terms of its relationship with the current\n> target page. For example, bt_downlink_connectivity_check()'s\n> \"parent_level\" argument should be called \"target_level\" instead, while\n> its \"loaded_page\" should be called \"loaded_child\". Maybe\n> \"downlinkoffnum\" should be \"target_downlinkoffnum\". And downlinkoffnum\n> should definitely be explained in comments at the top of\n> bt_downlink_connectivity_check() (e.g., say what it means when it is\n> InvalidOffsetNumber).\n\nRenamed as you proposed.\n\n> > Alternatively\n> > + * we might find child with high key while traversing from\n> > + * previous downlink to current one. Then matching key resides\n> > + * the same offset number as current downlink.\n> > + */\n>\n> Not sure what \"traversing from previous downlink to current one\" means at all.\n\nI've rephrased this comment, please check.\n\n> > + if (!offset_is_negative_infinity(topaque, pivotkey_offset) &&\n> > + pivotkey_offset <= PageGetMaxOffsetNumber(state->target))\n> > + {\n> > + uint32 cmp = _bt_compare(state->rel,\n> > + skey,\n> > + state->target,\n> > + pivotkey_offset);\n>\n> There is no need to bother with a _bt_compare() here. Why not just use\n> memcmp() with a pointer to itup->t_tid.ip_posid (i.e. 
memcmp() that\n> skips the block number)? I think that it is better to expect the keys\n> to be *identical* among pivot tuples, including within tuple alignment\n> padding (only the downlink block number can be different here). If\n> non-pivot tuples were involved then you couldn't do it this way, but\n> they're never involved, so it makes sense. A memcmp() will be faster,\n> obviously. More importantly, it has the advantage of not relying on\n> opclass infrastructure in any way. It might be worth adding an\n> internal verify_nbtree.c static helper function to do the memcmp() for\n> you -- bt_pivot_tuple_identical(), or something like that.\n\nAgree, replaced _bt_compare() with bt_pivot_tuple_identical(). It\nbecomes even simpler now, thanks!\n\n> I think bt_downlink_check() and bt_downlink_connectivity_check()\n> should be renamed to something broader. In my mind, downlink is\n> basically a block number. We have been sloppy about using the term\n> downlink when we really mean \"pivot tuple with a downlink\" -- I am\n> guilty of this myself. But it seems more important, now that you have\n> the new high key check.\n\nHmm... Names are hard for me. I didn't do any renaming for now. What\nabout this?\nbt_downlink_check() => bt_child_check()\nbt_downlink_connectivity_check() => bt_child_connectivity_highkey_check()\n\n> I particularly don't like the way you sometimes say \"downlink\" when\n> you mean \"child page\". You do that in this error message:\n>\n> > + (errcode(ERRCODE_INDEX_CORRUPTED),\n> > + errmsg(\"block found while traversing rightlinks from downlink of index \\\"%s\\\" has invalid level\",\n> > + RelationGetRelationName(state->rel)),\n\nAgree. Error message has been changed.\n\n> Typo here:\n>\n> > + /*\n> > + * If no previos rightlink is memorized, get it from current downlink for\n> > + * future usage.\n> > + */\n>\n> You mean \"previous\". 
Also, I think that you should say \"memorized for\n> current level just below target page's level\".\n\nYep, fixed.\n\n> > * within bt_check_level_from_leftmost() won't reach the page either,\n> > * since the leaf's live siblings should have their sibling links updated\n> > - * to bypass the deletion target page when it is marked fully dead.)\n> > + * to bypass the deletion page under check when it is marked fully dead.)\n> > *\n>\n> This change seems wrong or unnecessary -- \"deletion target\" means\n> \"page undergoing deletion\" (not necessarily marked P_ISDELETED() just\n> yet), and has nothing to do with the amcheck target. You can change\n> this if you want, but I don't get it.\n\nSeems like random oversight. Change removed.\n\n> I tested this by using pg_hexedit to corrupt the least significant\n> byte of a text key in the root page:\n>\n> pg@tpce:5432 [32610]=# select bt_index_parent_check('pk_holding');\n> DEBUG: verifying level 2 (true root level)\n> DEBUG: verifying 9 items on internal block 290\n> DEBUG: verifying level 1\n> DEBUG: verifying 285 items on internal block 3\n> ERROR: mismatch between parent key and child high key index \"pk_holding\"\n> DETAIL: Parent block=3 child block=9 parent page lsn=998/EFA21550.\n>\n> Happy to see that this works, even though this is one of the subtlest\n> possible forms of index corruption. Previously, we could sometimes\n> catch this with \"rootdescend\" verification, but only if there were\n> *current* items that a scan couldn't find on lower levels (often just\n> the leaf level). But now it doesn't matter -- we'll always detect it.\n> (I think.)\n\nThank you for testing!\n\n> Shouldn't this error message read '...in index \"pk_holding\"'? You\n> missed the \"in\". Also, why not have the DETAIL message call the\n> \"Parent block\" the target block?\n\nYep, fixed.\n\n> I think that bt_downlink_connectivity_check() should have some\n> high-level comments about what it's supposed to do. 
Perhaps an example\n> is the best way to explain the concepts. Maybe say something about a\n> three level B-Tree. Each of the separator keys in the grandparent/root\n> page should also appear as high keys at the parent level. Each of the\n> separator keys in the parent level should also appear as high keys on\n> the leaf level, including the separators from the parent level high\n> keys. Since each separator defines which subtrees are <= and > of the\n> separator, there must be an identical seam of separators (in high\n> keys) on lower levels. bt_downlink_connectivity_check() verifies that\n> separator keys agree across a single level, which verifies the\n> integrity of the whole tree.\n\nI've revised finding the matching pivot key for the high key. Now, the\ncode assumes we should always find a matching pivot key. It could use\neither the target high key or the left sibling high key (which is\nmemorized as \"low key\").\n\nI've checked this on normal indexes, but I didn't try to exercise this\nwith broken indexes. I would appreciate it if you do.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 18 Feb 2020 13:15:47 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Tue, Feb 18, 2020 at 1:15 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Fri, Jan 24, 2020 at 4:31 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > On Wed, Jan 22, 2020 at 6:41 PM Alexander Korotkov\n> > <a.korotkov@postgrespro.ru> wrote:\n> > > Rebased patch is attached. Sorry for so huge delay.\n> >\n> > I really like this patch. Your interest in amcheck is something that\n> > makes me feel good about having put so much work into it myself.\n> >\n> > Here are some review comments:\n>\n> Great, thank you very much!\n\nSorry, I forgot to commit some comments before publishing the patch.\nThe right version is attached.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 18 Feb 2020 13:17:18 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Tue, Feb 18, 2020 at 2:16 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> Great, thank you very much!\n\nNo problem!\n\nMy remarks here are based on\n\"amcheck-btree-improve-missing-parent-downlinks-check-6.patch\". I have\nfound a false positive corruption report bug in this latest version --\nsee note below about incomplete page splits.\n\n> > Don't need the \"!P_ISLEAF()\" here.\n>\n> Why don't I need. bt_downlink_connectivity_check() checks one level\n> down to the target level. But there is no one level down to leaf...\n\nBecause offset_is_negative_infinity() checks P_ISLEAF() for you. Maybe\nit's better your way, though -- apparently it's clearer.\n\n> > > Alternatively\n> > > + * we might find child with high key while traversing from\n> > > + * previous downlink to current one. Then matching key resides\n> > > + * the same offset number as current downlink.\n> > > + */\n> >\n> > Not sure what \"traversing from previous downlink to current one\" means at all.\n>\n> I've rephrased this comment, please check.\n\n> Agree, replaced _bt_compare() with bt_pivot_tuple_identical(). It\n> becomes even simpler now, thanks!\n\nThere was actually an even better reason to invent\nbt_pivot_tuple_identical(): a call to _bt_compare() in amcheck needs\nto do something like the extra steps that you see in routines like\ninvariant_l_offset(). _bt_compare() will return 0 when the insertion\nscankey has a prefix of scankey/column values that are equal, even\nthough there may be additional columns in the index tuple that are not\ncompared. So, you could have a truncated multi-column high key that is\n\"equal\" to pivot tuple in parent that is actually to the right in the\nkey space. This blind spot would often occur with low cardinality\nindexes, where we often have something like this in pivot tuples on\ninternal pages:\n\n'foo, -inf'\n'foo, (1,24)'\n'food, -inf'. 
<-- This pivot tuple's downlink points to the final leaf\npage that's filled with duplicates of the value 'foo'\n'food, (3,19)' <-- This pivot tuple's downlink points to the *first*\nleaf page that's filled with duplicates of the value 'food'\n...\n\nThe situation is really common in low cardinality indexes because\nnbtsplitloc.c hates splitting a leaf page between two duplicates -- it\nis avoided whenever possible. You reliably get a '-inf' value for the\nTID in the first pivot tuple for the duplicate, followed by a real\nheap TID for later pivot tuples for pages with the same duplicate\nvalue.\n\n(Anyway, it's not important now.)\n\n> > I think bt_downlink_check() and bt_downlink_connectivity_check()\n> > should be renamed to something broader. In my mind, downlink is\n> > basically a block number. We have been sloppy about using the term\n> > downlink when we really mean \"pivot tuple with a downlink\" -- I am\n> > guilty of this myself. But it seems more important, now that you have\n> > the new high key check.\n>\n> Hmm... Names are hard for me. I didn't do any renaming for now. What\n> about this?\n> bt_downlink_check() => bt_child_check()\n> bt_downlink_connectivity_check() => bt_child_connectivity_highkey_check()\n\nI suggest:\n\nbt_downlink_check() => bt_child_check()\nbt_downlink_connectivity_check() => bt_child_highkey_check()\n\nWhile bt_downlink_connectivity_check() moves right on the target's\nchild level, this isn't the common case. Moving right like that\ndoesn't need to be suggested by the name of the function.\n\nMost of the time, we just check the high key -- right?\n\n> I've revised finding the matching pivot key for high key. Now, code\n> assumes we should always find a matching pivot key. It could use both\n> target high key or left sibling high key (which is memorized as \"low\n> key\").\n>\n> I've checked this on normal indexes, but I didn't try to exercise this\n> with broken indexes. 
I would appreciate if you do.\n\nI can confirm that checking the high key in target's child page\nagainst the high key in target (when appropriate) removes the \"cousin\npage verification blind spot\" that I noticed in the last version, as\nexpected. Great!\n\n* You should say \"target page lsn\" here instead:\n\npg@tpce:5432 [19852]=# select bt_index_parent_check(:'idxrelation',true, true);\nERROR: mismatch between parent key and child high key in index \"i_holding2\"\nDETAIL: Target block=1570 child block=1690 parent page lsn=0/0.\nTime: 12.509 ms\n\n* Maybe say \"Move to the right on the child level\" in a comment above\nthe bt_downlink_connectivity_check() \"while (true)\" loop here:\n\n> +\n> + while (true)\n> + {\n> + /*\n> + * Did we traverse the whole tree level and this is check for pages to\n> + * the right of rightmost downlink?\n> + */\n\n* If you are going to save a low key for the target page in memory,\nthen you only need to do so for \"state->readonly\"/parent verification.\n\n* You should s/lokey/lowkey/ -- I prefer the spelling \"lowkey\" or \"low\nkey\". This is a term that nbtsort.c now uses, in case you didn't know.\n\n* The reason for saving a low key for each target page is very\nunclear. Can we fix this?\n\nThe low key hardly ever gets used -- I can comment out the code that\nuses a low key within bt_downlink_connectivity_check(), and indexes\nthat I tested appear fine. 
So I imagine this is for incomplete split\ncases -- right?\n\nI tested incomplete split cases using this simple hack:\n\ndiff --git a/src/backend/access/nbtree/nbtinsert.c\nb/src/backend/access/nbtree/nbtinsert.c\nindex 4e5849ab8e..5811338584 100644\n--- a/src/backend/access/nbtree/nbtinsert.c\n+++ b/src/backend/access/nbtree/nbtinsert.c\n@@ -1821,7 +1821,7 @@ _bt_insert_parent(Relation rel,\n */\n _bt_relbuf(rel, rbuf);\n\n- if (pbuf == InvalidBuffer)\n+ if (random() <= (MAX_RANDOM_VALUE / 100))\n ereport(ERROR,\n (errcode(ERRCODE_INDEX_CORRUPTED),\n errmsg_internal(\"failed to re-find parent key in\nindex \\\"%s\\\" for split pages %u/%u\",\n\nNow, 1% of all page splits will fail after the first phase finishes,\nbut before the second phase begins/finishes. The incomplete split flag\nwill be found set by other, future inserts, though, at which point the\nsplit will be finished (unless we're unlucky again).\n\nAn INSERT that inserts many rows is bound to fail with this hack\napplied, but the INSERT shouldn't leave the index in a state that\nlooks like corruption to amcheck. I can see false positive reports of\ncorruption like this, though.\n\nConsider the following sample session. The session inserts data from\none table into another table with an equivalent schema. 
As you can\nsee, I have to get my INSERT to fail three times before I see the\nproblem:\n\npg@tpce:5432 [13026]=# truncate holding2;\nTRUNCATE TABLE\npg@tpce:5432 [13026]=# insert into holding2 select * from holding;\nERROR: failed to re-find parent key in index \"pk_holding2\" for split\npages 83/158\npg@tpce:5432 [13026]=# select bt_index_parent_check('i_holding2',true, true);\nDEBUG: verifying level 1 (true root level)\nDEBUG: verifying 200 items on internal block 3\nDEBUG: verifying level 0 (leaf level)\nDEBUG: verifying 262 items on leaf block 1\nDEBUG: verifying 258 items on leaf block 2\n*** SNIP ***\nDEBUG: verifying 123 items on leaf block 201\nDEBUG: verifying that tuples from index \"i_holding2\" are present in \"holding2\"\nDEBUG: finished verifying presence of 0 tuples from table \"holding2\"\nwith bitset 4.83% set\n┌───────────────────────┐\n│ bt_index_parent_check │ <-- No false positive yet\n├───────────────────────┤\n│ │\n└───────────────────────┘\n(1 row)\n\npg@tpce:5432 [13026]=# insert into holding2 select * from holding;\nDEBUG: finishing incomplete split of 83/158\nERROR: failed to re-find parent key in index \"i_holding2\" for split\npages 435/436\npg@tpce:5432 [13026]=# select bt_index_parent_check(:'idxrelation',true, true);\nDEBUG: verifying level 2 (true root level)\nDEBUG: verifying 2 items on internal block 303\n*** SNIP ***\nDEBUG: verifying 74 items on leaf block 436\nDEBUG: verifying that tuples from index \"i_holding2\" are present in \"holding2\"\nDEBUG: finished verifying presence of 0 tuples from table \"holding2\"\nwith bitset 9.97% set\n┌───────────────────────┐\n│ bt_index_parent_check │ <-- No false positive yet\n├───────────────────────┤\n│ │\n└───────────────────────┘\n(1 row)\n\npg@tpce:5432 [13026]=# insert into holding2 select * from holding;\nERROR: failed to re-find parent key in index \"i_holding2\" for split\npages 331/577\npg@tpce:5432 [13026]=# select bt_index_parent_check(:'idxrelation',true, true);\nDEBUG: 
verifying level 2 (true root level)\nDEBUG: verifying 3 items on internal block 303\nDEBUG: verifying level 1\nDEBUG: verifying 151 items on internal block 3\nDEBUG: verifying 189 items on internal block 521\nDEBUG: verifying 233 items on internal block 302\nWARNING: 577 rightsplit\nERROR: leaf index block lacks downlink in index \"i_holding2\". <--\nFalse positive \"corruption\"\nDETAIL: Block=127 page lsn=0/0.\n\nNotice I added a custom WARNING here, and we get the false positive\nerror just after the point that the WARNING is seen. This WARNING is\ntriggered by the temporary instrumentation change that I added to\namcheck, and can be seen in some cases that don't have an accompanying\nfalse positive report of corruption:\n\ndiff --git a/contrib/amcheck/verify_nbtree.c b/contrib/amcheck/verify_nbtree.c\nindex d6cb10e263..dcfee9e4fa 100644\n--- a/contrib/amcheck/verify_nbtree.c\n+++ b/contrib/amcheck/verify_nbtree.c\n@@ -1654,6 +1654,8 @@ bt_downlink_connectivity_check(BtreeCheckState *state,\n errmsg(\"circular link chain found in block %u of\nindex \\\"%s\\\"\",\n blkno, RelationGetRelationName(state->rel))));\n\n+ if (rightsplit)\n+ elog(WARNING, \"%u rightsplit\", blkno);\n if (!first && !P_IGNORE(opaque))\n {\n /* blkno probably has missing parent downlink */\n*** END DIFF ***\n\nAnyway, I didn't take the time to debug this today, but I suspect that\nthe problem is that the lowkey thing is kind of brittle. If you can't\nfigure it out, let me know and I'll help with it.\n\n* Can we reverse the order here, so that the common case (i.e. the\noffset_is_negative_infinity() case) comes first?:\n\n> + if (offset_is_negative_infinity(topaque, pivotkey_offset))\n> + {\n> + /*\n> + * We're going to try match child high key to \"negative\n> + * infinity key\". 
That means we should match to high key of\n> + * left sibling of target page.\n> + */\n *** SNIP ***\n\n> + }\n> + else\n> + {\n *** SNIP ***\n\n> + }\n\n* Can the offset_is_negative_infinity() branch explain what's going on\nat a much higher level than this?\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 18 Feb 2020 14:16:36 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "Hi!\n\nThank you for your review. The revised version is attached.\n\nOn Wed, Feb 19, 2020 at 1:16 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Tue, Feb 18, 2020 at 2:16 AM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > > Don't need the \"!P_ISLEAF()\" here.\n> >\n> > Why don't I need. bt_downlink_connectivity_check() checks one level\n> > down to the target level. But there is no one level down to leaf...\n>\n> Because offset_is_negative_infinity() checks P_ISLEAF() for you. Maybe\n> it's better your way, though -- apparently it's clearer.\n\nOh, I see. I prefer to leave it my way. Explicit check is nearly\nfree and makes it clearer for me.\n\n> > > > Alternatively\n> > > > + * we might find child with high key while traversing from\n> > > > + * previous downlink to current one. Then matching key resides\n> > > > + * the same offset number as current downlink.\n> > > > + */\n> > >\n> > > Not sure what \"traversing from previous downlink to current one\" means at all.\n> >\n> > I've rephrased this comment, please check.\n>\n> > Agree, replaced _bt_compare() with bt_pivot_tuple_identical(). It\n> > becomes even simpler now, thanks!\n>\n> There was actually an even better reason to invent\n> bt_pivot_tuple_identical(): a call to _bt_compare() in amcheck needs\n> to do something like the extra steps that you see in routines like\n> invariant_l_offset(). _bt_compare() will return 0 when the insertion\n> scankey has a prefix of scankey/column values that are equal, even\n> though there may be additional columns in the index tuple that are not\n> compared. So, you could have a truncated multi-column high key that is\n> \"equal\" to pivot tuple in parent that is actually to the right in the\n> key space. This blind spot would often occur with low cardinality\n> indexes, where we often have something like this in pivot tuples on\n> internal pages:\n>\n> 'foo, -inf'\n> 'foo, (1,24)'\n> 'food, -inf'. 
<-- This pivot tuple's downlink points to the final leaf\n> page that's filled with duplicates of the value 'foo'\n> 'food, (3,19)' <-- This pivot tuple's downlink points to the *first*\n> leaf page that's filled with duplicates of the value 'food'\n> ...\n>\n> The situation is really common in low cardinality indexes because\n> nbtsplitloc.c hates splitting a leaf page between two duplicates -- it\n> is avoided whenever possible. You reliably get a '-inf' value for the\n> TID in the first pivot tuple for the duplicate, followed by a real\n> heap TID for later pivot tuples for pages with the same duplicate\n> value.\n>\n\nThank you for the explanation!\n\n> > > I think bt_downlink_check() and bt_downlink_connectivity_check()\n> > > should be renamed to something broader. In my mind, downlink is\n> > > basically a block number. We have been sloppy about using the term\n> > > downlink when we really mean \"pivot tuple with a downlink\" -- I am\n> > > guilty of this myself. But it seems more important, now that you have\n> > > the new high key check.\n> >\n> > Hmm... Names are hard for me. I didn't do any renaming for now. What\n> > about this?\n> > bt_downlink_check() => bt_child_check()\n> > bt_downlink_connectivity_check() => bt_child_connectivity_highkey_check()\n>\n> I suggest:\n>\n> bt_downlink_check() => bt_child_check()\n> bt_downlink_connectivity_check() => bt_child_highkey_check()\n>\n> While bt_downlink_connectivity_check() moves right on the target's\n> child level, this isn't the common case. Moving right like that\n> doesn't need to be suggested by the name of the function.\n>\n> Most of the time, we just check the high key -- right?\n\nGood. 
Renamed as you proposed.\n\n> * You should say \"target page lsn\" here instead:\n>\n> pg@tpce:5432 [19852]=# select bt_index_parent_check(:'idxrelation',true, true);\n> ERROR: mismatch between parent key and child high key in index \"i_holding2\"\n> DETAIL: Target block=1570 child block=1690 parent page lsn=0/0.\n> Time: 12.509 ms\n\nYep, fixed.\n\n> * Maybe say \"Move to the right on the child level\" in a comment above\n> the bt_downlink_connectivity_check() \"while (true)\" loop here:\n\nThe comment is added.\n\n> > +\n> > + while (true)\n> > + {\n> > + /*\n> > + * Did we traverse the whole tree level and this is check for pages to\n> > + * the right of rightmost downlink?\n> > + */\n>\n> * If you are going to save a low key for the target page in memory,\n> then you only need to do so for \"state->readonly\"/parent verification.\n\nSure, now it's just for state->readonly.\n\n> * You should s/lokey/lowkey/ -- I prefer the spelling \"lowkey\" or \"low\n> key\". This is a term that nbtsort.c now uses, in case you didn't know.\n\nChanged, thank you for pointing that out.\n\n> * The reason for saving a low key for each target page is very\n> unclear. Can we fix this?\n\nActually, the low key is used for removing the \"cousin page verification\nblind spot\" when there are incomplete splits. It might happen that we\nonly reach the child whose high key matches the parent's high key when\nbt_child_highkey_check() is called for the \"minus infinity\" key of the\nparent's right sibling. Saving the low key helps in this case.\n\n> The low key hardly ever gets used -- I can comment out the code that\n> uses a low key within bt_downlink_connectivity_check(), and indexes\n> that I tested appear fine. 
So I imagine this is for incomplete split\n> cases -- right?\n>\n> I tested incomplete split cases using this simple hack:\n>\n> diff --git a/src/backend/access/nbtree/nbtinsert.c\n> b/src/backend/access/nbtree/nbtinsert.c\n> index 4e5849ab8e..5811338584 100644\n> --- a/src/backend/access/nbtree/nbtinsert.c\n> +++ b/src/backend/access/nbtree/nbtinsert.c\n> @@ -1821,7 +1821,7 @@ _bt_insert_parent(Relation rel,\n> */\n> _bt_relbuf(rel, rbuf);\n>\n> - if (pbuf == InvalidBuffer)\n> + if (random() <= (MAX_RANDOM_VALUE / 100))\n> ereport(ERROR,\n> (errcode(ERRCODE_INDEX_CORRUPTED),\n> errmsg_internal(\"failed to re-find parent key in\n> index \\\"%s\\\" for split pages %u/%u\",\n>\n> Now, 1% of all page splits will fail after the first phase finishes,\n> but before the second phase begins/finishes. The incomplete split flag\n> will be found set by other, future inserts, though, at which point the\n> split will be finished (unless we're unlucky again).\n>\n> An INSERT that inserts many rows to bound to fail with this hack\n> applied, but the INSERT shouldn't leave the index in a state that\n> looks like corruption to amcheck. I can see false positive reports of\n> corruption like this, though.\n>\n> Consider the following sample session. The session inserts data from\n> one table into another table with an equivalent schema. 
As you can\n> see, I have to get my INSERT to fail three times before I see the\n> problem:\n>\n> pg@tpce:5432 [13026]=# truncate holding2;\n> TRUNCATE TABLE\n> pg@tpce:5432 [13026]=# insert into holding2 select * from holding;\n> ERROR: failed to re-find parent key in index \"pk_holding2\" for split\n> pages 83/158\n> pg@tpce:5432 [13026]=# select bt_index_parent_check('i_holding2',true, true);\n> DEBUG: verifying level 1 (true root level)\n> DEBUG: verifying 200 items on internal block 3\n> DEBUG: verifying level 0 (leaf level)\n> DEBUG: verifying 262 items on leaf block 1\n> DEBUG: verifying 258 items on leaf block 2\n> *** SNIP ***\n> DEBUG: verifying 123 items on leaf block 201\n> DEBUG: verifying that tuples from index \"i_holding2\" are present in \"holding2\"\n> DEBUG: finished verifying presence of 0 tuples from table \"holding2\"\n> with bitset 4.83% set\n> ┌───────────────────────┐\n> │ bt_index_parent_check │ <-- No false positive yet\n> ├───────────────────────┤\n> │ │\n> └───────────────────────┘\n> (1 row)\n>\n> pg@tpce:5432 [13026]=# insert into holding2 select * from holding;\n> DEBUG: finishing incomplete split of 83/158\n> ERROR: failed to re-find parent key in index \"i_holding2\" for split\n> pages 435/436\n> pg@tpce:5432 [13026]=# select bt_index_parent_check(:'idxrelation',true, true);\n> DEBUG: verifying level 2 (true root level)\n> DEBUG: verifying 2 items on internal block 303\n> *** SNIP ***\n> DEBUG: verifying 74 items on leaf block 436\n> DEBUG: verifying that tuples from index \"i_holding2\" are present in \"holding2\"\n> DEBUG: finished verifying presence of 0 tuples from table \"holding2\"\n> with bitset 9.97% set\n> ┌───────────────────────┐\n> │ bt_index_parent_check │ <-- No false positive yet\n> ├───────────────────────┤\n> │ │\n> └───────────────────────┘\n> (1 row)\n>\n> pg@tpce:5432 [13026]=# insert into holding2 select * from holding;\n> ERROR: failed to re-find parent key in index \"i_holding2\" for split\n> pages 331/577\n> 
pg@tpce:5432 [13026]=# select bt_index_parent_check(:'idxrelation',true, true);\n> DEBUG: verifying level 2 (true root level)\n> DEBUG: verifying 3 items on internal block 303\n> DEBUG: verifying level 1\n> DEBUG: verifying 151 items on internal block 3\n> DEBUG: verifying 189 items on internal block 521\n> DEBUG: verifying 233 items on internal block 302\n> WARNING: 577 rightsplit\n> ERROR: leaf index block lacks downlink in index \"i_holding2\". <--\n> False positive \"corruption\"\n> DETAIL: Block=127 page lsn=0/0.\n>\n> Notice I added a custom WARNING here, and we get the false positive\n> error just after the point that the WARNING is seen. This WARNING is\n> triggered by the temporary instrumentation change that I added to\n> amcheck, and can be seen in some cases that don't have an accompanying\n> false positive report of corruption:\n>\n> diff --git a/contrib/amcheck/verify_nbtree.c b/contrib/amcheck/verify_nbtree.c\n> index d6cb10e263..dcfee9e4fa 100644\n> --- a/contrib/amcheck/verify_nbtree.c\n> +++ b/contrib/amcheck/verify_nbtree.c\n> @@ -1654,6 +1654,8 @@ bt_downlink_connectivity_check(BtreeCheckState *state,\n> errmsg(\"circular link chain found in block %u of\n> index \\\"%s\\\"\",\n> blkno, RelationGetRelationName(state->rel))));\n>\n> + if (rightsplit)\n> + elog(WARNING, \"%u rightsplit\", blkno);\n> if (!first && !P_IGNORE(opaque))\n> {\n> /* blkno probably has missing parent downlink */\n> *** END DIFF ***\n>\n> Anyway, I didn't take the time to debug this today, but I suspect that\n> the problem is that the lowkey thing is kind of brittle. 
If you can't\n> figure it out, let me know and I'll help with it.\n\nIt appears that these false positives were caused by a very basic error made here:\n\n    if (!first && !P_IGNORE(opaque))\n    {\n        /* blkno probably has missing parent downlink */\n        bt_downlink_missing_check(state, rightsplit, blkno, page);\n    }\n\nactually it should be\n\n    if (blkno != downlink && !P_IGNORE(opaque))\n    {\n        /* blkno probably has missing parent downlink */\n        bt_downlink_missing_check(state, rightsplit, blkno, page);\n    }\n\nSo \"blkno == downlink\" means blkno has a downlink, not that it is being checked\nfirst in the loop. This is a leftover from an old version of the patch which I\nforgot to clean up. Now the check you've described works for me.\n\nIf you still think the lowkey check is a problem, please help me figure it out.\n\n> * Can we reverse the order here, so that the common case (i.e. the\n> offset_is_negative_infinity() case) comes first?:\n>\n> > + if (offset_is_negative_infinity(topaque, pivotkey_offset))\n> > + {\n> > + /*\n> > + * We're going to try match child high key to \"negative\n> > + * infinity key\". That means we should match to high key of\n> > + * left sibling of target page.\n> > + */\n> *** SNIP ***\n>\n> > + }\n> > + else\n> > + {\n> *** SNIP ***\n>\n> > + }\n>\n> * Can the offset_is_negative_infinity() branch explain what's going on\n> at a much higher level than this?\n\nSure, the comment is revised!\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 25 Feb 2020 01:54:10 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
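As an aside on the corrected loop condition above, the difference between exempting "the first iteration" and exempting "the page the downlink points to" can be sketched with a toy right-sibling-chain walk. All names here are invented for illustration; this is not amcheck's actual code or data structures.

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_PAGES 8

/* Toy page header: right-sibling link plus a P_IGNORE()-style flag. */
typedef struct ToyPage
{
    int  rightlink;     /* next block number in the chain, or -1 at the end */
    bool ignore;        /* half-dead/deleted pages are skipped */
} ToyPage;

static ToyPage toy_pages[MAX_PAGES];
static int missing_downlink_checks = 0;

/*
 * Walk the right-sibling chain starting at the block a downlink points to.
 * The subtle point from the fix above: the exempt page is the one the
 * downlink itself references, so the condition must be (blkno != downlink),
 * not "this is the first loop iteration".
 */
static void
check_chain(int downlink, int stop)
{
    for (int blkno = downlink; blkno >= 0 && blkno != stop;
         blkno = toy_pages[blkno].rightlink)
    {
        if (blkno != downlink && !toy_pages[blkno].ignore)
            missing_downlink_checks++;  /* blkno probably lacks a downlink */
    }
}
```

With a three-page chain where page 0 holds the downlink and page 2 is ignorable, only page 1 gets flagged.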
{
"msg_contents": "Hi Alexander,\n\nApologies for the delayed response. I was a little tired from the\ndeduplication project.\n\nOn Mon, Feb 24, 2020 at 2:54 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> Thank you for your review. The revised version is attached.\n\nThis has bitrot, because of the deduplication patch. Shouldn't be hard\nto rebase, though.\n\n> > 'foo, -inf'\n> > 'foo, (1,24)'\n> > 'food, -inf'. <-- This pivot tuple's downlink points to the final leaf\n> > page that's filled with duplicates of the value 'foo'\n> > 'food, (3,19)' <-- This pivot tuple's downlink points to the *first*\n> > leaf page that's filled with duplicates of the value 'food'\n> > ...\n\n> Thank you for the explanation!\n\nI taught pageinspect to display a \"htid\" field for pivot tuples\nrecently, making it easier to visualize this example.\n\nI think that you should say more about how \"lowkey\" is used here:\n\n> /*\n> - * Record if page that is about to become target is the right half of\n> - * an incomplete page split. This can go stale immediately in\n> - * !readonly case.\n> + * Copy current target low key as the high key of right sibling.\n> + * Allocate memory in upper level context, so it would be cleared\n> + * after reset of target context.\n> + *\n> + * We only need low key for parent check.\n> */\n> - state->rightsplit = P_INCOMPLETE_SPLIT(opaque);\n> + if (state->readonly && !P_RIGHTMOST(opaque))\n> + {\n\nSay something about concurrent page splits, since they're the only\ncase where we actually use lowkey. Maybe say something like: \"We\nprobably won't end up doing anything with lowkey, but it's simpler for\nreadonly verification to always have it available\".\n\n> Actually, lowkey is used for removing \"cousin page verification blind\n> spot\" when there are incomplete splits. 
It might happen that we read\n> child with hikey matching its parent high key only when\n> bt_child_highkey_check() is called for \"minus infinity\" key of parent\n> right sibling. Saving low key helps in this case.\n\nThat makes sense to me.\n\n> It appears that these false positives were cause by very basic error made here:\n>\n> if (!first && !P_IGNORE(opaque))\n> {\n> /* blkno probably has missing parent downlink */\n> bt_downlink_missing_check(state, rightsplit, blkno, page);\n> }\n>\n> actually it should be\n>\n> if (blkno != downlink && !P_IGNORE(opaque))\n> {\n> /* blkno probably has missing parent downlink */\n> bt_downlink_missing_check(state, rightsplit, blkno, page);\n> }\n>\n> So \"blkno == downlink\" means blkno has downlink, not being checked\n> first in the loop. This is remains of old version of patch which I\n> forget to clean. Now the check you've described works for me.\n>\n> If you still think lowkey check is a problem, please help me figure it out.\n\n* I think that these comments could still be clearer:\n\n> + /*\n> + * We're going to try match child high key to \"negative\n> + * infinity key\". This normally happens when the last child\n> + * we visited for target's left sibling was an incomplete\n> + * split. So, we must be still on the child of target's left\n> + * sibling. Thus, we should match to target's left sibling\n> + * high key. Thankfully we saved it, it's called a \"low key\".\n> + */\n\nMaybe start with \"We cannot try to match child's high key to a\nnegative infinity key in target, since there is nothing to compare.\nHowever...\". Perhaps use terms like \"cousin page\" and \"subtree\", which\ncan be useful. Alternatively, mention this case in the diagram example\nat the top of bt_child_highkey_check(). 
It's tough to write comments\nlike this, but I think it's worth it.\n\nNote that a high key is also a pivot tuple, so I wouldn't mention high\nkeys here:\n\n> +/*\n> + * Check if two tuples are binary identical except the block number. So,\n> + * this function is capable to compare high keys with pivot keys.\n> + */\n> +static bool\n> +bt_pivot_tuple_identical(IndexTuple itup1, IndexTuple itup2)\n> +{\n\nv7 looks pretty close to being commitable, though I'll probably want\nto update some comments that you haven't touched when you commit this.\nI should probably wait until you've committed the patch to go do that.\nI'm thinking of things like old comments in bt_downlink_check().\n\nI will test the patch properly one more time when you produce a new\nrevision. I haven't really tested it since the last time.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 2 Mar 2020 16:03:52 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "Hi, Peter!\n\nOn Tue, Mar 3, 2020 at 3:04 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Apologies for the delayed response. I was a little tired from the\n> deduplication project.\n\nNo problem. Apologies for the delayed revision as well.\n\n> I taught pageinspect to display a \"htid\" field for pivot tuples\n> recently, making it easier to visualize this example.\n\nGreat!\n\n> I think that you should say more about how \"lowkey\" is used here:\n>\n> > /*\n> > - * Record if page that is about to become target is the right half of\n> > - * an incomplete page split. This can go stale immediately in\n> > - * !readonly case.\n> > + * Copy current target low key as the high key of right sibling.\n> > + * Allocate memory in upper level context, so it would be cleared\n> > + * after reset of target context.\n> > + *\n> > + * We only need low key for parent check.\n> > */\n> > - state->rightsplit = P_INCOMPLETE_SPLIT(opaque);\n> > + if (state->readonly && !P_RIGHTMOST(opaque))\n> > + {\n>\n> Say something about concurrent page splits, since they're the only\n> case where we actually use lowkey. Maybe say something like: \"We\n> probably won't end up doing anything with lowkey, but it's simpler for\n> readonly verification to always have it available\".\n\nI've revised this comment. Hopefully it's better now.\n\n> * I think that these comments could still be clearer:\n>\n> > + /*\n> > + * We're going to try match child high key to \"negative\n> > + * infinity key\". This normally happens when the last child\n> > + * we visited for target's left sibling was an incomplete\n> > + * split. So, we must be still on the child of target's left\n> > + * sibling. Thus, we should match to target's left sibling\n> > + * high key. Thankfully we saved it, it's called a \"low key\".\n> > + */\n>\n> Maybe start with \"We cannot try to match child's high key to a\n> negative infinity key in target, since there is nothing to compare.\n> However...\". 
Perhaps use terms like \"cousin page\" and \"subtree\", which\n> can be useful. Alternatively, mention this case in the diagram example\n> at the top of bt_child_highkey_check(). It's tough to write comments\n> like this, but I think it's worth it.\n\nI've updated this comment using terms \"cousin page\" and \"subtree\". I\ndidn't refer to the diagram example, because it doesn't contain an\nappropriate case. And I wouldn't like this diagram to contain such a\ncase, because that would probably make this diagram too complex. I've also\ninvented the term \"uncle page\". BTW, should it be \"aunt page\"? I don't\nknow.\n\n> Note that a high key is also a pivot tuple, so I wouldn't mention high\n> keys here:\n>\n> > +/*\n> > + * Check if two tuples are binary identical except the block number. So,\n> > + * this function is capable to compare high keys with pivot keys.\n> > + */\n> > +static bool\n> > +bt_pivot_tuple_identical(IndexTuple itup1, IndexTuple itup2)\n> > +{\n\nSure, this comment is revised.\n\n> v7 looks pretty close to being commitable, though I'll probably want\n> to update some comments that you haven't touched when you commit this.\n> I should probably wait until you've committed the patch to go do that.\n> I'm thinking of things like old comments in bt_downlink_check().\n>\n> I will test the patch properly one more time when you produce a new\n> revision. I haven't really tested it since the last time.\n\nThe attached patch also has a revised commit message. I'll wait for your\nresponse before commit.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 9 Mar 2020 00:52:37 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Mon, Mar 9, 2020 at 12:52 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> Attached patch also has revised commit message. I'll wait for your\n> response before commit.\n\nOh, I found that I haven't attached the patch.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 9 Mar 2020 21:36:21 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Sun, Mar 8, 2020 at 2:52 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> I've revised this comment. Hopefully it's better now.\n\nI think that the new comments about why we need a low key for the page\nare much better now.\n\n> I've updated this comment using terms \"cousin page\" and \"subtree\". I\n> didn't refer the diagram example, because it doesn't contain\n> appropriate case. And I wouldn't like this diagram to contain such\n> case, because that probably makes this diagram too complex. I've also\n> invented term \"uncle page\". BTW, should it be \"aunt page\"? I don't\n> know.\n\nI have never heard the term \"uncle page\" before, but I like it --\nthough maybe say \"right uncle page\". That happens to be the exact\nrelationship that we're talking about here. I think any one of\n\"uncle\", \"aunt\", or \"uncle/aunt\" are acceptable. We'll probably never\nneed to use this term again, but it seems like the right term to use\nhere.\n\nAnyway, this section also seems much better now.\n\nOther things that I noticed:\n\n* Typo:\n\n> + /*\n> + * We don't call bt_child_check() for \"negative infinity\" items.\n> + * But if we're performatin downlink connectivity check, we do it\n> + * for every item including \"negative infinity\" one.\n> + */\n\ns/performatin/performing/\n\n* Suggest that you say \"has incomplete split flag set\" here:\n\n> + * - The call for block 4 will initialize data structure, but doesn't do actual\n> + * checks assuming page 4 has incomplete split.\n\n* More importantly, is this the right thing to say about page 4? Isn't\nit also true that page 4 is the leftmost leaf page, and therefore kind\nof special in another way? Even without having the incomplete split\nflag set at all? Wouldn't it be better to address the incomplete split\nflag issue by making that apply to some other page that isn't also the\nleftmost? That would allow you to talk about the leftmost case\ndirectly here. 
Or it would at least make it less confusing.\n\nBTW, a P_LEFTMOST() assertion at the beginning of\nbt_child_highkey_check() would make this easier to follow.\n\n* Correct spelling is \"happens\" here:\n\n> + * current child page is not incomplete split, then its high key\n> + * should match to the target's key of current offset number. This\n> + * happends when child page referenced by previous downlink is\n\n* Actually, maybe this whole sentence should be reworded instead --\nsay \"This happens when a previous call here (to\nbt_child_highkey_check()) found an incomplete split, and we reach a\nright sibling page without a downlink -- the right sibling page's high\nkey still needs to be matched to a separator key on the parent/target\nlevel\".\n\n* Maybe say \"Don't apply OffsetNumberNext() to target_downlinkoffnum\nwhen we already had to step right on the child level. Our traversal of\nthe child level must try to move in perfect lockstep behind (to the\nleft of) the target/parent level traversal.\"\n\nI found this detail very confusing at first.\n\n* The docs should say \"...relationships, including checking that there\nare no missing downlinks in the index structure\" here:\n\n> unlike <function>bt_index_check</function>,\n> <function>bt_index_parent_check</function> also checks\n> - invariants that span parent/child relationships.\n> + invariants that span parent/child relationships including check\n> + that there are no missing downlinks in the index structure.\n> <function>bt_index_parent_check</function> follows the general\n> convention of raising an error if it finds a logical\n> inconsistency or other problem.\n\nThis is very close now. I would be okay with you committing the patch\nonce you deal with this feedback. If you prefer, I can take another\nlook at a new revision.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 9 Mar 2020 17:07:30 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Tue, Mar 10, 2020 at 3:07 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Sun, Mar 8, 2020 at 2:52 PM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > I've revised this comment. Hopefully it's better now.\n>\n> I think that the new comments about why we need a low key for the page\n> are much better now.\n\nGood, thank you.\n\n> > I've updated this comment using terms \"cousin page\" and \"subtree\". I\n> > didn't refer the diagram example, because it doesn't contain\n> > appropriate case. And I wouldn't like this diagram to contain such\n> > case, because that probably makes this diagram too complex. I've also\n> > invented term \"uncle page\". BTW, should it be \"aunt page\"? I don't\n> > know.\n>\n> I have never heard the term \"uncle page\" before, but I like it --\n> though maybe say \"right uncle page\". That happens to be the exact\n> relationship that we're talking about here. I think any one of\n> \"uncle\", \"aunt\", or \"uncle/aunt\" are acceptable. We'll probably never\n> need to use this term again, but it seems like the right term to use\n> here.\n\nAccording to context that should be left uncle page. I've changed the\ntext accordingly.\n\n> Anyway, this section also seems much better now.\n>\n> Other things that I noticed:\n>\n> * Typo:\n>\n> > + /*\n> > + * We don't call bt_child_check() for \"negative infinity\" items.\n> > + * But if we're performatin downlink connectivity check, we do it\n> > + * for every item including \"negative infinity\" one.\n> > + */\n>\n> s/performatin/performing/\n\nFixed.\n\n> * Suggest that you say \"has incomplete split flag set\" here:\n>\n> > + * - The call for block 4 will initialize data structure, but doesn't do actual\n> > + * checks assuming page 4 has incomplete split.\n\nYes, that sounds better. Changed here and in the other places.\n\n> * More importantly, is this the right thing to say about page 4? 
Isn't\n> it also true that page 4 is the leftmost leaf page, and therefore kind\n> of special in another way? Even without having the incomplete split\n> flag set at all? Wouldn't it be better to address the incomplete split\n> flag issue by making that apply to some other page that isn't also the\n> leftmost? That would allow you to talk about the leftmost case\n> directly here. Or it would at least make it less confusing.\n\nYes, the current example looks confusing in this respect. But your comment\npointed me to an algorithmic issue. We don't match the high key of the\nleftmost child against the parent pivot key. But we can. The \"if\n(!BlockNumberIsValid(blkno))\" branch survived from the patch version\nwhen we didn't match high keys. I've revised it. Now we enter the\nloop even for the leftmost page on the child level and match the high key for that\npage.\n\n> BTW, a P_LEFTMOST() assertion at the beginning of\n> bt_child_highkey_check() would make this easier to follow.\n\nYes, but why should it be an assert? We can imagine corruption where\nthere is a left sibling of the first child of the leftmost target. I guess the\ncurrent code would report such a situation as an error, because this\nleft sibling lacks a parent downlink. I've revised that \"if\" branch,\nso we don't load a child page there anymore. Error reporting is added\nto the main loop.\n\n> * Correct spelling is \"happens\" here:\n>\n> > + * current child page is not incomplete split, then its high key\n> > + * should match to the target's key of current offset number. This\n> > + * happends when child page referenced by previous downlink is\n>\n> * Actually, maybe this whole sentence should be reworded instead --\n> say \"This happens when a previous call here (to\n> bt_child_highkey_check()) found an incomplete split, and we reach a\n> right sibling page without a downlink -- the right sibling page's high\n> key still needs to be matched to a separator key on the parent/target\n> level\".\n>\n> * Maybe say \"Don't apply OffsetNumberNext() to target_downlinkoffnum\n> when we already had to step right on the child level. Our traversal of\n> the child level must try to move in perfect lockstep behind (to the\n> left of) the target/parent level traversal.\"\n>\n> I found this detail very confusing at first.\n>\n> * The docs should say \"...relationships, including checking that there\n> are no missing downlinks in the index structure\" here:\n>\n> > unlike <function>bt_index_check</function>,\n> > <function>bt_index_parent_check</function> also checks\n> > - invariants that span parent/child relationships.\n> > + invariants that span parent/child relationships including check\n> > + that there are no missing downlinks in the index structure.\n> > <function>bt_index_parent_check</function> follows the general\n> > convention of raising an error if it finds a logical\n> > inconsistency or other problem.\n\nThese comments are revised as you proposed.\n\n> This is very close now. I would be okay with you committing the patch\n> once you deal with this feedback. If you prefer, I can take another\n> look at a new revision.\n\nThank you. I'd like to have another round of feedback from you, given that there\nare logic changes.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 10 Mar 2020 18:30:06 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Tue, Mar 10, 2020 at 8:30 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> Yes, current example looks confusing in this aspect. But your comment\n> spotted to me an algorithmic issue. We don't match highkey of\n> leftmost child against parent pivot key. But we can. The \"if\n> (!BlockNumberIsValid(blkno))\" branch survived from the patch version\n> when we didn't match high keys. I've revised it. Now we enter the\n> loop even for leftmost page on child level and match high key for that\n> page.\n\nGreat. That looks better.\n\n> > BTW, a P_LEFTMOST() assertion at the beginning of\n> > bt_child_highkey_check() would make this easier to follow.\n>\n> Yes, but why should it be an assert? We can imagine corruption, when\n> there is left sibling of first child of leftmost target.\n\nI agree. I would only make it an assertion when it concerns an\nimplementation detail of amcheck, but that doesn't apply here.\n\n> Thank you. I'd like to have another feedback from you assuming there\n> are logic changes.\n\nThis looks committable. I only noticed one thing: The comments above\nbt_target_page_check() need to be updated to reflect the new check,\nwhich no longer has anything to do with \"heapallindexed = true\".\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 10 Mar 2020 21:19:31 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Wed, Mar 11, 2020 at 7:19 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> This looks committable. I only noticed one thing: The comments above\n> bt_target_page_check() need to be updated to reflect the new check,\n> which no longer has anything to do with \"heapallindexed = true\".\n\nThank you! Pushed with this comment revised!\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 11 Mar 2020 12:01:59 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
},
{
"msg_contents": "On Wed, Mar 11, 2020 at 2:02 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> Thank you! Pushed with this comment revised!\n\nThanks!\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 11 Mar 2020 08:41:56 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Improve search for missing parent downlinks in amcheck"
}
] |
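The bt_pivot_tuple_identical() comparison discussed in the thread above -- "binary identical except the block number" -- can be sketched with a toy model. The names and the flattened layout are invented for clarity; in a real IndexTuple the block number is interleaved inside t_tid rather than stored as a separate field.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical, flattened stand-in for an nbtree pivot tuple: a downlink
 * block number plus an opaque key payload. */
typedef struct ToyPivotTuple
{
    unsigned    blkno;      /* downlink block number -- deliberately ignored */
    size_t      keylen;     /* length of the key payload in bytes */
    const char *key;        /* key payload */
} ToyPivotTuple;

/*
 * True when two pivot tuples are binary identical apart from the block
 * number -- the comparison that lets a child's high key be matched against
 * a separator key on the parent level, where only the downlink differs.
 */
static bool
toy_pivot_tuple_identical(const ToyPivotTuple *a, const ToyPivotTuple *b)
{
    return a->keylen == b->keylen &&
           memcmp(a->key, b->key, a->keylen) == 0;
}
```

Two tuples carrying the same key bytes compare as identical even with different downlinks, while a different key (or key length) does not.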
[
{
"msg_contents": "Using HEAD,\n\ncreate table t1 as select generate_series(1,40000000) id;\nvacuum analyze t1;\nexplain select * from t1, t1 t1b where t1.id = t1b.id;\n-- should indicate a hash join\nexplain analyze select * from t1, t1 t1b where t1.id = t1b.id;\n\n... watch the process's memory consumption bloat. (It runs for\nawhile before that starts to happen, but eventually it goes to\na couple of GB.)\n\nIt looks to me like the problem is that ExecHashJoinGetSavedTuple\ncalls ExecForceStoreMinimalTuple with shouldFree = true, and\nExecForceStoreMinimalTuple's second code branch simply ignores\nthe requirement to free the supplied tuple.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Apr 2019 22:46:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "ExecForceStoreMinimalTuple leaks memory like there's no tomorrow"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 10:46:56PM -0400, Tom Lane wrote:\n> create table t1 as select generate_series(1,40000000) id;\n> vacuum analyze t1;\n> explain select * from t1, t1 t1b where t1.id = t1b.id;\n> -- should indicate a hash join\n> explain analyze select * from t1, t1 t1b where t1.id = t1b.id;\n> \n> ... watch the process's memory consumption bloat. (It runs for\n> awhile before that starts to happen, but eventually it goes to\n> a couple of GB.)\n> \n> It looks to me like the problem is that ExecHashJoinGetSavedTuple\n> calls ExecForceStoreMinimalTuple with shouldFree = true, and\n> ExecForceStoreMinimalTuple's second code branch simply ignores\n> the requirement to free the supplied tuple.\n\nOpen item added, as the root comes from 4da597ed.\n--\nMichael",
"msg_date": "Tue, 16 Apr 2019 12:31:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ExecForceStoreMinimalTuple leaks memory like there's no tomorrow"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-15 22:46:56 -0400, Tom Lane wrote:\n> Using HEAD,\n> \n> create table t1 as select generate_series(1,40000000) id;\n> vacuum analyze t1;\n> explain select * from t1, t1 t1b where t1.id = t1b.id;\n> -- should indicate a hash join\n> explain analyze select * from t1, t1 t1b where t1.id = t1b.id;\n> \n> ... watch the process's memory consumption bloat. (It runs for\n> awhile before that starts to happen, but eventually it goes to\n> a couple of GB.)\n> \n> It looks to me like the problem is that ExecHashJoinGetSavedTuple\n> calls ExecForceStoreMinimalTuple with shouldFree = true, and\n> ExecForceStoreMinimalTuple's second code branch simply ignores\n> the requirement to free the supplied tuple.\n\nThanks for finding. The fix is obviously easy - but looking through the\ncode I think I found another similar issue. I'll fix both in one go\ntomorrow.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 18 Apr 2019 19:04:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ExecForceStoreMinimalTuple leaks memory like there's no tomorrow"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-18 19:04:09 -0700, Andres Freund wrote:\n> On 2019-04-15 22:46:56 -0400, Tom Lane wrote:\n> > Using HEAD,\n> > \n> > create table t1 as select generate_series(1,40000000) id;\n> > vacuum analyze t1;\n> > explain select * from t1, t1 t1b where t1.id = t1b.id;\n> > -- should indicate a hash join\n> > explain analyze select * from t1, t1 t1b where t1.id = t1b.id;\n> > \n> > ... watch the process's memory consumption bloat. (It runs for\n> > awhile before that starts to happen, but eventually it goes to\n> > a couple of GB.)\n> > \n> > It looks to me like the problem is that ExecHashJoinGetSavedTuple\n> > calls ExecForceStoreMinimalTuple with shouldFree = true, and\n> > ExecForceStoreMinimalTuple's second code branch simply ignores\n> > the requirement to free the supplied tuple.\n> \n> Thanks for finding. The fix is obviously easy - but looking through the\n> code I think I found another similar issue. I'll fix both in one go\n> tomorrow.\n\nPushed the combined fix for that. Thanks!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 19 Apr 2019 11:55:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ExecForceStoreMinimalTuple leaks memory like there's no tomorrow"
}
] |
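The ownership contract at the heart of this leak can be sketched as follows. This is a toy model with invented names; the real ExecForceStoreMinimalTuple() deals with tuple table slots and MinimalTuples, not plain malloc'd integers. The point is only the shape of the bug: when shouldFree is true the callee takes ownership, so every branch -- including one that copies the tuple instead of adopting it -- must free it.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

static int live_allocations = 0;    /* crude leak detector for this sketch */

/* Allocate a "tuple" and count it as live. */
static int *
make_tuple(int value)
{
    int *tuple = malloc(sizeof(int));
    *tuple = value;
    live_allocations++;
    return tuple;
}

/* Minimal slot that copies the tuple's value rather than adopting the
 * allocation -- the shape of the code path that leaked. */
typedef struct ToySlot { int value; } ToySlot;

/*
 * When shouldFree is true the caller hands over ownership, so the tuple
 * must be freed here even though the slot only keeps a copy.  The reported
 * leak came from a branch that silently ignored shouldFree.
 */
static void
store_tuple(ToySlot *slot, int *tuple, bool shouldFree)
{
    slot->value = *tuple;       /* copy into the slot */
    if (shouldFree)
    {
        free(tuple);            /* honor the ownership transfer */
        live_allocations--;
    }
}
```

After storing with shouldFree = true, no allocation should remain live; a branch that skips the free shows up as a nonzero counter, which is exactly the per-tuple leak that bloated the hash join.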
[
{
"msg_contents": "Hi all,\n\nThis is a continuation of the following thread, but I prefer spawning\na new thread for clarity:\nhttps://www.postgresql.org/message-id/20190416064512.GJ2673@paquier.xyz\n\nThe buildfarm has reported two similar failures when shutting down a\nnode:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=piculet&dt=2019-03-23%2022%3A28%3A59\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dragonet&dt=2019-04-16%2006%3A14%3A01\n\nIn both cases, the instance cannot shut down because it times out,\nwaiting for the shutdown checkpoint to finish, but I suspect that this\ncheckpoint actually never happens.\n\nThe first case involves piculet, which has --disable-atomics and gcc 6, and\nthe recovery test 016_min_consistency, where we trigger a checkpoint,\nthen issue a fast shutdown on a standby. And at this point the test\nwaits forever.\n\nThe second case involves dragonet, which has JIT enabled and clang.\nThe failure is on test 009_twophase.pl. The failure happens after\nthe test prepares transaction xact_009_11, where a *standby* gets\nrestarted. Again, the test waits forever for the instance to shut\ndown.\n\nThe most recent commits which have touched checkpoints are 0dfe3d0e\nand c6c9474a, which map roughly to the point where the failures\nbegan to happen, suggesting that something related to clean standby shutdowns\nhas broken since then.\n\nThanks,\n--\nMichael",
"msg_date": "Tue, 16 Apr 2019 16:01:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> The buildfarm has reported two similar failures when shutting down a\n> node:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=piculet&dt=2019-03-23%2022%3A28%3A59\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dragonet&dt=2019-04-16%2006%3A14%3A01\n\n> In both cases, the instance cannot shut down because it times out,\n> waiting for the shutdown checkpoint to finish but I suspect that this\n> checkpoint actually never happens.\n\nHmm, I don't think that that is actually where the problem is. In\npiculet's failure, the test script times out waiting for a \"fast\"\nshutdown of the standby server, and what we see in the standby's log is\n\n2019-03-23 22:44:12.181 UTC [9731] LOG: received fast shutdown request\n2019-03-23 22:44:12.181 UTC [9731] LOG: aborting any active transactions\n2019-03-23 22:44:12.181 UTC [9960] FATAL: terminating walreceiver process due to administrator command\n2019-03-23 22:50:13.088 UTC [9731] LOG: received immediate shutdown request\n\nwhere the last line indicates that the test script lost patience and\nissued an immediate shutdown. 
However, in a successful run of the\ntest, the log looks like\n\n2019-03-24 03:33:25.592 UTC [23816] LOG: received fast shutdown request\n2019-03-24 03:33:25.592 UTC [23816] LOG: aborting any active transactions\n2019-03-24 03:33:25.592 UTC [23895] FATAL: terminating walreceiver process due to administrator command\n2019-03-24 03:33:25.595 UTC [23819] LOG: shutting down\n2019-03-24 03:33:25.600 UTC [23816] LOG: database system is shut down\n2019-03-24 03:33:25.696 UTC [23903] LOG: starting PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.2.0-12) 8.2.0, 64-bit\n\nwhere the last line reflects restarting the server for the next test step.\nSo in the failure case we don't see the \"shutting down\" message, which\nmeans we never got to ShutdownXLOG, so no checkpoint request was made.\nEven if we had got to ShutdownXLOG, the process is just executing the\noperation directly, it's not sending a signal asking some other process\nto do the checkpoint; so it's hard to see how either of the commits\nyou mention could be involved.\n\nI think what we need to look for is reasons why (1) the postmaster\nnever sends SIGUSR2 to the checkpointer, or (2) the checkpointer's\nmain loop doesn't get to noticing shutdown_requested.\n\nA rather scary point for (2) is that said main loop seems to be\nassuming that MyLatch a/k/a MyProc->procLatch is not used for any\nother purposes in the checkpointer process. If there were something,\nlike say a condition variable wait, that would reset MyLatch at any\ntime during a checkpoint, then we could very easily go to sleep at the\nbottom of the loop and not notice that there's a pending shutdown request.\n\nNow, c6c9474aa did not break this, because the latch resets that\nit added happen in other processes not the checkpointer. But I'm\nfeeling suspicious that some other change we made recently might've\nborked it. 
And in general, it seems like we've managed to load a\nlot of potentially conflicting roles onto process latches.\n\nDo we need to think harder about establishing rules for multiplexed\nuse of the process latch? I'm imagining some rule like \"if you are\nnot the outermost event loop of a process, you do not get to\nsummarily clear MyLatch. Make sure to leave it set after waiting,\nif there was any possibility that it was set by something other than\nthe specific event you're concerned with\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Apr 2019 18:45:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "On Tue, Apr 16, 2019 at 6:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Do we need to think harder about establishing rules for multiplexed\n> use of the process latch? I'm imagining some rule like \"if you are\n> not the outermost event loop of a process, you do not get to\n> summarily clear MyLatch. Make sure to leave it set after waiting,\n> if there was any possibility that it was set by something other than\n> the specific event you're concerned with\".\n\nHmm, yeah. If the latch is left set, then the outer loop will just go\nthrough an extra and unnecessary iteration, which seems fine. If the\nlatch is left clear, then the outer loop might miss a wakeup intended\nfor it and hang forever.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 16 Apr 2019 18:59:37 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-16 18:59:37 -0400, Robert Haas wrote:\n> On Tue, Apr 16, 2019 at 6:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Do we need to think harder about establishing rules for multiplexed\n> > use of the process latch? I'm imagining some rule like \"if you are\n> > not the outermost event loop of a process, you do not get to\n> > summarily clear MyLatch. Make sure to leave it set after waiting,\n> > if there was any possibility that it was set by something other than\n> > the specific event you're concerned with\".\n>\n> Hmm, yeah. If the latch is left set, then the outer loop will just go\n> through an extra and unnecessary iteration, which seems fine. If the\n> latch is left clear, then the outer loop might miss a wakeup intended\n> for it and hang forever.\n\nArguably that's a sign that the latch using code in the outer loop(s) isn't\nwritten correctly? If you do it as:\n\nwhile (true)\n{\n CHECK_FOR_INTERRUPTS();\n\n ResetLatch(MyLatch);\n\n if (work_needed)\n {\n Plenty();\n Code();\n Using(MyLatch);\n }\n else\n {\n WaitLatch(MyLatch);\n }\n}\n\nI think that's not a danger? I think the problem really is that we\nsuggest doing that WaitLatch() unconditionally:\n\n * The correct pattern to wait for event(s) is:\n *\n * for (;;)\n * {\n *\t ResetLatch();\n *\t if (work to do)\n *\t\t Do Stuff();\n *\t WaitLatch();\n * }\n *\n * It's important to reset the latch *before* checking if there's work to\n * do. Otherwise, if someone sets the latch between the check and the\n * ResetLatch call, you will miss it and Wait will incorrectly block.\n *\n * Another valid coding pattern looks like:\n *\n * for (;;)\n * {\n *\t if (work to do)\n *\t\t Do Stuff(); // in particular, exit loop if some condition satisfied\n *\t WaitLatch();\n *\t ResetLatch();\n * }\n\nObviously there's the issue that a lot of latch using code isn't written\nthat way - but I also don't think there's that many latch using code\nthat then also uses latch. 
Seems like we could fix that. While it has\nobviously dangers of not being followed, so does the\n'always-set-latch-unless-outermost-loop' approach.\n\nI'm not sure I like the idea of incurring another unnecessary SetLatch()\ncall for most latch using places.\n\nI guess there's a bit bigger danger of taking longer to notice\npostmaster-death. But I'm not sure I can quite see that being\nproblematic - seems like all we should incur is another cycle through\nthe loop, as the latch shouldn't be set anymore.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Apr 2019 17:05:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-16 17:05:36 -0700, Andres Freund wrote:\n> On 2019-04-16 18:59:37 -0400, Robert Haas wrote:\n> > On Tue, Apr 16, 2019 at 6:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Do we need to think harder about establishing rules for multiplexed\n> > > use of the process latch? I'm imagining some rule like \"if you are\n> > > not the outermost event loop of a process, you do not get to\n> > > summarily clear MyLatch. Make sure to leave it set after waiting,\n> > > if there was any possibility that it was set by something other than\n> > > the specific event you're concerned with\".\n> >\n> > Hmm, yeah. If the latch is left set, then the outer loop will just go\n> > through an extra and unnecessary iteration, which seems fine. If the\n> > latch is left clear, then the outer loop might miss a wakeup intended\n> > for it and hang forever.\n> \n> Arguably that's a sign that the latch using code in the outer loop(s) isn't\n> written correctly? If you do it as:\n> \n> while (true)\n> {\n> CHECK_FOR_INTERRUPTS();\n> \n> ResetLatch(MyLatch);\n> \n> if (work_needed)\n> {\n> Plenty();\n> Code();\n> Using(MyLatch);\n> }\n> else\n> {\n> WaitLatch(MyLatch);\n> }\n> }\n> \n> I think that's not a danger? I think the problem really is that we\n> suggest doing that WaitLatch() unconditionally:\n> \n> * The correct pattern to wait for event(s) is:\n> *\n> * for (;;)\n> * {\n> *\t ResetLatch();\n> *\t if (work to do)\n> *\t\t Do Stuff();\n> *\t WaitLatch();\n> * }\n> *\n> * It's important to reset the latch *before* checking if there's work to\n> * do. 
Otherwise, if someone sets the latch between the check and the\n> * ResetLatch call, you will miss it and Wait will incorrectly block.\n> *\n> * Another valid coding pattern looks like:\n> *\n> * for (;;)\n> * {\n> *\t if (work to do)\n> *\t\t Do Stuff(); // in particular, exit loop if some condition satisfied\n> *\t WaitLatch();\n> *\t ResetLatch();\n> * }\n> \n> Obviously there's the issue that a lot of latch using code isn't written\n> that way - but I also don't think there's that many latch using code\n> that then also uses latch. Seems like we could fix that. While it has\n> obviously dangers of not being followed, so does the\n> 'always-set-latch-unless-outermost-loop' approach.\n> \n> I'm not sure I like the idea of incurring another unnecessary SetLatch()\n> call for most latch using places.\n> \n> I guess there's a bit bigger danger of taking longer to notice\n> postmaster-death. But I'm not sure I can quite see that being\n> problematic - seems like all we should incur is another cycle through\n> the loop, as the latch shouldn't be set anymore.\n\nI think we should thus change our latch documentation to say:\n\nsomething like:\n\ndiff --git a/src/include/storage/latch.h b/src/include/storage/latch.h\nindex fc995819d35..dc46dd94c5b 100644\n--- a/src/include/storage/latch.h\n+++ b/src/include/storage/latch.h\n@@ -44,22 +44,31 @@\n * {\n * ResetLatch();\n * if (work to do)\n- * Do Stuff();\n- * WaitLatch();\n+ * DoStuff();\n+ * else\n+ * WaitLatch();\n * }\n *\n * It's important to reset the latch *before* checking if there's work to\n * do. Otherwise, if someone sets the latch between the check and the\n * ResetLatch call, you will miss it and Wait will incorrectly block.\n *\n+ * The reason to only wait on the latch in case there is nothing to do is that\n+ * code inside DoStuff() might use the same latch, and leave it reset, even\n+ * though a SetLatch() aimed for the outer loop arrived. 
Which again could\n+ * lead to incorrectly blocking in Wait.\n+ *\n * Another valid coding pattern looks like:\n *\n * for (;;)\n * {\n * if (work to do)\n- * Do Stuff(); // in particular, exit loop if some condition satisfied\n- * WaitLatch();\n- * ResetLatch();\n+ * DoStuff(); // in particular, exit loop if some condition satisfied\n+ * else\n+ * {\n+ * WaitLatch();\n+ * ResetLatch();\n+ * }\n * }\n *\n * This is useful to reduce latch traffic if it's expected that the loop's\n\nand adapt code to match (at least in the outer loops).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Apr 2019 17:21:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 10:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think what we need to look for is reasons why (1) the postmaster\n> never sends SIGUSR2 to the checkpointer, or (2) the checkpointer's\n> main loop doesn't get to noticing shutdown_requested.\n>\n> A rather scary point for (2) is that said main loop seems to be\n> assuming that MyLatch a/k/a MyProc->procLatch is not used for any\n> other purposes in the checkpointer process. If there were something,\n> like say a condition variable wait, that would reset MyLatch at any\n> time during a checkpoint, then we could very easily go to sleep at the\n> bottom of the loop and not notice that there's a pending shutdown request.\n\nAgreed on the non-composability of that coding, but if there actually\nis anything in that loop that can reach ResetLatch(), it's well\nhidden...\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Apr 2019 11:39:16 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Wed, Apr 17, 2019 at 10:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think what we need to look for is reasons why (1) the postmaster\n>> never sends SIGUSR2 to the checkpointer, or (2) the checkpointer's\n>> main loop doesn't get to noticing shutdown_requested.\n>> \n>> A rather scary point for (2) is that said main loop seems to be\n>> assuming that MyLatch a/k/a MyProc->procLatch is not used for any\n>> other purposes in the checkpointer process. If there were something,\n>> like say a condition variable wait, that would reset MyLatch at any\n>> time during a checkpoint, then we could very easily go to sleep at the\n>> bottom of the loop and not notice that there's a pending shutdown request.\n\n> Agreed on the non-composability of that coding, but if there actually\n> is anything in that loop that can reach ResetLatch(), it's well\n> hidden...\n\nWell, it's easy to see that there's no other ResetLatch call in\ncheckpointer.c. It's much less obvious that there's no such\ncall anywhere in the code reachable from e.g. CreateCheckPoint().\n\nTo try to investigate that, I hacked things up to force an assertion\nfailure if ResetLatch was called from any other place in the\ncheckpointer process (dirty patch attached for amusement's sake).\nThis gets through check-world without any assertions. That does not\nreally prove that there aren't corner timing cases where a latch\nwait and reset could happen, but it does put a big dent in my theory.\nQuestion is, what other theory has anybody got?\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 18 Apr 2019 10:53:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> wrote (in the other thread):\n> Any idea whether it's something newly-introduced or of long standing?\n\nIt's the latter. I searched the buildfarm database for failure logs\nincluding the string \"server does not shut down\" within the last three\nyears, and got all of the hits attached. Not all of these look like\nthe failure pattern Michael pointed to, but enough of them do to say\nthat the problem has existed since at least mid-2017. To be concrete,\nwe have quite a sample of cases where a standby server has received a\n\"fast shutdown\" signal and acknowledged that in its log, but it never\ngets to the expected \"shutting down\" message, meaning it never starts\nthe shutdown checkpoint let alone finishes it. The oldest case that\nclearly looks like that is\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=nightjar&dt=2017-06-02%2018%3A54%3A29\n\nA significant majority of the recent cases look just like the piculet\nfailure Michael pointed to, that is we fail to shut down the \"london\"\nserver while it's acting as standby in the recovery/t/009_twophase.pl\ntest. But there are very similar failures in other tests.\n\nI also notice that the population of machines showing the problem seems\nheavily skewed towards, um, weird cases. For instance, in the set\nthat have shown this type of failure since January, we have\n\ndragonet: uses JIT\nfrancolin: --disable-spinlocks\ngull: armv7\nmereswine: armv7\npiculet: --disable-atomics\nsidewinder: amd64, but running netbsd 7 (and this was 9.6, note)\nspurfowl: fairly generic amd64\n\nThis leads me to suspect that the problem is (a) some very low-level issue\nin spinlocks or latches or the like, or (b) a timing problem that just\ndoesn't show up on generic Intel-oid platforms. The timing theory is\nmaybe a bit stronger given that one test case shows this more often than\nothers. 
I've not got any clear ideas beyond that.\n\nAnyway, this is *not* new in v12.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 18 Apr 2019 17:57:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "On Fri, Apr 19, 2019 at 2:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Question is, what other theory has anybody got?\n\nI wondered if there might be a way for PostmasterStateMachine() to be\nreached without signals blocked, in the case where we fork a\nfresh checkpointer, and then it misses the SIGUSR2 that we\nimmediately send because it hasn't installed its handler yet. But I\ncan't see it.\n\nThis is a curious thing from dragonet's log:\n\n2019-04-16 08:23:24.178 CEST [8335] LOG: received fast shutdown request\n2019-04-16 08:23:24.178 CEST [8335] LOG: aborting any active transactions\n2019-04-16 08:23:24.178 CEST [8393] FATAL: terminating walreceiver\nprocess due to administrator command\n2019-04-16 08:28:23.166 CEST [8337] LOG: restartpoint starting: time\n\nLogCheckpointStart() is the thing that writes \"starting: ...\", and it\nprefers to report \"shutdown\" over \"time\", but it didn't.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Apr 2019 09:58:40 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> This is a curious thing from dragonet's log:\n\n> 2019-04-16 08:23:24.178 CEST [8335] LOG: received fast shutdown request\n> 2019-04-16 08:23:24.178 CEST [8335] LOG: aborting any active transactions\n> 2019-04-16 08:23:24.178 CEST [8393] FATAL: terminating walreceiver\n> process due to administrator command\n> 2019-04-16 08:28:23.166 CEST [8337] LOG: restartpoint starting: time\n\n> LogCheckpointStart() is the thing that writes \"starting: ...\", and it\n> prefers to report \"shutdown\" over \"time\", but it didn't.\n\nYeah, but since we don't see \"shutting down\", we know that the shutdown\ncheckpoint hasn't begun. Here's another similar case:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mereswine&dt=2018-11-30%2011%3A44%3A54\n\nThe relevant fragment of the standby server's log is\n\n2018-11-30 05:09:22.996 PST [4229] LOG: received fast shutdown request\n2018-11-30 05:09:23.628 PST [4229] LOG: aborting any active transactions\n2018-11-30 05:09:23.649 PST [4231] LOG: checkpoint complete: wrote 17 buffers (13.3%); 0 WAL file(s) added, 0 removed, 1 recycled; write=3.021 s, sync=0.000 s, total=3.563 s; sync files=0, longest=0.000 s, average=0.000 s; distance=16563 kB, estimate=16563 kB\n2018-11-30 05:09:23.679 PST [4229] LOG: background worker \"logical replication launcher\" (PID 4276) exited with exit code 1\n2018-11-30 05:11:23.757 PST [4288] master LOG: unexpected EOF on standby connection\n2018-11-30 05:11:23.883 PST [4229] LOG: received immediate shutdown request\n2018-11-30 05:11:23.907 PST [4229] LOG: database system is shut down\n\nTo the extent that I've found logs in which the checkpointer prints\nanything at all during this interval, it seems to be just quietly\nplodding along with its usual business, without any hint that it's\naware of the pending shutdown request. 
It'd be very easy to believe\nthat the postmaster -> checkpointer SIGUSR2 is simply getting dropped,\nor never issued.\n\nHmm ... actually, looking at the postmaster's logic, it won't issue\nSIGUSR2 to the checkpointer until the walreceiver (if any) is gone.\nAnd now that I think about it, several of these logs contain traces\nshowing that the walreceiver is still live. Like the one quoted above:\nseems like the line from PID 4288 has to be from a walreceiver.\n\nMaybe what we should be looking for is \"why doesn't the walreceiver\nshut down\"? But the dragonet log you quote above shows the walreceiver\nexiting, or at least starting to exit. Tis a puzzlement.\n\nI'm a bit tempted to add temporary debug logging to the postmaster so\nthat walreceiver start/stop is recorded at LOG level. We'd have to wait\na few weeks to have any clear result from the buildfarm, but I'm not sure\nhow we'll get any hard data without some such measures.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Apr 2019 18:22:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "On Thu, Apr 18, 2019 at 05:57:39PM -0400, Tom Lane wrote:\n> It's the latter. I searched the buildfarm database for failure logs\n> including the string \"server does not shut down\" within the last three\n> years, and got all of the hits attached. Not all of these look like\n> the failure pattern Michael pointed to, but enough of them do to say\n> that the problem has existed since at least mid-2017. To be concrete,\n> we have quite a sample of cases where a standby server has received a\n> \"fast shutdown\" signal and acknowledged that in its log, but it never\n> gets to the expected \"shutting down\" message, meaning it never starts\n> the shutdown checkpoint let alone finishes it. The oldest case that\n> clearly looks like that is\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=nightjar&dt=2017-06-02%2018%3A54%3A29\n\nInteresting. I was sort of thinking about c6c3334 first but this\nfailed based on 9fcf670, which does not include the former.\n\n> This leads me to suspect that the problem is (a) some very low-level issue\n> in spinlocks or or latches or the like, or (b) a timing problem that just\n> doesn't show up on generic Intel-oid platforms. The timing theory is\n> maybe a bit stronger given that one test case shows this more often than\n> others. I've not got any clear ideas beyond that.\n> \n> Anyway, this is *not* new in v12.\n\nIndeed. It seems to me that v12 makes the problem easier to appear\nthough, and I got to wonder if c6c9474 is helping in that as more\ncases are popping up since mid-March.\n--\nMichael",
"msg_date": "Fri, 19 Apr 2019 11:30:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "On Fri, Apr 19, 2019 at 2:30 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, Apr 18, 2019 at 05:57:39PM -0400, Tom Lane wrote:\n> > Anyway, this is *not* new in v12.\n>\n> Indeed. It seems to me that v12 makes the problem easier to appear\n> though, and I got to wonder if c6c9474 is helping in that as more\n> cases are popping up since mid-March.\n\nInteresting, but I'm not sure how that could be though. Perhaps, a\nbit like the other thing that cropped up in the build farm after that\ncommit, removing ~200ms of needless sleeping around an earlier online\nCHECKPOINT made some other pre-existing race condition more likely to\ngo wrong.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Apr 2019 14:47:15 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Apr 18, 2019 at 05:57:39PM -0400, Tom Lane wrote:\n>> Anyway, this is *not* new in v12.\n\n> Indeed. It seems to me that v12 makes the problem easier to appear\n> though, and I got to wonder if c6c9474 is helping in that as more\n> cases are popping up since mid-March.\n\nYeah. Whether that's due to a server code change, or new or modified\ntest cases, is unknown. But looking at my summary of buildfarm runs\nthat failed like this, there's a really clear breakpoint at\n\n gull | 2018-08-24 03:27:16 | recoveryCheck | pg_ctl: server does not shut down\n\nSince that, most of the failures with this message have been in the\nrecoveryCheck step. Before that, the failures were all over the\nplace, and now that I look closely a big fraction of them were\nin bursts on particular animals, suggesting it was more about\nsome local problem on that animal than any real code issue.\n\nSo it might be worth groveling around in the commit logs from\nlast August...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Apr 2019 22:51:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Interesting, but I'm not sure how that could be though. Perhaps, a\n> bit like the other thing that cropped up in the build farm after that\n> commit, removing ~200ms of needless sleeping around an earlier online\n> CHECKPOINT made some other pre-existing race condition more likely to\n> go wrong.\n\nThe data that we've got is entirely consistent with the idea that\nthere's a timing-sensitive bug that gets made more or less likely\nto trigger by \"unrelated\" changes in test cases or server code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Apr 2019 22:57:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "On Fri, Apr 19, 2019 at 10:22 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > 2019-04-16 08:23:24.178 CEST [8393] FATAL: terminating walreceiver\n> > process due to administrator command\n\n> Maybe what we should be looking for is \"why doesn't the walreceiver\n> shut down\"? But the dragonet log you quote above shows the walreceiver\n> exiting, or at least starting to exit. Tis a puzzlement.\n\nOne thing I noticed about this message: if you receive SIGTERM at a\nrare time when WalRcvImmediateInterruptOK is true, then that ereport()\nruns directly in the signal handler context. That's not strictly\nallowed, and could cause nasal demons. On the other hand, it probably\nwouldn't have managed to get the FATAL message out if that was the\nproblem here (previously we've seen reports of signal handlers\ndeadlocking while trying to ereport() but they couldn't get their\nmessage out at all, because malloc or some such was already locked in\nthe user context). Is there some way that the exit code could hang\n*after* that due to corruption of libc resources (FILE streams,\nmalloc, ...)? It doesn't seem likely to me (we'd hopefully see some\nmore clues) but I thought I'd mention the idea.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Apr 2019 15:41:30 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, Apr 19, 2019 at 10:22 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Maybe what we should be looking for is \"why doesn't the walreceiver\n>> shut down\"? But the dragonet log you quote above shows the walreceiver\n>> exiting, or at least starting to exit. Tis a puzzlement.\n\n> ... Is there some way that the exit code could hang\n> *after* that due to corruption of libc resources (FILE streams,\n> malloc, ...)? It doesn't seem likely to me (we'd hopefully see some\n> more clues) but I thought I'd mention the idea.\n\nI agree it's not likely ... but that's part of the reason I was thinking\nabout adding some postmaster logging. Whatever we're chasing here is\n\"not likely\", per the observed buildfarm failure rate.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Apr 2019 23:48:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": ">>> Maybe what we should be looking for is \"why doesn't the walreceiver\n>>> shut down\"? But the dragonet log you quote above shows the walreceiver\n>>> exiting, or at least starting to exit. Tis a puzzlement.\n\nhuh ... take a look at this little stanza in PostmasterStateMachine:\n\n if (pmState == PM_SHUTDOWN_2)\n {\n /*\n * PM_SHUTDOWN_2 state ends when there's no other children than\n * dead_end children left. There shouldn't be any regular backends\n * left by now anyway; what we're really waiting for is walsenders and\n * archiver.\n *\n * Walreceiver should normally be dead by now, but not when a fast\n * shutdown is performed during recovery.\n */\n if (PgArchPID == 0 && CountChildren(BACKEND_TYPE_ALL) == 0 &&\n WalReceiverPID == 0)\n {\n pmState = PM_WAIT_DEAD_END;\n }\n }\n\nI'm too tired to think through exactly what that last comment might be\nsuggesting, but it sure seems like it might be relevant to our problem.\nIf the walreceiver *isn't* dead yet, what's going to ensure that we\ncan move forward later?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Apr 2019 00:02:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "I have spent a fair amount of time trying to replicate these failures\nlocally, with little success. I now think that the most promising theory\nis Munro's idea in [1] that the walreceiver is hanging up during its\nunsafe attempt to do ereport(FATAL) from inside a signal handler. It's\nextremely plausible that that could result in a deadlock inside libc's\nmalloc/free, or some similar place. Moreover, if that's what's causing\nit, then the windows for trouble are fixed by the length of time that\nmalloc might hold internal locks, which fits with the results I've gotten\nthat inserting delays in various promising-looking places doesn't do a\nthing towards making this reproducible.\n\nEven if that isn't the proximate cause of the current reports, it's\nclearly trouble waiting to happen, and we should get rid of it.\nAccordingly, see attached proposed patch. This just flushes the\n\"immediate interrupt\" stuff in favor of making sure that\nlibpqwalreceiver.c will take care of any signals received while\nwaiting for input.\n\nThe existing code does not use PQsetnonblocking, which means that it's\ntheoretically at risk of blocking while pushing out data to the remote\nserver. In practice I think that risk is negligible because (IIUC) we\ndon't send very large amounts of data at one time. So I didn't bother to\nchange that. Note that for the most part, if that happened, the existing\ncode was at risk of slow response to SIGTERM anyway since it didn't have\nEnable/DisableWalRcvImmediateExit around the places that send data.\n\nMy thought is to apply this only to HEAD for now; it's kind of a large\nchange to shove into the back branches to handle a failure mode that's\nnot been reported from the field. Maybe we could back-patch after we\nhave more confidence in it.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKG%2B%3D1G98m61VjNS-qGboJPwdZcF%2BrAPu2eC4XuWRTR3UPw%40mail.gmail.com",
"msg_date": "Sat, 27 Apr 2019 20:56:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "On Sun, Apr 28, 2019 at 12:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Even if that isn't the proximate cause of the current reports, it's\n> clearly trouble waiting to happen, and we should get rid of it.\n> Accordingly, see attached proposed patch. This just flushes the\n> \"immediate interrupt\" stuff in favor of making sure that\n> libpqwalreceiver.c will take care of any signals received while\n> waiting for input.\n\n+1\n\nI see that we removed the code that this was modelled on back in 2015,\nand in fact your patch even removes a dangling reference in a comment:\n\n- * This is very much like what regular backends do with ImmediateInterruptOK,\n\n> The existing code does not use PQsetnonblocking, which means that it's\n> theoretically at risk of blocking while pushing out data to the remote\n> server. In practice I think that risk is negligible because (IIUC) we\n> don't send very large amounts of data at one time. So I didn't bother to\n> change that. Note that for the most part, if that happened, the existing\n> code was at risk of slow response to SIGTERM anyway since it didn't have\n> Enable/DisableWalRcvImmediateExit around the places that send data.\n\nRight.\n\n> My thought is to apply this only to HEAD for now; it's kind of a large\n> change to shove into the back branches to handle a failure mode that's\n> not been reported from the field. Maybe we could back-patch after we\n> have more confidence in it.\n\n+1\n\nThat reminds me, we should probably also clean up at least the\nereport-from-signal-handler hazard identified over in this thread:\n\nhttps://www.postgresql.org/message-id/CAEepm%3D10MtmKeDc1WxBM0PQM9OgtNy%2BRCeWqz40pZRRS3PNo5Q%40mail.gmail.com\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Apr 2019 16:52:37 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sun, Apr 28, 2019 at 12:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Even if that isn't the proximate cause of the current reports, it's\n>> clearly trouble waiting to happen, and we should get rid of it.\n\n> +1\n\n> That reminds me, we should probably also clean up at least the\n> ereport-from-signal-handler hazard identified over in this thread:\n> https://www.postgresql.org/message-id/CAEepm%3D10MtmKeDc1WxBM0PQM9OgtNy%2BRCeWqz40pZRRS3PNo5Q%40mail.gmail.com\n\nYeah, probably. I imagine the reason we've not already seen complaints\nabout that is that not that many custom bgworkers exist.\n\nI do not think we can get away with back-patching a change in that area,\nthough, since it'll move the goalposts about what bgworker code has to\ndo to cope with SIGTERMs. It might already be too late for v12, unless\nwe want to treat that as an Open Item.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Apr 2019 11:23:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-27 20:56:51 -0400, Tom Lane wrote:\n> Even if that isn't the proximate cause of the current reports, it's\n> clearly trouble waiting to happen, and we should get rid of it.\n> Accordingly, see attached proposed patch. This just flushes the\n> \"immediate interrupt\" stuff in favor of making sure that\n> libpqwalreceiver.c will take care of any signals received while\n> waiting for input.\n\nGood plan.\n\n\n> The existing code does not use PQsetnonblocking, which means that it's\n> theoretically at risk of blocking while pushing out data to the remote\n> server. In practice I think that risk is negligible because (IIUC) we\n> don't send very large amounts of data at one time. So I didn't bother to\n> change that. Note that for the most part, if that happened, the existing\n> code was at risk of slow response to SIGTERM anyway since it didn't have\n> Enable/DisableWalRcvImmediateExit around the places that send data.\n\nHm, I'm not convinced that's OK. What if there's a network hiccup? We'll\nwait until there's an OS tcp timeout, no? It's bad enough that there\nwere cases of this before. Increasing the surface of cases where we\nmight want to shut down walreceiver, e.g. because we would rather switch\nto recovery_command, or just shut down the server, but just get stuck\nwaiting for an hour for a tcp timeout, doesn't seem OK.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 29 Apr 2019 09:35:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-27 20:56:51 -0400, Tom Lane wrote:\n>> The existing code does not use PQsetnonblocking, which means that it's\n>> theoretically at risk of blocking while pushing out data to the remote\n>> server. In practice I think that risk is negligible because (IIUC) we\n>> don't send very large amounts of data at one time. So I didn't bother to\n>> change that. Note that for the most part, if that happened, the existing\n>> code was at risk of slow response to SIGTERM anyway since it didn't have\n>> Enable/DisableWalRcvImmediateExit around the places that send data.\n\n> Hm, I'm not convinced that's OK. What if there's a network hickup? We'll\n> wait until there's an OS tcp timeout, no?\n\nNo. send() is only going to block if there's no room in the kernel's\nbuffers, and that would only happen if we send a lot of data in between\nwaits to receive data. Which, AFAIK, the walreceiver never does.\nWe might possibly need to improve that code in the future, but I don't\nthink there's a need for it today.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Apr 2019 12:55:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-29 12:55:31 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Hm, I'm not convinced that's OK. What if there's a network hickup? We'll\n> > wait until there's an OS tcp timeout, no?\n> \n> No. send() is only going to block if there's no room in the kernel's\n> buffers, and that would only happen if we send a lot of data in between\n> waits to receive data. Which, AFAIK, the walreceiver never does.\n> We might possibly need to improve that code in the future, but I don't\n> think there's a need for it today.\n\nAh, right.\n\n- Andres\n\n\n",
"msg_date": "Mon, 29 Apr 2019 10:04:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "On Sat, Apr 27, 2019 at 5:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I have spent a fair amount of time trying to replicate these failures\n> locally, with little success. I now think that the most promising theory\n> is Munro's idea in [1] that the walreceiver is hanging up during its\n> unsafe attempt to do ereport(FATAL) from inside a signal handler. It's\n> extremely plausible that that could result in a deadlock inside libc's\n> malloc/free, or some similar place. Moreover, if that's what's causing\n> it, then the windows for trouble are fixed by the length of time that\n> malloc might hold internal locks, which fits with the results I've gotten\n> that inserting delays in various promising-looking places doesn't do a\n> thing towards making this reproducible.\n\nFor Greenplum (based on 9.4 but current master code looks the same) we\ndid see deadlocks recently hit in CI many times for walreceiver which\nI believe confirms above finding.\n\n#0 __lll_lock_wait_private () at\n../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:95\n#1 0x00007f0637ee72bd in _int_free (av=0x7f063822bb20 <main_arena>,\np=0x26bb3b0, have_lock=0) at malloc.c:3962\n#2 0x00007f0637eeb53c in __GI___libc_free (mem=<optimized out>) at\nmalloc.c:2968\n#3 0x00007f0636629464 in ?? () from /usr/lib/x86_64-linux-gnu/libgnutls.so.30\n#4 0x00007f0636630720 in ?? 
() from /usr/lib/x86_64-linux-gnu/libgnutls.so.30\n#5 0x00007f063b5cede7 in _dl_fini () at dl-fini.c:235\n#6 0x00007f0637ea0ff8 in __run_exit_handlers (status=1,\nlistp=0x7f063822b5f8 <__exit_funcs>,\nrun_list_atexit=run_list_atexit@entry=true) at exit.c:82\n#7 0x00007f0637ea1045 in __GI_exit (status=<optimized out>) at exit.c:104\n#8 0x00000000008c72c7 in proc_exit ()\n#9 0x0000000000a75867 in errfinish ()\n#10 0x000000000089ea53 in ProcessWalRcvInterrupts ()\n#11 0x000000000089eac5 in WalRcvShutdownHandler ()\n#12 <signal handler called>\n#13 _int_malloc (av=av@entry=0x7f063822bb20 <main_arena>,\nbytes=bytes@entry=16384) at malloc.c:3802\n#14 0x00007f0637eeb184 in __GI___libc_malloc (bytes=16384) at malloc.c:2913\n#15 0x00000000007754c3 in makeEmptyPGconn ()\n#16 0x0000000000779686 in PQconnectStart ()\n#17 0x0000000000779b8b in PQconnectdb ()\n#18 0x00000000008aae52 in libpqrcv_connect ()\n#19 0x000000000089f735 in WalReceiverMain ()\n#20 0x00000000005c5eab in AuxiliaryProcessMain ()\n#21 0x00000000004cd5f1 in ServerLoop ()\n#22 0x000000000086fb18 in PostmasterMain ()\n#23 0x00000000004d2e28 in main ()\n\nImmediateInterruptOK was removed from regular backends but not for\nwalreceiver and walreceiver performing elog(FATAL) inside signal\nhandler is dangerous.\n\n\n",
"msg_date": "Mon, 29 Apr 2019 10:26:09 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "Ashwin Agrawal <aagrawal@pivotal.io> writes:\n> For Greenplum (based on 9.4 but current master code looks the same) we\n> did see deadlocks recently hit in CI many times for walreceiver which\n> I believe confirms above finding.\n\n> #0 __lll_lock_wait_private () at\n> ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:95\n> #1 0x00007f0637ee72bd in _int_free (av=0x7f063822bb20 <main_arena>,\n> p=0x26bb3b0, have_lock=0) at malloc.c:3962\n> #2 0x00007f0637eeb53c in __GI___libc_free (mem=<optimized out>) at\n> malloc.c:2968\n> #3 0x00007f0636629464 in ?? () from /usr/lib/x86_64-linux-gnu/libgnutls.so.30\n> #4 0x00007f0636630720 in ?? () from /usr/lib/x86_64-linux-gnu/libgnutls.so.30\n> #5 0x00007f063b5cede7 in _dl_fini () at dl-fini.c:235\n> #6 0x00007f0637ea0ff8 in __run_exit_handlers (status=1,\n> listp=0x7f063822b5f8 <__exit_funcs>,\n> run_list_atexit=run_list_atexit@entry=true) at exit.c:82\n> #7 0x00007f0637ea1045 in __GI_exit (status=<optimized out>) at exit.c:104\n> #8 0x00000000008c72c7 in proc_exit ()\n> #9 0x0000000000a75867 in errfinish ()\n> #10 0x000000000089ea53 in ProcessWalRcvInterrupts ()\n> #11 0x000000000089eac5 in WalRcvShutdownHandler ()\n> #12 <signal handler called>\n> #13 _int_malloc (av=av@entry=0x7f063822bb20 <main_arena>,\n> bytes=bytes@entry=16384) at malloc.c:3802\n> #14 0x00007f0637eeb184 in __GI___libc_malloc (bytes=16384) at malloc.c:2913\n> #15 0x00000000007754c3 in makeEmptyPGconn ()\n> #16 0x0000000000779686 in PQconnectStart ()\n> #17 0x0000000000779b8b in PQconnectdb ()\n> #18 0x00000000008aae52 in libpqrcv_connect ()\n> #19 0x000000000089f735 in WalReceiverMain ()\n> #20 0x00000000005c5eab in AuxiliaryProcessMain ()\n> #21 0x00000000004cd5f1 in ServerLoop ()\n> #22 0x000000000086fb18 in PostmasterMain ()\n> #23 0x00000000004d2e28 in main ()\n\nCool --- that stack trace is *exactly* what you'd expect if this\nwere the problem. 
Thanks for sending it along!\n\nCan you try applying a1a789eb5ac894b4ca4b7742f2dc2d9602116e46\nto see if it fixes the problem for you?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Apr 2019 13:35:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "On Mon, Apr 29, 2019 at 10:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Can you try applying a1a789eb5ac894b4ca4b7742f2dc2d9602116e46\n> to see if it fixes the problem for you?\n\nYes, will give it a try on greenplum and report back the result.\n\nHave we decided if this will be applied to back branches?\n\n\n",
"msg_date": "Mon, 29 Apr 2019 10:44:00 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "Ashwin Agrawal <aagrawal@pivotal.io> writes:\n> On Mon, Apr 29, 2019 at 10:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Can you try applying a1a789eb5ac894b4ca4b7742f2dc2d9602116e46\n>> to see if it fixes the problem for you?\n\n> Yes, will give it a try on greenplum and report back the result.\n\n> Have we decided if this will be applied to back branches?\n\nMy feeling about it is \"maybe eventually, but most definitely not\nthe week before a set of minor releases\". Some positive experience\nwith Greenplum would help increase confidence in the patch, for sure.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Apr 2019 13:50:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "On 2019-Apr-29, Tom Lane wrote:\n\n> Ashwin Agrawal <aagrawal@pivotal.io> writes:\n> > On Mon, Apr 29, 2019 at 10:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Can you try applying a1a789eb5ac894b4ca4b7742f2dc2d9602116e46\n> >> to see if it fixes the problem for you?\n> \n> > Yes, will give it a try on greenplum and report back the result.\n> \n> > Have we decided if this will be applied to back branches?\n\nHi Ashwin, did you have the chance to try this out?\n\n\n> My feeling about it is \"maybe eventually, but most definitely not\n> the week before a set of minor releases\". Some positive experience\n> with Greenplum would help increase confidence in the patch, for sure.\n\nI looked at the buildfarm failures for the recoveryCheck stage. It\nlooks like there is only one failure for branch master after this\ncommit, which was chipmunk saying:\n\n # poll_query_until timed out executing this query:\n # SELECT application_name, sync_priority, sync_state FROM pg_stat_replication ORDER BY application_name;\n # expecting this output:\n # standby1|1|sync\n # standby2|2|sync\n # standby3|2|potential\n # standby4|2|potential\n # last actual query output:\n # standby1|1|sync\n # standby2|2|potential\n # standby3|2|sync\n # standby4|2|potential\n # with stderr:\n not ok 6 - asterisk comes before another standby name\n\n # Failed test 'asterisk comes before another standby name'\n # at t/007_sync_rep.pl line 26.\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2019-05-12%2020%3A37%3A11\nAFAICS this is wholly unrelated to the problem at hand.\n\nNo other animal failed recoveryCheck test; before the commit, the\nfailure was not terribly frequent, but rarely would 10 days go by\nwithout it failing. So I suggest that the bug has indeed been fixed.\n\nMaybe now's a good time to get it back-patched? 
In branch\nREL_11_STABLE, it failed as recently as 11 days ago in gull,\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gull&dt=2019-06-01%2004%3A11%3A36\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 12 Jun 2019 13:42:01 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Apr-29, Tom Lane wrote:\n>> Ashwin Agrawal <aagrawal@pivotal.io> writes:\n>>> Have we decided if this will be applied to back branches?\n\n>> My feeling about it is \"maybe eventually, but most definitely not\n>> the week before a set of minor releases\". Some positive experience\n>> with Greenplum would help increase confidence in the patch, for sure.\n\n> I looked at the buildfarm failures for the recoveryCheck stage. It\n> looks like there is only one failure for branch master after this\n> commit, which was chipmunk saying:\n> ...\n> AFAICS this is wholly unrelated to the problem at hand.\n\nYeah, that seems unrelated.\n\n> No other animal failed recoveryCheck test; before the commit, the\n> failure was not terribly frequent, but rarely would 10 days go by\n> without it failing. So I suggest that the bug has indeed been fixed.\n\nI feel pretty good about it too.\n\n> Maybe now's a good time to get it back-patched?\n\nShould we do that now, or wait till after next week's releases?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Jun 2019 14:52:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "On 2019-Jun-12, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n\n> > Maybe now's a good time to get it back-patched?\n> \n> Should we do that now, or wait till after next week's releases?\n\nIMO this has been hammered enough in master, and we still have a few\ndays in the back-branches for buildfarm, that it's okay to do it now.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 12 Jun 2019 15:04:32 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jun-12, Tom Lane wrote:\n>> Should we do that now, or wait till after next week's releases?\n\n> IMO this has been hammered enough in master, and we still have a few\n> days in the back-branches for buildfarm, that it's okay to do it now.\n\nPoking at that, I find that a1a789eb5 back-patches reasonably painlessly\ninto v11 and v10, but trying to bring it back to 9.6 encounters a pile of\nmerge failures. Also, looking at the git logs shows that we did a hell\nof a lot of subtle work on that code (libpqwalreceiver.c in particular)\nduring the v10 cycle. So I've got no confidence that successful\nbuildfarm/beta1 testing of the HEAD patch means much of anything for\nputting it into pre-v10 branches.\n\nGiven that we've seen few if any field reports of this issue, my\ninclination is to back-patch as far as v10, but not take the risk\nand effort involved in going further.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Jun 2019 16:26:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 04:26:23PM -0400, Tom Lane wrote:\n> Poking at that, I find that a1a789eb5 back-patches reasonably painlessly\n> into v11 and v10, but trying to bring it back to 9.6 encounters a pile of\n> merge failures. Also, looking at the git logs shows that we did a hell\n> of a lot of subtle work on that code (libpqwalreceiver.c in particular)\n> during the v10 cycle. So I've got no confidence that successful\n> buildfarm/beta1 testing of the HEAD patch means much of anything for\n> putting it into pre-v10 branches.\n> \n> Given that we've seen few if any field reports of this issue, my\n> inclination is to back-patch as far as v10, but not take the risk\n> and effort involved in going further.\n\n+1 for only a back-patch to v10 per the invasiveness argument. I\nthink that you have made the right move here.\n--\nMichael",
"msg_date": "Thu, 13 Jun 2019 14:06:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 01:42:01PM -0400, Alvaro Herrera wrote:\n> I looked at the buildfarm failures for the recoveryCheck stage. It\n> looks like there is only one failure for branch master after this\n> commit, which was chipmunk saying:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2019-05-12%2020%3A37%3A11\n> AFAICS this is wholly unrelated to the problem at hand.\n\nYes, that's unrelated.\n\nThis failure is interesting, still it has happened only once per what\nI can see. I think that this points out a rare race condition in test\n007_sync_rep.pl because of this sequence which reorders the standbys:\n# Stop and start standbys to rearrange the order of standbys\n# in WalSnd array. Now, if standbys have the same priority,\n# standby2 is selected preferentially and standby3 is next.\n$node_standby_1->stop;\n$node_standby_2->stop;\n$node_standby_3->stop;\n\n$node_standby_2->start;\n$node_standby_3->start;\n\nThe failure actually indicates that even if standby3 has started\nafter standby2, standby3 has registered back into the WAL sender array\nof the primary before standby2 had the occasion to do it, but we have\nno guarantee that the ordering is actually right. So this has messed\nup with the expected set of sync standbys when these are not directly\nlisted. I think that this can happen as well when initializing the\nstandbys at the top of the test, but the window is much, much\nnarrower and basically impossible to reach.\n\nUsing a combo of safe_psql('checkpoint') + wait_for_catchup() makes\nthe test faster, but that's much more costly than using just\npoll_query_until() on pg_stat_replication to make sure that each\nstandby is registered on the primary's WAL sender array. In short,\nsomething like the attached should improve the stability of the test.\n\nThoughts?\n--\nMichael",
"msg_date": "Thu, 13 Jun 2019 15:01:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Race conditions with checkpointer and shutdown"
}
] |
[
{
"msg_contents": "Hi, hackers.\n\nI'm trying to build 64-bit windows binaries with kerberos support.\nI downloaded latest kerberos source package from here:\nhttps://kerberos.org/dist/index.html\nI followed the the instructions in src\\windows\\README, and executed the\nfollowing script in 64-bit Visual Studio Command Prompt to build and\ninstall it.\n\nset NO_LEASH=1\nset PATH=%PATH%;\"%WindowsSdkVerBinPath%\"\\x86\nset KRB_INSTALL_DIR=C:\\krb5\ncd src\nnmake -f Makefile.in prep-windows\nnmake NODEBUG=1\nnmake install NODEBUG=1\n\nTo compile postgres with kerberos support, we need to configure the install\nlocation in src/tools/msvc/config.pl\nour $config = {gss => 'C:/krb5'};\n\nIf I run build.pl the compiler will complain about gssapi.h not found.\nAt src/tools/msvc/Solution.pm line 633, we can see the include directory is\nset to '\\inc\\krb5'. This is no longer the case for 64-bit kerberos package.\n\nThe correct include directory is '\\include'. The library paths also need to\nbe fixed with 64-bit version.\n\nHere's a patch to fixed these paths, with this patch we can build 64-bit\nbinaries with kerberos support successfully.\n\nBest regards,\nPeifeng Qiu",
"msg_date": "Tue, 16 Apr 2019 17:17:50 +0800",
"msg_from": "Peifeng Qiu <pqiu@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Compile with 64-bit kerberos on Windows"
}
] |
[
{
"msg_contents": "Hi all,\n\nI found a runtime pruning test case which may be a problem as follows:\n\n----\ncreate table t1 (id int, dt date) partition by range(dt);\ncreate table t1_1 partition of t1 for values from ('2019-01-01') to ('2019-04-01');\ncreate table t1_2 partition of t1 for values from ('2019-04-01') to ('2019-07-01');\ncreate table t1_3 partition of t1 for values from ('2019-07-01') to ('2019-10-01');\ncreate table t1_4 partition of t1 for values from ('2019-10-01') to ('2020-01-01');\n\nIn this example, current_date is 2019-04-16.\n\npostgres=# explain select * from t1 where dt = current_date + 400;\n QUERY PLAN \n------------------------------------------------------------\n Append (cost=0.00..198.42 rows=44 width=8)\n Subplans Removed: 3\n -> Seq Scan on t1_1 (cost=0.00..49.55 rows=11 width=8)\n Filter: (dt = (CURRENT_DATE + 400))\n(4 rows)\n\npostgres=# explain analyze select * from t1 where dt = current_date + 400;\n QUERY PLAN \n---------------------------------------------------------------------------------------\n Append (cost=0.00..198.42 rows=44 width=8) (actual time=0.000..0.001 rows=0 loops=1)\n Subplans Removed: 3\n -> Seq Scan on t1_1 (cost=0.00..49.55 rows=11 width=8) (never executed)\n Filter: (dt = (CURRENT_DATE + 400))\n Planning Time: 0.400 ms\n Execution Time: 0.070 ms\n(6 rows)\n----\n\nI realized t1_1 was not scanned actually since \"never executed\" \nwas displayed in the plan using EXPLAIN ANALYZE. But I think \n\"One-Time Filter: false\" and \"Subplans Removed: ALL\" or something\nlike that should be displayed instead.\n\nWhat do you think?\n\n\nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n\n\n\n",
"msg_date": "Tue, 16 Apr 2019 20:54:36 +0900",
"msg_from": "\"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Runtime pruning problem"
},
{
"msg_contents": "On Tue, 16 Apr 2019 at 23:55, Yuzuko Hosoya <hosoya.yuzuko@lab.ntt.co.jp> wrote:\n> postgres=# explain analyze select * from t1 where dt = current_date + 400;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------\n> Append (cost=0.00..198.42 rows=44 width=8) (actual time=0.000..0.001 rows=0 loops=1)\n> Subplans Removed: 3\n> -> Seq Scan on t1_1 (cost=0.00..49.55 rows=11 width=8) (never executed)\n> Filter: (dt = (CURRENT_DATE + 400))\n> Planning Time: 0.400 ms\n> Execution Time: 0.070 ms\n> (6 rows)\n> ----\n>\n> I realized t1_1 was not scanned actually since \"never executed\"\n> was displayed in the plan using EXPLAIN ANALYZE. But I think\n> \"One-Time Filter: false\" and \"Subplans Removed: ALL\" or something\n> like that should be displayed instead.\n>\n> What do you think?\n\nThis is intended behaviour explained by the following comment in nodeAppend.c\n\n/*\n* The case where no subplans survive pruning must be handled\n* specially. The problem here is that code in explain.c requires\n* an Append to have at least one subplan in order for it to\n* properly determine the Vars in that subplan's targetlist. We\n* sidestep this issue by just initializing the first subplan and\n* setting as_whichplan to NO_MATCHING_SUBPLANS to indicate that\n* we don't really need to scan any subnodes.\n*/\n\nIt's true that there is a small overhead in this case of having to\ninitialise a useless subplan, but the code never tries to pull any\ntuples from it, so it should be fairly minimal. I expected that using\na value that matches no partitions would be unusual enough not to go\ncontorting explain.c into working for this case.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 17 Apr 2019 00:09:52 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "Hi,\n\nOn 2019/04/16 21:09, David Rowley wrote:\n> On Tue, 16 Apr 2019 at 23:55, Yuzuko Hosoya <hosoya.yuzuko@lab.ntt.co.jp> wrote:\n>> postgres=# explain analyze select * from t1 where dt = current_date + 400;\n>> QUERY PLAN\n>> ---------------------------------------------------------------------------------------\n>> Append (cost=0.00..198.42 rows=44 width=8) (actual time=0.000..0.001 rows=0 loops=1)\n>> Subplans Removed: 3\n>> -> Seq Scan on t1_1 (cost=0.00..49.55 rows=11 width=8) (never executed)\n>> Filter: (dt = (CURRENT_DATE + 400))\n>> Planning Time: 0.400 ms\n>> Execution Time: 0.070 ms\n>> (6 rows)\n>> ----\n>>\n>> I realized t1_1 was not scanned actually since \"never executed\"\n>> was displayed in the plan using EXPLAIN ANALYZE. But I think\n>> \"One-Time Filter: false\" and \"Subplans Removed: ALL\" or something\n>> like that should be displayed instead.\n>>\n>> What do you think?\n> \n> This is intended behaviour explained by the following comment in nodeAppend.c\n> \n> /*\n> * The case where no subplans survive pruning must be handled\n> * specially. The problem here is that code in explain.c requires\n> * an Append to have at least one subplan in order for it to\n> * properly determine the Vars in that subplan's targetlist. We\n> * sidestep this issue by just initializing the first subplan and\n> * setting as_whichplan to NO_MATCHING_SUBPLANS to indicate that\n> * we don't really need to scan any subnodes.\n> */\n> \n> It's true that there is a small overhead in this case of having to\n> initialise a useless subplan, but the code never tries to pull any\n> tuples from it, so it should be fairly minimal. 
I expected that using\n> a value that matches no partitions would be unusual enough not to go\n> contorting explain.c into working for this case.\n\nWhen I saw this, I didn't think as much of the overhead of initializing a\nsubplan as I was surprised to see that result at all.\n\nWhen you see this:\n\nexplain select * from t1 where dt = current_date + 400;\n QUERY PLAN\n────────────────────────────────────────────────────────────\n Append (cost=0.00..198.42 rows=44 width=8)\n Subplans Removed: 3\n -> Seq Scan on t1_1 (cost=0.00..49.55 rows=11 width=8)\n Filter: (dt = (CURRENT_DATE + 400))\n(4 rows)\n\nDoesn't this give an impression that t1_1 *matches* the WHERE condition\nwhere it clearly doesn't? IMO, contorting explain.c to show an empty\nAppend like what Hosoya-san suggests doesn't sound too bad given that the\nfirst reaction to seeing the above result is to think it's a bug of\npartition pruning.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Wed, 17 Apr 2019 10:12:56 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On Wed, 17 Apr 2019 at 13:13, Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> When you see this:\n>\n> explain select * from t1 where dt = current_date + 400;\n> QUERY PLAN\n> ────────────────────────────────────────────────────────────\n> Append (cost=0.00..198.42 rows=44 width=8)\n> Subplans Removed: 3\n> -> Seq Scan on t1_1 (cost=0.00..49.55 rows=11 width=8)\n> Filter: (dt = (CURRENT_DATE + 400))\n> (4 rows)\n>\n> Doesn't this give an impression that t1_1 *matches* the WHERE condition\n> where it clearly doesn't? IMO, contorting explain.c to show an empty\n> Append like what Hosoya-san suggests doesn't sound too bad given that the\n> first reaction to seeing the above result is to think it's a bug of\n> partition pruning.\n\nWhere do you think the output list for EXPLAIN VERBOSE should put the\noutput column list in this case? On the Append node, or just not show\nthem?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 17 Apr 2019 14:29:17 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On 2019/04/17 11:29, David Rowley wrote:\n> On Wed, 17 Apr 2019 at 13:13, Amit Langote\n> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>> When you see this:\n>>\n>> explain select * from t1 where dt = current_date + 400;\n>> QUERY PLAN\n>> ────────────────────────────────────────────────────────────\n>> Append (cost=0.00..198.42 rows=44 width=8)\n>> Subplans Removed: 3\n>> -> Seq Scan on t1_1 (cost=0.00..49.55 rows=11 width=8)\n>> Filter: (dt = (CURRENT_DATE + 400))\n>> (4 rows)\n>>\n>> Doesn't this give an impression that t1_1 *matches* the WHERE condition\n>> where it clearly doesn't? IMO, contorting explain.c to show an empty\n>> Append like what Hosoya-san suggests doesn't sound too bad given that the\n>> first reaction to seeing the above result is to think it's a bug of\n>> partition pruning.\n> \n> Where do you think the output list for EXPLAIN VERBOSE should put the\n> output column list in this case? On the Append node, or just not show\n> them?\n\nMaybe, not show them? That may be a bit inconsistent, because the point\nof VERBOSE is to the targetlist among other things, but maybe the users\nwouldn't mind not seeing it on such empty Append nodes. OTOH, they are\nmore likely to think seeing a subplan that's clearly prunable as a bug of\nthe pruning logic.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Wed, 17 Apr 2019 11:49:04 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> On 2019/04/17 11:29, David Rowley wrote:\n>> Where do you think the output list for EXPLAIN VERBOSE should put the\n>> output column list in this case? On the Append node, or just not show\n>> them?\n\n> Maybe, not show them?\n\nYeah, I think that seems like a reasonable idea. If we show the tlist\nfor Append in this case, when we never do otherwise, that will be\nconfusing, and it could easily break plan-reading apps like depesz.com.\n\nWhat I'm more worried about is whether this breaks any internal behavior\nof explain.c, as the comment David quoted upthread seems to think.\nIf we need to have a tlist to reference, can we make that code look\nto the pre-pruning plan tree, rather than the planstate tree?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Apr 2019 23:54:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On Wed, 17 Apr 2019 at 15:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> > On 2019/04/17 11:29, David Rowley wrote:\n> >> Where do you think the output list for EXPLAIN VERBOSE should put the\n> >> output column list in this case? On the Append node, or just not show\n> >> them?\n>\n> > Maybe, not show them?\n>\n> Yeah, I think that seems like a reasonable idea. If we show the tlist\n> for Append in this case, when we never do otherwise, that will be\n> confusing, and it could easily break plan-reading apps like depesz.com.\n>\n> What I'm more worried about is whether this breaks any internal behavior\n> of explain.c, as the comment David quoted upthread seems to think.\n> If we need to have a tlist to reference, can we make that code look\n> to the pre-pruning plan tree, rather than the planstate tree?\n\nI think most of the complexity is in what to do in\nset_deparse_planstate() given that there might be no outer plan to\nchoose from for Append and MergeAppend. This controls what's done in\nresolve_special_varno() as this descends the plan tree down the outer\nside until it gets to the node that the outer var came from.\n\nWe wouldn't need to do this if we just didn't show the targetlist in\nEXPLAIN VERBOSE, but there's also MergeAppend sort keys to worry about\ntoo. Should we just skip on those as well?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 17 Apr 2019 15:58:52 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On 2019/04/17 12:58, David Rowley wrote:\n> On Wed, 17 Apr 2019 at 15:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n>>> On 2019/04/17 11:29, David Rowley wrote:\n>>>> Where do you think the output list for EXPLAIN VERBOSE should put the\n>>>> output column list in this case? On the Append node, or just not show\n>>>> them?\n>>\n>>> Maybe, not show them?\n>>\n>> Yeah, I think that seems like a reasonable idea. If we show the tlist\n>> for Append in this case, when we never do otherwise, that will be\n>> confusing, and it could easily break plan-reading apps like depesz.com.\n>>\n>> What I'm more worried about is whether this breaks any internal behavior\n>> of explain.c, as the comment David quoted upthread seems to think.\n>> If we need to have a tlist to reference, can we make that code look\n>> to the pre-pruning plan tree, rather than the planstate tree?\n> \n> I think most of the complexity is in what to do in\n> set_deparse_planstate() given that there might be no outer plan to\n> choose from for Append and MergeAppend. This controls what's done in\n> resolve_special_varno() as this descends the plan tree down the outer\n> side until it gets to the node that the outer var came from.\n> \n> We wouldn't need to do this if we just didn't show the targetlist in\n> EXPLAIN VERBOSE, but there's also MergeAppend sort keys to worry about\n> too. Should we just skip on those as well?\n\nI guess so, if only to be consistent with Append.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Wed, 17 Apr 2019 13:04:03 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Wed, 17 Apr 2019 at 15:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What I'm more worried about is whether this breaks any internal behavior\n>> of explain.c, as the comment David quoted upthread seems to think.\n>> If we need to have a tlist to reference, can we make that code look\n>> to the pre-pruning plan tree, rather than the planstate tree?\n\n> I think most of the complexity is in what to do in\n> set_deparse_planstate() given that there might be no outer plan to\n> choose from for Append and MergeAppend. This controls what's done in\n> resolve_special_varno() as this descends the plan tree down the outer\n> side until it gets to the node that the outer var came from.\n\n> We wouldn't need to do this if we just didn't show the targetlist in\n> EXPLAIN VERBOSE, but there's also MergeAppend sort keys to worry about\n> too. Should we just skip on those as well?\n\nNo, the larger issue is that *any* plan node above the Append might\nbe recursing down to/through the Append to find out what to print for\na Var reference. We have to be able to support that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Apr 2019 00:10:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On 2019/04/17 13:10, Tom Lane wrote:\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n>> On Wed, 17 Apr 2019 at 15:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> What I'm more worried about is whether this breaks any internal behavior\n>>> of explain.c, as the comment David quoted upthread seems to think.\n>>> If we need to have a tlist to reference, can we make that code look\n>>> to the pre-pruning plan tree, rather than the planstate tree?\n> \n>> I think most of the complexity is in what to do in\n>> set_deparse_planstate() given that there might be no outer plan to\n>> choose from for Append and MergeAppend. This controls what's done in\n>> resolve_special_varno() as this descends the plan tree down the outer\n>> side until it gets to the node that the outer var came from.\n> \n>> We wouldn't need to do this if we just didn't show the targetlist in\n>> EXPLAIN VERBOSE, but there's also MergeAppend sort keys to worry about\n>> too. Should we just skip on those as well?\n> \n> No, the larger issue is that *any* plan node above the Append might\n> be recursing down to/through the Append to find out what to print for\n> a Var reference. We have to be able to support that.\n\nHmm, yes.\n\nI see that the targetlist of Append, MergeAppend, and ModifyTable nodes is\nfinalized using set_dummy_tlist_references(), wherein the Vars in the\nnodes' (Plan struct's) targetlist are modified to be OUTER_VAR vars. The\ncomments around set_dummy_tlist_references() say it's done for explain.c\nto intercept any accesses to variables in these nodes' targetlist and\nreturn the corresponding variables in the nodes' 1st child subplan's\ntargetlist, which must have all the variables in the nodes' targetlist.\nThis arrangement makes it mandatory for these nodes to have at least one\nsubplan, hence the hack in the runtime pruning code.\n\nI wonder why the original targetlist of these nodes, adjusted using just\nfix_scan_list(), wouldn't have been better for EXPLAIN to use? If I\nreplace the set_dummy_tlist_references() call by fix_scan_list() for\nAppend for starters, I see that the targetlist of any nodes on top of the\nAppend list the Append's output variables without a \"refname\" prefix.\nThat can be confusing if the same parent table (Append's parent relation)\nis referenced multiple times. The refname is empty, because\nselect_rtable_names_for_explain() thinks an Append hasn't got one. Same\nis true for MergeAppend. ModifyTable, OTOH, has one because it has the\nnominalRelation field. Maybe it's not possible to have such a field for\nAppend and MergeAppend, because they don't *always* refer to a single\ntable (UNION ALL, partitionwise join come to mind). Anyway, even if we do\nmanage to print a refname for Append/MergeAppend somehow, that wouldn't be\nback-patchable to 11.\n\nAnother idea is to teach explain.c about this special case of run-time\npruning having pruned all child subplans even though appendplans contains\none element to cater for targetlist accesses. That is, Append will be\ndisplayed with \"Subplans Removed: All\" and no child subplans listed below\nit, even though appendplans[] has one. David already said he didn't do it in\nthe first place to avoid PartitionPruneInfo details creeping into other\nmodules, but maybe there's no other way?\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Thu, 18 Apr 2019 14:20:32 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> On 2019/04/17 13:10, Tom Lane wrote:\n>> No, the larger issue is that *any* plan node above the Append might\n>> be recursing down to/through the Append to find out what to print for\n>> a Var reference. We have to be able to support that.\n\n> I wonder why the original targetlist of these nodes, adjusted using just\n> fix_scan_list(), wouldn't have been better for EXPLAIN to use?\n\nSo what I'm thinking is that I made a bad decision in 1cc29fe7c,\nwhich did this:\n\n ... In passing, simplify the EXPLAIN code by\n having it deal primarily in the PlanState tree rather than separately\n searching Plan and PlanState trees. This is noticeably cleaner for\n subplans, and about a wash elsewhere.\n\nIt was definitely silly to have the recursion in explain.c passing down\nboth Plan and PlanState nodes, when the former is always easily accessible\nfrom the latter. So that was an OK change, but at the same time I changed\nruleutils.c to accept PlanState pointers not Plan pointers from explain.c,\nand that is now looking like a bad idea. If we were to revert that\ndecision, then instead of assuming that an AppendState always has at least\none live child, we'd only have to assume that an Append has at least one\nlive child. Which is true.\n\nI don't recall that there was any really strong reason for switching\nruleutils' API like that, although maybe if we look harder we'll find one.\nI think it was mainly just for consistency with the way that explain.c\nnow looks at the world; which is not a negligible consideration, but\nit's certainly something we could overrule.\n\n> Another idea is to teach explain.c about this special case of run-time\n> pruning having pruned all child subplans even though appendplans contains\n> one element to cater for targetlist accesses. That is, Append will be\n> displayed with \"Subplans Removed: All\" and no child subplans listed below\n> it, even though appendplans[] has one. David already said he didn't do it in\n> the first place to avoid PartitionPruneInfo details creeping into other\n> modules, but maybe there's no other way?\n\nI tried simply removing the hack in nodeAppend.c (per quick-hack patch\nbelow), and it gets through the core regression tests without a crash,\nand with output diffs that seem fine to me. However, that just shows that\nwe lack enough test coverage; we evidently have no regression cases where\nan upper node needs to print Vars that are coming from a fully-pruned\nAppend. Given the test case mentioned in this thread, I get\n\nregression=# explain verbose select * from t1 where dt = current_date + 400;\n QUERY PLAN \n---------------------------------------------\n Append (cost=0.00..198.42 rows=44 width=8)\n Subplans Removed: 4\n(2 rows)\n\nwhich seems fine, but\n\nregression=# explain verbose select * from t1 where dt = current_date + 400 order by id;\npsql: server closed the connection unexpectedly\n\nIt's dying trying to resolve Vars in the Sort node, of course.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 18 Apr 2019 13:25:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On Tue, Apr 16, 2019 at 10:49 PM Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> Maybe, not show them? That may be a bit inconsistent, because the point\n> of VERBOSE is to the targetlist among other things, but maybe the users\n> wouldn't mind not seeing it on such empty Append nodes. OTOH, they are\n> more likely to think seeing a subplan that's clearly prunable as a bug of\n> the pruning logic.\n\nOr maybe we could show them, but the Append could also be flagged in\nsome way that indicates that its child is only a dummy.\n\nEverything Pruned: Yes\n\nOr something.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Apr 2019 14:13:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "I wrote:\n> [ let's fix this by reverting ruleutils back to using Plans not PlanStates ]\n\nBTW, while I suspect the above wouldn't be a huge patch, it doesn't\nseem trivial either. Since the issue is (a) cosmetic and (b) not new\n(v11 behaves the same way), I don't think we should consider it to be\nan open item for v12. I suggest leaving this as a to-do for v13.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Apr 2019 15:50:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On 2019/04/19 2:25, Tom Lane wrote:\n> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n>> Another idea is to teach explain.c about this special case of run-time\n>> pruning having pruned all child subplans even though appendplans contains\n>> one element to cater for targetlist accesses. That is, Append will be\n>> displayed with \"Subplans Removed: All\" and no child subplans listed below\n>> it, even though appendplans[] has one. David already said he didn't do it in\n>> the first place to avoid PartitionPruneInfo details creeping into other\n>> modules, but maybe there's no other way?\n> \n> I tried simply removing the hack in nodeAppend.c (per quick-hack patch\n> below), and it gets through the core regression tests without a crash,\n> and with output diffs that seem fine to me. However, that just shows that\n> we lack enough test coverage; we evidently have no regression cases where\n> an upper node needs to print Vars that are coming from a fully-pruned\n> Append. Given the test case mentioned in this thread, I get\n> \n> regression=# explain verbose select * from t1 where dt = current_date + 400;\n> QUERY PLAN \n> ---------------------------------------------\n> Append (cost=0.00..198.42 rows=44 width=8)\n> Subplans Removed: 4\n> (2 rows)\n> \n> which seems fine, but\n> \n> regression=# explain verbose select * from t1 where dt = current_date + 400 order by id;\n> psql: server closed the connection unexpectedly\n> \n> It's dying trying to resolve Vars in the Sort node, of course.\n\nAnother approach, as I mentioned above, is to extend the hack that begins\nin nodeAppend.c (and nodeMergeAppend.c) into explain.c, as in the\nattached. Then:\n\nexplain verbose select * from t1 where dt = current_date + 400 order by id;\n QUERY PLAN\n───────────────────────────────────────────────────\n Sort (cost=199.62..199.73 rows=44 width=8)\n Output: t1_1.id, t1_1.dt\n Sort Key: t1_1.id\n -> Append (cost=0.00..198.42 rows=44 width=8)\n Subplans Removed: 4\n(5 rows)\n\nIt's pretty confusing to see t1_1 which has been pruned away, but you\ndidn't seem very interested in the idea of teaching explain.c to use the\noriginal target list of plans like Append, MergeAppend, etc. that have\nchild subplans.\n\nJust a note: runtime pruning for MergeAppend is new in PG 12.\n\nThanks,\nAmit",
"msg_date": "Fri, 19 Apr 2019 17:00:57 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On 2019/04/19 3:13, Robert Haas wrote:\n> On Tue, Apr 16, 2019 at 10:49 PM Amit Langote\n> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>> Maybe, not show them? That may be a bit inconsistent, because the point\n>> of VERBOSE is to the targetlist among other things, but maybe the users\n>> wouldn't mind not seeing it on such empty Append nodes. OTOH, they are\n>> more likely to think seeing a subplan that's clearly prunable as a bug of\n>> the pruning logic.\n> \n> Or maybe we could show them, but the Append could also be flagged in\n> some way that indicates that its child is only a dummy.\n> \n> Everything Pruned: Yes\n> \n> Or something.\n\nSuch an approach has been proposed too, although not with a new property text.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Fri, 19 Apr 2019 17:03:07 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On 2019/04/19 17:00, Amit Langote wrote:\n> On 2019/04/19 2:25, Tom Lane wrote:\n>> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n>>> Another idea is to teach explain.c about this special case of run-time\n>>> pruning having pruned all child subplans even though appendplans contains\n>>> one element to cater for targetlist accesses. That is, Append will be\n>>> displayed with \"Subplans Removed: All\" and no child subplans listed below\n>>> it, even though appendplans[] has one. David already said he didn't do it in\n>>> the first place to avoid PartitionPruneInfo details creeping into other\n>>> modules, but maybe there's no other way?\n>>\n>> I tried simply removing the hack in nodeAppend.c (per quick-hack patch\n>> below), and it gets through the core regression tests without a crash,\n>> and with output diffs that seem fine to me. However, that just shows that\n>> we lack enough test coverage; we evidently have no regression cases where\n>> an upper node needs to print Vars that are coming from a fully-pruned\n>> Append. Given the test case mentioned in this thread, I get\n>>\n>> regression=# explain verbose select * from t1 where dt = current_date + 400;\n>> QUERY PLAN \n>> ---------------------------------------------\n>> Append (cost=0.00..198.42 rows=44 width=8)\n>> Subplans Removed: 4\n>> (2 rows)\n>>\n>> which seems fine, but\n>>\n>> regression=# explain verbose select * from t1 where dt = current_date + 400 order by id;\n>> psql: server closed the connection unexpectedly\n>>\n>> It's dying trying to resolve Vars in the Sort node, of course.\n> \n> Another approach, as I mentioned above, is to extend the hack that begins\n> in nodeAppend.c (and nodeMergeAppend.c) into explain.c, as in the\n> attached. Then:\n> \n> explain verbose select * from t1 where dt = current_date + 400 order by id;\n> QUERY PLAN\n> ───────────────────────────────────────────────────\n> Sort (cost=199.62..199.73 rows=44 width=8)\n> Output: t1_1.id, t1_1.dt\n> Sort Key: t1_1.id\n> -> Append (cost=0.00..198.42 rows=44 width=8)\n> Subplans Removed: 4\n> (5 rows)\n> \n> It's pretty confusing to see t1_1 which has been pruned away, but you\n> didn't seem very interested in the idea of teaching explain.c to use the\n> original target list of plans like Append, MergeAppend, etc. that have\n> child subplans.\n> \n> Just a note: runtime pruning for MergeAppend is new in PG 12.\n\nThe patch I attached with the previous email didn't update the expected\noutput file. Correct one attached.\n\nThanks,\nAmit",
"msg_date": "Fri, 19 Apr 2019 17:13:43 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On Fri, 19 Apr 2019 at 20:01, Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>\n> On 2019/04/19 2:25, Tom Lane wrote:\n> > Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> >> Another idea is to teach explain.c about this special case of run-time\n> >> pruning having pruned all child subplans even though appendplans contains\n> >> one element to cater for targetlist accesses. That is, Append will be\n> >> displayed with \"Subplans Removed: All\" and no child subplans listed below\n> >> it, even though appendplans[] has one. David already said he didn't do it in\n> >> the first place to avoid PartitionPruneInfo details creeping into other\n> >> modules, but maybe there's no other way?\n> >\n> > I tried simply removing the hack in nodeAppend.c (per quick-hack patch\n> > below), and it gets through the core regression tests without a crash,\n> > and with output diffs that seem fine to me. However, that just shows that\n> > we lack enough test coverage; we evidently have no regression cases where\n> > an upper node needs to print Vars that are coming from a fully-pruned\n> > Append. Given the test case mentioned in this thread, I get\n> >\n> > regression=# explain verbose select * from t1 where dt = current_date + 400;\n> > QUERY PLAN\n> > ---------------------------------------------\n> > Append (cost=0.00..198.42 rows=44 width=8)\n> > Subplans Removed: 4\n> > (2 rows)\n> >\n> > which seems fine, but\n> >\n> > regression=# explain verbose select * from t1 where dt = current_date + 400 order by id;\n> > psql: server closed the connection unexpectedly\n> >\n> > It's dying trying to resolve Vars in the Sort node, of course.\n>\n> Another approach, as I mentioned above, is to extend the hack that begins\n> in nodeAppend.c (and nodeMergeAppend.c) into explain.c, as in the\n> attached. Then:\n>\n> explain verbose select * from t1 where dt = current_date + 400 order by id;\n> QUERY PLAN\n> ───────────────────────────────────────────────────\n> Sort (cost=199.62..199.73 rows=44 width=8)\n> Output: t1_1.id, t1_1.dt\n> Sort Key: t1_1.id\n> -> Append (cost=0.00..198.42 rows=44 width=8)\n> Subplans Removed: 4\n> (5 rows)\n\nWe could do that, but I feel that's making EXPLAIN tell lies, which is\nprobably a path we should avoid. The lies might be fairly innocent\ntoday, but maintaining them over time, like any lie, might become more\ndifficult. We did perform init on a subnode, and the subnode might be an\nindex scan, in which case we'd have obtained a lock on the index. It could be\nfairly difficult to explain why that is given the lack of mention of\nit in the explain output.\n\nThe fix I was working on before heading away for Easter was around\nchanging ruleutils.c to look at Plan nodes rather than PlanState\nnodes. I'm afraid that this would still suffer from showing the alias\nof the first subnode but not show it as in the explain output you show\nabove, but it does allow us to get rid of the code that initialises the\nfirst subnode. I think that's a much cleaner way to do it.\n\nI agree with Tom about the v13 part. If we were having this discussion\nthis time last year, then I'd have likely pushed for a v11 fix, but\nsince it's already shipped like this in one release then there's not\nmuch more additional harm in two releases working this way. I'll try\nand finish off the patch I was working on soon and submit to v13's\nfirst commitfest.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Sun, 21 Apr 2019 18:25:43 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On 2019/04/21 15:25, David Rowley wrote:\n> On Fri, 19 Apr 2019 at 20:01, Amit Langote\n> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>> Another approach, as I mentioned above, is to extend the hack that begins\n>> in nodeAppend.c (and nodeMergeAppend.c) into explain.c, as in the\n>> attached. Then:\n>>\n>> explain verbose select * from t1 where dt = current_date + 400 order by id;\n>> QUERY PLAN\n>> ───────────────────────────────────────────────────\n>> Sort (cost=199.62..199.73 rows=44 width=8)\n>> Output: t1_1.id, t1_1.dt\n>> Sort Key: t1_1.id\n>> -> Append (cost=0.00..198.42 rows=44 width=8)\n>> Subplans Removed: 4\n>> (5 rows)\n> \n> We could do that, but I feel that's making EXPLAIN tell lies, which is\n> probably a path we should avoid. The lies might be fairly innocent\n> today, but maintaining them over time, like any lie, might become more\n> difficult. We did perform init on a subnode, and the subnode might be an\n> index scan, in which case we'd have obtained a lock on the index. It could be\n> fairly difficult to explain why that is given the lack of mention of\n> it in the explain output.\n\nI had overlooked the fact that ExecInitAppend and ExecInitMergeAppend\nactually perform ExecInitNode on the subplan, so on second thought, I\nagree we've got to show it. Should this have been documented? The chance\nthat users may query for values that they've not defined partitions for\nmight well be non-zero.\n\n> The fix I was working on before heading away for Easter was around\n> changing ruleutils.c to look at Plan nodes rather than PlanState\n> nodes. I'm afraid that this would still suffer from showing the alias\n> of the first subnode but not show it as in the explain output you show\n> above, but it does allow us to get rid of the code that initialises the\n> first subnode. I think that's a much cleaner way to do it.\n\nI agree.\n\n> I agree with Tom about the v13 part. If we were having this discussion\n> this time last year, then I'd have likely pushed for a v11 fix, but\n> since it's already shipped like this in one release then there's not\n> much more additional harm in two releases working this way. I'll try\n> and finish off the patch I was working on soon and submit to v13's\n> first commitfest.\n\nOK, I'll try to review it.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Mon, 22 Apr 2019 14:37:08 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On Fri, 19 Apr 2019 at 05:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So what I'm thinking is that I made a bad decision in 1cc29fe7c,\n> which did this:\n>\n> ... In passing, simplify the EXPLAIN code by\n> having it deal primarily in the PlanState tree rather than separately\n> searching Plan and PlanState trees. This is noticeably cleaner for\n> subplans, and about a wash elsewhere.\n>\n> It was definitely silly to have the recursion in explain.c passing down\n> both Plan and PlanState nodes, when the former is always easily accessible\n> from the latter. So that was an OK change, but at the same time I changed\n> ruleutils.c to accept PlanState pointers not Plan pointers from explain.c,\n> and that is now looking like a bad idea. If we were to revert that\n> decision, then instead of assuming that an AppendState always has at least\n> one live child, we'd only have to assume that an Append has at least one\n> live child. Which is true.\n>\n> I don't recall that there was any really strong reason for switching\n> ruleutils' API like that, although maybe if we look harder we'll find one.\n> I think it was mainly just for consistency with the way that explain.c\n> now looks at the world; which is not a negligible consideration, but\n> it's certainly something we could overrule.\n\nI started working on this today and I've attached what I have so far.\n\nFor a plan like the following, as shown by master's EXPLAIN, we get:\n\npostgres=# explain verbose select *,(select * from t1 where\nt1.a=listp.a) z from listp where a = three() order by z;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------\n Sort (cost=1386.90..1386.95 rows=22 width=12)\n Output: listp1.a, listp1.b, ((SubPlan 1))\n Sort Key: ((SubPlan 1))\n -> Append (cost=0.00..1386.40 rows=22 width=12)\n Subplans Removed: 1\n -> Seq Scan on public.listp1 (cost=0.00..693.15 rows=11 width=12)\n Output: listp1.a, listp1.b, (SubPlan 1)\n Filter: (listp1.a = three())\n SubPlan 1\n -> Index Only Scan using t1_pkey on public.t1\n(cost=0.15..8.17 rows=1 width=4)\n Output: t1.a\n Index Cond: (t1.a = listp1.a)\n(12 rows)\n\nWith the attached we end up with:\n\npostgres=# explain verbose select *,(select * from t1 where\nt1.a=listp.a) z from listp where a = three() order by z;\n QUERY PLAN\n-----------------------------------------------------\n Sort (cost=1386.90..1386.95 rows=22 width=12)\n Output: listp1.a, listp1.b, ((SubPlan 1))\n Sort Key: ((SubPlan 1))\n -> Append (cost=0.00..1386.40 rows=22 width=12)\n Subplans Removed: 2\n(5 rows)\n\nnotice the reference to SubPlan 1, but no definition of what Subplan 1\nactually is. I don't think this is particularly good, but not all that\nsure what to do about it.\n\nThe code turned a bit more complex than I'd have hoped. In order to\nstill properly resolve the parameters in find_param_referent() I had\nto keep the ancestor list, but also had to add an ancestor_plan list\nso that we can properly keep track of the Plan node parents too.\nRemember that a Plan node may not have a corresponding PlanState node\nif the state was never initialized.\n\nFor Append and MergeAppend I ended up always using the first of the\ninitialized subnodes if at least one is present and only resorted to\nusing the first planned subnode if there are no subnodes in the\nAppendState/MergeAppendState. Without this, the Vars shown in the\nMergeAppend sort keys lost their alias prefix if the first subplan\nhappened to have been pruned, but magically would gain it again if it\nwas some other node that was pruned. This was just a bit too weird, so\nI ended up making a special case for this.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Tue, 23 Apr 2019 01:12:16 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On Tue, 23 Apr 2019 at 01:12, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> I started working on this today and I've attached what I have so far.\n\nI've added this to the July commitfest so that I don't forget about it.\n\nhttps://commitfest.postgresql.org/23/2102/\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 24 Apr 2019 11:42:10 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On Wed, 24 Apr 2019 at 11:42, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> I've added this to the July commitfest so that I don't forget about it.\n>\n> https://commitfest.postgresql.org/23/2102/\n\nand an updated patch, rebased after the pgindent run.\n\nHopefully, this will make the CF bot happy again.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Sat, 25 May 2019 18:55:11 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On Sat, 25 May 2019 at 18:55, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> and an updated patch, rebased after the pgindent run.\n>\n> Hopefully, this will make the CF bot happy again.\n\nand rebased again due to a conflict with some List changes that\ntouched ruleutils.c.\n\nI also made another couple of passes over this adding a few comments\nand fixing some spelling mistakes. I also added another regression\ntest to validate the EXPLAIN VERBOSE target list output of a\nMergeAppend that's had all its subnodes pruned. Previously the Vars\nfrom the pruned rel were only shown in the MergeAppend's sort clause.\nAfter doing all that I'm now pretty happy with it.\n\nThe part I wouldn't mind another set of eyes on is the ruleutils.c\nchanges. The patch changes things around so that we don't just pass\naround and track PlanStates, we also pass around the Plan node for\nthat state. In some cases, the PlanState can be NULL if the Plan has\nno PlanState. Currently, that only happens when run-time pruning\ndidn't initialise any PlanStates for the given subplan's Plan node.\nI've coded it so that Append and MergeAppend use the first PlanState\nto resolve Vars. I only resort to using the first Plan's vars when\nthere are no PlanStates. If we just took the first Plan node all the\ntime then it might get confusing for users reading an EXPLAIN when the\nfirst subplan was run-time pruned as we'd be resolving Vars from a\npruned subnode. It seems much less confusing to print the Plan vars\nwhen the Append/MergeAppend has no subplans.\n\nIf there are no objections to the changes then I'd really like to be\npushing this early next week.\n\nThe v3 patch is attached.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Tue, 23 Jul 2019 20:49:52 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> The part I wouldn't mind another set of eyes on is the ruleutils.c\n> changes.\n\nUm, sorry for not getting to this sooner.\n\nWhat I had in mind was to revert 1cc29fe7c's ruleutils changes\nentirely, so that ruleutils deals only in Plans not PlanStates.\nPerhaps we've grown some code since then that really needs the\nPlanStates, but what is that, and could we do it some other way?\nI'm not thrilled with passing both of these around, especially\nif the PlanState sometimes isn't there, meaning that no code in\nruleutils could safely assume it's there anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Jul 2019 18:27:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On Wed, 31 Jul 2019 at 10:27, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > The part I wouldn't mind another set of eyes on is the ruleutils.c\n> > changes.\n>\n> Um, sorry for not getting to this sooner.\n>\n> What I had in mind was to revert 1cc29fe7c's ruleutils changes\n> entirely, so that ruleutils deals only in Plans not PlanStates.\n> Perhaps we've grown some code since then that really needs the\n> PlanStates, but what is that, and could we do it some other way?\n> I'm not thrilled with passing both of these around, especially\n> if the PlanState sometimes isn't there, meaning that no code in\n> ruleutils could safely assume it's there anyway.\n\nAre you not worried about the confusion that run-time pruning might\ncause if we always show the Vars from the first Append/MergeAppend\nplan node, even though the corresponding executor node might have been\npruned?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 31 Jul 2019 10:32:35 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Wed, 31 Jul 2019 at 10:27, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What I had in mind was to revert 1cc29fe7c's ruleutils changes\n>> entirely, so that ruleutils deals only in Plans not PlanStates.\n>> Perhaps we've grown some code since then that really needs the\n>> PlanStates, but what is that, and could we do it some other way?\n>> I'm not thrilled with passing both of these around, especially\n>> if the PlanState sometimes isn't there, meaning that no code in\n>> ruleutils could safely assume it's there anyway.\n\n> Are you not worried about the confusion that run-time pruning might\n> cause if we always show the Vars from the first Append/MergeAppend\n> plan node, even though the corresponding executor node might have been\n> pruned?\n\nThe upper-level Vars should ideally be labeled with the append parent\nrel's name anyway, no? I think it's likely *more* confusing if those\nVars change appearance depending on which partitions get pruned or not.\n\nThis may be arguing for a change in ruleutils' existing behavior,\nnot sure. But when dealing with traditional-style inheritance,\nI've always thought that Vars above the Append were referring to\nthe parent rel in its capacity as the parent, not in its capacity\nas the first child. With new-style partitioning drawing a clear\ndistinction between the parent and all its children, it's easier\nto understand the difference.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Jul 2019 18:50:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "I wrote:\n> This may be arguing for a change in ruleutils' existing behavior,\n> not sure. But when dealing with traditional-style inheritance,\n> I've always thought that Vars above the Append were referring to\n> the parent rel in its capacity as the parent, not in its capacity\n> as the first child. With new-style partitioning drawing a clear\n> distinction between the parent and all its children, it's easier\n> to understand the difference.\n\nOK, so experimenting, I see that it is a change: HEAD does\n\nregression=# explain verbose select * from part order by a;\n QUERY PLAN \n---------------------------------------------------------------------------------\n Sort (cost=362.21..373.51 rows=4520 width=8)\n Output: part_p1.a, part_p1.b\n Sort Key: part_p1.a\n -> Append (cost=0.00..87.80 rows=4520 width=8)\n -> Seq Scan on public.part_p1 (cost=0.00..32.60 rows=2260 width=8)\n Output: part_p1.a, part_p1.b\n -> Seq Scan on public.part_p2_p1 (cost=0.00..32.60 rows=2260 width=8)\n Output: part_p2_p1.a, part_p2_p1.b\n(8 rows)\n\nThe portion of this below the Append is fine, but I argue that\nthe Vars above the Append should say \"part\", not \"part_p1\".\nIn that way they'd look the same regardless of which partitions\nhave been pruned or not.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Jul 2019 18:56:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On Wed, 31 Jul 2019 at 10:56, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> OK, so experimenting, I see that it is a change: HEAD does\n>\n> regression=# explain verbose select * from part order by a;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------\n> Sort (cost=362.21..373.51 rows=4520 width=8)\n> Output: part_p1.a, part_p1.b\n> Sort Key: part_p1.a\n> -> Append (cost=0.00..87.80 rows=4520 width=8)\n> -> Seq Scan on public.part_p1 (cost=0.00..32.60 rows=2260 width=8)\n> Output: part_p1.a, part_p1.b\n> -> Seq Scan on public.part_p2_p1 (cost=0.00..32.60 rows=2260 width=8)\n> Output: part_p2_p1.a, part_p2_p1.b\n> (8 rows)\n>\n> The portion of this below the Append is fine, but I argue that\n> the Vars above the Append should say \"part\", not \"part_p1\".\n> In that way they'd look the same regardless of which partitions\n> have been pruned or not.\n\nThat seems perfectly reasonable for Append / MergeAppend that are for\nscanning partitioned tables. What do you propose we do for inheritance\nand UNION ALLs?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 31 Jul 2019 11:14:29 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Wed, 31 Jul 2019 at 10:56, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The portion of this below the Append is fine, but I argue that\n>> the Vars above the Append should say \"part\", not \"part_p1\".\n>> In that way they'd look the same regardless of which partitions\n>> have been pruned or not.\n\n> That seems perfectly reasonable for Append / MergeAppend that are for\n> scanning partitioned tables. What do you propose we do for inheritance\n> and UNION ALLs?\n\nFor inheritance, I don't believe there would be any change, precisely\nbecause we've historically used the parent rel as reference.\n\nFor setops we've traditionally used the left input as reference.\nMaybe we could do better, but I'm not very sure how, since SQL\ndoesn't actually provide any explicit names for the setop result.\nMaking up a name with no basis in the query probably isn't an\nimprovement, or at least not enough of one to justify a change.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Jul 2019 19:31:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 8:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > On Wed, 31 Jul 2019 at 10:56, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> The portion of this below the Append is fine, but I argue that\n> >> the Vars above the Append should say \"part\", not \"part_p1\".\n> >> In that way they'd look the same regardless of which partitions\n> >> have been pruned or not.\n>\n> > That seems perfectly reasonable for Append / MergeAppend that are for\n> > scanning partitioned tables. What do you propose we do for inheritance\n> > and UNION ALLs?\n>\n> For inheritance, I don't believe there would be any change, precisely\n> because we've historically used the parent rel as reference.\n\nI may be missing something, but Vars above an Append/MergeAppend,\nwhether it's scanning a partitioned table or a regular inheritance\ntable, always refer to the first child subplan, which may or may not\nbe for the inheritance parent in its role as a child, not the Append\nparent.\n\ncreate table parent (a int);\nalter table only parent add check (a = 1) no inherit;\ncreate table child1 (a int check (a = 2)) inherits (parent);\ncreate table child2 (a int check (a = 3)) inherits (parent);\n\nexplain (costs off, verbose) select * from parent where a > 1 order by 1;\n QUERY PLAN\n───────────────────────────────────────\n Sort\n Output: child1.a\n Sort Key: child1.a\n -> Append\n -> Seq Scan on public.child1\n Output: child1.a\n Filter: (child1.a > 1)\n -> Seq Scan on public.child2\n Output: child2.a\n Filter: (child2.a > 1)\n(10 rows)\n\nI think this is because we replace the original targetlist of such\nnodes by a dummy one using set_dummy_tlist_references(), where all the\nparent Vars are re-stamped with OUTER_VAR as varno. 
When actually\nprinting the EXPLAIN VERBOSE output, ruleutils.c considers the first\nchild of Append as the OUTER referent, as set_deparse_planstate()\nstates:\n\n /*\n * We special-case Append and MergeAppend to pretend that the first child\n * plan is the OUTER referent; we have to interpret OUTER Vars in their\n * tlists according to one of the children, and the first one is the most\n * natural choice.\n\nIf I change set_append_references() to comment out the\nset_dummy_tlist_references() call, I get this output:\n\nexplain (costs off, verbose) select * from parent where a > 1 order by 1;\n QUERY PLAN\n───────────────────────────────────────\n Sort\n Output: a\n Sort Key: a\n -> Append\n -> Seq Scan on public.child1\n Output: child1.a\n Filter: (child1.a > 1)\n -> Seq Scan on public.child2\n Output: child2.a\n Filter: (child2.a > 1)\n(10 rows)\n\nNot parent.a as I had expected. That seems to be because parent's RTE\nis considered unused in the plan. One might say that the plan's\nAppend node belongs to that RTE, but then Append doesn't have any RT\nindex attached to it, so it escapes ExplainPreScanNode()'s walk of the\nplan tree to collect the indexes of \"used RTEs\". I changed\nset_rtable_names() to get around that as follows:\n\n@@ -3458,7 +3458,7 @@ set_rtable_names(deparse_namespace *dpns, List\n*parent_namespaces,\n /* Just in case this takes an unreasonable amount of time ... 
*/\n CHECK_FOR_INTERRUPTS();\n\n- if (rels_used && !bms_is_member(rtindex, rels_used))\n+ if (rels_used && !bms_is_member(rtindex, rels_used) && !rte->inh)\n\nand I get:\n\nexplain (costs off, verbose) select * from parent where a > 1 order by 1;\n QUERY PLAN\n───────────────────────────────────────\n Sort\n Output: parent.a\n Sort Key: parent.a\n -> Append\n -> Seq Scan on public.child1\n Output: child1.a\n Filter: (child1.a > 1)\n -> Seq Scan on public.child2\n Output: child2.a\n Filter: (child2.a > 1)\n(10 rows)\n\n> For setops we've traditionally used the left input as reference.\n> Maybe we could do better, but I'm not very sure how, since SQL\n> doesn't actually provide any explicit names for the setop result.\n> Making up a name with no basis in the query probably isn't an\n> improvement, or at least not enough of one to justify a change.\n\nI too am not sure what we should about Appends of setops, but with the\nabove hacks, I get this:\n\nexplain (costs off, verbose) select * from child1 union all select *\nfrom child2 order by 1;\n QUERY PLAN\n───────────────────────────────────────\n Sort\n Output: \"*SELECT* 1\".a\n Sort Key: \"*SELECT* 1\".a\n -> Append\n -> Seq Scan on public.child1\n Output: child1.a\n -> Seq Scan on public.child2\n Output: child2.a\n(8 rows)\n\nwhereas currently it prints:\n\nexplain (costs off, verbose) select * from child1 union all select *\nfrom child2 order by 1;\n QUERY PLAN\n───────────────────────────────────────\n Sort\n Output: child1.a\n Sort Key: child1.a\n -> Append\n -> Seq Scan on public.child1\n Output: child1.a\n -> Seq Scan on public.child2\n Output: child2.a\n(8 rows)\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 31 Jul 2019 11:29:56 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On 2019-Jul-30, Tom Lane wrote:\n\n> I wrote:\n> > This may be arguing for a change in ruleutils' existing behavior,\n> > not sure. But when dealing with traditional-style inheritance,\n> > I've always thought that Vars above the Append were referring to\n> > the parent rel in its capacity as the parent, not in its capacity\n> > as the first child. With new-style partitioning drawing a clear\n> > distinction between the parent and all its children, it's easier\n> > to understand the difference.\n> \n> OK, so experimenting, I see that it is a change: [...]\n\n> The portion of this below the Append is fine, but I argue that\n> the Vars above the Append should say \"part\", not \"part_p1\".\n> In that way they'd look the same regardless of which partitions\n> have been pruned or not.\n\nSo is anyone working on a patch to use this approach?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 12 Sep 2019 11:11:53 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> So is anyone working on a patch to use this approach?\n\nIt's on my to-do list, but I'm not sure how soon I'll get to it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Sep 2019 10:24:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On Thu, Sep 12, 2019 at 10:24:13AM -0400, Tom Lane wrote:\n> It's on my to-do list, but I'm not sure how soon I'll get to it.\n\nSeems like it is better to mark this CF entry as returned with\nfeedback then.\n--\nMichael",
"msg_date": "Sun, 1 Dec 2019 10:58:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Sep 12, 2019 at 10:24:13AM -0400, Tom Lane wrote:\n>> It's on my to-do list, but I'm not sure how soon I'll get to it.\n\n> Seems like it is better to mark this CF entry as returned with\n> feedback then.\n\nFair enough, but I did actually spend some time on the issue today.\nJust to cross-link this thread to the latest, see\n\nhttps://www.postgresql.org/message-id/12424.1575168015%40sss.pgh.pa.us\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 30 Nov 2019 21:43:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "On Sat, Nov 30, 2019 at 09:43:35PM -0500, Tom Lane wrote:\n> Fair enough, but I did actually spend some time on the issue today.\n> Just to cross-link this thread to the latest, see\n> \n> https://www.postgresql.org/message-id/12424.1575168015%40sss.pgh.pa.us\n\nThanks, just saw the update.\n--\nMichael",
"msg_date": "Sun, 1 Dec 2019 11:49:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jul-30, Tom Lane wrote:\n>> The portion of this below the Append is fine, but I argue that\n>> the Vars above the Append should say \"part\", not \"part_p1\".\n>> In that way they'd look the same regardless of which partitions\n>> have been pruned or not.\n\n> So is anyone working on a patch to use this approach?\n\nI spent some more time on this today, and successfully converted\nruleutils.c back to dealing only in Plan trees not PlanState trees.\nThe hard part of this turned out to be that in a Plan tree, it's\nnot so easy to identify subplans and initplans; the links that\nsimplify that in the existing ruleutils code get set up while\ninitializing the PlanState tree. I had to do two things to make\nit work:\n\n* To cope with CTEScans and initPlans, ruleutils now needs access to the\nPlannedStmt->subplans list, which can be set up along with the rtable.\nI thought adding that as a separate argument wasn't very forward-looking,\nso instead I changed the API of that function to pass the PlannedStmt.\n\n* To cope with SubPlans, I changed the definition of the \"ancestors\"\nlist so that it includes SubPlans along with regular Plan nodes.\nThis is slightly squirrely, because SubPlan isn't a subclass of Plan,\nbut it seems to work well. Notably, we don't have to search for\nrelevant SubPlan nodes in find_param_referent(). We'll just arrive\nat them naturally while chasing up the ancestors list.\n\nI don't think this is committable as it stands, because there are\na couple of undesirable changes in partition_prune.out. Those test\ncases are explaining queries in which the first child of a MergeAppend\ngets pruned during executor start. 
That results in ExplainPreScanNode\nnot seeing that node, so it deems the associated RTE to be unreferenced,\nso select_rtable_names_for_explain doesn't assign that RTE an alias.\nBut then when we drill down for a referent for a Var above the\nMergeAppend, we go to the first child of the MergeAppend (not the\nMergeAppendState), ie exactly the RTE that was deemed unreferenced.\nSo we end up with no table alias to print.\n\nThat's not ruleutils.c's fault obviously: it did what it was told.\nAnd it ties right into the question that's at the heart of this\ndiscussion, ie what do we want to print for such Vars? So I think\nthis patch is all right as a component of the full fix, but now we\nhave to move on to the main event. I have some ideas about what\nto do next, but they're not fully baked yet.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 03 Dec 2019 19:47:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "I wrote:\n>> This may be arguing for a change in ruleutils' existing behavior,\n>> not sure. But when dealing with traditional-style inheritance,\n>> I've always thought that Vars above the Append were referring to\n>> the parent rel in its capacity as the parent, not in its capacity\n>> as the first child. With new-style partitioning drawing a clear\n>> distinction between the parent and all its children, it's easier\n>> to understand the difference.\n\n> OK, so experimenting, I see that it is a change: HEAD does\n\n> regression=# explain verbose select * from part order by a;\n> QUERY PLAN \n> ---------------------------------------------------------------------------------\n> Sort (cost=362.21..373.51 rows=4520 width=8)\n> Output: part_p1.a, part_p1.b\n> Sort Key: part_p1.a\n> -> Append (cost=0.00..87.80 rows=4520 width=8)\n> -> Seq Scan on public.part_p1 (cost=0.00..32.60 rows=2260 width=8)\n> Output: part_p1.a, part_p1.b\n> -> Seq Scan on public.part_p2_p1 (cost=0.00..32.60 rows=2260 width=8)\n> Output: part_p2_p1.a, part_p2_p1.b\n> (8 rows)\n\n> The portion of this below the Append is fine, but I argue that\n> the Vars above the Append should say \"part\", not \"part_p1\".\n> In that way they'd look the same regardless of which partitions\n> have been pruned or not.\n\nSo I've been thinking about how to make this actually happen.\nI do not think it's possible without adding more information\nto Plan trees. 
Which is not a show-stopper in itself --- there's\nalready various fields there that have no use except to support\nEXPLAIN --- but it'd behoove us to minimize the amount of work\nthe planner spends to generate such new info.\n\nI think it can be made to work with a design along these lines:\n\n* Add the planner's AppendRelInfo list to the finished PlannedStmt.\nWe would have no need for the translated_vars list, only for the\nrecently-added reverse-lookup array, so we could reduce the cost\nof copying plans by having setrefs.c zero out the translated_vars\nfields, much as it does for unnecessary fields of RTEs.\n\n* In Append and MergeAppend plan nodes, add a bitmapset field that\ncontains the relids of any inheritance parent rels formed by this\nappend operation. (It has to be a set, not a single relid, because\na partitioned join would form two appendrels at the same plan node.\nIn general, partitioned joins break a lot of the simpler ideas\nI'd had before this one...) I think this is probably just the relids\nof the path's parent RelOptInfo, so it's little or no extra cost to\ncalculate.\n\n* In ExplainPreScanNode, treat relids mentioned in such fields as\nreferenced by the query, so that they'll be assigned aliases by\nselect_rtable_names_for_explain. (Note that this will generally mean\nthat a partition root table gets its unmodified alias, and all child\nrels will have \"_N\" added, rather than the current situation where the\nfirst unpruned child gets the parent's unmodified alias. This seems\ngood to me from a consistency standpoint, although it'll mean another\nround of churn in the regression test results.)\n\n* When ruleutils has to resolve a Var, and it descends through an\nAppend or MergeAppend that has this field nonempty, remember the\nbitmapset of relevant relids as we continue recursing. 
Once we've\nfinally located a base Var, if the passed-down set of inheritance\nrelids isn't empty, then use the AppendRelInfo data to try to map\nthe base Var's varno/varattno back up to any one of these relids.\nIf successful, print the name of the mapped-to table and column\ninstead of the base Var's name.\n\nThis design will correctly print references to the \"same\" Var\ndifferently depending on where they appear in the plan tree, ie above\nor below the Append that forms the appendrel. I don't see any way we\ncan make that happen reliably without new plantree decoration --- in\nparticular, I don't think ruleutils can reverse-engineer which Appends\nform which appendrels without any help.\n\nAn interesting point is what to do if we see more than one such append\nnode as we descend. We should union the sets of relevant appendrel\nrelids, for sure, but now there is a possibility that more than one\nappendrel can be matched while chasing back up the AppendRelInfo data.\nI think that can only happen for an inheritance appendrel nested\ninside a UNION ALL appendrel, so the question becomes whether we'd\nrather report the inheritance root or whatever alias we're going to\nassign for UNION appendrels. Perhaps that choice should wait until\nwe've got some code to test these ideas with.\n\nI haven't tried to code this yet, but will go do so if there aren't\nobjections to this sketch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Dec 2019 10:30:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "I wrote:\n> I haven't tried to code this yet, but will go do so if there aren't\n> objections to this sketch.\n\nOK, so here's a finished set of patches for this issue.\n\n0001 is the same patch I posted on Tuesday; I kept it separate just\nbecause it seemed like a largely separable set of changes. (Note that\nthe undesirable regression test output changes are undone by 0002.)\n\n0002 implements the map-vars-back-to-the-inheritance parent change\nper my sketch. Notice that relation aliases and Var names change\nunderneath Appends/MergeAppends, but Vars above one are (mostly)\nprinted the same as before. On the whole I think this is a good\nset of test output changes, reflecting a more predictable approach\nto assigning aliases to inheritance children. But somebody else\nmight see it differently I suppose.\n\nFinally, 0003 is the remaining portion of David's patch to allow\ndeletion of all of an Append/MergeAppend's sub-plans during\nexecutor startup pruning.\n\nThoughts? I'd like to push this fairly soon, rather than waiting\nfor the next commitfest, because otherwise maintaining the\nregression test diffs is likely to be painful.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 05 Dec 2019 18:17:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
},
{
"msg_contents": "I wrote:\n> OK, so here's a finished set of patches for this issue.\n> 0001 is the same patch I posted on Tuesday; I kept it separate just\n> because it seemed like a largely separable set of changes. (Note that\n> the undesirable regression test output changes are undone by 0002.)\n> 0002 implements the map-vars-back-to-the-inheritance parent change\n> per my sketch. Notice that relation aliases and Var names change\n> underneath Appends/MergeAppends, but Vars above one are (mostly)\n> printed the same as before. On the whole I think this is a good\n> set of test output changes, reflecting a more predictable approach\n> to assigning aliases to inheritance children. But somebody else\n> might see it differently I suppose.\n> Finally, 0003 is the remaining portion of David's patch to allow\n> deletion of all of an Append/MergeAppend's sub-plans during\n> executor startup pruning.\n\nI pushed these, and the buildfarm immediately got a bad case of\nthe measles. All the members using force_parallel_mode = regress\nfail on the new regression test case added by 0003, with diffs\nlike this:\n\ndiff -U3 /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/expected/partition_prune.out /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/results/partition_prune.out\n--- /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/expected/partition_prune.out\tThu Dec 12 00:40:04 2019\n+++ /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/results/partition_prune.out\tThu Dec 12 00:45:44 2019\n@@ -3169,10 +3169,12 @@\n --------------------------------------------\n Limit (actual rows=0 loops=1)\n Output: ma_test.a, ma_test.b\n+ Worker 0: actual rows=0 loops=1\n -> Merge Append (actual rows=0 loops=1)\n Sort Key: ma_test.b\n+ Worker 0: actual rows=0 loops=1\n Subplans Removed: 3\n-(5 rows)\n+(7 rows)\n \n deallocate mt_q2;\n reset plan_cache_mode;\n\nThis looks to me like there's some other part of EXPLAIN that\nneeds to be updated for the possibility of 
zero child nodes, but\nI didn't find out just where in a few minutes of searching.\n\nAs a stopgap to get back to green buildfarm, I removed this\nspecific test case, but we need to look at it closer.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 11 Dec 2019 19:14:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Runtime pruning problem"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm somewhat unhappy in how much the no-fsm-for-small-rels exposed\ncomplexity that looks like it should be purely in freespacemap.c to\ncallers.\n\n\n extern Size GetRecordedFreeSpace(Relation rel, BlockNumber heapBlk);\n-extern BlockNumber GetPageWithFreeSpace(Relation rel, Size spaceNeeded);\n+extern BlockNumber GetPageWithFreeSpace(Relation rel, Size spaceNeeded,\n+ bool check_fsm_only);\n\nSo now freespace.c has an argument that says we should only check the\nfsm. That's confusing. And it's not explained to callers what that\nargument means, and when it should be set.\n\n\n@@ -176,20 +269,44 @@ RecordAndGetPageWithFreeSpace(Relation rel, BlockNumber oldPage,\n * Note that if the new spaceAvail value is higher than the old value stored\n * in the FSM, the space might not become visible to searchers until the next\n * FreeSpaceMapVacuum call, which updates the upper level pages.\n+ *\n+ * Callers have no need for a local map.\n */\n void\n-RecordPageWithFreeSpace(Relation rel, BlockNumber heapBlk, Size spaceAvail)\n+RecordPageWithFreeSpace(Relation rel, BlockNumber heapBlk,\n+ Size spaceAvail, BlockNumber nblocks)\n\nThere's no explanation as to what that \"nblocks\" argument is. One\nbasically has to search other callers to figure it out. It's not even\nclear to which fork it relates to. Nor that one can set it to\nInvalidBlockNumber if one doesn't have the relation size conveniently\nreachable. But it's not exposed to RecordAndGetPageWithFreeSpace(), for\na basically unexplained reason. There's a comment above\nfsm_allow_writes() - but that's file-local function that external\ncallers basically have need to know about.\n\nI can't figure out what \"Callers have no need for a local map.\" is\nsupposed to mean.\n\n\n+/*\n+ * Clear the local map. 
We must call this when we have found a block with\n+ * enough free space, when we extend the relation, or on transaction abort.\n+ */\n+void\n+FSMClearLocalMap(void)\n+{\n+ if (FSM_LOCAL_MAP_EXISTS)\n+ {\n+ fsm_local_map.nblocks = 0;\n+ memset(&fsm_local_map.map, FSM_LOCAL_NOT_AVAIL,\n+ sizeof(fsm_local_map.map));\n+ }\n+}\n+\n\nSo now there's a new function one needs to call after successfully using\nthe block returned by [RecordAnd]GetPageWithFreeSpace(). But it's not\nreferenced from those functions, so basically one has to just know that.\n\n\n+/* Only create the FSM if the heap has greater than this many blocks */\n+#define HEAP_FSM_CREATION_THRESHOLD 4\n\nHm, this seems to be tying freespace.c closer to heap than I think is\ngreat - think of new AMs like zheap, that also want to use it.\n\n\nI think this is mostly fallout about the prime issue I'm unhappy\nabout. There's now some global variable in freespacemap.c that code\nusing freespace.c has to know about and maintain.\n\n\n+static void\n+fsm_local_set(Relation rel, BlockNumber cur_nblocks)\n+{\n+ BlockNumber blkno,\n+ cached_target_block;\n+\n+ /* The local map must not be set already. */\n+ Assert(!FSM_LOCAL_MAP_EXISTS);\n+\n+ /*\n+ * Starting at the current last block in the relation and working\n+ * backwards, mark alternating blocks as available.\n+ */\n+ blkno = cur_nblocks - 1;\n\nThat comment explains very little about why this is done, and why it's a\ngood idea.\n\n+/* Status codes for the local map. */\n+\n+/* Either already tried, or beyond the end of the relation */\n+#define FSM_LOCAL_NOT_AVAIL 0x00\n+\n+/* Available to try */\n+#define FSM_LOCAL_AVAIL 0x01\n\n+/* Local map of block numbers for small heaps with no FSM. 
*/\n+typedef struct\n+{\n+ BlockNumber nblocks;\n+ uint8 map[HEAP_FSM_CREATION_THRESHOLD];\n+} FSMLocalMap;\n+\n\nHm, given realistic HEAP_FSM_CREATION_THRESHOLD, and the fact that we\nreally only need one bit per relation, it seems like map should really\nbe just a uint32 with one bit per page.\n\n\n+static bool\n+fsm_allow_writes(Relation rel, BlockNumber heapblk,\n+ BlockNumber nblocks, BlockNumber *get_nblocks)\n\n+ if (rel->rd_rel->relpages != InvalidBlockNumber &&\n+ rel->rd_rel->relpages > HEAP_FSM_CREATION_THRESHOLD)\n+ return true;\n+ else\n+ skip_get_nblocks = false;\n+ }\n\nThis badly needs a comment explaining that these values can be basically\narbitrarily out of date. Explaining why it's correct to rely on them\nanyway (Presumably because creating an fsm unnecessarily is ok, it just\navoids using this optimization).\n\n\n+static bool\n+fsm_allow_writes(Relation rel, BlockNumber heapblk,\n+ BlockNumber nblocks, BlockNumber *get_nblocks)\n\n+ RelationOpenSmgr(rel);\n+ if (smgrexists(rel->rd_smgr, FSM_FORKNUM))\n+ return true;\n\nIsn't this like really expensive? mdexists() closes the relation and\nreopens it from scratch. Shouldn't we at the very least check\nsmgr_fsm_nblocks beforehand, so this is only done once?\n\n\nI'm kinda thinking that this is the wrong architecture.\n\n1) Unless I miss something, this will trigger a\n RelationGetNumberOfBlocks(), which in turn ends up doing an lseek(),\n once for each page we add to the relation. That strikes me as pretty\n suboptimal. I think it's even worse if multiple backends do the\n insertion, because the RelationGetTargetBlock(relation) logic will\n succeed less often.\n\n2) We'll repeatedly re-encounter the first few pages, because we clear\n the local map after each successful RelationGetBufferForTuple().\n\n3) The global variable based approach means that we cannot easily do\n better. 
Even if we had a cache that lives across\n RelationGetBufferForTuple() calls.\n\n4) fsm_allow_writes() does a smgrexists() for the FSM in some\n cases. That's pretty darn expensive if it's already open.\n\n\nI think if we want to keep something like this feature, we'd need to\ncache the relation size in a variable similar to how we cache the FSM\nsize (like SMgrRelationData.smgr_fsm_nblocks) *and* stash the bitmap of\npages that we think are used/free as a bitmap somewhere below the\nrelcache. If we cleared that variable at truncations, I think we should\nbe able to make that work reasonably well?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Apr 2019 11:04:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I'm kinda thinking that this is the wrong architecture.\n\nThe bits of that patch that I've looked at seemed like a mess\nto me too. AFAICT, it's trying to use a single global \"map\"\nfor all relations (strike 1) without any clear tracking of\nwhich relation the map currently describes (strike 2).\nThis can only work at all if an inaccurate map is very fail-soft,\nwhich I'm not convinced it is, and in any case it seems pretty\ninefficient for workloads that insert into multiple tables.\n\nI'd have expected any such map to be per-table and be stored in\nthe relcache.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Apr 2019 14:31:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-16 14:31:25 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I'm kinda thinking that this is the wrong architecture.\n> \n> The bits of that patch that I've looked at seemed like a mess\n> to me too. AFAICT, it's trying to use a single global \"map\"\n> for all relations (strike 1) without any clear tracking of\n> which relation the map currently describes (strike 2).\n\nWell, strike 2 basically is not a problem right now, because the map is\ncleared whenever a search for a target buffer succeeded. But that has\npretty obvious efficiency issues...\n\n\n> This can only work at all if an inaccurate map is very fail-soft,\n> which I'm not convinced it is\n\nI think it better needs to be fail-soft independent of this the no-fsm\npatch. Because the fsm is not WAL logged etc, it's pretty easy to get a\npretty corrupted version. And we better deal with that.\n\n\n> and in any case it seems pretty inefficient for workloads that insert\n> into multiple tables.\n\nAs is, it's inefficient for insertions into a *single* relation. The\nRelationGetTargetBlock() makes it not crazily expensive, but it's still\nplenty expensive.\n\n\n> I'd have expected any such map to be per-table and be stored in\n> the relcache.\n\nSame.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Apr 2019 12:16:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-16 14:31:25 -0400, Tom Lane wrote:\n>> This can only work at all if an inaccurate map is very fail-soft,\n>> which I'm not convinced it is\n\n> I think it better needs to be fail-soft independent of the no-fsm\n> patch. Because the fsm is not WAL logged etc, it's pretty easy to get a\n> pretty corrupted version. And we better deal with that.\n\nYes, FSM has to be fail-soft from a *correctness* viewpoint; but it's\nnot fail-soft from a *performance* viewpoint. It can take awhile for\nus to self-heal a busted map. And this fake map spends almost all its\ntime busted and in need of (expensive) corrections. I think this may\nactually be the same performance complaint you're making, in different\nwords.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Apr 2019 15:24:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 2:04 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> I'm somewhat unhappy in how much the no-fsm-for-small-rels exposed\n> complexity that looks like it should be purely in freespacemap.c to\n> callers.\n>\n>\n> extern Size GetRecordedFreeSpace(Relation rel, BlockNumber heapBlk);\n> -extern BlockNumber GetPageWithFreeSpace(Relation rel, Size spaceNeeded);\n> +extern BlockNumber GetPageWithFreeSpace(Relation rel, Size spaceNeeded,\n> + bool check_fsm_only);\n>\n> So now freespace.c has an argument that says we should only check the\n> fsm. That's confusing. And it's not explained to callers what that\n> argument means, and when it should be set.\n\nWhen first looking for free space, it's \"false\": Within\nGetPageWithFreeSpace(), we call RelationGetNumberOfBlocks() if the FSM\nreturns invalid.\n\nIf we have to extend, after acquiring the lock to extend the relation,\nwe call GetPageWithFreeSpace() again to see if another backend already\nextended while waiting on the lock. If there's no FSM, the thinking\nis, it's not worth it to get the number of blocks again.\n\n> @@ -176,20 +269,44 @@ RecordAndGetPageWithFreeSpace(Relation rel, BlockNumber oldPage,\n> * Note that if the new spaceAvail value is higher than the old value stored\n> * in the FSM, the space might not become visible to searchers until the next\n> * FreeSpaceMapVacuum call, which updates the upper level pages.\n> + *\n> + * Callers have no need for a local map.\n> */\n> void\n> -RecordPageWithFreeSpace(Relation rel, BlockNumber heapBlk, Size spaceAvail)\n> +RecordPageWithFreeSpace(Relation rel, BlockNumber heapBlk,\n> + Size spaceAvail, BlockNumber nblocks)\n>\n> There's no explanation as to what that \"nblocks\" argument is. One\n> basically has to search other callers to figure it out. It's not even\n> clear which fork it relates to. Nor that one can set it to\n> InvalidBlockNumber if one doesn't have the relation size conveniently\n> reachable. But it's not exposed to RecordAndGetPageWithFreeSpace(), for\n> a basically unexplained reason. There's a comment above\n> fsm_allow_writes() - but that's a file-local function that external\n> callers basically have need to know about.\n\nOkay.\n\n> I can't figure out what \"Callers have no need for a local map.\" is\n> supposed to mean.\n\nIt was meant to contrast with [RecordAnd]GetPageWithFreeSpace(), but I\nsee how it's confusing.\n\n> +/*\n> + * Clear the local map. We must call this when we have found a block with\n> + * enough free space, when we extend the relation, or on transaction abort.\n> + */\n> +void\n> +FSMClearLocalMap(void)\n> +{\n> + if (FSM_LOCAL_MAP_EXISTS)\n> + {\n> + fsm_local_map.nblocks = 0;\n> + memset(&fsm_local_map.map, FSM_LOCAL_NOT_AVAIL,\n> + sizeof(fsm_local_map.map));\n> + }\n> +}\n> +\n>\n> So now there's a new function one needs to call after successfully using\n> the block returned by [RecordAnd]GetPageWithFreeSpace(). But it's not\n> referenced from those functions, so basically one has to just know that.\n\nRight.\n\n> +/* Only create the FSM if the heap has greater than this many blocks */\n> +#define HEAP_FSM_CREATION_THRESHOLD 4\n>\n> Hm, this seems to be tying freespace.c closer to heap than I think is\n> great - think of new AMs like zheap, that also want to use it.\n\nAmit and I kept zheap in mind when working on the patch. You'd have to\nwork around the metapage, but everything else should work the same.\n\n> I think this is mostly fallout about the prime issue I'm unhappy\n> about. There's now some global variable in freespacemap.c that code\n> using freespace.c has to know about and maintain.\n>\n>\n> +static void\n> +fsm_local_set(Relation rel, BlockNumber cur_nblocks)\n> +{\n> + BlockNumber blkno,\n> + cached_target_block;\n> +\n> + /* The local map must not be set already. */\n> + Assert(!FSM_LOCAL_MAP_EXISTS);\n> +\n> + /*\n> + * Starting at the current last block in the relation and working\n> + * backwards, mark alternating blocks as available.\n> + */\n> + blkno = cur_nblocks - 1;\n>\n> That comment explains very little about why this is done, and why it's a\n> good idea.\n\nShort answer: performance -- it's too expensive to try every block.\nThe explanation is in storage/freespace/README -- maybe that should be\nreferenced here?\n\n> +/* Status codes for the local map. */\n> +\n> +/* Either already tried, or beyond the end of the relation */\n> +#define FSM_LOCAL_NOT_AVAIL 0x00\n> +\n> +/* Available to try */\n> +#define FSM_LOCAL_AVAIL 0x01\n>\n> +/* Local map of block numbers for small heaps with no FSM. */\n> +typedef struct\n> +{\n> + BlockNumber nblocks;\n> + uint8 map[HEAP_FSM_CREATION_THRESHOLD];\n> +} FSMLocalMap;\n> +\n>\n> Hm, given realistic HEAP_FSM_CREATION_THRESHOLD, and the fact that we\n> really only need one bit per relation, it seems like map should really\n> be just a uint32 with one bit per page.\n\nI fail to see the advantage of that.\n\n> +static bool\n> +fsm_allow_writes(Relation rel, BlockNumber heapblk,\n> + BlockNumber nblocks, BlockNumber *get_nblocks)\n>\n> + if (rel->rd_rel->relpages != InvalidBlockNumber &&\n> + rel->rd_rel->relpages > HEAP_FSM_CREATION_THRESHOLD)\n> + return true;\n> + else\n> + skip_get_nblocks = false;\n> + }\n>\n> This badly needs a comment explaining that these values can be basically\n> arbitrarily out of date. Explaining why it's correct to rely on them\n> anyway (Presumably because creating an fsm unnecessarily is ok, it just\n> avoids using this optimization).\n\nAgreed, and yes, your presumption is what I had in mind.\n\n> I'm kinda thinking that this is the wrong architecture.\n>\n> 1) Unless I miss something, this will trigger a\n> RelationGetNumberOfBlocks(), which in turn ends up doing an lseek(),\n> once for each page we add to the relation.\n\nThat was true previously anyway if the FSM returned InvalidBlockNumber.\n\n> That strikes me as pretty\n> suboptimal. I think it's even worse if multiple backends do the\n> insertion, because the RelationGetTargetBlock(relation) logic will\n> succeed less often.\n\nCould you explain why it would succeed less often?\n\n> 2) We'll repeatedly re-encounter the first few pages, because we clear\n> the local map after each successful RelationGetBufferForTuple().\n\nNot exactly sure what you mean? We only set the map if\nRelationGetTargetBlock() returns InvalidBlockNumber, or if it returned\na valid block, and inserting there already failed. So, not terribly\noften, I imagine.\n\n> 3) The global variable based approach means that we cannot easily do\n> better. Even if we had a cache that lives across\n> RelationGetBufferForTuple() calls.\n>\n> 4) fsm_allow_writes() does a smgrexists() for the FSM in some\n> cases. That's pretty darn expensive if it's already open.\n\n(from earlier)\n> Isn't this like really expensive? mdexists() closes the relations and\n> reopens it from scratch. Shouldn't we at the very least check\n> smgr_fsm_nblocks beforehand, so this is only done once?\n\nHmm, I can look into that.\n\n\nOn Wed, Apr 17, 2019 at 3:16 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-04-16 14:31:25 -0400, Tom Lane wrote:\n <snip>\n> > and in any case it seems pretty inefficient for workloads that insert\n> > into multiple tables.\n>\n> As is, it's inefficient for insertions into a *single* relation. The\n> RelationGetTargetBlock() makes it not crazily expensive, but it's still\n> plenty expensive.\n\nPerformance testing didn't reveal any performance regression. If you\nhave a realistic benchmark in mind that stresses this logic more\nheavily, I'd be happy to be convinced otherwise.\n\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 17 Apr 2019 13:09:05 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 10:39 AM John Naylor\n<john.naylor@2ndquadrant.com> wrote:\n> On Wed, Apr 17, 2019 at 2:04 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> > +/* Only create the FSM if the heap has greater than this many blocks */\n> > +#define HEAP_FSM_CREATION_THRESHOLD 4\n> >\n> > Hm, this seems to be tying freespace.c closer to heap than I think is\n> > great - think of new AMs like zheap, that also want to use it.\n>\n> Amit and I kept zheap in mind when working on the patch. You'd have to\n> work around the metapage, but everything else should work the same.\n>\n\nI think we also need to take care of TPD pages along with meta page.\nThis might be less effective if we encounter TPD pages as well in\nsmall relation which shouldn't be a common scenario, but it won't\nhurt, otherwise. Those pages are anyway temporary and will be\nremoved.\n\nBTW there is one other thing which is tied to heap in FSM for which we\nmight want some handling.\n#define MaxFSMRequestSize MaxHeapTupleSize\n\nIn general, it will be good if we find some pluggable way for both the\ndefines, otherwise, also, it shouldn't cause a big problem.\n\n>\n> > I'm kinda thinking that this is the wrong architecture.\n> >\n> > 1) Unless I miss something, this will trigger a\n> > RelationGetNumberOfBlocks(), which in turn ends up doing an lseek(),\n> > once for each page we add to the relation.\n>\n> That was true previously anyway if the FSM returned InvalidBlockNumber.\n>\n> > That strikes me as pretty\n> > suboptimal. I think it's even worse if multiple backends do the\n> > insertion, because the RelationGetTargetBlock(relation) logic will\n> > succeed less often.\n>\n> Could you explain why it would succeed less often?\n>\n> > 2) We'll repeatedly re-encounter the first few pages, because we clear\n> > the local map after each successful RelationGetBufferForTuple().\n>\n> Not exactly sure what you mean? We only set the map if\n> RelationGetTargetBlock() returns InvalidBlockNumber, or if it returned\n> a valid block, and inserting there already failed. So, not terribly\n> often, I imagine.\n>\n> > 3) The global variable based approach means that we cannot easily do\n> > better. Even if we had a cache that lives across\n> > RelationGetBufferForTuple() calls.\n> >\n> > 4) fsm_allow_writes() does a smgrexists() for the FSM in some\n> > cases. That's pretty darn expensive if it's already open.\n>\n> (from earlier)\n> > Isn't this like really expensive? mdexists() closes the relations and\n> > reopens it from scratch. Shouldn't we at the very least check\n> > smgr_fsm_nblocks beforehand, so this is only done once?\n>\n> Hmm, I can look into that.\n>\n>\n> I think if we want to keep something like this feature, we'd need to\n> cache the relation size in a variable similar to how we cache the FSM\n> size (like SMgrRelationData.smgr_fsm_nblocks)\n>\n\nmakes sense. I think we should do this unless we face any problem with it.\n\n> *and* stash the bitmap of\n> pages that we think are used/free as a bitmap somewhere below the\n> relcache.\n\nI think maintaining at relcache level will be tricky when there are\ninsertions and deletions happening in the small relation. We have\nconsidered such a case during development wherein we don't want the\nFSM to be created if there are insertions and deletions in small\nrelation. The current mechanism addresses both this and normal case\nwhere there are not many deletions. Sure there is some overhead of\nbuilding the map again and rechecking each page. The first one is a\nmemory operation and takes a few cycles and for the second we\noptimized by checking the pages alternatively which means we won't\ncheck more than two pages at-a-time. This cost is paid by not\nchecking FSM and it could be somewhat better in some cases [1].\n\n\n> If we cleared that variable at truncations, I think we should\n> be able to make that work reasonably well?\n\nNot only that, I think it needs to be cleared whenever we create the\nFSM as well which could be tricky as it can be created by the vacuum.\n\nOTOH, if we want to extend it later for whatever reason to a relation\nlevel cache, it shouldn't be that difficult as the implementation is\nmostly contained in freespace.c (fsm* functions) and I think the\nrelation is accessible in most places. We might need to rip out some\ncalls to clearlocalmap.\n\n>\n> On Wed, Apr 17, 2019 at 3:16 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2019-04-16 14:31:25 -0400, Tom Lane wrote:\n> <snip>\n> > > and in any case it seems pretty inefficient for workloads that insert\n> > > into multiple tables.\n> >\n> > As is, it's inefficient for insertions into a *single* relation. The\n> > RelationGetTargetBlock() makes it not crazily expensive, but it's still\n> > plenty expensive.\n>\n\nDuring development, we were also worried about the performance\nregression that can happen due to this patch and we have done many\nrounds of performance tests where such a cache could be accessed\npretty frequently. In the original version, we do see a small\nregression as a result of which we came up with an alternate strategy\nof not checking every page. If you want, I can share the links of\nemails for performance testing.\n\n> Performance testing didn't reveal any performance regression. If you\n> have a realistic benchmark in mind that stresses this logic more\n> heavily, I'd be happy to be convinced otherwise.\n>\n\nIn fact, we have seen some wins. See the performance testing done [1]\nwith various approaches during development.\n\nAdded this as an open item.\n\n[1] - https://www.postgresql.org/message-id/CAD__Oui5%2BqiVxJSJqiXq2jA60QV8PKxrZA8_W%2BcCxROGAFJMWA%40mail.gmail.com\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Apr 2019 15:49:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-17 13:09:05 +0800, John Naylor wrote:\n> On Wed, Apr 17, 2019 at 2:04 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > I'm somewhat unhappy in how much the no-fsm-for-small-rels exposed\n> > complexity that looks like it should be purely in freespacemap.c to\n> > callers.\n> >\n> >\n> > extern Size GetRecordedFreeSpace(Relation rel, BlockNumber heapBlk);\n> > -extern BlockNumber GetPageWithFreeSpace(Relation rel, Size spaceNeeded);\n> > +extern BlockNumber GetPageWithFreeSpace(Relation rel, Size spaceNeeded,\n> > + bool check_fsm_only);\n> >\n> > So now freespace.c has an argument that says we should only check the\n> > fsm. That's confusing. And it's not explained to callers what that\n> > argument means, and when it should be set.\n> \n> When first looking for free space, it's \"false\": Within\n> GetPageWithFreeSpace(), we call RelationGetNumberOfBlocks() if the FSM\n> returns invalid.\n> \n> If we have to extend, after acquiring the lock to extend the relation,\n> we call GetPageWithFreeSpace() again to see if another backend already\n> extended while waiting on the lock. If there's no FSM, the thinking\n> is, it's not worth it to get the number of blocks again.\n\nI can get that (after reading through the code, grepping through all\ncallers, etc), but it means that every callsite needs to understand\nthat. That's making the API more complicated than needed, especially\nwhen we're going to grow more callers.\n\n\n\n> > +/* Only create the FSM if the heap has greater than this many blocks */\n> > +#define HEAP_FSM_CREATION_THRESHOLD 4\n> >\n> > Hm, this seems to be tying freespace.c closer to heap than I think is\n> > great - think of new AMs like zheap, that also want to use it.\n> \n> Amit and I kept zheap in mind when working on the patch. You'd have to\n> work around the metapage, but everything else should work the same.\n\nMy complaint is basically that it's apparently AM specific (we don't use\nthe logic for e.g. indexes), and that the name suggests it's specific to\nheap. And it's not controllable by the outside, which means it can't be\ntuned for the specific usecase.\n\n\n> > I think this is mostly fallout about the prime issue I'm unhappy\n> > about. There's now some global variable in freespacemap.c that code\n> > using freespace.c has to know about and maintain.\n> >\n> >\n> > +static void\n> > +fsm_local_set(Relation rel, BlockNumber cur_nblocks)\n> > +{\n> > + BlockNumber blkno,\n> > + cached_target_block;\n> > +\n> > + /* The local map must not be set already. */\n> > + Assert(!FSM_LOCAL_MAP_EXISTS);\n> > +\n> > + /*\n> > + * Starting at the current last block in the relation and working\n> > + * backwards, mark alternating blocks as available.\n> > + */\n> > + blkno = cur_nblocks - 1;\n> >\n> > That comment explains very little about why this is done, and why it's a\n> > good idea.\n> \n> Short answer: performance -- it's too expensive to try every block.\n> The explanation is in storage/freespace/README -- maybe that should be\n> referenced here?\n\nYes. Even just adding \"for performance reasons, only try every second\nblock. See also the README\" would be good.\n\nBut I'll note that the need to do this - and potentially waste space -\ncounters your claim that there's no performance considerations with this\npatch.\n\n\n> > +/* Status codes for the local map. */\n> > +\n> > +/* Either already tried, or beyond the end of the relation */\n> > +#define FSM_LOCAL_NOT_AVAIL 0x00\n> > +\n> > +/* Available to try */\n> > +#define FSM_LOCAL_AVAIL 0x01\n> >\n> > +/* Local map of block numbers for small heaps with no FSM. */\n> > +typedef struct\n> > +{\n> > + BlockNumber nblocks;\n> > + uint8 map[HEAP_FSM_CREATION_THRESHOLD];\n> > +} FSMLocalMap;\n> > +\n> >\n> > Hm, given realistic HEAP_FSM_CREATION_THRESHOLD, and the fact that we\n> > really only need one bit per relation, it seems like map should really\n> > be just a uint32 with one bit per page.\n> \n> I fail to see the advantage of that.\n\nIt'd allow different AMs to have different numbers of dont-create-fsm\nthresholds without needing additional memory (up to 32 blocks).\n\n\n> > I'm kinda thinking that this is the wrong architecture.\n> >\n> > 1) Unless I miss something, this will trigger a\n> > RelationGetNumberOfBlocks(), which in turn ends up doing an lseek(),\n> > once for each page we add to the relation.\n> \n> That was true previously anyway if the FSM returned InvalidBlockNumber.\n\nTrue. That was already pretty annoying though.\n\n\n> > That strikes me as pretty\n> > suboptimal. I think it's even worse if multiple backends do the\n> > insertion, because the RelationGetTargetBlock(relation) logic will\n> > succeed less often.\n> \n> Could you explain why it would succeed less often?\n\nTwo aspects: 1) If more backends access a table, there'll be a higher\nchance the target page is full 2) There's more backends that don't have\na target page.\n\n\n> > 2) We'll repeatedly re-encounter the first few pages, because we clear\n> > the local map after each successful RelationGetBufferForTuple().\n> \n> Not exactly sure what you mean? We only set the map if\n> RelationGetTargetBlock() returns InvalidBlockNumber, or if it returned\n> a valid block, and inserting there already failed. So, not terribly\n> often, I imagine.\n\nIt's pretty common to have small tables that are modified by a number of\nbackends. A typical case is tables that implement locks for external\nprocesses and such.\n\n\n> On Wed, Apr 17, 2019 at 3:16 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2019-04-16 14:31:25 -0400, Tom Lane wrote:\n> <snip>\n> > > and in any case it seems pretty inefficient for workloads that insert\n> > > into multiple tables.\n> >\n> > As is, it's inefficient for insertions into a *single* relation. The\n> > RelationGetTargetBlock() makes it not crazily expensive, but it's still\n> > plenty expensive.\n> \n> Performance testing didn't reveal any performance regression. If you\n> have a realistic benchmark in mind that stresses this logic more\n> heavily, I'd be happy to be convinced otherwise.\n\nWell, try a few hundred relations on nfs (where stat is much more\nexpensive). Or just pgbench a concurrent workload with a few tables with\none live row each, updated by backends (to simulate lock tables and\nsuch).\n\nBut also, my concern here is to a significant degree architectural,\nrather than already measurable performance regressions. We ought to work\ntowards eliminating unnecessary syscalls, not the opposite.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Apr 2019 08:59:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-17 15:49:29 +0530, Amit Kapila wrote:\n> > *and* stash the bitmap of\n> > pages that we think are used/free as a bitmap somewhere below the\n> > relcache.\n> \n> I think maintaining at relcache level will be tricky when there are\n> insertions and deletions happening in the small relation. We have\n> considered such a case during development wherein we don't want the\n> FSM to be created if there are insertions and deletions in small\n> relation. The current mechanism addresses both this and normal case\n> where there are not many deletions. Sure there is some overhead of\n> building the map again and rechecking each page. The first one is a\n> memory operation and takes a few cycles\n\nYea, I think creating / resetting the map is basically free.\n\nI'm not sure I buy the concurrency issue - ISTM it'd be perfectly\nreasonable to cache the local map (in the relcache) and use it for local\nFSM queries, and rebuild it from scratch once no space is found. That'd\navoid a lot of repeated checking of relation size for small, but\ncommonly changed relations. Add a pre-check of smgr_fsm_nblocks (if >\n0, there has to be an fsm), and there should be fewer syscalls.\n\n\n> and for the second we optimized by checking the pages alternatively\n> which means we won't check more than two pages at-a-time. This cost\n> is paid by not checking FSM and it could be somewhat better in some\n> cases [1].\n\nWell, it's also paid by potentially higher bloat, because the\nintermediate pages aren't tested.\n\n\n> > If we cleared that variable at truncations, I think we should\n> > be able to make that work reasonably well?\n> \n> Not only that, I think it needs to be cleared whenever we create the\n> FSM as well which could be tricky as it can be created by the vacuum.\n\nISTM normal invalidation logic should just take care of that kind of thing.\n\n\n> OTOH, if we want to extend it later for whatever reason to a relation\n> level cache, it shouldn't be that difficult as the implementation is\n> mostly contained in freespace.c (fsm* functions) and I think the\n> relation is accessible in most places. We might need to rip out some\n> calls to clearlocalmap.\n\nBut it really isn't contained to freespace.c. That's my primary\nconcern. You added new parameters (undocumented ones!),\nFSMClearLocalMap() needs to be called by callers and xlog, etc.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Apr 2019 09:16:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-17 15:49:29 +0530, Amit Kapila wrote:\n>> OTOH, if we want to extend it later for whatever reason to a relation\n>> level cache, it shouldn't be that difficult as the implementation is\n>> mostly contained in freespace.c (fsm* functions) and I think the\n>> relation is accessible in most places. We might need to rip out some\n>> calls to clearlocalmap.\n\n> But it really isn't contained to freespace.c. That's my primary\n> concern. You added new parameters (undocumented ones!),\n> FSMClearLocalMap() needs to be called by callers and xlog, etc.\n\nGiven where we are in the release cycle, and the major architectural\nconcerns that have been raised about this patch, should we just\nrevert it and try again in v13, rather than trying to fix it under\ntime pressure? It's not like there's not anything else on our\nplates to fix before beta.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Apr 2019 12:20:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 9:46 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-04-17 15:49:29 +0530, Amit Kapila wrote:\n> > > *and* stash the bitmap of\n> > > pages that we think are used/free as a bitmap somewhere below the\n> > > relcache.\n> >\n> > I think maintaining at relcache level will be tricky when there are\n> > insertions and deletions happening in the small relation. We have\n> > considered such a case during development wherein we don't want the\n> > FSM to be created if there are insertions and deletions in small\n> > relation. The current mechanism addresses both this and normal case\n> > where there are not many deletions. Sure there is some overhead of\n> > building the map again and rechecking each page. The first one is a\n> > memory operation and takes a few cycles\n>\n> Yea, I think creating / resetting the map is basically free.\n>\n> I'm not sure I buy the concurrency issue - ISTM it'd be perfectly\n> reasonable to cache the local map (in the relcache) and use it for local\n> FSM queries, and rebuild it from scratch once no space is found. That'd\n> avoid a lot of repeated checking of relation size for small, but\n> commonly changed relations.\n>\n\nOkay, so you mean to say that we need to perform additional system\ncall (to get a number of blocks) only when no space is found in the\nexisting set of pages? I think that is a fair point, but can't we\nachieve that by updating relpages in relation after a call to\nRelationGetNumberOfBlocks?\n\n> Add a pre-check of smgr_fsm_nblocks (if >\n> 0, there has to be an fsm), and there should be fewer syscalls.\n>\n\nYes, that check is a good one and I see that we already do this check\nin fsm code before calling smgrexists.\n\n>\n> > and for the second we optimized by checking the pages alternatively\n> > which means we won't check more than two pages at-a-time. This cost\n> > is paid by not checking FSM and it could be somewhat better in some\n> > cases [1].\n>\n> Well, it's also paid by potentially higher bloat, because the\n> intermediate pages aren't tested.\n>\n>\n> > > If we cleared that variable at truncations, I think we should\n> > > be able to make that work reasonably well?\n> >\n> > Not only that, I think it needs to be cleared whenever we create the\n> > FSM as well which could be tricky as it can be created by the vacuum.\n>\n> ISTM normal invalidation logic should just take care of that kind of thing.\n>\n\nDo you mean to say that we don't need to add any new invalidation call\nand the existing invalidation calls will automatically take care of\nsame?\n\n>\n> > OTOH, if we want to extend it later for whatever reason to a relation\n> > level cache, it shouldn't be that difficult as the implementation is\n> > mostly contained in freespace.c (fsm* functions) and I think the\n> > relation is accessible in most places. We might need to rip out some\n> > calls to clearlocalmap.\n>\n> But it really isn't contained to freespace.c. That's my primary\n> concern.\n\nOkay, I get that point. I think among that also the need to call\nFSMClearLocalMap seems to be your main worry which is fair, but OTOH,\nthe places where it should be called shouldn't be a ton.\n\n> You added new parameters (undocumented ones!),\n>\n\nI think this is mostly for compatibility with the old code. I agree\nthat is a wart, but without much input during development, it\ndoesn't seem advisable to change old behavior, that is why we have\nadded a new parameter to GetPageWithFreeSpace. However, if we want we\ncan remove that parameter or document it in a better way.\n\n> FSMClearLocalMap() needs to be called by callers and xlog, etc.\n>\n\nAgreed that this is an additional requirement, but we have documented\nthe cases atop of this function where it needs to be called. We might\nhave missed something, but we tried to cover all cases that we are\naware of. Can we make it more clear by adding the comments atop\nfreespace.c API where this map is used?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Apr 2019 12:16:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 9:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-04-17 15:49:29 +0530, Amit Kapila wrote:\n> >> OTOH, if we want to extend it later for whatever reason to a relation\n> >> level cache, it shouldn't be that difficult as the implementation is\n> >> mostly contained in freespace.c (fsm* functions) and I think the\n> >> relation is accessible in most places. We might need to rip out some\n> >> calls to clearlocalmap.\n>\n> > But it really isn't contained to freespace.c. That's my primary\n> > concern. You added new parameters (undocumented ones!),\n> > FSMClearLocalMap() needs to be called by callers and xlog, etc.\n>\n> Given where we are in the release cycle, and the major architectural\n> concerns that have been raised about this patch, should we just\n> revert it and try again in v13, rather than trying to fix it under\n> time pressure?\n>\n\nI respect and will follow whatever will be the consensus after\ndiscussion. However, I request you to wait for some time to let the\ndiscussion conclude. If we can't get to an\nagreement or one of John or me can't implement what is decided, then\nwe can anyway revert it.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Apr 2019 12:18:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Thu, Apr 18, 2019 at 2:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I respect and will follow whatever will be the consensus after\n> discussion. However, I request you to wait for some time to let the\n> discussion conclude. If we can't get to an\n> agreement or one of John or me can't implement what is decided, then\n> we can anyway revert it.\n\nAgreed. I suspect the most realistic way to address most of the\nobjections in a short amount of time would be to:\n\n1. rip out the local map\n2. restore hio.c to only checking the last block in the relation if\nthere is no FSM (and lower the threshold to reduce wasted space)\n3. reduce calls to smgrexists()\n\nThoughts?\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 18 Apr 2019 16:40:06 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-17 12:20:29 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-04-17 15:49:29 +0530, Amit Kapila wrote:\n> >> OTOH, if we want to extend it later for whatever reason to a relation\n> >> level cache, it shouldn't be that difficult as the implementation is\n> >> mostly contained in freespace.c (fsm* functions) and I think the\n> >> relation is accessible in most places. We might need to rip out some\n> >> calls to clearlocalmap.\n> \n> > But it really isn't contained to freespace.c. That's my primary\n> > concern. You added new parameters (undocumented ones!),\n> > FSMClearLocalMap() needs to be called by callers and xlog, etc.\n> \n> Given where we are in the release cycle, and the major architectural\n> concerns that have been raised about this patch, should we just\n> revert it and try again in v13, rather than trying to fix it under\n> time pressure? It's not like there's not anything else on our\n> plates to fix before beta.\n\nHm. I'm of split mind here:\n\nIt's a nice improvement, and the fixes probably wouldn't be that\nhard. And we could have piped up a bit earlier about these concerns (I\nonly noticed this when rebasing zheap onto the newest version of\npostgres).\n\nBut as you it's also late, and there's other stuff to do. Although I\nthink neither Amit nor John is heavily involved in any...\n\nMy compromise suggestion would be to try to give John and Amit ~2 weeks\nto come up with a cleanup proposal, and then decide whether to 1) revert\n2) apply the new patch, 3) decide to live with the warts for 12, and\napply the patch in 13. As we would already have a patch, 3) seems like\nit'd be more tenable than without.\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Thu, 18 Apr 2019 14:10:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> My compromise suggestion would be to try to give John and Amit ~2 weeks\n> to come up with a cleanup proposal, and then decide whether to 1) revert\n> 2) apply the new patch, 3) decide to live with the warts for 12, and\n> apply the patch in 13. As we would already have a patch, 3) seems like\n> it'd be more tenable than without.\n\nSeems reasonable. I think we should shoot to have this resolved before\nthe end of the month, but it doesn't have to be done immediately.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Apr 2019 17:14:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Thu, Apr 18, 2019 at 2:10 PM John Naylor <john.naylor@2ndquadrant.com> wrote:\n>\n> On Thu, Apr 18, 2019 at 2:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I respect and will follow whatever will be the consensus after\n> > discussion. However, I request you to wait for some time to let the\n> > discussion conclude. If we can't get to an\n> > agreement or one of John or me can't implement what is decided, then\n> > we can anyway revert it.\n>\n> Agreed. I suspect the most realistic way to address most of the\n> objections in a short amount of time would be to:\n>\n> 1. rip out the local map\n> 2. restore hio.c to only checking the last block in the relation if\n> there is no FSM (and lower the threshold to reduce wasted space)\n> 3. reduce calls to smgr_exists()\n>\n\nWon't you need an extra call to RelationGetNumberofBlocks to find the\nlast block? Also won't it be less efficient in terms of dealing with\nbloat as compare to current patch? I think if we go this route, then\nwe might need to revisit it in the future to optimize it, but maybe\nthat is the best alternative as of now.\n\nI am thinking that we should at least give it a try to move the map to\nrel cache level to see how easy or difficult it is and also let's wait\nfor a day or two to see if Andres/Tom has to say anything about this\nor on the response by me above to improve the current patch.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Apr 2019 08:07:59 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> I am thinking that we should at least give it a try to move the map to\n> rel cache level to see how easy or difficult it is and also let's wait\n> for a day or two to see if Andres/Tom has to say anything about this\n> or on the response by me above to improve the current patch.\n\nFWIW, it's hard for me to see how moving the map to the relcache isn't\nthe right thing to do. You will lose state during a relcache flush,\nbut that's still miles better than how often the state gets lost now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Apr 2019 22:53:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "\n\nOn April 18, 2019 7:53:58 PM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Amit Kapila <amit.kapila16@gmail.com> writes:\n>> I am thinking that we should at least give it a try to move the map\n>to\n>> rel cache level to see how easy or difficult it is and also let's\n>wait\n>> for a day or two to see if Andres/Tom has to say anything about this\n>> or on the response by me above to improve the current patch.\n>\n>FWIW, it's hard for me to see how moving the map to the relcache isn't\n>the right thing to do. You will lose state during a relcache flush,\n>but that's still miles better than how often the state gets lost now.\n\n+1\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Thu, 18 Apr 2019 22:51:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Fri, Apr 19, 2019 at 10:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 18, 2019 at 2:10 PM John Naylor <john.naylor@2ndquadrant.com> wrote:\n> > Agreed. I suspect the most realistic way to address most of the\n> > objections in a short amount of time would be to:\n> >\n> > 1. rip out the local map\n> > 2. restore hio.c to only checking the last block in the relation if\n> > there is no FSM (and lower the threshold to reduce wasted space)\n> > 3. reduce calls to smgr_exists()\n> >\n>\n> Won't you need an extra call to RelationGetNumberofBlocks to find the\n> last block?\n\nIf I understand you correctly, no, the call now in\nGetPageWithFreeSpace() just moves it back to where it was in v11. In\nthe corner case where we just measured the table size and the last\nblock is full, we can pass nblocks to RecordAndGetPageWithFreeSpace().\nThere might be further optimizations available if we're not creating a\nlocal map.\n\n> Also won't it be less efficient in terms of dealing with\n> bloat as compare to current patch?\n\nYes. The threshold would have to be 2 or 3 blocks, and it would stay\nbloated until it passed the threshold. 
Not great, but perhaps not bad\neither.\n\n> I think if we go this route, then\n> we might need to revisit it in the future to optimize it, but maybe\n> that is the best alternative as of now.\n\nIt's a much lighter-weight API, which has that much going for it.\nI have a draft implementation, which I can share if it comes to that\n-- it needs some more thought and polish first.\n\n> I am thinking that we should at least give it a try to move the map to\n> rel cache level to see how easy or difficult it is and also let's wait\n> for a day or two to see if Andres/Tom has to say anything about this\n> or on the response by me above to improve the current patch.\n\nSince we have a definite timeline, I'm okay with that, although I'm\nafraid I'm not quite knowledgeable enough to help much with the\nrelcache piece.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 19 Apr 2019 15:47:27 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Fri, Apr 19, 2019 at 1:17 PM John Naylor <john.naylor@2ndquadrant.com> wrote:\n> On Fri, Apr 19, 2019 at 10:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I think if we go this route, then\n> > we might need to revisit it in the future to optimize it, but maybe\n> > that is the best alternative as of now.\n>\n> It's a much lighter-weight API, which has that much going for it.\n> I have a draft implementation, which I can share if it comes to that\n> -- it needs some more thought and polish first.\n>\n\nI understand that it is lighter-weight API, but OTOH, it will be less\nefficient as well. Also, the consensus seems to be that we should\nmove this to relcache.\n\n> > I am thinking that we should at least give it a try to move the map to\n> > rel cache level to see how easy or difficult it is and also let's wait\n> > for a day or two to see if Andres/Tom has to say anything about this\n> > or on the response by me above to improve the current patch.\n>\n> Since we have a definite timeline, I'm okay with that, although I'm\n> afraid I'm not quite knowledgeable enough to help much with the\n> relcache piece.\n>\n\nOkay, I can try to help. I think you can start by looking at\nRelationData members (for ex. see how we cache index's metapage in\nrd_amcache) and study a bit about routines in relcache.h.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Apr 2019 14:46:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Fri, Apr 19, 2019 at 2:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n>\n> > > I am thinking that we should at least give it a try to move the map to\n> > > rel cache level to see how easy or difficult it is and also let's wait\n> > > for a day or two to see if Andres/Tom has to say anything about this\n> > > or on the response by me above to improve the current patch.\n> >\n> > Since we have a definite timeline, I'm okay with that, although I'm\n> > afraid I'm not quite knowledgeable enough to help much with the\n> > relcache piece.\n> >\n>\n> Okay, I can try to help. I think you can start by looking at\n> RelationData members (for ex. see how we cache index's metapage in\n> rd_amcache) and study a bit about routines in relcache.h.\n>\n\nAttached is a hacky and work-in-progress patch to move fsm map to\nrelcache. This will give you some idea. I think we need to see if\nthere is a need to invalidate the relcache due to this patch. I think\nsome other comments of Andres also need to be addressed, see if you\ncan attempt to fix some of them.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 22 Apr 2019 18:49:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-22 18:49:44 +0530, Amit Kapila wrote:\n> Attached is a hacky and work-in-progress patch to move fsm map to\n> relcache. This will give you some idea. I think we need to see if\n> there is a need to invalidate the relcache due to this patch. I think\n> some other comments of Andres also need to be addressed, see if you\n> can attempt to fix some of them.\n\n\n\n> /*\n> @@ -1132,9 +1110,6 @@ fsm_local_set(Relation rel, BlockNumber cur_nblocks)\n> \tBlockNumber blkno,\n> \t\t\t\tcached_target_block;\n> \n> -\t/* The local map must not be set already. */\n> -\tAssert(!FSM_LOCAL_MAP_EXISTS);\n> -\n> \t/*\n> \t * Starting at the current last block in the relation and working\n> \t * backwards, mark alternating blocks as available.\n> @@ -1142,7 +1117,7 @@ fsm_local_set(Relation rel, BlockNumber cur_nblocks)\n> \tblkno = cur_nblocks - 1;\n> \twhile (true)\n> \t{\n> -\t\tfsm_local_map.map[blkno] = FSM_LOCAL_AVAIL;\n> +\t\trel->fsm_local_map->map[blkno] = FSM_LOCAL_AVAIL;\n> \t\tif (blkno >= 2)\n> \t\t\tblkno -= 2;\n> \t\telse\n> @@ -1150,13 +1125,13 @@ fsm_local_set(Relation rel, BlockNumber cur_nblocks)\n> \t}\n> \n> \t/* Cache the number of blocks. */\n> -\tfsm_local_map.nblocks = cur_nblocks;\n> +\trel->fsm_local_map->nblocks = cur_nblocks;\n> \n> \t/* Set the status of the cached target block to 'unavailable'. */\n> \tcached_target_block = RelationGetTargetBlock(rel);\n> \tif (cached_target_block != InvalidBlockNumber &&\n> \t\tcached_target_block < cur_nblocks)\n> -\t\tfsm_local_map.map[cached_target_block] = FSM_LOCAL_NOT_AVAIL;\n> +\t\trel->fsm_local_map->map[cached_target_block] = FSM_LOCAL_NOT_AVAIL;\n> }\n\nI think there shouldn't be any need for this anymore. 
After this change\nwe shouldn't invalidate the map until there's no space on it - thereby\naddressing the cost overhead, and greatly reducing the likelihood that\nthe local FSM can lead to increased bloat.\n\n\n> /*\n> @@ -1168,18 +1143,18 @@ fsm_local_set(Relation rel, BlockNumber cur_nblocks)\n> * This function is used when there is no FSM.\n> */\n> static BlockNumber\n> -fsm_local_search(void)\n> +fsm_local_search(Relation rel)\n> {\n> \tBlockNumber target_block;\n> \n> \t/* Local map must be set by now. */\n> -\tAssert(FSM_LOCAL_MAP_EXISTS);\n> +\tAssert(FSM_LOCAL_MAP_EXISTS(rel));\n> \n> -\ttarget_block = fsm_local_map.nblocks;\n> +\ttarget_block = rel->fsm_local_map->nblocks;\n> \tdo\n> \t{\n> \t\ttarget_block--;\n> -\t\tif (fsm_local_map.map[target_block] == FSM_LOCAL_AVAIL)\n> +\t\tif (rel->fsm_local_map->map[target_block] == FSM_LOCAL_AVAIL)\n> \t\t\treturn target_block;\n> \t} while (target_block > 0);\n> \n> @@ -1189,7 +1164,22 @@ fsm_local_search(void)\n> \t * first, which would otherwise lead to the same conclusion again and\n> \t * again.\n> \t */\n> -\tFSMClearLocalMap();\n> +\tfsm_clear_local_map(rel);\n\nI'm not sure I like this. My inclination would be that we should be able\nto check the local fsm repeatedly even if there's no space in the\nin-memory representation - otherwise the use of the local FSM increases\nbloat.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Apr 2019 10:04:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 10:34 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-04-22 18:49:44 +0530, Amit Kapila wrote:\n> > /*\n> > @@ -1132,9 +1110,6 @@ fsm_local_set(Relation rel, BlockNumber cur_nblocks)\n> > BlockNumber blkno,\n> > cached_target_block;\n> >\n> > - /* The local map must not be set already. */\n> > - Assert(!FSM_LOCAL_MAP_EXISTS);\n> > -\n> > /*\n> > * Starting at the current last block in the relation and working\n> > * backwards, mark alternating blocks as available.\n> > @@ -1142,7 +1117,7 @@ fsm_local_set(Relation rel, BlockNumber cur_nblocks)\n> > blkno = cur_nblocks - 1;\n> > while (true)\n> > {\n> > - fsm_local_map.map[blkno] = FSM_LOCAL_AVAIL;\n> > + rel->fsm_local_map->map[blkno] = FSM_LOCAL_AVAIL;\n> > if (blkno >= 2)\n> > blkno -= 2;\n> > else\n> > @@ -1150,13 +1125,13 @@ fsm_local_set(Relation rel, BlockNumber cur_nblocks)\n> > }\n> >\n> > /* Cache the number of blocks. */\n> > - fsm_local_map.nblocks = cur_nblocks;\n> > + rel->fsm_local_map->nblocks = cur_nblocks;\n> >\n> > /* Set the status of the cached target block to 'unavailable'. */\n> > cached_target_block = RelationGetTargetBlock(rel);\n> > if (cached_target_block != InvalidBlockNumber &&\n> > cached_target_block < cur_nblocks)\n> > - fsm_local_map.map[cached_target_block] = FSM_LOCAL_NOT_AVAIL;\n> > + rel->fsm_local_map->map[cached_target_block] = FSM_LOCAL_NOT_AVAIL;\n> > }\n>\n> I think there shouldn't be any need for this anymore. 
After this change\n> we shouldn't invalidate the map until there's no space on it - thereby\n> addressing the cost overhead, and greatly reducing the likelihood that\n> the local FSM can lead to increased bloat.\n>\n\nIf we invalidate it only when there's no space on the page, then when\nshould we set it back to available, because if we don't do that, then\nwe might miss the space due to concurrent deletes.\n\n>\n> > /*\n> > @@ -1168,18 +1143,18 @@ fsm_local_set(Relation rel, BlockNumber cur_nblocks)\n> > * This function is used when there is no FSM.\n> > */\n> > static BlockNumber\n> > -fsm_local_search(void)\n> > +fsm_local_search(Relation rel)\n> > {\n> > BlockNumber target_block;\n> >\n> > /* Local map must be set by now. */\n> > - Assert(FSM_LOCAL_MAP_EXISTS);\n> > + Assert(FSM_LOCAL_MAP_EXISTS(rel));\n> >\n> > - target_block = fsm_local_map.nblocks;\n> > + target_block = rel->fsm_local_map->nblocks;\n> > do\n> > {\n> > target_block--;\n> > - if (fsm_local_map.map[target_block] == FSM_LOCAL_AVAIL)\n> > + if (rel->fsm_local_map->map[target_block] == FSM_LOCAL_AVAIL)\n> > return target_block;\n> > } while (target_block > 0);\n> >\n> > @@ -1189,7 +1164,22 @@ fsm_local_search(void)\n> > * first, which would otherwise lead to the same conclusion again and\n> > * again.\n> > */\n> > - FSMClearLocalMap();\n> > + fsm_clear_local_map(rel);\n>\n> I'm not sure I like this. My inclination would be that we should be able\n> to check the local fsm repeatedly even if there's no space in the\n> in-memory representation - otherwise the use of the local FSM increases\n> bloat.\n>\n\nDo you mean to say that we always check all the pages (say 4)\nirrespective of their state in the local map?\n\nI think we should first try to see in this new scheme (a) when to set\nthe map, (b) when to clear it, (c) how to use. 
I have tried to\nsummarize my thoughts about it, let me know what do you think about\nthe same?\n\nWhen to set the map.\nAt the beginning (the first time relation is used in the backend), the\nmap will be clear. When the first time in the backend, we find that\nFSM doesn't exist and the number of blocks is lesser than\nHEAP_FSM_CREATION_THRESHOLD, we set the map for the total blocks that\nexist at that time and mark all or alternate blocks as available.\n\nAlso, when we find that none of the blocks are available in the map\n(basically they are marked invalid which means we have previously\nchecked that there is no space in them), we should get the number of\nblocks and if they are less than the threshold, then add it to the\nmap.\n\n\nWhen to clear the map?\nOnce we find out that the particular page doesn't have space, we can\nmark the corresponding page in the map as invalid (or not available to\ncheck). After relation extension, we can check if the latest block is\ngreater than the threshold value, then we can clear the map. At\ntruncate or some other similar times, when relcache entry is\ninvalidated, automatically the map will be cleared.\n\nHow to use the map?\nNow, whenever we find the map exists, we can check the blocks that are\nmarked as available in it.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Apr 2019 15:46:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-23 15:46:17 +0530, Amit Kapila wrote:\n> On Mon, Apr 22, 2019 at 10:34 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2019-04-22 18:49:44 +0530, Amit Kapila wrote:\n> > > /*\n> > > @@ -1132,9 +1110,6 @@ fsm_local_set(Relation rel, BlockNumber cur_nblocks)\n> > > BlockNumber blkno,\n> > > cached_target_block;\n> > >\n> > > - /* The local map must not be set already. */\n> > > - Assert(!FSM_LOCAL_MAP_EXISTS);\n> > > -\n> > > /*\n> > > * Starting at the current last block in the relation and working\n> > > * backwards, mark alternating blocks as available.\n> > > @@ -1142,7 +1117,7 @@ fsm_local_set(Relation rel, BlockNumber cur_nblocks)\n> > > blkno = cur_nblocks - 1;\n> > > while (true)\n> > > {\n> > > - fsm_local_map.map[blkno] = FSM_LOCAL_AVAIL;\n> > > + rel->fsm_local_map->map[blkno] = FSM_LOCAL_AVAIL;\n> > > if (blkno >= 2)\n> > > blkno -= 2;\n> > > else\n> > > @@ -1150,13 +1125,13 @@ fsm_local_set(Relation rel, BlockNumber cur_nblocks)\n> > > }\n> > >\n> > > /* Cache the number of blocks. */\n> > > - fsm_local_map.nblocks = cur_nblocks;\n> > > + rel->fsm_local_map->nblocks = cur_nblocks;\n> > >\n> > > /* Set the status of the cached target block to 'unavailable'. */\n> > > cached_target_block = RelationGetTargetBlock(rel);\n> > > if (cached_target_block != InvalidBlockNumber &&\n> > > cached_target_block < cur_nblocks)\n> > > - fsm_local_map.map[cached_target_block] = FSM_LOCAL_NOT_AVAIL;\n> > > + rel->fsm_local_map->map[cached_target_block] = FSM_LOCAL_NOT_AVAIL;\n> > > }\n> >\n> > I think there shouldn't be any need for this anymore. 
After this change\n> > we shouldn't invalidate the map until there's no space on it - thereby\n> > addressing the cost overhead, and greatly reducing the likelihood that\n> > the local FSM can lead to increased bloat.\n\n> If we invalidate it only when there's no space on the page, then when\n> should we set it back to available, because if we don't do that, then\n> we might miss the space due to concurrent deletes.\n\nWell, deletes don't traditionally (i.e. with an actual FSM) mark free\nspace as available (for heap). I think RecordPageWithFreeSpace() should\nissue a invalidation if there's no FSM, and the block goes from full to\nempty (as there's no half-full IIUC).\n\n> > > /*\n> > > @@ -1168,18 +1143,18 @@ fsm_local_set(Relation rel, BlockNumber cur_nblocks)\n> > > * This function is used when there is no FSM.\n> > > */\n> > > static BlockNumber\n> > > -fsm_local_search(void)\n> > > +fsm_local_search(Relation rel)\n> > > {\n> > > BlockNumber target_block;\n> > >\n> > > /* Local map must be set by now. */\n> > > - Assert(FSM_LOCAL_MAP_EXISTS);\n> > > + Assert(FSM_LOCAL_MAP_EXISTS(rel));\n> > >\n> > > - target_block = fsm_local_map.nblocks;\n> > > + target_block = rel->fsm_local_map->nblocks;\n> > > do\n> > > {\n> > > target_block--;\n> > > - if (fsm_local_map.map[target_block] == FSM_LOCAL_AVAIL)\n> > > + if (rel->fsm_local_map->map[target_block] == FSM_LOCAL_AVAIL)\n> > > return target_block;\n> > > } while (target_block > 0);\n> > >\n> > > @@ -1189,7 +1164,22 @@ fsm_local_search(void)\n> > > * first, which would otherwise lead to the same conclusion again and\n> > > * again.\n> > > */\n> > > - FSMClearLocalMap();\n> > > + fsm_clear_local_map(rel);\n> >\n> > I'm not sure I like this. 
My inclination would be that we should be able\n> > to check the local fsm repeatedly even if there's no space in the\n> > in-memory representation - otherwise the use of the local FSM increases\n> > bloat.\n> >\n> \n> Do you mean to say that we always check all the pages (say 4)\n> irrespective of their state in the local map?\n\nI was wondering that, yes. But I think just issuing invalidations is the\nright approach instead, see above.\n\n\n> I think we should first try to see in this new scheme (a) when to set\n> the map, (b) when to clear it, (c) how to use. I have tried to\n> summarize my thoughts about it, let me know what do you think about\n> the same?\n> \n> When to set the map.\n> At the beginning (the first time relation is used in the backend), the\n> map will be clear. When the first time in the backend, we find that\n> FSM doesn't exist and the number of blocks is lesser than\n> HEAP_FSM_CREATION_THRESHOLD, we set the map for the total blocks that\n> exist at that time and mark all or alternate blocks as available.\n\nI think the alternate blocks scheme has to go. It's not defensible.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Apr 2019 10:28:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-23 15:46:17 +0530, Amit Kapila wrote:\n>> If we invalidate it only when there's no space on the page, then when\n>> should we set it back to available, because if we don't do that, then\n>> we might miss the space due to concurrent deletes.\n\n> Well, deletes don't traditionally (i.e. with an actual FSM) mark free\n> space as available (for heap). I think RecordPageWithFreeSpace() should\n> issue a invalidation if there's no FSM, and the block goes from full to\n> empty (as there's no half-full IIUC).\n\nWhy wouldn't we implement this just as a mini four-entry FSM stored in\nthe relcache, and update the entries according to the same rules we'd\nuse for regular FSM entries?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Apr 2019 13:31:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-23 13:31:25 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-04-23 15:46:17 +0530, Amit Kapila wrote:\n> >> If we invalidate it only when there's no space on the page, then when\n> >> should we set it back to available, because if we don't do that, then\n> >> we might miss the space due to concurrent deletes.\n> \n> > Well, deletes don't traditionally (i.e. with an actual FSM) mark free\n> > space as available (for heap). I think RecordPageWithFreeSpace() should\n> > issue a invalidation if there's no FSM, and the block goes from full to\n> > empty (as there's no half-full IIUC).\n> \n> Why wouldn't we implement this just as a mini four-entry FSM stored in\n> the relcache, and update the entries according to the same rules we'd\n> use for regular FSM entries?\n\nI mean the big difference is that it's not shared memory. So there needs\nto be some difference. My suggestion to handle that is to just issue an\ninvalidation when *increasing* the amount of space.\n\nAnd sure, leaving that aside we could store one byte per block - it's\njust not what the patch has done so far (or rather, it used one byte per\nblock, but only utilized one bit of it). It's possible that'd come with\nsome overhead - I've not thought sufficiently about it: I assume we'd\nstill start out in each backend assuming each page is empty, and we'd\nthen rely on RelationGetBufferForTuple() to update that. What I wonder\nis if we'd need to check if an on-disk FSM has been created every time\nthe space on a page is reduced? I think not, if we use invalidations to\nnotify others of the existance of an on-disk FSM. There's a small race,\nbut that seems ok.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Apr 2019 10:39:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Tue, Apr 23, 2019 at 10:59 PM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2019-04-22 18:49:44 +0530, Amit Kapila wrote:\n> > > > /*\n> > > > @@ -1132,9 +1110,6 @@ fsm_local_set(Relation rel, BlockNumber cur_nblocks)\n> > > > /* Set the status of the cached target block to 'unavailable'. */\n> > > > cached_target_block = RelationGetTargetBlock(rel);\n> > > > if (cached_target_block != InvalidBlockNumber &&\n> > > > cached_target_block < cur_nblocks)\n> > > > - fsm_local_map.map[cached_target_block] = FSM_LOCAL_NOT_AVAIL;\n> > > > + rel->fsm_local_map->map[cached_target_block] = FSM_LOCAL_NOT_AVAIL;\n> > > > }\n> > >\n> > > I think there shouldn't be any need for this anymore. After this change\n> > > we shouldn't invalidate the map until there's no space on it - thereby\n> > > addressing the cost overhead, and greatly reducing the likelihood that\n> > > the local FSM can lead to increased bloat.\n>\n\nI have removed the code that was invalidating cached target block from\nthe above function.\n\n> > If we invalidate it only when there's no space on the page, then when\n> > should we set it back to available, because if we don't do that, then\n> > we might miss the space due to concurrent deletes.\n>\n> Well, deletes don't traditionally (i.e. with an actual FSM) mark free\n> space as available (for heap). I think RecordPageWithFreeSpace() should\n> issue a invalidation if there's no FSM, and the block goes from full to\n> empty (as there's no half-full IIUC).\n>\n\nSure, we can do that.\n\n> > > > /*\n> > > > @@ -1168,18 +1143,18 @@ fsm_local_set(Relation rel, BlockNumber cur_nblocks)\n> > > > * This function is used when there is no FSM.\n> > > > */\n> > > > static BlockNumber\n> > > > -fsm_local_search(void)\n> > > > +fsm_local_search(Relation rel)\n> > > > {\n> > > > BlockNumber target_block;\n> > > >\n> > > > /* Local map must be set by now. 
*/\n> > > > - Assert(FSM_LOCAL_MAP_EXISTS);\n> > > > + Assert(FSM_LOCAL_MAP_EXISTS(rel));\n> > > >\n> > > > - target_block = fsm_local_map.nblocks;\n> > > > + target_block = rel->fsm_local_map->nblocks;\n> > > > do\n> > > > {\n> > > > target_block--;\n> > > > - if (fsm_local_map.map[target_block] == FSM_LOCAL_AVAIL)\n> > > > + if (rel->fsm_local_map->map[target_block] == FSM_LOCAL_AVAIL)\n> > > > return target_block;\n> > > > } while (target_block > 0);\n> > > >\n> > > > @@ -1189,7 +1164,22 @@ fsm_local_search(void)\n> > > > * first, which would otherwise lead to the same conclusion again and\n> > > > * again.\n> > > > */\n> > > > - FSMClearLocalMap();\n> > > > + fsm_clear_local_map(rel);\n> > >\n> > > I'm not sure I like this. My inclination would be that we should be able\n> > > to check the local fsm repeatedly even if there's no space in the\n> > > in-memory representation - otherwise the use of the local FSM increases\n> > > bloat.\n> > >\n> >\n> > Do you mean to say that we always check all the pages (say 4)\n> > irrespective of their state in the local map?\n>\n> I was wondering that, yes. But I think just issuing invalidations is the\n> right approach instead, see above.\n>\n\nRigh issuing invalidations can help with that.\n\n>\n> > I think we should first try to see in this new scheme (a) when to set\n> > the map, (b) when to clear it, (c) how to use. I have tried to\n> > summarize my thoughts about it, let me know what do you think about\n> > the same?\n> >\n> > When to set the map.\n> > At the beginning (the first time relation is used in the backend), the\n> > map will be clear. When the first time in the backend, we find that\n> > FSM doesn't exist and the number of blocks is lesser than\n> > HEAP_FSM_CREATION_THRESHOLD, we set the map for the total blocks that\n> > exist at that time and mark all or alternate blocks as available.\n>\n> I think the alternate blocks scheme has to go. 
It's not defensible.\n>\n\nFair enough, I have changed it in the attached patch. However, I\nthink we should test it once the patch is ready as we have seen a\nsmall performance regression due to that.\n\n> And sure, leaving that aside we could store one byte per block\n\nHmm, I think you mean to say one-bit per block, right?\n\n> - it's\n> just not what the patch has done so far (or rather, it used one byte per\n> block, but only utilized one bit of it).\n\nRight, I think this is an independently useful improvement, provided\nit doesn't have any additional overhead or complexity.\n\n> It's possible that'd come with\n> some overhead - I've not thought sufficiently about it: I assume we'd\n> still start out in each backend assuming each page is empty, and we'd\n> then rely on RelationGetBufferForTuple() to update that. What I wonder\n> is if we'd need to check if an on-disk FSM has been created every time\n> the space on a page is reduced? I think not, if we use invalidations to\n> notify others of the existance of an on-disk FSM. There's a small race,\n> but that seems ok.\n>\n\nDo you mean to say that vacuum or some backend should invalidate in\ncase it first time creates the FSM for a relation? I think it is quite\npossible that the vacuum takes time to trigger such invalidation if\nthere are fewer deletions. And we won't be able to use newly added\npage/s.\n\nIIUC, you are suggesting to issue invalidations when the (a) vacuum\nfinds there is no FSM and page has more space now, (b) invalidation to\nnotify the existence of FSM. IT seems to me that issuing more\ninvalidations for the purpose of FSM can lead to an increased number\nof relation builds in the overall system. I think this can have an\nimpact when there are many small relations in the system which in some\nscenarios might not be uncommon.\n\nThe two improvements in this code which are discussed in this thread\nand can be done independently to this patch are:\na. 
use one bit to represent each block in the map. This gives us the\nflexibility to use the map for the different threshold for some other\nstorage.\nb. improve the usage of smgrexists by checking smgr_fsm_nblocks.\n\nJohn, can you implement these two improvements either on HEAD or on\ntop of this patch?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 24 Apr 2019 11:28:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-24 11:28:32 +0530, Amit Kapila wrote:\n> On Tue, Apr 23, 2019 at 10:59 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > On 2019-04-22 18:49:44 +0530, Amit Kapila wrote:\n> > > I think we should first try to see in this new scheme (a) when to set\n> > > the map, (b) when to clear it, (c) how to use. I have tried to\n> > > summarize my thoughts about it, let me know what do you think about\n> > > the same?\n> > >\n> > > When to set the map.\n> > > At the beginning (the first time relation is used in the backend), the\n> > > map will be clear. When the first time in the backend, we find that\n> > > FSM doesn't exist and the number of blocks is lesser than\n> > > HEAP_FSM_CREATION_THRESHOLD, we set the map for the total blocks that\n> > > exist at that time and mark all or alternate blocks as available.\n> >\n> > I think the alternate blocks scheme has to go. It's not defensible.\n> >\n> \n> Fair enough, I have changed it in the attached patch. However, I\n> think we should test it once the patch is ready as we have seen a\n> small performance regression due to that.\n\nSure, but that was because you re-scanned from scratch after every\ninsertion, no?\n\n\n> > And sure, leaving that aside we could store one byte per block\n> \n> Hmm, I think you mean to say one-bit per block, right?\n\nNo, I meant byte. The normal FSM saves how much space there is for a\nblock, but the current local fsm doesn't. That means pages are marked as\nunavailable even though other tuples would possibly fit.\n\n\n> > It's possible that'd come with\n> > some overhead - I've not thought sufficiently about it: I assume we'd\n> > still start out in each backend assuming each page is empty, and we'd\n> > then rely on RelationGetBufferForTuple() to update that. What I wonder\n> > is if we'd need to check if an on-disk FSM has been created every time\n> > the space on a page is reduced? 
I think not, if we use invalidations to\n> > notify others of the existance of an on-disk FSM. There's a small race,\n> > but that seems ok.\n\n> Do you mean to say that vacuum or some backend should invalidate in\n> case it first time creates the FSM for a relation?\n\nRight.\n\n\n> I think it is quite possible that the vacuum takes time to trigger\n> such invalidation if there are fewer deletions. And we won't be able\n> to use newly added page/s.\n\nI'm not sure I understand what you mean by that? If the backend that\nends up creating the FSM - because it extended the relation beyond 4\npages - issues an invalidation, the time till other backends pick that\nup should be minimal?\n\n\n> IIUC, you are suggesting to issue invalidations when the (a) vacuum\n> finds there is no FSM and page has more space now, (b) invalidation to\n> notify the existence of FSM. IT seems to me that issuing more\n> invalidations for the purpose of FSM can lead to an increased number\n> of relation builds in the overall system. I think this can have an\n> impact when there are many small relations in the system which in some\n> scenarios might not be uncommon.\n\nIf this becomes an issue I'd just create a separate type of\ninvalidation, one that just signals that the FSM is being invalidated.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 Apr 2019 09:19:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 9:49 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-04-24 11:28:32 +0530, Amit Kapila wrote:\n> > On Tue, Apr 23, 2019 at 10:59 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > > On 2019-04-22 18:49:44 +0530, Amit Kapila wrote:\n> > > > I think we should first try to see in this new scheme (a) when to set\n> > > > the map, (b) when to clear it, (c) how to use. I have tried to\n> > > > summarize my thoughts about it, let me know what do you think about\n> > > > the same?\n> > > >\n> > > > When to set the map.\n> > > > At the beginning (the first time relation is used in the backend), the\n> > > > map will be clear. When the first time in the backend, we find that\n> > > > FSM doesn't exist and the number of blocks is lesser than\n> > > > HEAP_FSM_CREATION_THRESHOLD, we set the map for the total blocks that\n> > > > exist at that time and mark all or alternate blocks as available.\n> > >\n> > > I think the alternate blocks scheme has to go. It's not defensible.\n> > >\n> >\n> > Fair enough, I have changed it in the attached patch. However, I\n> > think we should test it once the patch is ready as we have seen a\n> > small performance regression due to that.\n>\n> Sure, but that was because you re-scanned from scratch after every\n> insertion, no?\n>\n\nPossible.\n\n>\n> > > And sure, leaving that aside we could store one byte per block\n> >\n> > Hmm, I think you mean to say one-bit per block, right?\n>\n> No, I meant byte.\n>\n\nUpthread you have said: \"Hm, given realistic\nHEAP_FSM_CREATION_THRESHOLD, and the fact that we really only need one\nbit per relation, it seems like map should really be just a uint32\nwith one bit per page. It'd allow different AMs to have different\nnumbers of dont-create-fsm thresholds without needing additional\nmemory (up to 32 blocks).\"\n\nI can understand the advantage of one-bit per-page suggestion, but now\nyou are telling one-byte per-page. 
I am confused between those two\noptions. Am I missing something?\n\n> The normal FSM saves how much space there is for a\n> block, but the current local fsm doesn't. That means pages are marked as\n> unavailable even though other tuples would possibly fit.\n>\n\nSure, in regular FSM, the vacuum can update the available space, but\nwe don't have such a provision for the local map unless we decide to keep\nit in shared memory.\n\n>\n> > > It's possible that'd come with\n> > > some overhead - I've not thought sufficiently about it: I assume we'd\n> > > still start out in each backend assuming each page is empty, and we'd\n> > > then rely on RelationGetBufferForTuple() to update that. What I wonder\n> > > is if we'd need to check if an on-disk FSM has been created every time\n> > > the space on a page is reduced? I think not, if we use invalidations to\n> > > notify others of the existence of an on-disk FSM. There's a small race,\n> > > but that seems ok.\n>\n> > Do you mean to say that vacuum or some backend should invalidate in\n> > case it first time creates the FSM for a relation?\n>\n> Right.\n>\n>\n> > I think it is quite possible that the vacuum takes time to trigger\n> > such invalidation if there are fewer deletions. And we won't be able\n> > to use newly added page/s.\n>\n> I'm not sure I understand what you mean by that? If the backend that\n> ends up creating the FSM - because it extended the relation beyond 4\n> pages - issues an invalidation, the time till other backends pick that\n> up should be minimal?\n>\n\nConsider when backend-1 starts inserting into a relation that has\njust two pages, and we create a local map for the relation covering those two\npages. Now, backend-2 extends the relation by one page; how and when\nwill backend-1 come to know about that? 
One possibility is that once\nall the pages present in backend-1's relation becomes invalid\n(used-up), we again check the number of blocks and update the local\nmap.\n\n>\n> > IIUC, you are suggesting to issue invalidations when the (a) vacuum\n> > finds there is no FSM and page has more space now, (b) invalidation to\n> > notify the existence of FSM. IT seems to me that issuing more\n> > invalidations for the purpose of FSM can lead to an increased number\n> > of relation builds in the overall system. I think this can have an\n> > impact when there are many small relations in the system which in some\n> > scenarios might not be uncommon.\n>\n> If this becomes an issue I'd just create a separate type of\n> invalidation, one that just signals that the FSM is being invalidated.\n>\n\nOh, clever idea, but I guess that will be some work unless we already\ndo something similar elsewhere. Anyway, we can look into it later.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Apr 2019 08:50:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 1:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> The two improvements in this code which are discussed in this thread\n> and can be done independently to this patch are:\n> a. use one bit to represent each block in the map. This gives us the\n> flexibility to use the map for the different threshold for some other\n> storage.\n> b. improve the usage of smgrexists by checking smgr_fsm_nblocks.\n>\n> John, can you implement these two improvements either on HEAD or on\n> top of this patch?\n\nI've done B in the attached. There is a more recent idea of using the\nbyte to store the actual free space in the same format as the FSM.\nThat might be v13 material, but in any case, I'll hold off on A for\nnow.\n\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 25 Apr 2019 11:21:21 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 1:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> <v2 patch>\n\nSorry for not noticing earlier, but this patch causes a regression\ntest failure for me (attached)\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 25 Apr 2019 15:09:10 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 12:39 PM John Naylor\n<john.naylor@2ndquadrant.com> wrote:\n>\n> On Wed, Apr 24, 2019 at 1:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > <v2 patch>\n>\n> Sorry for not noticing earlier, but this patch causes a regression\n> test failure for me (attached)\n>\n\nCan you please try to finish the remaining work of the patch (I am a bit\ntied up with some other things)? I think the main thing, apart from\nthe representation of the map as one byte or one bit per block, is to implement\ninvalidation. Also, try to see if there is anything pending which I\nmight have missed. As discussed above, we need to issue an\ninvalidation at the following points: (a) when vacuum finds there is no\nFSM and a page has more space now (I think you can detect this in\nRecordPageWithFreeSpace), and (b) an invalidation to notify the existence of\nthe FSM; this can be done both by vacuum and a backend.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Apr 2019 09:21:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 2:04 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> I'm somewhat unhappy in how much the no-fsm-for-small-rels exposed\n> complexity that looks like it should be purely in freespacemap.c to\n> callers.\n\nI took a stab at untying the free space code from any knowledge about\nheaps, and made it the responsibility of each access method that calls\nthese free space routines to specify their own threshold (possibly\nzero). The attached applies on top of HEAD, because it's independent\nof the relcache logic being discussed. If I can get that working, then\nI'll rebase it on top of this API, if you like.\n\n> extern Size GetRecordedFreeSpace(Relation rel, BlockNumber heapBlk);\n> -extern BlockNumber GetPageWithFreeSpace(Relation rel, Size spaceNeeded);\n> +extern BlockNumber GetPageWithFreeSpace(Relation rel, Size spaceNeeded,\n> + bool check_fsm_only);\n>\n> So now freespace.c has an argument that says we should only check the\n> fsm. That's confusing. And it's not explained to callers what that\n> argument means, and when it should be set.\n\nI split this up into 2 routines: GetPageWithFreeSpace() is now exactly\nlike it is in v11, and GetAlternatePage() is available for access\nmethods that can use it.\n\n> +/* Only create the FSM if the heap has greater than this many blocks */\n> +#define HEAP_FSM_CREATION_THRESHOLD 4\n>\n> Hm, this seems to be tying freespace.c closer to heap than I think is\n> great - think of new AMs like zheap, that also want to use it.\n\nThis was a bit harder than expected. Because of the pg_upgrade\noptimization, it was impossible to put this symbol in hio.h or\nheapam.h, because they include things unsafe for frontend code. I\ndecided to create heap_fe.h, which is a hack. 
Also, because they have\nfreespace.c callers, I put other thresholds in\n\nsrc/backend/storage/freespace/indexfsm.c\nsrc/include/access/brin_pageops.h\n\nPutting the thresholds in 3 files with completely different purposes\nis a mess, and serves no example for future access methods, but I\ndon't have a better idea.\n\nOn the upside, untying free space from heap allowed me to remove most\nof the checks for\n\n(rel->rd_rel->relkind == RELKIND_RELATION ||\n rel->rd_rel->relkind == RELKIND_TOASTVALUE)\n\nexcept for the one in pg_upgrade.c, which is again a bit of a hack, but not bad.\n\n> Hm, given realistic HEAP_FSM_CREATION_THRESHOLD, and the fact that we\n> really only need one bit per relation, it seems like map should really\n> be just a uint32 with one bit per page.\n\nDone. Regarding the idea upthread about using bytes to store ordinary\nfreespace values, I think that's better for correctness, but also\nmakes it more difficult to use different thresholds per access method.\n\n> +static bool\n> +fsm_allow_writes(Relation rel, BlockNumber heapblk,\n> + BlockNumber nblocks, BlockNumber *get_nblocks)\n>\n> + RelationOpenSmgr(rel);\n> + if (smgrexists(rel->rd_smgr, FSM_FORKNUM))\n> + return true;\n>\n> Isn't this like really expensive? mdexists() closes the relations and\n> reopens it from scratch. Shouldn't we at the very least check\n> smgr_fsm_nblocks beforehand, so this is only done once?\n\nI did this in an earlier patch above -- do you have an opinion about that?\n\nI also removed the call to smgrnblocks(smgr, MAIN_FORKNUM) from\nXLogRecordPageWithFreeSpace() because I don't think it's actually\nneeded. There's a window where a table could have 5 blocks, but trying\nto record space on, say, block 2 won't actually create the FSM on the\nstandby. When block 5 fills up enough, then the xlog call will cause\nthe FSM to be created. 
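As a concrete illustration of the uint32 one-bit-per-page idea discussed above, a minimal standalone sketch could look like the following. All names here are invented for illustration only and are not the actual patch code:

```c
/*
 * Hypothetical sketch of a one-bit-per-page local map. With a
 * realistic HEAP_FSM_CREATION_THRESHOLD (<= 32), a single uint32
 * can track one might-have-space bit per block, with no extra
 * relcache memory per relation.
 */
#include <assert.h>
#include <stdint.h>

typedef uint32_t local_fsm_map;

static void
local_map_set(local_fsm_map *map, unsigned blk)
{
	*map |= (uint32_t) 1 << blk;
}

static void
local_map_clear(local_fsm_map *map, unsigned blk)
{
	*map &= ~((uint32_t) 1 << blk);
}

/*
 * Search from the highest block downward, mirroring what
 * fsm_local_search() does; returns -1 when no block is marked.
 */
static int
local_map_search(local_fsm_map map, unsigned nblocks)
{
	int			blk;

	for (blk = (int) nblocks - 1; blk >= 0; blk--)
	{
		if (map & ((uint32_t) 1 << blk))
			return blk;
	}
	return -1;
}
```

The tradeoff noted upthread still applies: a bit can only say whether a block might have space, not how much, whereas a byte per block could store the same free-space category the on-disk FSM uses.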
Not sure if this is best, but it saves another\nsyscall, and this function is only called when freespace is less than\n20%, IIRC.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 26 Apr 2019 13:16:26 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Fri, Apr 26, 2019 at 11:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 25, 2019 at 12:39 PM John Naylor\n> <john.naylor@2ndquadrant.com> wrote:\n> >\n> > On Wed, Apr 24, 2019 at 1:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > <v2 patch>\n> >\n> > Sorry for not noticing earlier, but this patch causes a regression\n> > test failure for me (attached)\n> >\n>\n> Can you please try to finish the remaining work of the patch (I am bit\n> tied up with some other things)? I think the main thing apart from\n> representation of map as one-byte or one-bit per block is to implement\n> invalidation. Also, try to see if there is anything pending which I\n> might have missed. As discussed above, we need to issue an\n> invalidation for following points: (a) when vacuum finds there is no\n> FSM and page has more space now, I think you can detect this in\n> RecordPageWithFreeSpace (b) invalidation to notify the existence of\n> FSM, this can be done both by vacuum and backend.\n\nYes, I'll work on it.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 26 Apr 2019 13:18:09 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Fri, Apr 26, 2019 at 11:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> As discussed above, we need to issue an\n> invalidation for following points: (a) when vacuum finds there is no\n> FSM and page has more space now, I think you can detect this in\n> RecordPageWithFreeSpace\n\nI took a brief look and we'd have to know how much space was there\nbefore. That doesn't seem possible without first implementing the idea\nto save free space locally in the same way the FSM does. Even if we\nhave consensus on that, there's no code for it, and we're running out\nof time.\n\n> (b) invalidation to notify the existence of\n> FSM, this can be done both by vacuum and backend.\n\nI still don't claim to be anything but naive in this area, but does\nthe attached get us any closer?\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 30 Apr 2019 14:12:11 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On 2019-Apr-30, John Naylor wrote:\n\n> On Fri, Apr 26, 2019 at 11:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > As discussed above, we need to issue an\n> > invalidation for following points: (a) when vacuum finds there is no\n> > FSM and page has more space now, I think you can detect this in\n> > RecordPageWithFreeSpace\n> \n> I took a brief look and we'd have to know how much space was there\n> before. That doesn't seem possible without first implementing the idea\n> to save free space locally in the same way the FSM does. Even if we\n> have consensus on that, there's no code for it, and we're running out\n> of time.\n\nHmm ... so, if vacuum runs and frees up any space from any of the pages,\nthen it should send out an invalidation -- it doesn't matter what the\nFSM had, just that there is more free space now. That means every other\nprocess will need to determine a fresh FSM, but that seems correct.\nSounds better than keeping outdated entries indicating no-space-available.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 30 Apr 2019 10:22:28 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 7:52 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Apr-30, John Naylor wrote:\n>\n> > On Fri, Apr 26, 2019 at 11:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > As discussed above, we need to issue an\n> > > invalidation for following points: (a) when vacuum finds there is no\n> > > FSM and page has more space now, I think you can detect this in\n> > > RecordPageWithFreeSpace\n> >\n> > I took a brief look and we'd have to know how much space was there\n> > before. That doesn't seem possible without first implementing the idea\n> > to save free space locally in the same way the FSM does. Even if we\n> > have consensus on that, there's no code for it, and we're running out\n> > of time.\n>\n> Hmm ... so, if vacuum runs and frees up any space from any of the pages,\n> then it should send out an invalidation -- it doesn't matter what the\n> FSM had, just that there is more free space now. That means every other\n> process will need to determine a fresh FSM,\n>\n\nI think you intend to say the local space map, because once the FSM is\ncreated we will send an invalidation, and we won't further build relcache\nentries having a local space map.\n\n> but that seems correct.\n> Sounds better than keeping outdated entries indicating no-space-available.\n>\n\nAgreed, but as mentioned in one of the above emails, I am also a bit\nscared that it could lead to many invalidation messages for small\nrelations, so maybe we should send the invalidation message only when\nthe entire page is empty.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 1 May 2019 09:13:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 6:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 30, 2019 at 2:24 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > insert into atacc1 values (21, 22, 23);\n> > +ERROR: could not read block 0 in file \"base/16384/31379\": read only\n> > 0 of 8192 bytes\n> >\n> > I have analysed this failure. Seems that we have not reset the\n> > rel->fsm_local_map while truncating the relation pages by vacuum\n> > (lazy_truncate_heap). So when next time while accessing it we are\n> > getting the error. I think we need a mechanism to invalidate this\n> > when we truncate the relation pages. I am not sure whether we should\n> > invalidate the relcache entry here or just reset the\n> > rel->fsm_local_map?\n> >\n>\n> Thanks, this appears to be the missing case where we need to\n> invalidate the cache. So, as discussed above if we issue invalidation\n> call (in RecordPageWithFreeSpace) when the page becomes empty, then we\n> shouldn't encounter this. John, can we try this out and see if the\n> failure goes away?\n\nI added a clear/inval call in RecordPageWithFreeSpace and the failure\ngoes away. Thanks for the analysis, Dilip!\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 1 May 2019 12:08:09 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 12:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 26, 2019 at 10:46 AM John Naylor\n> <john.naylor@2ndquadrant.com> wrote:\n> >\n> > On Wed, Apr 17, 2019 at 2:04 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > Hi,\n> > >\n> > > I'm somewhat unhappy in how much the no-fsm-for-small-rels exposed\n> > > complexity that looks like it should be purely in freespacemap.c to\n> > > callers.\n> >\n> > I took a stab at untying the free space code from any knowledge about\n> > heaps, and made it the responsibility of each access method that calls\n> > these free space routines to specify their own threshold (possibly\n> > zero). The attached applies on top of HEAD, because it's independent\n> > of the relcache logic being discussed. If I can get that working, the\n> > I'll rebase it on top of this API, if you like.\n> >\n> > > extern Size GetRecordedFreeSpace(Relation rel, BlockNumber heapBlk);\n> > > -extern BlockNumber GetPageWithFreeSpace(Relation rel, Size spaceNeeded);\n> > > +extern BlockNumber GetPageWithFreeSpace(Relation rel, Size spaceNeeded,\n> > > + bool check_fsm_only);\n> > >\n> > > So now freespace.c has an argument that says we should only check the\n> > > fsm. That's confusing. And it's not explained to callers what that\n> > > argument means, and when it should be set.\n> >\n> > I split this up into 2 routines: GetPageWithFreeSpace() is now exactly\n> > like it is in v11, and GetAlternatePage() is available for access\n> > methods that can use it.\n> >\n>\n> I don't much like the new function name GetAlternatePage, may be\n> GetPageFromLocalFSM or something like that. OTOH, I am not sure if we\n> should go that far to address this concern of Andres's, maybe just\n> adding a proper comment is sufficient.\n\nThat's a clearer name. 
I think 2 functions is easier to follow than\nthe boolean parameter.\n\n> > Putting the thresholds in 3 files with completely different purposes\n> > is a mess, and serves no example for future access methods, but I\n> > don't have a better idea.\n> >\n>\n> Yeah, I am also not sure if it is a good idea because it still won't\n> be easy for pluggable storage especially the pg_upgrade part. I think\n> if we really want to make it easy for pluggable storage to define\n> this, then we might need to build something along the lines of how to\n> estimate relation size works.\n>\n> See how table_relation_estimate_size is defined and used\n> and TableAmRoutine heapam_methods\n> {\n> ..\n> relation_estimate_size\n> }\n\nThat might be the best way for table ams, but I guess we'd still need\nto keep the hard-coding for indexes to always have a FSM. That might\nnot be too bad.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 1 May 2019 12:27:43 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Wed, May 1, 2019 at 9:57 AM John Naylor <john.naylor@2ndquadrant.com> wrote:\n>\n> On Tue, Apr 30, 2019 at 12:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Apr 26, 2019 at 10:46 AM John Naylor\n> > <john.naylor@2ndquadrant.com> wrote:\n> > I don't much like the new function name GetAlternatePage, may be\n> > GetPageFromLocalFSM or something like that. OTOH, I am not sure if we\n> > should go that far to address this concern of Andres's, maybe just\n> > adding a proper comment is sufficient.\n>\n> That's a clearer name. I think 2 functions is easier to follow than\n> the boolean parameter.\n>\n\nOkay, but then add a few comments where you are calling that function.\n\n> > > Putting the thresholds in 3 files with completely different purposes\n> > > is a mess, and serves no example for future access methods, but I\n> > > don't have a better idea.\n> > >\n> >\n> > Yeah, I am also not sure if it is a good idea because it still won't\n> > be easy for pluggable storage especially the pg_upgrade part. I think\n> > if we really want to make it easy for pluggable storage to define\n> > this, then we might need to build something along the lines of how to\n> > estimate relation size works.\n> >\n> > See how table_relation_estimate_size is defined and used\n> > and TableAmRoutine heapam_methods\n> > {\n> > ..\n> > relation_estimate_size\n> > }\n>\n> That might be the best way for table ams, but I guess we'd still need\n> to keep the hard-coding for indexes to always have a FSM. That might\n> not be too bad.\n>\n\nI also think so.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 1 May 2019 10:05:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Wed, May 1, 2019 at 11:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 30, 2019 at 7:52 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > but that seems correct.\n> > Sounds better than keeping outdated entries indicating no-space-available.\n>\n> Agreed, but as mentioned in one of the above emails, I am also bit\n> scared that it should not lead to many invalidation messages for small\n> relations, so may be we should send the invalidation message only when\n> the entire page is empty.\n\nOne way would be to send the inval if the new free space is greater\nthan some percentage of BLCKSZ.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 1 May 2019 13:19:14 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-18 14:10:29 -0700, Andres Freund wrote:\n> My compromise suggestion would be to try to give John and Amit ~2 weeks\n> to come up with a cleanup proposal, and then decide whether to 1) revert\n> 2) apply the new patch, 3) decide to live with the warts for 12, and\n> apply the patch in 13. As we would already have a patch, 3) seems like\n> it'd be more tenable than without.\n\nI think decision time has come. My tentative impression is that we're\nnot there yet, and should revert the improvements in v12, and apply the\nimproved version early in v13. As a second choice, we should live with\nthe current approach, if John and Amit \"promise\" further effort to clean\nthis up for v13.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 May 2019 08:24:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Wed, May 1, 2019 at 08:24:25AM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2019-04-18 14:10:29 -0700, Andres Freund wrote:\n> > My compromise suggestion would be to try to give John and Amit ~2 weeks\n> > to come up with a cleanup proposal, and then decide whether to 1) revert\n> > 2) apply the new patch, 3) decide to live with the warts for 12, and\n> > apply the patch in 13. As we would already have a patch, 3) seems like\n> > it'd be more tenable than without.\n> \n> I think decision time has come. My tentative impression is that we're\n> not there yet, and should revert the improvements in v12, and apply the\n> improved version early in v13. As a second choice, we should live with\n> the current approach, if John and Amit \"promise\" further effort to clean\n> this up for v13.\n\nMy ignorant opinion is that I have been surprised by the churn caused by\nthis change, and have therefore questioned the value of it. Frankly,\nthere has been so much churn I am unclear if it can be easily reverted.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 1 May 2019 11:28:11 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-01 11:28:11 -0400, Bruce Momjian wrote:\n> On Wed, May 1, 2019 at 08:24:25AM -0700, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2019-04-18 14:10:29 -0700, Andres Freund wrote:\n> > > My compromise suggestion would be to try to give John and Amit ~2 weeks\n> > > to come up with a cleanup proposal, and then decide whether to 1) revert\n> > > 2) apply the new patch, 3) decide to live with the warts for 12, and\n> > > apply the patch in 13. As we would already have a patch, 3) seems like\n> > > it'd be more tenable than without.\n> >\n> > I think decision time has come. My tentative impression is that we're\n> > not there yet, and should revert the improvements in v12, and apply the\n> > improved version early in v13. As a second choice, we should live with\n> > the current approach, if John and Amit \"promise\" further effort to clean\n> > this up for v13.\n>\n> My ignorant opinion is that I have been surprised by the churn caused by\n> this change, and have therefore questioned the value of it.\n\nHm, I don't think there has been that much churn? Sure, there was a\nrevert to figure out a regression test instability, but that doesn't\nseem that bad. 
Relevant commits in date order are:\n\n\nandres-classification: cleanup\ncommit 06c8a5090ed9ec188557a86d4de11384f5128ec0\nAuthor: Amit Kapila <akapila@postgresql.org>\nDate: 2019-03-16 06:55:56 +0530\n\n Improve code comments in b0eaa4c51b.\n\n Author: John Naylor\n Discussion: https://postgr.es/m/CACPNZCswjyGJxTT=mxHgK=Z=mJ9uJ4WEx_UO=bNwpR_i0EaHHg@mail.gmail.com\n\n\nandres-classification: incremental improvement\ncommit 13e8643bfc29d3c1455c0946281cdfc24758ffb6\nAuthor: Amit Kapila <akapila@postgresql.org>\nDate: 2019-03-15 08:25:57 +0530\n\n During pg_upgrade, conditionally skip transfer of FSMs.\n\n\nandres-classification: additional tests\ncommit 6f918159a97acf76ee2512e44f5ed5dcaaa0d923\nAuthor: Amit Kapila <akapila@postgresql.org>\nDate: 2019-03-12 08:14:28 +0530\n\n Add more tests for FSM.\n\n\nandres-classification: cleanup\ncommit a6e48da08844eeb5a72c8b59dad3aaab6e891fac\nAuthor: Amit Kapila <akapila@postgresql.org>\nDate: 2019-03-11 08:16:14 +0530\n\n Fix typos in commit 8586bf7ed8.\n\n\nandres-classification: bugfix\ncommit 9c32e4c35026bd52aaf340bfe7594abc653e42f0\nAuthor: Amit Kapila <akapila@postgresql.org>\nDate: 2019-03-01 07:38:47 +0530\n\n Clear the local map when not used.\n\n\nandres-classification: docs addition\ncommit 29d108cdecbe918452e70041d802cc515b2d56b8\nAuthor: Amit Kapila <akapila@postgresql.org>\nDate: 2019-02-20 17:37:39 +0530\n\n Doc: Update the documentation for FSM behavior for small tables.\n\n\nandres-classification: regression test stability\ncommit 08ecdfe7e5e0a31efbe1d58fefbe085b53bc79ca\nAuthor: Amit Kapila <akapila@postgresql.org>\nDate: 2019-02-04 10:08:29 +0530\n\n Make FSM test portable.\n\n\nandres-classification: feature\ncommit b0eaa4c51bbff3e3c600b11e5d104d6feb9ca77f\nAuthor: Amit Kapila <akapila@postgresql.org>\nDate: 2019-02-04 07:49:15 +0530\n\n Avoid creation of the free space map for small heap relations, take 2.\n\n\nSo sure, there's a few typo fixes, one bugfix, and one buildfarm test\nstability issue. 
Doesn't seem crazy for a nontrivial improvement.\n\n\n> Frankly, there has been so much churn I am unclear if it can be easily reverted.\n\nDoesn't seem that hard? There's some minor conflicts, but nothing bad?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 May 2019 09:08:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Wed, May 1, 2019 at 09:08:54AM -0700, Andres Freund wrote:\n> So sure, there's a few typo fixes, one bugfix, and one buildfarm test\n> stability issue. Doesn't seem crazy for a nontrivial improvement.\n\nOK, my ignorant opinion was just based on the length of discussion\nthreads.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 1 May 2019 12:17:45 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On 2019-May-01, Amit Kapila wrote:\n\n> On Tue, Apr 30, 2019 at 7:52 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> > Hmm ... so, if vacuum runs and frees up any space from any of the pages,\n> > then it should send out an invalidation -- it doesn't matter what the\n> > FSM had, just that there is more free space now. That means every other\n> > process will need to determine a fresh FSM,\n> \n> I think you intend to say the local space map because once FSM is\n> created we will send invalidation and we won't further build relcache\n> entry having local space map.\n\nYeah, I mean the map that records free space.\n\n> > but that seems correct. Sounds better than keeping outdated entries\n> > indicating no-space-available.\n> \n> Agreed, but as mentioned in one of the above emails, I am also bit\n> scared that it should not lead to many invalidation messages for small\n> relations, so may be we should send the invalidation message only when\n> the entire page is empty.\n\nI don't think that's a concern, is it? You typically won't be running\nmultiple vacuums per second, or even multiple vacuums per minute.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 1 May 2019 15:21:24 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Wed, May 1, 2019 at 11:24 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-04-18 14:10:29 -0700, Andres Freund wrote:\n> > My compromise suggestion would be to try to give John and Amit ~2 weeks\n> > to come up with a cleanup proposal, and then decide whether to 1) revert\n> > 2) apply the new patch, 3) decide to live with the warts for 12, and\n> > apply the patch in 13. As we would already have a patch, 3) seems like\n> > it'd be more tenable than without.\n>\n> I think decision time has come. My tentative impression is that we're\n> not there yet, and should revert the improvements in v12, and apply the\n> improved version early in v13. As a second choice, we should live with\n> the current approach, if John and Amit \"promise\" further effort to clean\n> this up for v13.\n\nYes, the revised approach is not currently as mature as the one in\nHEAD. It's not ready. Not wanting to attempt Promise Driven\nDevelopment, I'd rather revert, and only try again if there's enough\ntime and interest.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 2 May 2019 10:06:45 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Thu, May 2, 2019 at 3:42 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-May-01, Amit Kapila wrote:\n>\n> > On Tue, Apr 30, 2019 at 7:52 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> > > Hmm ... so, if vacuum runs and frees up any space from any of the pages,\n> > > then it should send out an invalidation -- it doesn't matter what the\n> > > FSM had, just that there is more free space now. That means every other\n> > > process will need to determine a fresh FSM,\n> >\n> > I think you intend to say the local space map because once FSM is\n> > created we will send invalidation and we won't further build relcache\n> > entry having local space map.\n>\n> Yeah, I mean the map that records free space.\n>\n> > > but that seems correct. Sounds better than keeping outdated entries\n> > > indicating no-space-available.\n> >\n> > Agreed, but as mentioned in one of the above emails, I am also bit\n> > scared that it should not lead to many invalidation messages for small\n> > relations, so may be we should send the invalidation message only when\n> > the entire page is empty.\n>\n> I don't think that's a concern, is it? You typically won't be running\n> multiple vacuums per second, or even multiple vacuums per minute.\n>\n\nThat's right. So let's try by adding invalidation call whenever space\nis reduced. Is there a good way to test whether the new invalidation\ncalls added by this patch has any significant impact?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 2 May 2019 08:54:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Thu, May 2, 2019 at 7:36 AM John Naylor <john.naylor@2ndquadrant.com> wrote:\n>\n> On Wed, May 1, 2019 at 11:24 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2019-04-18 14:10:29 -0700, Andres Freund wrote:\n> > > My compromise suggestion would be to try to give John and Amit ~2 weeks\n> > > to come up with a cleanup proposal, and then decide whether to 1) revert\n> > > 2) apply the new patch, 3) decide to live with the warts for 12, and\n> > > apply the patch in 13. As we would already have a patch, 3) seems like\n> > > it'd be more tenable than without.\n> >\n> > I think decision time has come. My tentative impression is that we're\n> > not there yet,\n\nYou are right that patch is not in committable shape, but the patch to\nmove the map to relcache is presented and the main work left there is\nto review/test and add the invalidation calls as per discussion. It\nis just that I don't want to that in haste leading to some other\nproblems. So, that patch should not take too much time and will\nresolve the main complaint. Basically, I was planning to re-post that\npatch as the discussion concludes between me and Alvaro and then\nprobably you can also look into it once to see if that addresses the\nmain complaint. There are a few other points for which John has\nprepared a patch and that might need some work based on your inputs.\n\n>> and should revert the improvements in v12, and apply the\n> > improved version early in v13. As a second choice, we should live with\n> > the current approach, if John and Amit \"promise\" further effort to clean\n> > this up for v13.\n>\n> Yes, the revised approach is not currently as mature as the one in\n> HEAD. It's not ready. 
Not wanting to attempt Promise Driven\n> Development, I'd rather revert, and only try again if there's enough\n> time and interest.\n>\n\nI can certainly help with moving patch (for cleanup) forward.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 2 May 2019 09:16:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 11:42 AM John Naylor\n<john.naylor@2ndquadrant.com> wrote:\n>\n> On Fri, Apr 26, 2019 at 11:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > As discussed above, we need to issue an\n> > invalidation for following points: (a) when vacuum finds there is no\n> > FSM and page has more space now, I think you can detect this in\n> > RecordPageWithFreeSpace\n>\n> I took a brief look and we'd have to know how much space was there\n> before. That doesn't seem possible without first implementing the idea\n> to save free space locally in the same way the FSM does. Even if we\n> have consensus on that, there's no code for it, and we're running out\n> of time.\n>\n> > (b) invalidation to notify the existence of\n> > FSM, this can be done both by vacuum and backend.\n>\n> I still don't claim to be anything but naive in this area, but does\n> the attached get us any closer?\n>\n\n@@ -776,7 +776,10 @@ fsm_extend(Relation rel, BlockNumber fsm_nblocks)\n if ((rel->rd_smgr->smgr_fsm_nblocks == 0 ||\n rel->rd_smgr->smgr_fsm_nblocks == InvalidBlockNumber) &&\n !smgrexists(rel->rd_smgr, FSM_FORKNUM))\n+ {\n smgrcreate(rel->rd_smgr, FSM_FORKNUM, false);\n+ fsm_clear_local_map(rel);\n+ }\n\nI think this won't be correct because when we call fsm_extend via\nvacuum the local map won't be already existing, so it won't issue an\ninvalidation call. Isn't it better to directly call\nCacheInvalidateRelcache here to notify other backends that their local\nmaps are invalid now?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 2 May 2019 12:01:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Thu, May 2, 2019 at 2:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> @@ -776,7 +776,10 @@ fsm_extend(Relation rel, BlockNumber fsm_nblocks)\n> if ((rel->rd_smgr->smgr_fsm_nblocks == 0 ||\n> rel->rd_smgr->smgr_fsm_nblocks == InvalidBlockNumber) &&\n> !smgrexists(rel->rd_smgr, FSM_FORKNUM))\n> + {\n> smgrcreate(rel->rd_smgr, FSM_FORKNUM, false);\n> + fsm_clear_local_map(rel);\n> + }\n>\n> I think this won't be correct because when we call fsm_extend via\n> vacuum the local map won't be already existing, so it won't issue an\n> invalidation call. Isn't it better to directly call\n> CacheInvalidateRelcache here to notify other backends that their local\n> maps are invalid now?\n\nYes, you're quite correct.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 2 May 2019 15:09:26 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Thu, May 2, 2019 at 12:39 PM John Naylor <john.naylor@2ndquadrant.com> wrote:\n>\n> On Thu, May 2, 2019 at 2:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > @@ -776,7 +776,10 @@ fsm_extend(Relation rel, BlockNumber fsm_nblocks)\n> > if ((rel->rd_smgr->smgr_fsm_nblocks == 0 ||\n> > rel->rd_smgr->smgr_fsm_nblocks == InvalidBlockNumber) &&\n> > !smgrexists(rel->rd_smgr, FSM_FORKNUM))\n> > + {\n> > smgrcreate(rel->rd_smgr, FSM_FORKNUM, false);\n> > + fsm_clear_local_map(rel);\n> > + }\n> >\n> > I think this won't be correct because when we call fsm_extend via\n> > vacuum the local map won't be already existing, so it won't issue an\n> > invalidation call. Isn't it better to directly call\n> > CacheInvalidateRelcache here to notify other backends that their local\n> > maps are invalid now?\n>\n> Yes, you're quite correct.\n>\n\nOkay, I have updated the patch to incorporate your changes and call\nrelcache invalidation at required places. I have updated comments at a\nfew places as well. The summarization of this patch is that (a) it\nmoves the local map to relation cache (b) performs the cache\ninvalidation whenever we create fsm (either via backend or vacuum),\nwhen some space in a page is freed by vacuum (provided fsm doesn't\nexist) or whenever the local map is cleared (c) additionally, we clear\nthe local map when we found a block from FSM, when we have already\ntried all the blocks present in cache or when we are allowed to create\nFSM.\n\nIf we agree on this, then we can update the README accordingly.\n\nCan you please test/review?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 2 May 2019 14:26:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Thu, May 2, 2019 at 4:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 2, 2019 at 12:39 PM John Naylor <john.naylor@2ndquadrant.com> wrote:\n> >\n> Okay, I have updated the patch to incorporate your changes and call\n> relcache invalidation at required places. I have updated comments at a\n> few places as well. The summarization of this patch is that (a) it\n> moves the local map to relation cache (b) performs the cache\n> invalidation whenever we create fsm (either via backend or vacuum),\n> when some space in a page is freed by vacuum (provided fsm doesn't\n> exist) or whenever the local map is cleared (c) additionally, we clear\n> the local map when we found a block from FSM, when we have already\n> tried all the blocks present in cache or when we are allowed to create\n> FSM.\n>\n> If we agree on this, then we can update the README accordingly.\n>\n> Can you please test/review?\n\nThere isn't enough time. But since I already wrote some debugging\ncalls earlier (attached), I gave it a brief spin, I found this patch\nisn't as careful as HEAD making sure we don't try the same block twice\nin a row. If you insert enough tuples into an empty table such that we\nneed to extend, you get something like this:\n\nDEBUG: Not enough space on block 0\nDEBUG: Now trying block 0\nDEBUG: Not enough space on block 0\nDEBUG: Updating local map for block 0\n\nAt this point, I'm sorry to say, but I'm in favor of reverting. There\njust wasn't enough time to redesign and debug a feature like this\nduring feature freeze.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 3 May 2019 14:12:48 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Fri, May 3, 2019 at 11:43 AM John Naylor <john.naylor@2ndquadrant.com> wrote:\n> On Thu, May 2, 2019 at 4:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Thu, May 2, 2019 at 12:39 PM John Naylor <john.naylor@2ndquadrant.com> wrote:\n> > >\n> > Can you please test/review?\n>\n> There isn't enough time. But since I already wrote some debugging\n> calls earlier (attached), I gave it a brief spin, I found this patch\n> isn't as careful as HEAD making sure we don't try the same block twice\n> in a row. If you insert enough tuples into an empty table such that we\n> need to extend, you get something like this:\n>\n> DEBUG: Not enough space on block 0\n> DEBUG: Now trying block 0\n> DEBUG: Not enough space on block 0\n> DEBUG: Updating local map for block 0\n>\n> At this point, I'm sorry to say, but I'm in favor of reverting.\n>\n\nFair enough. I think we have tried to come up with a patch for an\nalternative approach, but it needs time. I will revert this tomorrow.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 3 May 2019 14:14:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Fri, May 3, 2019 at 2:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, May 3, 2019 at 11:43 AM John Naylor <john.naylor@2ndquadrant.com> wrote:\n>\n> Fair enough. I think we have tried to come up with a patch for an\n> alternative approach, but it needs time. I will revert this tomorrow.\n>\n\nAttached is a revert patch. John, can you please once double-check to\nensure I have not missed anything?\n\nTo summarize for everyone: This patch avoids the fsm creation for\nsmall relations (which is a small but good improvement as it saves\nspace). This patch was using a process local map to track the first\nfew blocks and was reset as soon as we get the block with enough free\nspace. It was discussed in this thread that it would be better to\ntrack the local map in relcache and then invalidate it whenever vacuum\nfrees up space in the page or when FSM is created. There is a\nprototype patch written for the same, but it is not 100% clear to me\nthat the new idea would be a win in all cases (like code complexity or\nAPI design-wise) especially because resetting the map is not\nstraight-forward. As time was not enough, we couldn't complete the\npatch from all aspects to see if it is really better in all cases.\n\nWe have two options (a) revert this patch and try the new approach in\nnext release, (b) keep the current patch and replace with the new\napproach if it turns out to be better in next release.\n\nSo, do we want to keep this feature for this release?\n\nI am fine going with option (a), that's why I have prepared a revert\npatch, but I have a slight fear that the other option might not turn\nout to be better and even if it is then we can anyway replace it as\nshown in the prototype, so going with option (b) doesn't sound to be\ndumb.\n\nAnybody else wants to weigh in?\n\n\n\n\n--\nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sat, 4 May 2019 14:55:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Sat, May 4, 2019 at 5:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Attached is a revert patch. John, can you please once double-check to\n> ensure I have not missed anything?\n\nLooks complete to me.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 5 May 2019 06:58:18 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Sat, May 4, 2019 at 2:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, May 3, 2019 at 2:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, May 3, 2019 at 11:43 AM John Naylor <john.naylor@2ndquadrant.com> wrote:\n>\n> Attached is a revert patch. John, can you please once double-check to\n> ensure I have not missed anything?\n>\n> To summarize for everyone: This patch avoids the fsm creation for\n> small relations (which is a small but good improvement as it saves\n> space). This patch was using a process local map to track the first\n> few blocks and was reset as soon as we get the block with enough free\n> space. It was discussed in this thread that it would be better to\n> track the local map in relcache and then invalidate it whenever vacuum\n> frees up space in the page or when FSM is created. There is a\n> prototype patch written for the same, but it is not 100% clear to me\n> that the new idea would be a win in all cases (like code complexity or\n> API design-wise) especially because resetting the map is not\n> straight-forward. 
As time was not enough, we couldn't complete the\n> patch from all aspects to see if it is really better in all cases.\n>\n> We have two options (a) revert this patch and try the new approach in\n> next release, (b) keep the current patch and replace with the new\n> approach if it turns out to be better in next release.\n>\n> So, do we want to keep this feature for this release?\n>\n> I am fine going with option (a), that's why I have prepared a revert\n> patch, but I have a slight fear that the other option might not turn\n> out to be better and even if it is then we can anyway replace it as\n> shown in the prototype, so going with option (b) doesn't sound to be\n> dumb.\n>\n> Anybody else wants to weigh in?\n>\n\nI understand that we have to take a call here shortly, but as there is\na weekend so I would like to wait for another day to see if anyone\nelse wants to share his opinion.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 5 May 2019 18:55:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-05 18:55:30 +0530, Amit Kapila wrote:\n> On Sat, May 4, 2019 at 2:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, May 3, 2019 at 2:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I am fine going with option (a), that's why I have prepared a revert\n> > patch, but I have a slight fear that the other option might not turn\n> > out to be better and even if it is then we can anyway replace it as\n> > shown in the prototype, so going with option (b) doesn't sound to be\n> > dumb.\n\nI don't think we realistically can \"anyway replace it as shown in the\nprototype\" - especially not if we discover we'd need to do so after (or\neven close) to 12's release.\n\n\n> I understand that we have to take a call here shortly, but as there is\n> a weekend so I would like to wait for another day to see if anyone\n> else wants to share his opinion.\n\nI still think that's the right course. I've previously stated that, so\nI'm probably not fulfilling the \"anyone else\" criterion though.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 May 2019 07:58:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-05 18:55:30 +0530, Amit Kapila wrote:\n>> I understand that we have to take a call here shortly, but as there is\n>> a weekend so I would like to wait for another day to see if anyone\n>> else wants to share his opinion.\n\n> I still think that's the right course. I've previously stated that, so\n> I'm probably not fulfilling the \"anyone else\" criterion though.\n\nI also prefer \"revert and try again in v13\", but I'm not \"anyone else\"\neither ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 May 2019 11:03:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Sun, May 5, 2019 at 9:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I understand that we have to take a call here shortly, but as there is\n> a weekend so I would like to wait for another day to see if anyone\n> else wants to share his opinion.\n\nI haven't looked deeply into the issues with this patch, but it seems\nto me that if two other committers is saying that this should be\nreverted, and the original author of the patch is agreeing, and your\npatch to try to fix it still has demonstrable bugs ... it's time to\ngive up. We're well past feature freeze at this point.\n\nSome other random comments:\n\nI'm really surprised that the original design of this patch involved\nstoring state in global variables. That seems like a pretty poor\ndecision. This is properly per-relation information, and any approach\nlike that isn't going to work well when there are multiple relations\ninvolved, unless the information is only being used for a single\nattempt to find a free page, in which case it should use parameters\nand function-local variables, not globals.\n\nI think it's legitimate to question whether sending additional\ninvalidation messages as part of the design of this feature is a good\nidea. If it happens frequently, it could trigger expensive sinval\nresets more often. I don't understand the various proposals well\nenough to know whether that's really a problem, but if you've got a\nlot of relations for which this optimization is in use, I'm not sure I\nsee why it couldn't be.\n\nI think at some point it was proposed that, since an FSM access\ninvolves touching 3 blocks, it ought to be fine for any relation of 4\nor fewer blocks to just check all the others. I don't really\nunderstand why we drifted off that design principle, because it seems\nlike a reasonable theory. 
Such an approach doesn't require anything\nin the relcache, any global variables, or an every-other-page\nalgorithm.\n\nIf we wanted to avoid creating the FSM for relation with >4 blocks, we\nmight want to take a step back and think about a whole different\napproach. For instance, we could have a cache in shared memory that\ncan store N entries, and not bother creating FSM forks until things no\nlonger fit in that cache. Or, what might be better, we could put FSM\ndata for many relations into a single FSM file, instead of having a\nseparate fork for each relation. I think that would get at what's\nreally driving this work: having a zillion tiny little FSM files\nsucks. Of course, those kinds of changes are far too big to\ncontemplate for v12, but they might be worth some thought in the\nfuture.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 6 May 2019 11:10:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-06 11:10:15 -0400, Robert Haas wrote:\n> I'm really surprised that the original design of this patch involved\n> storing state in global variables. That seems like a pretty poor\n> decision. This is properly per-relation information, and any approach\n> like that isn't going to work well when there are multiple relations\n> involved, unless the information is only being used for a single\n> attempt to find a free page, in which case it should use parameters\n> and function-local variables, not globals.\n\nI'm was too.\n\n\n> I think it's legitimate to question whether sending additional\n> invalidation messages as part of the design of this feature is a good\n> idea. If it happens frequently, it could trigger expensive sinval\n> resets more often. I don't understand the various proposals well\n> enough to know whether that's really a problem, but if you've got a\n> lot of relations for which this optimization is in use, I'm not sure I\n> see why it couldn't be.\n\nI don't think it's an actual problem. We'd only do so when creating an\nFSM, or when freeing up additional space that'd otherwise not be visible\nto other backends. The alternative to sinval would thus be a) not\ndiscovering there's free space and extending the relation b) checking\ndisk state for a new FSM all the time. Which are much more expensive.\n\n\n> I think at some point it was proposed that, since an FSM access\n> involves touching 3 blocks, it ought to be fine for any relation of 4\n> or fewer blocks to just check all the others. I don't really\n> understand why we drifted off that design principle, because it seems\n> like a reasonable theory. Such an approach doesn't require anything\n> in the relcache, any global variables, or an every-other-page\n> algorithm.\n\nIt's not that cheap to touch three heap blocks every time a new target\npage is needed. 
Requires determining at least the target relation size\nor the existence of the FSM fork.\n\nWe'll also commonly *not* end up touching 3 blocks in the FSM -\nespecially when there's actually no free space. And the FSM contents are\nmuch less contended than the heap pages - the hot paths don't update the\nFSM, and if so, the exclusive locks are held for a very short time only.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 May 2019 08:27:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Mon, May 6, 2019 at 11:27 AM Andres Freund <andres@anarazel.de> wrote:\n> > I think it's legitimate to question whether sending additional\n> > invalidation messages as part of the design of this feature is a good\n> > idea. If it happens frequently, it could trigger expensive sinval\n> > resets more often. I don't understand the various proposals well\n> > enough to know whether that's really a problem, but if you've got a\n> > lot of relations for which this optimization is in use, I'm not sure I\n> > see why it couldn't be.\n>\n> I don't think it's an actual problem. We'd only do so when creating an\n> FSM, or when freeing up additional space that'd otherwise not be visible\n> to other backends. The alternative to sinval would thus be a) not\n> discovering there's free space and extending the relation b) checking\n> disk state for a new FSM all the time. Which are much more expensive.\n\nNone of that addresses the question of the distributed cost of sending\nmore sinval messages. If you have a million little tiny relations and\nVACUUM goes through and clears one tuple out of each one, it will be\nspewing sinval messages really, really fast. How can that fail to\nthreaten extra sinval resets?\n\n> > I think at some point it was proposed that, since an FSM access\n> > involves touching 3 blocks, it ought to be fine for any relation of 4\n> > or fewer blocks to just check all the others. I don't really\n> > understand why we drifted off that design principle, because it seems\n> > like a reasonable theory. Such an approach doesn't require anything\n> > in the relcache, any global variables, or an every-other-page\n> > algorithm.\n>\n> It's not that cheap to touch three heap blocks every time a new target\n> page is needed. Requires determining at least the target relation size\n> or the existance of the FSM fork.\n>\n> We'll also commonly *not* end up touching 3 blocks in the FSM -\n> especially when there's actually no free space. 
And the FSM contents are\n> much less contended than the heap pages - the hot paths don't update the\n> FSM, and if so, the exclusive locks are held for a very short time only.\n\nWell, that seems like an argument that we just shouldn't do this at\nall. If the FSM is worthless for small relations, then eliding it\nmakes sense. But if having it is valuable even when the relation is\ntiny, then eliding it is the wrong thing to do, isn't it? The\nunderlying concerns that prompted this patch probably have to do with\neither [1] not wanting to have so many FSM forks on disk or [2] not\nwanting to consume 24kB of space to track free space for a relation\nthat may be only 8kB. I think those goals are valid, but if we accept\nyour argument then this is the wrong way to achieve them.\n\nI do find it a bit surprising that touching heap pages would be all\nthat much more expensive than touching FSM pages, but that doesn't\nmean that it isn't the case. I would also note that this algorithm\nought to beat the FSM algorithm in many cases where there IS space\navailable, because you'll often find some usable free space on the\nvery first page you try, which will never happen with the FSM. The\ncase where the pages are all full doesn't seem very important, because\nI don't see how you can stay in that situation for all that long.\nEach time it happens, the relation grows by a block immediately\nafterwards, and once it hits 5 blocks, it never happens again. I\nguess you could incur the overhead repeatedly if the relation starts\nout at 1 block, grows to 4, is vacuumed back down to 1, lather, rinse,\nrepeat, but is that actually realistic? It requires all the live\ntuples to live in block 0 at the beginning of each vacuum cycle, which\nseems like a fringe outcome.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 6 May 2019 11:52:12 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> ... I guess you could incur the overhead repeatedly if the relation starts\n> out at 1 block, grows to 4, is vacuumed back down to 1, lather, rinse,\n> repeat, but is that actually realistic?\n\nWhile I've not studied the patch, I assumed that once a relation has an\nFSM it won't disappear. Making it go away again if the relation gets\nshorter seems both fairly useless and a promising source of bugs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 May 2019 12:05:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Mon, May 6, 2019 at 12:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > ... I guess you could incur the overhead repeatedly if the relation starts\n> > out at 1 block, grows to 4, is vacuumed back down to 1, lather, rinse,\n> > repeat, but is that actually realistic?\n>\n> While I've not studied the patch, I assumed that once a relation has an\n> FSM it won't disappear. Making it go away again if the relation gets\n> shorter seems both fairly useless and a promising source of bugs.\n\nRight, I think so too. That's not what I as going for, though. I was\ntrying to discuss a scenario where the relation repeatedly grows,\nnever reaching the size at which the FSM would be created, and then is\nrepeatedly truncated again.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 6 May 2019 12:16:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-06 11:52:12 -0400, Robert Haas wrote:\n> On Mon, May 6, 2019 at 11:27 AM Andres Freund <andres@anarazel.de> wrote:\n> > > I think it's legitimate to question whether sending additional\n> > > invalidation messages as part of the design of this feature is a good\n> > > idea. If it happens frequently, it could trigger expensive sinval\n> > > resets more often. I don't understand the various proposals well\n> > > enough to know whether that's really a problem, but if you've got a\n> > > lot of relations for which this optimization is in use, I'm not sure I\n> > > see why it couldn't be.\n> >\n> > I don't think it's an actual problem. We'd only do so when creating an\n> > FSM, or when freeing up additional space that'd otherwise not be visible\n> > to other backends. The alternative to sinval would thus be a) not\n> > discovering there's free space and extending the relation b) checking\n> > disk state for a new FSM all the time. Which are much more expensive.\n> \n> None of that addresses the question of the distributed cost of sending\n> more sinval messages. If you have a million little tiny relations and\n> VACUUM goes through and clears one tuple out of each one, it will be\n> spewing sinval messages really, really fast. How can that fail to\n> threaten extra sinval resets?\n\nVacuum triggers sinval messages already (via the pg_class update),\nshouldn't be too hard to ensure that there's no duplicate ones in this\ncase.\n\n\n> > > I think at some point it was proposed that, since an FSM access\n> > > involves touching 3 blocks, it ought to be fine for any relation of 4\n> > > or fewer blocks to just check all the others. I don't really\n> > > understand why we drifted off that design principle, because it seems\n> > > like a reasonable theory. 
Such an approach doesn't require anything\n> > > in the relcache, any global variables, or an every-other-page\n> > > algorithm.\n> >\n> > It's not that cheap to touch three heap blocks every time a new target\n> > page is needed. Requires determining at least the target relation size\n> > or the existance of the FSM fork.\n> >\n> > We'll also commonly *not* end up touching 3 blocks in the FSM -\n> > especially when there's actually no free space. And the FSM contents are\n> > much less contended than the heap pages - the hot paths don't update the\n> > FSM, and if so, the exclusive locks are held for a very short time only.\n> \n> Well, that seems like an argument that we just shouldn't do this at\n> all. If the FSM is worthless for small relations, then eliding it\n> makes sense. But if having it is valuable even when the relation is\n> tiny, then eliding it is the wrong thing to do, isn't it?\n\nWhy? The problem with the entirely stateless proposal is just that we'd\ndo that every single time we need new space. If we amortize that cost\nacross multiple insertions, I don't think there's a problem?\n\n\n> I do find it a bit surprising that touching heap pages would be all\n> that much more expensive than touching FSM pages, but that doesn't\n> mean that it isn't the case. I would also note that this algorithm\n> ought to beat the FSM algorithm in many cases where there IS space\n> available, because you'll often find some usable free space on the\n> very first page you try, which will never happen with the FSM.\n\nNote that without additional state we do not *know* that the heap is 5\npages long, we have to do an smgrnblocks() - which is fairly\nexpensive. That's precisely why I want to keep state about a\nnon-existent FSM in the relcache, and why we'd need sinval messages to\nclear that. 
So we don't incur unnecessary syscalls when there's free\nspace.\n\nI completely agree that avoiding the FSM for the small-rels case has the\npotential to be faster, if we're not too naive about it. I think that\nmeans\n\n1) no checking of on-disk state for relation fork existence/sizes every\n time looking up a page with free space\n2) not re-scanning pages when we should know they're full (because we\n scanned them for the last target page, in a previous insert)\n3) ability to recognize concurrently freed space\n\n\n> The case where the pages are all full doesn't seem very important,\n> because I don't see how you can stay in that situation for all that\n> long. Each time it happens, the relation grows by a block immediately\n> afterwards, and once it hits 5 blocks, it never happens again.\n\n> I guess you could incur the overhead repeatedly if the relation starts\n> out at 1 block, grows to 4, is vacuumed back down to 1, lather, rinse,\n> repeat, but is that actually realistic? It requires all the live\n> tuples to live in block 0 at the beginning of each vacuum cycle, which\n> seems like a fringe outcome.\n\nI think it's much more likely to be encountered when there's a lot of\nchurn on a small table, but HOT pruning removes just about all the\nsuperfluous space on a regular basis. Then the relation might actually\nnever get > 4 blocks.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 May 2019 09:18:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Mon, May 6, 2019 at 12:18 PM Andres Freund <andres@anarazel.de> wrote:\n> > None of that addresses the question of the distributed cost of sending\n> > more sinval messages. If you have a million little tiny relations and\n> > VACUUM goes through and clears one tuple out of each one, it will be\n> > spewing sinval messages really, really fast. How can that fail to\n> > threaten extra sinval resets?\n>\n> Vacuum triggers sinval messages already (via the pg_class update),\n> shouldn't be too hard to ensure that there's no duplicate ones in this\n> case.\n\nYeah, if we can piggyback on the existing messages, then we can be\nconfident that we're not increasing the chances of sinval resets.\n\n> > Well, that seems like an argument that we just shouldn't do this at\n> > all. If the FSM is worthless for small relations, then eliding it\n> > makes sense. But if having it is valuable even when the relation is\n> > tiny, then eliding it is the wrong thing to do, isn't it?\n>\n> Why? The problem with the entirely stateless proposal is just that we'd\n> do that every single time we need new space. If we amortize that cost\n> across multiple insertions, I don't think there's a problem?\n\nHmm, I see.\n\n> Note that without additional state we do not *know* that the heap is 5\n> pages long, we have to do an smgrnblocks() - which is fairly\n> expensive. That's precisely why I want to keep state about a\n> non-existant FSM in the relcache, and why'd need sinval messages to\n> clear that. So we don't incur unnecessary syscalls when there's free\n> space.\n\nMakes sense.\n\n> > I guess you could incur the overhead repeatedly if the relation starts\n> > out at 1 block, grows to 4, is vacuumed back down to 1, lather, rinse,\n> > repeat, but is that actually realistic? 
It requires all the live\n> > tuples to live in block 0 at the beginning of each vacuum cycle, which\n> > seems like a fringe outcome.\n>\n> I think it's much more likely to be encountered when there's a lot of\n> churn on a small table, but HOT pruning removes just about all the\n> superflous space on a regular basis. Then the relation might actually\n> never get > 4 blocks.\n\nYeah, but if it leaves behind any tuples in block #3, the relation\nwill never be truncated. You can't repeatedly hit the\nall-blocks-are-full case without repeatedly extending the relation,\nand you can't repeatedly extend the relation without getting beyond 4\nblocks unless you are also truncating it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 6 May 2019 13:51:33 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Mon, May 6, 2019 at 8:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-05-05 18:55:30 +0530, Amit Kapila wrote:\n> >> I understand that we have to take a call here shortly, but as there is\n> >> a weekend so I would like to wait for another day to see if anyone\n> >> else wants to share his opinion.\n>\n> > I still think that's the right course. I've previously stated that, so\n> > I'm probably not fulfilling the \"anyone else\" criterion though.\n>\n> I also prefer \"revert and try again in v13\", but I'm not \"anyone else\"\n> either ...\n>\n\nFine, I will take care of that today.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 7 May 2019 08:40:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "On Mon, May 6, 2019 at 8:57 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-05-06 11:10:15 -0400, Robert Haas wrote:\n>\n> > I think it's legitimate to question whether sending additional\n> > invalidation messages as part of the design of this feature is a good\n> > idea. If it happens frequently, it could trigger expensive sinval\n> > resets more often. I don't understand the various proposals well\n> > enough to know whether that's really a problem, but if you've got a\n> > lot of relations for which this optimization is in use, I'm not sure I\n> > see why it couldn't be.\n>\n> I don't think it's an actual problem. We'd only do so when creating an\n> FSM, or when freeing up additional space that'd otherwise not be visible\n> to other backends.\n>\n\nThe other place we need to consider for this is when one of the\nbackends updates its map (due to unavailability of space in the\nexisting set of pages). We can choose not to send invalidation in\nthis case, but then different backends need to identify the same thing\nthemselves and reconstruct the map again.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 7 May 2019 11:37:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Mon, May 6, 2019 at 8:57 PM Andres Freund <andres@anarazel.de> wrote:\n>> On 2019-05-06 11:10:15 -0400, Robert Haas wrote:\n>>> I think it's legitimate to question whether sending additional\n>>> invalidation messages as part of the design of this feature is a good\n>>> idea.\n\n>> I don't think it's an actual problem. We'd only do so when creating an\n>> FSM, or when freeing up additional space that'd otherwise not be visible\n>> to other backends.\n\n> The other place we need to consider for this is when one of the\n> backends updates its map (due to unavailability of space in the\n> existing set of pages). We can choose not to send invalidation in\n> this case, but then different backends need to identify the same thing\n> themselves and reconstruct the map again.\n\nI'm inclined to wonder why bother with invals at all. The odds are\nquite good that no other backend will care (which, I imagine, is the\nreasoning behind why the original patch was designed like it was).\nA table that has a lot of concurrent write activity on it is unlikely\nto stay small enough to not have a FSM for long.\n\nThe approach I'm imagining here is not too different from Robert's\n\"just search the table's pages every time\" straw-man. Backends would\ncache the results of their own searches, but not communicate about it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 09:34:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-07 09:34:42 -0400, Tom Lane wrote:\n> I'm inclined to wonder why bother with invals at all. The odds are\n> quite good that no other backend will care (which, I imagine, is the\n> reasoning behind why the original patch was designed like it was).\n> A table that has a lot of concurrent write activity on it is unlikely\n> to stay small enough to not have a FSM for long.\n\nBut when updating the free space for the first four blocks, we're going\nto either have to do an smgrexists() to check whether somebody\nconcurrently created the FSM, or we might not update an existing FSM. An\ninval seems much better.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 May 2019 08:57:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-07 09:34:42 -0400, Tom Lane wrote:\n>> I'm inclined to wonder why bother with invals at all.\n\n> But when updating the free space for the first four blocks, we're going\n> to either have to do an smgrexists() to check whether somebody\n> concurrently created the FSM, or we might not update an existing FSM. An\n> inval seems much better.\n\nI do not think sinval messaging is going to be sufficient to avoid that\nproblem. sinval is only useful to tell you about changes if you first\ntake a lock strong enough to guarantee that no interesting change is\nhappening while you hold the lock. We are certainly not going to let\nwrites take an exclusive lock, so I don't see how we could be certain\nthat we've seen an sinval message telling us about FSM status change.\n\nThis seems tied into the idea we've occasionally speculated about\nof tracking relation sizes in shared memory to avoid lseek calls.\nIf we could solve the problems around that, it'd provide a cheap(er)\nway to detect whether an FSM should exist or not.\n\nA different way of looking at it is that the FSM data is imprecise\nby definition, therefore it doesn't matter that much if some backend\nis slow to realize that the FSM exists. That still doesn't lead me\nto think that sinval messaging is a component of the solution though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 12:04:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-07 12:04:11 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-05-07 09:34:42 -0400, Tom Lane wrote:\n> >> I'm inclined to wonder why bother with invals at all.\n> \n> > But when updating the free space for the first four blocks, we're going\n> > to either have to do an smgrexists() to check whether somebody\n> > concurrently created the FSM, or we might not update an existing FSM. An\n> > inval seems much better.\n> \n> I do not think sinval messaging is going to be sufficient to avoid that\n> problem. sinval is only useful to tell you about changes if you first\n> take a lock strong enough to guarantee that no interesting change is\n> happening while you hold the lock. We are certainly not going to let\n> writes take an exclusive lock, so I don't see how we could be certain\n> that we've seen an sinval message telling us about FSM status change.\n\nSure, but it'll be pretty darn close, rather than there basically not\nbeing any limit except backend lifetime to how long we might not notice\nthat we'd need to switch to the on-disk FSM.\n\n\n> This seems tied into the idea we've occasionally speculated about\n> of tracking relation sizes in shared memory to avoid lseek calls.\n> If we could solve the problems around that, it'd provide a cheap(er)\n> way to detect whether an FSM should exist or not.\n\nI'm back on working on a patch that provides that, FWIW. Not yet really\nspending time on it, but I've re-skimmed through the code I'd previously\nwritten.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 May 2019 09:06:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-07 12:04:11 -0400, Tom Lane wrote:\n>> I do not think sinval messaging is going to be sufficient to avoid that\n>> problem. sinval is only useful to tell you about changes if you first\n>> take a lock strong enough to guarantee that no interesting change is\n>> happening while you hold the lock. We are certainly not going to let\n>> writes take an exclusive lock, so I don't see how we could be certain\n>> that we've seen an sinval message telling us about FSM status change.\n\n> Sure, but it'll be pretty darn close, rather than there basically not\n> being any limit except backend lifetime to how long we might not notice\n> that we'd need to switch to the on-disk FSM.\n\nWhy do you think there's no limit? We ordinarily do\nRelationGetNumberOfBlocks at least once per query on a table, and\nI should think we could fix things so that a \"free\" side-effect of\nthat is to get the relcache entry updated with whether an FSM\nought to exist or not. So I think at worst we'd be one query behind.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 12:12:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-07 12:12:37 -0400, Tom Lane wrote:\n> Why do you think there's no limit? We ordinarily do\n> RelationGetNumberOfBlocks at least once per query on a table\n\nWell, for the main fork. Which already could have shrunk below the size\nthat led the FSM to be created. And we only do\nRelationGetNumberOfBlocks() when planning, right? Not when using\nprepared statements.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 May 2019 09:19:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Unhappy about API changes in the no-fsm-for-small-rels patch"
}
] |
[
{
"msg_contents": "I have found that log_planner_stats only outputs stats until the generic\nplan is chosen. For example, if you run the following commands:\n\n\tSET client_min_messages = 'log';\n\tSET log_planner_stats = TRUE;\n\t\n\tPREPARE e AS SELECT relkind FROM pg_class WHERE relname = $1 ORDER BY 1;\n\n\tEXPLAIN ANALYZE VERBOSE EXECUTE e ('pg_class');\n\tEXPLAIN ANALYZE VERBOSE EXECUTE e ('pg_class');\n\tEXPLAIN ANALYZE VERBOSE EXECUTE e ('pg_class');\n\tEXPLAIN ANALYZE VERBOSE EXECUTE e ('pg_class');\n\tEXPLAIN ANALYZE VERBOSE EXECUTE e ('pg_class');\n\tEXPLAIN ANALYZE VERBOSE EXECUTE e ('pg_class');\n-->\tEXPLAIN ANALYZE VERBOSE EXECUTE e ('pg_class');\n\nThe last explain will _not_ show any log_planner_stats duration, though\nit does show an EXPLAIN planning time:\n\n\t Planning Time: 0.012 ms\n\nIt this expected behavior?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 16 Apr 2019 23:33:07 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "log_planner_stats and prepared statements"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I have found that log_planner_stats only outputs stats until the generic\n> plan is chosen. For example, if you run the following commands:\n\nUh, well, the planner doesn't get run after that point ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Apr 2019 00:04:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: log_planner_stats and prepared statements"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 12:04:35AM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I have found that log_planner_stats only outputs stats until the generic\n> > plan is chosen. For example, if you run the following commands:\n> \n> Uh, well, the planner doesn't get run after that point ...\n\nYes, that was my analysis too, but EXPLAIN still prints a planner line,\nwith a duration:\n\n\t Planning Time: 0.674 ms\n\t Planning Time: 0.240 ms\n\t Planning Time: 0.186 ms\n\t Planning Time: 0.158 ms\n\t Planning Time: 0.159 ms\n\t Planning Time: 0.169 ms\n-->\t Planning Time: 0.012 ms\n\nIs that consistent? I just don't know. What is that line measuring?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 17 Apr 2019 09:11:20 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: log_planner_stats and prepared statements"
}
] |
[
{
"msg_contents": "Hello postgres hackers,\n\nRecently my colleagues and I encountered an issue: a standby can\nnot recover after an unclean shutdown and it's related to tablespace.\nThe issue is that the standby re-replay some xlog that needs tablespace\ndirectories (e.g. create a database with tablespace),\nbut the tablespace directories has already been removed in the\nprevious replay.\n\nIn details, the standby normally finishes replaying for the below\noperations, but due to unclean shutdown, the redo lsn\nis not updated in pg_control and is still kept a value before the 'create\ndb with tabspace' xlog, however since the tablespace\ndirectories were removed so it reports error when repay the database create\nwal.\n\ncreate db with tablespace\ndrop database\ndrop tablespace.\n\nHere is the log on the standby.\n2019-04-17 14:52:14.926 CST [23029] LOG: starting PostgreSQL 12devel on\nx86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat\n4.8.5-4), 64-bit\n2019-04-17 14:52:14.927 CST [23029] LOG: listening on IPv4 address\n\"192.168.35.130\", port 5432\n2019-04-17 14:52:14.929 CST [23029] LOG: listening on Unix socket\n\"/tmp/.s.PGSQL.5432\"\n2019-04-17 14:52:14.943 CST [23030] LOG: database system was interrupted\nwhile in recovery at log time 2019-04-17 14:48:27 CST\n2019-04-17 14:52:14.943 CST [23030] HINT: If this has occurred more than\nonce some data might be corrupted and you might need to choose an earlier\nrecovery target.\n2019-04-17 14:52:14.949 CST [23030] LOG: entering standby mode\n\n2019-04-17 14:52:14.950 CST [23030] LOG: redo starts at 0/30105B8\n\n2019-04-17 14:52:14.951 CST [23030] FATAL: could not create directory\n\"pg_tblspc/65546/PG_12_201904072/65547\": No such file or directory\n2019-04-17 14:52:14.951 CST [23030] CONTEXT: WAL redo at 0/3011650 for\nDatabase/CREATE: copy dir 1663/1 to 65546/65547\n2019-04-17 14:52:14.951 CST [23029] LOG: startup process (PID 23030)\nexited with exit code 1\n2019-04-17 14:52:14.951 CST [23029] 
LOG: terminating any other active\nserver processes\n2019-04-17 14:52:14.953 CST [23029] LOG: database system is shut down\n\n\nSteps to reproduce:\n\n1. Set up a master and standby.\n2. On both sides, run: mkdir /tmp/some_isolation2_pg_basebackup_tablespace\n\n3. Run these SQLs:\ndrop tablespace if exists some_isolation2_pg_basebackup_tablespace;\ncreate tablespace some_isolation2_pg_basebackup_tablespace location\n'/tmp/some_isolation2_pg_basebackup_tablespace';\n\n4. Cleanly shut down and restart both postgres instances.\n\n5. Run the following SQLs:\n\ndrop database if exists some_database_with_tablespace;\ncreate database some_database_with_tablespace tablespace\nsome_isolation2_pg_basebackup_tablespace;\ndrop database some_database_with_tablespace;\ndrop tablespace some_isolation2_pg_basebackup_tablespace;\n\\! pkill -9 postgres; ssh host70 pkill -9 postgres\n\nNote that an immediate shutdown via pg_ctl should also be able to reproduce\nthis, and the above steps probably do not reproduce it 100% of the time.\n\nI created an initial patch for this issue (see the attachment). The idea is\nre-creating those directories recursively. The above issue exists\nin dbase_redo(),\nbut TablespaceCreateDbspace (for relation file create redo) is probably\nbuggy also, so I modified that function as well. Even if there is no bug\nin that function, it seems that using a simple pg_mkdir_p() is cleaner. Note\nthat reading TablespaceCreateDbspace(), it seems this issue\nhas already been considered, though insufficiently; but frankly this solution\n(directory recreation) seems imperfect, given that this should really\nhave been the responsibility of tablespace creation (also, tablespace\ncreation does more, like symlink creation, etc.). 
Also, I'm not sure whether\nwe need to use the invalid page mechanism (see xlogutils.c).\n\nAnother solution is that, since we actually create a checkpoint for\ncreatedb/movedb/dropdb/droptablespace, maybe we should force creating a\nrestartpoint on the standby for such special kinds of checkpoint WAL - that\nmeans we need to set a flag in the checkpoint WAL and let the checkpoint redo\ncode create a restartpoint if that flag is set. This solution seems to be\nsafer.\n\nThanks,\nPaul",
"msg_date": "Wed, 17 Apr 2019 15:56:30 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 1:27 PM Paul Guo <pguo@pivotal.io> wrote:\n>\n> create db with tablespace\n> drop database\n> drop tablespace.\n\nEssentially, that sequence of operations causes crash recovery to fail\nif the \"drop tablespace\" transaction was committed before crashing.\nThis is a bug in crash recovery in general and should be reproducible\nwithout configuring a standby. Is that right?\n\nYour patch creates missing directories in the destination. Don't we\nneed to create the tablespace symlink under pg_tblspc/? I would\nprefer extending the invalid page mechanism to deal with this, as\nsuggested by Ashwin off-list. It will allow us to avoid creating\ndirectories and files only to remove them shortly afterwards when the\ndrop database and drop tablespace records are replayed.\n\nAsim\n\n\n",
"msg_date": "Fri, 19 Apr 2019 10:08:04 +0530",
"msg_from": "Asim R P <apraveen@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "Please see my replies inline. Thanks.\n\nOn Fri, Apr 19, 2019 at 12:38 PM Asim R P <apraveen@pivotal.io> wrote:\n\n> On Wed, Apr 17, 2019 at 1:27 PM Paul Guo <pguo@pivotal.io> wrote:\n> >\n> > create db with tablespace\n> > drop database\n> > drop tablespace.\n>\n> Essentially, that sequence of operations causes crash recovery to fail\n> if the \"drop tablespace\" transaction was committed before crashing.\n> This is a bug in crash recovery in general and should be reproducible\n> without configuring a standby. Is that right?\n>\n\nNo. In general, checkpoint is done for drop_db/create_db/drop_tablespace on\nmaster.\nThat makes the file/directory update-to-date if I understand the related\ncode correctly.\nFor standby, checkpoint redo does not ensure that.\n\n\n>\n> Your patch creates missing directories in the destination. Don't we\n> need to create the tablespace symlink under pg_tblspc/? I would\n>\n\n 'create db with tablespace' redo log does not include the tablespace real\ndirectory information.\nYes, we could add in it into the xlog, but that seems to be an overdesign.\n\n\n> prefer extending the invalid page mechanism to deal with this, as\n> suggested by Ashwin off-list. It will allow us to avoid creating\n\ndirectories and files only to remove them shortly afterwards when the\n> drop database and drop tablespace records are replayed.\n>\n>\n'invalid page' mechanism seems to be more proper for missing pages of a\nfile. For\nmissing directories, we could, of course, hack to use that (e.g. reading\nany page of\na relfile in that database) to make sure the tablespace create code\n(without symlink)\nsafer (It assumes those directories will be deleted soon).\n\nMore feedback about all of the previous discussed solutions is welcome.",
"msg_date": "Mon, 22 Apr 2019 12:36:43 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and discussion)"
},
{
"msg_contents": "Hello.\n\nAt Mon, 22 Apr 2019 12:36:43 +0800, Paul Guo <pguo@pivotal.io> wrote in <CAEET0ZGpUrMGUzfyzVF9FuSq+zb=QovYa2cvyRnDOTvZ5vXxTw@mail.gmail.com>\n> Please see my replies inline. Thanks.\n> \n> On Fri, Apr 19, 2019 at 12:38 PM Asim R P <apraveen@pivotal.io> wrote:\n> \n> > On Wed, Apr 17, 2019 at 1:27 PM Paul Guo <pguo@pivotal.io> wrote:\n> > >\n> > > create db with tablespace\n> > > drop database\n> > > drop tablespace.\n> >\n> > Essentially, that sequence of operations causes crash recovery to fail\n> > if the \"drop tablespace\" transaction was committed before crashing.\n> > This is a bug in crash recovery in general and should be reproducible\n> > without configuring a standby. Is that right?\n> >\n> \n> No. In general, checkpoint is done for drop_db/create_db/drop_tablespace on\n> master.\n> That makes the file/directory update-to-date if I understand the related\n> code correctly.\n> For standby, checkpoint redo does not ensure that.\n\nThat's right partly. As you must have seen, fast shutdown forces\nrestartpoint for the last checkpoint and it prevents the problem\nfrom happening. Anyway it seems to be a problem.\n\n> > Your patch creates missing directories in the destination. Don't we\n> > need to create the tablespace symlink under pg_tblspc/? I would\n> >\n> \n> 'create db with tablespace' redo log does not include the tablespace real\n> directory information.\n> Yes, we could add in it into the xlog, but that seems to be an overdesign.\n\nBut I don't think creating directory that is to be removed just\nafter is a wanted solution. The directory most likely to be be\nremoved just after.\n\n> > prefer extending the invalid page mechanism to deal with this, as\n> > suggested by Ashwin off-list. 
It will allow us to avoid creating\n> \n> directories and files only to remove them shortly afterwards when the\n> > drop database and drop tablespace records are replayed.\n> >\n> >\n> 'invalid page' mechanism seems to be more proper for missing pages of a\n> file. For\n> missing directories, we could, of course, hack to use that (e.g. reading\n> any page of\n> a relfile in that database) to make sure the tablespace create code\n> (without symlink)\n> safer (It assumes those directories will be deleted soon).\n> \n> More feedback about all of the previous discussed solutions is welcome.\n\nIt doesn't seem to me that the invalid page mechanism is\napplicable in straightforward way, because it doesn't consider\nsimple file copy.\n\nDrop failure is ignored any time. I suppose we can ignore the\nerror to continue recovering as far as recovery have not reached\nconsistency. The attached would work *at least* your case, but I\nhaven't checked this covers all places where need the same\ntreatment.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 22 Apr 2019 16:15:13 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and discussion)"
},
{
"msg_contents": "Oops! The comment in the previous patch is wrong.\n\nAt Mon, 22 Apr 2019 16:15:13 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190422.161513.258021727.horiguchi.kyotaro@lab.ntt.co.jp>\n> At Mon, 22 Apr 2019 12:36:43 +0800, Paul Guo <pguo@pivotal.io> wrote in <CAEET0ZGpUrMGUzfyzVF9FuSq+zb=QovYa2cvyRnDOTvZ5vXxTw@mail.gmail.com>\n> > Please see my replies inline. Thanks.\n> > \n> > On Fri, Apr 19, 2019 at 12:38 PM Asim R P <apraveen@pivotal.io> wrote:\n> > \n> > > On Wed, Apr 17, 2019 at 1:27 PM Paul Guo <pguo@pivotal.io> wrote:\n> > > >\n> > > > create db with tablespace\n> > > > drop database\n> > > > drop tablespace.\n> > >\n> > > Essentially, that sequence of operations causes crash recovery to fail\n> > > if the \"drop tablespace\" transaction was committed before crashing.\n> > > This is a bug in crash recovery in general and should be reproducible\n> > > without configuring a standby. Is that right?\n> > >\n> > \n> > No. In general, checkpoint is done for drop_db/create_db/drop_tablespace on\n> > master.\n> > That makes the file/directory update-to-date if I understand the related\n> > code correctly.\n> > For standby, checkpoint redo does not ensure that.\n> \n> That's right partly. As you must have seen, fast shutdown forces\n> restartpoint for the last checkpoint and it prevents the problem\n> from happening. Anyway it seems to be a problem.\n> \n> > > Your patch creates missing directories in the destination. Don't we\n> > > need to create the tablespace symlink under pg_tblspc/? I would\n> > >\n> > \n> > 'create db with tablespace' redo log does not include the tablespace real\n> > directory information.\n> > Yes, we could add in it into the xlog, but that seems to be an overdesign.\n> \n> But I don't think creating directory that is to be removed just\n> after is a wanted solution. 
The directory most likely to be be\n> removed just after.\n> \n> > > prefer extending the invalid page mechanism to deal with this, as\n> > > suggested by Ashwin off-list. It will allow us to avoid creating\n> > \n> > directories and files only to remove them shortly afterwards when the\n> > > drop database and drop tablespace records are replayed.\n> > >\n> > >\n> > 'invalid page' mechanism seems to be more proper for missing pages of a\n> > file. For\n> > missing directories, we could, of course, hack to use that (e.g. reading\n> > any page of\n> > a relfile in that database) to make sure the tablespace create code\n> > (without symlink)\n> > safer (It assumes those directories will be deleted soon).\n> > \n> > More feedback about all of the previous discussed solutions is welcome.\n> \n> It doesn't seem to me that the invalid page mechanism is\n> applicable in straightforward way, because it doesn't consider\n> simple file copy.\n> \n> Drop failure is ignored any time. I suppose we can ignore the\n> error to continue recovering as far as recovery have not reached\n> consistency. The attached would work *at least* your case, but I\n> haven't checked this covers all places where need the same\n> treatment.\n\nThe comment for the new function XLogMakePGDirectory is wrong:\n\n+ * There is a possibility that WAL replay causes a creation of the same\n+ * directory left by the previous crash. Issuing ERROR prevents the caller\n+ * from continuing recovery.\n\nThe correct one is:\n\n+ * There is a possibility that WAL replay causes an error by creation of a\n+ * directory under a directory removed before the previous crash. Issuing\n+ * ERROR prevents the caller from continuing recovery.\n\nIt is fixed in the attached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 22 Apr 2019 16:40:27 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and discussion)"
},
{
"msg_contents": "At Mon, 22 Apr 2019 16:40:27 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190422.164027.33866403.horiguchi.kyotaro@lab.ntt.co.jp>\n> At Mon, 22 Apr 2019 16:15:13 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190422.161513.258021727.horiguchi.kyotaro@lab.ntt.co.jp>\n> > At Mon, 22 Apr 2019 12:36:43 +0800, Paul Guo <pguo@pivotal.io> wrote in <CAEET0ZGpUrMGUzfyzVF9FuSq+zb=QovYa2cvyRnDOTvZ5vXxTw@mail.gmail.com>\n> > > Please see my replies inline. Thanks.\n> > > \n> > > On Fri, Apr 19, 2019 at 12:38 PM Asim R P <apraveen@pivotal.io> wrote:\n> > > \n> > > > On Wed, Apr 17, 2019 at 1:27 PM Paul Guo <pguo@pivotal.io> wrote:\n> > > > >\n> > > > > create db with tablespace\n> > > > > drop database\n> > > > > drop tablespace.\n> > > >\n> > > > Essentially, that sequence of operations causes crash recovery to fail\n> > > > if the \"drop tablespace\" transaction was committed before crashing.\n> > > > This is a bug in crash recovery in general and should be reproducible\n> > > > without configuring a standby. Is that right?\n> > > >\n> > > \n> > > No. In general, checkpoint is done for drop_db/create_db/drop_tablespace on\n> > > master.\n> > > That makes the file/directory update-to-date if I understand the related\n> > > code correctly.\n> > > For standby, checkpoint redo does not ensure that.\n\nThe attached exercises this sequence, needing some changes in\nPostgresNode.pm and RecursiveCopy.pm to allow tablespaces.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 22 Apr 2019 21:19:33 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and discussion)"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 09:19:33PM +0900, Kyotaro HORIGUCHI wrote:\n> The attached exercises this sequence, needing some changes in\n> PostgresNode.pm and RecursiveCopy.pm to allow tablespaces.\n\n+ # Check for symlink -- needed only on source dir\n+ # (note: this will fall through quietly if file is already gone)\n+ if (-l $srcpath)\n+ {\n+ croak \"Cannot operate on symlink \\\"$srcpath\\\"\"\n+ if ($srcpath !~ /\\/(pg_tblspc\\/[0-9]+)$/);\n+\n+ # We have mapped tablespaces. Copy them individually\n+ my $linkname = $1;\n+ my $tmpdir = TestLib::tempdir;\n+ my $dstrealdir = TestLib::real_dir($tmpdir);\n+ my $srcrealdir = readlink($srcpath);\n+\n+ opendir(my $dh, $srcrealdir);\n+ while (readdir $dh)\n+ {\n+ next if (/^\\.\\.?$/);\n+ my $spath = \"$srcrealdir/$_\";\n+ my $dpath = \"$dstrealdir/$_\";\n+\n+ copypath($spath, $dpath);\n+ }\n+ closedir $dh;\n+\n+ symlink $dstrealdir, $destpath;\n+ return 1;\n+ }\n\nThe same stuff is proposed here:\nhttps://www.postgresql.org/message-id/CAGRcZQUxd9YOfifOKXOfJ+Fp3JdpoeKCzt+zH_PRMNaaDaExdQ@mail.gmail.com\n\nSo there is a lot of demand for making the recursive copy more skilled\nat handling symlinks for tablespace tests, and I'd like to propose to\ndo something among those lines for the tests on HEAD, presumably for\nv12 and not v13 as we are talking about a bug fix here? I am not sure\nyet which one of the proposals is better than the other though.\n--\nMichael",
"msg_date": "Tue, 23 Apr 2019 11:34:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and discussion)"
},
{
"msg_contents": "At Tue, 23 Apr 2019 11:34:38 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190423023438.GH2712@paquier.xyz>\n> On Mon, Apr 22, 2019 at 09:19:33PM +0900, Kyotaro HORIGUCHI wrote:\n> > The attached exercises this sequence, needing some changes in\n> > PostgresNode.pm and RecursiveCopy.pm to allow tablespaces.\n> \n> + # Check for symlink -- needed only on source dir\n> + # (note: this will fall through quietly if file is already gone)\n> + if (-l $srcpath)\n> + {\n> + croak \"Cannot operate on symlink \\\"$srcpath\\\"\"\n> + if ($srcpath !~ /\\/(pg_tblspc\\/[0-9]+)$/);\n> +\n> + # We have mapped tablespaces. Copy them individually\n> + my $linkname = $1;\n> + my $tmpdir = TestLib::tempdir;\n> + my $dstrealdir = TestLib::real_dir($tmpdir);\n> + my $srcrealdir = readlink($srcpath);\n> +\n> + opendir(my $dh, $srcrealdir);\n> + while (readdir $dh)\n> + {\n> + next if (/^\\.\\.?$/);\n> + my $spath = \"$srcrealdir/$_\";\n> + my $dpath = \"$dstrealdir/$_\";\n> +\n> + copypath($spath, $dpath);\n> + }\n> + closedir $dh;\n> +\n> + symlink $dstrealdir, $destpath;\n> + return 1;\n> + }\n> \n> The same stuff is proposed here:\n> https://www.postgresql.org/message-id/CAGRcZQUxd9YOfifOKXOfJ+Fp3JdpoeKCzt+zH_PRMNaaDaExdQ@mail.gmail.com\n> \n> So there is a lot of demand for making the recursive copy more skilled\n> at handling symlinks for tablespace tests, and I'd like to propose to\n> do something among those lines for the tests on HEAD, presumably for\n> v12 and not v13 as we are talking about a bug fix here? I am not sure\n> yet which one of the proposals is better than the other though.\n\nTBH I like that (my one cieted above) not so much. However, I\nprefer to have v12 if this is a bug and to be fixed in\nv12. Otherwise we won't add a test for this later:p\n\nAnyway I'll visit there. Thanks. \n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 23 Apr 2019 14:02:34 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and discussion)"
},
{
"msg_contents": "Hi Kyotaro, ignoring the MakePGDirectory() failure will fix this database\ncreate redo error, but I suspect some other kind of redo, which depends on\nthe files under the directory (they are not copied since the directory is\nnot created) and also cannot be covered by the invalid page mechanism,\ncould fail. Thanks.\n\nOn Mon, Apr 22, 2019 at 3:40 PM Kyotaro HORIGUCHI <\nhoriguchi.kyotaro@lab.ntt.co.jp> wrote:\n\n> Oops! The comment in the previous patch is wrong.\n>\n> At Mon, 22 Apr 2019 16:15:13 +0900 (Tokyo Standard Time), Kyotaro\n> HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <\n> 20190422.161513.258021727.horiguchi.kyotaro@lab.ntt.co.jp>\n> > At Mon, 22 Apr 2019 12:36:43 +0800, Paul Guo <pguo@pivotal.io> wrote in\n> <CAEET0ZGpUrMGUzfyzVF9FuSq+zb=QovYa2cvyRnDOTvZ5vXxTw@mail.gmail.com>\n> > > Please see my replies inline. Thanks.\n> > >\n> > > On Fri, Apr 19, 2019 at 12:38 PM Asim R P <apraveen@pivotal.io> wrote:\n> > >\n> > > > On Wed, Apr 17, 2019 at 1:27 PM Paul Guo <pguo@pivotal.io> wrote:\n> > > > >\n> > > > > create db with tablespace\n> > > > > drop database\n> > > > > drop tablespace.\n> > > >\n> > > > Essentially, that sequence of operations causes crash recovery to\n> fail\n> > > > if the \"drop tablespace\" transaction was committed before crashing.\n> > > > This is a bug in crash recovery in general and should be reproducible\n> > > > without configuring a standby. Is that right?\n> > > >\n> > >\n> > > No. In general, checkpoint is done for\n> drop_db/create_db/drop_tablespace on\n> > > master.\n> > > That makes the file/directory update-to-date if I understand the\n> related\n> > > code correctly.\n> > > For standby, checkpoint redo does not ensure that.\n> >\n> > That's right partly. As you must have seen, fast shutdown forces\n> > restartpoint for the last checkpoint and it prevents the problem\n> > from happening. Anyway it seems to be a problem.\n> >\n> > > > Your patch creates missing directories in the destination. 
Don't we\n> > > > need to create the tablespace symlink under pg_tblspc/? I would\n> > > >\n> > >\n> > > 'create db with tablespace' redo log does not include the tablespace\n> real\n> > > directory information.\n> > > Yes, we could add in it into the xlog, but that seems to be an\n> overdesign.\n> >\n> > But I don't think creating directory that is to be removed just\n> > after is a wanted solution. The directory most likely to be be\n> > removed just after.\n> >\n> > > > prefer extending the invalid page mechanism to deal with this, as\n> > > > suggested by Ashwin off-list. It will allow us to avoid creating\n> > >\n> > > directories and files only to remove them shortly afterwards when the\n> > > > drop database and drop tablespace records are replayed.\n> > > >\n> > > >\n> > > 'invalid page' mechanism seems to be more proper for missing pages of a\n> > > file. For\n> > > missing directories, we could, of course, hack to use that (e.g.\n> reading\n> > > any page of\n> > > a relfile in that database) to make sure the tablespace create code\n> > > (without symlink)\n> > > safer (It assumes those directories will be deleted soon).\n> > >\n> > > More feedback about all of the previous discussed solutions is welcome.\n> >\n> > It doesn't seem to me that the invalid page mechanism is\n> > applicable in straightforward way, because it doesn't consider\n> > simple file copy.\n> >\n> > Drop failure is ignored any time. I suppose we can ignore the\n> > error to continue recovering as far as recovery have not reached\n> > consistency. The attached would work *at least* your case, but I\n> > haven't checked this covers all places where need the same\n> > treatment.\n>\n> The comment for the new function XLogMakePGDirectory is wrong:\n>\n> + * There is a possibility that WAL replay causes a creation of the same\n> + * directory left by the previous crash. 
Issuing ERROR prevents the caller\n> + * from continuing recovery.\n>\n> The correct one is:\n>\n> + * There is a possibility that WAL replay causes an error by creation of a\n> + * directory under a directory removed before the previous crash. Issuing\n> + * ERROR prevents the caller from continuing recovery.\n>\n> It is fixed in the attached.\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n>",
"msg_date": "Tue, 23 Apr 2019 13:31:58 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and discussion)"
},
{
"msg_contents": "Hello.\n\nAt Tue, 23 Apr 2019 13:31:58 +0800, Paul Guo <pguo@pivotal.io> wrote in <CAEET0ZEcwz57z2yfWRds43b3TfQPPDSWmbjGmD43xRxLT41NDg@mail.gmail.com>\n> Hi Kyotaro, ignoring the MakePGDirectory() failure will fix this database\n> create redo error, but I suspect some other kind of redo, which depends on\n> the files under the directory (they are not copied since the directory is\n> not created) and also cannot be covered by the invalid page mechanism,\n> could fail. Thanks.\n\nIf recovery starts from just after tablespace creation, that's\nsimple. The Symlink to the removed tablespace is already removed\nin the case. Hence server innocently create files directly under\npg_tblspc, not in the tablespace. Finally all files that were\nsupposed to be created in the removed tablespace are removed\nlater in recovery.\n\nIf recovery starts from recalling page in a file that have been\nin the tablespace, XLogReadBufferExtended creates one (perhaps\ndirectly in pg_tblspc as described above) and the files are\nremoved later in recovery the same way to above. This case doesn't\ncause FATAL/PANIC during recovery even in master.\n\nXLogReadBufferExtended@xlogutils.c\n| * Create the target file if it doesn't already exist. This lets us cope\n| * if the replay sequence contains writes to a relation that is later\n| * deleted. (The original coding of this routine would instead suppress\n| * the writes, but that seems like it risks losing valuable data if the\n| * filesystem loses an inode during a crash. Better to write the data\n| * until we are actually told to delete the file.)\n\nSo buffered access cannot be a problem for the reason above. The\nremaining possible issue is non-buffered access to files in\nremoved tablespaces. This is what I mentioned upthread:\n\nme> but I haven't checked this covers all places where need the same\nme> treatment.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 23 Apr 2019 16:39:49 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and discussion)"
},
{
"msg_contents": "Mmm. I posted to wrong thread. Sorry.\n\nAt Tue, 23 Apr 2019 16:39:49 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190423.163949.36763221.horiguchi.kyotaro@lab.ntt.co.jp>\n> At Tue, 23 Apr 2019 13:31:58 +0800, Paul Guo <pguo@pivotal.io> wrote in <CAEET0ZEcwz57z2yfWRds43b3TfQPPDSWmbjGmD43xRxLT41NDg@mail.gmail.com>\n> > Hi Kyotaro, ignoring the MakePGDirectory() failure will fix this database\n> > create redo error, but I suspect some other kind of redo, which depends on\n> > the files under the directory (they are not copied since the directory is\n> > not created) and also cannot be covered by the invalid page mechanism,\n> > could fail. Thanks.\n> \n> If recovery starts from just after tablespace creation, that's\n> simple. The Symlink to the removed tablespace is already removed\n> in the case. Hence server innocently create files directly under\n> pg_tblspc, not in the tablespace. Finally all files that were\n> supposed to be created in the removed tablespace are removed\n> later in recovery.\n> \n> If recovery starts from recalling page in a file that have been\n> in the tablespace, XLogReadBufferExtended creates one (perhaps\n> directly in pg_tblspc as described above) and the files are\n> removed later in recoery the same way to above. This case doen't\n> cause FATAL/PANIC during recovery even in master.\n> \n> XLogReadBufferExtended@xlogutils.c\n> | * Create the target file if it doesn't already exist. This lets us cope\n> | * if the replay sequence contains writes to a relation that is later\n> | * deleted. (The original coding of this routine would instead suppress\n> | * the writes, but that seems like it risks losing valuable data if the\n> | * filesystem loses an inode during a crash. Better to write the data\n> | * until we are actually told to delete the file.)\n> \n> So buffered access cannot be a problem for the reason above. 
The\n> remaining possible issue is non-buffered access to files in\n> removed tablespaces. This is what I mentioned upthread:\n> \n> me> but I haven't checked this covers all places where need the same\n> me> treatment.\n\nRM_DBASE_ID is fixed by the patch.\n\nXLOG/XACT/CLOG/MULTIXACT/RELMAP/STANDBY/COMMIT_TS/REPLORIGIN/LOGICALMSG:\n - are not relevant.\n\nHEAP/HEAP2/BTREE/HASH/GIN/GIST/SEQ/SPGIST/BRIN/GENERIC:\n - Resource managers working on buffers are not affected.\n\nSMGR:\n - Both CREATE and TRUNCATE seem fine.\n\nTBLSPC:\n - We don't nest tablespace directories. No problem.\n\nI don't find a similar case.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 24 Apr 2019 17:13:54 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and discussion)"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 4:14 PM Kyotaro HORIGUCHI <\nhoriguchi.kyotaro@lab.ntt.co.jp> wrote:\n\n> Mmm. I posted to wrong thread. Sorry.\n>\n> At Tue, 23 Apr 2019 16:39:49 +0900 (Tokyo Standard Time), Kyotaro\n> HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <\n> 20190423.163949.36763221.horiguchi.kyotaro@lab.ntt.co.jp>\n> > At Tue, 23 Apr 2019 13:31:58 +0800, Paul Guo <pguo@pivotal.io> wrote in\n> <CAEET0ZEcwz57z2yfWRds43b3TfQPPDSWmbjGmD43xRxLT41NDg@mail.gmail.com>\n> > > Hi Kyotaro, ignoring the MakePGDirectory() failure will fix this\n> database\n> > > create redo error, but I suspect some other kind of redo, which\n> depends on\n> > > the files under the directory (they are not copied since the directory\n> is\n> > > not created) and also cannot be covered by the invalid page mechanism,\n> > > could fail. Thanks.\n> >\n> > If recovery starts from just after tablespace creation, that's\n> > simple. The Symlink to the removed tablespace is already removed\n> > in the case. Hence server innocently create files directly under\n> > pg_tblspc, not in the tablespace. Finally all files that were\n> > supposed to be created in the removed tablespace are removed\n> > later in recovery.\n> >\n> > If recovery starts from recalling page in a file that have been\n> > in the tablespace, XLogReadBufferExtended creates one (perhaps\n> > directly in pg_tblspc as described above) and the files are\n> > removed later in recoery the same way to above. This case doen't\n> > cause FATAL/PANIC during recovery even in master.\n> >\n> > XLogReadBufferExtended@xlogutils.c\n> > | * Create the target file if it doesn't already exist. This lets us\n> cope\n> > | * if the replay sequence contains writes to a relation that is later\n> > | * deleted. (The original coding of this routine would instead suppress\n> > | * the writes, but that seems like it risks losing valuable data if the\n> > | * filesystem loses an inode during a crash. 
Better to write the data\n> > | * until we are actually told to delete the file.)\n> >\n> > So buffered access cannot be a problem for the reason above. The\n> > remaining possible issue is non-buffered access to files in\n> > removed tablespaces. This is what I mentioned upthread:\n> >\n> > me> but I haven't checked this covers all places where need the same\n> > me> treatment.\n>\n> RM_DBASE_ID is fixed by the patch.\n>\n> XLOG/XACT/CLOG/MULTIXACT/RELMAP/STANDBY/COMMIT_TS/REPLORIGIN/LOGICALMSG:\n> - are not relevant.\n>\n> HEAP/HEAP2/BTREE/HASH/GIN/GIST/SEQ/SPGIST/BRIN/GENERIC:\n> - Resources works on buffer is not affected.\n>\n> SMGR:\n> - Both CREATE and TRUNCATE seems fine.\n>\n> TBLSPC:\n> - We don't nest tablespace directories. No Problem.\n>\n> I don't find a similar case.\n\n\nI took some time in digging into the related code. It seems that ignoring\nif the dst directory cannot be created directly\nshould be fine since smgr redo code creates tablespace path finally by\ncalling TablespaceCreateDbspace().\nWhat's more, I found some more issues.\n\n1) The below error message is actually misleading.\n\n2019-04-17 14:52:14.951 CST [23030] FATAL: could not create directory\n\"pg_tblspc/65546/PG_12_201904072/65547\": No such file or directory\n2019-04-17 14:52:14.951 CST [23030] CONTEXT: WAL redo at 0/3011650 for\nDatabase/CREATE: copy dir 1663/1 to 65546/65547\n\nThat should be due to dbase_desc(). It could be simply fixed following the\ncode logic in GetDatabasePath().\n\n2) It seems that src directory could be missing then\ndbase_redo()->copydir() could error out. 
For example,\n\n\\!rm -rf /tmp/tbspace1\n\\!mkdir /tmp/tbspace1\n\\!rm -rf /tmp/tbspace2\n\\!mkdir /tmp/tbspace2\ncreate tablespace tbs1 location '/tmp/tbspace1';\ncreate tablespace tbs2 location '/tmp/tbspace2';\ncreate database db1 tablespace tbs1;\nalter database db1 set tablespace tbs2;\ndrop tablespace tbs1;\n\nLet's say, the standby finishes all replay but redo lsn on pg_control is\nstill the point at 'alter database', and then\nkill postgres, then in theory when startup, dbase_redo()->copydir() will\nERROR since 'drop tablespace tbs1'\nhas removed the directories (and symlink) of tbs1. Below simple code change\ncould fix that.\n\ndiff --git a/src/backend/commands/dbcommands.c\nb/src/backend/commands/dbcommands.c\nindex 9707afabd9..7d755c759e 100644\n--- a/src/backend/commands/dbcommands.c\n+++ b/src/backend/commands/dbcommands.c\n@@ -2114,6 +2114,15 @@ dbase_redo(XLogReaderState *record)\n */\n FlushDatabaseBuffers(xlrec->src_db_id);\n\n+ /*\n+ * It is possible that the source directory is missing if\n+ * we are re-replaying the xlog while subsequent xlogs\n+ * drop the tablespace in previous replaying. For this\n+ * we just skip.\n+ */\n+ if (!(stat(src_path, &st) == 0 && S_ISDIR(st.st_mode)))\n+ return;\n+\n /*\n * Copy this subdirectory to the new location\n *\n\nIf we want to fix the issue by ignoring the dst path create failure, I do\nnot think we should do\nthat in copydir() since copydir() seems to be a common function. I'm not\nsure whether it is\nused by some extensions or not. 
If no maybe we should move the dst patch\ncreate logic\nout of copydir().\n\nAlso I'd suggest we should use pg_mkdir_p() in TablespaceCreateDbspace() to\nreplace\nthe code block includes a lot of get_parent_directory(), MakePGDirectory(),\netc even it\nis not fixing a bug since pg_mkdir_p() code change seems to be more\ngraceful and simpler.\n\nWhatever ignore mkdir failure or mkdir_p, I found that these steps seem to\nbe error-prone\nalong with postgre evolving since they are hard to test and also we are not\neasy to think out\nvarious potential bad cases. Is it possible that we should do real\ncheckpoint (flush & update\nredo lsn) when seeing checkpoint xlogs for these operations? This will slow\ndown standby\nbut master also does this and also these operations are not usual,\nespeically it seems that it\ndoes not slow down wal receiving usually?",
"msg_date": "Sun, 28 Apr 2019 15:33:13 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "I updated the original patch to\n\n1) skip copydir() if either src path or dst parent path is missing in\ndbase_redo(). Both missing cases seem to be possible. For the src path\nmissing case, mkdir_p() is meaningless. It seems that moving the directory\nexistence check step to dbase_redo() has less impact on other code.\n\n2) Fixed dbase_desc(). Now the xlog output looks correct.\n\nrmgr: Database len (rec/tot): 42/ 42, tx: 486, lsn:\n0/016386A8, prev 0/01638630, desc: CREATE copy dir base/1 to\npg_tblspc/16384/PG_12_201904281/16386\n\nrmgr: Database len (rec/tot): 34/ 34, tx: 487, lsn:\n0/01638EB8, prev 0/01638E40, desc: DROP dir\npg_tblspc/16384/PG_12_201904281/16386\n\nI'm not familiar with the TAP test details previously. I learned a lot\nabout how to test such case from Kyotaro's patch series.👍\n\nOn Sun, Apr 28, 2019 at 3:33 PM Paul Guo <pguo@pivotal.io> wrote:\n\n>\n> On Wed, Apr 24, 2019 at 4:14 PM Kyotaro HORIGUCHI <\n> horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n>\n>> Mmm. I posted to wrong thread. Sorry.\n>>\n>> At Tue, 23 Apr 2019 16:39:49 +0900 (Tokyo Standard Time), Kyotaro\n>> HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <\n>> 20190423.163949.36763221.horiguchi.kyotaro@lab.ntt.co.jp>\n>> > At Tue, 23 Apr 2019 13:31:58 +0800, Paul Guo <pguo@pivotal.io> wrote\n>> in <CAEET0ZEcwz57z2yfWRds43b3TfQPPDSWmbjGmD43xRxLT41NDg@mail.gmail.com>\n>> > > Hi Kyotaro, ignoring the MakePGDirectory() failure will fix this\n>> database\n>> > > create redo error, but I suspect some other kind of redo, which\n>> depends on\n>> > > the files under the directory (they are not copied since the\n>> directory is\n>> > > not created) and also cannot be covered by the invalid page mechanism,\n>> > > could fail. Thanks.\n>> >\n>> > If recovery starts from just after tablespace creation, that's\n>> > simple. The Symlink to the removed tablespace is already removed\n>> > in the case. 
Hence server innocently create files directly under\n>> > pg_tblspc, not in the tablespace. Finally all files that were\n>> > supposed to be created in the removed tablespace are removed\n>> > later in recovery.\n>> >\n>> > If recovery starts from recalling page in a file that have been\n>> > in the tablespace, XLogReadBufferExtended creates one (perhaps\n>> > directly in pg_tblspc as described above) and the files are\n>> > removed later in recoery the same way to above. This case doen't\n>> > cause FATAL/PANIC during recovery even in master.\n>> >\n>> > XLogReadBufferExtended@xlogutils.c\n>> > | * Create the target file if it doesn't already exist. This lets us\n>> cope\n>> > | * if the replay sequence contains writes to a relation that is later\n>> > | * deleted. (The original coding of this routine would instead\n>> suppress\n>> > | * the writes, but that seems like it risks losing valuable data if the\n>> > | * filesystem loses an inode during a crash. Better to write the data\n>> > | * until we are actually told to delete the file.)\n>> >\n>> > So buffered access cannot be a problem for the reason above. The\n>> > remaining possible issue is non-buffered access to files in\n>> > removed tablespaces. This is what I mentioned upthread:\n>> >\n>> > me> but I haven't checked this covers all places where need the same\n>> > me> treatment.\n>>\n>> RM_DBASE_ID is fixed by the patch.\n>>\n>> XLOG/XACT/CLOG/MULTIXACT/RELMAP/STANDBY/COMMIT_TS/REPLORIGIN/LOGICALMSG:\n>> - are not relevant.\n>>\n>> HEAP/HEAP2/BTREE/HASH/GIN/GIST/SEQ/SPGIST/BRIN/GENERIC:\n>> - Resources works on buffer is not affected.\n>>\n>> SMGR:\n>> - Both CREATE and TRUNCATE seems fine.\n>>\n>> TBLSPC:\n>> - We don't nest tablespace directories. No Problem.\n>>\n>> I don't find a similar case.\n>\n>\n> I took some time in digging into the related code. 
It seems that ignoring\n> if the dst directory cannot be created directly\n> should be fine since smgr redo code creates tablespace path finally by\n> calling TablespaceCreateDbspace().\n> What's more, I found some more issues.\n>\n> 1) The below error message is actually misleading.\n>\n> 2019-04-17 14:52:14.951 CST [23030] FATAL: could not create directory\n> \"pg_tblspc/65546/PG_12_201904072/65547\": No such file or directory\n> 2019-04-17 14:52:14.951 CST [23030] CONTEXT: WAL redo at 0/3011650 for\n> Database/CREATE: copy dir 1663/1 to 65546/65547\n>\n> That should be due to dbase_desc(). It could be simply fixed following the\n> code logic in GetDatabasePath().\n>\n> 2) It seems that src directory could be missing then\n> dbase_redo()->copydir() could error out. For example,\n>\n> \\!rm -rf /tmp/tbspace1\n> \\!mkdir /tmp/tbspace1\n> \\!rm -rf /tmp/tbspace2\n> \\!mkdir /tmp/tbspace2\n> create tablespace tbs1 location '/tmp/tbspace1';\n> create tablespace tbs2 location '/tmp/tbspace2';\n> create database db1 tablespace tbs1;\n> alter database db1 set tablespace tbs2;\n> drop tablespace tbs1;\n>\n> Let's say, the standby finishes all replay but redo lsn on pg_control is\n> still the point at 'alter database', and then\n> kill postgres, then in theory when startup, dbase_redo()->copydir() will\n> ERROR since 'drop tablespace tbs1'\n> has removed the directories (and symlink) of tbs1. Below simple code\n> change could fix that.\n>\n> diff --git a/src/backend/commands/dbcommands.c\n> b/src/backend/commands/dbcommands.c\n> index 9707afabd9..7d755c759e 100644\n> --- a/src/backend/commands/dbcommands.c\n> +++ b/src/backend/commands/dbcommands.c\n> @@ -2114,6 +2114,15 @@ dbase_redo(XLogReaderState *record)\n> */\n> FlushDatabaseBuffers(xlrec->src_db_id);\n>\n> + /*\n> + * It is possible that the source directory is missing if\n> + * we are re-replaying the xlog while subsequent xlogs\n> + * drop the tablespace in previous replaying. 
For this\n> + * we just skip.\n> + */\n> + if (!(stat(src_path, &st) == 0 && S_ISDIR(st.st_mode)))\n> + return;\n> +\n> /*\n> * Copy this subdirectory to the new location\n> *\n>\n> If we want to fix the issue by ignoring the dst path create failure, I do\n> not think we should do\n> that in copydir() since copydir() seems to be a common function. I'm not\n> sure whether it is\n> used by some extensions or not. If no maybe we should move the dst patch\n> create logic\n> out of copydir().\n>\n> Also I'd suggest we should use pg_mkdir_p() in TablespaceCreateDbspace()\n> to replace\n> the code block includes a lot of\n> get_parent_directory(), MakePGDirectory(), etc even it\n> is not fixing a bug since pg_mkdir_p() code change seems to be more\n> graceful and simpler.\n>\n> Whatever ignore mkdir failure or mkdir_p, I found that these steps seem to\n> be error-prone\n> along with postgre evolving since they are hard to test and also we are\n> not easy to think out\n> various potential bad cases. Is it possible that we should do real\n> checkpoint (flush & update\n> redo lsn) when seeing checkpoint xlogs for these operations? This will\n> slow down standby\n> but master also does this and also these operations are not usual,\n> espeically it seems that it\n> does not slow down wal receiving usually?\n>\n>\n>\n>",
"msg_date": "Tue, 30 Apr 2019 14:33:47 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "Hi.\r\n\r\nAt Tue, 30 Apr 2019 14:33:47 +0800, Paul Guo <pguo@pivotal.io> wrote in <CAEET0ZGhmDKrq7JJu2rLLqcJBR8pA4OYrKsirZ5Ft8-deG1e8A@mail.gmail.com>\r\n> I updated the original patch to\r\n\r\nIt's reasonable not to touch copydir.\r\n\r\n> 1) skip copydir() if either src path or dst parent path is missing in\r\n> dbase_redo(). Both missing cases seem to be possible. For the src path\r\n> missing case, mkdir_p() is meaningless. It seems that moving the directory\r\n> existence check step to dbase_redo() has less impact on other code.\r\n\r\nNice catch.\r\n\r\n\r\n+ if (!(stat(parent_path, &st) == 0 && S_ISDIR(st.st_mode)))\r\n+ {\r\n\r\nThis patch is allowing missing source and destination directory\r\neven in consistent state. I don't think it is safe.\r\n\r\n\r\n\r\n+ ereport(WARNING,\r\n+ (errmsg(\"directory \\\"%s\\\" for copydir() does not exists.\"\r\n+ \"It is possibly expected. Skip copydir().\",\r\n+ parent_path)));\r\n\r\nThis message seems unfriendly to users, or it seems like an elog\r\nmessage. How about something like this. The same can be said for\r\nthe source directory.\r\n\r\n| WARNING: skipped creating database directory: \"%s\"\r\n| DETAIL: The tabelspace %u may have been removed just before crash.\r\n\r\n# I'm not confident in this at all:(\r\n\r\n> 2) Fixed dbase_desc(). Now the xlog output looks correct.\r\n> \r\n> rmgr: Database len (rec/tot): 42/ 42, tx: 486, lsn:\r\n> 0/016386A8, prev 0/01638630, desc: CREATE copy dir base/1 to\r\n> pg_tblspc/16384/PG_12_201904281/16386\r\n> \r\n> rmgr: Database len (rec/tot): 34/ 34, tx: 487, lsn:\r\n> 0/01638EB8, prev 0/01638E40, desc: DROP dir\r\n> pg_tblspc/16384/PG_12_201904281/16386\r\n\r\nWAL records don't convey such information. The previous\r\ndescription seems right to me.\r\n\r\n> I'm not familiar with the TAP test details previously. 
I learned a lot\r\n> about how to test such case from Kyotaro's patch series.👍\r\n\r\nYeah, good to hear.\r\n\r\n> On Sun, Apr 28, 2019 at 3:33 PM Paul Guo <pguo@pivotal.io> wrote:\r\n> > If we want to fix the issue by ignoring the dst path create failure, I do\r\n> > not think we should do\r\n> > that in copydir() since copydir() seems to be a common function. I'm not\r\n> > sure whether it is\r\n> > used by some extensions or not. If no maybe we should move the dst patch\r\n> > create logic\r\n> > out of copydir().\r\n\r\nAgreed to this.\r\n\r\n> > Also I'd suggest we should use pg_mkdir_p() in TablespaceCreateDbspace()\r\n> > to replace\r\n> > the code block includes a lot of\r\n> > get_parent_directory(), MakePGDirectory(), etc even it\r\n> > is not fixing a bug since pg_mkdir_p() code change seems to be more\r\n> > graceful and simpler.\r\n\r\nBut I don't agree to this. pg_mkdir_p goes above two-parents up,\r\nwhich would be unwanted here.\r\n\r\n> > Whatever ignore mkdir failure or mkdir_p, I found that these steps seem to\r\n> > be error-prone\r\n> > along with postgre evolving since they are hard to test and also we are\r\n> > not easy to think out\r\n> > various potential bad cases. Is it possible that we should do real\r\n> > checkpoint (flush & update\r\n> > redo lsn) when seeing checkpoint xlogs for these operations? This will\r\n> > slow down standby\r\n> > but master also does this and also these operations are not usual,\r\n> > espeically it seems that it\r\n> > does not slow down wal receiving usually?\r\n\r\nThat dramatically slows recovery (not replication) if databases\r\nare created and deleted frequently. That wouldn't be acceptable.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Tue, 07 May 2019 15:47:11 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "Thanks for the reply.\n\nOn Tue, May 7, 2019 at 2:47 PM Kyotaro HORIGUCHI <\nhoriguchi.kyotaro@lab.ntt.co.jp> wrote:\n\n>\n> + if (!(stat(parent_path, &st) == 0 && S_ISDIR(st.st_mode)))\n> + {\n>\n> This patch is allowing missing source and destination directory\n> even in consistent state. I don't think it is safe.\n>\n\nI do not understand this. Can you elaborate?\n\n\n>\n>\n>\n> + ereport(WARNING,\n> + (errmsg(\"directory \\\"%s\\\" for copydir() does not exists.\"\n> + \"It is possibly expected. Skip copydir().\",\n> + parent_path)));\n>\n> This message seems unfriendly to users, or it seems like an elog\n> message. How about something like this. The same can be said for\n> the source directory.\n>\n> | WARNING: skipped creating database directory: \"%s\"\n> | DETAIL: The tabelspace %u may have been removed just before crash.\n>\n\nYeah. Looks better.\n\n\n>\n> # I'm not confident in this at all:(\n>\n> > 2) Fixed dbase_desc(). Now the xlog output looks correct.\n> >\n> > rmgr: Database len (rec/tot): 42/ 42, tx: 486, lsn:\n> > 0/016386A8, prev 0/01638630, desc: CREATE copy dir base/1 to\n> > pg_tblspc/16384/PG_12_201904281/16386\n> >\n> > rmgr: Database len (rec/tot): 34/ 34, tx: 487, lsn:\n> > 0/01638EB8, prev 0/01638E40, desc: DROP dir\n> > pg_tblspc/16384/PG_12_201904281/16386\n>\n> WAL records don't convey such information. The previous\n> description seems right to me.\n>\n\n2019-04-17 14:52:14.951 CST [23030] CONTEXT: WAL redo at 0/3011650 for\nDatabase/CREATE: copy dir 1663/1 to 65546/65547\nThe directories are definitely wrong and misleading.\n\n\n> > > Also I'd suggest we should use pg_mkdir_p() in\n> TablespaceCreateDbspace()\n> > > to replace\n> > > the code block includes a lot of\n> > > get_parent_directory(), MakePGDirectory(), etc even it\n> > > is not fixing a bug since pg_mkdir_p() code change seems to be more\n> > > graceful and simpler.\n>\n> But I don't agree to this. 
pg_mkdir_p goes above two-parents up,\n> which would be unwanted here.\n>\n> I do not understand this also. pg_mkdir_p() is similar to 'mkdir -p'.\nThis change just makes the code concise. Though in theory the change is not\nneeded.\n\n\n> > > Whatever ignore mkdir failure or mkdir_p, I found that these steps\n> seem to\n> > > be error-prone\n> > > along with postgre evolving since they are hard to test and also we are\n> > > not easy to think out\n> > > various potential bad cases. Is it possible that we should do real\n> > > checkpoint (flush & update\n> > > redo lsn) when seeing checkpoint xlogs for these operations? This will\n> > > slow down standby\n> > > but master also does this and also these operations are not usual,\n> > > espeically it seems that it\n> > > does not slow down wal receiving usually?\n>\n> That dramatically slows recovery (not replication) if databases\n> are created and deleted frequently. That wouldn't be acceptable.\n>\n\nThis behavior is rare and seems to have the same impact on master & standby\nfrom checkpoint/restartpoint.\nWe do not worry about master so we should not worry about standby also.",
"msg_date": "Mon, 13 May 2019 17:37:50 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "Hello.\n\nAt Mon, 13 May 2019 17:37:50 +0800, Paul Guo <pguo@pivotal.io> wrote in <CAEET0ZF9yN4DaXyuFLzOcAYyxuFF1Ms_OQWeA+Rwv3GhA5Q-SA@mail.gmail.com>\n> Thanks for the reply.\n> \n> On Tue, May 7, 2019 at 2:47 PM Kyotaro HORIGUCHI <\n> horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> \n> >\n> > + if (!(stat(parent_path, &st) == 0 && S_ISDIR(st.st_mode)))\n> > + {\n> >\n> > This patch is allowing missing source and destination directory\n> > even in consistent state. I don't think it is safe.\n> >\n> \n> I do not understand this. Can you elaborate?\n\nSuppose we were recoverying based on a backup at LSN1 targeting\nto LSN3 then it crashed at LSN2, where LSN1 < LSN2 <= LSN3. LSN2\nis called as \"consistency point\", before where the database is\nnot consistent. It's because we are applying WAL recored older\nthan those that were already applied in the second trial. The\nsame can be said for crash recovery, where LSN1 is the latest\ncheckpoint ('s redo LSN) and LSN2=LSN3 is the crashed LSN.\n\nCreation of an existing directory or dropping of a non-existent\ndirectory are apparently inconsistent or \"broken\" so we should\nstop recovery when seeing such WAL records while database is in\nconsistent state.\n\n> > + ereport(WARNING,\n> > + (errmsg(\"directory \\\"%s\\\" for copydir() does not exists.\"\n> > + \"It is possibly expected. Skip copydir().\",\n> > + parent_path)));\n> >\n> > This message seems unfriendly to users, or it seems like an elog\n> > message. How about something like this. The same can be said for\n> > the source directory.\n> >\n> > | WARNING: skipped creating database directory: \"%s\"\n> > | DETAIL: The tabelspace %u may have been removed just before crash.\n> >\n> \n> Yeah. Looks better.\n> \n> \n> >\n> > # I'm not confident in this at all:(\n> >\n> > > 2) Fixed dbase_desc(). 
Now the xlog output looks correct.\n> > >\n> > > rmgr: Database len (rec/tot): 42/ 42, tx: 486, lsn:\n> > > 0/016386A8, prev 0/01638630, desc: CREATE copy dir base/1 to\n> > > pg_tblspc/16384/PG_12_201904281/16386\n> > >\n> > > rmgr: Database len (rec/tot): 34/ 34, tx: 487, lsn:\n> > > 0/01638EB8, prev 0/01638E40, desc: DROP dir\n> > > pg_tblspc/16384/PG_12_201904281/16386\n> >\n> > WAL records don't convey such information. The previous\n> > description seems right to me.\n> >\n> \n> 2019-04-17 14:52:14.951 CST [23030] CONTEXT: WAL redo at 0/3011650 for\n> Database/CREATE: copy dir 1663/1 to 65546/65547\n> The directories are definitely wrong and misleading.\n\nThe original description is right in the light of how the server\nrecognizes. The record exactly says that \"copy dir 1663/1 to\n65546/65547\" and the latter path is converted in filesystem layer\nvia a symlink.\n\n\n> > > > Also I'd suggest we should use pg_mkdir_p() in\n> > TablespaceCreateDbspace()\n> > > > to replace\n> > > > the code block includes a lot of\n> > > > get_parent_directory(), MakePGDirectory(), etc even it\n> > > > is not fixing a bug since pg_mkdir_p() code change seems to be more\n> > > > graceful and simpler.\n> >\n> > But I don't agree to this. pg_mkdir_p goes above two-parents up,\n> > which would be unwanted here.\n> >\n> > I do not understand this also. pg_mkdir_p() is similar to 'mkdir -p'.\n> This change just makes the code concise. 
Though in theory the change is not\n> needed.\n\nWe don't want to create tablespace direcotory after concurrent\nDROPing, as the comment just above is saying:\n\n| * Acquire TablespaceCreateLock to ensure that no DROP TABLESPACE\n| * or TablespaceCreateDbspace is running concurrently.\n\nIf the concurrent DROP TABLESPACE destroyed the grand parent\ndirectory, we mustn't create it again.\n\n> > > > Whatever ignore mkdir failure or mkdir_p, I found that these steps\n> > seem to\n> > > > be error-prone\n> > > > along with postgre evolving since they are hard to test and also we are\n> > > > not easy to think out\n> > > > various potential bad cases. Is it possible that we should do real\n> > > > checkpoint (flush & update\n> > > > redo lsn) when seeing checkpoint xlogs for these operations? This will\n> > > > slow down standby\n> > > > but master also does this and also these operations are not usual,\n> > > > espeically it seems that it\n> > > > does not slow down wal receiving usually?\n> >\n> > That dramatically slows recovery (not replication) if databases\n> > are created and deleted frequently. That wouldn't be acceptable.\n> >\n> \n> This behavior is rare and seems to have the same impact on master & standby\n> from checkpoint/restartpoint.\n> We do not worry about master so we should not worry about standby also.\n\nI didn't mention replication. I said that that slows recovery,\nwhich is not governed by master's speed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 14 May 2019 12:06:13 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "On Tue, May 14, 2019 at 11:06 AM Kyotaro HORIGUCHI <\nhoriguchi.kyotaro@lab.ntt.co.jp> wrote:\n\n> Hello.\n>\n> At Mon, 13 May 2019 17:37:50 +0800, Paul Guo <pguo@pivotal.io> wrote in <\n> CAEET0ZF9yN4DaXyuFLzOcAYyxuFF1Ms_OQWeA+Rwv3GhA5Q-SA@mail.gmail.com>\n> > Thanks for the reply.\n> >\n> > On Tue, May 7, 2019 at 2:47 PM Kyotaro HORIGUCHI <\n> > horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> >\n> > >\n> > > + if (!(stat(parent_path, &st) == 0 && S_ISDIR(st.st_mode)))\n> > > + {\n> > >\n> > > This patch is allowing missing source and destination directory\n> > > even in consistent state. I don't think it is safe.\n> > >\n> >\n> > I do not understand this. Can you elaborate?\n>\n> Suppose we were recoverying based on a backup at LSN1 targeting\n> to LSN3 then it crashed at LSN2, where LSN1 < LSN2 <= LSN3. LSN2\n> is called as \"consistency point\", before where the database is\n> not consistent. It's because we are applying WAL recored older\n> than those that were already applied in the second trial. The\n> same can be said for crash recovery, where LSN1 is the latest\n> checkpoint ('s redo LSN) and LSN2=LSN3 is the crashed LSN.\n>\n> Creation of an existing directory or dropping of a non-existent\n> directory are apparently inconsistent or \"broken\" so we should\n> stop recovery when seeing such WAL records while database is in\n> consistent state.\n>\n\nThis seems to be hard to detect. I thought using invalid_page mechanism\nlong ago,\nbut it seems to be hard to fully detect a dropped tablespace.\n\n> > > 2) Fixed dbase_desc(). 
Now the xlog output looks correct.\n> > > >\n> > > > rmgr: Database len (rec/tot): 42/ 42, tx: 486, lsn:\n> > > > 0/016386A8, prev 0/01638630, desc: CREATE copy dir base/1 to\n> > > > pg_tblspc/16384/PG_12_201904281/16386\n> > > >\n> > > > rmgr: Database len (rec/tot): 34/ 34, tx: 487, lsn:\n> > > > 0/01638EB8, prev 0/01638E40, desc: DROP dir\n> > > > pg_tblspc/16384/PG_12_201904281/16386\n> > >\n> > > WAL records don't convey such information. The previous\n> > > description seems right to me.\n> > >\n> >\n> > 2019-04-17 14:52:14.951 CST [23030] CONTEXT: WAL redo at 0/3011650 for\n> > Database/CREATE: copy dir 1663/1 to 65546/65547\n> > The directories are definitely wrong and misleading.\n>\n> The original description is right in the light of how the server\n> recognizes. The record exactly says that \"copy dir 1663/1 to\n> 65546/65547\" and the latter path is converted in filesystem layer\n> via a symlink.\n>\n\nIn either $PG_DATA/pg_tblspc or symlinked real tablespace directory,\nthere is an additional directory like PG_12_201905221 between\ntablespace oid and database oid. See the directory layout as below,\nso the directory info in xlog dump output was not correct.\n\n$ ls -lh data/pg_tblspc/\n\n\ntotal 0\n\n\nlrwxrwxrwx. 1 gpadmin gpadmin 6 May 27 17:23 16384 -> /tmp/2\n\n\n$ ls -lh /tmp/2\n\n\ntotal 0\n\n\ndrwx------. 3 gpadmin gpadmin 18 May 27 17:24 PG_12_201905221\n\n>\n>\n> > > > > Also I'd suggest we should use pg_mkdir_p() in\n> > > TablespaceCreateDbspace()\n> > > > > to replace\n> > > > > the code block includes a lot of\n> > > > > get_parent_directory(), MakePGDirectory(), etc even it\n> > > > > is not fixing a bug since pg_mkdir_p() code change seems to be more\n> > > > > graceful and simpler.\n> > >\n> > > But I don't agree to this. pg_mkdir_p goes above two-parents up,\n> > > which would be unwanted here.\n> > >\n> > > I do not understand this also. pg_mkdir_p() is similar to 'mkdir -p'.\n> > This change just makes the code concise. 
Though in theory the change is\n> not\n> > needed.\n>\n> We don't want to create tablespace direcotory after concurrent\n> DROPing, as the comment just above is saying:\n>\n> | * Acquire TablespaceCreateLock to ensure that no DROP TABLESPACE\n> | * or TablespaceCreateDbspace is running concurrently.\n>\n> If the concurrent DROP TABLESPACE destroyed the grand parent\n> directory, we mustn't create it again.\n>\n\nYes, this is a good reason to keep the original code. Thanks.\n\nBy the way, based on your previous test patch I added another test which\ncould easily detect\nthe missing src directory case.",
"msg_date": "Mon, 27 May 2019 21:39:03 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Mon, May 27, 2019 at 9:39 PM Paul Guo <pguo@pivotal.io> wrote:\n\n>\n>\n> On Tue, May 14, 2019 at 11:06 AM Kyotaro HORIGUCHI <\n> horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n>\n>> Hello.\n>>\n>> At Mon, 13 May 2019 17:37:50 +0800, Paul Guo <pguo@pivotal.io> wrote in <\n>> CAEET0ZF9yN4DaXyuFLzOcAYyxuFF1Ms_OQWeA+Rwv3GhA5Q-SA@mail.gmail.com>\n>> > Thanks for the reply.\n>> >\n>> > On Tue, May 7, 2019 at 2:47 PM Kyotaro HORIGUCHI <\n>> > horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n>> >\n>> > >\n>> > > + if (!(stat(parent_path, &st) == 0 && S_ISDIR(st.st_mode)))\n>> > > + {\n>> > >\n>> > > This patch is allowing missing source and destination directory\n>> > > even in consistent state. I don't think it is safe.\n>> > >\n>> >\n>> > I do not understand this. Can you elaborate?\n>>\n>> Suppose we were recoverying based on a backup at LSN1 targeting\n>> to LSN3 then it crashed at LSN2, where LSN1 < LSN2 <= LSN3. LSN2\n>> is called as \"consistency point\", before where the database is\n>> not consistent. It's because we are applying WAL recored older\n>> than those that were already applied in the second trial. The\n>> same can be said for crash recovery, where LSN1 is the latest\n>> checkpoint ('s redo LSN) and LSN2=LSN3 is the crashed LSN.\n>>\n>> Creation of an existing directory or dropping of a non-existent\n>> directory are apparently inconsistent or \"broken\" so we should\n>> stop recovery when seeing such WAL records while database is in\n>> consistent state.\n>>\n>\n> This seems to be hard to detect. I thought using invalid_page mechanism\n> long ago,\n> but it seems to be hard to fully detect a dropped tablespace.\n>\n> > > > 2) Fixed dbase_desc(). 
Now the xlog output looks correct.\n>> > > >\n>> > > > rmgr: Database len (rec/tot): 42/ 42, tx: 486, lsn:\n>> > > > 0/016386A8, prev 0/01638630, desc: CREATE copy dir base/1 to\n>> > > > pg_tblspc/16384/PG_12_201904281/16386\n>> > > >\n>> > > > rmgr: Database len (rec/tot): 34/ 34, tx: 487, lsn:\n>> > > > 0/01638EB8, prev 0/01638E40, desc: DROP dir\n>> > > > pg_tblspc/16384/PG_12_201904281/16386\n>> > >\n>> > > WAL records don't convey such information. The previous\n>> > > description seems right to me.\n>> > >\n>> >\n>> > 2019-04-17 14:52:14.951 CST [23030] CONTEXT: WAL redo at 0/3011650 for\n>> > Database/CREATE: copy dir 1663/1 to 65546/65547\n>> > The directories are definitely wrong and misleading.\n>>\n>> The original description is right in the light of how the server\n>> recognizes. The record exactly says that \"copy dir 1663/1 to\n>> 65546/65547\" and the latter path is converted in filesystem layer\n>> via a symlink.\n>>\n>\n> In either $PG_DATA/pg_tblspc or symlinked real tablespace directory,\n> there is an additional directory like PG_12_201905221 between\n> tablespace oid and database oid. See the directory layout as below,\n> so the directory info in xlog dump output was not correct.\n>\n> $ ls -lh data/pg_tblspc/\n>\n>\n> total 0\n>\n>\n> lrwxrwxrwx. 1 gpadmin gpadmin 6 May 27 17:23 16384 -> /tmp/2\n>\n>\n> $ ls -lh /tmp/2\n>\n>\n> total 0\n>\n>\n> drwx------. 3 gpadmin gpadmin 18 May 27 17:24 PG_12_201905221\n>\n>>\n>>\n>> > > > > Also I'd suggest we should use pg_mkdir_p() in\n>> > > TablespaceCreateDbspace()\n>> > > > > to replace\n>> > > > > the code block includes a lot of\n>> > > > > get_parent_directory(), MakePGDirectory(), etc even it\n>> > > > > is not fixing a bug since pg_mkdir_p() code change seems to be\n>> more\n>> > > > > graceful and simpler.\n>> > >\n>> > > But I don't agree to this. pg_mkdir_p goes above two-parents up,\n>> > > which would be unwanted here.\n>> > >\n>> > > I do not understand this also. 
pg_mkdir_p() is similar to 'mkdir -p'.\n>> > This change just makes the code concise. Though in theory the change is\n>> not\n>> > needed.\n>>\n>> We don't want to create tablespace direcotory after concurrent\n>> DROPing, as the comment just above is saying:\n>>\n>> | * Acquire TablespaceCreateLock to ensure that no DROP TABLESPACE\n>> | * or TablespaceCreateDbspace is running concurrently.\n>>\n>> If the concurrent DROP TABLESPACE destroyed the grand parent\n>> directory, we mustn't create it again.\n>>\n>\n> Yes, this is a good reason to keep the original code. Thanks.\n>\n> By the way, based on your previous test patch I added another test which\n> could easily detect\n> the missing src directory case.\n>\n>\n\nI updated the patch to v3. In this version, we skip the error if copydir\nfails due to missing src/dst directory,\nbut to make sure the ignoring is legal, I add a simple log/forget mechanism\n(Using List) similar to the xlog invalid page\nchecking mechanism. Two tap tests are included. One is actually from a\nprevious patch by Kyotaro in this\nemail thread and another is added by me. In addition, dbase_desc() is fixed\nto make the message accurate.\n\nThanks.",
"msg_date": "Wed, 19 Jun 2019 15:21:54 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Wed, Jun 19, 2019 at 7:22 PM Paul Guo <pguo@pivotal.io> wrote:\n> I updated the patch to v3. In this version, we skip the error if copydir fails due to missing src/dst directory,\n> but to make sure the ignoring is legal, I add a simple log/forget mechanism (Using List) similar to the xlog invalid page\n> checking mechanism. Two tap tests are included. One is actually from a previous patch by Kyotaro in this\n> email thread and another is added by me. In addition, dbase_desc() is fixed to make the message accurate.\n\nHello Paul,\n\nFYI t/011_crash_recovery.pl is failing consistently on Travis CI with\nthis patch applied:\n\nhttps://travis-ci.org/postgresql-cfbot/postgresql/builds/555368907\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Jul 2019 15:15:35 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Mon, Jul 8, 2019 at 11:16 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Wed, Jun 19, 2019 at 7:22 PM Paul Guo <pguo@pivotal.io> wrote:\n> > I updated the patch to v3. In this version, we skip the error if copydir\n> fails due to missing src/dst directory,\n> > but to make sure the ignoring is legal, I add a simple log/forget\n> mechanism (Using List) similar to the xlog invalid page\n> > checking mechanism. Two tap tests are included. One is actually from a\n> previous patch by Kyotaro in this\n> > email thread and another is added by me. In addition, dbase_desc() is\n> fixed to make the message accurate.\n>\n> Hello Paul,\n>\n> FYI t/011_crash_recovery.pl is failing consistently on Travis CI with\n> this patch applied:\n>\n>\n> https://urldefense.proofpoint.com/v2/url?u=https-3A__travis-2Dci.org_postgresql-2Dcfbot_postgresql_builds_555368907&d=DwIBaQ&c=lnl9vOaLMzsy2niBC8-h_K-7QJuNJEsFrzdndhuJ3Sw&r=Usi0ex6Ch92MsB5QQDgYFw&m=ABylo8AVfubiiYVbCBSgmNnHEMJhMqGXx5c0hkug7Vw&s=5h4m_JhrZwZqsRsu1CHCD3W2eBl14mT8jWLFsj2-bJ4&e=\n>\n>\n>\nThis failure is because the previous v3 patch does not align with a recent\npatch\n\ncommit 660a2b19038b2f6b9f6bcb2c3297a47d5e3557a8\n\nAuthor: Noah Misch <noah@leadboat.com>\n\nDate: Fri Jun 21 20:34:23 2019 -0700\n\n Consolidate methods for translating a Perl path to a Windows path.\n\n\nMy patch uses TestLib::real_dir which is now replaced\nwith TestLib::perl2host in the above commit.\n\nI've updated the patch to v4 to make my code align. Now the test passes in\nmy local environment.\n\nPlease see the attached v4 patch.\n\nThanks.",
"msg_date": "Mon, 15 Jul 2019 18:52:01 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Mon, Jul 15, 2019 at 10:52 PM Paul Guo <pguo@pivotal.io> wrote:\n> Please see the attached v4 patch.\n\nWhile moving this to the next CF, I noticed that this needs updating\nfor the new pg_list.h API.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 2 Aug 2019 10:37:46 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "Thanks. I updated the patch to v5. It passes install-check testing and\nrecovery testing.\n\nOn Fri, Aug 2, 2019 at 6:38 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Mon, Jul 15, 2019 at 10:52 PM Paul Guo <pguo@pivotal.io> wrote:\n> > Please see the attached v4 patch.\n>\n> While moving this to the next CF, I noticed that this needs updating\n> for the new pg_list.h API.\n>\n> --\n> Thomas Munro\n>\n> https://urldefense.proofpoint.com/v2/url?u=https-3A__enterprisedb.com&d=DwIBaQ&c=lnl9vOaLMzsy2niBC8-h_K-7QJuNJEsFrzdndhuJ3Sw&r=Usi0ex6Ch92MsB5QQDgYFw&m=1zhC6VaaS7Ximav7vaUXMUt6EGjrVZpNZut32ug7LDI&s=jSDXnTPIW4WNZCCZ_HIbu7gZ3apEBx36DCeNeNuhLpY&e=\n>",
"msg_date": "Thu, 22 Aug 2019 21:13:20 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "22.08.2019 16:13, Paul Guo wrote:\n> Thanks. I updated the patch to v5. It passes install-check testing and \n> recovery testing.\nHi,\nThank you for working on this fix.\nThe overall design of the latest version looks good to me.\nBut during the review, I found a bug in the current implementation.\nNew behavior must apply to crash-recovery only, now it applies to \narchiveRecovery too.\nThat can cause a silent loss of a tablespace during regular standby \noperation\nsince it never calls CheckRecoveryConsistency().\n\nSteps to reproduce:\n1) run master and replica\n2) create dir for tablespace:\nmkdir /tmp/tblspc1\n\n3) create tablespace and database on the master:\ncreate tablespace tblspc1 location '/tmp/tblspc1';\ncreate database db1 tablespace tblspc1 ;\n\n4) wait for replica to receive this changes and pause replication:\nselect pg_wal_replay_pause();\n\n5) move replica's tablespace symlink to some empty directory, i.e. \n/tmp/tblspc2\nmkdir /tmp/tblspc2\nln -sfn /tmp/tblspc2 postgresql_data_replica/pg_tblspc/16384\n\n6) create another database in tblspc1 on master:\ncreate database db2 tablespace tblspc1 ;\n\n7) resume replication on standby:\nselect pg_wal_replay_resume();\n\n8) try to connect to db2 on standby\n\nIt's expected that dbase_redo() will fail because the directory on \nstandby is not found.\nWhile with the patch it suppresses the error until we attempt to connect \ndb2 on the standby:\n\n2019-08-22 18:34:39.178 MSK [21066] HINT: Execute \npg_wal_replay_resume() to continue.\n2019-08-22 18:42:41.656 MSK [21066] WARNING: Skip creating database \ndirectory \"pg_tblspc/16384/PG_13_201908012\". The dest tablespace may \nhave been removed before abnormal shutdown. 
If the removal is illegal \nafter later checking we will panic.\n2019-08-22 18:42:41.656 MSK [21066] CONTEXT: WAL redo at 0/3027738 for \nDatabase/CREATE: copy dir base/1 to pg_tblspc/16384/PG_13_201908012/16390\n2019-08-22 18:42:46.096 MSK [21688] FATAL: \n\"pg_tblspc/16384/PG_13_201908012/16390\" is not a valid data directory\n2019-08-22 18:42:46.096 MSK [21688] DETAIL: File \n\"pg_tblspc/16384/PG_13_201908012/16390/PG_VERSION\" is missing.\n\nAlso some nitpicking about code style:\n1) Please, add comment to forget_missing_directory().\n\n2) + elog(LOG, \"Directory \\\"%s\\\" was missing during \ndirectory copying \"\nI think we'd better update this message elevel to WARNING.\n\n3) Shouldn't we also move FlushDatabaseBuffers(xlrec->src_db_id); call under\n if (do_copydir) clause?\nI don't see a reason to flush pages that we won't use later.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 22 Aug 2019 19:13:05 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On 2019-Aug-22, Anastasia Lubennikova wrote:\n\n> 22.08.2019 16:13, Paul Guo wrote:\n> > Thanks. I updated the patch to v5. It passes install-check testing and\n> > recovery testing.\n> Hi,\n> Thank you for working on this fix.\n> The overall design of the latest version looks good to me.\n> But during the review, I found a bug in the current implementation.\n> New behavior must apply to crash-recovery only, now it applies to\n> archiveRecovery too.\n\nHello\n\nPaul, Kyotaro, are you working on updating this bugfix? FWIW the latest\npatch submitted by Paul is still current and CFbot says it passes its\nown test, but from Anastasia's email it still needs a bit of work.\n\nAlso: it would be good to have this new bogus scenario described by\nAnastasia covered by a new TAP test. Anastasia, can we enlist you to\nwrite that? Maybe Kyotaro?\n\nThanks\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 3 Sep 2019 11:58:55 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Tue, Sep 3, 2019 at 11:58 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-Aug-22, Anastasia Lubennikova wrote:\n>\n> > 22.08.2019 16:13, Paul Guo wrote:\n> > > Thanks. I updated the patch to v5. It passes install-check testing and\n> > > recovery testing.\n> > Hi,\n> > Thank you for working on this fix.\n> > The overall design of the latest version looks good to me.\n> > But during the review, I found a bug in the current implementation.\n> > New behavior must apply to crash-recovery only, now it applies to\n> > archiveRecovery too.\n>\n> Hello\n>\n> Paul, Kyotaro, are you working on updating this bugfix? FWIW the latest\n> patch submitted by Paul is still current and CFbot says it passes its\n> own test, but from Anastasia's email it still needs a bit of work.\n>\n> Also: it would be good to have this new bogus scenario described by\n> Anastasia covered by a new TAP test. Anastasia, can we enlist you to\n> write that? Maybe Kyotaro?\n>\n>\nThanks Anastasia and Alvaro for comment and suggestion. Sorry I've been busy\nworking on some non-PG stuffs recently. I've never worked on archive\nrecovery,\nso I expect a bit more time after I'm free (hopefully several days later)\nto take a look.\nOf course Kyotaro, Anastasia or anyone feel free to address the concern\nbefore that.\n\nOn Tue, Sep 3, 2019 at 11:58 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:On 2019-Aug-22, Anastasia Lubennikova wrote:\n\n> 22.08.2019 16:13, Paul Guo wrote:\n> > Thanks. I updated the patch to v5. It passes install-check testing and\n> > recovery testing.\n> Hi,\n> Thank you for working on this fix.\n> The overall design of the latest version looks good to me.\n> But during the review, I found a bug in the current implementation.\n> New behavior must apply to crash-recovery only, now it applies to\n> archiveRecovery too.\n\nHello\n\nPaul, Kyotaro, are you working on updating this bugfix? 
FWIW the latest\npatch submitted by Paul is still current and CFbot says it passes its\nown test, but from Anastasia's email it still needs a bit of work.\n\nAlso: it would be good to have this new bogus scenario described by\nAnastasia covered by a new TAP test. Anastasia, can we enlist you to\nwrite that? Maybe Kyotaro?Thanks Anastasia and Alvaro for comment and suggestion. Sorry I've been busyworking on some non-PG stuffs recently. I've never worked on archive recovery,so I expect a bit more time after I'm free (hopefully several days later) to take a look.Of course Kyotaro, Anastasia or anyone feel free to address the concern before that.",
"msg_date": "Thu, 5 Sep 2019 15:12:23 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "Hi Anastasia\n\nOn Thu, Aug 22, 2019 at 9:43 PM Anastasia Lubennikova <\na.lubennikova@postgrespro.ru> wrote:\n>\n> But during the review, I found a bug in the current implementation.\n> New behavior must apply to crash-recovery only, now it applies to\narchiveRecovery too.\n> That can cause a silent loss of a tablespace during regular standby\noperation\n> since it never calls CheckRecoveryConsistency().\n>\n> Steps to reproduce:\n> 1) run master and replica\n> 2) create dir for tablespace:\n> mkdir /tmp/tblspc1\n>\n> 3) create tablespace and database on the master:\n> create tablespace tblspc1 location '/tmp/tblspc1';\n> create database db1 tablespace tblspc1 ;\n>\n> 4) wait for replica to receive this changes and pause replication:\n> select pg_wal_replay_pause();\n>\n> 5) move replica's tablespace symlink to some empty directory, i.e.\n/tmp/tblspc2\n> mkdir /tmp/tblspc2\n> ln -sfn /tmp/tblspc2 postgresql_data_replica/pg_tblspc/16384\n>\n\nBy changing the tablespace symlink target, we are silently nullifying\neffects of a committed transaction from the standby data directory - the\ndirectory structure created by the standby for create tablespace\ntransaction. This step, therefore, does not look like a valid test case to\nme. Can you share a sequence of steps that does not involve changing data\ndirectory manually?\n\n>\n> Also some nitpicking about code style:\n> 1) Please, add comment to forget_missing_directory().\n>\n> 2) + elog(LOG, \"Directory \\\"%s\\\" was missing during\ndirectory copying \"\n> I think we'd better update this message elevel to WARNING.\n>\n> 3) Shouldn't we also move FlushDatabaseBuffers(xlrec->src_db_id); call\nunder\n> if (do_copydir) clause?\n> I don't see a reason to flush pages that we won't use later.\n>\n\nThank you for the review feedback. I agree with all the points. 
Let me\nincorporate them (I plan to pick this work up and drive it to completion as\nPaul got busy with other things).\n\nBut before that I'm revisiting another solution upthread, that of creating\nrestart points when replaying create/drop database commands before making\nfilesystem changes such as removing a directory. The restart points should\nalign with checkpoints on master. The concern against this solution was\ncreation of restart points will slow down recovery. I don't think crash\nrecovery is affected by this solution because of the already existing\nenforcement of checkpoints. WAL records prior to a create/drop database\nwill not be seen by crash recovery due to the checkpoint enforced during\nthe command's normal execution.\n\nAsim",
"msg_date": "Tue, 10 Sep 2019 17:12:55 +0530",
"msg_from": "Asim R P <apraveen@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "10.09.2019 14:42, Asim R P wrote:\n> Hi Anastasia\n>\n> On Thu, Aug 22, 2019 at 9:43 PM Anastasia Lubennikova \n> <a.lubennikova@postgrespro.ru <mailto:a.lubennikova@postgrespro.ru>> \n> wrote:\n> >\n> > But during the review, I found a bug in the current implementation.\n> > New behavior must apply to crash-recovery only, now it applies to \n> archiveRecovery too.\n> > That can cause a silent loss of a tablespace during regular standby \n> operation\n> > since it never calls CheckRecoveryConsistency().\n> >\n> > Steps to reproduce:\n> > 1) run master and replica\n> > 2) create dir for tablespace:\n> > mkdir /tmp/tblspc1\n> >\n> > 3) create tablespace and database on the master:\n> > create tablespace tblspc1 location '/tmp/tblspc1';\n> > create database db1 tablespace tblspc1 ;\n> >\n> > 4) wait for replica to receive this changes and pause replication:\n> > select pg_wal_replay_pause();\n> >\n> > 5) move replica's tablespace symlink to some empty directory, i.e. \n> /tmp/tblspc2\n> > mkdir /tmp/tblspc2\n> > ln -sfn /tmp/tblspc2 postgresql_data_replica/pg_tblspc/16384\n> >\n>\n> By changing the tablespace symlink target, we are silently nullifying \n> effects of a committed transaction from the standby data directory - \n> the directory structure created by the standby for create tablespace \n> transaction. This step, therefore, does not look like a valid test \n> case to me. Can you share a sequence of steps that does not involve \n> changing data directory manually?\n>\nHi, the whole idea of the test is to reproduce a data loss. For example, \nif the disk containing this tablespace failed.\nProbably, simply deleting the directory \n'postgresql_data_replica/pg_tblspc/16384'\nwould work as well, though I was afraid that it can be caught by some \nearlier checks and my example won't be so illustrative.\n>\n> Thank you for the review feedback. I agree with all the points. 
Let \n> me incorporate them (I plan to pick this work up and drive it to \n> completion as Paul got busy with other things).\n>\n> But before that I'm revisiting another solution upthread, that of \n> creating restart points when replaying create/drop database commands \n> before making filesystem changes such as removing a directory. The \n> restart points should align with checkpoints on master. The concern \n> against this solution was creation of restart points will slow down \n> recovery. I don't think crash recovery is affected by this solution \n> because of the already existing enforcement of checkpoints. WAL \n> records prior to a create/drop database will not be seen by crash \n> recovery due to the checkpoint enforced during the command's normal \n> execution.\n>\n\nI haven't measured the impact of generating extra restart points in \nprevious solution, so I cannot tell whether concerns upthread are \njustified. Still, I enjoy latest design more, since it is clear and \nsimilar with the code of checking unexpected uninitialized pages. In \nprinciple it works. 
And the issue, I described in previous review can be \neasily fixed by several additional checks of InHotStandby macro.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 11 Sep 2019 17:26:44 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "Hello.\n\nAt Wed, 11 Sep 2019 17:26:44 +0300, Anastasia Lubennikova <a.lubennikova@postgrespro.ru> wrote in <a82a896b-93f0-c26c-b941-f5665131381b@postgrespro.ru>\n> 10.09.2019 14:42, Asim R P wrote:\n> > Hi Anastasia\n> >\n> > On Thu, Aug 22, 2019 at 9:43 PM Anastasia Lubennikova\n> > <a.lubennikova@postgrespro.ru <mailto:a.lubennikova@postgrespro.ru>>\n> > wrote:\n> > >\n> > > But during the review, I found a bug in the current implementation.\n> > > New behavior must apply to crash-recovery only, now it applies to\n> > > archiveRecovery too.\n> > > That can cause a silent loss of a tablespace during regular standby\n> > > operation\n> > > since it never calls CheckRecoveryConsistency().\n\nYeah. We should take the same steps with redo operations on\nmissing pages. Just record failure during inconsistent state then\nforget it if underlying tablespace is gone. If we had a record\nwhen we reached concsistency, we're in a serious situation and\nshould stop recovery. log_invalid_page forget_invalid_pages and\nCheckRecoveryConsistency are the entry points of the feature to\nunderstand.\n\n> > > Steps to reproduce:\n> > > 1) run master and replica\n> > > 2) create dir for tablespace:\n> > > mkdir /tmp/tblspc1\n> > >\n> > > 3) create tablespace and database on the master:\n> > > create tablespace tblspc1 location '/tmp/tblspc1';\n> > > create database db1 tablespace tblspc1 ;\n> > >\n> > > 4) wait for replica to receive this changes and pause replication:\n> > > select pg_wal_replay_pause();\n> > >\n> > > 5) move replica's tablespace symlink to some empty directory,\n> > > i.e. /tmp/tblspc2\n> > > mkdir /tmp/tblspc2\n> > > ln -sfn /tmp/tblspc2 postgresql_data_replica/pg_tblspc/16384\n> > >\n> >\n> > By changing the tablespace symlink target, we are silently nullifying\n> > effects of a committed transaction from the standby data directory -\n> > the directory structure created by the standby for create tablespace\n> > transaction. 
This step, therefore, does not look like a valid test\n> > case to me. Can you share a sequence of steps that does not involve\n> > changing data directory manually?\n\nI see it as the same. WAL is inconsistent with what happend on\nstorage with the steps. Database is just broken.\n\n> Hi, the whole idea of the test is to reproduce a data loss. For\n> example, if the disk containing this tablespace failed.\n\nSo, apparently we must start recovery from a backup before that\nfailure happened in that case, and that should ends in success.\n\n# I remember that the start point of this patch is a crash after\n# table space drop subsequent to several operations within the\n# table space. Then, crash recovery fails at an operation in the\n# finally-removed tablespace. Is it right?\n\n> Probably, simply deleting the directory\n> 'postgresql_data_replica/pg_tblspc/16384'\n> would work as well, though I was afraid that it can be caught by some\n> earlier checks and my example won't be so illustrative.\n> >\n> > Thank you for the review feedback. I agree with all the points. Let\n> > me incorporate them (I plan to pick this work up and drive it to\n> > completion as Paul got busy with other things).\n> >\n> > But before that I'm revisiting another solution upthread, that of\n> > creating restart points when replaying create/drop database commands\n> > before making filesystem changes such as removing a directory. The\n> > restart points should align with checkpoints on master. The concern\n> > against this solution was creation of restart points will slow down\n> > recovery. I don't think crash recovery is affected by this solution\n> > because of the already existing enforcement of checkpoints. 
WAL\n> > records prior to a create/drop database will not be seen by crash\n> > recovery due to the checkpoint enforced during the command's normal\n> > execution.\n> >\n> \n> I haven't measured the impact of generating extra restart points in\n> previous solution, so I cannot tell whether concerns upthread are\n> justified. Still, I enjoy latest design more, since it is clear and\n> similar with the code of checking unexpected uninitialized pages. In\n> principle it works. And the issue, I described in previous review can\n> be easily fixed by several additional checks of InHotStandby macro.\n\nGenerally we shouldn't trigger useless restart point for\nuncertain reasons. If standby crashes, it starts the next\nrecovery from the latest *restart point*. Even in that case what\nwe should do is the same.\n\nOf course, for testing, we *should* establish a restartpoint\nmanually in order to establish the prerequisite state.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 12 Sep 2019 17:35:36 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
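The record-then-forget-then-check pattern Kyotaro points to (log_invalid_page, forget_invalid_pages, CheckRecoveryConsistency) can be sketched in miniature. The following is a hand-rolled Python illustration with invented names, not the actual xlogutils.c code:

```python
# Sketch of the invalid-page-style bookkeeping discussed above.
# All names here are illustrative, not real PostgreSQL symbols.

missing_dirs = {}  # (tablespace_oid, database_oid) -> path

def log_missing_dir(spc, db, path):
    # Called while replay is still inconsistent: remember the failure
    # instead of erroring out immediately.
    missing_dirs[(spc, db)] = path

def forget_missing_dirs(spc, db):
    # Called when a later record (e.g. DROP TABLESPACE/DATABASE)
    # legitimizes the earlier failure.
    for key in [k for k in missing_dirs
                if k[0] == spc and (db is None or k[1] == db)]:
        del missing_dirs[key]

def check_recovery_consistency():
    # Once the consistent point is reached, any leftover entry means
    # real corruption: stop recovery (PANIC in the real code).
    if missing_dirs:
        raise RuntimeError("PANIC: unresolved missing dirs: %s" % missing_dirs)

# Replaying "create database in ts1" against a missing ts1 directory,
# followed by "drop tablespace ts1", recovers cleanly:
log_missing_dir(16384, 16389, "pg_tblspc/16384/PG_13/16389")
forget_missing_dirs(16384, None)
check_recovery_consistency()  # no error: the drop explained the failure
assert not missing_dirs
```

The real code keys its hash table on (tablespace OID, database OID) and raises the error through elog(PANIC, ...); only the shape of the bookkeeping is shown here.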
{
"msg_contents": "On Thu, Sep 12, 2019 at 2:05 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n>\n> Hello.\n>\n> At Wed, 11 Sep 2019 17:26:44 +0300, Anastasia Lubennikova <\na.lubennikova@postgrespro.ru> wrote in <\na82a896b-93f0-c26c-b941-f5665131381b@postgrespro.ru>\n> > 10.09.2019 14:42, Asim R P wrote:\n> > > Hi Anastasia\n> > >\n> > > On Thu, Aug 22, 2019 at 9:43 PM Anastasia Lubennikova\n> > > <a.lubennikova@postgrespro.ru <mailto:a.lubennikova@postgrespro.ru>>\n> > > wrote:\n> > > >\n> > > > But during the review, I found a bug in the current implementation.\n> > > > New behavior must apply to crash-recovery only, now it applies to\n> > > > archiveRecovery too.\n> > > > That can cause a silent loss of a tablespace during regular standby\n> > > > operation\n> > > > since it never calls CheckRecoveryConsistency().\n>\n> Yeah. We should take the same steps with redo operations on\n> missing pages. Just record failure during inconsistent state then\n> forget it if underlying tablespace is gone. If we had a record\n> when we reached concsistency, we're in a serious situation and\n> should stop recovery. log_invalid_page forget_invalid_pages and\n> CheckRecoveryConsistency are the entry points of the feature to\n> understand.\n>\n\nYes, I get it now. I will adjust the patch written by Paul accordingly.\n\n>\n> # I remember that the start point of this patch is a crash after\n> # table space drop subsequent to several operations within the\n> # table space. Then, crash recovery fails at an operation in the\n> # finally-removed tablespace. Is it right?\n>\n\nThat's correct. Once the directories are removed from filesystem, any\nattempt to replay WAL records that depend on their existence fails.\n\n\n> > > But before that I'm revisiting another solution upthread, that of\n> > > creating restart points when replaying create/drop database commands\n> > > before making filesystem changes such as removing a directory. 
The\n> > > restart points should align with checkpoints on master. The concern\n> > > against this solution was creation of restart points will slow down\n> > > recovery. I don't think crash recovery is affected by this solution\n> > > because of the already existing enforcement of checkpoints. WAL\n> > > records prior to a create/drop database will not be seen by crash\n> > > recovery due to the checkpoint enforced during the command's normal\n> > > execution.\n> > >\n> >\n> > I haven't measured the impact of generating extra restart points in\n> > previous solution, so I cannot tell whether concerns upthread are\n> > justified. Still, I enjoy latest design more, since it is clear and\n> > similar with the code of checking unexpected uninitialized pages. In\n> > principle it works. And the issue, I described in previous review can\n> > be easily fixed by several additional checks of InHotStandby macro.\n>\n> Generally we shouldn't trigger useless restart point for\n> uncertain reasons. If standby crashes, it starts the next\n> recovery from the latest *restart point*. Even in that case what\n> we should do is the same.\n>\n\nThe reason is quite clear to me - removing directories from filesystem\nbreak the ability to replay WAL records second time. And we already create\ncheckpoints during normal operation in such a case, so crash recovery on a\nmaster node does not suffer from this bug. I've attached a patch that\nperforms restart points during drop database replay, just for reference.\nIt passes both the TAP tests written by Kyotaro and Paul. I had to modify\ndrop database WAL record a bit.\n\nAsim",
"msg_date": "Thu, 12 Sep 2019 17:32:41 +0530",
"msg_from": "Asim R P <apraveen@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Thu, Aug 22, 2019 at 6:44 PM Paul Guo <pguo@pivotal.io> wrote:\n>\n> Thanks. I updated the patch to v5. It passes install-check testing and\nrecovery testing.\n>\n\nThis patch contains one more bug, in addition to what Anastasia has found.\nIf the test case in the patch is tweaked slightly, as follows, the standby\ncrashes due to PANIC.\n\n--- a/src/test/recovery/t/011_crash_recovery.pl\n+++ b/src/test/recovery/t/011_crash_recovery.pl\n@@ -147,8 +147,6 @@ $node_standby->start;\n $node_master->poll_query_until(\n 'postgres', 'SELECT count(*) = 1 FROM pg_stat_replication');\n\n-$node_master->safe_psql('postgres', \"CREATE DATABASE db1 TABLESPACE ts1\");\n-\n # Make sure to perform restartpoint after tablespace creation\n $node_master->wait_for_catchup($node_standby, 'replay',\n\n $node_master->lsn('replay'));\n@@ -156,7 +154,8 @@ $node_standby->safe_psql('postgres', 'CHECKPOINT');\n\n # Do immediate shutdown ...\n $node_master->safe_psql('postgres',\n- q[ALTER DATABASE db1 SET\nTABLESPACE ts2;\n+ q[CREATE DATABASE db1\nTABLESPACE ts1;\n+ ALTER DATABASE db1 SET\nTABLESPACE ts2;\n DROP TABLESPACE ts1;]);\n $node_master->wait_for_catchup($node_standby, 'replay',\n\n $node_master->lsn('replay'));\n\nNotice the create additional create database in the above change. That\ncauses the same tablespace directory (ts1) logged twice in the list of\nmissing directories. At the end of crash recovery, there is one unmatched\nentry in the missing dirs list and the standby PANICs.\n\nPlease find attached a couple of tests that are built on top of what was\nalready written by Paul, Kyotaro. The patch includes a test to demonstrate\nthe above mentioned failure and a test case that my friend Alexandra wrote\nto implement the archive recovery scenario noted by Anastasia.\n\nIn order to fix the test failures, we need to distinguish between a missing\ndatabase directory and a missing tablespace directory. 
And also add logic\nto forget missing directories during tablespace drop. I am working on it.\n\nAsim",
"msg_date": "Thu, 19 Sep 2019 17:29:59 +0530",
"msg_from": "Asim R P <apraveen@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
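The PANIC Asim describes follows directly from how duplicate entries behave in a list versus a keyed table. A minimal Python illustration (hypothetical, not the patch's C code):

```python
# Why tracking missing directories in a plain list breaks when the
# same directory is logged twice by two different replay records.

# List version: both the CREATE DATABASE and the ALTER DATABASE ...
# SET TABLESPACE redo hit the missing ts1 directory and log it.
missing = ["pg_tblspc/ts1", "pg_tblspc/ts1"]
missing.remove("pg_tblspc/ts1")      # DROP TABLESPACE forgets one entry
assert missing == ["pg_tblspc/ts1"]  # stale entry left -> PANIC at consistency

# Keyed version (what the follow-up patch switches to, modeled here
# with a set): logging the same key twice is idempotent, so a single
# forget clears it completely.
missing = set()
missing.add("pg_tblspc/ts1")
missing.add("pg_tblspc/ts1")
missing.discard("pg_tblspc/ts1")
assert not missing                   # consistency check passes
```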
{
"msg_contents": "On Thu, Sep 19, 2019 at 5:29 PM Asim R P <apraveen@pivotal.io> wrote:\n>\n> In order to fix the test failures, we need to distinguish between a\nmissing database directory and a missing tablespace directory. And also\nadd logic to forget missing directories during tablespace drop. I am\nworking on it.\n\nPlease find attached a solution that builds on what Paul has proposed. A\nhash table, similar to the invalid page hash table, is used to track missing\ndirectory references. A missing directory may be a tablespace or a\ndatabase, based on whether the tablespace is found missing or the source\ndatabase is found missing. The crash recovery succeeds if the hash table\nis empty at the end.\n\nAsim",
"msg_date": "Fri, 20 Sep 2019 17:53:56 +0530",
"msg_from": "Asim R P <apraveen@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "20.09.2019 15:23, Asim R P wrote:\n> On Thu, Sep 19, 2019 at 5:29 PM Asim R P <apraveen@pivotal.io \n> <mailto:apraveen@pivotal.io>> wrote:\n> >\n> > In order to fix the test failures, we need to distinguish between a \n> missing database directory and a missing tablespace directory. And \n> also add logic to forget missing directories during tablespace drop. \n> I am working on it.\n>\n> Please find attached a solution that builds on what Paul has propose. \n> A hash table, similar to the invalid page hash table is used to track \n> missing directory references. A missing directory may be a tablespace \n> or a database, based on whether the tablespace is found missing or the \n> source database is found missing. The crash recovery succeeds if the \n> hash table is empty at the end.\n>\nThe v6-0003 patch had merge conflicts due to the recent \n'xl_dbase_drop_rec' change, so I rebased it.\nSee v7-0003 in attachment. Changes are pretty straightforward, though It \nwould be great, if you could check them once more.\n\nNewly introduced test 4 in t/011_crash_recovery.pl fails without the \npatch and passes with it.\nIt seems to me that everything is fine, so I mark it \"Ready For Committer\"\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 26 Dec 2019 17:45:19 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "I looked at this a little while and was bothered by the perl changes; it\nseems out of place to have RecursiveCopy be thinking about tablespaces,\nwhich is way out of its league. So I rewrote that to use a callback:\nthe PostgresNode code passes a callback that's in charge to handle the\ncase of a symlink. Things look much more in place with that. I didn't\nverify that all places that should use this are filled.\n\nIn 0002 I found adding a new function unnecessary: we can keep backwards\ncompat by checking 'ref' of the third argument. With that we don't have\nto add a new function. (POD changes pending.)\n\nI haven't reviewed 0003.\n\nv8 of all these patches attached.\n\n\"git am\" told me your 0001 was in unrecognized format. It applied fine\nwith \"patch\". I suggest that if you're going to submit a series with\ncommit messages and all, please use \"git format-patch\" with the same\n\"-v\" argument (9 in this case) for all patches.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 9 Jan 2020 21:22:45 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
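The callback arrangement Alvaro describes, where RecursiveCopy stays generic and PostgresNode supplies the symlink-aware behavior, looks roughly like this in outline. This is an illustrative Python sketch with invented names, not the actual Perl modules:

```python
# Generic recursive copy that knows nothing about tablespaces; the
# caller supplies a hook that decides how to handle special entries
# such as symlinks (the RecursiveCopy/PostgresNode split above).
import os
import shutil

def copypath(src, dst, filterfn=None):
    if filterfn and filterfn(src, dst):
        return  # the callback handled (or skipped) this entry itself
    if os.path.isdir(src):
        os.makedirs(dst, exist_ok=True)
        for name in os.listdir(src):
            copypath(os.path.join(src, name), os.path.join(dst, name),
                     filterfn)
    else:
        shutil.copyfile(src, dst)

def tablespace_aware(src, dst):
    # PostgresNode-style hook: for a symlink (a tablespace under
    # pg_tblspc), copy the link target's contents instead of the link.
    if os.path.islink(src):
        copypath(os.path.realpath(src), dst)  # recurse into the target
        return True
    return False
```

A backup taken this way materializes each tablespace's data under the copied pg_tblspc path, which is the behavior the TAP tests rely on when cloning a master that has tablespaces.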
{
"msg_contents": "On 2020-Jan-09, Alvaro Herrera wrote:\n\n> I looked at this a little while and was bothered by the perl changes; it\n> seems out of place to have RecursiveCopy be thinking about tablespaces,\n> which is way out of its league. So I rewrote that to use a callback:\n> the PostgresNode code passes a callback that's in charge to handle the\n> case of a symlink. Things look much more in place with that. I didn't\n> verify that all places that should use this are filled.\n> \n> In 0002 I found adding a new function unnecessary: we can keep backwards\n> compat by checking 'ref' of the third argument. With that we don't have\n> to add a new function. (POD changes pending.)\n\nI forgot to add that something in these changes is broken (probably the\nsymlink handling callback) so the tests fail, but I couldn't stay away\nfrom my daughter's birthday long enough to figure out what or how. I'm\non something else today, so if one of you can research and submit fixed\nversions, that'd be great.\n\nThanks,\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 10 Jan 2020 10:43:35 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 9:43 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2020-Jan-09, Alvaro Herrera wrote:\n>\n> > I looked at this a little while and was bothered by the perl changes; it\n> > seems out of place to have RecursiveCopy be thinking about tablespaces,\n> > which is way out of its league. So I rewrote that to use a callback:\n> > the PostgresNode code passes a callback that's in charge to handle the\n> > case of a symlink. Things look much more in place with that. I didn't\n> > verify that all places that should use this are filled.\n> >\n> > In 0002 I found adding a new function unnecessary: we can keep backwards\n> > compat by checking 'ref' of the third argument. With that we don't have\n> > to add a new function. (POD changes pending.)\n>\n> I forgot to add that something in these changes is broken (probably the\n> symlink handling callback) so the tests fail, but I couldn't stay away\n> from my daughter's birthday long enough to figure out what or how. 
I'm\n> on something else today, so if one of you can research and submit fixed\n> versions, that'd be great.\n>\n> Thanks,\n>\n\nI spent some time on this before getting off work today.\n\nWith below fix, the 4th test is now ok but the 5th (last one) hangs due to\npanic.\n\n(gdb) bt\n#0 0x0000003397e32625 in raise () from /lib64/libc.so.6\n#1 0x0000003397e33e05 in abort () from /lib64/libc.so.6\n#2 0x0000000000a90506 in errfinish (dummy=0) at elog.c:590\n#3 0x0000000000a92b4b in elog_finish (elevel=22, fmt=0xb2d580 \"cannot find\ndirectory %s tablespace %d database %d\") at elog.c:1465\n#4 0x000000000057aa0a in XLogLogMissingDir (spcNode=16384, dbNode=0,\npath=0x1885100 \"pg_tblspc/16384/PG_13_202001091/16389\") at xlogutils.c:104\n#5 0x000000000065e92e in dbase_redo (record=0x1841568) at dbcommands.c:2225\n#6 0x000000000056ac94 in StartupXLOG () at xlog.c:7200\n\n\ndiff --git a/src/include/commands/dbcommands.h\nb/src/include/commands/dbcommands.h\nindex b71b400e700..f8f6d5ffd03 100644\n--- a/src/include/commands/dbcommands.h\n+++ b/src/include/commands/dbcommands.h\n@@ -19,8 +19,6 @@\n #include \"lib/stringinfo.h\"\n #include \"nodes/parsenodes.h\"\n\n-extern void CheckMissingDirs4DbaseRedo(void);\n-\n extern Oid createdb(ParseState *pstate, const CreatedbStmt *stmt);\n extern void dropdb(const char *dbname, bool missing_ok, bool force);\n extern void DropDatabase(ParseState *pstate, DropdbStmt *stmt);\ndiff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm\nindex e6e7ea505d9..4eef8bb1985 100644\n--- a/src/test/perl/PostgresNode.pm\n+++ b/src/test/perl/PostgresNode.pm\n@@ -615,11 +615,11 @@ sub _srcsymlink\n my $srcrealdir = readlink($srcpath);\n\n opendir(my $dh, $srcrealdir);\n- while (readdir $dh)\n+ while (my $entry = (readdir $dh))\n {\n- next if (/^\\.\\.?$/);\n- my $spath = \"$srcrealdir/$_\";\n- my $dpath = \"$dstrealdir/$_\";\n+ next if ($entry eq '.' 
or $entry eq '..');\n+ my $spath = \"$srcrealdir/$entry\";\n+ my $dpath = \"$dstrealdir/$entry\";\n RecursiveCopy::copypath($spath, $dpath);\n }\n closedir $dh;",
"msg_date": "Mon, 13 Jan 2020 18:27:16 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "I further fixed the last test failure (due to a small bug in the test, not\nin code). Attached are the new patch series. Let's see the CI pipeline\nresult.",
"msg_date": "Wed, 15 Jan 2020 18:18:15 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "\n\nOn 2020/01/15 19:18, Paul Guo wrote:\n> I further fixed the last test failure (due to a small bug in the test, not in code). Attached are the new patch series. Let's see the CI pipeline result.\n\nThanks for updating the patches!\n\nI started reading the 0003 patch.\n\nThe approach that the 0003 patch uses is not the perfect solution.\nIf the standby crashes after tblspc_redo() removes the directory and before\nits subsequent COMMIT record is replayed, PANIC error would occur since\nthere can be some unresolved missing directory entries when we reach the\nconsistent state. The problem would very rarely happen, though...\nJust idea; calling XLogFlush() to update the minimum recovery point just\nbefore tblspc_redo() performs destroy_tablespace_directories() may be\nsafe and helpful for the problem?\n\n-\t\tappendStringInfo(buf, \"copy dir %u/%u to %u/%u\",\n-\t\t\t\t\t\t xlrec->src_tablespace_id, xlrec->src_db_id,\n-\t\t\t\t\t\t xlrec->tablespace_id, xlrec->db_id);\n+\t\tdbpath1 = GetDatabasePath(xlrec->src_db_id, xlrec->src_tablespace_id);\n+\t\tdbpath2 = GetDatabasePath(xlrec->db_id, xlrec->tablespace_id);\n+\t\tappendStringInfo(buf, \"copy dir %s to %s\", dbpath1, dbpath2);\n+\t\tpfree(dbpath2);\n+\t\tpfree(dbpath1);\n\nIf the patch is for the bug fix and would be back-ported, the above change\nwould lead to change pg_waldump's output for CREATE/DROP DATABASE between\nminor versions. 
IMO it's better to avoid such change and separate the above\nas a separate patch only for master.\n\n-\t\t\tappendStringInfo(buf, \" %u/%u\",\n-\t\t\t\t\t\t\t xlrec->tablespace_ids[i], xlrec->db_id);\n+\t\t{\n+\t\t\tdbpath1 = GetDatabasePath(xlrec->db_id, xlrec->tablespace_ids[i]);\n+\t\t\tappendStringInfo(buf, \"%s\", dbpath1);\n+\t\t\tpfree(dbpath1);\n+\t\t}\n\nSame as above.\n\nBTW, the above \"%s\" should be \" %s\", i.e., a space character needs to be\nappended to the head of \"%s\".\n\n+\t\t\tget_parent_directory(parent_path);\n+\t\t\tif (!(stat(parent_path, &st) == 0 && S_ISDIR(st.st_mode)))\n+\t\t\t{\n+\t\t\t\tXLogLogMissingDir(xlrec->tablespace_id, InvalidOid, dst_path);\n\nThe third argument of XLogLogMissingDir() should be parent_path instead of\ndst_path?\n\n+\tif (hash_search(missing_dir_tab, &key, HASH_REMOVE, NULL) == NULL)\n+\t\telog(DEBUG2, \"dir %s tablespace %d database %d is not missing\",\n+\t\t\t path, spcNode, dbNode);\n\nI think that this elog() is useless and rather confusing.\n\n+\t\tXLogForgetMissingDir(xlrec->ts_id, InvalidOid, \"\");\n\nThe third argument should be set to the actual path instead of an empty\nstring. Otherwise XLogForgetMissingDir() may emit a confusing DEBUG2\nmessage. Or the third argument of XLogForgetMissingDir() should be removed\nand the path in the DEBUG2 message should be calculated from the spcNode\nand dbNode in the hash entry in XLogForgetMissingDir().\n\n+#include \"common/file_perm.h\"\n\nThis seems not necessary.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Tue, 28 Jan 2020 00:24:37 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
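Fujii's crash window and the proposed XLogFlush() fix can be modeled with a toy replay loop. All names, LSNs, and the recover() helper below are invented for illustration; the real logic lives in xlog.c's consistency tracking:

```python
# Toy model of the crash window: minRecoveryPoint is the LSN recovery
# must pass before it declares consistency, and missing-directory
# failures are only tolerated while still inconsistent.

def recover(wal, start_lsn, min_recovery_point, dirs_on_disk):
    missing = set()
    last_replayed = start_lsn
    for lsn, rec in wal:
        consistent = last_replayed >= min_recovery_point
        if rec == "create db in ts1" and "ts1" not in dirs_on_disk:
            if consistent:
                return "PANIC"      # missing dir after consistency
            missing.add("ts1")      # tolerated while inconsistent
        elif rec == "drop tablespace ts1":
            dirs_on_disk.discard("ts1")
            missing.discard("ts1")  # the drop explains the failure
        last_replayed = lsn
    return "PANIC" if missing else "ok"

wal = [(10, "create db in ts1"), (20, "drop tablespace ts1")]

# Standby crashed right after tblspc_redo() removed ts1's directory.
# If the minimum recovery point was never pushed past the create-db
# record, re-replaying that record counts as "consistent" -> PANIC:
assert recover(wal, start_lsn=8, min_recovery_point=5,
               dirs_on_disk=set()) == "PANIC"

# An XLogFlush() just before removing the directory advances the
# minimum recovery point to the drop record's LSN, so the re-replay
# stays in the tolerant, inconsistent phase until the drop is reached:
assert recover(wal, start_lsn=8, min_recovery_point=20,
               dirs_on_disk=set()) == "ok"
```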
{
"msg_contents": "\n\nOn 2020/01/28 0:24, Fujii Masao wrote:\n> \n> \n> On 2020/01/15 19:18, Paul Guo wrote:\n>> I further fixed the last test failure (due to a small bug in the test, not in code). Attached are the new patch series. Let's see the CI pipeline result.\n> \n> Thanks for updating the patches!\n> \n> I started reading the 0003 patch.\n\nI marked this patch as Waiting on Author in CF because there is no update\nsince my last review comments. Could you mark it as Needs Review again\nif you post the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 25 Mar 2020 14:52:10 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "> On 25 Mar 2020, at 06:52, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n> On 2020/01/28 0:24, Fujii Masao wrote:\n>> On 2020/01/15 19:18, Paul Guo wrote:\n>>> I further fixed the last test failure (due to a small bug in the test, not in code). Attached are the new patch series. Let's see the CI pipeline result.\n>> Thanks for updating the patches!\n>> I started reading the 0003 patch.\n> \n> I marked this patch as Waiting on Author in CF because there is no update\n> since my last review comments. Could you mark it as Needs Review again\n> if you post the updated version of the patch.\n\nThis thread has been stalled since effectively January, so I'm marking this\npatch Returned with Feedback. Feel free to open a new entry once the review\ncomments have been addressed.\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 7 Jul 2020 23:12:30 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "Looks like my previous reply was held for moderation (maybe due to my new email address).\r\nI configured my pg account today using the new email address. I guess this email would be\r\nheld for moderation.\r\n\r\nI’m now replying my previous reply email and attaching the new patch series.\r\n\r\n\r\nOn Jul 6, 2020, at 10:18 AM, Paul Guo <guopa@vmware.com<mailto:guopa@vmware.com>> wrote:\r\n\r\nThanks for the review. I’m now re-picking up the work. I modified the code following the comments.\r\nBesides, I tweaked the test code a bit. There are several things I’m not 100% sure. Please see\r\nmy replies below.\r\n\r\nOn Jan 27, 2020, at 11:24 PM, Fujii Masao <masao.fujii@oss.nttdata.com<mailto:masao.fujii@oss.nttdata.com>> wrote:\r\n\r\nOn 2020/01/15 19:18, Paul Guo wrote:\r\nI further fixed the last test failure (due to a small bug in the test, not in code). Attached are the new patch series. Let's see the CI pipeline result.\r\n\r\nThanks for updating the patches!\r\n\r\nI started reading the 0003 patch.\r\n\r\nThe approach that the 0003 patch uses is not the perfect solution.\r\nIf the standby crashes after tblspc_redo() removes the directory and before\r\nits subsequent COMMIT record is replayed, PANIC error would occur since\r\nthere can be some unresolved missing directory entries when we reach the\r\nconsistent state. The problem would very rarely happen, though...\r\nJust idea; calling XLogFlush() to update the minimum recovery point just\r\nbefore tblspc_redo() performs destroy_tablespace_directories() may be\r\nsafe and helpful for the problem?\r\n\r\nYes looks like an issue. My understanding is the below scenario.\r\n\r\nXLogLogMissingDir()\r\n\r\nXLogFlush() in redo (e.g. in a commit redo). 
<- create a minimum recovery point (we call it LSN_A).\r\n\r\ntblspc_redo()->XLogForgetMissingDir()\r\n <- If we panic immediately after we remove the directory in tblspc_redo()\r\n <- when we do replay during crash-recovery, we will check consistency at LSN_A and thus PANIC in XLogCheckMissingDirs()\r\n\r\ncommit\r\n\r\nWe should add a XLogFlush() in tblspc_redo(). This brings several other questions to my mind also.\r\n\r\n\r\n1. Should we call XLogFlush() in dbase_redo() for XLOG_DBASE_DROP also?\r\n It calls both XLogDropDatabase() and XLogForgetMissingDir, which seem to have this issue also?\r\n\r\n2. xact_redo_abort() calls DropRelationFiles() also. Why do not we call XLogFlush() there?\r\n\r\n\r\n\r\n- appendStringInfo(buf, \"copy dir %u/%u to %u/%u\",\r\n- xlrec->src_tablespace_id, xlrec->src_db_id,\r\n- xlrec->tablespace_id, xlrec->db_id);\r\n+ dbpath1 = GetDatabasePath(xlrec->src_db_id, xlrec->src_tablespace_id);\r\n+ dbpath2 = GetDatabasePath(xlrec->db_id, xlrec->tablespace_id);\r\n+ appendStringInfo(buf, \"copy dir %s to %s\", dbpath1, dbpath2);\r\n+ pfree(dbpath2);\r\n+ pfree(dbpath1);\r\n\r\nIf the patch is for the bug fix and would be back-ported, the above change\r\nwould lead to change pg_waldump's output for CREATE/DROP DATABASE between\r\nminor versions. IMO it's better to avoid such change and separate the above\r\nas a separate patch only for master.\r\n\r\nI know we do not want wal format changes between minor releases, but does a wal description string change\r\nbetween minor releases affect users? Anyway I’ll extract this part into a separate patch in the series\r\nsince this change is actually independent of the other changes.\r\n\r\n\r\n- appendStringInfo(buf, \" %u/%u\",\r\n- xlrec->tablespace_ids[i], xlrec->db_id);\r\n+ {\r\n+ dbpath1 = GetDatabasePath(xlrec->db_id, xlrec->tablespace_ids[i]);\r\n+ appendStringInfo(buf, \"%s\", dbpath1);\r\n+ pfree(dbpath1);\r\n+ }\r\n\r\nSame as above.\r\n\r\nBTW, the above \"%s\" should be \" %s\", i.e., a space character needs to be\r\nappended to the head of \"%s”.\r\n\r\nOK\r\n\r\n\r\n+ get_parent_directory(parent_path);\r\n+ if (!(stat(parent_path, &st) == 0 && S_ISDIR(st.st_mode)))\r\n+ {\r\n+ XLogLogMissingDir(xlrec->tablespace_id, InvalidOid, dst_path);\r\n\r\nThe third argument of XLogLogMissingDir() should be parent_path instead of\r\ndst_path?\r\n\r\nThe argument is for debug message printing so both should be fine, but admittedly we are\r\nlogging for the tablespace directory so parent_path might be better.\r\n\r\n\r\n+ if (hash_search(missing_dir_tab, &key, HASH_REMOVE, NULL) == NULL)\r\n+ elog(DEBUG2, \"dir %s tablespace %d database %d is not missing\",\r\n+ path, spcNode, dbNode);\r\n\r\nI think that this elog() is useless and rather confusing.\r\n\r\nOK. Modified.\r\n\r\n\r\n+ XLogForgetMissingDir(xlrec->ts_id, InvalidOid, \"\");\r\n\r\nThe third argument should be set to the actual path instead of an empty\r\nstring. Otherwise XLogForgetMissingDir() may emit a confusing DEBUG2\r\nmessage. Or the third argument of XLogForgetMissingDir() should be removed\r\nand the path in the DEBUG2 message should be calculated from the spcNode\r\nand dbNode in the hash entry in XLogForgetMissingDir().\r\n\r\nI’m now removing the third argument. Use GetDatabasePath() to get the path if database id is not InvalidOid.\r\n\r\n\r\n+#include \"common/file_perm.h\"\r\n\r\nThis seems not necessary.\r\n\r\nRight.",
"msg_date": "Wed, 8 Jul 2020 12:56:44 +0000",
"msg_from": "Paul Guo <guopa@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "At Wed, 8 Jul 2020 12:56:44 +0000, Paul Guo <guopa@vmware.com> wrote in \n> On 2020/01/15 19:18, Paul Guo wrote:\n> I further fixed the last test failure (due to a small bug in the test, not in code). Attached are the new patch series. Let's see the CI pipeline result.\n> \n> Thanks for updating the patches!\n> \n> I started reading the 0003 patch.\n> \n> The approach that the 0003 patch uses is not the perfect solution.\n> If the standby crashes after tblspc_redo() removes the directory and before\n> its subsequent COMMIT record is replayed, PANIC error would occur since\n> there can be some unresolved missing directory entries when we reach the\n> consistent state. The problem would very rarely happen, though...\n> Just idea; calling XLogFlush() to update the minimum recovery point just\n> before tblspc_redo() performs destroy_tablespace_directories() may be\n> safe and helpful for the problem?\n\nIt seems to me that what the current patch does is too complex. What\nwe need to do here is to remember every invalid operation then forget\nit when the prerequisite object is dropped.\n\nWhen a table space is dropped before consistency is established, we\ndon't need to care what has been performed inside the tablespace. In\nthis perspective, it is enough to remember tablespace ids when failed\nto do something inside it due to the absence of the tablespace and\nthen forget it when we remove it. We could remember individual\ndatabase id to show them in error messages, but I'm not sure it's\nuseful. 
The reason log_invalid_page records block numbers is to allow\nthe machinery handle partial table truncations, but this is not the\ncase since dropping tablespace cannot leave some of containing\ndatabases.\n\nAs the result, we won't see an unresolved invalid operations in a\ndropped tablespace.\n\nAm I missing something?\n\n\ndbase_redo:\n+ if (!(stat(parent_path, &st) == 0 && S_ISDIR(st.st_mode)))\n+ {\n+ XLogRecordMissingDir(xlrec->tablespace_id, InvalidOid, parent_path);\n\nThis means \"record the belonging table space directory if it is not\nfound OR it is not a directory\". The former can be valid but the\nlatter is unconditionally can not (I don't think we bother considering\nsymlinks there).\n\n+ /*\n+ * Source directory may be missing. E.g. the template database used\n+ * for creating this database may have been dropped, due to reasons\n+ * noted above. Moving a database from one tablespace may also be a\n+ * partner in the crime.\n+ */\n+ if (!(stat(src_path, &st) == 0 && S_ISDIR(st.st_mode)))\n+ {\n+ XLogLogMissingDir(xlrec->src_tablespace_id, xlrec->src_db_id, src_path);\n\nThis is a part of *creation* of the target directory. Lack of the\nsource directory cannot be valid even if the source directory is\ndropped afterwards in the WAL stream and we can allow that if the\n*target* tablespace is dropped afterwards. As the result, as I\nmentioned above, we don't need to record about the database directory.\n\nBy the way the name XLogLogMiss.. is somewhat confusing. How about\nXLogReportMissingDir (named after report_invalid_page).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 05 Jan 2021 10:07:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "Thanks for the review, please see the replies below.\r\n\r\n> On Jan 5, 2021, at 9:07 AM, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\r\n> \r\n> At Wed, 8 Jul 2020 12:56:44 +0000, Paul Guo <guopa@vmware.com> wrote in \r\n>> On 2020/01/15 19:18, Paul Guo wrote:\r\n>> I further fixed the last test failure (due to a small bug in the test, not in code). Attached are the new patch series. Let's see the CI pipeline result.\r\n>> \r\n>> Thanks for updating the patches!\r\n>> \r\n>> I started reading the 0003 patch.\r\n>> \r\n>> The approach that the 0003 patch uses is not the perfect solution.\r\n>> If the standby crashes after tblspc_redo() removes the directory and before\r\n>> its subsequent COMMIT record is replayed, PANIC error would occur since\r\n>> there can be some unresolved missing directory entries when we reach the\r\n>> consistent state. The problem would very rarely happen, though...\r\n>> Just idea; calling XLogFlush() to update the minimum recovery point just\r\n>> before tblspc_redo() performs destroy_tablespace_directories() may be\r\n>> safe and helpful for the problem?\r\n> \r\n> It seems to me that what the current patch does is too complex. What\r\n> we need to do here is to remember every invalid operation then forget\r\n> it when the prerequisite object is dropped.\r\n> \r\n> When a table space is dropped before consistency is established, we\r\n> don't need to care what has been performed inside the tablespace. In\r\n> this perspective, it is enough to remember tablespace ids when failed\r\n> to do something inside it due to the absence of the tablespace and\r\n> then forget it when we remove it. We could remember individual\r\n> database id to show them in error messages, but I'm not sure it's\r\n> useful. 
The reason log_invalid_page records block numbers is to allow\r\n> the machinery handle partial table truncations, but this is not the\r\n> case since dropping tablespace cannot leave some of containing\r\n> databases.\r\n> \r\n> As the result, we won't see an unresolved invalid operations in a\r\n> dropped tablespace.\r\n> \r\n> Am I missing something?\r\n\r\nYes, removing the database id from the hash key in the log/forget code should\r\nbe usually fine, but the previous code does stricter/safer checking.\r\n\r\nConsider the scenario:\r\n\r\nCREATE DATABASE newdb1 TEMPLATE template_db1;\r\nCREATE DATABASE newdb2 TEMPLATE template_db2; <- in case the template_db2 database directory is missing abnormally somehow.\r\nDROP DATABASE template_db1;\r\n\r\nThe previous code could detect this but if we remove the database id in the code,\r\nthis bad scenario is skipped.\r\n\r\n> \r\n> \r\n> dbase_redo:\r\n> + if (!(stat(parent_path, &st) == 0 && S_ISDIR(st.st_mode)))\r\n> + {\r\n> + XLogRecordMissingDir(xlrec->tablespace_id, InvalidOid, parent_path);\r\n> \r\n> This means \"record the belonging table space directory if it is not\r\n> found OR it is not a directory\". The former can be valid but the\r\n> latter is unconditionally can not (I don't think we bother considering\r\n> symlinks there).\r\n\r\nAgain this is a safer check, in the case the parent_path is a file for example somehow,\r\nwe should panic finally for the case and let the user checks and then does recovery again.\r\n\r\n> \r\n> + /*\r\n> + * Source directory may be missing. E.g. the template database used\r\n> + * for creating this database may have been dropped, due to reasons\r\n> + * noted above. Moving a database from one tablespace may also be a\r\n> + * partner in the crime.\r\n> + */\r\n> + if (!(stat(src_path, &st) == 0 && S_ISDIR(st.st_mode)))\r\n> + {\r\n> + XLogLogMissingDir(xlrec->src_tablespace_id, xlrec->src_db_id, src_path);\r\n> \r\n> This is a part of *creation* of the target directory. 
Lack of the\r\n> source directory cannot be valid even if the source directory is\r\n> dropped afterwards in the WAL stream and we can allow that if the\r\n> *target* tablespace is dropped afterwards. As the result, as I\r\n> mentioned above, we don't need to record about the database directory.\r\n> \r\n> By the way the name XLogLogMiss.. is somewhat confusing. How about\r\n> XLogReportMissingDir (named after report_invalid_page).\r\n\r\nAgree with you.\r\n\r\nAlso your words remind me that we should skip the checking if the consistency point\r\nis reached.\r\n\r\nHere is a git diff against the previous patch. I’ll send out the new rebased patches after\r\nthe consensus is reached.\r\n\r\ndiff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c\r\nindex 7ade385965..c8fe3fe228 100644\r\n--- a/src/backend/access/transam/xlogutils.c\r\n+++ b/src/backend/access/transam/xlogutils.c\r\n@@ -90,7 +90,7 @@ typedef struct xl_missing_dir\r\n static HTAB *missing_dir_tab = NULL;\r\n\r\n void\r\n-XLogLogMissingDir(Oid spcNode, Oid dbNode, char *path)\r\n+XLogReportMissingDir(Oid spcNode, Oid dbNode, char *path)\r\n {\r\n \txl_missing_dir_key key;\r\n \tbool found;\r\n@@ -103,16 +103,6 @@ XLogLogMissingDir(Oid spcNode, Oid dbNode, char *path)\r\n \t */\r\n \tAssert(OidIsValid(spcNode));\r\n\r\n-\tif (reachedConsistency)\r\n-\t{\r\n-\t\tif (dbNode == InvalidOid)\r\n-\t\t\telog(PANIC, \"cannot find directory %s (tablespace %d)\",\r\n-\t\t\t\t path, spcNode);\r\n-\t\telse\r\n-\t\t\telog(PANIC, \"cannot find directory %s (tablespace %d database %d)\",\r\n-\t\t\t\t path, spcNode, dbNode);\r\n-\t}\r\n-\r\n \tif (missing_dir_tab == NULL)\r\n \t{\r\n \t\t/* create hash table when first needed */\r\ndiff --git a/src/backend/commands/dbcommands.c b/src/backend/commands/dbcommands.c\r\nindex fbff422c3b..7bd6d4efd9 100644\r\n--- a/src/backend/commands/dbcommands.c\r\n+++ b/src/backend/commands/dbcommands.c\r\n@@ -2205,7 +2205,7 @@ 
dbase_redo(XLogReaderState *record)\r\n \t\t\t\t\t\t(errmsg(\"some useless files may be left behind in old database directory \\\"%s\\\"\",\r\n \t\t\t\t\t\t\t\tdst_path)));\r\n \t\t}\r\n-\t\telse\r\n+\t\telse if (!reachedConsistency)\r\n \t\t{\r\n \t\t\t/*\r\n \t\t\t * It is possible that drop tablespace record appearing later in\r\n@@ -2221,7 +2221,7 @@ dbase_redo(XLogReaderState *record)\r\n \t\t\tget_parent_directory(parent_path);\r\n \t\t\tif (!(stat(parent_path, &st) == 0 && S_ISDIR(st.st_mode)))\r\n \t\t\t{\r\n-\t\t\t\tXLogLogMissingDir(xlrec->tablespace_id, InvalidOid, parent_path);\r\n+\t\t\t\tXLogReportMissingDir(xlrec->tablespace_id, InvalidOid, parent_path);\r\n \t\t\t\tskip = true;\r\n \t\t\t\tereport(WARNING,\r\n \t\t\t\t\t\t(errmsg(\"skipping create database WAL record\"),\r\n@@ -2239,9 +2239,10 @@ dbase_redo(XLogReaderState *record)\r\n \t\t * noted above. Moving a database from one tablespace may also be a\r\n \t\t * partner in the crime.\r\n \t\t */\r\n-\t\tif (!(stat(src_path, &st) == 0 && S_ISDIR(st.st_mode)))\r\n+\t\tif (!(stat(src_path, &st) == 0 && S_ISDIR(st.st_mode)) &&\r\n+\t\t\t!reachedConsistency)\r\n \t\t{\r\n-\t\t\tXLogLogMissingDir(xlrec->src_tablespace_id, xlrec->src_db_id, src_path);\r\n+\t\t\tXLogReportMissingDir(xlrec->src_tablespace_id, xlrec->src_db_id, src_path);\r\n \t\t\tskip = true;\r\n \t\t\tereport(WARNING,\r\n \t\t\t\t\t(errmsg(\"skipping create database WAL record\"),\r\n@@ -2311,7 +2312,8 @@ dbase_redo(XLogReaderState *record)\r\n \t\t\t\t\t\t(errmsg(\"some useless files may be left behind in old database directory \\\"%s\\\"\",\r\n \t\t\t\t\t\t\t\tdst_path)));\r\n\r\n-\t\t\tXLogForgetMissingDir(xlrec->tablespace_ids[i], xlrec->db_id);\r\n+\t\t\tif (!reachedConsistency)\r\n+\t\t\t\tXLogForgetMissingDir(xlrec->tablespace_ids[i], xlrec->db_id);\r\n\r\n \t\t\tpfree(dst_path);\r\n \t\t}\r\ndiff --git a/src/backend/commands/tablespace.c b/src/backend/commands/tablespace.c\r\nindex 294c9676b4..15eaa757cc 100644\r\n--- 
a/src/backend/commands/tablespace.c\r\n+++ b/src/backend/commands/tablespace.c\r\n@@ -1534,7 +1534,8 @@ tblspc_redo(XLogReaderState *record)\r\n \t{\r\n \t\txl_tblspc_drop_rec *xlrec = (xl_tblspc_drop_rec *) XLogRecGetData(record);\r\n\r\n-\t\tXLogForgetMissingDir(xlrec->ts_id, InvalidOid);\r\n+\t\tif (!reachedConsistency)\r\n+\t\t\tXLogForgetMissingDir(xlrec->ts_id, InvalidOid);\r\n\r\n \t\tXLogFlush(record->EndRecPtr);\r\n\r\ndiff --git a/src/include/access/xlogutils.h b/src/include/access/xlogutils.h\r\nindex da561af5ab..6561d9cebe 100644\r\n--- a/src/include/access/xlogutils.h\r\n+++ b/src/include/access/xlogutils.h\r\n@@ -23,7 +23,7 @@ extern void XLogDropDatabase(Oid dbid);\r\n extern void XLogTruncateRelation(RelFileNode rnode, ForkNumber forkNum,\r\n \t\t\t\t\t\t\t\t BlockNumber nblocks);\r\n\r\n-extern void XLogLogMissingDir(Oid spcNode, Oid dbNode, char *path);\r\n+extern void XLogReportMissingDir(Oid spcNode, Oid dbNode, char *path);\r\n extern void XLogForgetMissingDir(Oid spcNode, Oid dbNode);\r\n extern void XLogCheckMissingDirs(void);\r\n\r\ndiff --git a/src/test/recovery/t/011_crash_recovery.pl b/src/test/recovery/t/011_crash_recovery.pl\r\nindex 748200ebb5..95eb6d26cc 100644\r\n--- a/src/test/recovery/t/011_crash_recovery.pl\r\n+++ b/src/test/recovery/t/011_crash_recovery.pl\r\n@@ -141,7 +141,7 @@ $node_master->wait_for_catchup($node_standby, 'replay',\r\n $node_standby->safe_psql('postgres', 'CHECKPOINT');\r\n\r\n # Do immediate shutdown just after a sequence of CREAT DATABASE / DROP\r\n-# DATABASE / DROP TABLESPACE. This causes CREATE DATBASE WAL records\r\n+# DATABASE / DROP TABLESPACE. This causes CREATE DATABASE WAL records\r\n\r\n",
"msg_date": "Wed, 27 Jan 2021 08:36:22 +0000",
"msg_from": "Paul Guo <guopa@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On 2021-Jan-27, Paul Guo wrote:\n\n> Here is a git diff against the previous patch. I’ll send out the new\n> rebased patches after the consensus is reached.\n\nHmm, can you post a rebased set, where the points under discussion\nare marked in XXX comments explaining what the issue is? This thread is\nlong and old ago that it's pretty hard to navigate the whole thing in\norder to find out exactly what is being questioned.\n\nI think 0004 can be pushed without further ado, since it's a clear and\nsimple fix. 0001 needs a comment about the new parameter in\nRecursiveCopy's POD documentation.\n\nAs I understand, this is a backpatchable bug-fix.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n",
"msg_date": "Sat, 27 Mar 2021 11:23:16 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On 2021/3/27, 10:23 PM, \"Alvaro Herrera\" <alvherre@2ndquadrant.com> wrote:\r\n\r\n> Hmm, can you post a rebased set, where the points under discussion\r\n> are marked in XXX comments explaining what the issue is? This thread is\r\n> long and old ago that it's pretty hard to navigate the whole thing in\r\n> order to find out exactly what is being questioned.\r\n\r\nOK. Attached are the rebased version that includes the change I discussed\r\nin my previous reply. Also added POD documentation change for RecursiveCopy,\r\nand modified the patch to use the backup_options introduced in\r\n081876d75ea15c3bd2ee5ba64a794fd8ea46d794 for tablespace mapping.\r\n\r\n> I think 0004 can be pushed without further ado, since it's a clear and\r\n> simple fix. 0001 needs a comment about the new parameter in\r\n> RecursiveCopy's POD documentation.\r\n\r\nYeah, 0004 is no any risky. One concern seemed to be the compatibility of some\r\nWAL dump/analysis tools(?). I have no idea about this. But if we do not backport\r\n0004 we do not seem to need to worry about this.\r\n\r\n> As I understand, this is a backpatchable bug-fix.\r\n\r\nYes.\r\n\r\nThanks.",
"msg_date": "Tue, 30 Mar 2021 07:12:19 +0000",
"msg_from": "Paul Guo <guopa@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Tue, Mar 30, 2021 at 12:12 PM Paul Guo <guopa@vmware.com> wrote:\n\n> On 2021/3/27, 10:23 PM, \"Alvaro Herrera\" <alvherre@2ndquadrant.com> wrote:\n>\n> > Hmm, can you post a rebased set, where the points under discussion\n> > are marked in XXX comments explaining what the issue is? This thread\n> is\n> > long and old ago that it's pretty hard to navigate the whole thing in\n> > order to find out exactly what is being questioned.\n>\n> OK. Attached are the rebased version that includes the change I discussed\n> in my previous reply. Also added POD documentation change for\n> RecursiveCopy,\n> and modified the patch to use the backup_options introduced in\n> 081876d75ea15c3bd2ee5ba64a794fd8ea46d794 for tablespace mapping.\n>\n> > I think 0004 can be pushed without further ado, since it's a clear and\n> > simple fix. 0001 needs a comment about the new parameter in\n> > RecursiveCopy's POD documentation.\n>\n> Yeah, 0004 is no any risky. One concern seemed to be the compatibility of\n> some\n> WAL dump/analysis tools(?). I have no idea about this. But if we do not\n> backport\n> 0004 we do not seem to need to worry about this.\n>\n> > As I understand, this is a backpatchable bug-fix.\n>\n> Yes.\n>\n> Thanks.\n>\n> Patch does not apply successfully,\nhttp://cfbot.cputube.org/patch_33_2161.log\n\nCan you please rebase the patch.\n\n\n-- \nIbrar Ahmed\n\nOn Tue, Mar 30, 2021 at 12:12 PM Paul Guo <guopa@vmware.com> wrote:On 2021/3/27, 10:23 PM, \"Alvaro Herrera\" <alvherre@2ndquadrant.com> wrote:\n\n> Hmm, can you post a rebased set, where the points under discussion\n> are marked in XXX comments explaining what the issue is? This thread is\n> long and old ago that it's pretty hard to navigate the whole thing in\n> order to find out exactly what is being questioned.\n\nOK. Attached are the rebased version that includes the change I discussed\nin my previous reply. 
Also added POD documentation change for RecursiveCopy,\nand modified the patch to use the backup_options introduced in\n081876d75ea15c3bd2ee5ba64a794fd8ea46d794 for tablespace mapping.\n\n> I think 0004 can be pushed without further ado, since it's a clear and\n> simple fix. 0001 needs a comment about the new parameter in\n> RecursiveCopy's POD documentation.\n\nYeah, 0004 is no any risky. One concern seemed to be the compatibility of some\nWAL dump/analysis tools(?). I have no idea about this. But if we do not backport\n0004 we do not seem to need to worry about this.\n\n> As I understand, this is a backpatchable bug-fix.\n\nYes.\n\nThanks.\nPatch does not apply successfully, http://cfbot.cputube.org/patch_33_2161.logCan you please rebase the patch. -- Ibrar Ahmed",
"msg_date": "Sat, 10 Jul 2021 00:37:57 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "Rebased.",
"msg_date": "Thu, 5 Aug 2021 10:20:44 +0000",
"msg_from": "Paul Guo <guopa@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Thu, Aug 5, 2021 at 6:20 AM Paul Guo <guopa@vmware.com> wrote:\n> Rebased.\n\nThe commit message for 0001 is not clear enough for me to understand\nwhat problem it's supposed to be fixing. The code comments aren't\nreally either. They make it sound like there's some problem with\ncopying symlinks but mostly they just talk about callbacks, which\ndoesn't really help me understand what problem we'd have if we just\ndidn't commit this (or reverted it later).\n\nI am not really convinced by Álvaro's claim that 0004 is a \"fix\"; I\nthink I'd call it an improvement. But either way I agree that could\njust be committed.\n\nI haven't analyzed 0002 and 0003 yet.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 Aug 2021 16:56:37 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 4:56 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Aug 5, 2021 at 6:20 AM Paul Guo <guopa@vmware.com> wrote:\n> > Rebased.\n>\n> The commit message for 0001 is not clear enough for me to understand\n> what problem it's supposed to be fixing. The code comments aren't\n> really either. They make it sound like there's some problem with\n> copying symlinks but mostly they just talk about callbacks, which\n> doesn't really help me understand what problem we'd have if we just\n> didn't commit this (or reverted it later).\n\nThanks for reviewing. Let me explain a bit. The patch series includes\nfour patches.\n\n0001 and 0002 are test changes for the fix (0003).\n - 0001 is the test framework change that's needed by 0002.\n - 0002 is the test for the code fix (0003).\n0003 is the code change and the commit message explains the issue in detail.\n0004 as said is a small enhancement which is a bit independent of the\nprevious patches.\n\nBasically the issue is that without the fix crash recovery might fail\nrelevant to tablespace.\nHere is the log after I run the tests in 0001/0002 without the 0003 fix.\n\n2021-08-04 10:00:42.231 CST [875] FATAL: could not create directory\n\"pg_tblspc/16385/PG_15_202107261/16390\": No such file or directory\n2021-08-04 10:00:42.231 CST [875] CONTEXT: WAL redo at 0/3001320 for\nDatabase/CREATE: copy dir base/1 to\npg_tblspc/16385/PG_15_202107261/16390\n\n\n>\n> I am not really convinced by Álvaro's claim that 0004 is a \"fix\"; I\n> think I'd call it an improvement. But either way I agree that could\n> just be committed.\n>\n> I haven't analyzed 0002 and 0003 yet.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n>\n\n\n-- \nPaul Guo (Vmware)\n\n\n",
"msg_date": "Wed, 11 Aug 2021 15:58:51 +0800",
"msg_from": "Paul Guo <paulguo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 3:59 AM Paul Guo <paulguo@gmail.com> wrote:\n> Thanks for reviewing. Let me explain a bit. The patch series includes\n> four patches.\n>\n> 0001 and 0002 are test changes for the fix (0003).\n> - 0001 is the test framework change that's needed by 0002.\n> - 0002 is the test for the code fix (0003).\n> 0003 is the code change and the commit message explains the issue in detail.\n> 0004 as said is a small enhancement which is a bit independent of the\n> previous patches.\n>\n> Basically the issue is that without the fix crash recovery might fail\n> relevant to tablespace.\n> Here is the log after I run the tests in 0001/0002 without the 0003 fix.\n\nI do understand all of this, but I (or whoever might commit this)\nneeds to also be able to understand specifically what each patch is\ndoing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 11 Aug 2021 08:59:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> The commit message for 0001 is not clear enough for me to understand\n> what problem it's supposed to be fixing. The code comments aren't\n> really either. They make it sound like there's some problem with\n> copying symlinks but mostly they just talk about callbacks, which\n> doesn't really help me understand what problem we'd have if we just\n> didn't commit this (or reverted it later).\n\n> I am not really convinced by Álvaro's claim that 0004 is a \"fix\"; I\n> think I'd call it an improvement. But either way I agree that could\n> just be committed.\n\n> I haven't analyzed 0002 and 0003 yet.\n\nI took a quick look through this:\n\n* I don't like 0001 either, though it seems like the issue is mostly\ndocumentation. sub _srcsymlink should have a comment explaining\nwhat it's doing and why. The documentation of copypath's new parameter\nseems like gobbledegook too --- I suppose it should read more like\n\"By default, copypath fails if a source item is a symlink. But if\nB<srcsymlinkfn> is provided, that subroutine is called to process any\nsymlink.\"\n\n* I'm allergic to 0002's completely undocumented changes to\npoll_query_until, especially since I don't see anything in the\npatch that actually uses them. Can't we just drop these diffs\nin PostgresNode.pm? BTW, the last error message in the patch,\ntalking about a 5-second timeout, seems wrong. With or without\nthese changes, poll_query_until's default timeout is 180 sec.\nThe actual test case might be okay other than that nit and a\ncomment typo or two.\n\n* 0003 might actually be okay. I've not read it line-by-line,\nbut it seems like it's implementing a sane solution and it's\nadequately commented.\n\n* I'm inclined to reject 0004 out of hand, because I don't\nagree with what it's doing. 
The purpose of the rmgrdesc\nfunctions is to show you what is in the WAL records, and\neverywhere else we interpret that as \"show the verbatim,\nnumeric field contents\". heapdesc.c, for example, doesn't\nattempt to look up the name of the table being operated on.\n0004 isn't adhering to that style, and aside from being\ninconsistent I'm afraid that it's adding failure modes\nwe don't want.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 Sep 2021 14:14:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "> On 24 Sep 2021, at 20:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Robert Haas <robertmhaas@gmail.com> writes:\n>> The commit message for 0001 is not clear enough for me to understand\n>> what problem it's supposed to be fixing. The code comments aren't\n>> really either. They make it sound like there's some problem with\n>> copying symlinks but mostly they just talk about callbacks, which\n>> doesn't really help me understand what problem we'd have if we just\n>> didn't commit this (or reverted it later).\n> \n>> I am not really convinced by Álvaro's claim that 0004 is a \"fix\"; I\n>> think I'd call it an improvement. But either way I agree that could\n>> just be committed.\n> \n>> I haven't analyzed 0002 and 0003 yet.\n> \n> I took a quick look through this:\n> \n> * I don't like 0001 either, though it seems like the issue is mostly\n> documentation. sub _srcsymlink should have a comment explaining\n> what it's doing and why. The documentation of copypath's new parameter\n> seems like gobbledegook too --- I suppose it should read more like\n> \"By default, copypath fails if a source item is a symlink. But if\n> B<srcsymlinkfn> is provided, that subroutine is called to process any\n> symlink.\"\n> \n> * I'm allergic to 0002's completely undocumented changes to\n> poll_query_until, especially since I don't see anything in the\n> patch that actually uses them. Can't we just drop these diffs\n> in PostgresNode.pm? BTW, the last error message in the patch,\n> talking about a 5-second timeout, seems wrong. With or without\n> these changes, poll_query_until's default timeout is 180 sec.\n> The actual test case might be okay other than that nit and a\n> comment typo or two.\n> \n> * 0003 might actually be okay. I've not read it line-by-line,\n> but it seems like it's implementing a sane solution and it's\n> adequately commented.\n> \n> * I'm inclined to reject 0004 out of hand, because I don't\n> agree with what it's doing. 
The purpose of the rmgrdesc\n> functions is to show you what is in the WAL records, and\n> everywhere else we interpret that as \"show the verbatim,\n> numeric field contents\". heapdesc.c, for example, doesn't\n> attempt to look up the name of the table being operated on.\n> 0004 isn't adhering to that style, and aside from being\n> inconsistent I'm afraid that it's adding failure modes\n> we don't want.\n\nThis patch again fails to apply (seemingly from the Perl namespace work on the\ntestcode), and needs a few updates as per the above review.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 4 Nov 2021 13:34:33 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "At Thu, 4 Nov 2021 13:34:33 +0100, Daniel Gustafsson <daniel@yesql.se> wrote in \n> This patch again fails to apply (seemingly from the Perl namespace work on the\n> testcode), and needs a few updates as per the above review.\n\nRebased the latest patch removing some of the chages.\n\n0001: (I don't remember about this, though) I don't see how to make it\nwork on Windows. Anyway the next step would be to write comments.\n\n0002: I didin't see it in details and didn't check if it finds the\nissue but it actually scceeds with the fix. The change to\npoll_query_until is removed since it doesn't seem actually used.\n\n0003: The fix. I didn't touch this.\n\n0004: Removed at all. I agree to Tom. (And I faintly remember that I\nsaid something like that.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 08 Nov 2021 17:55:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "On Mon, Nov 08, 2021 at 05:55:16PM +0900, Kyotaro Horiguchi wrote:\n\nI have quickly looked at the patch set.\n\n> 0001: (I don't remember about this, though) I don't see how to make it\n> work on Windows. Anyway the next step would be to write comments.\n\nLook at Utils.pm where we have dir_symlink, then. symlink() does not\nwork on WIN32, so we have a wrapper that uses junction points. FWIW,\nI don't much like the behavior you are enforcing in init_from_backup\nwhen coldly copying a source path, but I have not looked enough at the\npatch set to have a strong opinion about this part, either.\n\n> 0002: I didn't look at it in detail and didn't check if it finds the\n> issue but it actually succeeds with the fix. The change to\n> poll_query_until is removed since it doesn't seem actually used.\n\n+# Create tablespace\n+my $dropme_ts_master1 = PostgreSQL::Test::Utils::tempdir();\n+$dropme_ts_master1 =\nPostgreSQL::Test::Utils::perl2host($dropme_ts_master1);\n+my $dropme_ts_master2 = PostgreSQL::Test::Utils::tempdir();\n+$dropme_ts_master2 =\nPostgreSQL::Test::Utils::perl2host($dropme_ts_master2);\n+my $source_ts_master = PostgreSQL::Test::Utils::tempdir();\n+$source_ts_master =\nPostgreSQL::Test::Utils::perl2host($source_ts_master);\n+my $target_ts_master = PostgreSQL::Test::Utils::tempdir();\n+$target_ts_master =\nPostgreSQL::Test::Utils::perl2host($target_ts_master);\n\nRather than creating N temporary directories, wouldn't it be simpler to\ncreate only one, and have subdirs in it for the rest? It seems to me\nthat it would make debugging much easier. The uses of perl2host()\nseem sufficient.\n--\nMichael",
"msg_date": "Tue, 9 Nov 2021 12:51:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "At Tue, 9 Nov 2021 12:51:15 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Mon, Nov 08, 2021 at 05:55:16PM +0900, Kyotaro Horiguchi wrote:\n> \n> I have quickly looked at the patch set.\n> \n> > 0001: (I don't remember about this, though) I don't see how to make it\n> > work on Windows. Anyway the next step would be to write comments.\n> \n> Look at Utils.pm where we have dir_symlink, then. symlink() does not\n> work on WIN32, so we have a wrapper that uses junction points. FWIW,\n> I don't like much the behavior you are enforcing in init_from_backup\n> when coldly copying a source path, but I have not looked enough at the\n> patch set to have a strong opinion about this part, either.\n\nThanks for the info. If we can handle symlink on Windows, we don't\nneed to have a cold copy.\n\n> > 0002: I didn't see it in details and didn't check if it finds the\n> > issue but it actually scceeds with the fix. The change to\n> > poll_query_until is removed since it doesn't seem actually used.\n> \n> +# Create tablespace\n> +my $dropme_ts_master1 = PostgreSQL::Test::Utils::tempdir();\n> +$dropme_ts_master1 =\n> PostgreSQL::Test::Utils::perl2host($dropme_ts_master1);\n> +my $dropme_ts_master2 = PostgreSQL::Test::Utils::tempdir();\n> +$dropme_ts_master2 =\n> PostgreSQL::Test::Utils::perl2host($dropme_ts_master2);\n> +my $source_ts_master = PostgreSQL::Test::Utils::tempdir();\n> +$source_ts_master =\n> PostgreSQL::Test::Utils::perl2host($source_ts_master);\n> +my $target_ts_master = PostgreSQL::Test::Utils::tempdir();\n> +$target_ts_master =\n> PostgreSQL::Test::Utils::perl2host($target_ts_master);\n> \n> Rather than creating N temporary directories, it would be simpler to\n> create only one, and have subdirs in it for the rest? It seems to me\n> that it would make debugging much easier. The uses of perl2host()\n> seem sufficient.\n\nThanks for the suggestion. 
My eyes kept hopping around while looking at\nthat part, so I gave up looking there in more detail :p. I agree with that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 09 Nov 2021 17:05:49 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "At Tue, 09 Nov 2021 17:05:49 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Tue, 9 Nov 2021 12:51:15 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > Look at Utils.pm where we have dir_symlink, then. symlink() does not\n> > work on WIN32, so we have a wrapper that uses junction points. FWIW,\n> > I don't much like the behavior you are enforcing in init_from_backup\n> > when coldly copying a source path, but I have not looked enough at the\n> > patch set to have a strong opinion about this part, either.\n> \n> Thanks for the info. If we can handle symlinks on Windows, we don't\n> need to have a cold copy.\n\nI bumped into the good old 100-byte limit of the (v7?) tar format on\nwhich pg_basebackup depends. It is unlikely in the real world, but\nI think it is quite common in a development environment. The tablespace\ndirectory path in my dev environment was 110 characters long. It is over\nby as little as 10 bytes, but it's quite annoying to chip that number of\nbytes off the path..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 10 Nov 2021 17:14:11 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "On 2021-Nov-10, Kyotaro Horiguchi wrote:\n\n> I bumped into the good old 100-byte limit of the (v7?) tar format on\n> which pg_basebackup depends. It is unlikely in the real world but\n> I think it is quite common in a development environment. The tablespace\n> directory path in my dev environment was 110 characters long. It is over\n> by as little as 10 bytes, but it's quite annoying to chip that number of\n> bytes off the path..\n\nCan you use PostgreSQL::Test::Utils::tempdir_short() for those\ntablespaces?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 10 Nov 2021 09:14:30 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "At Wed, 10 Nov 2021 09:14:30 -0300, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> Can you use PostgreSQL::Test::Utils::tempdir_short() for those\n> tablespaces?\n\nThanks for the suggestion!\n\nIt works for a live cluster, but it doesn't work for backups, since I\nfound no way to relate a tablespace directory to a backup directory\nwithout using a symlink. One way would be to take a backup with a tentative\ntablespace directory in the short-named temporary directory, then move\nit into the backup directory. I'm going that way for now.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 11 Nov 2021 11:13:52 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "At Thu, 11 Nov 2021 11:13:52 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 10 Nov 2021 09:14:30 -0300, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> > Can you use PostgreSQL::Test::Utils::tempdir_short() for those\n> > tablespaces?\n> \n> Thanks for the suggestion!\n> \n> It works for a live cluster, but it doesn't work for backups, since I\n> found no way to relate a tablespace directory to a backup directory\n> without using a symlink. One way would be to take a backup with a tentative\n> tablespace directory in the short-named temporary directory, then move\n> it into the backup directory. I'm going that way for now.\n\nHere it is.\n\n0001 adds several routines to handle tablespace directories, and adds\ntablespace support to backup/_backup_fs.\n\nWe don't know the oid corresponding to a tablespace directory before\nactually assigning the oid to the tablespace, so we cannot name a\ntablespace directory after the oid. On the other hand, after the\ntablespace is defined, cold data files don't tell us the real name of\nthe tablespace directory for an oid or a tablespace name, unless we\nhave readlink.\n\nThe function dir_readlink added to Utils.pm is for that. Honestly, I don't\nlike the way the function works. It uses \"cmd /c \"dir /A:L $dir\"\" to\ncollect information about junctions. I'm not sure that the type label\n\"<JUNCTION>\" is invariant across locales, but at least it is shown as\n\"<JUNCTION>\" in a Japanese (CP-932) environment. I haven't actually\ntested it on Windows and msys environments yet.\n\nAssuming the availability of the function, we can give tablespace\ndirectories meaningful names.\n\nThe directory to store tablespace directories could be a temporary\ndirectory, but that way a symlink would be needed to find\nthose directories from a backup. I chose to place tablespace\ndirectories directly under the backup directory.\n\nThe attached first file is a revised (or remade) version of tablespace\nsupport for the TAP tests.\n\nThe second is the version adapted to the revised framework. (I\nconfirmed that the test actually detects the error.)\n\nThe third is not changed at all.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 12 Nov 2021 16:43:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "Just a complaint..\n\nAt Fri, 12 Nov 2021 16:43:27 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> \"<JUNCTION>\" on Japanese (CP-932) environment. I didn't actually\n> tested it on Windows and msys environment ...yet.\n\nActivePerl cannot be installed because of (perhaps) a PowerShell\nversion issue... Annoying..\n\nhttps://community.activestate.com/t/please-update-your-powershell-install-scripts/7897\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 24 Dec 2021 19:21:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "Hi,\n\nOn Fri, Dec 24, 2021 at 07:21:59PM +0900, Kyotaro Horiguchi wrote:\n> Just a complaint..\n> \n> At Fri, 12 Nov 2021 16:43:27 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > \"<JUNCTION>\" on Japanese (CP-932) environment. I didn't actually\n> > tested it on Windows and msys environment ...yet.\n> \n> ActivePerl cannot be installed because of (perhaps) a PowerShell\n> version issue... Annoying..\n> \n> https://community.activestate.com/t/please-update-your-powershell-install-scripts/7897\n\nI'm not very familiar with Windows, but maybe using Strawberry Perl instead\n([1]) would fix your problem? I think it's also quite popular and is commonly\nused to run pgBadger on Windows.\n\nOther than that, I see that the TAP tests are failing in all the environments\ndue to Perl errors. For instance:\n\n[04:06:00.848] [04:05:54] t/003_promote.pl .....\n[04:06:00.848] Dubious, test returned 2 (wstat 512, 0x200)\nhttps://api.cirrus-ci.com/v1/artifact/task/4751213722861568/tap/src/bin/pg_basebackup/tmp_check/log/regress_log_020_pg_receivewal\n# Initializing node \"standby\" from backup \"my_backup\" of node \"primary\"\nOdd number of elements in hash assignment at /tmp/cirrus-ci-build/src/bin/pg_ctl/../../../src/test/perl/PostgreSQL/Test/Cluster.pm line 996.\nUse of uninitialized value in list assignment at /tmp/cirrus-ci-build/src/bin/pg_ctl/../../../src/test/perl/PostgreSQL/Test/Cluster.pm line 996.\nUse of uninitialized value $tsp in concatenation (.) or string at /tmp/cirrus-ci-build/src/bin/pg_ctl/../../../src/test/perl/PostgreSQL/Test/Cluster.pm line 1008.\nUse of uninitialized value $tsp in concatenation (.) or string at /tmp/cirrus-ci-build/src/bin/pg_ctl/../../../src/test/perl/PostgreSQL/Test/Cluster.pm line 1009.\n\nThat's apparently the same problem on every failure reported.\n\nCan you send a fixed patchset? In the meantime I will switch the cf entry to\nWaiting on Author.\n\n\n[1] https://strawberryperl.com/\n\n\n",
"msg_date": "Sun, 16 Jan 2022 12:43:03 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "At Sun, 16 Jan 2022 12:43:03 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> Hi,\n> \n> On Fri, Dec 24, 2021 at 07:21:59PM +0900, Kyotaro Horiguchi wrote:\n> > Just a complaint..\n> > \n> > At Fri, 12 Nov 2021 16:43:27 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > > \"<JUNCTION>\" on Japanese (CP-932) environment. I didn't actually\n> > > tested it on Windows and msys environment ...yet.\n> > \n> > Active perl cannot be installed because of (perhaps) a powershell\n> > version issue... Annoying..\n> > \n> > https://community.activestate.com/t/please-update-your-powershell-install-scripts/7897\n> \n> I'm not very familiar with windows, but maybe using strawberry perl instead\n> ([1]) would fix your problem? I think it's also quite popular and is commonly\n> used to run pgBadger on Windows.\n\nThanks! I'll try it later.\n\n> Other than that, I see that the TAP tests are failing on all the environment,\n> due to Perl errors. For instance:\n> \n> [04:06:00.848] [04:05:54] t/003_promote.pl .....\n> [04:06:00.848] Dubious, test returned 2 (wstat 512, 0x200)\n> https://api.cirrus-ci.com/v1/artifact/task/4751213722861568/tap/src/bin/pg_basebackup/tmp_check/log/regress_log_020_pg_receivewal\n> # Initializing node \"standby\" from backup \"my_backup\" of node \"primary\"\n> Odd number of elements in hash assignment at /tmp/cirrus-ci-build/src/bin/pg_ctl/../../../src/test/perl/PostgreSQL/Test/Cluster.pm line 996.\n> Use of uninitialized value in list assignment at /tmp/cirrus-ci-build/src/bin/pg_ctl/../../../src/test/perl/PostgreSQL/Test/Cluster.pm line 996.\n> Use of uninitialized value $tsp in concatenation (.) or string at /tmp/cirrus-ci-build/src/bin/pg_ctl/../../../src/test/perl/PostgreSQL/Test/Cluster.pm line 1008.\n> Use of uninitialized value $tsp in concatenation (.) 
or string at /tmp/cirrus-ci-build/src/bin/pg_ctl/../../../src/test/perl/PostgreSQL/Test/Cluster.pm line 1009.\n> \n> That's apparently the same problem on every failure reported.\n> \n> Can you send a fixed patchset? In the meantime I will switch the cf entry to\n> Waiting on Author.\n\nI guess that failure came from a recent change that allows in-place\ntablespace directories. I'll check it out. Thanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 17 Jan 2022 17:24:43 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "At Mon, 17 Jan 2022 17:24:43 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Sun, 16 Jan 2022 12:43:03 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> > I'm not very familiar with windows, but maybe using strawberry perl instead\n> > ([1]) would fix your problem? I think it's also quite popular and is commonly\n> > used to run pgBadger on Windows.\n> \n> Thanks! I'll try it later.\n\nThe build is stopped by some unresolved symbols.\n\nStrawberry Perl is 5.28, which doesn't expose new_ctype, new_collate\nand new_numeric, according to the past discussion. (ActivePerl is 5.32.)\n\nhttps://www.postgresql.org/message-id/20200501134711.08750c5f%40antares.wagner.home\n\nHowever, the provided patch revealed around 70 other unresolved symbol\nerrors...\n\n# Hmm. The perl on CentOS 8 is 5.26..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 20 Jan 2022 13:25:41 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "At Sun, 16 Jan 2022 12:43:03 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> Other than that, I see that the TAP tests are failing on all the environment,\n> due to Perl errors. For instance:\n\nPerl seems to have changed its behavior for undefined hashes.\n\nIt is said that \"if (%undef_hash)\" should be false, but actually it is true\nand \"keys %undef_hash\" returns 1. Finally I had to make\nbackup_tablespaces() return a hash reference. The test of\npg_basebackup takes a backup in tar mode, which broke the test\ninfrastructure. Cluster::backup now skips the symlink adjustment when the\nbackup contains \"/base.tar\".\n\nI gave up testing on Windows in my own environment and used Cirrus CI.\n\n# However, that works only for confirming established code. The CI's TAT\n# is still too long for trial and error on unestablished code..\n\nThis version works for Unixen but still doesn't for Windows. I'm\nsearching for a fix for Windows.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 20 Jan 2022 15:07:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "At Thu, 20 Jan 2022 15:07:22 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> This version works for Unixen but still doesn't for Windows. I'm\n> searching for a fix for Windows.\n\nAnd this version works for Windows. I may have picked a wrong version\nto post: dir_readlink manipulated the target file (junction) name in the\nwrong way.\n\nCI now likes this version for all platforms.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 20 Jan 2022 17:19:04 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "At Thu, 20 Jan 2022 17:19:04 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 20 Jan 2022 15:07:22 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> CI now likes this version for all platforms.\n\nAn xlog.c refactoring that happened recently hit this.\nJust rebased on top of the change.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 02 Mar 2022 16:59:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "At Wed, 02 Mar 2022 16:59:09 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 20 Jan 2022 17:19:04 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > At Thu, 20 Jan 2022 15:07:22 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > CI now likes this version for all platforms.\n> \n> An xlog.c refactoring that happened recently hit this.\n> Just rebased on top of the change.\n\nA function added to Util.pm used perl2host, which has been removed\nrecently.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 02 Mar 2022 19:31:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "At Wed, 02 Mar 2022 19:31:24 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> A function added to Util.pm used perl2host, which has been removed\n> recently.\n\nAnd the same function contained a line that should probably have been\nremoved, which made the Windows build unhappy.\n\nThis should make all platforms in the CI happy.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 04 Mar 2022 09:10:48 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "On Fri, Mar 04, 2022 at 09:10:48AM +0900, Kyotaro Horiguchi wrote:\n> And same function contained a maybe-should-have-been-removed line\n> which makes Windows build unhappy.\n> \n> This should make all platforms in the CI happy.\n\nd6d317d has solved the issue of tablespace paths across multiple nodes\nwith the new GUC called allow_in_place_tablespaces, and it is getting\nsuccessfully used in the recovery tests as of 027_stream_regress.pl.\n\nShouldn't we rely on that rather than extending our test Perl\nmodules further? One tricky part is the emulation of readlink for junction\npoints on Windows (dir_readlink in your patch), and the root of the\nproblem is that 0003 cares about the path structure of the\ntablespaces, so we have no need, as far as I can see, for any\ndependency on link follow-up in the scope of this patch.\n\nThis means that you should be able to simplify the patch set, as we\ncould entirely drop 0001 in favor of enforcing the new dev GUC in the\nnodes created in the TAP test of 0002.\n\nSpeaking of 0002, perhaps this had better be in its own file rather\nthan extending 011_crash_recovery.pl further. 0003 looks like a good\nidea to check the consistency of the path structures created\nduring replay, and it touches the paths I'd expect it to touch, namely\nthe database and tablespace redos.\n\n+ if (!reachedConsistency)\n+ XLogForgetMissingDir(xlrec->ts_id, InvalidOid);\n+\n+ XLogFlush(record->EndRecPtr);\nNot sure I understand why this is required. A comment may be in\norder to explain the hows and the whys.\n--\nMichael",
"msg_date": "Fri, 4 Mar 2022 13:51:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "Thanks for looking at this!\n\nAt Fri, 4 Mar 2022 13:51:12 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Mar 04, 2022 at 09:10:48AM +0900, Kyotaro Horiguchi wrote:\n> > And same function contained a maybe-should-have-been-removed line\n> > which makes Windows build unhappy.\n> > \n> > This should make all platforms in the CI happy.\n> \n> d6d317d as solved the issue of tablespace paths across multiple nodes\n> with the new GUC called allow_in_place_tablespaces, and is getting\n> successfully used in the recovery tests as of 027_stream_regress.pl.\n\nThe feature allows only one tablespace directory, but the test uses\n(though I'm not sure it needs them) multiple tablespace directories, so I think\nthe feature doesn't work for the test.\n\nMaybe I'm missing something, but it doesn't use tablespaces. I see\nthat in 002_tablespace.pl, but the test uses only one tablespace\nlocation.\n\n> Shouldn't we rely on that rather than extending more our test perl\n> modules? One tricky part is the emulation of readlink for junction\n> points on Windows (dir_readlink in your patch), and the root of the\n\nYeah, I don't like that, as I said before...\n\n> problem is that 0003 cares about the path structure of the\n> tablespaces so we have no need, as far as I can see, for any\n> dependency with link follow-up in the scope of this patch.\n\nI'm not sure how this relates to 0001, but maybe I don't follow this.\n\n> This means that you should be able to simplify the patch set, as we\n> could entirely drop 0001 in favor of enforcing the new dev GUC in the\n> nodes created in the TAP test of 0002.\n\nMaybe it's possible by breaking the test into ones that need only one\ntablespace. I'll give it a try.\n\n> Speaking of 0002, perhaps this had better be in its own file rather\n> than extending more 011_crash_recovery.pl. 
0003 looks like a good\n\nOk, no problem.\n\n> idea to check after the consistency of the path structures created\n> during replay, and it touches paths I'd expect it to touch, as of\n> database and tbspace redos.\n> \n> + if (!reachedConsistency)\n> + XLogForgetMissingDir(xlrec->ts_id, InvalidOid);\n> +\n> + XLogFlush(record->EndRecPtr);\n> Not sure to understand why this is required. A comment may be in\n> order to explain the hows and the whys.\n\nIs it about XLogFlush? As I understand it, it is there to update\nminRecoveryPoint to that LSN. I'll add a comment like that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 04 Mar 2022 15:30:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "So the new framework has been dropped in this version.\nThe second test is removed as it is irrelevant to this bug.\n\nIn this version the patch is a single file that contains the test.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 07 Mar 2022 17:39:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "On Mon, Mar 7, 2022 at 3:39 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> So the new framework has been dropped in this version.\n> The second test is removed as it is irrelevant to this bug.\n>\n> In this version the patch is a single file that contains the test.\n\nThe status of this patch in the CommitFest was set to \"Waiting for\nAuthor.\" Since a new patch has been submitted since that status was\nset, I have changed it to \"Needs Review.\" Since this is now in its\n15th CommitFest, we really should get it fixed; that's kind of\nridiculous. (I am as much to blame as anyone.) It does seem to be a\nlegitimate bug.\n\nA few questions about the patch:\n\n1. Why is it OK to just skip the operation without making it up later?\n\n2. Why not instead change the code so that the operation can succeed,\nby creating the prerequisite parent directories? Do we not have enough\ninformation for that? I'm not saying that we definitely should do it\nthat way rather than this way, but I think we do take that approach in\nsome cases.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 14 Mar 2022 17:37:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "At Mon, 14 Mar 2022 17:37:40 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Mon, Mar 7, 2022 at 3:39 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > So the new framework has been dropped in this version.\n> > The second test is removed as it is irrelevant to this bug.\n> >\n> > In this version the patch is a single file that contains the test.\n> \n> The status of this patch in the CommitFest was set to \"Waiting for\n> Author.\" Since a new patch has been submitted since that status was\n> set, I have changed it to \"Needs Review.\" Since this is now in its\n> 15th CommitFest, we really should get it fixed; that's kind of\n> ridiculous. (I am as much to blame as anyone.) It does seem to be a\n> legitimate bug.\n> \n> A few questions about the patch:\n\nThanks for looking at this!\n\n> 1. Why is it OK to just skip the operation without making it up later?\n\nDoes \"it\" mean the removal of directories? It is not okay, but in the\nfirst place it is out of scope for this patch to fix that. The patch\nleaves the existing code alone and just has recovery ignore\ninvalid accesses to objects that are eventually removed.\n\nMaybe I don't understand your question..\n\n> 2. Why not instead change the code so that the operation can succeed,\n> by creating the prerequisite parent directories? Do we not have enough\n> information for that? 
I'm not saying that we definitely should do it\n> that way rather than this way, but I think we do take that approach in\n> some cases.\n\nIt was first proposed by Paul Guo [1], then changed in the very early\nstage of this thread so that it ignores failed directory creations.\nAfter that, it became aware of recovery consistency by managing an\ninvalid-access list.\n\n[1] https://www.postgresql.org/message-id/flat/20210327142316.GA32517%40alvherre.pgsql#a557bd47207a446ce206879676e0140a\n\nI think there was no strong reason for the current shape, but I\npersonally rather like the remembering-invalid-access way because it\ndoesn't dirty the data directory and it is consistent with how we\ntreat missing heap pages.\n\nI tried a slightly tweaked version (attached) of the first version and\nconfirmed that it works for the current test script. It doesn't check\nrecovery consistency, but otherwise that way also seems fine.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git a/src/backend/commands/dbcommands.c b/src/backend/commands/dbcommands.c\nindex c37e3c9a9a..28aed8d296 100644\n--- a/src/backend/commands/dbcommands.c\n+++ b/src/backend/commands/dbcommands.c\n@@ -47,6 +47,7 @@\n #include \"commands/defrem.h\"\n #include \"commands/seclabel.h\"\n #include \"commands/tablespace.h\"\n+#include \"common/file_perm.h\"\n #include \"mb/pg_wchar.h\"\n #include \"miscadmin.h\"\n #include \"pgstat.h\"\n@@ -2382,6 +2383,7 @@ dbase_redo(XLogReaderState *record)\n \t\txl_dbase_create_rec *xlrec = (xl_dbase_create_rec *) XLogRecGetData(record);\n \t\tchar\t *src_path;\n \t\tchar\t *dst_path;\n+\t\tchar\t *parent_path;\n \t\tstruct stat st;\n \n \t\tsrc_path = GetDatabasePath(xlrec->src_db_id, xlrec->src_tablespace_id);\n@@ -2401,6 +2403,41 @@ dbase_redo(XLogReaderState *record)\n \t\t\t\t\t\t\t\tdst_path)));\n \t\t}\n \n+\t\t/*\n+\t\t * It is possible that the tablespace was later dropped, but we are\n+\t\t * re-redoing database create before that. 
In that case, those\n+\t\t * directories are gone, and we do not create the symlink.\n+\t\t */\n+\t\tif (stat(dst_path, &st) < 0 && errno == ENOENT)\n+\t\t{\n+\t\t\tparent_path = pstrdup(dst_path);\n+\t\t\tget_parent_directory(parent_path);\n+\t\t\telog(WARNING, \"creating missing directory: %s\", parent_path);\n+\t\t\tif (stat(parent_path, &st) != 0 && pg_mkdir_p(parent_path, pg_dir_create_mode) != 0)\n+\t\t\t{\n+\t\t\t\tereport(WARNING,\n+\t\t\t\t\t\t(errmsg(\"can not recursively create directory \\\"%s\\\"\",\n+\t\t\t\t\t\t\t\tparent_path)));\n+\t\t\t}\n+\t\t}\n+\n+\t\t/*\n+\t\t * There's a case where the copy source directory is missing for the\n+\t\t * same reason as above. Create the empty source directory so that\n+\t\t * copydir below doesn't fail. The directory will be dropped soon by\n+\t\t * recovery.\n+\t\t */\n+\t\tif (stat(src_path, &st) < 0 && errno == ENOENT)\n+\t\t{\n+\t\t\telog(WARNING, \"creating missing copy source directory: %s\", src_path);\n+\t\t\tif (stat(src_path, &st) != 0 && pg_mkdir_p(src_path, pg_dir_create_mode) != 0)\n+\t\t\t{\n+\t\t\t\tereport(WARNING,\n+\t\t\t\t\t\t(errmsg(\"can not recursively create directory \\\"%s\\\"\",\n+\t\t\t\t\t\t\t\tsrc_path)));\n+\t\t\t}\n+\t\t}\n+\n \t\t/*\n \t\t * Force dirty buffers out to disk, to ensure source database is\n \t\t * up-to-date for the copy.\ndiff --git a/src/backend/commands/tablespace.c b/src/backend/commands/tablespace.c\nindex 40514ab550..675f578dfe 100644\n--- a/src/backend/commands/tablespace.c\n+++ b/src/backend/commands/tablespace.c\n@@ -155,8 +155,6 @@ TablespaceCreateDbspace(Oid spcNode, Oid dbNode, bool isRedo)\n \t\t\t\t/* Directory creation failed? */\n \t\t\t\tif (MakePGDirectory(dir) < 0)\n \t\t\t\t{\n-\t\t\t\t\tchar\t *parentdir;\n-\n \t\t\t\t\t/* Failure other than not exists or not in WAL replay? 
*/\n \t\t\t\t\tif (errno != ENOENT || !isRedo)\n \t\t\t\t\t\tereport(ERROR,\n@@ -169,32 +167,8 @@ TablespaceCreateDbspace(Oid spcNode, Oid dbNode, bool isRedo)\n \t\t\t\t\t * continue by creating simple parent directories rather\n \t\t\t\t\t * than a symlink.\n \t\t\t\t\t */\n-\n-\t\t\t\t\t/* create two parents up if not exist */\n-\t\t\t\t\tparentdir = pstrdup(dir);\n-\t\t\t\t\tget_parent_directory(parentdir);\n-\t\t\t\t\tget_parent_directory(parentdir);\n-\t\t\t\t\t/* Can't create parent and it doesn't already exist? */\n-\t\t\t\t\tif (MakePGDirectory(parentdir) < 0 && errno != EEXIST)\n-\t\t\t\t\t\tereport(ERROR,\n-\t\t\t\t\t\t\t\t(errcode_for_file_access(),\n-\t\t\t\t\t\t\t\t errmsg(\"could not create directory \\\"%s\\\": %m\",\n-\t\t\t\t\t\t\t\t\t\tparentdir)));\n-\t\t\t\t\tpfree(parentdir);\n-\n-\t\t\t\t\t/* create one parent up if not exist */\n-\t\t\t\t\tparentdir = pstrdup(dir);\n-\t\t\t\t\tget_parent_directory(parentdir);\n-\t\t\t\t\t/* Can't create parent and it doesn't already exist? */\n-\t\t\t\t\tif (MakePGDirectory(parentdir) < 0 && errno != EEXIST)\n-\t\t\t\t\t\tereport(ERROR,\n-\t\t\t\t\t\t\t\t(errcode_for_file_access(),\n-\t\t\t\t\t\t\t\t errmsg(\"could not create directory \\\"%s\\\": %m\",\n-\t\t\t\t\t\t\t\t\t\tparentdir)));\n-\t\t\t\t\tpfree(parentdir);\n-\n \t\t\t\t\t/* Create database directory */\n-\t\t\t\t\tif (MakePGDirectory(dir) < 0)\n+\t\t\t\t\tif (pg_mkdir_p(dir, pg_dir_create_mode) < 0)\n \t\t\t\t\t\tereport(ERROR,\n \t\t\t\t\t\t\t\t(errcode_for_file_access(),\n \t\t\t\t\t\t\t\t errmsg(\"could not create directory \\\"%s\\\": %m\",",
"msg_date": "Tue, 15 Mar 2022 15:09:26 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "On 2022-Mar-04, Michael Paquier wrote:\n\n> d6d317d has solved the issue of tablespace paths across multiple nodes\n> with the new GUC called allow_in_place_tablespaces, and is getting\n> successfully used in the recovery tests as of 027_stream_regress.pl.\n\nOK, but that means that the test suite is now not backpatchable. The\nimplication here is that either we're going to commit the fix without\nany tests at all on older branches, or that we're going to fix it only\nin branch master. Are you thinking that it's okay to leave this bug\nunfixed in older branches? That seems embarrassing.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No me acuerdo, pero no es cierto. No es cierto, y si fuera cierto,\n no me acuerdo.\" (Augusto Pinochet a una corte de justicia)\n\n\n",
"msg_date": "Mon, 21 Mar 2022 12:24:38 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "I had a look at this latest version of the patch, and found some things\nto tweak. Attached is v21 with three main changes from Kyotaro's v20:\n\n1. the XLogFlush is only done if consistent state has not been reached.\nAs I understand, it's not needed in normal mode. (In any case, if we do\ncall XLogFlush in normal mode, what it does is not advance the recovery\npoint, so the comment would be incorrect.)\n\n2. use %u to print OIDs rather than %d\n\n3. I changed the warning message wording to this:\n\n+ ereport(WARNING,\n+ (errmsg(\"skipping replay of database creation WAL record\"),\n+ errdetail(\"The source database directory \\\"%s\\\" was not found.\",\n+ src_path),\n+ errhint(\"A future WAL record that removes the directory before reaching consistent mode is expected.\")));\n\nI also renamed the function XLogReportMissingDir to\nXLogRememberMissingDir (which matches the \"forget\" part) and changed the\nDEBUG2 messages in that function to DEBUG1 (all the calls in other\nfunctions remain DEBUG2, because ISTM they are not as interesting).\nFinally, I made the TAP test search the WARNING line in the log.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No tengo por qué estar de acuerdo con lo que pienso\"\n (Carlos Caszeli)",
"msg_date": "Mon, 21 Mar 2022 19:43:52 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On 2022-Mar-14, Robert Haas wrote:\n\n> 2. Why not instead change the code so that the operation can succeed,\n> by creating the prerequisite parent directories? Do we not have enough\n> information for that? I'm not saying that we definitely should do it\n> that way rather than this way, but I think we do take that approach in\n> some cases.\n\nIt seems we can choose freely between these two implementations -- I\nmean I don't see any upsides or downsides to either one.\n\nThe current one has the advantage that it never makes the datadir\n\"dirty\", to use Kyotaro's term. It verifies that the creation/drop form\na pair. A possible downside is that if there's a bug, we could end up\nwith a spurious PANIC at the end of recovery, and no way to recover.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 21 Mar 2022 20:03:07 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On 2022-Mar-21, Alvaro Herrera wrote:\n\n> I had a look at this latest version of the patch, and found some things\n> to tweak. Attached is v21 with three main changes from Kyotaro's v20:\n\nPushed this, backpatching to 14 and 13. It would have been good to\nbackpatch further, but there's an (textually trivial) merge conflict\nrelated to commit e6d8069522c8. Because that commit conceptually\ntouches the same area that this bugfix is about, I'm not sure that\nbackpatching further without a lot more thought is wise -- particularly\nso when there's no way to automate the test in branches older than\nmaster.\n\nThis is quite annoying, considering that the bug was reported shortly\nbefore 12 went into beta.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"If you have nothing to say, maybe you need just the right tool to help you\nnot say it.\" (New York Times, about Microsoft PowerPoint)\n\n\n",
"msg_date": "Fri, 25 Mar 2022 13:26:05 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "At Fri, 25 Mar 2022 13:26:05 +0100, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> On 2022-Mar-21, Alvaro Herrera wrote:\n> \n> > I had a look at this latest version of the patch, and found some things\n> > to tweak. Attached is v21 with three main changes from Kyotaro's v20:\n> \n> Pushed this, backpatching to 14 and 13. It would have been good to\n> backpatch further, but there's an (textually trivial) merge conflict\n> related to commit e6d8069522c8. Because that commit conceptually\n> touches the same area that this bugfix is about, I'm not sure that\n> backpatching further without a lot more thought is wise -- particularly\n> so when there's no way to automate the test in branches older than\n> master.\n\nThanks for committing.\n\n> This is quite annoying, considering that the bug was reported shortly\n> before 12 went into beta.\n\nSure. I'm going to look into that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 28 Mar 2022 10:01:05 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "On Mon, Mar 28, 2022 at 2:01 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Fri, 25 Mar 2022 13:26:05 +0100, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in\n> > Pushed this, backpatching to 14 and 13. It would have been good to\n> > backpatch further, but there's an (textually trivial) merge conflict\n> > related to commit e6d8069522c8. Because that commit conceptually\n> > touches the same area that this bugfix is about, I'm not sure that\n> > backpatching further without a lot more thought is wise -- particularly\n> > so when there's no way to automate the test in branches older than\n> > master.\n\nJust a thought: we could consider back-patching\nallow_in_place_tablespaces, after a little while, if we're happy with\nhow that is working out, if it'd be useful for verifying bug fixes in\nback branches. It's non-end-user-facing testing infrastructure.\n\n\n",
"msg_date": "Mon, 28 Mar 2022 14:34:44 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "At Mon, 28 Mar 2022 14:34:44 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Mon, Mar 28, 2022 at 2:01 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > At Fri, 25 Mar 2022 13:26:05 +0100, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in\n> > > Pushed this, backpatching to 14 and 13. It would have been good to\n> > > backpatch further, but there's an (textually trivial) merge conflict\n> > > related to commit e6d8069522c8. Because that commit conceptually\n> > > touches the same area that this bugfix is about, I'm not sure that\n> > > backpatching further without a lot more thought is wise -- particularly\n> > > so when there's no way to automate the test in branches older than\n> > > master.\n> \n> Just a thought: we could consider back-patching\n> allow_in_place_tablespaces, after a little while, if we're happy with\n> how that is working out, if it'd be useful for verifying bug fixes in\n> back branches. It's non-end-user-facing testing infrastructure.\n\nI appreciate if we accept that. The patch is simple. And it now has\nthe clear use-case for back-patching.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 28 Mar 2022 15:25:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "At Mon, 28 Mar 2022 10:01:05 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 25 Mar 2022 13:26:05 +0100, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> > Pushed this, backpatching to 14 and 13. It would have been good to\n> > backpatch further, but there's an (textually trivial) merge conflict\n> > related to commit e6d8069522c8. Because that commit conceptually\n> > touches the same area that this bugfix is about, I'm not sure that\n> > backpatching further without a lot more thought is wise -- particularly\n> > so when there's no way to automate the test in branches older than\n> > master.\n> \n> Thanks for committing.\n> \n> > This is quite annoying, considering that the bug was reported shortly\n> > before 12 went into beta.\n> \n> Sure. I'm going to look into that.\n\nThis is a preparatory patch and tentative (yes, it's just tentative)\ntest. This is made for 12 but applies with some warnings to 10-11.\n\n(Hope the attachments are attached as \"attachment\", not \"inline\".)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 28 Mar 2022 17:20:42 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 8:26 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Mar-21, Alvaro Herrera wrote:\n> > I had a look at this latest version of the patch, and found some things\n> > to tweak. Attached is v21 with three main changes from Kyotaro's v20:\n>\n> Pushed this, backpatching to 14 and 13. It would have been good to\n> backpatch further, but there's an (textually trivial) merge conflict\n> related to commit e6d8069522c8. Because that commit conceptually\n> touches the same area that this bugfix is about, I'm not sure that\n> backpatching further without a lot more thought is wise -- particularly\n> so when there's no way to automate the test in branches older than\n> master.\n>\n> This is quite annoying, considering that the bug was reported shortly\n> before 12 went into beta.\n\nI think that the warnings this patch issues may cause some unnecessary\nend-user alarm. It seems to me that they are basically warning about a\nsituation that is unusual but not scary. Isn't the appropriate level\nfor that DEBUG1, maybe without the errhint?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 28 Mar 2022 10:37:04 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Mon, Mar 21, 2022 at 3:02 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > 2. Why not instead change the code so that the operation can succeed,\n> > by creating the prerequisite parent directories? Do we not have enough\n> > information for that? I'm not saying that we definitely should do it\n> > that way rather than this way, but I think we do take that approach in\n> > some cases.\n>\n> It seems we can choose freely between these two implementations -- I\n> mean I don't see any upsides or downsides to either one.\n\nWhat got committed here feels inconsistent to me. Suppose we have a\ncheckpoint, and then a series of operations that touch a tablespace,\nand then a drop database and drop tablespace. If the first operation\nhappens to be CREATE DATABASE, then this patch is going to fix it by\nskipping the operation. However, if the first operation happens to be\nalmost anything else, the way it's going to reference the dropped\ntablespace is via a block reference in a WAL record of a wide variety\nof types. That's going to result in a call to\nXLogReadBufferForRedoExtended() which will call\nXLogReadBufferExtended() which will do smgrcreate(smgr, forknum, true)\nwhich will in turn call TablespaceCreateDbspace() to fill in all the\nmissing directories.\n\nI don't think that's very good. It would be reasonable to decide that\nwe're never going to create the missing directories and instead just\nremember that they were not found so we can do a cross check. It's\nalso reasonable to just create the directories on the fly. But doing a\nmix of those systems doesn't really seem like the right idea -\nparticularly because it means that the cross-check system is probably\nnot very effective at finding actual problems in the code.\n\nAm I missing something here?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 28 Mar 2022 12:17:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "At Mon, 28 Mar 2022 10:37:04 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Fri, Mar 25, 2022 at 8:26 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > On 2022-Mar-21, Alvaro Herrera wrote:\n> > > I had a look at this latest version of the patch, and found some things\n> > > to tweak. Attached is v21 with three main changes from Kyotaro's v20:\n> >\n> > Pushed this, backpatching to 14 and 13. It would have been good to\n> > backpatch further, but there's an (textually trivial) merge conflict\n> > related to commit e6d8069522c8. Because that commit conceptually\n> > touches the same area that this bugfix is about, I'm not sure that\n> > backpatching further without a lot more thought is wise -- particularly\n> > so when there's no way to automate the test in branches older than\n> > master.\n> >\n> > This is quite annoying, considering that the bug was reported shortly\n> > before 12 went into beta.\n> \n> I think that the warnings this patch issues may cause some unnecessary\n> end-user alarm. It seems to me that they are basically warning about a\n> situation that is unusual but not scary. Isn't the appropriate level\n> for that DEBUG1, maybe without the errhint?\n\nlog_invalid_page reports missing pages with DEBUG1 before reaching\nconsistency. And since missing directory is not an issue if all of\nthose reports are forgotten until reaching consistency, DEBUG1 sounds\nreasonable. Maybe we lower the DEBUG1 messages to DEBUG2 in\nXLogRememberMissingDir?\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 29 Mar 2022 10:34:42 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "On Mon, Mar 28, 2022 at 02:34:44PM +1300, Thomas Munro wrote:\n> Just a thought: we could consider back-patching\n> allow_in_place_tablespaces, after a little while, if we're happy with\n> how that is working out, if it'd be useful for verifying bug fixes in\n> back branches. It's non-end-user-facing testing infrastructure.\n\n+1 for a backpatch on that. That would be useful.\n--\nMichael",
"msg_date": "Tue, 29 Mar 2022 10:57:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "At Mon, 28 Mar 2022 12:17:50 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Mon, Mar 21, 2022 at 3:02 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > > 2. Why not instead change the code so that the operation can succeed,\n> > > by creating the prerequisite parent directories? Do we not have enough\n> > > information for that? I'm not saying that we definitely should do it\n> > > that way rather than this way, but I think we do take that approach in\n> > > some cases.\n> >\n> > It seems we can choose freely between these two implementations -- I\n> > mean I don't see any upsides or downsides to either one.\n> \n> What got committed here feels inconsistent to me. Suppose we have a\n> checkpoint, and then a series of operations that touch a tablespace,\n> and then a drop database and drop tablespace. If the first operation\n> happens to be CREATE DATABASE, then this patch is going to fix it by\n> skipping the operation. However, if the first operation happens to be\n> almost anything else, the way it's going to reference the dropped\n> tablespace is via a block reference in a WAL record of a wide variety\n> of types. That's going to result in a call to\n> XLogReadBufferForRedoExtended() which will call\n> XLogReadBufferExtended() which will do smgrcreate(smgr, forknum, true)\n> which will in turn call TablespaceCreateDbspace() to fill in all the\n> missing directories.\n\nRight. I thought that recovery avoids that but that's wrong. This\nbehavior creates a bare (non-linked) directory within pg_tblspc. The\ndirectory would disappear soon if recovery proceeds to the consistency\npoint, though.\n\n> I don't think that's very good. It would be reasonable to decide that\n> we're never going to create the missing directories and instead just\n> remember that they were not found so we can do a cross check. It's\n> also reasonable to just create the directories on the fly. But doing a\n> mix of those systems doesn't really seem like the right idea -\n> particularly because it means that the cross-check system is probably\n> not very effective at finding actual problems in the code.\n> \n> Am I missing something here?\n\nNo. I agree that mixing them is not good. On the other hand we are\nalready doing that in heapam. AFAICS sometimes it avoids creating a\nnew page but sometimes creates it. But I don't mean to use that fact\nto justify this patch doing the same, or to argue against it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 29 Mar 2022 13:55:35 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "On 2022-Mar-29, Kyotaro Horiguchi wrote:\n\n> > That's going to result in a call to\n> > XLogReadBufferForRedoExtended() which will call\n> > XLogReadBufferExtended() which will do smgrcreate(smgr, forknum, true)\n> > which will in turn call TablespaceCreateDbspace() to fill in all the\n> > missing directories.\n> \n> Right. I thought that recovery avoids that but that's wrong. This\n> behavior creates a bare (non-linked) directly within pg_tblspc. The\n> directory would dissapear soon if recovery proceeds to the consistency\n> point, though.\n\nHmm, this is not good.\n\n> No. I agree that mixing them is not good. On the other hand we\n> already doing that by heapam. AFAICS sometimes it avoid creating a\n> new page but sometimes creates it. But I don't mean to use the fact\n> for justifying this patch to do that, or denying to do that.\n\nI think we should revert this patch and do it again using the other\napproach: create a stub directory during recovery that can be deleted\nlater.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Porque francamente, si para saber manejarse a uno mismo hubiera que\nrendir examen... ¿Quién es el machito que tendría carnet?\" (Mafalda)\n\n\n",
"msg_date": "Tue, 29 Mar 2022 13:37:34 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Tue, Mar 29, 2022 at 7:37 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I think we should revert this patch and do it again using the other\n> approach: create a stub directory during recovery that can be deleted\n> later.\n\nI'm fine with that approach, but I'd like to ask that we proceed\nexpeditiously, because I have another patch that I want to commit that\ntouches this area. I can commit to helping with whatever we decide to\ndo here, but I don't want to keep that patch on ice while we figure it\nout and then have it miss the release.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 29 Mar 2022 08:45:13 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On 2022-Mar-29, Robert Haas wrote:\n\n> I'm fine with that approach, but I'd like to ask that we proceed\n> expeditiously, because I have another patch that I want to commit that\n> touches this area. I can commit to helping with whatever we decide to\n> do here, but I don't want to keep that patch on ice while we figure it\n> out and then have it miss the release.\n\nOK, this is a bug that's been open for years. A fix can be committed\nafter the feature freeze anyway.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 29 Mar 2022 15:28:26 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Tue, Mar 29, 2022 at 9:28 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> OK, this is a bug that's been open for years. A fix can be committed\n> after the feature freeze anyway.\n\n+1\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 29 Mar 2022 09:31:42 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "At Tue, 29 Mar 2022 09:31:42 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Tue, Mar 29, 2022 at 9:28 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > OK, this is a bug that's been open for years. A fix can be committed\n> > after the feature freeze anyway.\n> \n> +1\n\nBy the way, may I ask how do we fix this? The existing recovery code\nalready generates just-to-be-delete files in a real directory in\npg_tblspc sometimes, and elsewise skip applying WAL records on\nnonexistent heap pages. It is the \"mixed\" way.\n\n1. stop XLogReadBufferForRedo creating a file in nonexistent\n directories then remember the failure (I'm not sure how big the\n impact is.)\n\n\n2. unconditionally create all objects required for recovery to proceed..\n 2.1 and ignore the failures.\n 2.2 and remember the failures.\n\n3. Any other?\n\n2 needs to create a real directory in pg_tblspc. So 1?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 01 Apr 2022 13:21:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "On Fri, Apr 1, 2022 at 12:22 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> By the way, may I ask how do we fix this? The existing recovery code\n> already generates just-to-be-delete files in a real directory in\n> pg_tblspc sometimes, and elsewise skip applying WAL records on\n> nonexistent heap pages. It is the \"mixed\" way.\n\nCan you be more specific about where we have each behavior now?\n\n> 1. stop XLogReadBufferForRedo creating a file in nonexistent\n> directories then remember the failure (I'm not sure how big the\n> impact is.)\n>\n> 2. unconditionally create all objects required for recovery to proceed..\n> 2.1 and igore the failures.\n> 2.2 and remember the failures.\n>\n> 3. Any other?\n>\n> 2 needs to create a real directory in pg_tblspc. So 1?\n\nI think we could either do 1 or 2. My intuition is that getting 2\nworking would be less scary and more likely to be something we would\nfeel comfortable back-patching, but 1 is probably a better design in\nthe long term. However, I might be wrong -- that's just a guess.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 1 Apr 2022 14:51:58 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "At Fri, 1 Apr 2022 14:51:58 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Fri, Apr 1, 2022 at 12:22 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > By the way, may I ask how do we fix this? The existing recovery code\n> > already generates just-to-be-delete files in a real directory in\n> > pg_tblspc sometimes, and elsewise skip applying WAL records on\n> > nonexistent heap pages. It is the \"mixed\" way.\n> \n> Can you be more specific about where we have each behavior now?\n\nThey're done in XLogReadBufferExtended.\n\nThe second behavior happens here,\nxlogutils.c:\n>\t\t/* hm, page doesn't exist in file */\n>\t\tif (mode == RBM_NORMAL)\n>\t\t{\n>\t\t\tlog_invalid_page(rnode, forknum, blkno, false);\n+\t\t\tAssert(0);\n>\t\t\treturn InvalidBuffer;\n\nWith the assertion, 015_promotion_pages.pl crashes. This prevents page\ncreation and the following redo action on the page.\n\nThe first behavior is described as the following comment:\n\n>\t * Create the target file if it doesn't already exist. This lets us cope\n>\t * if the replay sequence contains writes to a relation that is later\n>\t * deleted. (The original coding of this routine would instead suppress\n>\t * the writes, but that seems like it risks losing valuable data if the\n>\t * filesystem loses an inode during a crash. Better to write the data\n>\t * until we are actually told to delete the file.)\n>\t */\n>\tsmgrcreate(smgr, forknum, true);\n\nWithout the smgrcreate call, make check-world fails due to missing\nfiles for FSM and visibility map, and init forks, though it's a bit\ndoubtful that those cases fall into the so-called category of \"creates\ninexistent objects by redo access\". In a few places, XLOG_FPI records\nare used to create the first page of a file including main and init\nforks. But I don't see a case of main fork during make check-world.\n\n# Most of the failure cases happen as a standby freeze. I was a bit\n# annoyed that make check-world doesn't tell what module is\n# currently being tested. In that case I had to deduce it from the\n# sequence of preceding script names, but if the first TAP script of a\n# module freezes, I had to use ps to find the module..\n\n\n> > 1. stop XLogReadBufferForRedo creating a file in nonexistent\n> > directories then remember the failure (I'm not sure how big the\n> > impact is.)\n> >\n> > 2. unconditionally create all objects required for recovery to proceed..\n> > 2.1 and ignore the failures.\n> > 2.2 and remember the failures.\n> >\n> > 3. Any other?\n> >\n> > 2 needs to create a real directory in pg_tblspc. So 1?\n> \n> I think we could either do 1 or 2. My intuition is that getting 2\n> working would be less scary and more likely to be something we would\n> feel comfortable back-patching, but 1 is probably a better design in\n> the long term. However, I might be wrong -- that's just a guess.\n\nThanks. I forgot to mention in the previous mail (but mentioned\nsomewhere upthread) that if we take 2, there's no way other than\ncreating a real directory in pg_tblspc during recovery. I don't think\nit is neat.\n\nI haven't found how the patch caused creation of a relation file that\nis to be removed soon. However, I find that v19 patch fails by maybe\ndue to some change in Cluster.pm. It takes a bit more time to check\nthat..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 04 Apr 2022 17:29:48 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "At Mon, 04 Apr 2022 17:29:48 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> I haven't found how the patch caused creation of a relation file that\n> is to be removed soon. However, I find that v19 patch fails by maybe\n> due to some change in Cluster.pm. It takes a bit more time to check\n> that..\n\nI was a bit away, of course the wal-logged create database interferes\nwith the patch here. But I haven't found why it stops creating\ndatabase directory under pg_tblspc.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 04 Apr 2022 17:54:49 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "On Mon, Apr 4, 2022 at 2:25 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 04 Apr 2022 17:29:48 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > I haven't found how the patch caused creation of a relation file that\n> > is to be removed soon. However, I find that v19 patch fails by maybe\n> > due to some change in Cluster.pm. It takes a bit more time to check\n> > that..\n>\n> I was a bit away, of course the wal-logged create database interferes\n> with the patch here. But I haven't found why it stops creating\n> database directory under pg_tblspc.\n\nI did not understand what is the exact problem here, but the database\ndirectory and the version file are created under the default\ntablespace of the target database. However, other than the default\ntablespace of the database, the database directory will be created\nalong with the smgrcreate() so that we do not create an unnecessary\ndirectory under the tablespace where we do not have any data to be\ncopied.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 4 Apr 2022 21:14:27 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "At Mon, 4 Apr 2022 21:14:27 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Mon, Apr 4, 2022 at 2:25 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Mon, 04 Apr 2022 17:29:48 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > > I haven't found how the patch caused creation of a relation file that\n> > > is to be removed soon. However, I find that v19 patch fails by maybe\n> > > due to some change in Cluster.pm. It takes a bit more time to check\n> > > that..\n> >\n> > I was a bit away, of course the wal-logged create database interfares\n> > with the patch here. But I haven't found that why it stops creating\n> > database directory under pg_tblspc.\n> \n> I did not understand what is the exact problem here, but the database\n> directory and the version file are created under the default\n> tablespace of the target database. However, other than the default\n> tablespace of the database, the database directory will be created\n> along with the smgrcreate() so that we do not create an unnecessary\n> directory under the tablespace where we do not have any data to be\n> copied.\n\nThanks. Yeah, I suspected something like that, but I didn't find a\ndifference in the code I suspected to be related to it, and it was\nwrong. I took wrong steps trying to reveal that state and faced the\nwrong error message. With the correct steps, I could see that\nStorage/CREATE creates pg_tblspc/<directory>.\n\nSo, if we create the missing tablespace directory, we have no way\nother than creating it directly in pg_tblspc, which violates the\nrule that there shouldn't be a real directory in pg_tblspc (when\nallow_in_place_tablespaces is false).\n\nSo, I have the following points in my mind for now.\n\n- We create the directory \"since we know it is just tentative state\".\n\n- Then, check that there is no directory in pg_tblspc when reaching\n consistency when allow_in_place_tablespaces is false.\n\n- Leave the log_invalid_page() mechanism alone, as it always results\n in a corrupt page if a differential WAL record is applied on a newly\n created page that should have existed.\n\nHowever, while working on it, I found that recovery faces\nmissing tablespace directories *after* reaching consistency. I'm\nexamining that further.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 05 Apr 2022 11:16:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "At Tue, 05 Apr 2022 11:16:44 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> So, I have the following points in my mind for now.\n> \n> - We create the directory \"since we know it is just tentative state\".\n> \n> - Then, check that no directory in pg_tblspc when reaching consistency\n> when allow_in_place_tablespaces is false.\n> \n> - Leave the log_invalid_page() mechanism alone as it is always result\n> in a corrpt page if a differential WAL record is applied on a newly\n> created page that should have been exist.\n> \n> However, while working on it, I found that I found that recovery faces\n> missing tablespace directories *after* reaching consistency. I'm\n> examining that further.\n\nOkay, it was my thinko. But I faced another obstacle.\n\nThis is the first cut of the above.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 05 Apr 2022 16:38:06 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "At Tue, 05 Apr 2022 16:38:06 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > However, while working on it, I found that I found that recovery faces\n> > missing tablespace directories *after* reaching consistency. I'm\n> > examining that further.\n> \n> Okay, it was my thinko. But I faced another obstacle.\n\nI forgot to delete the second sentence. Please ignore it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 05 Apr 2022 16:54:47 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "At Tue, 05 Apr 2022 16:38:06 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Tue, 05 Apr 2022 11:16:44 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > So, I have the following points in my mind for now.\n> > \n> > - We create the directory \"since we know it is just tentative state\".\n> > \n> > - Then, check that no directory in pg_tblspc when reaching consistency\n> > when allow_in_place_tablespaces is false.\n> > \n> > - Leave the log_invalid_page() mechanism alone as it is always result\n> > in a corrpt page if a differential WAL record is applied on a newly\n> > created page that should have been exist.\n> > \n> > However, while working on it, I found that I found that recovery faces\n> > missing tablespace directories *after* reaching consistency. I'm\n> > examining that further.\n> \n> Okay, it was my thinko.\n> \n> This is the first cut of the above.\n\nIt had an unused variable for Windows.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 05 Apr 2022 17:18:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "Not a review, just a preparatory rebase across some trivially\nconflicting changes. I also noticed that\nsrc/test/recovery/t/031_recovery_conflict.pl, which was added two days\nafter v23 was sent, and which uses allow_in_place_tablespaces, bails out\nbecause of the checks introduced by this patch, so I made the check\nroutine do nothing in that case.\n\nAnyway, here's v24.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La conclusión que podemos sacar de esos estudios es que\nno podemos sacar ninguna conclusión de ellos\" (Tanenbaum)",
"msg_date": "Wed, 13 Jul 2022 18:43:45 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "Here's a couple of fixups. 0001 is the same as before. In 0002 I think\nCheckTablespaceDirectory ends up easier to read if we split out the test\nfor validity of the link. Looking at that again, I think we don't need\nto piggyback on ignore_invalid_pages, which is already a stretch, so\nlet's not -- instead we can use allow_in_place_tablespaces if users need\na workaround. So that's 0003 (this bit needs more than zero docs,\nhowever.)\n\n0004 is straightforward: let's check for bad directories before logging\nabout consistent state.\n\nAfter all this, I'm not sure what to think of dbase_redo. At line 3102,\nis the directory supposed to exist or not? I'm confused as to what is\nthe expected state at that point. I rewrote this, but now I think my\nrewrite continues to be confusing, so I'll have to think more about it.\n\nAnother aspect are the tests. Robert described a scenario where the\npreviously committed version of this patch created trouble. Do we have\na test case to cover that problematic case? I think we should strive to\ncover it, if possible.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"The eagle never lost so much time, as\nwhen he submitted to learn of the crow.\" (William Blake)",
"msg_date": "Thu, 14 Jul 2022 23:47:40 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "At Thu, 14 Jul 2022 23:47:40 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> Here's a couple of fixups. 0001 is the same as before. In 0002 I think\n\nThanks!\n\n+ \t\tif (!S_ISLNK(st.st_mode))\n+ #else\n+ \t\tif (!pgwin32_is_junction(path))\n+ #endif\n+ \t\t\telog(ignore_invalid_pages ? WARNING : PANIC,\n+ \t\t\t\t \"real directory found in pg_tblspc directory: %s\", de->d_name);\n\nA regular file with an oid-name also causes this error. Doesn't\nsomething like \"unexpected non-(sym)link entry...\" work?\n\n> CheckTablespaceDirectory ends up easier to read if we split out the test\n> for validity of the link. Looking at that again, I think we don't need\n> to piggyback on ignore_invalid_pages, which is already a stretch, so\n> let's not -- instead we can use allow_in_place_tablespaces if users need\n> a workaround. So that's 0003 (this bit needs more than zero docs,\n> however.)\n\nThe result of 0003 looks good.\n\n0002:\n+is_path_tslink(const char *path)\n\nWhat does the \"ts\" of tslink stand for? If it stands for tablespace, the\nfunction is not specific to table spaces. We already have \n\n+\t\t\t\t\terrmsg(\"could not stat file \\\"%s\\\": %m\", path));\n\nI'm not sure we need such correctness, but what is failing there is\nlstat. I found similar code in two places in the backend and one place\nin the frontend. So couldn't it be moved to /common and given a more\ngeneric name?\n\n-\tdir = AllocateDir(tblspc_path);\n-\twhile ((de = ReadDir(dir, tblspc_path)) != NULL)\n+\tdir = AllocateDir(\"pg_tblspc\");\n+\twhile ((de = ReadDir(dir, \"pg_tblspc\")) != NULL)\n\nxlog.c uses the macro XLOGDIR. Why don't we define TBLSPCDIR?\n\n-\t\tfor (p = de->d_name; *p && isdigit(*p); p++);\n-\t\tif (*p)\n+\t\tif (strspn(de->d_name, \"0123456789\") != strlen(de->d_name))\n \t\t\tcontinue;\n\nThe pattern \"strspn != strlen\" looks kind of remote, or somewhat\npedantic..\n\n+\t\tchar\tpath[MAXPGPATH + 10];\n..\n-\t\tsnprintf(path, MAXPGPATH, \"%s/%s\", tblspc_path, de->d_name);\n+\t\tsnprintf(path, sizeof(path), \"pg_tblspc/%s\", de->d_name);\n\nI don't think we need the extra 10 bytes. A bit paranoid, but we can\ncheck the return value to confirm the d_name is fully stored in the\nbuffer.\n\n> 0004 is straightforward: let's check for bad directories before logging\n> about consistent state.\n\nI was about to write a comment to do this when looking at 0001.\n\n> After all this, I'm not sure what to think of dbase_redo. At line 3102,\n> is the directory supposed to exist or not? I'm confused as to what is\n> the expected state at that point. I rewrote this, but now I think my\n> rewrite continues to be confusing, so I'll have to think more about it.\n\nI'm not sure where l3102 exactly points, but haven't we chosen to create\neverything required to keep recovery going, whether it is supposed to\nexist or not?\n\n> Another aspect are the tests. Robert described a scenario where the\n> previously committed version of this patch created trouble. Do we have\n> a test case to cover that problematic case? I think we should strive to\n> cover it, if possible.\n\nI couldn't recall that clearly and failed to dig it out from the thread,\nbut doesn't the \"creating everything needed\" strategy naturally cover\nthat case? We could add that test, but it seems to me a little\ncumbersome to confirm the test correctly detects that case..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 15 Jul 2022 16:30:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "On 2022-Jul-15, Kyotaro Horiguchi wrote:\n\n> At Thu, 14 Jul 2022 23:47:40 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> > Here's a couple of fixups. 0001 is the same as before. In 0002 I think\n> \n> Thanks!\n> \n> + \t\tif (!S_ISLNK(st.st_mode))\n> + #else\n> + \t\tif (!pgwin32_is_junction(path))\n> + #endif\n> + \t\t\telog(ignore_invalid_pages ? WARNING : PANIC,\n> + \t\t\t\t \"real directory found in pg_tblspc directory: %s\", de->d_name);\n> \n> A regular file with an oid-name also causes this error. Doesn't\n> something like \"unexpected non-(sym)link entry...\" work?\n\nHmm, good point. I also wonder if we need to cater for using the term\n\"junction point\" rather than \"symlink\" when under Windows.\n\n> > CheckTablespaceDirectory ends up easier to read if we split out the test\n> > for validity of the link. Looking at that again, I think we don't need\n> > to piggyback on ignore_invalid_pages, which is already a stretch, so\n> > let's not -- instead we can use allow_in_place_tablespaces if users need\n> > a workaround. So that's 0003 (this bit needs more than zero docs,\n> > however.)\n> \n> The result of 0003 looks good.\n\nGreat, will merge.\n\n> 0002:\n> +is_path_tslink(const char *path)\n> \n> What the \"ts\" of tslink stands for? If it stands for tablespace, the\n> function is not specific for table spaces.\n\nOh, of course. \n\n> We already have \n> \n> +\t\t\t\t\terrmsg(\"could not stat file \\\"%s\\\": %m\", path));\n> \n> I'm not sure we need such correctness, but what is failing there is\n> lstat.\n\nI'll have a look at what we use for lstat failures in other places.\n\n> I found similar codes in two places in backend and one place\n> in frontend. So couldn't it be moved to /common and have a more\n> generic name?\n\nI'll have a look at those. I had the same instinct initially ...\n\n> -\tdir = AllocateDir(tblspc_path);\n> -\twhile ((de = ReadDir(dir, tblspc_path)) != NULL)\n> +\tdir = AllocateDir(\"pg_tblspc\");\n> +\twhile ((de = ReadDir(dir, \"pg_tblspc\")) != NULL)\n> \n> xlog.c uses the macro XLOGDIR. Why don't we define TBLSPCDIR?\n\nOh yes, let's do that. I'd even backpatch that, to avoid a future\nbackpatching gotcha.\n\n> -\t\tfor (p = de->d_name; *p && isdigit(*p); p++);\n> -\t\tif (*p)\n> +\t\tif (strspn(de->d_name, \"0123456789\") != strlen(de->d_name))\n> \t\t\tcontinue;\n> \n> The pattern \"strspn != strlen\" looks kind of remote, or somewhat\n> pedantic..\n> \n> +\t\tchar\tpath[MAXPGPATH + 10];\n> ..\n> -\t\tsnprintf(path, MAXPGPATH, \"%s/%s\", tblspc_path, de->d_name);\n> +\t\tsnprintf(path, sizeof(path), \"pg_tblspc/%s\", de->d_name);\n> \n> I don't think we need the extra 10 bytes.\n\nI forgot to mention this, but I just copied these bits from some other\nplace that processes pg_tblspc entries. It seemed to me that the\nbodiless for loop was a bit too suspicious-looking.\n\n> A bit paranoic, but we can check the return value to confirm the\n> d_name is fully stored in the buffer.\n\nHmm ... I don't think we need to care about that in this patch. This\ncoding pattern is already being used in other places. If we want to\nchange that, let's do it everywhere, and not in an unrelated\nbackpatchable bug fix.\n\n> > After all this, I'm not sure what to think of dbase_redo. At line 3102,\n> > is the directory supposed to exist or not? I'm confused as to what is\n> > the expected state at that point. I rewrote this, but now I think my\n> > rewrite continues to be confusing, so I'll have to think more about it.\n> \n> I'm not sure l3102 exactly points, but haven't we chosen to create\n> everything required to keep recovery going, whether it is supposed to\n> exist or not?\n\nI mean just after the two stat() calls for the target directory.\n\n> > Another aspect are the tests. Robert described a scenario where the\n> > previously committed version of this patch created trouble. Do we have\n> > a test case to cover that problematic case? I think we should strive to\n> > cover it, if possible.\n> \n> I counldn't recall that clearly and failed to dig out from the thread,\n> but doesn't the \"creating everything needed\" strategy naturally save\n> that case? We could add that test, but it seems to me a little\n> cumbersome to confirm the test correctly detect that case..\n\nWell, I *hope* it does ... but hope is no strategy, and I've frequently\nbeen on the wrong side when trusting that untested code does what I\nthink it does.\n\n\nThanks for reviewing,\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"It takes less than 2 seconds to get to 78% complete; that's a good sign.\nA few seconds later it's at 90%, but it seems to have stuck there. Did\nsomebody make percentages logarithmic while I wasn't looking?\"\n http://smylers.hates-software.com/2005/09/08/1995c749.html\n\n\n",
"msg_date": "Fri, 15 Jul 2022 09:56:10 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On 2022-Jul-15, Kyotaro Horiguchi wrote:\n\n> 0002:\n> +is_path_tslink(const char *path)\n> \n> What the \"ts\" of tslink stands for? If it stands for tablespace, the\n> function is not specific for table spaces. We already have \n> \n> +\t\t\t\t\terrmsg(\"could not stat file \\\"%s\\\": %m\", path));\n> \n> I'm not sure we need such correctness, but what is failing there is\n> lstat. I found similar codes in two places in backend and one place\n> in frontend. So couldn't it be moved to /common and have a more\n> generic name?\n\nI wondered whether it'd be better to check whether get_dirent_type\nreturns PGFILETYPE_LNK. However, that doesn't deal with junction points\nat all, which seems pretty odd ... I mean, isn't it rather useful as an\nabstraction if it doesn't abstract away the one platform-dependent point\nwe have in the area?\n\nHowever, looking closer I noticed that on Windows we use our own\nreaddir() implementation, which AFAICT includes everything to handle\nreparse points as symlinks correctly in get_dirent_type. Which means\nthat do_pg_start_backup is wasting its time with the \"#ifdef WIN32\" bits\nto handle junction points separately. We could just do this\n\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex b809a2152c..4966213fde 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -8302,13 +8302,8 @@ do_pg_backup_start(const char *backupidstr, bool fast, TimeLineID *starttli_p,\n \t\t\t * we sometimes use allow_in_place_tablespaces to create\n \t\t\t * directories directly under pg_tblspc, which would fail below.\n \t\t\t */\n-#ifdef WIN32\n-\t\t\tif (!pgwin32_is_junction(fullpath))\n-\t\t\t\tcontinue;\n-#else\n \t\t\tif (get_dirent_type(fullpath, de, false, ERROR) != PGFILETYPE_LNK)\n \t\t\t\tcontinue;\n-#endif\n \n #if defined(HAVE_READLINK) || defined(WIN32)\n \t\t\trllen = readlink(fullpath, linkpath, sizeof(linkpath));\n\n\nAnd everything should continue to work.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 15 Jul 2022 12:58:44 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On 2022-Jul-15, Alvaro Herrera wrote:\n\n> However, looking closer I noticed that on Windows we use our own\n> readdir() implementation, which AFAICT includes everything to handle\n> reparse points as symlinks correctly in get_dirent_type. Which means\n> that do_pg_start_backup is wasting its time with the \"#ifdef WIN32\" bits\n> to handle junction points separately. We could just do this\n> \n> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> index b809a2152c..4966213fde 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -8302,13 +8302,8 @@ do_pg_backup_start(const char *backupidstr, bool fast, TimeLineID *starttli_p,\n> \t\t\t * we sometimes use allow_in_place_tablespaces to create\n> \t\t\t * directories directly under pg_tblspc, which would fail below.\n> \t\t\t */\n> -#ifdef WIN32\n> -\t\t\tif (!pgwin32_is_junction(fullpath))\n> -\t\t\t\tcontinue;\n> -#else\n> \t\t\tif (get_dirent_type(fullpath, de, false, ERROR) != PGFILETYPE_LNK)\n> \t\t\t\tcontinue;\n> -#endif\n> \n> #if defined(HAVE_READLINK) || defined(WIN32)\n> \t\t\trllen = readlink(fullpath, linkpath, sizeof(linkpath));\n> \n> And everything should continue to work.\n\nHmm, but it does not:\nhttps://cirrus-ci.com/build/4824963784900608\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 15 Jul 2022 14:03:11 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "v26 here. I spent some time fighting the readdir() stuff for\nWindows (so that get_dirent_type returns LNK for junction points)\nbut couldn't make it work and was unable to figure out why.\nSo I ended up doing what do_pg_backup_start is already doing:\nan #ifdef to call pgwin32_is_junction instead. I removed the\nnewly added path_is_symlink function, because I realized that\nit would mean an extra syscall everywhere other than Windows.\n\nSo if somebody wants to fix get_dirent_type() so that it works properly\non Windows, we can change all these places together.\n\nI also changed the use of allow_invalid_pages to\nallow_in_place_tablespaces. We could add a\nseparate GUC for this, but it seems like overengineering.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Most hackers will be perfectly comfortable conceptualizing users as entropy\n sources, so let's move on.\" (Nathaniel Smith)",
"msg_date": "Wed, 20 Jul 2022 12:50:49 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On 2022-Jul-20, Alvaro Herrera wrote:\n\n> I also change the use of allow_invalid_pages to\n> allow_in_place_tablespaces. We could add a\n> separate GUC for this, but it seems overengineering.\n\nOh, but allow_in_place_tablespaces doesn't exist in versions 14 and\nolder, so this strategy doesn't really work.\n\nI see the following alternatives:\n\n1. not backpatch this fix to 14 and older\n2. use a different GUC; either allow_invalid_pages as previously\n suggested, or create a new one just for this purpose\n3. not provide any overriding mechanism in versions 14 and older\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Always assume the user will do much worse than the stupidest thing\nyou can imagine.\" (Julien PUYDT)\n\n\n",
"msg_date": "Wed, 20 Jul 2022 17:25:33 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On 2022-Jul-20, Alvaro Herrera wrote:\n\n> On 2022-Jul-20, Alvaro Herrera wrote:\n> \n> > I also change the use of allow_invalid_pages to\n> > allow_in_place_tablespaces. We could add a\n> > separate GUC for this, but it seems overengineering.\n> \n> Oh, but allow_in_place_tablespaces doesn't exist in versions 14 and\n> older, so this strategy doesn't really work.\n\n... and get_dirent_type is new in 14, so that'll be one more hurdle.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Cuando no hay humildad las personas se degradan\" (A. Christie)\n\n\n",
"msg_date": "Wed, 20 Jul 2022 18:34:23 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On 2022-Jul-20, Alvaro Herrera wrote:\n\n> I see the following alternatives:\n> \n> 1. not backpatch this fix to 14 and older\n> 2. use a different GUC; either allow_invalid_pages as previously\n> suggested, or create a new one just for this purpose\n> 3. not provide any overriding mechanism in versions 14 and older\n\nI've got no opinions on this. I don't like either 1 or 3, so I'm going\nto add and backpatch a new GUC allow_recovery_tablespaces as the\noverride mechanism.\n\nIf others disagree with this choice, please speak up.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 21 Jul 2022 13:01:06 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 10:51 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> v26 here. I spent some time fighting the readdir() stuff for\n> Windows (so that get_dirent_type returns LNK for junction points)\n> but couldn't make it to work and was unable to figure out why.\n\nWas it because of this?\n\nhttps://www.postgresql.org/message-id/CA%2BhUKGKv%2B736Pc8kSj3%3DDijDGd1eC79-uT3Vi16n7jYkcc_raw%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 21 Jul 2022 23:11:26 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 11:01 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Jul-20, Alvaro Herrera wrote:\n> > I see the following alternatives:\n> >\n> > 1. not backpatch this fix to 14 and older\n> > 2. use a different GUC; either allow_invalid_pages as previously\n> > suggested, or create a new one just for this purpose\n> > 3. not provide any overriding mechanism in versions 14 and older\n>\n> I've got no opinions on this. I don't like either 1 or 3, so I'm going\n> to add and backpatch a new GUC allow_recovery_tablespaces as the\n> override mechanism.\n>\n> If others disagree with this choice, please speak up.\n\nWould it help if we back-patched the allow_in_place_tablespaces stuff?\n I'm not sure how hard/destabilising that would be, but I could take a\nlook tomorrow.\n\n\n",
"msg_date": "Thu, 21 Jul 2022 23:14:57 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On 2022-Jul-21, Thomas Munro wrote:\n\n> On Wed, Jul 20, 2022 at 10:51 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > v26 here. I spent some time fighting the readdir() stuff for\n> > Windows (so that get_dirent_type returns LNK for junction points)\n> > but couldn't make it to work and was unable to figure out why.\n> \n> Was it because of this?\n> \n> https://www.postgresql.org/message-id/CA%2BhUKGKv%2B736Pc8kSj3%3DDijDGd1eC79-uT3Vi16n7jYkcc_raw%40mail.gmail.com\n\nOh, that sounds very likely, yeah. I didn't think of testing the\nFILE_ATTRIBUTE_DIRECTORY bit for junction points.\n\nI +1 pushing both of these patches to 14. Then this patch becomes a\ncouple of lines shorter.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Before you were born your parents weren't as boring as they are now. They\ngot that way paying your bills, cleaning up your room and listening to you\ntell them how idealistic you are.\" -- Charles J. Sykes' advice to teenagers\n\n\n",
"msg_date": "Thu, 21 Jul 2022 13:17:51 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On 2022-Jul-21, Thomas Munro wrote:\n\n> On Thu, Jul 21, 2022 at 11:01 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > I've got no opinions on this. I don't like either 1 or 3, so I'm going\n> > to add and backpatch a new GUC allow_recovery_tablespaces as the\n> > override mechanism.\n> >\n> > If others disagree with this choice, please speak up.\n> \n> Would it help if we back-patched the allow_in_place_tablespaces stuff?\n> I'm not sure how hard/destabilising that would be, but I could take a\n> look tomorrow.\n\nYeah, I think that would reduce cruft. I'm not sure this is more\nagainst backpatching policy or less, compared to adding a separate\nGUC just for this bugfix.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"The problem with the facetime model is not just that it's demoralizing, but\nthat the people pretending to work interrupt the ones actually working.\"\n (Paul Graham)\n\n\n",
"msg_date": "Thu, 21 Jul 2022 13:20:26 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On 2022-Jul-21, Alvaro Herrera wrote:\n\n> Yeah, I think that would reduce cruft. I'm not sure this is more\n> against backpatching policy or less, compared to adding a separate\n> GUC just for this bugfix.\n\ncruft:\n\n {\n {\"allow_recovery_tablespaces\", PG_POSTMASTER, WAL_RECOVERY,\n gettext_noop(\"Continues recovery after finding invalid database directories.\"),\n gettext_noop(\"It is possible for tablespace drop to interfere with database creation \"\n \"so that WAL replay is forced to create fake database directories. \"\n \"These should have been dropped by the time recovery ends; \"\n \"but in case they aren't, this option lets recovery continue if they \"\n \"are present. Note that these directories must be removed manually afterwards.\"),\n GUC_NOT_IN_SAMPLE\n },\n &allow_recovery_tablespaces,\n false,\n NULL, NULL, NULL\n },\n\nThis is not a very good explanation, but I don't know how to make it\nbetter.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I think my standards have lowered enough that now I think 'good design'\nis when the page doesn't irritate the living f*ck out of me.\" (JWZ)\n\n\n",
"msg_date": "Thu, 21 Jul 2022 13:25:05 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "At Thu, 21 Jul 2022 23:14:57 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Thu, Jul 21, 2022 at 11:01 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > On 2022-Jul-20, Alvaro Herrera wrote:\n> > > I see the following alternatives:\n> > >\n> > > 1. not backpatch this fix to 14 and older\n> > > 2. use a different GUC; either allow_invalid_pages as previously\n> > > suggested, or create a new one just for this purpose\n> > > 3. not provide any overriding mechanism in versions 14 and older\n> >\n> > I've got no opinions on this. I don't like either 1 or 3, so I'm going\n> > to add and backpatch a new GUC allow_recovery_tablespaces as the\n> > override mechanism.\n> >\n> > If others disagree with this choice, please speak up.\n> \n> Would it help if we back-patched the allow_in_place_tablespaces stuff?\n> I'm not sure how hard/destabilising that would be, but I could take a\n> look tomorrow.\n\n+1. An additional reason for me is that it is a developer option.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 22 Jul 2022 09:20:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "At Thu, 21 Jul 2022 13:25:05 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> On 2022-Jul-21, Alvaro Herrera wrote:\n> \n> > Yeah, I think that would reduce cruft. I'm not sure this is more\n> > against backpatching policy or less, compared to adding a separate\n> > GUC just for this bugfix.\n> \n> cruft:\n> \n> {\n> {\"allow_recovery_tablespaces\", PG_POSTMASTER, WAL_RECOVERY,\n> gettext_noop(\"Continues recovery after finding invalid database directories.\"),\n> gettext_noop(\"It is possible for tablespace drop to interfere with database creation \"\n> \"so that WAL replay is forced to create fake database directories. \"\n> \"These should have been dropped by the time recovery ends; \"\n> \"but in case they aren't, this option lets recovery continue if they \"\n> \"are present. Note that these directories must be removed manually afterwards.\"),\n> GUC_NOT_IN_SAMPLE\n> },\n> &allow_recovery_tablespaces,\n> false,\n> NULL, NULL, NULL\n> },\n> \n> This is not a very good explanation, but I don't know how to make it\n> better.\n\nIt looks a bit too detailed. I crafted the following..\n\nRecovery can create tentative in-place tablespace directories under\npg_tblspc/. They are assumed to be removed before reaching recovery\nconsistency; otherwise PostgreSQL raises a PANIC-level error,\naborting the recovery. Setting allow_recovery_tablespaces to true\ncauses the system to allow such directories during normal\noperation. In case those directories are left after reaching\nconsistency, that implies data loss and metadata inconsistency and may\ncause failure of future tablespace creation.\n\nThough, after writing this, I have come to think that piggy-backing on\nallow_in_place_tablespaces might be a bit different..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 22 Jul 2022 10:02:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "On 2022-Jul-22, Kyotaro Horiguchi wrote:\n\n> At Thu, 21 Jul 2022 23:14:57 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \n\n> > Would it help if we back-patched the allow_in_place_tablespaces stuff?\n> > I'm not sure how hard/destabilising that would be, but I could take a\n> > look tomorrow.\n> \n> +1. Addiotional reason for me is it is a developer option.\n\nOK, I'll wait for allow_in_place_tablespaces to be backpatched then.\n\nI would like to get this fix pushed before the next set of minors, so if\nyou won't have time for the backpatches early enough, maybe I can work\non getting it done.\n\nWhich commits would we consider?\n\n7170f2159fb2\tAllow \"in place\" tablespaces.\nf6f0db4d6240 Fix pg_tablespace_location() with in-place tablespaces\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Most hackers will be perfectly comfortable conceptualizing users as entropy\n sources, so let's move on.\" (Nathaniel Smith)\n\n\n",
"msg_date": "Fri, 22 Jul 2022 10:18:58 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "At Fri, 22 Jul 2022 10:18:58 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> OK, I'll wait for allow_in_place_tablespaces to be backpatched then.\n> \n> I would like to get this fix pushed before the next set of minors, so if\n> you won't have time for the backpatches early enough, maybe I can work\n> on getting it done.\n> \n> Which commits would we consider?\n> \n> 7170f2159fb2\tAllow \"in place\" tablespaces.\n> f6f0db4d6240 Fix pg_tablespace_location() with in-place tablespaces\n\nThe second one is just to make the function work with in-place\ntablespaces. Without it the function yields the following error.\n\n> ERROR: could not read symbolic link \"pg_tblspc/16407\": Invalid argument\n\nThis actually looks odd, but I think there is no need for back-patching\nbecause no actual user of the feature is seen in our test suite.\nIf we have a test that needs the feature in future, it would be enough\nto back-patch it then.\n\nSo I think only the first one is needed for now.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 22 Jul 2022 17:49:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 8:19 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Jul-22, Kyotaro Horiguchi wrote:\n> > At Thu, 21 Jul 2022 23:14:57 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in\n> > > Would it help if we back-patched the allow_in_place_tablespaces stuff?\n> > > I'm not sure how hard/destabilising that would be, but I could take a\n> > > look tomorrow.\n> >\n> > +1. Addiotional reason for me is it is a developer option.\n>\n> OK, I'll wait for allow_in_place_tablespaces to be backpatched then.\n>\n> I would like to get this fix pushed before the next set of minors, so if\n> you won't have time for the backpatches early enough, maybe I can work\n> on getting it done.\n>\n> Which commits would we consider?\n\nI wonder how crazy it would be to back-patch\nsrc/test/recovery/t/027_stream_regress.pl too.\n\n\n",
"msg_date": "Fri, 22 Jul 2022 20:53:12 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On 2022-Jul-22, Kyotaro Horiguchi wrote:\n\n> At Fri, 22 Jul 2022 10:18:58 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n\n> > Which commits would we consider?\n> > \n> > 7170f2159fb2\tAllow \"in place\" tablespaces.\n> > f6f0db4d6240 Fix pg_tablespace_location() with in-place tablespaces\n> \n> The second one is just to make the function work with in-place\n> tablespaces. Without it the function yeilds the following error.\n> \n> > ERROR: could not read symbolic link \"pg_tblspc/16407\": Invalid argument\n> \n> This looks actually odd but I think no need of back-patching because\n> there's no actual user of the feature is not seen in our test suite.\n> If we have a test that needs the feature in future, it would be enough\n> to back-patch it then.\n\nActually, I found that the new test added by the fix in this thread does\ndepend on this being fixed, so I included an even larger set, which I\nthink makes this more complete:\n\n7170f2159fb2 Allow \"in place\" tablespaces.\nc6f2f01611d4 Fix pg_basebackup with in-place tablespaces.\nf6f0db4d6240 Fix pg_tablespace_location() with in-place tablespaces\n7a7cd84893e0 doc: Remove mention to in-place tablespaces for pg_tablespace_location()\n5344723755bd Remove unnecessary Windows-specific basebackup code.\n\nI didn't include any of the test changes for now. I don't intend to do\nso, unless we see another reason for that; I think the new tests that\nare going to be added by the recovery bugfix should be sufficient\ncoverage.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La fuerza no está en los medios físicos\nsino que reside en una voluntad indomable\" (Gandhi)\n\n\n",
"msg_date": "Wed, 27 Jul 2022 08:07:16 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "Okay, I think I'm done with this. Here's v27 for the master branch,\nwhere I fixed some comments as well as thinkos in the test script.\nThe ones on older branches aren't materially different, they just have\ntonnes of conflicts resolved. I'll get this pushed tomorrow morning.\n\nI have run it through CI and it seems ... not completely broken, at\nleast, but I have no working recipes for Windows on branches 14 and\nolder, so it doesn't really work fully. If anybody does, please share.\nYou can see mine here\nhttps://github.com/alvherre/postgres/commits/REL_11_STABLE [etc]\nhttps://cirrus-ci.com/build/5320904228995072\nhttps://cirrus-ci.com/github/alvherre/postgres\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Every machine is a smoke machine if you operate it wrong enough.\"\nhttps://twitter.com/libseybieda/status/1541673325781196801",
"msg_date": "Wed, 27 Jul 2022 20:54:49 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Wed, 27 Jul 2022 at 20:55, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Okay, I think I'm done with this. Here's v27 for the master branch,\n> where I fixed some comments as well as thinkos in the test script.\n> The ones on older branches aren't materially different, they just have\n> tonnes of conflicts resolved. I'll get this pushed tomorrow morning.\n>\n> I have run it through CI and it seems ... not completely broken, at\n> least, but I have no working recipes for Windows on branches 14 and\n> older, so it doesn't really work fully. If anybody does, please share.\n> You can see mine here\n> https://github.com/alvherre/postgres/commits/REL_11_STABLE [etc]\n> https://cirrus-ci.com/build/5320904228995072\n> https://cirrus-ci.com/github/alvherre/postgres\n\nI'd like to bring to your attention that the test that was introduced\nwith 9e4f914b seems to be flaky in FreeBSD 13 in the CFBot builds: it\nsometimes times out while waiting for the secondary to catch up. Or,\nat least I think it does, and I'm not too familiar with TAP failure\noutputs: it returns with error code 29 and logs that I'd expect when\nthe timeout is reached.\n\nSee bottom for examples (all 3 builds for different patches).\n\nKind regards,\n\nMatthias van de Meent.\n\n[1] https://cirrus-ci.com/task/4960990331666432?logs=test_world#L2631-L2662\n[2] https://cirrus-ci.com/task/5012678384025600?logs=test_world#L2631-L2662\n[3] https://cirrus-ci.com/task/5147001137397760?logs=test_world#L2631-L2662\n\n\n",
"msg_date": "Thu, 28 Jul 2022 20:04:38 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> I'd like to bring to your attention that the test that was introduced\n> with 9e4f914b seem to be flaky in FreeBSD 13 in the CFBot builds: it\n> sometimes times out while waiting for the secondary to catch up. Or,\n> at least I think it does, and I'm not too familiar with TAP failure\n> outputs: it returns with error code 29 and logs that I'd expect when\n> the timeout is reached.\n\nIt's also failing in the buildfarm, eg\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2022-07-28%2020%3A57%3A50\n\nLooks like only conchuela so far, reinforcing the idea that we're\nonly seeing it on FreeBSD. I'd tentatively bet on a timing problem\nthat requires some FreeBSD scheduling quirk to manifest; we've seen\nsuch quirks before.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Jul 2022 17:57:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Fri, Jul 29, 2022 at 9:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> > I'd like to bring to your attention that the test that was introduced\n> > with 9e4f914b seem to be flaky in FreeBSD 13 in the CFBot builds: it\n> > sometimes times out while waiting for the secondary to catch up. Or,\n> > at least I think it does, and I'm not too familiar with TAP failure\n> > outputs: it returns with error code 29 and logs that I'd expect when\n> > the timeout is reached.\n>\n> It's also failing in the buildfarm, eg\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2022-07-28%2020%3A57%3A50\n>\n> Looks like only conchuela so far, reinforcing the idea that we're\n> only seeing it on FreeBSD. I'd tentatively bet on a timing problem\n> that requires some FreeBSD scheduling quirk to manifest; we've seen\n> such quirks before.\n\nMaybe it just needs a replication slot? I see:\n\nERROR: requested WAL segment 000000010000000000000003 has already been removed\n\n\n",
"msg_date": "Fri, 29 Jul 2022 11:27:01 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "At Fri, 29 Jul 2022 11:27:01 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \n> Maybe it just needs a replication slot? I see:\n> \n> ERROR: requested WAL segment 000000010000000000000003 has already been removed\n\nAgreed, I see the same. The same failure can surely be reproduced\nby inserting wal-switch+checkpoint after taking backup [1]. And it is\nfixed by the attached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n[1]:\n--- a/src/test/recovery/t/033_replay_tsp_drops.pl\n+++ b/src/test/recovery/t/033_replay_tsp_drops.pl\n@@ -30,6 +30,13 @@ sub test_tablespace\n \tmy $backup_name = 'my_backup';\n \t$node_primary->backup($backup_name);\n \n+\t$node_primary->psql(\n+\t\t'postgres',\n+\t\tqq[\n+\t\tCREATE TABLE t(); DROP TABLE t; SELECT pg_switch_wal();\n+\t\tCHECKPOINT;\n+\t\t]);\n+\n \tmy $node_standby = PostgreSQL::Test::Cluster->new(\"standby2_$strategy\");\n \t$node_standby->init_from_backup($node_primary, $backup_name,\n \t\thas_streaming => 1);",
"msg_date": "Fri, 29 Jul 2022 14:20:08 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch\n and discussion)"
},
{
"msg_contents": "On 2022-Jul-29, Kyotaro Horiguchi wrote:\n\n> At Fri, 29 Jul 2022 11:27:01 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \n> > Maybe it just needs a replication slot? I see:\n> > \n> > ERROR: requested WAL segment 000000010000000000000003 has already been removed\n> \n> Agreed, I see the same. The same failure can be surely reproducible\n> by inserting wal-switch+checkpoint after taking backup [1]. And it is\n> fixed by the attached.\n\nWFM, pushed that way. I added a slot drop after the pg_stat_replication\ncount check to be a little less intrusive. Thanks Matthias for\nreporting. (Note that the Cirrus page has a download link for the\ncomplete logs as artifacts).\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"I'm always right, but sometimes I'm more right than other times.\"\n (Linus Torvalds)\n\n\n",
"msg_date": "Fri, 29 Jul 2022 12:59:03 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> WFM, pushed that way.\n\nLooks like conchuela is still intermittently unhappy.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2022-07-30%2004%3A57%3A51\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 30 Jul 2022 10:37:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "I wrote:\n> Looks like conchuela is still intermittently unhappy.\n\nBTW, quite aside from stability, is it really necessary for this test to\nbe so freakin' slow? florican for instance reports\n\n[12:43:38] t/025_stuck_on_old_timeline.pl ....... ok 49010 ms ( 0.00 usr 0.00 sys + 3.64 cusr 2.49 csys = 6.13 CPU)\n[12:44:12] t/026_overwrite_contrecord.pl ........ ok 34751 ms ( 0.01 usr 0.00 sys + 3.14 cusr 1.76 csys = 4.91 CPU)\n[12:49:00] t/027_stream_regress.pl .............. ok 287278 ms ( 0.00 usr 0.00 sys + 9.66 cusr 6.95 csys = 16.60 CPU)\n[12:50:04] t/028_pitr_timelines.pl .............. ok 64543 ms ( 0.00 usr 0.00 sys + 3.59 cusr 3.20 csys = 6.78 CPU)\n[12:50:17] t/029_stats_restart.pl ............... ok 12505 ms ( 0.02 usr 0.00 sys + 3.16 cusr 1.40 csys = 4.57 CPU)\n[12:50:51] t/030_stats_cleanup_replica.pl ....... ok 33933 ms ( 0.01 usr 0.01 sys + 3.55 cusr 2.46 csys = 6.03 CPU)\n[12:51:25] t/031_recovery_conflict.pl ........... ok 34249 ms ( 0.00 usr 0.00 sys + 3.37 cusr 2.20 csys = 5.57 CPU)\n[12:52:09] t/032_relfilenode_reuse.pl ........... ok 44274 ms ( 0.01 usr 0.00 sys + 3.21 cusr 2.05 csys = 5.27 CPU)\n[12:54:07] t/033_replay_tsp_drops.pl ............ ok 117840 ms ( 0.01 usr 0.00 sys + 8.72 cusr 5.41 csys = 14.14 CPU)\n\n027 is so bloated because it runs the core regression tests YA time,\nwhich I'm not very happy about either; but that's no excuse for\nevery new test to contribute an additional couple of minutes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 30 Jul 2022 12:51:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Sun, Jul 31, 2022 at 4:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> BTW, quite aside from stability, is it really necessary for this test to\n> be so freakin' slow? florican for instance reports\n>\n> [12:43:38] t/025_stuck_on_old_timeline.pl ....... ok 49010 ms ( 0.00 usr 0.00 sys + 3.64 cusr 2.49 csys = 6.13 CPU)\n> [12:44:12] t/026_overwrite_contrecord.pl ........ ok 34751 ms ( 0.01 usr 0.00 sys + 3.14 cusr 1.76 csys = 4.91 CPU)\n> [12:49:00] t/027_stream_regress.pl .............. ok 287278 ms ( 0.00 usr 0.00 sys + 9.66 cusr 6.95 csys = 16.60 CPU)\n> [12:50:04] t/028_pitr_timelines.pl .............. ok 64543 ms ( 0.00 usr 0.00 sys + 3.59 cusr 3.20 csys = 6.78 CPU)\n> [12:50:17] t/029_stats_restart.pl ............... ok 12505 ms ( 0.02 usr 0.00 sys + 3.16 cusr 1.40 csys = 4.57 CPU)\n> [12:50:51] t/030_stats_cleanup_replica.pl ....... ok 33933 ms ( 0.01 usr 0.01 sys + 3.55 cusr 2.46 csys = 6.03 CPU)\n> [12:51:25] t/031_recovery_conflict.pl ........... ok 34249 ms ( 0.00 usr 0.00 sys + 3.37 cusr 2.20 csys = 5.57 CPU)\n> [12:52:09] t/032_relfilenode_reuse.pl ........... ok 44274 ms ( 0.01 usr 0.00 sys + 3.21 cusr 2.05 csys = 5.27 CPU)\n> [12:54:07] t/033_replay_tsp_drops.pl ............ ok 117840 ms ( 0.01 usr 0.00 sys + 8.72 cusr 5.41 csys = 14.14 CPU)\n>\n> 027 is so bloated because it runs the core regression tests YA time,\n> which I'm not very happy about either; but that's no excuse for\n> every new test to contribute an additional couple of minutes.\n\nComplaints about 027 noted, I'm thinking about what we could do about that.\n\nAs for 033, I worried that it might be the new ProcSignalBarrier stuff\naround tablespaces, but thankfully the DEBUG logging I added there\nrecently shows those all completing in single digit milliseconds. I\nalso confirmed there are no unexpected fsync'd being produced here.\n\nThat is quite a lot of CPU, but it's a huge amount of total runtime.\nIt runs in 5-8 seconds on various modern systems, 19 seconds on my\nLinux RPi4, and 50 seconds on my Celeron-powered NAS box with spinning\ndisks.\n\nI noticed this is a 32 bit FBSD system. Is it running on UFS, perhaps\non slow storage? Are soft updates enabled (visible as options in\noutput of \"mount\")? Without soft updates, a lot more file system ops\nperform synchronous I/O, which really slows down our tests. In\ngeneral, UFS isn't as good as modern file systems at avoiding I/O for\nshort-lived files, and we set up and tear down a lot of them in our\ntesting. Another thing that makes a difference is to use a filesystem\nwith 8KB block size. This has been a subject of investigation for\nspeeding up CI (see src/tools/ci/gcp_freebsd_repartition.sh), but\nseveral mysteries remain unsolved...\n\n\n",
"msg_date": "Sun, 31 Jul 2022 11:08:01 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I noticed this is a 32 bit FBSD system. Is it running on UFS, perhaps\n> on slow storage? Are soft updates enabled (visible as options in\n> output of \"mount\")?\n\nIt's an ancient (2006) mac mini with 5400RPM spinning rust.\n\"mount\" says\n\n/dev/ada0s2a on / (ufs, local, soft-updates, journaled soft-updates)\ndevfs on /dev (devfs)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 30 Jul 2022 19:17:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Sun, Jul 31, 2022 at 2:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > WFM, pushed that way.\n>\n> Looks like conchuela is still intermittently unhappy.\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2022-07-30%2004%3A57%3A51\n\nAnd here's one from CI that failed on Linux (this was a cfbot run with\nan unrelated patch, parent commit b998196 so a few commits after \"Fix\ntest instability\"):\n\nhttps://cirrus-ci.com/task/5282155000496128\n\nhttps://api.cirrus-ci.com/v1/artifact/task/5282155000496128/log/src/test/recovery/tmp_check/log/033_replay_tsp_drops_primary1_WAL_LOG.log\n\nIt looks like this sequence is racy and we need to wait for more than\njust \"connection is made\" before dropping the slot?\n\n $node_standby->start;\n\n # Make sure connection is made\n $node_primary->poll_query_until('postgres',\n 'SELECT count(*) = 1 FROM pg_stat_replication');\n $node_primary->safe_psql('postgres', \"SELECT\npg_drop_replication_slot('slot')\");\n\nWhy not set the replication slot name so that the standby uses it\n\"properly\", like in other tests?\n\n\n",
"msg_date": "Sun, 31 Jul 2022 15:46:33 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-30 10:37:55 -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > WFM, pushed that way.\n> \n> Looks like conchuela is still intermittently unhappy.\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2022-07-30%2004%3A57%3A51\n\nCI as well:\nhttps://cirrus-ci.com/task/5295464063959040?logs=test_world#L2671\nhttps://cirrus-ci.com/task/5042590885085184?logs=test_world#L2664\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 31 Jul 2022 19:01:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Sun, Jul 31, 2022 at 3:46 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sun, Jul 31, 2022 at 2:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > WFM, pushed that way.\n> >\n> > Looks like conchuela is still intermittently unhappy.\n> >\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2022-07-30%2004%3A57%3A51\n>\n> And here's one from CI that failed on Linux (this was a cfbot run with\n> an unrelated patch, parent commit b998196 so a few commits after \"Fix\n> test instability\"):\n>\n> https://cirrus-ci.com/task/5282155000496128\n>\n> https://api.cirrus-ci.com/v1/artifact/task/5282155000496128/log/src/test/recovery/tmp_check/log/033_replay_tsp_drops_primary1_WAL_LOG.log\n>\n> It looks like this sequence is racy and we need to wait for more than\n> just \"connection is made\" before dropping the slot?\n>\n> $node_standby->start;\n>\n> # Make sure connection is made\n> $node_primary->poll_query_until('postgres',\n> 'SELECT count(*) = 1 FROM pg_stat_replication');\n> $node_primary->safe_psql('postgres', \"SELECT\n> pg_drop_replication_slot('slot')\");\n>\n> Why not set the replication slot name so that the standby uses it\n> \"properly\", like in other tests?\n\nOr to keep doing it this way, does that pg_stat_replication query need\na WHERE clause looking at the state?\n\n\n",
"msg_date": "Wed, 3 Aug 2022 07:58:08 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On Sun, Jul 31, 2022 at 11:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I noticed this is a 32 bit FBSD system. Is it running on UFS, perhaps\n> > on slow storage? Are soft updates enabled (visible as options in\n> > output of \"mount\")?\n>\n> It's an ancient (2006) mac mini with 5400RPM spinning rust.\n> \"mount\" says\n>\n> /dev/ada0s2a on / (ufs, local, soft-updates, journaled soft-updates)\n> devfs on /dev (devfs)\n\nI don't have all the details and I may be way off here but I have the\nimpression that when you create and then unlink trees of files\nquickly, sometimes soft-updates are flushed synchronously, which turns\ninto many 5400 RPM seeks; dtrace could be used to check, but some\nclues in your numbers would be some kind of correlation between time\nand number of clusters that are set up and torn down by each test.\nWithout soft-updates, it'd be much worse, because then many more\nthings become synchronous I/O. Even with write caching enabled,\nsoft-updates flush the drive cache when there's a barrier needed for\ncrash safety. It may also be that there is something strange about\nApple hardware that makes it extra slow at full-cache-flush operations\n(cf unexplainable excess slowness of F_FULLFSYNC under macOS including\nold spinning rust systems and current flash systems, and complaints\nabout this general area on current Apple hardware from the Asahi\nLinux/M1 port people, though how relevant that is to 2006 spinning\nrust I dunno). It would be nice to look into how to tune, fix or work\naround all of that, as it also affects CI which has a IO limits\n(though admittedly a couple of orders of mag higher IOPS than 5400\nRPM).\n\n\n",
"msg_date": "Thu, 4 Aug 2022 14:54:41 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
},
{
"msg_contents": "On 2022-Jul-30, Tom Lane wrote:\n\n> BTW, quite aside from stability, is it really necessary for this test to\n> be so freakin' slow? florican for instance reports\n> \n> [12:54:07] t/033_replay_tsp_drops.pl ............ ok 117840 ms ( 0.01 usr 0.00 sys + 8.72 cusr 5.41 csys = 14.14 CPU)\n> \n> 027 is so bloated because it runs the core regression tests YA time,\n> which I'm not very happy about either; but that's no excuse for\n> every new test to contribute an additional couple of minutes.\n\nDefinitely not intended. It looks like the reason is just that the DROP\nDATABASE/TABLESPACE commands are super slow, and this test does a lot of\nthat. I added some instrumentation and the largest fraction of time\ngoes to execute this\n\n\t\tCREATE DATABASE dropme_db1 WITH TABLESPACE dropme_ts1;\n\t\tCREATE TABLE t (a int) TABLESPACE dropme_ts2;\n\t\tCREATE DATABASE dropme_db2 WITH TABLESPACE dropme_ts2;\n\t\tCREATE DATABASE moveme_db TABLESPACE source_ts;\n\t\tALTER DATABASE moveme_db SET TABLESPACE target_ts;\n\t\tCREATE DATABASE newdb TEMPLATE template_db;\n\t\tALTER DATABASE template_db IS_TEMPLATE = false;\n\t\tDROP DATABASE dropme_db1;\n\t\tDROP TABLE t;\n\t\tDROP DATABASE dropme_db2;\n\t\tDROP TABLESPACE dropme_ts2;\n\t\tDROP TABLESPACE source_ts;\n\t\tDROP DATABASE template_db;\n\nMaybe this is overkill and we can reduce the test without damaging the\ncoverage. I'll have a look during the weekend.\n\nI'll repair the reliability problem too, separately.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"This is a foot just waiting to be shot\" (Andrew Dunstan)\n\n\n",
"msg_date": "Fri, 5 Aug 2022 22:29:40 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby recovery fails (tablespace related) (tentative patch and\n discussion)"
}
]
[
{
"msg_contents": "Hi all,\r\n\r\npg_test_timing accepts the following command-line options:\r\n-d duration\r\n--duration=duration\r\n\r\n    Specifies the test duration, in seconds. Longer durations give slightly better accuracy, and are more likely to discover problems with the system clock moving backwards. The default test duration is 3 seconds.\r\n-V\r\n--version\r\n\r\n    Print the pg_test_timing version and exit.\r\n-?\r\n--help\r\n\r\n    Show help about pg_test_timing command line arguments, and exit.\r\n\r\n[https://www.postgresql.org/docs/11/pgtesttiming.html]\r\n\r\nHowever, when I run the following command, no error is reported and the command still runs.\r\npg_test_timing --\r\n\r\nI think \"--\" is an illegal option, so an error should be reported.\r\n\r\nHere is a patch that reports the illegal option.\r\n\r\nBest Regards!",
"msg_date": "Wed, 17 Apr 2019 08:05:32 +0000",
"msg_from": "\"Zhang, Jie\" <zhangjie2@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "[patch] pg_test_timing does not prompt illegal option"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 6:21 PM Zhang, Jie <zhangjie2@cn.fujitsu.com> wrote:\n>\n> Hi all,\n>\n> pg_test_timing accepts the following command-line options:\n> -d duration\n> --duration=duration\n>\n> Specifies the test duration, in seconds. Longer durations give slightly better accuracy, and are more likely to discover problems with the system clock moving backwards. The default test duration is 3 seconds.\n> -V\n> --version\n>\n> Print the pg_test_timing version and exit.\n> -?\n> --help\n>\n> Show help about pg_test_timing command line arguments, and exit.\n>\n> [https://www.postgresql.org/docs/11/pgtesttiming.html]\n>\n> However, when I use the following command, no error prompt. the command can run.\n> pg_test_timing --\n>\n> I think \"--\" is a illegal option, errors should be prompted.\n>\n> Here is a patch for prompt illegal option.\n\nThis is not the problem only for pg_test_timing. If you want to\naddress this, the patch needs to cover all the client commands\nlike psql, createuser. I'm not sure if it's worth doing that.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Wed, 17 Apr 2019 23:14:19 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] pg_test_timing does not prompt illegal option"
},
{
"msg_contents": "Fujii Masao <masao.fujii@gmail.com> writes:\n> On Wed, Apr 17, 2019 at 6:21 PM Zhang, Jie <zhangjie2@cn.fujitsu.com> wrote:\n>> I think \"--\" is a illegal option, errors should be prompted.\n\n> This is not the problem only for pg_test_timing. If you want to\n> address this, the patch needs to cover all the client commands\n> like psql, createuser. I'm not sure if it's worth doing that.\n\nI think it might be an actively bad idea. There's a pretty\nwidespread convention that \"--\" is a no-op switch indicating\nthe end of switches. At least some of our tools appear to\nhonor that behavior (probably because glibc's getopt_long\ndoes; I do not think we are implementing it ourselves).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Apr 2019 10:24:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] pg_test_timing does not prompt illegal option"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 10:24:17AM -0400, Tom Lane wrote:\n> Fujii Masao <masao.fujii@gmail.com> writes:\n> > On Wed, Apr 17, 2019 at 6:21 PM Zhang, Jie <zhangjie2@cn.fujitsu.com> wrote:\n> >> I think \"--\" is a illegal option, errors should be prompted.\n> \n> > This is not the problem only for pg_test_timing. If you want to\n> > address this, the patch needs to cover all the client commands\n> > like psql, createuser. I'm not sure if it's worth doing that.\n> \n> I think it might be an actively bad idea. There's a pretty\n> widespread convention that \"--\" is a no-op switch indicating\n> the end of switches. At least some of our tools appear to\n> honor that behavior (probably because glibc's getopt_long\n> does; I do not think we are implementing it ourselves).\n\nYep, a simple 'ls' on Debian stretch shows it is a common convention:\n\n\t$ ls --\n\tfile1 file2\n\nFYI, 'gcc --' (using Debian 6.3.0-18+deb9u1) does throw an error, so it\nis inconsistent.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 17 Apr 2019 12:05:17 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] pg_test_timing does not prompt illegal option"
},
{
"msg_contents": "\n>> This is not the problem only for pg_test_timing. If you want to\n>> address this, the patch needs to cover all the client commands\n>> like psql, createuser. I'm not sure if it's worth doing that.\n>\n> I think it might be an actively bad idea. There's a pretty\n> widespread convention that \"--\" is a no-op switch indicating\n> the end of switches. At least some of our tools appear to\n> honor that behavior (probably because glibc's getopt_long\n> does; I do not think we are implementing it ourselves).\n\n\"src/port/getopt_long.c\" checks for \"--\" as the end of options.\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 17 Apr 2019 18:13:47 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [patch] pg_test_timing does not prompt illegal option"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> I think it might be an actively bad idea. There's a pretty\n>> widespread convention that \"--\" is a no-op switch indicating\n>> the end of switches. At least some of our tools appear to\n>> honor that behavior (probably because glibc's getopt_long\n>> does; I do not think we are implementing it ourselves).\n\n> \"src/port/getopt_long.c\" checks for \"--\" as the end of options.\n\nAh. But I was checking this on a Linux build that's using glibc's\nimplementation, not our own. It's pretty easy to prove that psql,\nfor one, acts that way when using the glibc subroutine:\n\n$ psql -- -E\npsql: error: could not connect to server: FATAL: database \"-E\" does not exist\n\n\nWe've generally felt that deferring to the behavior of the platform's\ngetopt() or getopt_long() is a better idea than trying to enforce some\nlowest-common-denominator version of switch parsing, on the theory that\nusers of a given platform will be used to whatever its getopt does.\nThis does mean that we have undocumented behaviors on particular\nplatforms. I'd say that accepting \"--\" is one of them. Another example\nis that glibc's getopt is willing to reorder the arguments, so that\nfor example this works for me:\n\n$ psql template1 -E\npsql (12devel)\nType \"help\" for help.\n\ntemplate1=# \\set\n...\nECHO_HIDDEN = 'on'\n...\n\nOn other platforms that would not work, so we don't document it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Apr 2019 12:36:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] pg_test_timing does not prompt illegal option"
},
{
"msg_contents": "\nHello Tom,\n\n> We've generally felt that deferring to the behavior of the platform's\n> getopt() or getopt_long() is a better idea than trying to enforce some\n> lowest-common-denominator version of switch parsing, on the theory that\n> users of a given platform will be used to whatever its getopt does.\n> This does mean that we have undocumented behaviors on particular\n> platforms.\n\nInteresting.\n\n> I'd say that accepting \"--\" is one of them. Another example is that \n> glibc's getopt is willing to reorder the arguments, so that for example \n> this works for me:\n>\n> $ psql template1 -E\n> psql (12devel)\n\nYep, I noticed that one by accident once.\n\n> On other platforms that would not work, so we don't document it.\n\nPeople might get surprised anyway, because the very same command may or \nmay not work depending on the platform. Does not matter much, though.\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 17 Apr 2019 22:10:55 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [patch] pg_test_timing does not prompt illegal option"
}
] |
[
{
"msg_contents": "We just had another complaint (bug #15767) about parallel dump's\ninability to cope with concurrent lock requests. The problem is\nwell described by the comments for lockTableForWorker():\n\n * Acquire lock on a table to be dumped by a worker process.\n *\n * The master process is already holding an ACCESS SHARE lock. Ordinarily\n * it's no problem for a worker to get one too, but if anything else besides\n * pg_dump is running, there's a possible deadlock:\n *\n * 1) Master dumps the schema and locks all tables in ACCESS SHARE mode.\n * 2) Another process requests an ACCESS EXCLUSIVE lock (which is not granted\n *\t because the master holds a conflicting ACCESS SHARE lock).\n * 3) A worker process also requests an ACCESS SHARE lock to read the table.\n *\t The worker is enqueued behind the ACCESS EXCLUSIVE lock request.\n * 4) Now we have a deadlock, since the master is effectively waiting for\n *\t the worker. The server cannot detect that, however.\n *\n * To prevent an infinite wait, prior to touching a table in a worker, request\n * a lock in ACCESS SHARE mode but with NOWAIT. If we don't get the lock,\n * then we know that somebody else has requested an ACCESS EXCLUSIVE lock and\n * so we have a deadlock. We must fail the backup in that case.\n\nFailing the whole backup is, of course, not a nice outcome.\n\nWhile thinking about that, it occurred to me that we could close the\ngap if the server somehow understood that the master was waiting for\nthe worker. And it's actually not that hard to make that happen:\nwe could use advisory locks. Consider a dance like the following:\n\n1. Master has A.S. lock on table t, and dispatches a dump job for t\nto worker.\n\n2. Worker chooses some key k, does pg_advisory_lock(k), and sends k\nback to master.\n\n3. Worker attempts to get A.S. lock on t. This might block, if some\nother session has a pending lock request on t.\n\n4. Upon receipt of message from worker, master also does\npg_advisory_lock(k). This blocks, but now the server can see that a\ndeadlock exists, and after deadlock_timeout elapses it will fix the\ndeadlock by letting the worker bypass the other pending lock request.\n\n5. Once worker has the lock on table t, it does pg_advisory_unlock(k)\nto release the master.\n\n6. Master also does pg_advisory_unlock(k), and goes on about its business.\n\n\nI've tested that the server side of this works, for either order of steps\n3 and 4. It seems like mostly just a small matter of programming to teach\npg_dump to do this, although there are some questions to resolve, mainly\nhow we choose the advisory lock keys. If there are user applications\nrunning that also use advisory locks, there could be unwanted\ninterference. One easy improvement is to use pg_try_advisory_lock(k) in\nstep 2, and just choose a different k if the lock's in use. Perhaps,\nsince we don't expect that the locks would be held long, that's\nsufficient --- but I suspect that users might wish for some pg_dump\noptions to restrict the set of keys it could use.\n\nAnother point is that the whole dance is unnecessary in the normal\ncase, so maybe we should only do this if an initial attempt to get\nthe lock on table t fails. However, LOCK TABLE NOWAIT throws an\nerror if it can't get the lock, so this would require using a\nsubtransaction or starting a whole new transaction in the worker,\nso maybe that's more trouble than it's worth. Some performance\ntesting might be called for. There's also the point that the code\npath would go almost entirely untested if it's not exercised always.\n\nLastly, we'd probably want both the master and worker processes to\nrun with small values of deadlock_timeout, so as to reduce the\ntime wasted before the server breaks the deadlock. Are there any\ndownsides to that? (If so, again maybe it's not worth the trouble,\nsince the typical case is that no wait is needed.)\n\nThoughts? I'm not volunteering to write this right now, but maybe\nsomebody else will take up the project.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Apr 2019 11:34:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Idea for fixing parallel pg_dump's lock acquisition problem"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 11:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> While thinking about that, it occurred to me that we could close the\n> gap if the server somehow understood that the master was waiting for\n> the worker. And it's actually not that hard to make that happen:\n> we could use advisory locks. Consider a dance like the following:\n>\n> 1. Master has A.S. lock on table t, and dispatches a dump job for t\n> to worker.\n>\n> 2. Worker chooses some key k, does pg_advisory_lock(k), and sends k\n> back to master.\n>\n> 3. Worker attempts to get A.S. lock on t. This might block, if some\n> other session has a pending lock request on t.\n>\n> 4. Upon receipt of message from worker, master also does\n> pg_advisory_lock(k). This blocks, but now the server can see that a\n> deadlock exists, and after deadlock_timeout elapses it will fix the\n> deadlock by letting the worker bypass the other pending lock request.\n>\n> 5. Once worker has the lock on table t, it does pg_advisory_unlock(k)\n> to release the master.\n>\n> 6. Master also does pg_advisory_unlock(k), and goes on about its business.\n\nNeat idea.\n\n> I've tested that the server side of this works, for either order of steps\n> 3 and 4. It seems like mostly just a small matter of programming to teach\n> pg_dump to do this, although there are some questions to resolve, mainly\n> how we choose the advisory lock keys. If there are user applications\n> running that also use advisory locks, there could be unwanted\n> interference. One easy improvement is to use pg_try_advisory_lock(k) in\n> step 2, and just choose a different k if the lock's in use. Perhaps,\n> since we don't expect that the locks would be held long, that's\n> sufficient --- but I suspect that users might wish for some pg_dump\n> options to restrict the set of keys it could use.\n\nThis seems like a pretty significant wart. I think we probably need a\nbetter solution, but I'm not sure what it is. I guess we could define\na new lock space that is specifically intended for this kind of\ninter-process coordination, where it's expected that the key is a PID.\n\n> Another point is that the whole dance is unnecessary in the normal\n> case, so maybe we should only do this if an initial attempt to get\n> the lock on table t fails. However, LOCK TABLE NOWAIT throws an\n> error if it can't get the lock, so this would require using a\n> subtransaction or starting a whole new transaction in the worker,\n> so maybe that's more trouble than it's worth. Some performance\n> testing might be called for. There's also the point that the code\n> path would go almost entirely untested if it's not exercised always.\n\nSeems like it might make sense just to do it always.\n\n> Lastly, we'd probably want both the master and worker processes to\n> run with small values of deadlock_timeout, so as to reduce the\n> time wasted before the server breaks the deadlock. Are there any\n> downsides to that? (If so, again maybe it's not worth the trouble,\n> since the typical case is that no wait is needed.)\n\nI think we shouldn't do this part. It's true that reducing\ndeadlock_timeout prevents time from being wasted if a deadlock occurs,\nbut that problem is not confined to this case; that's what\ndeadlock_timeout does in general. I can't see why we should\nsubstitute our judgement regarding the proper value of\ndeadlock_timeout for that of the DBA in this one case. I'm a little\nfuzzy-headed at the moment but it seems to me that no deadlock will\noccur unless the worker fails to get the lock, which should be rare,\nand even then I wonder if we couldn't somehow jigger things so that\nthe special case in ProcSleep (\"Determine where to add myself in the\nwait queue.\") rescues us. Even if not, a 1 second delay in a rare\ncase doesn't seem like a huge problem.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 Apr 2019 13:00:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Idea for fixing parallel pg_dump's lock acquisition problem"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Apr 17, 2019 at 11:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... If there are user applications\n>> running that also use advisory locks, there could be unwanted\n>> interference. One easy improvement is to use pg_try_advisory_lock(k) in\n>> step 2, and just choose a different k if the lock's in use. Perhaps,\n>> since we don't expect that the locks would be held long, that's\n>> sufficient --- but I suspect that users might wish for some pg_dump\n>> options to restrict the set of keys it could use.\n\n> This seems like a pretty significant wart. I think we probably need a\n> better solution, but I'm not sure what it is. I guess we could define\n> a new lock space that is specifically intended for this kind of\n> inter-process coordination, where it's expected that the key is a PID.\n\nMy thought was that we'd like this to work without requiring any new\nserver-side facilities, so that pg_dump could use it against any server\nversion that supports parallel dump. If we're willing to restrict\nthe fix to server >= v13, or whenever this gets done, then yes we could\n(probably) arrange things to avoid the hazard. I'm not quite sure how\nit'd work though. We can't just invent a process-local key space,\nbecause both the master and worker need to be able to lock the same\nkey.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Apr 2019 13:17:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Idea for fixing parallel pg_dump's lock acquisition problem"
},
{
"msg_contents": "On Fri, Apr 19, 2019 at 7:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Wed, Apr 17, 2019 at 11:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> ... If there are user applications\n> >> running that also use advisory locks, there could be unwanted\n> >> interference. One easy improvement is to use pg_try_advisory_lock(k) in\n> >> step 2, and just choose a different k if the lock's in use. Perhaps,\n> >> since we don't expect that the locks would be held long, that's\n> >> sufficient --- but I suspect that users might wish for some pg_dump\n> >> options to restrict the set of keys it could use.\n>\n> > This seems like a pretty significant wart. I think we probably need a\n> > better solution, but I'm not sure what it is. I guess we could define\n> > a new lock space that is specifically intended for this kind of\n> > inter-process coordination, where it's expected that the key is a PID.\n>\n> My thought was that we'd like this to work without requiring any new\n> server-side facilities, so that pg_dump could use it against any server\n> version that supports parallel dump.\n\nCouldn't we use LOCKTAG_USERLOCK for that? It should be compatible\nwith all needed server versions, and the odds of collision seem low as\nthe extension as been dropped in pg 8.2 and the pgfoundry project has\nno activity since 2006. I'm not aware of any other extension using\nit, and a quick search didn't find anything.\n\n\n",
"msg_date": "Fri, 19 Apr 2019 22:52:08 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Idea for fixing parallel pg_dump's lock acquisition problem"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Fri, Apr 19, 2019 at 7:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> My thought was that we'd like this to work without requiring any new\n>> server-side facilities, so that pg_dump could use it against any server\n>> version that supports parallel dump.\n\n> Couldn't we use LOCKTAG_USERLOCK for that?\n\nUm, no, because there's no way for pg_dump to get at it in existing\nserver releases. The only available feature that supports mid-transaction\nunlock is the advisory-lock stuff.\n\nIf we have to add new code, we could perfectly well add another\nLockTagType to go with it. But that doesn't really solve the problem.\nWhatever SQL API we provide would have to be available to everybody\n(since pg_dump doesn't necessarily run as superuser), and as soon as\nsomebody says \"hey that's a neat feature, I think I'll use it in my\napp\" we're back to square one. It's not very apparent how we could\nhave a lock tag that's available to pg_dump processes and nobody else.\n\nI had some vague ideas about making it depend on the processes sharing\na snapshot; but it's not obvious how to get from there to a suitable\nlocktag, and in any case that certainly wouldn't be pre-existing\nserver functionality.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Apr 2019 17:34:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Idea for fixing parallel pg_dump's lock acquisition problem"
}
] |
[
{
"msg_contents": "Fix unportable code in pgbench.\n\nThe buildfarm points out that UINT64_FORMAT might not work with sscanf;\nit's calibrated for our printf implementation, which might not agree\nwith the platform-supplied sscanf. Fall back to just accepting an\nunsigned long, which is already more than the documentation promises.\n\nOversight in e6c3ba7fb; back-patch to v11, as that was.\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/1a75c1d0c5d967ea2adcd7129092687cded4e7bf\n\nModified Files\n--------------\nsrc/bin/pgbench/pgbench.c | 7 +++++--\n1 file changed, 5 insertions(+), 2 deletions(-)\n\n",
"msg_date": "Wed, 17 Apr 2019 21:30:36 +0000",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "pgsql: Fix unportable code in pgbench."
},
{
"msg_contents": "\nHello Tom,\n\n> Fix unportable code in pgbench.\n\nSorry for this unforseen issue... portability is a pain:-(\n\n> The buildfarm points out that UINT64_FORMAT might not work with sscanf;\n> it's calibrated for our printf implementation, which might not agree\n> with the platform-supplied sscanf. Fall back to just accepting an\n> unsigned long, which is already more than the documentation promises.\n\nYep, but ISTM that it is down to 32 bits, whereas the PRNG seed expects 48 \nbits a few lines below:\n\n base_random_sequence.xseed[0] = iseed & 0xFFFF;\n base_random_sequence.xseed[1] = (iseed >> 16) & 0xFFFF;\n base_random_sequence.xseed[2] = (iseed >> 32) & 0xFFFF;\n\nSo the third short is now always 0. Hmmm. I'll propose another option over \nthe week-end.\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 17 Apr 2019 23:46:09 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix unportable code in pgbench."
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> Fix unportable code in pgbench.\n\n> Sorry for this unforseen issue... portability is a pain:-(\n\nI think it's my fault, actually --- I don't remember how much of\nthat patch was yours.\n\n> Yep, but ISTM that it is down to 32 bits,\n\nOnly on 32-bit-long machines, which are a dwindling minority (except\nfor Windows, which I don't really care about).\n\n> So the third short is now always 0. Hmmm. I'll propose another option over \n> the week-end.\n\nI suppose we could put pg_strtouint64 somewhere where pgbench can use it,\nbut TBH I don't think it's worth the trouble. The set of people using\nthe --random-seed=int option at all is darn near empty, I suspect,\nand the documentation only says you can write an int there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Apr 2019 17:57:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Fix unportable code in pgbench."
}
] |
[
{
"msg_contents": "What on God's green earth are these functions doing in\nsrc/include/catalog/index.h?\n\nThey don't have any obvious connection to indexes, let alone\ncatalog operations on indexes, which is what that file is for.\n\nThey weren't there before 2a96909a4, either.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Apr 2019 18:57:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "itemptr_encode/itemptr_decode"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-17 18:57:00 -0400, Tom Lane wrote:\n> What on God's green earth are these functions doing in\n> src/include/catalog/index.h?\n\n> They don't have any obvious connection to indexes, let alone\n> catalog operations on indexes, which is what that file is for.\n\nWell, they were previously declared & defined in\nsrc/backend/catalog/index.c - that's where the location is coming from\n(and where they still are defined). And they're currently only used to\nimplement the index validation scans, which requires the validation scan\nto decode item pointers stored in the tuplesort presented to it.\n\nI'm happy to move them elsewhere, but I'm not sure there's really a good\nlocation. I guess we could move them to itemptr.h - but they're not\nreally something particularly generally usable.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Apr 2019 16:14:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: itemptr_encode/itemptr_decode"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-17 18:57:00 -0400, Tom Lane wrote:\n>> What on God's green earth are these functions doing in\n>> src/include/catalog/index.h?\n\n> I'm happy to move them elsewhere, but I'm not sure there's really a good\n> location. I guess we could move them to itemptr.h - but they're not\n> really something particularly generally usable.\n\nI don't have a better idea than that either, but I sure feel that they\ndon't belong in index.h. Is it worth inventing a whole new header\nfor these? If we stick 'em in itemptr.h, they'll be getting compiled\nby a whole lot of files :-(\n\nAs for the general usability argument, I'm not sure --- as we start\nto look at alternate AMs, we might have more use for them. When I first\nsaw the functions, I thought maybe they were part of sort acceleration\nfor TIDs; evidently they're not (yet), but that seems like another\npossible use-case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Apr 2019 19:22:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: itemptr_encode/itemptr_decode"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-17 19:22:08 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-04-17 18:57:00 -0400, Tom Lane wrote:\n> >> What on God's green earth are these functions doing in\n> >> src/include/catalog/index.h?\n> \n> > I'm happy to move them elsewhere, but I'm not sure there's really a good\n> > location. I guess we could move them to itemptr.h - but they're not\n> > really something particularly generally usable.\n> \n> I don't have a better idea than that either, but I sure feel that they\n> don't belong in index.h. Is it worth inventing a whole new header\n> for these? If we stick 'em in itemptr.h, they'll be getting compiled\n> by a whole lot of files :-(\n\nitemptr_utils.h? I don't have an opinion on whether we ought to move\nthem in v12 or v13. Don't think there's a beta1 pressure.\n\n\n> As for the general usability argument, I'm not sure --- as we start\n> to look at alternate AMs, we might have more use for them. When I first\n> saw the functions, I thought maybe they were part of sort acceleration\n> for TIDs; evidently they're not (yet), but that seems like another\n> possible use-case.\n\nWe ought to use them in a few more places. E.g. nodeTidscan.c's sorting\nwould likely be faster if we used something of that kind. And, what'd\nprobably substantially beneficial, for the junk ctid columns - where\nthey're currently IIRC transported as a by-ref datum, even on 64bit\nmachines.\n\nMildly related: Is there a reason we don't optimize fixed-length !byval\ndatum copies for typlen < sizeof(Datum) to something better than a full\npalloc? I guess it'd be somewhat of a breaking change?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 May 2019 18:31:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: itemptr_encode/itemptr_decode"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 4:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> As for the general usability argument, I'm not sure --- as we start\n> to look at alternate AMs, we might have more use for them. When I first\n> saw the functions, I thought maybe they were part of sort acceleration\n> for TIDs; evidently they're not (yet), but that seems like another\n> possible use-case.\n\nThere is also your join-or-to-union patch, which I thought might make\nuse of this for its TID sort.\n\nMaybe it would make sense to put this infrastructure in tuplesort.c,\nbut probably not. TIDs are 6 bytes, which as you once pointed out, is\nnot something that we have appropriate infrastructure for (there isn't\na DatumGet*() macro, and so on). The encoding scheme (which you\noriginally suggested as an alternative to my first idea, sort support\nfor item pointers) works particularly well as these things go -- it\nwas about 3x faster when everything fit in memory, and faster still\nwith external sorts. It allowed us to resolve comparisons at the\nSortTuple level within tuplesort.c, but also allowed tuplesort.c to\nuse the pass-by-value datum qsort specialization. It even allowed\nsorted array entries (TIDs/int8s) to be fetched without extra pointer\nchasing -- that can be a big bottleneck these days.\n\nThe encoding scheme is a bit ugly, but I suspect it would be simpler\nto stick to the same approach elsewhere than to try and hide all the\ndetails within tuplesort.c, or something like that. Unless we're\nwilling to treat TIDs as a whole new type of tuple with its own set of\nspecialized functions in tuplesort.c, which has problems of its own,\nthen it's kind of awkward to do it some other way.\n\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 18 May 2019 13:21:38 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: itemptr_encode/itemptr_decode"
}
] |
[
{
"msg_contents": "Hi all,\n\nFujii-san has sent me a report offline about REINDEX. And since 9.6,\ntrying to REINDEX directly an index of pg_class fails lamentably on an\nassertion failure (mbsync failure if bypassing the assert):\n#2 0x000055a9c5bfcc2c in ExceptionalCondition\n (conditionName=0x55a9c5ca9750\n \"!(!ReindexIsProcessingIndex(((indexRelation)->rd_id)))\",\n errorType=0x55a9c5ca969f \"FailedAssertion\",\n fileName=0x55a9c5ca9680 \"indexam.c\", lineNumber=204) at assert.c:54\n#3 0x000055a9c5686dcd in index_insert (indexRelation=0x7f58402fe5d8,\n values=0x7fff450c3270, isnull=0x7fff450c3250,\n heap_t_ctid=0x55a9c7e2c05c,\n heapRelation=0x7f584031eb68, checkUnique=UNIQUE_CHECK_YES,\n indexInfo=0x55a9c7e30520) at indexam.c:204\n#4 0x000055a9c5714a12 in CatalogIndexInsert\n (indstate=0x55a9c7e30408, heapTuple=0x55a9c7e2c058) at indexing.c:140\n#5 0x000055a9c5714b1d in CatalogTupleUpdate (heapRel=0x7f584031eb68,\n otid=0x55a9c7e2c05c, tup=0x55a9c7e2c058) at indexing.c:215\n#6 0x000055a9c5beda8a in RelationSetNewRelfilenode\n (relation=0x7f58402fe5d8, persistence=112 'p') at relcache.c:3531\n\nDoing a REINDEX TABLE directly on pg_class proves to work correctly,\nand CONCURRENTLY is not supported for catalog tables.\n\nBisecting my way through it, the first commit causing the breakage is\nthat:\ncommit: 01e386a325549b7755739f31308de4be8eea110d\nauthor: Tom Lane <tgl@sss.pgh.pa.us>\ndate: Wed, 23 Dec 2015 20:09:01 -0500\nAvoid VACUUM FULL altogether in initdb.\n\nCommit ed7b3b3811c5836a purported to remove initdb's use of VACUUM\nFULL,\nas had been agreed to in a pghackers discussion back in Dec 2014.\nBut it missed this one ...\n\nThe reason why this does not work is that CatalogIndexInsert() tries\nto do an index_insert directly on the index worked on. And the reason\nwhy this works at table level is that we have tweaks in\nreindex_relation() to enforce the list of valid indexes in the\nrelation cache with RelationSetIndexList(). It seems to me that the\nlogic in reindex_index() is wrong from the start, and that all the\nindex list handling done in reindex_relation() should just be in\nreindex_index() so as REINDEX INDEX gets the right call.\n\nThoughts?\n--\nMichael",
"msg_date": "Thu, 18 Apr 2019 10:14:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "REINDEX INDEX results in a crash for an index of pg_class since 9.6"
},
{
"msg_contents": "On Thu, Apr 18, 2019 at 10:14:30AM +0900, Michael Paquier wrote:\n> Doing a REINDEX TABLE directly on pg_class proves to work correctly,\n> and CONCURRENTLY is not supported for catalog tables.\n> \n> Bisecting my way through it, the first commit causing the breakage is\n> that:\n> commit: 01e386a325549b7755739f31308de4be8eea110d\n> author: Tom Lane <tgl@sss.pgh.pa.us>\n> date: Wed, 23 Dec 2015 20:09:01 -0500\n> Avoid VACUUM FULL altogether in initdb.\n\nThis brings down to a first, simple, solution which is to issue a\nVACUUM FULL on pg_class at the end of make_template0() in initdb.c to\navoid any subsequent problems if trying to issue a REINDEX on anything\nrelated to pg_class, and it won't fix any existing deployments:\n--- a/src/bin/initdb/initdb.c\n+++ b/src/bin/initdb/initdb.c\n@@ -2042,6 +2042,11 @@ make_template0(FILE *cmdfd)\n \t\t * Finally vacuum to clean up dead rows in pg_database\n \t\t */\n \t\t\"VACUUM pg_database;\\n\\n\",\n+\n+\t\t/*\n+\t\t * And rebuild pg_class.\n+\t\t */\n+\t\t\"VACUUM FULL pg_class;\\n\\n\",\n \t\tNULL\n \t};\nNow...\n\n> The reason why this does not work is that CatalogIndexInsert() tries\n> to do an index_insert directly on the index worked on. And the reason\n> why this works at table level is that we have tweaks in\n> reindex_relation() to enforce the list of valid indexes in the\n> relation cache with RelationSetIndexList(). It seems to me that the\n> logic in reindex_index() is wrong from the start, and that all the\n> index list handling done in reindex_relation() should just be in\n> reindex_index() so as REINDEX INDEX gets the right call.\n\nI got to wonder if this dance with the relation cache is actually\nnecessary, because we could directly tell CatalogIndexInsert() to not\ninsert a tuple into an index which is being rebuilt, and the index\nrebuild would cause an entry to be added to pg_class anyway thanks to\nRelationSetNewRelfilenode(). This can obviously only happen for\npg_class indexes.\n\nAny thoughts about both approaches?\n--\nMichael",
"msg_date": "Tue, 23 Apr 2019 13:56:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Fujii-san has sent me a report offline about REINDEX. And since 9.6,\n> trying to REINDEX directly an index of pg_class fails lamentably on an\n> assertion failure (mbsync failure if bypassing the assert):\n\nSo ... I can't reproduce this on HEAD. Nor the back branches.\n\nregression=# \\d pg_class\n...\nIndexes:\n \"pg_class_oid_index\" UNIQUE, btree (oid)\n \"pg_class_relname_nsp_index\" UNIQUE, btree (relname, relnamespace)\n \"pg_class_tblspc_relfilenode_index\" btree (reltablespace, relfilenode)\n\nregression=# reindex index pg_class_relname_nsp_index;\nREINDEX\nregression=# reindex index pg_class_oid_index;\nREINDEX\nregression=# reindex index pg_class_tblspc_relfilenode_index;\nREINDEX\nregression=# reindex table pg_class; \nREINDEX\nregression=# reindex index pg_class_tblspc_relfilenode_index;\nREINDEX\n\nIs there some precondition you're not mentioning?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Apr 2019 16:47:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "On 2019-Apr-18, Michael Paquier wrote:\n\n> Fujii-san has sent me a report offline about REINDEX. And since 9.6,\n> trying to REINDEX directly an index of pg_class fails lamentably on an\n> assertion failure (mbsync failure if bypassing the assert):\n\nHmm, yeah, I ran into this crash too, more than a year ago, but I don't\nrecall what came out of the investigation, and my search-fu is failing\nme. I'll have a look at my previous laptop's drive ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 23 Apr 2019 17:17:52 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "On Tue, Apr 23, 2019 at 04:47:19PM -0400, Tom Lane wrote:\n> regression=# reindex index pg_class_relname_nsp_index;\n> REINDEX\n> regression=# reindex index pg_class_oid_index;\n> REINDEX\n> regression=# reindex index pg_class_tblspc_relfilenode_index;\n> REINDEX\n> regression=# reindex table pg_class; \n> REINDEX\n> regression=# reindex index pg_class_tblspc_relfilenode_index;\n> REINDEX\n> \n> Is there some precondition you're not mentioning?\n\nHm. In my own init scripts, I create a new database just after\nstarting the instance. That seems to help in reproducing the\nfailure, because each time I create a new database, connect to it and\nreindex then I can see the crash. If I do a reindex of pg_class\nfirst, I don't see a crash of some rebuilds already happened, but if I\ndo directly a reindex of one of the indexes first, then the failure is\nplain. If I also add some regression tests, say in create_index.sql\nto stress a reindex of pg_class and its indexes, the crash also shows\nup. If I apply my previous patch to make CatalogIndexInsert() not do\nan insert on a catalog index being rebuilt, then things turn to be\nfine.\n--\nMichael",
"msg_date": "Wed, 24 Apr 2019 08:35:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Apr 23, 2019 at 04:47:19PM -0400, Tom Lane wrote:\n>> Is there some precondition you're not mentioning?\n\n> Hm. In my own init scripts, I create a new database just after\n> starting the instance.\n\nAh, there we go:\n\nregression=# create database d1;\nCREATE DATABASE\nregression=# \\c d1\nYou are now connected to database \"d1\" as user \"postgres\".\nd1=# reindex index pg_class_relname_nsp_index;\npsql: server closed the connection unexpectedly\n\nlog shows\n\nTRAP: FailedAssertion(\"!(!ReindexIsProcessingIndex(((indexRelation)->rd_id)))\", File: \"indexam.c\", Line: 204)\n\n#2 0x00000000008c74ed in ExceptionalCondition (\n conditionName=<value optimized out>, errorType=<value optimized out>, \n fileName=<value optimized out>, lineNumber=<value optimized out>)\n at assert.c:54\n#3 0x00000000004e4f8c in index_insert (indexRelation=0x7f80f849a5d8, \n values=0x7ffc4f65b030, isnull=0x7ffc4f65b130, heap_t_ctid=0x2842c0c, \n heapRelation=0x7f80f84bab68, checkUnique=UNIQUE_CHECK_YES, \n indexInfo=0x2843230) at indexam.c:204\n#4 0x000000000054c290 in CatalogIndexInsert (indstate=<value optimized out>, \n heapTuple=0x2842c08) at indexing.c:140\n#5 0x000000000054c472 in CatalogTupleUpdate (heapRel=0x7f80f84bab68, \n otid=0x2842c0c, tup=0x2842c08) at indexing.c:215\n#6 0x00000000008bca77 in RelationSetNewRelfilenode (relation=0x7f80f849a5d8, \n persistence=112 'p') at relcache.c:3531\n#7 0x0000000000548b3a in reindex_index (indexId=2663, \n skip_constraint_checks=false, persistence=112 'p', options=0)\n at index.c:3339\n#8 0x00000000005ed099 in ReindexIndex (indexRelation=<value optimized out>, \n options=0, concurrent=false) at indexcmds.c:2304\n#9 0x00000000007b5925 in standard_ProcessUtility (pstmt=0x281fd70, \n\n> If I apply my previous patch to make CatalogIndexInsert() not do\n> an insert on a catalog index being rebuilt, then things turn to be\n> fine.\n\nThat seems like pretty much of a hack 
:-(, in particular I'm not\nconvinced that we'd not end up with a missing index entry afterwards.\nMaybe it's the only way, but I think first we need to trace down\nexactly why this broke. I remember we had some special-case code\nfor reindexing pg_class, maybe something broke that?\n\nIt also seems quite odd that it doesn't fail every time; surely it's\nnot conditional whether we'll try to insert a new pg_class tuple or not?\nWe need to understand that, too. Maybe the old code never really\nworked in all cases? It seems clear that the commit you bisected to\njust allowed a pre-existing misbehavior to become exposed (more easily).\n\nNo time to look closer right now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Apr 2019 19:54:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "I wrote:\n> It also seems quite odd that it doesn't fail every time; surely it's\n> not conditional whether we'll try to insert a new pg_class tuple or not?\n> We need to understand that, too.\n\nOh! One gets you ten it \"works\" as long as the pg_class update is a\nHOT update, so that we don't actually end up touching the indexes.\nThis explains why the crash is less likely to happen in a database\nwhere one's done some work (and, probably, created some dead space in\npg_class). On the other hand, it doesn't quite fit the observation\nthat a VACUUM FULL masked the problem ... wouldn't that have ended up\nwith densely packed pg_class? Maybe not, if it rebuilt everything\nelse after pg_class...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Apr 2019 20:03:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "On Tue, Apr 23, 2019 at 07:54:52PM -0400, Tom Lane wrote:\n> That seems like pretty much of a hack :-(, in particular I'm not\n> convinced that we'd not end up with a missing index entry afterwards.\n> Maybe it's the only way, but I think first we need to trace down\n> exactly why this broke. I remember we had some special-case code\n> for reindexing pg_class, maybe something broke that?\n\nYes, reindex_relation() has some infra to enforce the list of indexes\nin the cache for pg_class which has been introduced by a56a016 as far\nas it goes.\n\n> It also seems quite odd that it doesn't fail every time; surely it's\n> not conditional whether we'll try to insert a new pg_class tuple or not?\n> We need to understand that, too. Maybe the old code never really\n> worked in all cases? It seems clear that the commit you bisected to\n> just allowed a pre-existing misbehavior to become exposed (more easily).\n> \n> No time to look closer right now.\n\nYeah, there is a fishy smell underneath which comes from 9.6. When\ntesting with 9.5 or older a database creation does not create any\ncrash on a subsequent reindex. Not sure I'll have the time to look at\nthat more today, perhaps tomorrow depending on the odds.\n--\nMichael",
"msg_date": "Wed, 24 Apr 2019 09:13:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 7:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Tue, Apr 23, 2019 at 04:47:19PM -0400, Tom Lane wrote:\n> >> Is there some precondition you're not mentioning?\n>\n> > Hm. In my own init scripts, I create a new database just after\n> > starting the instance.\n>\n> Ah, there we go:\n>\n> regression=# create database d1;\n> CREATE DATABASE\n> regression=# \\c d1\n> You are now connected to database \"d1\" as user \"postgres\".\n> d1=# reindex index pg_class_relname_nsp_index;\n> psql: server closed the connection unexpectedly\n>\n> log shows\n>\n> TRAP:\n> FailedAssertion(\"!(!ReindexIsProcessingIndex(((indexRelation)->rd_id)))\",\n> File: \"indexam.c\", Line: 204)\n>\n\nCould reproduce TRAP:\nFailedAssertion(\"!(!ReindexIsProcessingIndex(((indexRelation)->rd_id)))\",\nFile: \"indexam.c\", Line: 204) in postgres log file.\n\n> #2 0x00000000008c74ed in ExceptionalCondition (\n> conditionName=<value optimized out>, errorType=<value optimized out>,\n> fileName=<value optimized out>, lineNumber=<value optimized out>)\n> at assert.c:54\n> #3 0x00000000004e4f8c in index_insert (indexRelation=0x7f80f849a5d8,\n> values=0x7ffc4f65b030, isnull=0x7ffc4f65b130, heap_t_ctid=0x2842c0c,\n> heapRelation=0x7f80f84bab68, checkUnique=UNIQUE_CHECK_YES,\n> indexInfo=0x2843230) at indexam.c:204\n> #4 0x000000000054c290 in CatalogIndexInsert (indstate=<value optimized\n> out>,\n> heapTuple=0x2842c08) at indexing.c:140\n> #5 0x000000000054c472 in CatalogTupleUpdate (heapRel=0x7f80f84bab68,\n> otid=0x2842c0c, tup=0x2842c08) at indexing.c:215\n> #6 0x00000000008bca77 in RelationSetNewRelfilenode\n> (relation=0x7f80f849a5d8,\n> persistence=112 'p') at relcache.c:3531\n> #7 0x0000000000548b3a in reindex_index (indexId=2663,\n> skip_constraint_checks=false, persistence=112 'p', options=0)\n> at index.c:3339\n> #8 0x00000000005ed099 in ReindexIndex (indexRelation=<value optimized\n> out>,\n> options=0, 
concurrent=false) at indexcmds.c:2304\n> #9 0x00000000007b5925 in standard_ProcessUtility (pstmt=0x281fd70,\n>\nBut could only see these stack in lldb -c corefile after type bt. Is there\na way to also print these stack in postgres log file , and how?\n\nOn Wed, Apr 24, 2019 at 7:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Apr 23, 2019 at 04:47:19PM -0400, Tom Lane wrote:\n>> Is there some precondition you're not mentioning?\n\n> Hm. In my own init scripts, I create a new database just after\n> starting the instance.\n\nAh, there we go:\n\nregression=# create database d1;\nCREATE DATABASE\nregression=# \\c d1\nYou are now connected to database \"d1\" as user \"postgres\".\nd1=# reindex index pg_class_relname_nsp_index;\npsql: server closed the connection unexpectedly\n\nlog shows\n\nTRAP: FailedAssertion(\"!(!ReindexIsProcessingIndex(((indexRelation)->rd_id)))\", File: \"indexam.c\", Line: 204)Could reproduce TRAP: FailedAssertion(\"!(!ReindexIsProcessingIndex(((indexRelation)->rd_id)))\", File: \"indexam.c\", Line: 204) in postgres log file.\n#2 0x00000000008c74ed in ExceptionalCondition (\n conditionName=<value optimized out>, errorType=<value optimized out>, \n fileName=<value optimized out>, lineNumber=<value optimized out>)\n at assert.c:54\n#3 0x00000000004e4f8c in index_insert (indexRelation=0x7f80f849a5d8, \n values=0x7ffc4f65b030, isnull=0x7ffc4f65b130, heap_t_ctid=0x2842c0c, \n heapRelation=0x7f80f84bab68, checkUnique=UNIQUE_CHECK_YES, \n indexInfo=0x2843230) at indexam.c:204\n#4 0x000000000054c290 in CatalogIndexInsert (indstate=<value optimized out>, \n heapTuple=0x2842c08) at indexing.c:140\n#5 0x000000000054c472 in CatalogTupleUpdate (heapRel=0x7f80f84bab68, \n otid=0x2842c0c, tup=0x2842c08) at indexing.c:215\n#6 0x00000000008bca77 in RelationSetNewRelfilenode (relation=0x7f80f849a5d8, \n persistence=112 'p') at relcache.c:3531\n#7 0x0000000000548b3a in reindex_index (indexId=2663, \n 
skip_constraint_checks=false, persistence=112 'p', options=0)\n at index.c:3339\n#8 0x00000000005ed099 in ReindexIndex (indexRelation=<value optimized out>, \n options=0, concurrent=false) at indexcmds.c:2304\n#9 0x00000000007b5925 in standard_ProcessUtility (pstmt=0x281fd70, But could only see these stack in lldb -c corefile after type bt. Is there a way to also print these stack in postgres log file , and how?",
"msg_date": "Thu, 25 Apr 2019 18:37:04 +0800",
"msg_from": "Shaoqi Bai <sbai@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "On Tue, Apr 23, 2019 at 08:03:37PM -0400, Tom Lane wrote:\n> Oh! One gets you ten it \"works\" as long as the pg_class update is a\n> HOT update, so that we don't actually end up touching the indexes.\n> This explains why the crash is less likely to happen in a database\n> where one's done some work (and, probably, created some dead space in\n> pg_class). On the other hand, it doesn't quite fit the observation\n> that a VACUUM FULL masked the problem ... wouldn't that have ended up\n> with densely packed pg_class? Maybe not, if it rebuilt everything\n> else after pg_class...\n\nI have been able to spend a bit more time testing and looking at the\nroot of the problem, and I have found two things:\n1) The problem is reproducible with REL9_5_STABLE.\n2) Bisecting between the merge base points of REL9_4_STABLE/master and\nREL9_5_STABLE/master, I am being pointed to the introduction of\nreplication origins:\ncommit: 5aa2350426c4fdb3d04568b65aadac397012bbcb\nauthor: Andres Freund <andres@anarazel.de>\ndate: Wed, 29 Apr 2015 19:30:53 +0200\nIntroduce replication progress tracking infrastructure.\n\nIn order to see the problem, also one needs to patch initdb.c so as\nthe final VACUUM FULL on pg_database is replaced by VACUUM as on\n9.6~. The root of the problem is actually surprising, but manually\ntesting on 5aa2350 commit and 5aa2350~1 the difference shows up as the\nissue is easily reproducible here.\n--\nMichael",
"msg_date": "Thu, 25 Apr 2019 22:09:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "On 2019-Apr-25, Michael Paquier wrote:\n\n> 2) Bisecting between the merge base points of REL9_4_STABLE/master and\n> REL9_5_STABLE/master, I am being pointed to the introduction of\n> replication origins:\n> commit: 5aa2350426c4fdb3d04568b65aadac397012bbcb\n> author: Andres Freund <andres@anarazel.de>\n> date: Wed, 29 Apr 2015 19:30:53 +0200\n> Introduce replication progress tracking infrastructure.\n> \n> In order to see the problem, also one needs to patch initdb.c so as\n> the final VACUUM FULL on pg_database is replaced by VACUUM as on\n> 9.6~. The root of the problem is actually surprising, but manually\n> testing on 5aa2350 commit and 5aa2350~1 the difference shows up as the\n> issue is easily reproducible here.\n\nHmm ... I suspect the problem is even older, and that this commit made\nit possible to see as a side effect of changing the catalog contents\n(since it creates one more view and does a REVOKE, which becomes an\nupdate on pg_class.)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 25 Apr 2019 11:05:54 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Apr 23, 2019 at 08:03:37PM -0400, Tom Lane wrote:\n>> Oh! One gets you ten it \"works\" as long as the pg_class update is a\n>> HOT update, so that we don't actually end up touching the indexes.\n\n> I have been able to spend a bit more time testing and looking at the\n> root of the problem, and I have found two things:\n> 1) The problem is reproducible with REL9_5_STABLE.\n\nActually, as far as I can tell, this has been broken since day 1.\nI can reproduce the assertion failure back to 9.1, and I think the\nonly reason it doesn't happen in older branches is that they lack\nthe ReindexIsProcessingIndex() check in RELATION_CHECKS :-(.\n\nWhat you have to do to get it to crash is to ensure that\nRelationSetNewRelfilenode's update of pg_class will be a non-HOT\nupdate. You can try to set that up with \"vacuum full pg_class\"\nbut it turns out that that tends to leave the pg_class entries\nfor pg_class's indexes in the last page of the relation, which\nis usually not totally full, so that a HOT update works and the\nbug doesn't manifest.\n\nA recipe like the following breaks every branch, by ensuring that\nthe page containing pg_class_relname_nsp_index's entry is full:\n\nregression=# vacuum full pg_class;\nVACUUM\nregression=# do $$ begin \nfor i in 100 .. 
150 loop\nexecute 'create table dummy'||i||'(f1 int)';\nend loop;\nend $$;\nDO\nregression=# reindex index pg_class_relname_nsp_index;\npsql: server closed the connection unexpectedly\n\n\nAs for an actual fix, I tried just moving reindex_index's\nSetReindexProcessing call from where it is down to after\nRelationSetNewRelfilenode, but that isn't sufficient:\n\nregression=# reindex index pg_class_relname_nsp_index;\npsql: ERROR: could not read block 3 in file \"base/16384/41119\": read only 0 of 8192 bytes\n\n#0 errfinish (dummy=0) at elog.c:411\n#1 0x00000000007a9453 in mdread (reln=<value optimized out>, \n forknum=<value optimized out>, blocknum=<value optimized out>, \n buffer=0x7f608e6a7d00 \"\") at md.c:633\n#2 0x000000000077a9af in ReadBuffer_common (smgr=<value optimized out>, \n relpersistence=112 'p', forkNum=MAIN_FORKNUM, blockNum=3, mode=RBM_NORMAL, \n strategy=0x0, hit=0x7fff6a7452ef) at bufmgr.c:896\n#3 0x000000000077b67e in ReadBufferExtended (reln=0x7f608db5d670, \n forkNum=MAIN_FORKNUM, blockNum=3, mode=<value optimized out>, \n strategy=<value optimized out>) at bufmgr.c:664\n#4 0x00000000004ea95a in _bt_getbuf (rel=0x7f608db5d670, \n blkno=<value optimized out>, access=1) at nbtpage.c:805\n#5 0x00000000004eb67a in _bt_getroot (rel=0x7f608db5d670, access=2)\n at nbtpage.c:323\n#6 0x00000000004f2237 in _bt_search (rel=0x7f608db5d670, key=0x1d5a0c0, \n bufP=0x7fff6a7456a8, access=2, snapshot=0x0) at nbtsearch.c:99\n#7 0x00000000004e8caf in _bt_doinsert (rel=0x7f608db5d670, itup=0x1c85e58, \n checkUnique=UNIQUE_CHECK_YES, heapRel=0x1ccb8d0) at nbtinsert.c:219\n#8 0x00000000004efc17 in btinsert (rel=0x7f608db5d670, \n values=<value optimized out>, isnull=<value optimized out>, \n ht_ctid=0x1d12dc4, heapRel=0x1ccb8d0, checkUnique=UNIQUE_CHECK_YES, \n indexInfo=0x1c857f8) at nbtree.c:205\n#9 0x000000000054c320 in CatalogIndexInsert (indstate=<value optimized out>,\n heapTuple=0x1d12dc0) at indexing.c:140\n#10 0x000000000054c502 in CatalogTupleUpdate 
(heapRel=0x1ccb8d0, \n otid=0x1d12dc4, tup=0x1d12dc0) at indexing.c:215\n#11 0x00000000008bcba7 in RelationSetNewRelfilenode (relation=0x7f608db5d670, \n persistence=112 'p') at relcache.c:3531\n#12 0x0000000000548b16 in reindex_index (indexId=2663, \n skip_constraint_checks=false, persistence=112 'p', options=0)\n at index.c:3336\n#13 0x00000000005ed129 in ReindexIndex (indexRelation=<value optimized out>, \n options=0, concurrent=false) at indexcmds.c:2304\n#14 0x00000000007b5a45 in standard_ProcessUtility (pstmt=0x1c66d70, \n queryString=0x1c65f68 \"reindex index pg_class_relname_nsp_index;\", \n context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, \n dest=0x1c66e68, completionTag=0x7fff6a745e40 \"\") at utility.c:787\n\nThe problem here is that RelationSetNewRelfilenode is aggressively\nchanging the index's relcache entry before it's written out the\nupdated tuple, so that the tuple update tries to make an index\nentry in the new storage which isn't filled yet. I think we can\nfix it by *not* doing that, but leaving it to the relcache inval\nduring the CommandCounterIncrement call to update the relcache\nentry. However, it looks like that will take some API refactoring,\nbecause the storage-creation functions expect to get the new\nrelfilenode out of the relcache entry, and they'll have to be\nchanged to not do it that way.\n\nI'll work on a patch ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Apr 2019 11:32:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "I wrote:\n> The problem here is that RelationSetNewRelfilenode is aggressively\n> changing the index's relcache entry before it's written out the\n> updated tuple, so that the tuple update tries to make an index\n> entry in the new storage which isn't filled yet. I think we can\n> fix it by *not* doing that, but leaving it to the relcache inval\n> during the CommandCounterIncrement call to update the relcache\n> entry. However, it looks like that will take some API refactoring,\n> because the storage-creation functions expect to get the new\n> relfilenode out of the relcache entry, and they'll have to be\n> changed to not do it that way.\n\nSo looking at that, it seems like the table_relation_set_new_filenode\nAPI is pretty darn ill-designed. It assumes that it's passed an\nalready-entirely-valid relcache entry, but it also supposes that\nit can pass back information that needs to go into the relation's\npg_class entry. One or the other side of that has to give, unless\nyou want to doom everything to updating pg_class twice.\n\nI'm not really sure what's the point of giving the tableam control\nof relfrozenxid+relminmxid at all, and I notice that index_create\nfor one is just Asserting that constant values are returned.\n\nI think we need to do one or possibly both of these things:\n\n* split table_relation_set_new_filenode into two functions,\none that doesn't take a relcache entry at all and returns\nappropriate relfrozenxid+relminmxid for a new rel, and then\none that just creates storage without dealing with the xid\nvalues;\n\n* change table_relation_set_new_filenode so that it is told\nthe relfilenode etc to use without assuming that it has a\nvalid relcache entry to work with.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Apr 2019 12:29:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-25 12:29:16 -0400, Tom Lane wrote:\n> I wrote:\n> > The problem here is that RelationSetNewRelfilenode is aggressively\n> > changing the index's relcache entry before it's written out the\n> > updated tuple, so that the tuple update tries to make an index\n> > entry in the new storage which isn't filled yet. I think we can\n> > fix it by *not* doing that, but leaving it to the relcache inval\n> > during the CommandCounterIncrement call to update the relcache\n> > entry. However, it looks like that will take some API refactoring,\n> > because the storage-creation functions expect to get the new\n> > relfilenode out of the relcache entry, and they'll have to be\n> > changed to not do it that way.\n> \n> So looking at that, it seems like the table_relation_set_new_filenode\n> API is pretty darn ill-designed. It assumes that it's passed an\n> already-entirely-valid relcache entry, but it also supposes that\n> it can pass back information that needs to go into the relation's\n> pg_class entry. One or the other side of that has to give, unless\n> you want to doom everything to updating pg_class twice.\n\nI'm not super happy about it either - but I think that's somewhat of an\noutgrowth of how this worked before. 
I mean there's two differences:\n\n1) Previously the RelationCreateStorage() was called unconditionally,\nnow it's\n\n\t\tcase RELKIND_INDEX:\n\t\tcase RELKIND_SEQUENCE:\n\t\t\tRelationCreateStorage(relation->rd_node, persistence);\n\t\t\tRelationOpenSmgr(relation);\n\t\t\tbreak;\n\n\t\tcase RELKIND_RELATION:\n\t\tcase RELKIND_TOASTVALUE:\n\t\tcase RELKIND_MATVIEW:\n\t\t\ttable_relation_set_new_filenode(relation, persistence,\n\t\t\t\t\t\t\t\t\t\t\t&freezeXid, &minmulti);\n\t\t\tbreak;\n\t}\n\nThat seems pretty obviously necessary.\n\n\n2) Previously AddNewRelationTuple() relation tuple determined the\ninitial horizon for table like things:\n\t/* Initialize relfrozenxid and relminmxid */\n\tif (relkind == RELKIND_RELATION ||\n\t\trelkind == RELKIND_MATVIEW ||\n\t\trelkind == RELKIND_TOASTVALUE)\n\t{\n\t\t/*\n\t\t * Initialize to the minimum XID that could put tuples in the table.\n\t\t * We know that no xacts older than RecentXmin are still running, so\n\t\t * that will do.\n\t\t */\n\t\tnew_rel_reltup->relfrozenxid = RecentXmin;\n\n\t\t/*\n\t\t * Similarly, initialize the minimum Multixact to the first value that\n\t\t * could possibly be stored in tuples in the table. Running\n\t\t * transactions could reuse values from their local cache, so we are\n\t\t * careful to consider all currently running multis.\n\t\t *\n\t\t * XXX this could be refined further, but is it worth the hassle?\n\t\t */\n\t\tnew_rel_reltup->relminmxid = GetOldestMultiXactId();\n\t}\n\nand inserted that. 
Now it's determined previously below heap_create(),\nand passed as an argument to AddNewRelationTuple().\n\nand similarly the caller to RelationSetNewRelfilenode() determined the\nnew horizons, but they also just were written into the relcache entry\nand then updated:\n\n11:\n\tclassform->relfrozenxid = freezeXid;\n\tclassform->relminmxid = minmulti;\n\tclassform->relpersistence = persistence;\n\n\tCatalogTupleUpdate(pg_class, &tuple->t_self, tuple);\nmaster:\n\tclassform->relfrozenxid = freezeXid;\n\tclassform->relminmxid = minmulti;\n\tclassform->relpersistence = persistence;\n\n\tCatalogTupleUpdate(pg_class, &tuple->t_self, tuple);\n\n\nI'm not quite sure why the current situation is any worse?\n\n\nPerhaps that's because I don't quite understand what you mean with \"It\nassumes that it's passed an already-entirely-valid relcache entry\". What\ndo you mean by that / where does it assume that? I guess we could warn\na bit more about the underlying tuple not necessarily existing yet in\nthe callback's docs, but other than that? Previously heap.c also was\ndealing with a relcache entry without backing pg_class entry but with\nexisting storage, no?\n\n\n> I'm not really sure what's the point of giving the tableam control\n> of relfrozenxid+relminmxid at all\n\nWell, because not every AM is going to need those. It'd make very little\nsense to e.g. set them for an undo based design like zheap's - there is\nno need to freeze ever. The need for each page to be rewritten multiple times\n(original write, hint bit sets, freezing for heap) imo is one of the\nmajor reasons people are working on alternative AMs. That seems to\nfundamentally require AMs having control over the relfrozenxid\n\n\n> and I notice that index_create for one is just Asserting that constant\n> values are returned.\n\nWell, that's not going to call into tableam at all? 
Those asserts\npreviously were in RelationSetNewRelfilenode() itself:\n\n\t/* Indexes, sequences must have Invalid frozenxid; other rels must not */\n\tAssert((relation->rd_rel->relkind == RELKIND_INDEX ||\n\t\t\trelation->rd_rel->relkind == RELKIND_SEQUENCE) ?\n\t\t freezeXid == InvalidTransactionId :\n\t\t TransactionIdIsNormal(freezeXid));\n\nbut given that e.g. not every tableam is going to have those values\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 25 Apr 2019 11:16:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-25 12:29:16 -0400, Tom Lane wrote:\n>> So looking at that, it seems like the table_relation_set_new_filenode\n>> API is pretty darn ill-designed. It assumes that it's passed an\n>> already-entirely-valid relcache entry, but it also supposes that\n>> it can pass back information that needs to go into the relation's\n>> pg_class entry. One or the other side of that has to give, unless\n>> you want to doom everything to updating pg_class twice.\n\n> I'm not super happy about it either - but I think that's somewhat of an\n> outgrowth of how this worked before.\n\nI'm not saying that the previous code was nice; I'm just saying that\nwhat is there in HEAD needs to be factored differently so that we\ncan solve this problem in a reasonable way.\n\n> Perhaps that's because I don't quite understand what you mean with \"It\n> assumes that it's passed an already-entirely-valid relcache entry\". What\n> do you mean by that / where does it assume that?\n\nWell, I can see heapam_relation_set_new_filenode touching all of these\nfields right now:\n\nrel->rd_node\nrel->rd_rel->relpersistence (and why is it looking at that rather than\nthe passed-in persistence???)\nrel->rd_rel->relkind\nwhatever RelationOpenSmgr touches\nrel->rd_smgr\n\nAs far as I can see, there is no API restriction on what parts of the\nrelcache entry it may presume are valid. It *certainly* thinks that\nrd_rel is valid, which is rather at odds with the fact that this has\nto be called before the pg_class entry exists all (for the creation\ncase) or has been updated (for the set-new-relfilenode case). Unless\nyou want to redefine things so that we create/update the pg_class\nentry, put it into rel->rd_rel, call relation_set_new_filenode, and\nthen update the pg_class entry again with what that function gives back\nfor the xmin fields.\n\nThat's obviously stupid, of course. 
But my point is that we need to\nrestrict what the function can touch or assume valid, if it's going\nto be called before the pg_class update happens. And I'd rather that\nwe did so by restricting its argument list so that it hasn't even got\naccess to stuff we don't want it assuming valid.\n\nAlso, in order to fix this problem, we cannot change the actual\nrelcache entry contents until after we've performed the tuple update.\nSo if we want the tableam code to be getting the relfilenode or\npersistence info out of the relcache entry, rather than passing\nthose as standalone parameters, the call can't happen till after\nthe tuple update and CCI call. That's why I was thinking about\nsplitting it into two functions. Get the xid values, update the\npg_class tuple, CCI, then do relation_set_new_filenode with the\nupdated relcache entry would work.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Apr 2019 14:50:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-25 14:50:09 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Perhaps that's because I don't quite understand what you mean with \"It\n> > assumes that it's passed an already-entirely-valid relcache entry\". What\n> > do you mean by that / where does it assume that?\n>\n> Well, I can see heapam_relation_set_new_filenode touching all of these\n> fields right now:\n>\n> rel->rd_node\n> rel->rd_rel->relpersistence (and why is it looking at that rather than\n> the passed-in persistence???)\n\nUgh.\n\n> rel->rd_rel->relkind\n> whatever RelationOpenSmgr touches\n> rel->rd_smgr\n\n\n> As far as I can see, there is no API restriction on what parts of the\n> relcache entry it may presume are valid. It *certainly* thinks that\n> rd_rel is valid, which is rather at odds with the fact that this has\n> to be called before the pg_class entry exists all (for the creation\n> case) or has been updated (for the set-new-relfilenode case).\n\nWell, that's just what we did before. And given the way the code is\nstructured, I am not sure I see a decent alternative that's not a\ndisproportionate amount of work. I mean, heap.c's heap_create() and\nheap_create_with_catalog() basically work off the Relation after the\nRelationBuildLocalRelation() call, a good bit before the underlying\nstorage is valid.\n\n\n> But my point is that we need to restrict what the function can touch\n> or assume valid, if it's going to be called before the pg_class update\n> happens. And I'd rather that we did so by restricting its argument\n> list so that it hasn't even got access to stuff we don't want it\n> assuming valid.\n\nOTOH, that'd mean we'd need to separately look up the amhandler, pass in\na lot more arguments etc. 
ISTM it'd be easier to just declare that only\nthe fields RelationBuildLocalRelation() sets are to be considered valid.\nSee the end of the email for a proposal.\n\n\n> Also, in order to fix this problem, we cannot change the actual\n> relcache entry contents until after we've performed the tuple update.\n> So if we want the tableam code to be getting the relfilenode or\n> persistence info out of the relcache entry, rather than passing\n> those as standalone parameters, the call can't happen till after\n> the tuple update and CCI call. That's why I was thinking about\n> splitting it into two functions. Get the xid values, update the\n> pg_class tuple, CCI, then do relation_set_new_filenode with the\n> updated relcache entry would work.\n\nI think that'd be hard for the initial relation creation. At the moment\nwe intentionally create the storage for the new relation before\ninserting the catalog contents.\n\nCurrently the only thing that table_relation_set_new_filenode() accesses\nthat already is updated is the RelFileNode. I wonder if we shouldn't\nchange the API so that table_relation_set_new_filenode() will get a\nrelcache entry *without* any updates passed in, then internally does\nGetNewRelFileNode() (if so desired by the AM), and returns the new rnode\nvia a new out parameter. So RelationSetNewRelfilenode() would basically\nwork like this:\n\n\tswitch (relation->rd_rel->relkind)\n\t{\n\t\tcase RELKIND_INDEX:\n\t\tcase RELKIND_SEQUENCE:\n newrelfilenode = GetNewRelFileNode(...);\n\t\t\tRelationCreateStorage(newrelfilenode, persistence);\n\t\t\tRelationOpenSmgr(relation);\n\t\t\tbreak;\n\t\tcase RELKIND_RELATION:\n\t\tcase RELKIND_TOASTVALUE:\n\t\tcase RELKIND_MATVIEW:\n\t\t\ttable_relation_set_new_filenode(relation, persistence,\n \t\t\t\t\t\t\t\t&newrnode, &freezeXid, &minmulti);\n\t\t\tbreak;\n\t}\n\n /* Now update the pg_class row. 
*/\n\tif (relation->rd_rel->relkind != RELKIND_SEQUENCE)\n\t{\n\t\tclassform->relpages = 0;\t/* it's empty until further notice */\n\t\tclassform->reltuples = 0;\n\t\tclassform->relallvisible = 0;\n\t}\n\tclassform->relfrozenxid = freezeXid;\n\tclassform->relminmxid = minmulti;\n\tclassform->relpersistence = persistence;\n\n\t/*\n\t * If we're dealing with a mapped index, pg_class.relfilenode doesn't\n * change; instead we'll have to send the update to the relation mapper.\n * But we can do so only after doing the catalog update, otherwise the\n * contents of the old data is going to be invalid.\n *\n * XXX: Can this actually validly be reached for a mapped table?\n\t */\n if (!RelationIsMapped(relation))\n\t\tclassform->relfilenode = newrelfilenode;\n\n\tCatalogTupleUpdate(pg_class, &tuple->t_self, tuple);\n\n /* now that the catalog is updated, update relmapper if necessary */\n\tif (RelationIsMapped(relation))\n\t\tRelationMapUpdateMap(RelationGetRelid(relation),\n\t\t\t\t\t\t\t newrelfilenode,\n\t\t\t\t\t\t\t relation->rd_rel->relisshared,\n\t\t\t\t\t\t\t true);\n\n\t/*\n\t * Make the pg_class row change visible, as well as the relation map\n\t * change if any. This will cause the relcache entry to get updated, too.\n\t */\n\tCommandCounterIncrement();\n\n // XXX: Previously we called RelationInitPhysicalAddr() in this routine\n // but I don't think that should be needed due to the CCI?\n\nand the table AM would do the necessary\n *newrelfilenode = GetNewRelFileNode(...);\n\t\t\tRelationCreateStorage(*newrelfilenode, persistence);\n\t\t\tRelationOpenSmgr(relation);\ndance *iff* it wants to do so.\n\nThat seems like it'd go towards allowing a fair bit more flexibility for\ntable AMs (and possibly index AMs in the future).\n\nWe'd also have to make the equivalent set of changes to\nATExecSetTableSpace(). But that seems like it'd architecturally be\ncleaner anyway. I think we'd move the FlushRelationBuffers(),\nGetNewRelFileNode() logic into the callback. 
It'd probably make sense\nto have ATExecSetTableSpace() first call\ntable_relation_set_new_filenode() and then call\ntable_relation_copy_data() with the new relfilenode, but a not-yet-updated\nrelcache entry.\n\nWe don't currently allow that, but as far as I can see the current\ncoding of ATExecSetTableSpace() also has bad problems with system\ncatalog updates. It copies the data and *then* does\nCatalogTupleUpdate(), but *without* updating the relcache - which would\njust cause the update to be lost.\n\n\nI could come up with a patch for that if you want me to.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 25 Apr 2019 12:51:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-25 14:50:09 -0400, Tom Lane wrote:\n>> As far as I can see, there is no API restriction on what parts of the\n>> relcache entry it may presume are valid. It *certainly* thinks that\n>> rd_rel is valid, which is rather at odds with the fact that this has\n>> to be called before the pg_class entry exists all (for the creation\n>> case) or has been updated (for the set-new-relfilenode case).\n\n> Well, that's just what we did before. And given the way the code is\n> structured, I am not sure I see a decent alternative that's not a\n> disproportionate amount of work. I mean, heap.c's heap_create() and\n> heap_create_with_catalog() basically work off the Relation after the\n> RelationBuildLocalRelation() call, a good bit before the underlying\n> storage is valid.\n\nYou could imagine restructuring that ... but I agree it'd be a lot\nof work.\n\n> Currently the only thing that table_relation_set_new_filenode() accesses\n> that already is updated is the RelFileNode. I wonder if we shouldn't\n> change the API so that table_relation_set_new_filenode() will get a\n> relcache entry *without* any updates passed in, then internally does\n> GetNewRelFileNode() (if so desired by the AM), and returns the new rnode\n> via a new out parameter.\n\nThat could work. The important API spec is then that the relcache entry\nreflects the *previous* state of the relation, and is not to be modified\nby the tableam call. After the call, we perform the pg_class update and\ndo CCI, and the relcache inval seen by the CCI causes the relcache entry\nto be brought into sync with the new reality. So relcache entries change\nat CCI boundaries, not in between.\n\nIn the creation case, it works more or less the same, with the\nunderstanding that the \"previous state\" is some possibly-partly-dummy\nstate set up by RelationBuildLocalRelation. 
But we might need to add\na CCI call that wasn't there before; not sure.\n\n> We don't currently allow that, but as far as I can see the current\n> coding of ATExecSetTableSpace() also has bad problems with system\n> catalog updates. It copies the data and *then* does\n> CatalogTupleUpdate(), but *without* updating the relcache - which would\n> just cause the update to be lost.\n\nWell, I imagine it's expecting the subsequent CCI to update the relcache\nentry, which I think is correct behavior in this worldview. We're\nbasically trying to make the relcache state follow transaction/command\nboundary semantics.\n\n> I could come up with a patch for that if you want me to.\n\nI'm happy to let you take a whack at it if you want.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Apr 2019 16:02:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-25 16:02:03 -0400, Tom Lane wrote:\n> > Currently the only thing that table_relation_set_new_filenode() accesses\n> > that already is updated is the RelFileNode. I wonder if we shouldn't\n> > change the API so that table_relation_set_new_filenode() will get a\n> > relcache entry *without* any updates passed in, then internally does\n> > GetNewRelFileNode() (if so desired by the AM), and returns the new rnode\n> > via a new out parameter.\n> \n> That could work. The important API spec is then that the relcache entry\n> reflects the *previous* state of the relation, and is not to be modified\n> by the tableam call.\n\nRight.\n\nI was wondering if we should just pass in the pg_class tuple as an \"out\"\nargument, instead of pointers to relfilnode/relfrozenxid/relminmxid.\n\n\n> > We don't currently allow that, but as far as I can see the current\n> > coding of ATExecSetTableSpace() also has bad problems with system\n> > catalog updates. It copies the data and *then* does\n> > CatalogTupleUpdate(), but *witout* updating the reclache - which ijust\n> > would cause the update to be lost.\n> \n> Well, I imagine it's expecting the subsequent CCI to update the relcache\n> entry, which I think is correct behavior in this worldview. We're\n> basically trying to make the relcache state follow transaction/command\n> boundary semantics.\n\nMy point was that given the current coding the code in\nATExecSetTableSpace() would make changes to the *old* relfilenode, after\nhaving already copied the contents to the new relfilenode. Which means\nthat if ATExecSetTableSpace() is ever used on pg_class or one of it's\nindexes, it'd just loose those changes, afaict.\n\n\n> > I could come up with a patch for that if you want me to.\n> \n> I'm happy to let you take a whack at it if you want.\n\nI'll give it a whack (after writing one more email on a separate but\nloosely related topic).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 25 Apr 2019 14:03:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-25 16:02:03 -0400, Tom Lane wrote:\n>> That could work. The important API spec is then that the relcache entry\n>> reflects the *previous* state of the relation, and is not to be modified\n>> by the tableam call.\n\n> Right.\n\n> I was wondering if we should just pass in the pg_class tuple as an \"out\"\n> argument, instead of pointers to relfilnode/relfrozenxid/relminmxid.\n\nYeah, possibly. The whole business with xids is perhaps heapam specific,\nso decoupling this function's signature from them would be good.\n\n> My point was that given the current coding the code in\n> ATExecSetTableSpace() would make changes to the *old* relfilenode, after\n> having already copied the contents to the new relfilenode. Which means\n> that if ATExecSetTableSpace() is ever used on pg_class or one of it's\n> indexes, it'd just loose those changes, afaict.\n\nHmm.\n\nThere's another reason why we'd like the relcache contents to only change\nat CCI, which is that if we get a relcache invalidation somewhere before\nwe get to the CCI, relcache.c would proceed to reload it based on the\n*current* catalog contents (ie, pre-update, thanks to the magic of MVCC),\nso that the entry would revert back to its previous state until we did get\nto CCI. I wonder whether there's any demonstrable bug there. Though\nyou'd think the CLOBBER_CACHE_ALWAYS animals would've found it if so.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Apr 2019 17:12:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-25 17:12:33 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > My point was that given the current coding the code in\n> > ATExecSetTableSpace() would make changes to the *old* relfilenode, after\n> > having already copied the contents to the new relfilenode. Which means\n> > that if ATExecSetTableSpace() is ever used on pg_class or one of it's\n> > indexes, it'd just loose those changes, afaict.\n> \n> Hmm.\n\nI think there's no a live bug in because we a) require\nallow_system_table_mods to modify system tables, and then b) have\nanother check\n\n /*\n * We cannot support moving mapped relations into different tablespaces.\n * (In particular this eliminates all shared catalogs.)\n */\n if (RelationIsMapped(rel))\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"cannot move system relation \\\"%s\\\"\",\n RelationGetRelationName(rel))));\n\nthat triggers even when allow_system_table_mods is off.\n\n\n> There's another reason why we'd like the relcache contents to only change\n> at CCI, which is that if we get a relcache invalidation somewhere before\n> we get to the CCI, relcache.c would proceed to reload it based on the\n> *current* catalog contents (ie, pre-update, thanks to the magic of MVCC),\n> so that the entry would revert back to its previous state until we did get\n> to CCI. I wonder whether there's any demonstrable bug there. Though\n> you'd think the CLOBBER_CACHE_ALWAYS animals would've found it if so.\n\nI think we basically assume that there's nothing triggering relcache\ninvals here inbetween updating the relcache entry, and doing the actual\ncatalog modification. 
That looks like it's currently somewhat OK in the\nplaces we've talked about so far.\n\nThis made me look at cluster.c and damn, I'd forgotten about that.\nLook at the following code in copy_table_data():\n\n\t\t/*\n\t\t * When doing swap by content, any toast pointers written into NewHeap\n\t\t * must use the old toast table's OID, because that's where the toast\n\t\t * data will eventually be found. Set this up by setting rd_toastoid.\n\t\t * This also tells toast_save_datum() to preserve the toast value\n\t\t * OIDs, which we want so as not to invalidate toast pointers in\n\t\t * system catalog caches, and to avoid making multiple copies of a\n\t\t * single toast value.\n\t\t *\n\t\t * Note that we must hold NewHeap open until we are done writing data,\n\t\t * since the relcache will not guarantee to remember this setting once\n\t\t * the relation is closed. Also, this technique depends on the fact\n\t\t * that no one will try to read from the NewHeap until after we've\n\t\t * finished writing it and swapping the rels --- otherwise they could\n\t\t * follow the toast pointers to the wrong place. (It would actually\n\t\t * work for values copied over from the old toast table, but not for\n\t\t * any values that we toast which were previously not toasted.)\n\t\t */\n\t\tNewHeap->rd_toastoid = OldHeap->rd_rel->reltoastrelid;\n\t}\n\telse\n\t\t*pSwapToastByContent = false;\n\nwhich then goes on to do things like a full blown sort or index\nscan. Sure, that's for the old relation, but that's so ugly. It works\nbecause RelationClearRelation() copies over rd_toastoid :/.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 25 Apr 2019 14:56:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-25 17:12:33 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-04-25 16:02:03 -0400, Tom Lane wrote:\n> >> That could work. The important API spec is then that the relcache entry\n> >> reflects the *previous* state of the relation, and is not to be modified\n> >> by the tableam call.\n>\n> > Right.\n>\n> > I was wondering if we should just pass in the pg_class tuple as an \"out\"\n> > argument, instead of pointers to relfilnode/relfrozenxid/relminmxid.\n>\n> Yeah, possibly. The whole business with xids is perhaps heapam specific,\n> so decoupling this function's signature from them would be good.\n\nI've left that out in the attached. Currently VACUUM FULL / CLUSTER also\nneeds to handle those, and the callback for transactional rewrite\n(table_relation_copy_for_cluster()), also returns those as output\nparameter. I think I can see a way how we could clean up the relevant\ncluster.c code, but until that's done, I don't see much point in a\ndifferent interface (I'll probably write apatch .\n\nThe attached patch fixes the problem for me, and passes all existing\ntests. It contains a few changes that are not strictly necessary, but\nimo clear improvements.\n\nWe probably could split the tableam changes and related refactoring from\nthe fix to make backpatching simpler. I've not done that yet, but I\nthink we should before committing.\n\nQuestions:\n- Should we move the the CommandCounterIncrement() from\n RelationSetNewRelfilenode() to the callers? That'd allow them to do\n other things to the new relation (e.g. fill it), before making the\n changes visible. Don't think it'd currently help, but it seems like it\n could make code more robust in the future.\n\n- Should we introduce an assertion into CatalogIndexInsert()'s\n HeapTupleIsHeapOnly() path, that asserts that all the relevant indexes\n aren't ReindexIsProcessingIndex()? Otherwise it seems way too easy to\n re-introduce bugs like this one. 
Dirty hack for that included.\n\n- Wonder if we shouldn't introduce something akin to\n SetReindexProcessing() for table rewrites (e.g. VACUUM FULL), to\n prevent the related error of inserting/updating a catalog table that's\n currently being rewritten.\n\nTaking this as a WIP, what do you think?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Thu, 25 Apr 2019 19:02:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 11:32:21AM -0400, Tom Lane wrote:\n> What you have to do to get it to crash is to ensure that\n> RelationSetNewRelfilenode's update of pg_class will be a non-HOT\n> update. You can try to set that up with \"vacuum full pg_class\"\n> but it turns out that that tends to leave the pg_class entries\n> for pg_class's indexes in the last page of the relation, which\n> is usually not totally full, so that a HOT update works and the\n> bug doesn't manifest.\n\nIndeed, I can see that the update difference after and before the\ncommit. This could have blowed up on basically anything when\nbisecting. Changing the page size would have given something else\nperhaps..\n\n> As for an actual fix, I tried just moving reindex_index's\n> SetReindexProcessing call from where it is down to after\n> RelationSetNewRelfilenode, but that isn't sufficient:\n> \n> regression=# reindex index pg_class_relname_nsp_index;\n> psql: ERROR: could not read block 3 in file \"base/16384/41119\":\n> read only 0 of 8192 bytes\n\nYeah, that's one of the first things I tried as well when first\nlooking at the problem. Turns out it is not that simple.\n--\nMichael",
"msg_date": "Fri, 26 Apr 2019 12:14:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-25 17:12:33 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> I was wondering if we should just pass in the pg_class tuple as an \"out\"\n>>> argument, instead of pointers to relfilnode/relfrozenxid/relminmxid.\n\n>> Yeah, possibly. The whole business with xids is perhaps heapam specific,\n>> so decoupling this function's signature from them would be good.\n\n> I've left that out in the attached. Currently VACUUM FULL / CLUSTER also\n> needs to handle those, and the callback for transactional rewrite\n> (table_relation_copy_for_cluster()), also returns those as output\n> parameter. I think I can see a way how we could clean up the relevant\n> cluster.c code, but until that's done, I don't see much point in a\n> different interface (I'll probably write apatch .\n\nOK, we can leave that for later. I suppose there's little hope that\nv12's version of the tableam API can be chiseled onto stone tablets yet.\n\n> Questions:\n> - Should we move the the CommandCounterIncrement() from\n> RelationSetNewRelfilenode() to the callers? That'd allow them to do\n> other things to the new relation (e.g. fill it), before making the\n> changes visible. Don't think it'd currently help, but it seems like it\n> could make code more robust in the future.\n\nNo, I don't think so. The intermediate state where the relcache entry\nis inconsistent with the on-disk state is not something we want to\nbe propagating all over the place. As for robustness, I'd be more\nworried about somebody forgetting the CCI than about possibly being\nable to squeeze additional updates into the same CCI cycle.\n\n> - Should we introduce an assertion into CatalogIndexInsert()'s\n> HeapTupleIsHeapOnly() path, that asserts that all the relevant indexes\n> aren't ReindexIsProcessingIndex()? Otherwise it seems way too easy to\n> re-introduce bugs like this one. 
Dirty hack for that included.\n\nGood idea, but I think I'd try to keep the code the same in a non-assert\nbuild, that is more like\n\n+#ifndef USE_ASSERT_CHECKING\n\t/* HOT update does not require index inserts */\n\tif (HeapTupleIsHeapOnly(heapTuple))\n\t\treturn;\n+#endif\n\n\t/* required setup here ... */\n\n+#ifdef USE_ASSERT_CHECKING\n+\t/* HOT update does not require index inserts, but check we could have */\n+\tif (HeapTupleIsHeapOnly(heapTuple))\n+\t{\n+\t\t/* checking here */\n+\t\treturn;\n+\t}\n+#endif\n\n> - Wonder if we shouldn't introduce something akin to\n> SetReindexProcessing() for table rewrites (e.g. VACUUM FULL), to\n> prevent the related error of inserting/updating a catalog table that's\n> currently being rewritten.\n\nNot terribly excited about that, but if you are, maybe a follow-on\npatch could do that.\n\n> Taking this as a WIP, what do you think?\n\nSeems generally about right. One note is that in reindex_index,\nthe right fix is to push the intermediate code to above the PG_TRY:\nthere's no reason to start the TRY block any sooner than the\nSetReindexProcessing call.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Apr 2019 10:51:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> Taking this as a WIP, what do you think?\n\n> Seems generally about right.\n\nAndres, are you pushing this forward? Next week's minor releases\nare coming up fast, and we're going to need to adapt the HEAD patch\nsignificantly for the back branches AFAICS. So there's little time\nto spare.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Apr 2019 18:07:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-29 18:07:07 -0400, Tom Lane wrote:\n> I wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >> Taking this as a WIP, what do you think?\n> \n> > Seems generally about right.\n> \n> Andres, are you pushing this forward? Next week's minor releases\n> are coming up fast, and we're going to need to adapt the HEAD patch\n> significantly for the back branches AFAICS. So there's little time\n> to spare.\n\nYea. I'm testing the backbranch'd bits (much simpler) and writing the\ncommit message atm.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 29 Apr 2019 15:09:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-29 15:09:24 -0700, Andres Freund wrote:\n> On 2019-04-29 18:07:07 -0400, Tom Lane wrote:\n> > I wrote:\n> > > Andres Freund <andres@anarazel.de> writes:\n> > >> Taking this as a WIP, what do you think?\n> > \n> > > Seems generally about right.\n> > \n> > Andres, are you pushing this forward? Next week's minor releases\n> > are coming up fast, and we're going to need to adapt the HEAD patch\n> > significantly for the back branches AFAICS. So there's little time\n> > to spare.\n> \n> Yea. I'm testing the backbranch'd bits (much simpler) and writing the\n> commit message atm.\n\nI've pushed the master bits, and the other branches are running\ncheck-world right now and I'll push soon unless something breaks (it's a\nbit annoying that <= 9.6 can't run check-world in parallel...).\n\nTurns out, I was confused, and there wasn't much pre-existing breakage\nin RelationSetNewRelfilenode() (I guess I must have been thinking of\nATExecSetTableSpace?). That part was broken in d25f519107, I should have\ncaught this while reviewing and signficantly evolving the code in that\ncommit, mea culpa.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 29 Apr 2019 20:03:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I've pushed the master bits, and the other branches are running\n> check-world right now and I'll push soon unless something breaks (it's a\n> bit annoying that <= 9.6 can't run check-world in parallel...).\n\nSeems like putting reindexes of pg_class into a test script that runs\nin parallel with other DDL wasn't a hot idea.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Apr 2019 00:37:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn April 29, 2019 9:37:33 PM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Andres Freund <andres@anarazel.de> writes:\n>> I've pushed the master bits, and the other branches are running\n>> check-world right now and I'll push soon unless something breaks\n>(it's a\n>> bit annoying that <= 9.6 can't run check-world in parallel...).\n>\n>Seems like putting reindexes of pg_class into a test script that runs\n>in parallel with other DDL wasn't a hot idea.\n\nSaw that. Will try to reproduce (and if necessary either run separately or revert). But isn't that somewhat broken? They're not run in a transaction, so the locking shouldn't be deadlock prone.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Mon, 29 Apr 2019 21:44:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On April 29, 2019 9:37:33 PM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Seems like putting reindexes of pg_class into a test script that runs\n>> in parallel with other DDL wasn't a hot idea.\n\n> Saw that. Will try to reproduce (and if necessary either run separately or revert). But isn't that somewhat broken? They're not run in a transaction, so the locking shouldn't be deadlock prone.\n\nHm? REINDEX INDEX is deadlock-prone by definition, because it starts\nby opening/locking the index and then it has to open/lock the index's\ntable. Every other operation locks tables before their indexes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Apr 2019 00:50:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-30 00:50:20 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On April 29, 2019 9:37:33 PM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Seems like putting reindexes of pg_class into a test script that runs\n> >> in parallel with other DDL wasn't a hot idea.\n> \n> > Saw that. Will try to reproduce (and if necessary either run separately or revert). But isn't that somewhat broken? They're not run in a transaction, so the locking shouldn't be deadlock prone.\n> \n> Hm? REINDEX INDEX is deadlock-prone by definition, because it starts\n> by opening/locking the index and then it has to open/lock the index's\n> table. Every other operation locks tables before their indexes.\n\nWe claim to have solved that:\n\n/*\n * ReindexIndex\n *\t\tRecreate a specific index.\n */\nvoid\nReindexIndex(RangeVar *indexRelation, int options, bool concurrent)\n\n\n\t/*\n\t * Find and lock index, and check permissions on table; use callback to\n\t * obtain lock on table first, to avoid deadlock hazard. The lock level\n\t * used here must match the index lock obtained in reindex_index().\n\t */\n\tindOid = RangeVarGetRelidExtended(indexRelation,\n\t\t\t\t\t\t\t\t\t concurrent ? ShareUpdateExclusiveLock : AccessExclusiveLock,\n\t\t\t\t\t\t\t\t\t 0,\n\t\t\t\t\t\t\t\t\t RangeVarCallbackForReindexIndex,\n\t\t\t\t\t\t\t\t\t (void *) &heapOid);\n\nand I don't see an obvious hole in the general implementation. Minus the\ncomment that code exists back to 9.4.\n\nI suspect the problem isn't REINDEX INDEX in general, it's REINDEX INDEX\nover catalog tables modified during reindex. The callback acquires a\nShareLock lock on the index's table, but *also* during the reindex needs\na RowExclusiveLock on pg_class, etc. E.g. in RelationSetNewRelfilenode()\non pg_class, and on pg_index in index_build(). 
Which means there's a\nlock-upgrade hazard (Share to RowExclusive - well, that's more a\nside-grade, but still deadlock prone).\n\nI can think of ways to fix that (e.g. if reindex is on pg_class or\nindex, use SHARE ROW EXCLUSIVE, rather than SHARE), but we'd probably\nnot want to backpatch that.\n\nI'll try to reproduce tomorrow.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Apr 2019 00:05:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-30 00:50:20 -0400, Tom Lane wrote:\n>> Hm? REINDEX INDEX is deadlock-prone by definition, because it starts\n>> by opening/locking the index and then it has to open/lock the index's\n>> table. Every other operation locks tables before their indexes.\n\n> We claim to have solved that:\n\nOh, okay ...\n\n> I suspect the problem isn't REINDEX INDEX in general, it's REINDEX INDEX\n> over catalog tables modified during reindex.\n\nSo far, every one of the failures in the buildfarm looks like the REINDEX\nis deciding that it needs to wait for some other transaction, eg here\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2019-04-30%2014%3A43%3A11\n\nthe relevant bit of postmaster log is\n\n2019-04-30 14:44:13.478 UTC [16135:450] pg_regress/create_index LOG: statement: REINDEX TABLE pg_class;\n2019-04-30 14:44:14.478 UTC [16137:430] pg_regress/create_view LOG: process 16137 detected deadlock while waiting for AccessShareLock on relation 2662 of database 16384 after 1000.148 ms\n2019-04-30 14:44:14.478 UTC [16137:431] pg_regress/create_view DETAIL: Process holding the lock: 16135. 
Wait queue: .\n2019-04-30 14:44:14.478 UTC [16137:432] pg_regress/create_view STATEMENT: DROP SCHEMA temp_view_test CASCADE;\n2019-04-30 14:44:14.478 UTC [16137:433] pg_regress/create_view ERROR: deadlock detected\n2019-04-30 14:44:14.478 UTC [16137:434] pg_regress/create_view DETAIL: Process 16137 waits for AccessShareLock on relation 2662 of database 16384; blocked by process 16135.\n\tProcess 16135 waits for ShareLock on transaction 2875; blocked by process 16137.\n\tProcess 16137: DROP SCHEMA temp_view_test CASCADE;\n\tProcess 16135: REINDEX TABLE pg_class;\n2019-04-30 14:44:14.478 UTC [16137:435] pg_regress/create_view HINT: See server log for query details.\n2019-04-30 14:44:14.478 UTC [16137:436] pg_regress/create_view STATEMENT: DROP SCHEMA temp_view_test CASCADE;\n\nI haven't been able to reproduce this locally yet, but my guess is that\nthe REINDEX wants to update some row that was already updated by the\nconcurrent transaction, so it has to wait to see if the latter commits\nor not. And, of course, waiting while holding AccessExclusiveLock on\nany index of pg_class is a Bad Idea (TM). But I can't quite see why\nwe'd be doing something like that during the reindex ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Apr 2019 11:51:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "I wrote:\n> I haven't been able to reproduce this locally yet, but my guess is that\n> the REINDEX wants to update some row that was already updated by the\n> concurrent transaction, so it has to wait to see if the latter commits\n> or not. And, of course, waiting while holding AccessExclusiveLock on\n> any index of pg_class is a Bad Idea (TM). But I can't quite see why\n> we'd be doing something like that during the reindex ...\n\nAh-hah: the secret to making it reproducible is what prion is doing:\n-DRELCACHE_FORCE_RELEASE -DCATCACHE_FORCE_RELEASE\n\nHere's a stack trace from reindex's side:\n\n#0 0x00000033968e9223 in __epoll_wait_nocancel ()\n at ../sysdeps/unix/syscall-template.S:82\n#1 0x0000000000787cb5 in WaitEventSetWaitBlock (set=0x22d52f0, timeout=-1, \n occurred_events=0x7ffc77117c00, nevents=1, \n wait_event_info=<value optimized out>) at latch.c:1080\n#2 WaitEventSetWait (set=0x22d52f0, timeout=-1, \n occurred_events=0x7ffc77117c00, nevents=1, \n wait_event_info=<value optimized out>) at latch.c:1032\n#3 0x00000000007886da in WaitLatchOrSocket (latch=0x7f90679077f4, \n wakeEvents=<value optimized out>, sock=-1, timeout=-1, \n wait_event_info=50331652) at latch.c:407\n#4 0x000000000079993d in ProcSleep (locallock=<value optimized out>, \n lockMethodTable=<value optimized out>) at proc.c:1290\n#5 0x0000000000796ba2 in WaitOnLock (locallock=0x2200600, owner=0x2213470)\n at lock.c:1768\n#6 0x0000000000798719 in LockAcquireExtended (locktag=0x7ffc77117f90, \n lockmode=<value optimized out>, sessionLock=<value optimized out>, \n dontWait=false, reportMemoryError=true, locallockp=0x0) at lock.c:1050\n#7 0x00000000007939b7 in XactLockTableWait (xid=2874, \n rel=<value optimized out>, ctid=<value optimized out>, \n oper=XLTW_InsertIndexUnique) at lmgr.c:658\n#8 0x00000000004d4841 in heapam_index_build_range_scan (\n heapRelation=0x7f905eb3fcd8, indexRelation=0x7f905eb3c5b8, \n indexInfo=0x22d50c0, allow_sync=<value optimized out>, 
anyvisible=false, \n progress=true, start_blockno=0, numblocks=4294967295, \n callback=0x4f8330 <_bt_build_callback>, callback_state=0x7ffc771184f0, \n scan=0x2446fb0) at heapam_handler.c:1527\n#9 0x00000000004f9db0 in table_index_build_scan (heap=0x7f905eb3fcd8, \n index=0x7f905eb3c5b8, indexInfo=0x22d50c0)\n at ../../../../src/include/access/tableam.h:1437\n#10 _bt_spools_heapscan (heap=0x7f905eb3fcd8, index=0x7f905eb3c5b8, \n indexInfo=0x22d50c0) at nbtsort.c:489\n#11 btbuild (heap=0x7f905eb3fcd8, index=0x7f905eb3c5b8, indexInfo=0x22d50c0)\n at nbtsort.c:337\n#12 0x0000000000547e33 in index_build (heapRelation=0x7f905eb3fcd8, \n indexRelation=0x7f905eb3c5b8, indexInfo=0x22d50c0, isreindex=true, \n parallel=<value optimized out>) at index.c:2724\n#13 0x0000000000548b97 in reindex_index (indexId=2662, \n skip_constraint_checks=false, persistence=112 'p', options=0)\n at index.c:3349\n#14 0x00000000005490f1 in reindex_relation (relid=<value optimized out>, \n flags=5, options=0) at index.c:3592\n#15 0x00000000005ed295 in ReindexTable (relation=0x21e2938, options=0, \n concurrent=<value optimized out>) at indexcmds.c:2422\n#16 0x00000000007b5f69 in standard_ProcessUtility (pstmt=0x21e2cf0, \n queryString=0x21e1f18 \"REINDEX TABLE pg_class;\", \n context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, \n dest=0x21e2de8, completionTag=0x7ffc77118d80 \"\") at utility.c:790\n#17 0x00000000007b1689 in PortalRunUtility (portal=0x2247c38, pstmt=0x21e2cf0, \n isTopLevel=<value optimized out>, setHoldSnapshot=<value optimized out>, \n dest=0x21e2de8, completionTag=<value optimized out>) at pquery.c:1175\n#18 0x00000000007b2611 in PortalRunMulti (portal=0x2247c38, isTopLevel=true, \n setHoldSnapshot=false, dest=0x21e2de8, altdest=0x21e2de8, \n completionTag=0x7ffc77118d80 \"\") at pquery.c:1328\n#19 0x00000000007b2eb0 in PortalRun (portal=0x2247c38, \n count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x21e2de8, \n altdest=0x21e2de8, 
completionTag=0x7ffc77118d80 \"\") at pquery.c:796\n#20 0x00000000007af2ab in exec_simple_query (\n query_string=0x21e1f18 \"REINDEX TABLE pg_class;\") at postgres.c:1215\n\nSo basically, the problem here lies in trying to re-verify uniqueness\nof pg_class's indexes --- there could easily be entries in pg_class that\nhaven't committed yet.\n\nI don't think there's an easy way to make this not deadlock against\nconcurrent DDL. For sure I don't want to disable the uniqueness\nchecks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Apr 2019 12:40:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "On 2019-04-30 11:51:10 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-04-30 00:50:20 -0400, Tom Lane wrote:\n> > I suspect the problem isn't REINDEX INDEX in general, it's REINDEX INDEX\n> > over catalog tables modified during reindex.\n>\n> So far, every one of the failures in the buildfarm looks like the REINDEX\n> is deciding that it needs to wait for some other transaction, eg here\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2019-04-30%2014%3A43%3A11\n>\n> the relevant bit of postmaster log is\n>\n> 2019-04-30 14:44:13.478 UTC [16135:450] pg_regress/create_index LOG: statement: REINDEX TABLE pg_class;\n> 2019-04-30 14:44:14.478 UTC [16137:430] pg_regress/create_view LOG: process 16137 detected deadlock while waiting for AccessShareLock on relation 2662 of database 16384 after 1000.148 ms\n> 2019-04-30 14:44:14.478 UTC [16137:431] pg_regress/create_view DETAIL: Process holding the lock: 16135. Wait queue: .\n> 2019-04-30 14:44:14.478 UTC [16137:432] pg_regress/create_view STATEMENT: DROP SCHEMA temp_view_test CASCADE;\n> 2019-04-30 14:44:14.478 UTC [16137:433] pg_regress/create_view ERROR: deadlock detected\n> 2019-04-30 14:44:14.478 UTC [16137:434] pg_regress/create_view DETAIL: Process 16137 waits for AccessShareLock on relation 2662 of database 16384; blocked by process 16135.\n> \tProcess 16135 waits for ShareLock on transaction 2875; blocked by process 16137.\n> \tProcess 16137: DROP SCHEMA temp_view_test CASCADE;\n> \tProcess 16135: REINDEX TABLE pg_class;\n> 2019-04-30 14:44:14.478 UTC [16137:435] pg_regress/create_view HINT: See server log for query details.\n> 2019-04-30 14:44:14.478 UTC [16137:436] pg_regress/create_view STATEMENT: DROP SCHEMA temp_view_test CASCADE;\n>\n> I haven't been able to reproduce this locally yet, but my guess is that\n> the REINDEX wants to update some row that was already updated by the\n> concurrent transaction, so it has to wait to see if the 
latter commits\n> or not. And, of course, waiting while holding AccessExclusiveLock on\n> any index of pg_class is a Bad Idea (TM). But I can't quite see why\n> we'd be doing something like that during the reindex ...\n\nI've reproduced something similar locally by running \"REINDEX INDEX\npg_class_oid_index;\" via pgbench. Fails over pretty much immediately.\n\nIt's the lock-upgrade problem I theorized about\nupthread. ReindexIndex(), via RangeVarCallbackForReindexIndex(), takes a\nShareLock on pg_class, and then goes on to upgrade to RowExclusiveLock\nin RelationSetNewRelfilenode(). But at that time another session\nobviously can already have the ShareLock and would also want to upgrade.\n\nThe same problem exists with reindexing indexes on pg_index.\n\nReindexTable is also affected. It locks the table with ShareLock, but\nthen subsidiary routines upgrade to RowExclusiveLock. The way to fix it\nwould be a bit different than for ReindexIndex(), as the locking happens\nvia RangeVarGetRelidExtended() directly, rather than in the callback.\n\nThere's a somewhat related issue in the new REINDEX CONCURRENTLY. See\nhttps://www.postgresql.org/message-id/20190430151735.wi52sxjvxsjvaxxt%40alap3.anarazel.de\n\nAttached is a *hacky* prototype patch that fixes the issues for me. This\nis *not* meant as an actual fix, just a demonstration.\n\nTurns out it's not even sufficient to take a ShareRowExclusive for\npg_class. That prevents issues of concurrent REINDEX INDEX\npg_class_oid_index blowing up, but if one runs REINDEX INDEX\npg_class_oid_index; and REINDEX TABLE pg_class; (or just the latter)\nconcurrently it still blows up, albeit taking longer to do so.\n\nThe problem is that other codepaths expect to be able to hold an\nAccessShareLock on pg_class, and multiple pg_class indexes\n(e.g. catcache initialization which is easy to hit with -C, [1]). 
If we\nwere to want this concurrency safe, I think it requires an AEL on at\nleast pg_class for reindex (I think ShareRowExclusiveLock might suffice\nfor pg_index).\n\nI'm not sure it's worth fixing this. It's crummy and somewhat fragile\nthat we'd have special locking rules for catalog tables. OTOH,\nit really also sucks that a lone REINDEX TABLE pg_class; can deadlock\nwith another session doing nothing more than establishing a connection.\n\nI guess it's not that common, and can be fixed by users by doing an\nexplicit BEGIN;LOCK pg_class;REINDEX TABLE pg_class;COMMIT;, but that's\nnot something anybody will know to do.\n\nPragmatically I don't think there's a meaningful difference between\nholding a ShareLock on pg_class + AEL on one or more indexes, and holding\nan AEL on pg_class. Just about every pg_class access is through an\nindex.\n\nGreetings,\n\nAndres Freund\n\n\n[1]\n#6 0x0000561dac7f9a36 in WaitOnLock (locallock=0x561dae101878, owner=0x561dae112ee8) at /home/andres/src/postgresql/src/backend/storage/lmgr/lock.c:1768\n#7 0x0000561dac7f869e in LockAcquireExtended (locktag=0x7ffd7a128650, lockmode=1, sessionLock=false, dontWait=false, reportMemoryError=true,\n locallockp=0x7ffd7a128648) at /home/andres/src/postgresql/src/backend/storage/lmgr/lock.c:1050\n#8 0x0000561dac7f5c15 in LockRelationOid (relid=2662, lockmode=1) at /home/andres/src/postgresql/src/backend/storage/lmgr/lmgr.c:116\n#9 0x0000561dac3a3aa2 in relation_open (relationId=2662, lockmode=1) at /home/andres/src/postgresql/src/backend/access/common/relation.c:56\n#10 0x0000561dac422560 in index_open (relationId=2662, lockmode=1) at /home/andres/src/postgresql/src/backend/access/index/indexam.c:156\n#11 0x0000561dac421bbe in systable_beginscan (heapRelation=0x561dae14af80, indexId=2662, indexOK=true, snapshot=0x561dacd26f80 <CatalogSnapshotData>,\n nkeys=1, key=0x7ffd7a128760) at /home/andres/src/postgresql/src/backend/access/index/genam.c:364\n#12 0x0000561dac982362 in ScanPgRelation 
(targetRelId=2663, indexOK=true, force_non_historic=false)\n at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:360\n#13 0x0000561dac983b18 in RelationBuildDesc (targetRelId=2663, insertIt=true) at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:1058\n#14 0x0000561dac985d24 in RelationIdGetRelation (relationId=2663) at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:2037\n#15 0x0000561dac3a3aac in relation_open (relationId=2663, lockmode=1) at /home/andres/src/postgresql/src/backend/access/common/relation.c:59\n#16 0x0000561dac422560 in index_open (relationId=2663, lockmode=1) at /home/andres/src/postgresql/src/backend/access/index/indexam.c:156\n#17 0x0000561dac976116 in InitCatCachePhase2 (cache=0x561dae13e400, touch_index=true) at /home/andres/src/postgresql/src/backend/utils/cache/catcache.c:1050\n#18 0x0000561dac990134 in InitCatalogCachePhase2 () at /home/andres/src/postgresql/src/backend/utils/cache/syscache.c:1078\n#19 0x0000561dac988955 in RelationCacheInitializePhase3 () at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:3960\n#20 0x0000561dac9acdac in InitPostgres (in_dbname=0x561dae111320 \"postgres\", dboid=0, username=0x561dae0dbaf8 \"andres\", useroid=0, out_dbname=0x0,\n override_allow_connections=false) at /home/andres/src/postgresql/src/backend/utils/init/postinit.c:1034",
"msg_date": "Tue, 30 Apr 2019 10:34:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> It's the lock-upgrade problem I theorized about\n> upthread. ReindexIndex(), via RangeVarCallbackForReindexIndex(), takes a\n> ShareLock on pg_class, and then goes on to upgrade to RowExclusiveLock\n> in RelationSetNewRelfilenode(). But at that time another session\n> obviously can already have the ShareLock and would also want to upgrade.\n\nHmm. Note that this is totally independent of the deadlock mechanism\nI reported in my last message on this thread.\n\nI also wonder whether clobber-cache testing would expose cases\nwe haven't seen that trace to the additional catalog accesses\ncaused by cache reloads.\n\n> I'm not sure it's worth fixing this.\n\nI am not sure it's even *possible* to fix all these cases. Even\nif we could, it's out of scope for v12 let alone the back branches.\n\nI think the only practical solution is to remove those reindex tests.\nEven if we ran them in a script with no concurrent scripts, there'd\nbe risk of failures against autovacuum, I'm afraid. Not often, but\noften enough to be annoying.\n\nPossibly we could run them in a TAP test that configures a cluster\nwith autovac disabled?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Apr 2019 14:05:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-30 14:05:50 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > It's the lock-upgrade problem I theorized about\n> > upthread. ReindexIndex(), via RangeVarCallbackForReindexIndex(), takes a\n> > ShareLock on pg_class, and then goes on to upgrade to RowExclusiveLock\n> > in RelationSetNewRelfilenode(). But at that time another session\n> > obviously can already have the ShareLock and would also want to upgrade.\n> \n> Hmm. Note that this is totally independent of the deadlock mechanism\n> I reported in my last message on this thread.\n\nYea :(\n\n\n> > I'm not sure it's worth fixing this.\n> \n> I am not sure it's even *possible* to fix all these cases.\n\nI think it's worth fixing the most common ones though. It sure sucks\nthat a plain REINDEX TABLE pg_class; isn't safe to run.\n\n\n> Even if we could, it's out of scope for v12 let alone the back branches.\n\nUnfortunately agreed. It's possible we could come up with a fix to\nbackpatch after maturing some, but certainly not before the release.\n\n\n> I think the only practical solution is to remove those reindex tests.\n> Even if we ran them in a script with no concurrent scripts, there'd\n> be risk of failures against autovacuum, I'm afraid. Not often, but\n> often enough to be annoying.\n\n> Possibly we could run them in a TAP test that configures a cluster\n> with autovac disabled?\n\nHm. Would it be sufficient to instead move them to a non-concurrent\ntest group, and stick a BEGIN; LOCK pg_class, ....; COMMIT; around it? I\nthink that ought to make it safe against autovacuum, and theoretically\nthere shouldn't be any overlapping pg_class/index updates that we'd need\nto wait for?\n\nThis is a pretty finicky area of the code, with obviously not enough\ntest coverage. I'm inclined to remove them from the back branches, and\ntry to get them working in master?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Apr 2019 11:27:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-30 14:05:50 -0400, Tom Lane wrote:\n>> Possibly we could run them in a TAP test that configures a cluster\n>> with autovac disabled?\n\n> Hm. Would it be sufficient to instead move them to a non-concurrent\n> test group, and stick a BEGIN; LOCK pg_class, ....; COMMIT; around it?\n\nDoubt it. Maybe you could get away with it given that autovacuum and\nautoanalyze only do non-transactional updates to pg_class, but that\nseems like a pretty shaky assumption.\n\n> This is a pretty finnicky area of the code, with obviously not enough\n> test coverage. I'm inclined to remove them from the back branches, and\n> try to get them working in master?\n\nI think trying to get this \"working\" is a v13 task now. We've obviously\nnever tried to stress the case before, so you're neither fixing a\nregression nor fixing a new-in-v12 issue.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Apr 2019 14:41:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-30 14:41:00 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-04-30 14:05:50 -0400, Tom Lane wrote:\n> >> Possibly we could run them in a TAP test that configures a cluster\n> >> with autovac disabled?\n> \n> > Hm. Would it be sufficient to instead move them to a non-concurrent\n> > test group, and stick a BEGIN; LOCK pg_class, ....; COMMIT; around it?\n> \n> Doubt it. Maybe you could get away with it given that autovacuum and\n> autoanalyze only do non-transactional updates to pg_class, but that\n> seems like a pretty shaky assumption.\n\nI was pondering that autovacuum shouldn't play a role because it ought\nto never cause a DELETE_IN_PROGRESS, because it shouldn't affect the\nOldestXmin horizon. But that reasoning, even if correct, doesn't hold\nfor analyze, which (much to my chagrin) holds a full-blown\nsnapshot.\n\n\n> > This is a pretty finicky area of the code, with obviously not enough\n> > test coverage. I'm inclined to remove them from the back branches, and\n> > try to get them working in master?\n> \n> I think trying to get this \"working\" is a v13 task now. We've obviously\n> never tried to stress the case before, so you're neither fixing a\n> regression nor fixing a new-in-v12 issue.\n\nWell, the tests *do* test that a previously existing all-branches bug\ndoesn't exist, no (albeit one just triggering an assert)? I'm not\ntalking about making this concurrency safe, just about whether it's\npossible to somehow keep the tests.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Apr 2019 12:03:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-30 14:41:00 -0400, Tom Lane wrote:\n>> I think trying to get this \"working\" is a v13 task now. We've obviously\n>> never tried to stress the case before, so you're neither fixing a\n>> regression nor fixing a new-in-v12 issue.\n\n> Well, the tests *do* test that a previously existing all-branches bug\n> doesn't exist, no (albeit one just triggering an assert)? I'm not\n> talking about making this concurrency safe, just about whether it's\n> possible to somehow keep the tests.\n\nWell, I told you what I thought was a safe way to run the tests.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Apr 2019 15:11:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-30 15:11:43 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-04-30 14:41:00 -0400, Tom Lane wrote:\n> > > On 2019-04-30 12:03:08 -0700, Andres Freund wrote:\n> > > > This is a pretty finicky area of the code, with obviously not enough\n> > > > test coverage. I'm inclined to remove them from the back branches, and\n> > > > try to get them working in master?\n> > >\n> > > I think trying to get this \"working\" is a v13 task now. We've obviously\n> > > never tried to stress the case before, so you're neither fixing a\n> > > regression nor fixing a new-in-v12 issue.\n> \n> > Well, the tests *do* test that a previously existing all-branches bug\n> > doesn't exist, no (albeit one just triggering an assert)? I'm not\n> > talking about making this concurrency safe, just about whether it's\n> > possible to somehow keep the tests.\n> \n> Well, I told you what I thought was a safe way to run the tests.\n\nShrug. I was responding to you talking about \"neither fixing a\nregression nor fixing a new-in-v12 issue\", when I explicitly was talking\nabout tests for the bug this thread is about. Not sure why \"Well, I told\nyou what I thought was a safe way to run the tests.\" is a helpful answer\nin turn.\n\nI'm not wild to go for a separate TAP test. A separate initdb cycle for\na test that takes about 30ms seems a bit over the top. So I'm\ninclined to either try running it in a serial step on the buildfarm\n(survived a few dozen cycles with -DRELCACHE_FORCE_RELEASE\n-DCATCACHE_FORCE_RELEASE, and a few with -DCLOBBER_CACHE_ALWAYS), or\njust remove them altogether. Or remove it altogether until we fix\nthis. Since you indicated a preference against the former, I'll remove\nit in a bit until I hear otherwise.\n\nI'll add it to my todo list to try to fix the concurrency issues for 13.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Apr 2019 15:10:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I'm not wild to go for a separate TAP test. A separate initdb cycle for\n> a test that takes about 30ms seems a bit over the top.\n\nFair enough.\n\n> So I'm\n> inclined to either try running it in a serial step on the buildfarm\n> (survived a few dozen cycles with -DRELCACHE_FORCE_RELEASE\n> -DCATCACHE_FORCE_RELEASE, and a few with -DCLOBBER_CACHE_ALWAYS), or\n> just remove them altogether. Or remove it altogether until we fix\n> this. Since you indicated a preference against the former, I'll remove\n> it in a bit until I hear otherwise.\n\n> I'll add it to my todo list to try to fix the concurrency issues for 13.\n\nIf you're really expecting to have a go at that during the v13 cycle,\nI think we could live without these test cases till then.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Apr 2019 18:24:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Just when you thought it was safe to go back in the water ...\n\nmarkhor just reported in with results showing that we have worse\nproblems than deadlock-prone tests in the back branches: 9.4\nfor example looks like\n\n --\n -- whole tables\n REINDEX TABLE pg_class; -- mapped, non-shared, critical\n+ ERROR: could not read block 0 in file \"base/16384/27769\": read only 0 of 8192 bytes\n REINDEX TABLE pg_index; -- non-mapped, non-shared, critical\n+ ERROR: could not read block 0 in file \"base/16384/27769\": read only 0 of 8192 bytes\n REINDEX TABLE pg_operator; -- non-mapped, non-shared, critical\n+ ERROR: could not read block 0 in file \"base/16384/27769\": read only 0 of 8192 bytes\n REINDEX TABLE pg_database; -- mapped, shared, critical\n+ ERROR: could not read block 0 in file \"base/16384/27769\": read only 0 of 8192 bytes\n REINDEX TABLE pg_shdescription; -- mapped, shared non-critical\n+ ERROR: could not read block 0 in file \"base/16384/27769\": read only 0 of 8192 bytes\n -- Check that individual system indexes can be reindexed. That's a bit\n -- different from the entire-table case because reindex_relation\n -- treats e.g. 
pg_class special.\n REINDEX INDEX pg_class_oid_index; -- mapped, non-shared, critical\n+ ERROR: could not read block 0 in file \"base/16384/27769\": read only 0 of 8192 bytes\n REINDEX INDEX pg_class_relname_nsp_index; -- mapped, non-shared, non-critical\n+ ERROR: could not read block 0 in file \"base/16384/27769\": read only 0 of 8192 bytes\n REINDEX INDEX pg_index_indexrelid_index; -- non-mapped, non-shared, critical\n+ ERROR: could not read block 0 in file \"base/16384/27769\": read only 0 of 8192 bytes\n REINDEX INDEX pg_index_indrelid_index; -- non-mapped, non-shared, non-critical\n+ ERROR: could not read block 0 in file \"base/16384/27769\": read only 0 of 8192 bytes\n REINDEX INDEX pg_database_oid_index; -- mapped, shared, critical\n+ ERROR: could not read block 0 in file \"base/16384/27769\": read only 0 of 8192 bytes\n REINDEX INDEX pg_shdescription_o_c_index; -- mapped, shared, non-critical\n+ ERROR: could not read block 0 in file \"base/16384/27769\": read only 0 of 8192 bytes\n\nNo doubt this is triggered by CLOBBER_CACHE_ALWAYS.\n\nGiven this, I'm rethinking my position that we can dispense with these\ntest cases. Let's try putting them in a standalone test script, and\nsee whether that leads to failures or not. Even if it does, we'd\nbetter keep them until we've got a fully clean bill of health from\nthe buildfarm.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Apr 2019 18:42:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-30 18:42:36 -0400, Tom Lane wrote:\n> markhor just reported in with results showing that we have worse\n> problems than deadlock-prone tests in the back branches: 9.4\n> for example looks like\n\n> -- whole tables\n> REINDEX TABLE pg_class; -- mapped, non-shared, critical\n> + ERROR: could not read block 0 in file \"base/16384/27769\": read only 0 of 8192 bytes\n\nUgh. Also failed on 9.6.\n\n\n> Given this, I'm rethinking my position that we can dispense with these\n> test cases. Let's try putting them in a standalone test script, and\n> see whether that leads to failures or not. Even if it does, we'd\n> better keep them until we've got a fully clean bill of health from\n> the buildfarm.\n\nYea. Seems likely this indicates a proper, distinct, bug :/\n\nI'll move the test into a new \"reindex_catalog\" test, with a comment\nexplaining that the failure cases necessitating that are somewhere\nbetween bugs, ugly warts, and hard-to-fix edge cases.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Apr 2019 15:53:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "On 2019-04-30 15:53:07 -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2019-04-30 18:42:36 -0400, Tom Lane wrote:\n> > markhor just reported in with results showing that we have worse\n> > problems than deadlock-prone tests in the back branches: 9.4\n> > for example looks like\n> \n> > -- whole tables\n> > REINDEX TABLE pg_class; -- mapped, non-shared, critical\n> > + ERROR: could not read block 0 in file \"base/16384/27769\": read only 0 of 8192 bytes\n> \n> Ugh. Also failed on 9.6.\n\nI see the bug. Turns out we need to figure out another way to solve the\nassertion triggered by doing catalog updates within\nRelationSetNewRelfilenode() - we can't just move the\nSetReindexProcessing() before it. When CCA is enabled, the\nCommandCounterIncrement() near the tail of RelationSetNewRelfilenode()\ntriggers a rebuild of the catalog entries - but without the\nSetReindexProcessing() those scans will try to use the index currently\nbeing rebuilt. Which then predictably fails:\n\n#0 mdread (reln=0x5600aea36498, forknum=MAIN_FORKNUM, blocknum=0, buffer=0x7f71037db800 \"\") at /home/andres/src/postgresql/src/backend/storage/smgr/md.c:633\n#1 0x00005600ae3f656f in smgrread (reln=0x5600aea36498, forknum=MAIN_FORKNUM, blocknum=0, buffer=0x7f71037db800 \"\")\n at /home/andres/src/postgresql/src/backend/storage/smgr/smgr.c:590\n#2 0x00005600ae3b4c13 in ReadBuffer_common (smgr=0x5600aea36498, relpersistence=112 'p', forkNum=MAIN_FORKNUM, blockNum=0, mode=RBM_NORMAL, strategy=0x0, \n hit=0x7fff5bb11cab) at /home/andres/src/postgresql/src/backend/storage/buffer/bufmgr.c:896\n#3 0x00005600ae3b44ab in ReadBufferExtended (reln=0x7f7107972540, forkNum=MAIN_FORKNUM, blockNum=0, mode=RBM_NORMAL, strategy=0x0)\n at /home/andres/src/postgresql/src/backend/storage/buffer/bufmgr.c:664\n#4 0x00005600ae3b437f in ReadBuffer (reln=0x7f7107972540, blockNum=0) at /home/andres/src/postgresql/src/backend/storage/buffer/bufmgr.c:596\n#5 0x00005600ae00e0b3 in _bt_getbuf (rel=0x7f7107972540, 
blkno=0, access=1) at /home/andres/src/postgresql/src/backend/access/nbtree/nbtpage.c:805\n#6 0x00005600ae00dd2a in _bt_heapkeyspace (rel=0x7f7107972540) at /home/andres/src/postgresql/src/backend/access/nbtree/nbtpage.c:694\n#7 0x00005600ae01679c in _bt_first (scan=0x5600aea44440, dir=ForwardScanDirection) at /home/andres/src/postgresql/src/backend/access/nbtree/nbtsearch.c:1237\n#8 0x00005600ae012617 in btgettuple (scan=0x5600aea44440, dir=ForwardScanDirection) at /home/andres/src/postgresql/src/backend/access/nbtree/nbtree.c:247\n#9 0x00005600ae005572 in index_getnext_tid (scan=0x5600aea44440, direction=ForwardScanDirection)\n at /home/andres/src/postgresql/src/backend/access/index/indexam.c:550\n#10 0x00005600ae00571e in index_getnext_slot (scan=0x5600aea44440, direction=ForwardScanDirection, slot=0x5600ae9c6ed0)\n at /home/andres/src/postgresql/src/backend/access/index/indexam.c:642\n#11 0x00005600ae003e54 in systable_getnext (sysscan=0x5600aea44080) at /home/andres/src/postgresql/src/backend/access/index/genam.c:450\n#12 0x00005600ae564292 in ScanPgRelation (targetRelId=1259, indexOK=true, force_non_historic=false)\n at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:365\n#13 0x00005600ae568203 in RelationReloadNailed (relation=0x5600aea0c4d0) at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:2292\n#14 0x00005600ae568621 in RelationClearRelation (relation=0x5600aea0c4d0, rebuild=true) at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:2425\n#15 0x00005600ae569081 in RelationCacheInvalidate () at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:2858\n#16 0x00005600ae55b32b in InvalidateSystemCaches () at /home/andres/src/postgresql/src/backend/utils/cache/inval.c:649\n#17 0x00005600ae55b408 in AcceptInvalidationMessages () at /home/andres/src/postgresql/src/backend/utils/cache/inval.c:708\n#18 0x00005600ae3d7b22 in LockRelationOid (relid=1259, lockmode=1) at 
/home/andres/src/postgresql/src/backend/storage/lmgr/lmgr.c:136\n#19 0x00005600adf85ad2 in relation_open (relationId=1259, lockmode=1) at /home/andres/src/postgresql/src/backend/access/common/relation.c:56\n#20 0x00005600ae040337 in table_open (relationId=1259, lockmode=1) at /home/andres/src/postgresql/src/backend/access/table/table.c:43\n#21 0x00005600ae564215 in ScanPgRelation (targetRelId=2662, indexOK=false, force_non_historic=false)\n at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:348\n#22 0x00005600ae567ecf in RelationReloadIndexInfo (relation=0x7f7107972540) at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:2170\n#23 0x00005600ae5681d3 in RelationReloadNailed (relation=0x7f7107972540) at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:2270\n#24 0x00005600ae568621 in RelationClearRelation (relation=0x7f7107972540, rebuild=true) at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:2425\n#25 0x00005600ae568d19 in RelationFlushRelation (relation=0x7f7107972540) at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:2686\n#26 0x00005600ae568e32 in RelationCacheInvalidateEntry (relationId=2662) at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:2738\n#27 0x00005600ae55b1af in LocalExecuteInvalidationMessage (msg=0x5600aea262a8) at /home/andres/src/postgresql/src/backend/utils/cache/inval.c:589\n#28 0x00005600ae55af06 in ProcessInvalidationMessages (hdr=0x5600aea26250, func=0x5600ae55b0a8 <LocalExecuteInvalidationMessage>)\n\n\nI can think of several ways to properly fix this:\n\n1) Remove the CommandCounterIncrement() from\n RelationSetNewRelfilenode(), move it to the callers. 
That would allow\n for, I think, proper sequencing in reindex_index():\n\n /*\n * Create a new relfilenode - note that this doesn't make the new\n * relfilenode visible yet, we'd otherwise run into danger of that\n * index (which is empty at this point) being used while processing\n * cache invalidations.\n */\n RelationSetNewRelfilenode(iRel, persistence);\n\n /*\n * Before making the new relfilenode visible, prevent its use of the\n * to-be-reindexed index while building it.\n */\n SetReindexProcessing(heapId, indexId);\n\n CommandCounterIncrement();\n\n\n2) Separate out the state for the assertion triggered by\n SetReindexProcessing from the prohibition of the use of the index for\n searches.\n\n3) Turn on REINDEX_REL_SUPPRESS_INDEX_USE mode when reindexing\n pg_class. But that seems like a bigger hammer than necessary?\n\n\n\nSidenote: It'd be pretty helpful to have an option for the buildfarm etc\nto turn md.c type errors like this into PANICs.\n\n\n> > Given this, I'm rethinking my position that we can dispense with these\n> > test cases. Let's try putting them in a standalone test script, and\n> > see whether that leads to failures or not. Even if it does, we'd\n> > better keep them until we've got a fully clean bill of health from\n> > the buildfarm.\n> \n> Yea. Seems likely this indicates a proper, distinct, bug :/\n> \n> I'll move the test into a new \"reindex_catalog\" test, with a comment\n> explaining that the failure cases necessitating that are somewhere\n> between bugs, ugly warts, an hard to fix edge cases.\n\nJust pushed that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Apr 2019 18:36:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
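Option (1) in the message above has a second half that the quoted reindex_index() sequence does not show: RelationSetNewRelfilenode() itself would have to stop doing the CommandCounterIncrement(). A pseudocode sketch of that side, against the internals discussed in this thread (an illustration only, not actual committed PostgreSQL code):

```
/* pseudocode sketch of option (1): the CCI moves out to the callers */
void
RelationSetNewRelfilenode(Relation relation, char persistence)
{
    /* ... allocate the new relfilenode, update pg_class in place ... */

    /*
     * No CommandCounterIncrement() here anymore: a caller like
     * reindex_index() performs it itself, after SetReindexProcessing(),
     * so that cache invalidations triggered by the CCI cannot scan the
     * still-empty index being rebuilt.
     */
}
```

Every existing caller of the function would then be responsible for its own CommandCounterIncrement(), which is the API-compatibility concern debated in the rest of the thread.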
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-30 15:53:07 -0700, Andres Freund wrote:\n>> I'll move the test into a new \"reindex_catalog\" test, with a comment\n>> explaining that the failure cases necessitating that are somewhere\n>> between bugs, ugly warts, an hard to fix edge cases.\n\n> Just pushed that.\n\nlocust is kind of unimpressed:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2019-05-01%2003%3A12%3A13\n\nThe relevant bit of log is\n\n2019-05-01 05:24:47.527 CEST [97690:429] pg_regress/create_view LOG: statement: DROP SCHEMA temp_view_test CASCADE;\n2019-05-01 05:24:47.605 CEST [97690:430] pg_regress/create_view LOG: statement: DROP SCHEMA testviewschm2 CASCADE;\n2019-05-01 05:24:47.858 CEST [97694:1] [unknown] LOG: connection received: host=[local]\n2019-05-01 05:24:47.863 CEST [97694:2] [unknown] LOG: connection authorized: user=pgbuildfarm database=regression\n2019-05-01 05:24:47.878 CEST [97694:3] pg_regress/reindex_catalog LOG: statement: REINDEX TABLE pg_class;\n2019-05-01 05:24:48.887 CEST [97694:4] pg_regress/reindex_catalog ERROR: deadlock detected\n2019-05-01 05:24:48.887 CEST [97694:5] pg_regress/reindex_catalog DETAIL: Process 97694 waits for ShareLock on transaction 2559; blocked by process 97690.\n\tProcess 97690 waits for RowExclusiveLock on relation 1259 of database 16387; blocked by process 97694.\n\tProcess 97694: REINDEX TABLE pg_class;\n\tProcess 97690: DROP SCHEMA testviewschm2 CASCADE;\n2019-05-01 05:24:48.887 CEST [97694:6] pg_regress/reindex_catalog HINT: See server log for query details.\n2019-05-01 05:24:48.887 CEST [97694:7] pg_regress/reindex_catalog CONTEXT: while checking uniqueness of tuple (12,71) in relation \"pg_class\"\n2019-05-01 05:24:48.887 CEST [97694:8] pg_regress/reindex_catalog STATEMENT: REINDEX TABLE pg_class;\n2019-05-01 05:24:48.904 CEST [97690:431] pg_regress/create_view LOG: disconnection: session time: 0:00:03.748 user=pgbuildfarm database=regression 
host=[local]\n\nwhich is mighty confusing at first glance, but I think the explanation is\nthat what the postmaster is reporting is process 97690's *latest* query,\nnot what it's currently doing. What it's really currently doing at the\nmoment of the deadlock is cleaning out its temporary schema after the\nclient disconnected. So this says you were careless about where to insert\nthe reindex_catalog test in the test schedule: it can't be after anything\nthat creates any temp objects. That seems like kind of a problem :-(.\nWe could put it second, after the tablespace test, but that would mean\nthat we're reindexing after very little churn has happened in the\ncatalogs, which doesn't seem like much of a stress test.\n\nAnother fairly interesting thing is that this log includes the telltale\n\n2019-05-01 05:24:48.887 CEST [97694:7] pg_regress/reindex_catalog CONTEXT: while checking uniqueness of tuple (12,71) in relation \"pg_class\"\n\nWhy did I have to dig to find that information in HEAD? Have we lost\nsome useful context reporting? (Note this run is in the v10 branch.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 May 2019 00:43:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I see the bug. Turns out we need to figure out another way to solve the\n> assertion triggered by doing catalog updates within\n> RelationSetNewRelfilenode() - we can't just move the\n> SetReindexProcessing() before it. When CCA is enabled, the\n> CommandCounterIncrement() near the tail of RelationSetNewRelfilenode()\n> triggers a rebuild of the catalog entries - but without the\n> SetReindexProcessing() those scans will try to use the index currently\n> being rebuilt.\n\nYeah. I think what this demonstrates is that REINDEX INDEX has to have\nRelationSetIndexList logic similar to what REINDEX TABLE has, to control\nwhich indexes get updated when while we're rebuilding an index of\npg_class. In hindsight that seems glaringly obvious ... I wonder how we\nmissed that when we built that infrastructure for REINDEX TABLE?\n\nI'm pretty sure that infrastructure is my fault, so I'll take a\nwhack at fixing this.\n\nDid you figure out why this doesn't also happen in HEAD?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 May 2019 12:20:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "I wrote:\n> Did you figure out why this doesn't also happen in HEAD?\n\n... actually, HEAD *is* broken with CCA, just differently.\nI'm on it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 May 2019 12:59:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-01 12:20:22 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I see the bug. Turns out we need to figure out another way to solve the\n> > assertion triggered by doing catalog updates within\n> > RelationSetNewRelfilenode() - we can't just move the\n> > SetReindexProcessing() before it. When CCA is enabled, the\n> > CommandCounterIncrement() near the tail of RelationSetNewRelfilenode()\n> > triggers a rebuild of the catalog entries - but without the\n> > SetReindexProcessing() those scans will try to use the index currently\n> > being rebuilt.\n> \n> Yeah. I think what this demonstrates is that REINDEX INDEX has to have\n> RelationSetIndexList logic similar to what REINDEX TABLE has, to control\n> which indexes get updated when while we're rebuilding an index of\n> pg_class. In hindsight that seems glaringly obvious ... I wonder how we\n> missed that when we built that infrastructure for REINDEX TABLE?\n\nI'm not sure this is the right short-term answer. Why isn't it, for now,\nsufficient to do what I suggested with RelationSetNewRelfilenode() not\ndoing the CommandCounterIncrement(), and reindex_index() then doing the\nSetReindexProcessing() before a CommandCounterIncrement()? That's like\n~10 line code change, and a few more with comments.\n\nThere is the danger that the current and above approach basically relies\non there not to be any non-inplace updates during reindex. But at the\nmoment code does take care to use inplace updates\n(cf. index_update_stats()).\n\nIt's not clear to me whether the approach of using\nRelationSetIndexList() in reindex_index() would be meaningfully more\nrobust against non-inplace updates during reindex either - ISTM we'd\njust as well skip the necessary index insertions if we hid the index\nbeing rebuilt. 
Skipping to-be-rebuilt indexes works for\nreindex_relation() because they're going to be rebuilt subsequently (and\nthus the missing index rows don't matter) - but it'd not work for\nreindexing a single index, because it'll not get the result at a later\nstage.\n\n\n> I'm pretty sure that infrastructure is my fault, so I'll take a\n> whack at fixing this.\n> \n> Did you figure out why this doesn't also happen in HEAD?\n\nIt does for me now, at least when just doing a reindex in isolation (CCA\ntests would have taken too long last night). I'm not sure why I wasn't\npreviously able to trigger it and markhor hasn't run yet on master.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 May 2019 10:06:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "On 2019-05-01 10:06:03 -0700, Andres Freund wrote:\n> I'm not sure this is the right short-term answer. Why isn't it, for now,\n> sufficient to do what I suggested with RelationSetNewRelfilenode() not\n> doing the CommandCounterIncrement(), and reindex_index() then doing the\n> SetReindexProcessing() before a CommandCounterIncrement()? That's like\n> ~10 line code change, and a few more with comments.\n> \n> There is the danger that the current and above approach basically relies\n> on there not to be any non-inplace updates during reindex. But at the\n> moment code does take care to use inplace updates\n> (cf. index_update_stats()).\n> \n> It's not clear to me whether the approach of using\n> RelationSetIndexList() in reindex_index() would be meaningfully more\n> robust against non-inplace updates during reindex either - ISTM we'd\n> just as well skip the necessary index insertions if we hid the index\n> being rebuilt. Skipping to-be-rebuilt indexes works for\n> reindex_relation() because they're going to be rebuilt subsequently (and\n> thus the missing index rows don't matter) - but it'd not work for\n> reindexing a single index, because it'll not get the result at a later\n> stage.\n\nFWIW, the dirty-hack version (attached) of the CommandCounterIncrement()\napproach fixes the issue for a REINDEX pg_class_oid_index; in isolation\neven when using CCA. Started a whole CCA testrun with it, but the\nresults of that will obviously not be in quickly.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 1 May 2019 10:21:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-01 10:06:03 -0700, Andres Freund wrote:\n>> I'm not sure this is the right short-term answer. Why isn't it, for now,\n>> sufficient to do what I suggested with RelationSetNewRelfilenode() not\n>> doing the CommandCounterIncrement(), and reindex_index() then doing the\n>> SetReindexProcessing() before a CommandCounterIncrement()? That's like\n>> ~10 line code change, and a few more with comments.\n\nThat looks like a hack to me...\n\nThe main thing I'm worried about right now is that I realized that\nour recovery from errors in this area is completely hosed, cf\nhttps://www.postgresql.org/message-id/4541.1556736252@sss.pgh.pa.us\n\nThe problem with CCA is actually kind of convenient for testing that,\nsince it means you don't have to inject any new fault to get an error\nto be thrown while the index relcache entry is in the needing-to-be-\nreverted state. So I'm going to work on fixing the recovery first.\nBut I suspect that doing this right will require the more complicated\napproach anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 May 2019 15:08:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-01 10:21:15 -0700, Andres Freund wrote:\n> FWIW, the dirty-hack version (attached) of the CommandCounterIncrement()\n> approach fixes the issue for a REINDEX pg_class_oid_index; in solation\n> even when using CCA. Started a whole CCA testrun with it, but the\n> results of that will obviously not be in quick.\n\nNot finished yet, but it got pretty far:\n\nparallel group (5 tests): create_index_spgist index_including_gist index_including create_view create_index\n create_index ... ok 500586 ms\n create_index_spgist ... ok 86890 ms\n create_view ... ok 466512 ms\n index_including ... ok 150279 ms\n index_including_gist ... ok 109087 ms\ntest reindex_catalog ... ok 2285 ms\nparallel group (16 tests): create_cast roleattributes drop_if_exists create_aggregate vacuum create_am hash_func select create_function_3 constraints typed_table rolenames errors updatable_views triggers inherit\n\nthat's where it's at right now:\n\nparallel group (20 tests): init_privs security_label gin password drop_operator lock gist tablesample spgist\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 May 2019 12:39:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2019-05-01 10:06:03 -0700, Andres Freund wrote:\n>>> I'm not sure this is the right short-term answer. Why isn't it, for now,\n>>> sufficient to do what I suggested with RelationSetNewRelfilenode() not\n>>> doing the CommandCounterIncrement(), and reindex_index() then doing the\n>>> SetReindexProcessing() before a CommandCounterIncrement()? That's like\n>>> ~10 line code change, and a few more with comments.\n\n> That looks like a hack to me...\n\n> The main thing I'm worried about right now is that I realized that\n> our recovery from errors in this area is completely hosed, cf\n> https://www.postgresql.org/message-id/4541.1556736252@sss.pgh.pa.us\n\nOK, so per the other thread, it seems like the error recovery problem\nisn't going to affect this directly. However, I still don't like this\nproposal much; the reason being that it's a rather fundamental change\nin the API of RelationSetNewRelfilenode. This will certainly break\nany external callers of that function --- and silently, too.\n\nAdmittedly, there might not be any outside callers, but I don't really\nlike that assumption for something we're going to have to back-patch.\n\nThe solution I'm thinking of should have much more localized effects,\nbasically just in reindex_index and RelationSetNewRelfilenode, which is\nwhy I like it better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 May 2019 19:41:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-01 19:41:24 -0400, Tom Lane wrote:\n> I wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >> On 2019-05-01 10:06:03 -0700, Andres Freund wrote:\n> >>> I'm not sure this is the right short-term answer. Why isn't it, for now,\n> >>> sufficient to do what I suggested with RelationSetNewRelfilenode() not\n> >>> doing the CommandCounterIncrement(), and reindex_index() then doing the\n> >>> SetReindexProcessing() before a CommandCounterIncrement()? That's like\n> >>> ~10 line code change, and a few more with comments.\n> \n> > That looks like a hack to me...\n> \n> > The main thing I'm worried about right now is that I realized that\n> > our recovery from errors in this area is completely hosed, cf\n> > https://www.postgresql.org/message-id/4541.1556736252@sss.pgh.pa.us\n> \n> OK, so per the other thread, it seems like the error recovery problem\n> isn't going to affect this directly. However, I still don't like this\n> proposal much; the reason being that it's a rather fundamental change\n> in the API of RelationSetNewRelfilenode. This will certainly break\n> any external callers of that function --- and silently, too.\n> \n> Admittedly, there might not be any outside callers, but I don't really\n> like that assumption for something we're going to have to back-patch.\n\nCouldn't we just address that by adding a new\nRelationSetNewRelfilenodeInternal() that's then wrapped by\nRelationSetNewRelfilenode() which just does\nRelationSetNewRelfilenodeInternal();CCI();?\n\nDoesn't have to be ...Internal(), could also be\nRelationBeginSetNewRelfilenode() or such.\n\nI'm not sure why you think using CCI() for this purpose is a hack? 
To me\nthe ability to have catalog changes only take effect when they're all\ndone, and the system is ready for them, is one of the core purposes of\nthe infrastructure?\n\n\n> The solution I'm thinking of should have much more localized effects,\n> basically just in reindex_index and RelationSetNewRelfilenode, which is\n> why I like it better.\n\nWell, as I said before, I think hiding the to-be-rebuilt index from the\nlist of indexes is dangerous too - if somebody added an actual\nCatalogUpdate/Insert (rather than inplace_update) anywhere along the\nindex_build() path, we'd not get an assertion failure anymore, but just\nan index without the new entry. And given the fragility with HOT hiding\nthat a lot of the time, that seems dangerous to me.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 May 2019 18:25:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
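The wrapper split proposed in the message above would be shaped roughly as follows (a pseudocode sketch of the proposal only; RelationSetNewRelfilenodeInternal is just the placeholder name used in the message, not a committed API):

```
/* pseudocode sketch of the proposed wrapper split */
void
RelationSetNewRelfilenodeInternal(Relation relation, char persistence)
{
    /* ... the current function body, minus the trailing CCI ... */
}

void
RelationSetNewRelfilenode(Relation relation, char persistence)
{
    RelationSetNewRelfilenodeInternal(relation, persistence);
    CommandCounterIncrement();
}
```

External callers would keep the old name and the old behavior, while reindex_index() could call the Internal variant and defer the CommandCounterIncrement() until after SetReindexProcessing().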
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-01 19:41:24 -0400, Tom Lane wrote:\n>> OK, so per the other thread, it seems like the error recovery problem\n>> isn't going to affect this directly. However, I still don't like this\n>> proposal much; the reason being that it's a rather fundamental change\n>> in the API of RelationSetNewRelfilenode. This will certainly break\n>> any external callers of that function --- and silently, too.\n\n> Couldn't we just address that by adding a new\n> RelationSetNewRelfilenodeInternal() that's then wrapped by\n> RelationSetNewRelfilenode() which just does\n> RelationSetNewRelfilenodeInternal();CCI();?\n\nThat's just adding more ugliness ...\n\n>> The solution I'm thinking of should have much more localized effects,\n>> basically just in reindex_index and RelationSetNewRelfilenode, which is\n>> why I like it better.\n\n> Well, as I said before, I think hiding the to-be-rebuilt index from the\n> list of indexes is dangerous too - if somebody added an actual\n> CatalogUpdate/Insert (rather than inplace_update) anywhere along the\n> index_build() path, we'd not get an assertion failure anymore, but just\n> an index without the new entry. And given the fragility with HOT hiding\n> that a lot of the time, that seems dangerous to me.\n\nI think that argument is pretty pointless considering that \"REINDEX TABLE\npg_class\" does it this way, and that code is nearly old enough to vote.\nPerhaps there'd be value in rewriting things so that we don't need\nRelationSetIndexList at all, but it's not real clear to me what we'd do\ninstead, and in any case I don't agree with back-patching such a change.\nIn the near term it seems better to me to make \"REINDEX INDEX\nsome-pg_class-index\" handle this problem the same way \"REINDEX TABLE\npg_class\" has been doing for many years.\n\nAttached is a draft patch for this. 
It passes check-world with\nxxx_FORCE_RELEASE, and gets through reindexing pg_class with\nCLOBBER_CACHE_ALWAYS, but I've not completed a full CCA run.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 01 May 2019 22:01:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-01 22:01:53 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Well, as I said before, I think hiding the to-be-rebuilt index from the\n> > list of indexes is dangerous too - if somebody added an actual\n> > CatalogUpdate/Insert (rather than inplace_update) anywhere along the\n> > index_build() path, we'd not get an assertion failure anymore, but just\n> > an index without the new entry. And given the fragility with HOT hiding\n> > that a lot of the time, that seems dangerous to me.\n> \n> I think that argument is pretty pointless considering that \"REINDEX TABLE\n> pg_class\" does it this way, and that code is nearly old enough to\n> vote.\n\nIMO the reindex_relation() case isn't comparable. By my read the main\npurpose there is to prevent inserting into not-yet-rebuilt indexes. The\nrelevant comment says:\n\t * .... If we are processing pg_class itself, we want to make sure\n\t * that the updates do not try to insert index entries into indexes we\n\t * have not processed yet. (When we are trying to recover from corrupted\n\t * indexes, that could easily cause a crash.)\n\nNote the *not processed yet* bit. That's *not* comparable logic to\nhiding the index that *already* has been rebuilt, in the middle of\nreindex_index(). Yes, the way reindex_relation() is currently coded,\nthe RelationSetIndexList() *also* hides the already rebuilt index, but\nthat's hard for reindex_relation() to avoid, because it's outside of\nreindex_index().\n\n\n> +\t * If we are doing one index for reindex_relation, then we will find that\n> +\t * the index is already not present in the index list. 
In that case we\n> +\t * don't have to do anything to the index list here, which we mark by\n> +\t * clearing is_pg_class.\n> \t */\n\n> -\tRelationSetNewRelfilenode(iRel, persistence);\n> +\tis_pg_class = (RelationGetRelid(heapRelation) == RelationRelationId);\n> +\tif (is_pg_class)\n> +\t{\n> +\t\tallIndexIds = RelationGetIndexList(heapRelation);\n> +\t\tif (list_member_oid(allIndexIds, indexId))\n> +\t\t{\n> +\t\t\totherIndexIds = list_delete_oid(list_copy(allIndexIds), indexId);\n> +\t\t\t/* Ensure rd_indexattr is valid; see comments for RelationSetIndexList */\n> +\t\t\t(void) RelationGetIndexAttrBitmap(heapRelation, INDEX_ATTR_BITMAP_ALL);\n> +\t\t}\n> +\t\telse\n> +\t\t\tis_pg_class = false;\n> +\t}\n\nThat's not pretty either :(\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 May 2019 19:19:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-01 22:01:53 -0400, Tom Lane wrote:\n>> I think that argument is pretty pointless considering that \"REINDEX TABLE\n>> pg_class\" does it this way, and that code is nearly old enough to\n>> vote.\n\n> IMO the reindex_relation() case isn't comparable.\n\nIMV it's the exact same case: we need to perform a pg_class update while\none or more of pg_class's indexes shouldn't be touched. I am kind of\nwondering why it didn't seem to be necessary to cover this for REINDEX\nINDEX back in 2003, but it clearly is necessary now.\n\n> That's not pretty either :(\n\nSo, I don't like your patch, you don't like mine. Anybody else\nwant to weigh in?\n\nWe do not have the luxury of time to argue about this. If we commit\nsomething today, we *might* get a useful set of CLOBBER_CACHE_ALWAYS\nresults for all branches by Sunday. Those regression tests will have to\ncome out of the back branches on Sunday, because we are not shipping minor\nreleases with unstable regression tests, and I've heard no proposal for\navoiding the occasional-deadlock problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 May 2019 10:49:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-01 00:43:34 -0400, Tom Lane wrote:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2019-05-01%2003%3A12%3A13\n> \n> The relevant bit of log is\n> \n> 2019-05-01 05:24:47.527 CEST [97690:429] pg_regress/create_view LOG: statement: DROP SCHEMA temp_view_test CASCADE;\n> 2019-05-01 05:24:47.605 CEST [97690:430] pg_regress/create_view LOG: statement: DROP SCHEMA testviewschm2 CASCADE;\n> 2019-05-01 05:24:47.858 CEST [97694:1] [unknown] LOG: connection received: host=[local]\n> 2019-05-01 05:24:47.863 CEST [97694:2] [unknown] LOG: connection authorized: user=pgbuildfarm database=regression\n> 2019-05-01 05:24:47.878 CEST [97694:3] pg_regress/reindex_catalog LOG: statement: REINDEX TABLE pg_class;\n> 2019-05-01 05:24:48.887 CEST [97694:4] pg_regress/reindex_catalog ERROR: deadlock detected\n> 2019-05-01 05:24:48.887 CEST [97694:5] pg_regress/reindex_catalog DETAIL: Process 97694 waits for ShareLock on transaction 2559; blocked by process 97690.\n> \tProcess 97690 waits for RowExclusiveLock on relation 1259 of database 16387; blocked by process 97694.\n> \tProcess 97694: REINDEX TABLE pg_class;\n> \tProcess 97690: DROP SCHEMA testviewschm2 CASCADE;\n> 2019-05-01 05:24:48.887 CEST [97694:6] pg_regress/reindex_catalog HINT: See server log for query details.\n> 2019-05-01 05:24:48.887 CEST [97694:7] pg_regress/reindex_catalog CONTEXT: while checking uniqueness of tuple (12,71) in relation \"pg_class\"\n> 2019-05-01 05:24:48.887 CEST [97694:8] pg_regress/reindex_catalog STATEMENT: REINDEX TABLE pg_class;\n> 2019-05-01 05:24:48.904 CEST [97690:431] pg_regress/create_view LOG: disconnection: session time: 0:00:03.748 user=pgbuildfarm database=regression host=[local]\n> \n> which is mighty confusing at first glance, but I think the explanation is\n> that what the postmaster is reporting is process 97690's *latest* query,\n> not what it's currently doing. 
What it's really currently doing at the\n> moment of the deadlock is cleaning out its temporary schema after the\n> client disconnected. So this says you were careless about where to insert\n> the reindex_catalog test in the test schedule: it can't be after anything\n> that creates any temp objects. That seems like kind of a problem :-(.\n> We could put it second, after the tablespace test, but that would mean\n> that we're reindexing after very little churn has happened in the\n> catalogs, which doesn't seem like much of a stress test.\n\nI'm inclined to remove the tests from the backbranches, once we've\ncommitted a fix for the actual REINDEX issue, and most of the farm has\nbeen through a cycle or three. I don't think we'll figure out how to\nmake them robust in time for next week's release.\n\nI don't think we can really rely on the post-disconnect phase completing\nin a particularly deterministic time. I was wondering for a second\nwhether we could just trigger the cleanup of temp tables in the test\ngroup before the reindex_catalog test with an explicit DISCARD, but\nthat seems mighty fragile too.\n\n\nObviously not something trivially changeable, and never even remotely\nbackpatchable, but once more I'm questioning the wisdom of all the\nearly-release logic we have for catalog tables...\n\n\n> Another fairly interesting thing is that this log includes the telltale\n> \n> 2019-05-01 05:24:48.887 CEST [97694:7] pg_regress/reindex_catalog CONTEXT: while checking uniqueness of tuple (12,71) in relation \"pg_class\"\n> \n> Why did I have to dig to find that information in HEAD? Have we lost\n> some useful context reporting? (Note this run is in the v10 branch.)\n\nHm. There's still code for it. 
And I found another run on HEAD still\nshowing it\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2019-05-01%2010%3A45%3A00\n\n+ERROR: deadlock detected\n+DETAIL: Process 13455 waits for ShareLock on transaction 2986; blocked by process 16881.\n+Process 16881 waits for RowExclusiveLock on relation 1259 of database 16384; blocked by process 13455.\n+HINT: See server log for query details.\n+CONTEXT: while checking uniqueness of tuple (39,35) in relation \"pg_class\"\n\nWhat made you think it's not present on HEAD?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 May 2019 07:50:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
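The DISCARD idea floated in the message above would amount to something like this at the top of the catalog-reindex test file (a sketch only; as the message says, it was judged too fragile, since DISCARD only cleans up the issuing session's own temporary namespace and cannot force another session's disconnect-time cleanup to finish first):

```
-- hypothetical test preamble: drop this session's temp objects
-- deterministically instead of racing a prior session's cleanup
DISCARD TEMP;
REINDEX TABLE pg_class;
```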
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-01 00:43:34 -0400, Tom Lane wrote:\n>> ... What it's really currently doing at the\n>> moment of the deadlock is cleaning out its temporary schema after the\n>> client disconnected.\n\n> I'm inclined to remove the tests from the backbranches, once we've\n> committed a fix for the actual REINDEX issue, and most of the farm has\n> been through a cycle or three. I don't think we'll figure out how to\n> make them robust in time for next week's release.\n\nYeah, as I just said in my other message, I see no other alternative for\nnext week's releases. We can leave the test in place in HEAD a bit\nlonger, but I don't really want it there for the beta either, unless we\ncan think of some better plan.\n\n> I don't think we can really rely on the post-disconnect phase completing\n> in a particularly deterministic time.\n\nExactly :-(\n\n>> Another fairly interesting thing is that this log includes the telltale\n>> 2019-05-01 05:24:48.887 CEST [97694:7] pg_regress/reindex_catalog CONTEXT: while checking uniqueness of tuple (12,71) in relation \"pg_class\"\n>> Why did I have to dig to find that information in HEAD? Have we lost\n>> some useful context reporting? (Note this run is in the v10 branch.)\n\nFWIW, as best I can reconstruct the sequence of events, I might just\nnot've looked. I got an error and just assumed it was the same as what\nwe'd seen in the buildfarm; but now we realize that there were multiple\nways to get deadlocks, and only some of them would have shown this.\nFor the moment I'm willing to assume this isn't a real issue.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 May 2019 11:08:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-02 10:49:00 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-05-01 22:01:53 -0400, Tom Lane wrote:\n> >> I think that argument is pretty pointless considering that \"REINDEX TABLE\n> >> pg_class\" does it this way, and that code is nearly old enough to\n> >> vote.\n>\n> > IMO the reindex_relation() case isn't comparable.\n>\n> IMV it's the exact same case: we need to perform a pg_class update while\n> one or more of pg_class's indexes shouldn't be touched. I am kind of\n> wondering why it didn't seem to be necessary to cover this for REINDEX\n> INDEX back in 2003, but it clearly is necessary now.\n>\n> > That's not pretty either :(\n>\n> So, I don't like your patch, you don't like mine. Anybody else\n> want to weigh in?\n\nWell, I think I can live with your fix. I think it's likely to hide\nfuture bugs, but this is an active bug. And, as you say, we don't have a\nlot of time.\n\n\nISTM that if we go down this path, we should split (not now, but either\nstill in v12, or *early* in v13), the sets of indexes that are intended\nto a) not be used for catalog queries and b) be skippable for index\ninsertions. It seems pretty likely that somebody will otherwise soon\nintroduce a heap_update() somewhere into the index build process, and\nit'll work just fine in testing due to HOT.\n\n\nWe already have somewhat separate and half-complementary mechanisms\nhere:\n1) When REINDEX_REL_SUPPRESS_INDEX_USE is set (only cluster.c), we mark\n indexes on tables as unused by SetReindexPending(). That prevents them\n from being used for catalog queries. But it disallows new inserts\n into them.\n\n2) When reindex_index() starts processing an index, it marks it as being\n processed. Indexes on this list are not allowed to be inserted into\n (enforced by assertions). 
Note that this currently removes the\n specific index from the list set by 1).\n\n It also marks the heap as being reindexed, which then triggers (as\n the sole effect afaict) some special-case logic in\n index_update_stats() that avoids the syscache and opts for a direct\n manual catalog scan. I'm a bit confused as to why that's necessary.\n\n3) Just for pg_class, reindex_relation() hard-sets the list of\n indexes that are already rebuilt. This allows index insertions into\n the indexes that are later going to be rebuilt - which is\n necessary because we currently update pg_class in\n RelationSetNewRelfilenode().\n\nSeems odd to resort to RelationSetIndexList(), when we could just mildly\nextend the SetReindexPending() logic instead.\n\nI kinda wonder if there's not a third approach hiding somewhere here. We\ncould just stop updating pg_class in RelationSetNewRelfilenode(),\nwhen it's an index on pg_class. The pg_class changes done for\nmapped indexes aren't really crucial, and are going to be\noverwritten later by index_update_stats(). That'd have the big\nadvantage that we'd afaict not need the logic of having to allow\ncatalog modifications during the reindex path at all.\n\n\n> We do not have the luxury of time to argue about this. If we commit\n> something today, we *might* get a useful set of CLOBBER_CACHE_ALWAYS\n> results for all branches by Sunday.\n\nYea. I think I'll also just trigger a manual CCA run of check-world for\nall branches (thank god for old workstations). And CCR for at least a\nfew crucial bits.\n\n\n> Those regression tests will have to come out of the back branches on\n> Sunday, because we are not shipping minor releases with unstable\n> regression tests, and I've heard no proposal for avoiding the\n> occasional-deadlock problem.\n\nYea, I've just proposed the same in a separate thread.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 May 2019 08:31:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> ISTM that if we go down this path, we should split (not now, but either\n> still in v12, or *early* in v13), the sets of indexes that are intended\n> to a) not being used for catalog queries b) may be skipped for index\n> insertions. It seems pretty likely that somebody will otherwise soon\n> introduce an heap_update() somewhere into the index build process, and\n> it'll work just fine in testing due to HOT.\n\nGiven the assertions you added in CatalogIndexInsert, I'm not sure\nwhy that's a big hazard?\n\n> I kinda wonder if there's not a third approach hiding somewhere here. We\n> could just stop updating pg_class in RelationSetNewRelfilenode() in\n> pg_class, when it's an index on pg_class.\n\nHmm ... are all those indexes mapped? I guess so. But don't we need\nto worry about resetting relfrozenxid?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 May 2019 11:41:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-02 11:41:28 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > ISTM that if we go down this path, we should split (not now, but either\n> > still in v12, or *early* in v13), the sets of indexes that are intended\n> > to a) not being used for catalog queries b) may be skipped for index\n> > insertions. It seems pretty likely that somebody will otherwise soon\n> > introduce an heap_update() somewhere into the index build process, and\n> > it'll work just fine in testing due to HOT.\n> \n> Given the assertions you added in CatalogIndexInsert, I'm not sure\n> why that's a big hazard?\n\nAfaict the new RelationSetIndexList() trickery would prevent that\nassertion from being reached, because RelationGetIndexList() will not\nsee the current index, and therefore CatalogIndexInsert() won't know to\nassert it either. It kinda works today for reindex_relation(), because\nwe'll \"un-hide\" the already rebuilt indexes - i.e. we'd not notice the\nbug on pg_class' first index, but for later ones it'd trigger. I guess\nyou could argue that we'll just have to rely on REINDEX TABLE pg_class\nregression tests to make sure REINDEX INDEX pg_class_* ain't broken :/.\n\n\n> > I kinda wonder if there's not a third approach hiding somewhere here. We\n> > could just stop updating pg_class in RelationSetNewRelfilenode() in\n> > pg_class, when it's an index on pg_class.\n> \n> Hmm ... are all those indexes mapped? 
I guess so.\n\nThey are:\n\npostgres[13357][1]=# SELECT oid::regclass, relfilenode FROM pg_class WHERE oid IN (SELECT indexrelid FROM pg_index WHERE indrelid = 'pg_class'::regclass);\n┌───────────────────────────────────┬─────────────┐\n│ oid │ relfilenode │\n├───────────────────────────────────┼─────────────┤\n│ pg_class_oid_index │ 0 │\n│ pg_class_relname_nsp_index │ 0 │\n│ pg_class_tblspc_relfilenode_index │ 0 │\n└───────────────────────────────────┴─────────────┘\n(3 rows)\n\nI guess that doesn't stricly have to be the case for at least some of\nthem, but it seems unlikely that we'd want to change that.\n\n\n> But don't we need to worry about resetting relfrozenxid?\n\nIndexes don't have that though? We couldn't do it for pg_class itself,\nbut that's not a problem here.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 May 2019 08:56:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-02 11:41:28 -0400, Tom Lane wrote:\n>> But don't we need to worry about resetting relfrozenxid?\n\n> Indexes don't have that though? We couldn't do it for pg_class itself,\n> but that's not a problem here.\n\nHmm. Again, that seems like the sort of assumption that could bite\nus later. But maybe we could add some assertions that the new values\nmatch the old? I'll experiment.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 May 2019 12:02:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2019-05-02 11:41:28 -0400, Tom Lane wrote:\n>>> But don't we need to worry about resetting relfrozenxid?\n\n>> Indexes don't have that though? We couldn't do it for pg_class itself,\n>> but that's not a problem here.\n\n> Hmm. Again, that seems like the sort of assumption that could bite\n> us later. But maybe we could add some assertions that the new values\n> match the old? I'll experiment.\n\nHuh, this actually seems to work. The attached is a quick hack for\ntesting. It gets through check-world straight up and with\nxxx_FORCE_RELEASE, and I've verified reindexing pg_class works with\nCLOBBER_CACHE_ALWAYS, but it'll be a few hours before I have a full CCA\nrun.\n\nOne interesting thing that turns up in check-world is that if wal_level\nis minimal, we have to manually force an XID to be assigned, else\nreindexing pg_class fails with \"cannot commit a transaction that deleted\nfiles but has no xid\" :-(. Perhaps there's some other cleaner place to\ndo that?\n\nIf we go this path, we should remove RelationSetIndexList altogether\n(in HEAD), but I've not done so here. The comments likely need more\nwork too.\n\nI have to go out and do some errands for the next few hours, so I can't\npush this forward any more right now.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 02 May 2019 12:59:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-02 12:59:55 -0400, Tom Lane wrote:\n> I wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >> On 2019-05-02 11:41:28 -0400, Tom Lane wrote:\n> >>> But don't we need to worry about resetting relfrozenxid?\n> \n> >> Indexes don't have that though? We couldn't do it for pg_class itself,\n> >> but that's not a problem here.\n> \n> > Hmm. Again, that seems like the sort of assumption that could bite\n> > us later. But maybe we could add some assertions that the new values\n> > match the old? I'll experiment.\n\nindex_create() has\n\tAssert(relfrozenxid == InvalidTransactionId);\n\tAssert(relminmxid == InvalidMultiXactId);\n\nI think we should just add the same to reindex_index() (modulo accessing\nrelcache rather than local vars, of course)?\n\n\n> Huh, this actually seems to work. The attached is a quick hack for\n> testing. It gets through check-world straight up and with\n> xxx_FORCE_RELEASE, and I've verified reindexing pg_class works with\n> CLOBBER_CACHE_ALWAYS, but it'll be a few hours before I have a full CCA\n> run.\n\nGreat.\n\n\n> One interesting thing that turns up in check-world is that if wal_level\n> is minimal, we have to manually force an XID to be assigned, else\n> reindexing pg_class fails with \"cannot commit a transaction that deleted\n> files but has no xid\" :-(. Perhaps there's some other cleaner place to\n> do that?\n\nHm. We could replace that RecordTransactionCommit() with an xid\nassignment or such. But that seems at least as fragile. Or we could\nexpand the logic we have for LogStandbyInvalidations() a few lines below\nthe elog to also be able to handle files. 
IIRC that was introduced to\nhandle somewhat related issues about being able to run VACUUM\n(containing invalidations) without an xid.\n\n\n> +\t * If we're dealing with a mapped index, pg_class.relfilenode doesn't\n> +\t * change; instead we have to send the update to the relation mapper.\n> +\t *\n> +\t * For mapped indexes, we don't actually change the pg_class entry at all;\n> +\t * this is essential when reindexing pg_class itself. That leaves us with\n> +\t * possibly-inaccurate values of relpages etc, but those will be fixed up\n> +\t * later.\n> \t */\n> \tif (RelationIsMapped(relation))\n> +\t{\n> +\t\t/* Since we're not updating pg_class, these had better not change */\n> +\t\tAssert(classform->relfrozenxid == freezeXid);\n> +\t\tAssert(classform->relminmxid == minmulti);\n> +\t\tAssert(classform->relpersistence == persistence);\n\nHm. Could we additionally assert that we're dealing with an index? The\nabove checks will trigger for tables right now, but I'm not sure that'll\nalways be the case.\n\n\n> +\t\t/*\n> +\t\t * In some code paths it's possible that the tuple update we'd\n> +\t\t * otherwise do here is the only thing that would assign an XID for\n> +\t\t * the current transaction. However, we must have an XID to delete\n> +\t\t * files, so make sure one is assigned.\n> +\t\t */\n> +\t\t(void) GetCurrentTransactionId();\n\nNot pretty, but seems tolerable.\n\n\n> -\t/* These changes are safe even for a mapped relation */\n> -\tif (relation->rd_rel->relkind != RELKIND_SEQUENCE)\n> -\t{\n> -\t\tclassform->relpages = 0;\t/* it's empty until further notice */\n> -\t\tclassform->reltuples = 0;\n> -\t\tclassform->relallvisible = 0;\n> -\t}\n> -\tclassform->relfrozenxid = freezeXid;\n> -\tclassform->relminmxid = minmulti;\n> -\tclassform->relpersistence = persistence;\n> +\t\t/* These changes are safe even for a mapped relation */\n\nYou'd probably have noticed that, but this one probably has to go.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 May 2019 10:28:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-02 12:59:55 -0400, Tom Lane wrote:\n>> One interesting thing that turns up in check-world is that if wal_level\n>> is minimal, we have to manually force an XID to be assigned, else\n>> reindexing pg_class fails with \"cannot commit a transaction that deleted\n>> files but has no xid\" :-(. Perhaps there's some other cleaner place to\n>> do that?\n\n> Hm. We could replace that RecordTransactionCommit() with an xid\n> assignment or such. But that seems at least as fragile. Or we could\n> expand the logic we have for LogStandbyInvalidations() a few lines below\n> the elog to also be able to handle files. IIRC that was introduced to\n> handle somewhat related issues about being able to run VACUUM\n> (containing invalidations) without an xid.\n\nWell, that's something we can maybe improve later. I'm content to leave\nthe patch as it is for now.\n\n>> if (RelationIsMapped(relation))\n>> +\t{\n>> +\t\t/* Since we're not updating pg_class, these had better not change */\n>> +\t\tAssert(classform->relfrozenxid == freezeXid);\n>> +\t\tAssert(classform->relminmxid == minmulti);\n>> +\t\tAssert(classform->relpersistence == persistence);\n\n> Hm. Could we additionally assert that we're dealing with an index?\n\nWill do.\n\n>> +\t\t/* These changes are safe even for a mapped relation */\n\n> You'd probably have noticed that, but this one probably has to go.\n\nAh, right. As I said, I'd not paid much attention to the comments yet.\n\nI just finished a successful run of the core regression tests with CCA.\nGiven the calendar, I think that's about as much CCA testing as I should\ndo personally. I'll make a cleanup pass on this patch and try to get it\npushed within a few hours, if there are not objections.\n\nHow do you feel about the other patch to rejigger the order of operations\nin CommandCounterIncrement? I think that's a bug, but it's probably\nnoncritical for most people. 
What I'm leaning towards for that one is\nwaiting till after the minor releases, then pushing it to all branches.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 May 2019 16:54:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-02 16:54:11 -0400, Tom Lane wrote:\n> I just finished a successful run of the core regression tests with CCA.\n> Given the calendar, I think that's about as much CCA testing as I should\n> do personally. I'll make a cleanup pass on this patch and try to get it\n> pushed within a few hours, if there are not objections.\n\nSounds good to me.\n\n\n> How do you feel about the other patch to rejigger the order of operations\n> in CommandCounterIncrement? I think that's a bug, but it's probably\n> noncritical for most people. What I'm leaning towards for that one is\n> waiting till after the minor releases, then pushing it to all branches.\n\nI've not yet have the mental cycles to look more deeply into it. I\nthought your explanation why the current way is wrong made sense, but I\nwanted to look a bit more into how it came to be how it is now. I agree\nthat pushing after the minors would make sense, it's too subtle to go\nfor it now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 May 2019 14:02:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-02 16:54:11 -0400, Tom Lane wrote:\n>> How do you feel about the other patch to rejigger the order of operations\n>> in CommandCounterIncrement? I think that's a bug, but it's probably\n>> noncritical for most people. What I'm leaning towards for that one is\n>> waiting till after the minor releases, then pushing it to all branches.\n\n> I've not yet have the mental cycles to look more deeply into it. I\n> thought your explanation why the current way is wrong made sense, but I\n> wanted to look a bit more into how it came to be how it is now.\n\nWell, I wrote that code, and I can say pretty confidently that this\nfailure mode just didn't occur to me at the time.\n\n> I agree\n> that pushing after the minors would make sense, it's too subtle to go\n> for it now.\n\nIt is subtle, and given that it's been there this long, I don't feel\nurgency to fix it Right Now. I think we're already taking plenty of\nrisk back-patching the REINDEX patch :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 May 2019 17:12:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-02 16:54:11 -0400, Tom Lane wrote:\n>> I just finished a successful run of the core regression tests with CCA.\n>> Given the calendar, I think that's about as much CCA testing as I should\n>> do personally. I'll make a cleanup pass on this patch and try to get it\n>> pushed within a few hours, if there are not objections.\n\n> Sounds good to me.\n\nPushed --- hopefully, we have enough time before Sunday that we can get\nreasonably complete buildfarm testing.\n\nI did manually verify that all branches get through \"reindex table\npg_class\" and \"reindex index pg_class_oid_index\" under\nCLOBBER_CACHE_ALWAYS, as well as a normal-mode check-world. But CCA\nworld runs seem like a good idea.\n\nAs far as a permanent test scheme goes, I noticed while testing that\nsrc/bin/scripts/t/090_reindexdb.pl and\nsrc/bin/scripts/t/091_reindexdb_all.pl seem to be giving us a good\ndeal of coverage on this already, although of course they never caught the\nproblem with non-HOT updates, nor any of the deadlock issues. Still,\nit seems like maybe a core regression test that's been lobotomized enough\nto be perfectly parallel-safe might not give us more coverage than can\nbe had there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 May 2019 19:18:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "On Thu, May 02, 2019 at 07:18:19PM -0400, Tom Lane wrote:\n> I did manually verify that all branches get through \"reindex table\n> pg_class\" and \"reindex index pg_class_oid_index\" under\n> CLOBBER_CACHE_ALWAYS, as well as a normal-mode check-world. But CCA\n> world runs seem like a good idea.\n\n(catching up a bit..)\n\nsidewinder is still pissed of as of HEAD, pointing visibly to f912d7d\nas the root cause:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2019-05-03%2021%3A45%3A00\n\n REINDEX TABLE pg_class; -- mapped, non-shared, critical\n+ERROR: deadlock detected\n+DETAIL: Process 28266 waits for ShareLock on transaction 2988;\n blocked by process 20650.\n+Process 20650 waits for RowExclusiveLock on relation 1259 of\n database 16387; blocked by process 28266.\n+HINT: See server log for query details.\n\nI don't think that's sane just before the upcoming release..\n--\nMichael",
"msg_date": "Sat, 4 May 2019 22:06:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> sidewinder is still pissed of as of HEAD, pointing visibly to f912d7d\n> as the root cause:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2019-05-03%2021%3A45%3A00\n\nRight, the deadlocks are expected when some previous session is slow about\ncleaning out its temp schema. The plan is to leave that in place till\ntomorrow to see if any *other* failure modes turn up. But it has to come\nout before we wrap the releases.\n\nI don't think we discussed exactly what \"come out\" means. My thought is\nto leave the test scripts in place (so they can be invoked manually with\nEXTRA_TESTS) but remove them from the schedule files.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 May 2019 11:04:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-04 11:04:07 -0400, Tom Lane wrote:\n> I don't think we discussed exactly what \"come out\" means. My thought is\n> to leave the test scripts in place (so they can be invoked manually with\n> EXTRA_TESTS) but remove them from the schedule files.\n\nYea, that sounds sensible. I'll do so by tonight if you don't beat me to\nit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 5 May 2019 11:48:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-04 11:04:07 -0400, Tom Lane wrote:\n>> I don't think we discussed exactly what \"come out\" means. My thought is\n>> to leave the test scripts in place (so they can be invoked manually with\n>> EXTRA_TESTS) but remove them from the schedule files.\n\n> Yea, that sounds sensible. I'll do so by tonight if you don't beat me to\n> it.\n\nOn this coast, \"tonight\" is running into \"tomorrow\" ... you planning\nto do that soon?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 05 May 2019 23:56:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn May 5, 2019 8:56:58 PM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Andres Freund <andres@anarazel.de> writes:\n>> On 2019-05-04 11:04:07 -0400, Tom Lane wrote:\n>>> I don't think we discussed exactly what \"come out\" means. My\n>thought is\n>>> to leave the test scripts in place (so they can be invoked manually\n>with\n>>> EXTRA_TESTS) but remove them from the schedule files.\n>\n>> Yea, that sounds sensible. I'll do so by tonight if you don't beat me\n>to\n>> it.\n>\n>On this coast, \"tonight\" is running into \"tomorrow\" ... you planning\n>to do that soon?\n\nI'd planned to finish cooking and eating, and then doing it. Seems like that'd be plenty early?\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Sun, 05 May 2019 20:58:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On May 5, 2019 8:56:58 PM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> On this coast, \"tonight\" is running into \"tomorrow\" ... you planning\n>> to do that soon?\n\n> I'd planned to finish cooking and eating, and then doing it. Seems like that'd be plenty early?\n\nSure, dinner can take priority.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 May 2019 00:00:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "On 2019-05-06 00:00:04 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On May 5, 2019 8:56:58 PM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> On this coast, \"tonight\" is running into \"tomorrow\" ... you planning\n> >> to do that soon?\n>\n> > I'd planned to finish cooking and eating, and then doing it. Seems like that'd be plenty early?\n>\n> Sure, dinner can take priority.\n\nAnd pushed.\n\nI've not done so for 12. For one, because there's no imminent release,\nand there's plenty reindexing related changes in 12. But also because I\nhave two vague ideas that might allow us to keep the test in the regular\nschedule.\n\n1) Is there a way that we could just use the type of logic we use for\n CREATE INDEX CONCURRENTLY to force waiting for previously started\n sessions, before doing the REINDEXing of pg_class et al?\n\n I think it'd work to just add a CREATE INDEX CONCURRENTLY in a\n transaction, after taking an exclusive lock on pg_class - but I\n suspect that'd be just as deadlock prone, just for different reasons?\n\n2) Couldn't we just add a simple loop in plpgsql that checks that the\n previous session ended? A simple DO loop around SELECT pid FROM\n pg_stat_activity WHERE datname = current_database() AND pid <>\n pg_backend_pid(); doesn't sound like it'd be too complicated? That\n wouldn't work in older releases, because we e.g. wouldn't see\n autoanalyze anywhere conveniently.\n\n I'm afraid there'd still be the issue that an autoanalyze could spin\n up concurrently? And we can't just prevent that by locking pg_class,\n because we'd otherwise just have the same deadlock?\n\n\nI for sure thought I earlier had an idea that'd actually work. But\neither I've lost it, or it didn't actually work. But perhaps somebody\nelse can come up with something based on the above strawman ideas?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 May 2019 00:01:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I for sure thought I earlier had an idea that'd actually work. But\n> either I've lost it, or it didn't actually work. But perhaps somebody\n> else can come up with something based on the above strawman ideas?\n\nBoth of those ideas fail if an autovacuum starts up after you're\ndone looking. I still think the only way you could make this\nreliable enough for the buildfarm is to do it in a TAP test that's\nset up a cluster with autovacuum disabled. Whether it's worth the\ncycles to do so is pretty unclear, since that wouldn't be a terribly\nreal-world test environment. (I also wonder whether the existing\nTAP tests for reindexdb don't provide largely the same coverage.)\n\nMy advice is to let it go until we have time to work on getting rid\nof the deadlock issues. If we're successful at that, it might be\npossible to re-enable these tests in the regular regression environment.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 10:50:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-07 10:50:19 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I for sure thought I earlier had an idea that'd actually work. But\n> > either I've lost it, or it didn't actually work. But perhaps somebody\n> > else can come up with something based on the above strawman ideas?\n> \n> Both of those ideas fail if an autovacuum starts up after you're\n> done looking.\n\nWell, that's why I had proposed to basically to first lock pg_class, and\nthen wait for other sessions. Which'd be fine, except that it'd also\ncreate deadlock risks :(.\n\n\n> My advice is to let it go until we have time to work on getting rid\n> of the deadlock issues. If we're successful at that, it might be\n> possible to re-enable these tests in the regular regression environment.\n\nYea, that might be right. I'm planning to leave the tests in until a\nbunch of the open REINDEX issues are resolved. Not super likely that\nit'd break something, but probably worth anyway?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 May 2019 08:59:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Yea, that might be right. I'm planning to leave the tests in until a\n> bunch of the open REINDEX issues are resolved. Not super likely that\n> it'd break something, but probably worth anyway?\n\nThe number of deadlock failures is kind of annoying, so I'd rather remove\nthe tests from HEAD sooner than later. What issues around that do you\nthink remain that these tests would be helpful for?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 12:07:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-07 12:07:37 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Yea, that might be right. I'm planning to leave the tests in until a\n> > bunch of the open REINDEX issues are resolved. Not super likely that\n> > it'd break something, but probably worth anyway?\n> \n> The number of deadlock failures is kind of annoying, so I'd rather remove\n> the tests from HEAD sooner than later. What issues around that do you\n> think remain that these tests would be helpful for?\n\nI was wondering about\nhttps://postgr.es/m/20190430151735.wi52sxjvxsjvaxxt%40alap3.anarazel.de\nbut perhaps it's too unlikely to break anything the tests would detect\nthough.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 May 2019 09:09:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-07 12:07:37 -0400, Tom Lane wrote:\n>> The number of deadlock failures is kind of annoying, so I'd rather remove\n>> the tests from HEAD sooner than later. What issues around that do you\n>> think remain that these tests would be helpful for?\n\n> I was wondering about\n> https://postgr.es/m/20190430151735.wi52sxjvxsjvaxxt%40alap3.anarazel.de\n> but perhaps it's too unlikely to break anything the tests would detect\n> though.\n\nSince we don't allow REINDEX CONCURRENTLY on system catalogs, I'm not\nseeing any particular overlap there ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 12:14:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-07 12:14:43 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-05-07 12:07:37 -0400, Tom Lane wrote:\n> >> The number of deadlock failures is kind of annoying, so I'd rather remove\n> >> the tests from HEAD sooner than later. What issues around that do you\n> >> think remain that these tests would be helpful for?\n> \n> > I was wondering about\n> > https://postgr.es/m/20190430151735.wi52sxjvxsjvaxxt%40alap3.anarazel.de\n> > but perhaps it's too unlikely to break anything the tests would detect\n> > though.\n> \n> Since we don't allow REINDEX CONCURRENTLY on system catalogs, I'm not\n> seeing any particular overlap there ...\n\nWell, it rejiggers the way table locks are acquired for all REINDEX\nINDEX commands, not just in the CONCURRENTLY. But yea, it's probably\neasy to catch issues there on user tables.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 May 2019 09:17:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-07 09:17:11 -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2019-05-07 12:14:43 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2019-05-07 12:07:37 -0400, Tom Lane wrote:\n> > >> The number of deadlock failures is kind of annoying, so I'd rather remove\n> > >> the tests from HEAD sooner than later. What issues around that do you\n> > >> think remain that these tests would be helpful for?\n> > \n> > > I was wondering about\n> > > https://postgr.es/m/20190430151735.wi52sxjvxsjvaxxt%40alap3.anarazel.de\n> > > but perhaps it's too unlikely to break anything the tests would detect\n> > > though.\n> > \n> > Since we don't allow REINDEX CONCURRENTLY on system catalogs, I'm not\n> > seeing any particular overlap there ...\n> \n> Well, it rejiggers the way table locks are acquired for all REINDEX\n> INDEX commands, not just in the CONCURRENTLY. But yea, it's probably\n> easy to catch issues there on user tables.\n\nPushed now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 May 2019 13:11:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-07 09:17:11 -0700, Andres Freund wrote:\n>> Well, it rejiggers the way table locks are acquired for all REINDEX\n>> INDEX commands, not just in the CONCURRENTLY. But yea, it's probably\n>> easy to catch issues there on user tables.\n\n> Pushed now.\n\nOK. I marked the open issue as closed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 May 2019 16:29:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX INDEX results in a crash for an index of pg_class since\n 9.6"
}
] |
[
{
"msg_contents": "I thought of a case that results in pathological performance due to a\nbehavior of my nbtree patch series:\n\nregression=# create table uniquenulls(nully int4, constraint pp unique(nully));\nCREATE TABLE\nTime: 10.694 ms\nregression=# insert into uniquenulls select i from generate_series(1, 1e6) i;\nINSERT 0 1000000\nTime: 1356.025 ms (00:01.356)\nregression=# insert into uniquenulls select null from generate_series(1, 1e6) i;\nINSERT 0 1000000\nTime: 270834.196 ms (04:30.834)\n\nThe issue here is that the duration of the second INSERT statement is\nwildly excessive, because _bt_stepright() needs to step right many\nmany times for each tuple inserted. I would expect the second insert\nto take approximately as long as the first, but it takes ~200x longer\nhere. It could take much longer still if we pushed this example\nfurther. What we see here is a limited case in which the O(n ^ 2)\nperformance that \"getting tired\" used to prevent can occur. Clearly\nthat's totally unacceptable. Mea culpa.\n\nSure enough, the problem goes away when the index isn't a unique index\n(i.e. in the UNIQUE_CHECK_NO case):\n\nregression=# alter table uniquenulls drop constraint pp;\nALTER TABLE\nTime: 28.968 ms\nregression=# create index on uniquenulls (nully);\nCREATE INDEX\nTime: 1159.958 ms (00:01.160)\nregression=# insert into uniquenulls select null from generate_series(1, 1e6) i;\nINSERT 0 1000000\nTime: 1155.708 ms (00:01.156)\n\nTentatively, I think that the fix here is to not \"itup_key->scantid =\nNULL\" when a NULL value is involved (i.e. don't \"itup_key->scantid =\nNULL\" when we see IndexTupleHasNulls(itup) within _bt_doinsert()). We\nmay also want to avoid calling _bt_check_unique() entirely in this\nsituation. That way, the performance should be the same as the\nUNIQUE_CHECK_NO case: we descend to the appropriate leaf page\ndirectly, and then we're done. 
We don't have to find the appropriate\nleaf page by groveling through indefinitely-many existing leaf pages\nthat are full of NULL values, because we know that there cannot ever\nbe a unique violation for us to detect.\n\nI'll create an open item for this, and begin work on a patch tomorrow.\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 17 Apr 2019 19:37:17 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Pathological performance when inserting many NULLs into a unique\n index"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 7:37 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Tentatively, I think that the fix here is to not \"itup_key->scantid =\n> NULL\" when a NULL value is involved (i.e. don't \"itup_key->scantid =\n> NULL\" when we see IndexTupleHasNulls(itup) within _bt_doinsert()). We\n> may also want to avoid calling _bt_check_unique() entirely in this\n> situation.\n\n> I'll create an open item for this, and begin work on a patch tomorrow.\n\nI came up with the attached patch, which effectively treats a unique\nindex insertion as if the index was not unique at all in the case\nwhere new tuple is IndexTupleHasNulls() within _bt_doinsert(). This is\ncorrect because inserting a new tuple with a NULL value can neither\ncontribute to somebody else's duplicate violation, nor have a\nduplicate violation error of its own. I need to test this some more,\nthough I am fairly confident that I have the right idea.\n\nWe didn't have this problem before my v12 work due to the \"getting\ntired\" strategy, but that's not the full story. We also didn't (and\ndon't) move right within _bt_check_unique() when\nIndexTupleHasNulls(itup), because _bt_isequal() treats NULL values as\nnot equal to any value, including even NULL -- this meant that there\nwas no useless search to the right within _bt_check_unique().\nActually, the overall effect of how _bt_isequal() is used is that\n_bt_check_unique() does *no* useful work at all with a NULL scankey.\nIt's far simpler to remove responsibility for new tuples/scankeys with\nNULL values from _bt_check_unique() entirely, which is what I propose\nto do with this patch.\n\nThe patch actually ends up removing quite a lot more code than it\nadds, because it removes _bt_isequal() outright. I always thought that\nnbtinsert.c dealt with NULL values in unique indexes at the wrong\nlevel, and I'm glad to see _bt_isequal() go. Vadim's accompanying 1997\ncomment about \"not using _bt_compare in real comparison\" seems\nconfused to me. 
The new _bt_check_unique() may still need to compare\nthe scankey to some existing, adjacent tuple with a NULL value, but\n_bt_compare() is perfectly capable of recognizing that that adjacent\ntuple shouldn't be considered equal. IOW, _bt_compare() is merely\nincapable of behaving as if \"NULL != NULL\", which is a bad reason for\nkeeping _bt_isequal() around.\n\n--\nPeter Geoghegan",
"msg_date": "Thu, 18 Apr 2019 17:33:55 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Pathological performance when inserting many NULLs into a unique\n index"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 07:37:17PM -0700, Peter Geoghegan wrote:\n> I'll create an open item for this, and begin work on a patch tomorrow.\n\nAdding an open item is adapted, nice you found this issue. Testing on\nmy laptop with v11, the non-NULL case takes 5s and the NULL case 6s.\nOn HEAD, the non-NULL case takes 6s, and the NULL case takes... I\njust gave up and cancelled it :)\n--\nMichael",
"msg_date": "Fri, 19 Apr 2019 09:46:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Pathological performance when inserting many NULLs into a unique\n index"
},
{
"msg_contents": "On Thu, Apr 18, 2019 at 5:46 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Adding an open item is adapted, nice you found this issue. Testing on\n> my laptop with v11, the non-NULL case takes 5s and the NULL case 6s.\n> On HEAD, the non-NULL case takes 6s, and the NULL case takes... I\n> just gave up and cancelled it :)\n\nThis was one of those things that occurred to me out of the blue.\n\nIt just occurred to me that the final patch will need to be more\ncareful about non-key attributes in INCLUDE indexes. It's not okay for\nit to avoid calling _bt_check_unique() just because a non-key\nattribute was NULL. It should only do that when a key attribute is\nNULL.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 18 Apr 2019 20:13:44 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Pathological performance when inserting many NULLs into a unique\n index"
},
{
"msg_contents": "On Thu, Apr 18, 2019 at 8:13 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> It just occurred to me that the final patch will need to be more\n> careful about non-key attributes in INCLUDE indexes. It's not okay for\n> it to avoid calling _bt_check_unique() just because a non-key\n> attribute was NULL. It should only do that when a key attribute is\n> NULL.\n\nAttached revision does it that way, specifically by adding a new field\nto the insertion scankey struct (BTScanInsertData). The field is\nfilled-in when initializing an insertion scankey in the usual way. I\nwas able to reuse alignment padding for the new field, so there is no\nadditional space overhead on Linux x86-64.\n\nThis is a bit more complicated than v1, but there is still an overall\nnet reduction in overall complexity (and in LOC). I'm going to commit\nthis early next week, barring any objections, and assuming I don't\nthink of any more problems between now and then.\n--\nPeter Geoghegan",
"msg_date": "Fri, 19 Apr 2019 18:34:29 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Pathological performance when inserting many NULLs into a unique\n index"
},
{
"msg_contents": "On Fri, Apr 19, 2019 at 6:34 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached revision does it that way, specifically by adding a new field\n> to the insertion scankey struct (BTScanInsertData).\n\nPushed.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 23 Apr 2019 10:40:03 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Pathological performance when inserting many NULLs into a unique\n index"
}
] |
[
{
"msg_contents": "Hi\n\nI wrote a pspg pager https://github.com/okbob/pspg\n\nThis pager is designed for tabular data. It can work in fallback mode as\nclassic pager, but it is not designed for this purpose (and I don't plan do\nit). Can we enhance a set of psql environment variables about\nPSQL_TABULAR_PAGER variable. This pager will be used, when psql will\ndisplay tabular data.\n\nComments, notes?\n\nRegards\n\nPavel\n\nHiI wrote a pspg pager https://github.com/okbob/pspgThis pager is designed for tabular data. It can work in fallback mode as classic pager, but it is not designed for this purpose (and I don't plan do it). Can we enhance a set of psql environment variables about PSQL_TABULAR_PAGER variable. This pager will be used, when psql will display tabular data.Comments, notes?RegardsPavel",
"msg_date": "Thu, 18 Apr 2019 07:20:37 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "proposal: psql PSQL_TABULAR_PAGER variable"
},
{
"msg_contents": "On Thu, Apr 18, 2019 at 07:20:37AM +0200, Pavel Stehule wrote:\n> Hi\n> \n> I wrote a pspg pager https://github.com/okbob/pspg\n> \n> This pager is designed for tabular data. It can work in fallback mode as\n> classic pager, but it is not designed for this purpose (and I don't plan do\n> it). Can we enhance a set of psql environment variables about\n> PSQL_TABULAR_PAGER variable. This pager will be used, when psql will display\n> tabular data.\n\nIn testing pspg, it seems to work fine with tabular and \\x-non-tabular\ndata. Are you asking for a pager option that is only used for non-\\x\ndisplay? What do people want the non-pspg pager to do?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 18 Apr 2019 09:51:07 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: proposal: psql PSQL_TABULAR_PAGER variable"
},
{
"msg_contents": "čt 18. 4. 2019 v 15:51 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n\n> On Thu, Apr 18, 2019 at 07:20:37AM +0200, Pavel Stehule wrote:\n> > Hi\n> >\n> > I wrote a pspg pager https://github.com/okbob/pspg\n> >\n> > This pager is designed for tabular data. It can work in fallback mode as\n> > classic pager, but it is not designed for this purpose (and I don't plan\n> do\n> > it). Can we enhance a set of psql environment variables about\n> > PSQL_TABULAR_PAGER variable. This pager will be used, when psql will\n> display\n> > tabular data.\n>\n> In testing pspg, it seems to work fine with tabular and \\x-non-tabular\n> data. Are you asking for a pager option that is only used for non-\\x\n> display? What do people want the non-pspg pager to do?\n>\n\nMy idea is following - pseudocode\n\n\nif view is a table\n{\n if is_defined PSQL_TABULAR_PAGER\n {\n pager = PSQL_TABULAR_PAGER\n }\n else if is_defined PSQL_PAGER\n {\n pager = PSQL_PAGER\n }\n else\n {\n pager = PAGER\n }\n}\nelse /* for \\h xxx */\n{\n if is_defined PSQL_PAGER\n {\n pager = PSQL_PAGER\n }\n else\n {\n pager = PAGER\n }\n}\n\nI expect some configuration like\n\nPSQL_TABULAR_PAGER=pspg\nPSQL_PAGER=\"less -S\"\n\nRegards\n\nPavel\n\n\n\n> --\n> Bruce Momjian <bruce@momjian.us> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n>\n> + As you are, so once was I. As I am, so you will be. +\n> + Ancient Roman grave inscription +\n>\n\nčt 18. 4. 2019 v 15:51 odesílatel Bruce Momjian <bruce@momjian.us> napsal:On Thu, Apr 18, 2019 at 07:20:37AM +0200, Pavel Stehule wrote:\n> Hi\n> \n> I wrote a pspg pager https://github.com/okbob/pspg\n> \n> This pager is designed for tabular data. It can work in fallback mode as\n> classic pager, but it is not designed for this purpose (and I don't plan do\n> it). Can we enhance a set of psql environment variables about\n> PSQL_TABULAR_PAGER variable. 
This pager will be used, when psql will display\n> tabular data.\n\nIn testing pspg, it seems to work fine with tabular and \\x-non-tabular\ndata. Are you asking for a pager option that is only used for non-\\x\ndisplay? What do people want the non-pspg pager to do?My idea is following - pseudocodeif view is a table { if is_defined PSQL_TABULAR_PAGER { pager = PSQL_TABULAR_PAGER } else if is_defined PSQL_PAGER { pager = PSQL_PAGER } else { pager = PAGER }}else /* for \\h xxx */{ if is_defined PSQL_PAGER { pager = PSQL_PAGER } else { pager = PAGER }} I expect some configuration likePSQL_TABULAR_PAGER=pspgPSQL_PAGER=\"less -S\"RegardsPavel\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +",
"msg_date": "Thu, 18 Apr 2019 17:45:24 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: psql PSQL_TABULAR_PAGER variable"
},
{
"msg_contents": "On Thu, Apr 18, 2019 at 05:45:24PM +0200, Pavel Stehule wrote:\n> čt 18. 4. 2019 v 15:51 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n> In testing pspg, it seems to work fine with tabular and \\x-non-tabular\n> data. Are you asking for a pager option that is only used for non-\\x\n> display? What do people want the non-pspg pager to do?\n>\n> My idea is following - pseudocode\n> \n> else /* for \\h xxx */\n\nWell, normal output and \\x looks fine in pspg, and \\h doesn't use the\npager unless it is more than one screen. If I do '\\h *' it uses pspg,\nbut now often do people do that? Most \\h display doesn't use a pager,\nso I don't see the point.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 18 Apr 2019 11:58:13 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: proposal: psql PSQL_TABULAR_PAGER variable"
},
{
"msg_contents": "čt 18. 4. 2019 v 17:58 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n\n> On Thu, Apr 18, 2019 at 05:45:24PM +0200, Pavel Stehule wrote:\n> > čt 18. 4. 2019 v 15:51 odesílatel Bruce Momjian <bruce@momjian.us>\n> napsal:\n> > In testing pspg, it seems to work fine with tabular and\n> \\x-non-tabular\n> > data. Are you asking for a pager option that is only used for non-\\x\n> > display? What do people want the non-pspg pager to do?\n> >\n> > My idea is following - pseudocode\n> >\n> > else /* for \\h xxx */\n>\n> Well, normal output and \\x looks fine in pspg, and \\h doesn't use the\n> pager unless it is more than one screen. If I do '\\h *' it uses pspg,\n> but now often do people do that? Most \\h display doesn't use a pager,\n> so I don't see the point.\n>\n\nIt depends on terminal size. On my terminal pager is mostly every time. \\?\nis same.\n\npspg can works like classic pager, but it is not optimized for this\npurpose.\n\n\n\n\n\n> --\n> Bruce Momjian <bruce@momjian.us> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n>\n> + As you are, so once was I. As I am, so you will be. +\n> + Ancient Roman grave inscription +\n>\n\nčt 18. 4. 2019 v 17:58 odesílatel Bruce Momjian <bruce@momjian.us> napsal:On Thu, Apr 18, 2019 at 05:45:24PM +0200, Pavel Stehule wrote:\n> čt 18. 4. 2019 v 15:51 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n> In testing pspg, it seems to work fine with tabular and \\x-non-tabular\n> data. Are you asking for a pager option that is only used for non-\\x\n> display? What do people want the non-pspg pager to do?\n>\n> My idea is following - pseudocode\n> \n> else /* for \\h xxx */\n\nWell, normal output and \\x looks fine in pspg, and \\h doesn't use the\npager unless it is more than one screen. If I do '\\h *' it uses pspg,\nbut now often do people do that? Most \\h display doesn't use a pager,\nso I don't see the point.It depends on terminal size. On my terminal pager is mostly every time. \\? 
is same.pspg can works like classic pager, but it is not optimized for this purpose. \n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +",
"msg_date": "Thu, 18 Apr 2019 18:06:40 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: psql PSQL_TABULAR_PAGER variable"
},
{
"msg_contents": "On Thu, Apr 18, 2019 at 06:06:40PM +0200, Pavel Stehule wrote:\n> \n> \n> čt 18. 4. 2019 v 17:58 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n> \n> On Thu, Apr 18, 2019 at 05:45:24PM +0200, Pavel Stehule wrote:\n> > čt 18. 4. 2019 v 15:51 odesílatel Bruce Momjian <bruce@momjian.us>\n> napsal:\n> > In testing pspg, it seems to work fine with tabular and \\\n> x-non-tabular\n> > data. Are you asking for a pager option that is only used for non-\\x\n> > display? What do people want the non-pspg pager to do?\n> >\n> > My idea is following - pseudocode\n> >\n> > else /* for \\h xxx */\n> \n> Well, normal output and \\x looks fine in pspg, and \\h doesn't use the\n> pager unless it is more than one screen. If I do '\\h *' it uses pspg,\n> but now often do people do that? Most \\h display doesn't use a pager,\n> so I don't see the point.\n> \n> \n> It depends on terminal size. On my terminal pager is mostly every time. \\? is\n> same.\n> \n> pspg can works like classic pager, but it is not optimized for this purpose.\n\nUh, the odd thing is that \\? and sometimes \\h are the only case I can\nsee where using the classic page has much value. Are there more cases? \nIf not, I don't see the value in having a separate configuration\nvariable for this.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 18 Apr 2019 12:35:44 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: proposal: psql PSQL_TABULAR_PAGER variable"
},
{
"msg_contents": "čt 18. 4. 2019 v 18:35 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n\n> On Thu, Apr 18, 2019 at 06:06:40PM +0200, Pavel Stehule wrote:\n> >\n> >\n> > čt 18. 4. 2019 v 17:58 odesílatel Bruce Momjian <bruce@momjian.us>\n> napsal:\n> >\n> > On Thu, Apr 18, 2019 at 05:45:24PM +0200, Pavel Stehule wrote:\n> > > čt 18. 4. 2019 v 15:51 odesílatel Bruce Momjian <bruce@momjian.us>\n> > napsal:\n> > > In testing pspg, it seems to work fine with tabular and \\\n> > x-non-tabular\n> > > data. Are you asking for a pager option that is only used for\n> non-\\x\n> > > display? What do people want the non-pspg pager to do?\n> > >\n> > > My idea is following - pseudocode\n> > >\n> > > else /* for \\h xxx */\n> >\n> > Well, normal output and \\x looks fine in pspg, and \\h doesn't use the\n> > pager unless it is more than one screen. If I do '\\h *' it uses\n> pspg,\n> > but now often do people do that? Most \\h display doesn't use a\n> pager,\n> > so I don't see the point.\n> >\n> >\n> > It depends on terminal size. On my terminal pager is mostly every time.\n> \\? is\n> > same.\n> >\n> > pspg can works like classic pager, but it is not optimized for this\n> purpose.\n>\n> Uh, the odd thing is that \\? and sometimes \\h are the only case I can\n> see where using the classic page has much value. Are there more cases?\n> If not, I don't see the value in having a separate configuration\n> variable for this.\n>\n\nI don't know any about other cases. Other results in psql has tabular\nformat.\n\nPavel\n\n\n> --\n> Bruce Momjian <bruce@momjian.us> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n>\n> + As you are, so once was I. As I am, so you will be. +\n> + Ancient Roman grave inscription +\n>\n\nčt 18. 4. 2019 v 18:35 odesílatel Bruce Momjian <bruce@momjian.us> napsal:On Thu, Apr 18, 2019 at 06:06:40PM +0200, Pavel Stehule wrote:\n> \n> \n> čt 18. 4. 
2019 v 17:58 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n> \n> On Thu, Apr 18, 2019 at 05:45:24PM +0200, Pavel Stehule wrote:\n> > čt 18. 4. 2019 v 15:51 odesílatel Bruce Momjian <bruce@momjian.us>\n> napsal:\n> > In testing pspg, it seems to work fine with tabular and \\\n> x-non-tabular\n> > data. Are you asking for a pager option that is only used for non-\\x\n> > display? What do people want the non-pspg pager to do?\n> >\n> > My idea is following - pseudocode\n> >\n> > else /* for \\h xxx */\n> \n> Well, normal output and \\x looks fine in pspg, and \\h doesn't use the\n> pager unless it is more than one screen. If I do '\\h *' it uses pspg,\n> but now often do people do that? Most \\h display doesn't use a pager,\n> so I don't see the point.\n> \n> \n> It depends on terminal size. On my terminal pager is mostly every time. \\? is\n> same.\n> \n> pspg can works like classic pager, but it is not optimized for this purpose.\n\nUh, the odd thing is that \\? and sometimes \\h are the only case I can\nsee where using the classic page has much value. Are there more cases? \nIf not, I don't see the value in having a separate configuration\nvariable for this.I don't know any about other cases. Other results in psql has tabular format.Pavel\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +",
"msg_date": "Thu, 18 Apr 2019 18:41:15 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: psql PSQL_TABULAR_PAGER variable"
},
{
"msg_contents": "On 2019-Apr-18, Pavel Stehule wrote:\n\n> I don't know any about other cases. Other results in psql has tabular\n> format.\n\nWhat about EXPLAIN?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 18 Apr 2019 15:12:09 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: psql PSQL_TABULAR_PAGER variable"
},
{
"msg_contents": "čt 18. 4. 2019 v 21:12 odesílatel Alvaro Herrera <alvherre@2ndquadrant.com>\nnapsal:\n\n> On 2019-Apr-18, Pavel Stehule wrote:\n>\n> > I don't know any about other cases. Other results in psql has tabular\n> > format.\n>\n> What about EXPLAIN?\n>\n\nI forgot it, thank you\n\nPavel\n\n\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nčt 18. 4. 2019 v 21:12 odesílatel Alvaro Herrera <alvherre@2ndquadrant.com> napsal:On 2019-Apr-18, Pavel Stehule wrote:\n\n> I don't know any about other cases. Other results in psql has tabular\n> format.\n\nWhat about EXPLAIN?I forgot it, thank youPavel\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 18 Apr 2019 21:29:05 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: psql PSQL_TABULAR_PAGER variable"
},
{
"msg_contents": "On Thu, Apr 18, 2019 at 11:46 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> My idea is following - pseudocode\n>\n> if view is a table\n> {\n> if is_defined PSQL_TABULAR_PAGER\n> {\n> pager = PSQL_TABULAR_PAGER\n> }\n> else if is_defined PSQL_PAGER\n> {\n> pager = PSQL_PAGER\n> }\n> else\n> {\n> pager = PAGER\n> }\n> }\n> else /* for \\h xxx */\n> {\n> if is_defined PSQL_PAGER\n> {\n> pager = PSQL_PAGER\n> }\n> else\n> {\n> pager = PAGER\n> }\n>\n\nSeems like pspg could just hand off to the regular pager if it\ndiscovers that the input is not in a format it finds suitable.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 22 Apr 2019 09:46:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: psql PSQL_TABULAR_PAGER variable"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Seems like pspg could just hand off to the regular pager if it\n> discovers that the input is not in a format it finds suitable.\n\nIt might be slightly tricky to do that after having already consumed\nsome of the input :-(.\n\nStill, I've got to say that I find this proposal pretty horrid.\nI already thought that PSQL_PAGER was a dubious idea: what other\nprogram do you know anywhere that isn't satisfied with PAGER?\nInventing still more variables of the same ilk is making it even\nmessier, and more obviously poorly designed, and more obviously\nlikely to end up with forty-nine different variables for slightly\ndifferent purposes.\n\nI think that the general problem here is \"we need psql to be able to\ngive some context info to pspg\", and the obvious way to handle that\nis to make a provision for arguments on pspg's command line. That\nis, instead of just calling \"pspg\", call \"pspg table\" or \"pspg help\"\netc etc, with the understanding that the set of context words could\nbe extended over time. We could shoehorn this into what we already\nhave by saying that PSQL_PAGER is interpreted as a format, and if\nit contains say \"%c\" then replace that with a context word (and\nagain, there's room for more format codes over time). Probably\nbest *not* to apply such an interpretation to PAGER, though.\n\nWhether the whole problem is really worth this much infrastructure\nis a fair question. But if we're going to do something, I'd rather\ngo down a path like this than inventing a new environment variable\nevery month.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Apr 2019 10:21:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: proposal: psql PSQL_TABULAR_PAGER variable"
},
{
"msg_contents": "po 22. 4. 2019 v 15:46 odesílatel Robert Haas <robertmhaas@gmail.com>\nnapsal:\n\n> On Thu, Apr 18, 2019 at 11:46 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > My idea is following - pseudocode\n> >\n> > if view is a table\n> > {\n> > if is_defined PSQL_TABULAR_PAGER\n> > {\n> > pager = PSQL_TABULAR_PAGER\n> > }\n> > else if is_defined PSQL_PAGER\n> > {\n> > pager = PSQL_PAGER\n> > }\n> > else\n> > {\n> > pager = PAGER\n> > }\n> > }\n> > else /* for \\h xxx */\n> > {\n> > if is_defined PSQL_PAGER\n> > {\n> > pager = PSQL_PAGER\n> > }\n> > else\n> > {\n> > pager = PAGER\n> > }\n> >\n>\n> Seems like pspg could just hand off to the regular pager if it\n> discovers that the input is not in a format it finds suitable.\n>\n\nThis is possible, and I wrote it. But it is \"little bit\" strange, start\nanother pager from a pager.\n\nI think so task oriented pagers can enhance custom experience of TUI\napplications - and there is a big space for enhancement.\n\nCurrently pspg have to reparse data and there are some heuristic to detect\nformat. Can be nice, if psql can send some additional info about the data.\n\nMaybe psql can send raw data, and printing formatting can be on parser side.\n\nPavel\n\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\npo 22. 4. 
2019 v 15:46 odesílatel Robert Haas <robertmhaas@gmail.com> napsal:On Thu, Apr 18, 2019 at 11:46 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> My idea is following - pseudocode\n>\n> if view is a table\n> {\n> if is_defined PSQL_TABULAR_PAGER\n> {\n> pager = PSQL_TABULAR_PAGER\n> }\n> else if is_defined PSQL_PAGER\n> {\n> pager = PSQL_PAGER\n> }\n> else\n> {\n> pager = PAGER\n> }\n> }\n> else /* for \\h xxx */\n> {\n> if is_defined PSQL_PAGER\n> {\n> pager = PSQL_PAGER\n> }\n> else\n> {\n> pager = PAGER\n> }\n>\n\nSeems like pspg could just hand off to the regular pager if it\ndiscovers that the input is not in a format it finds suitable.This is possible, and I wrote it. But it is \"little bit\" strange, start another pager from a pager.I think so task oriented pagers can enhance custom experience of TUI applications - and there is a big space for enhancement.Currently pspg have to reparse data and there are some heuristic to detect format. Can be nice, if psql can send some additional info about the data. Maybe psql can send raw data, and printing formatting can be on parser side.Pavel\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 22 Apr 2019 16:25:27 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: psql PSQL_TABULAR_PAGER variable"
},
{
"msg_contents": "po 22. 4. 2019 v 16:21 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Seems like pspg could just hand off to the regular pager if it\n> > discovers that the input is not in a format it finds suitable.\n>\n> It might be slightly tricky to do that after having already consumed\n> some of the input :-(.\n>\n\npspg supports both direction scrolling, so data are in buffer, and can be\ndisplayed again.\n\n\n> Still, I've got to say that I find this proposal pretty horrid.\n> I already thought that PSQL_PAGER was a dubious idea: what other\n> program do you know anywhere that isn't satisfied with PAGER?\n> Inventing still more variables of the same ilk is making it even\n> messier, and more obviously poorly designed, and more obviously\n> likely to end up with forty-nine different variables for slightly\n> different purposes.\n>\n\nThe programs with some complex output usually doesn't use a pagers - or use\npagers only for part of their output.\n\nInitially I would to teach \"less\" to support tabular data - but the after\nsome initial research I found so I am not able to modify \"less\".\n\n\n> I think that the general problem here is \"we need psql to be able to\n> give some context info to pspg\", and the obvious way to handle that\n> is to make a provision for arguments on pspg's command line. That\n> is, instead of just calling \"pspg\", call \"pspg table\" or \"pspg help\"\n> etc etc, with the understanding that the set of context words could\n> be extended over time. We could shoehorn this into what we already\n> have by saying that PSQL_PAGER is interpreted as a format, and if\n> it contains say \"%c\" then replace that with a context word (and\n> again, there's room for more format codes over time). Probably\n> best *not* to apply such an interpretation to PAGER, though.\n>\n\nIt can be a way. 
There are some issues unfixable on pager side - like\ndynamic column resizing when FETCH_COUNT > 0 and some others.\n\nI can imagine a situation, when psql send just raw data in some easy\nmachine readable format (like CSV), and specialized pager can format these\ndata, and can support some interactive work (hiding columns, columns\nswitch, ..)\n\nRegards\n\nPavel\n\n\n>\n> Whether the whole problem is really worth this much infrastructure\n> is a fair question. But if we're going to do something, I'd rather\n> go down a path like this than inventing a new environment variable\n> every month.\n>\n> regards, tom lane\n>\n\npo 22. 4. 2019 v 16:21 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Robert Haas <robertmhaas@gmail.com> writes:\n> Seems like pspg could just hand off to the regular pager if it\n> discovers that the input is not in a format it finds suitable.\n\nIt might be slightly tricky to do that after having already consumed\nsome of the input :-(.pspg supports both direction scrolling, so data are in buffer, and can be displayed again.\n\nStill, I've got to say that I find this proposal pretty horrid.\nI already thought that PSQL_PAGER was a dubious idea: what other\nprogram do you know anywhere that isn't satisfied with PAGER?\nInventing still more variables of the same ilk is making it even\nmessier, and more obviously poorly designed, and more obviously\nlikely to end up with forty-nine different variables for slightly\ndifferent purposes.The programs with some complex output usually doesn't use a pagers - or use pagers only for part of their output.Initially I would to teach \"less\" to support tabular data - but the after some initial research I found so I am not able to modify \"less\". \n\nI think that the general problem here is \"we need psql to be able to\ngive some context info to pspg\", and the obvious way to handle that\nis to make a provision for arguments on pspg's command line. 
That\nis, instead of just calling \"pspg\", call \"pspg table\" or \"pspg help\"\netc etc, with the understanding that the set of context words could\nbe extended over time. We could shoehorn this into what we already\nhave by saying that PSQL_PAGER is interpreted as a format, and if\nit contains say \"%c\" then replace that with a context word (and\nagain, there's room for more format codes over time). Probably\nbest *not* to apply such an interpretation to PAGER, though.It can be a way. There are some issues unfixable on pager side - like dynamic column resizing when FETCH_COUNT > 0 and some others.I can imagine a situation, when psql send just raw data in some easy machine readable format (like CSV), and specialized pager can format these data, and can support some interactive work (hiding columns, columns switch, ..) RegardsPavel \n\nWhether the whole problem is really worth this much infrastructure\nis a fair question. But if we're going to do something, I'd rather\ngo down a path like this than inventing a new environment variable\nevery month.\n\n regards, tom lane",
"msg_date": "Mon, 22 Apr 2019 16:47:20 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: psql PSQL_TABULAR_PAGER variable"
}
] |
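Pavel's suggestion in the thread above — psql emitting raw, machine-readable output such as CSV and leaving the tabular rendering to a specialized pager — is easy to prototype. The sketch below is purely illustrative (the function name and formatting rules are invented, and it assumes rectangular CSV input); it is not how pspg actually works:

```python
import csv
import io


def format_csv_as_table(raw: str) -> str:
    """Render CSV text as an aligned table, roughly the way a
    CSV-aware pager could before handing lines to the screen."""
    rows = list(csv.reader(io.StringIO(raw)))
    if not rows:
        return ""
    # Column widths over all rows (input assumed rectangular).
    widths = [max(len(row[i]) for row in rows) for i in range(len(rows[0]))]
    lines = [" | ".join(cell.ljust(w) for cell, w in zip(row, widths))
             for row in rows]
    # Separator under the header row, similar to psql's aligned format.
    lines.insert(1, "-+-".join("-" * w for w in widths))
    return "\n".join(lines)


print(format_csv_as_table("id,name\n1,alice\n2,bob"))
```

A pager built this way would additionally keep the raw rows buffered so it can re-render them for scrolling, column hiding, and the other interactive features Pavel describes.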
[
{
"msg_contents": "Up until quite recently, it worked to do \"make installcheck\" in\nsrc/test/recovery, following the instructions in the README\nfile there:\n\n NOTE: You must have given the --enable-tap-tests argument to configure.\n Also, to use \"make installcheck\", you must have built and installed\n contrib/test_decoding in addition to the core code.\n \n Run\n make check\n or\n make installcheck\n You can use \"make installcheck\" if you previously did \"make install\".\n In that case, the code in the installation tree is tested. With\n \"make check\", a temporary installation tree is built from the current\n sources and then tested.\n\nNow, however, the 016_min_consistency.pl test is falling over,\nwith symptoms indicating that it expects to have the pageinspect\nextension installed as well:\n\nerror running SQL: 'psql:<stdin>:2: ERROR: could not open extension control fil\ne \"/home/postgres/testversion/share/extension/pageinspect.control\": No such file\n or directory'\nwhile running 'psql -XAtq -d port=64106 host=/tmp/KaoBFubKfw dbname='postgres' -\nf - -v ON_ERROR_STOP=1' with sql '\nCREATE EXTENSION pageinspect;\n...\n\nIs this extra dependency actually essential? I'm not really\nhappy about increasing the number of moving parts in this test.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Apr 2019 01:45:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "\"make installcheck\" fails in src/test/recovery"
},
{
"msg_contents": "On Thu, Apr 18, 2019 at 01:45:45AM -0400, Tom Lane wrote:\n> Is this extra dependency actually essential? I'm not really\n> happy about increasing the number of moving parts in this test.\n\nHmmm. I don't actually object to removing the part depending on\npageinspect in the tests. Relying on the on-disk page format has\nproved to be more reliable for the buildfarm than I initially\nthought, and we are actually able to keep the same coverage without\nthe dependency on pageinspect.\n\nNow, I think that this is a problem not only for\nsrc/test/recovery/ but for any path using EXTRA_INSTALL. For example,\nif you take contrib/ltree_plpython/, then issue \"make install\" from\nthis path followed by an installcheck, then the tests complain about\nltree missing from the installation. For the recovery tests, we\nalready require test_decoding, so I would expect the problem to get\nworse over time, as we should not restrict the dependencies on\nother modules if they make sense for some TAP tests.\n\nI am wondering if it would be better to just install automatically all\nthe paths listed in EXTRA_INSTALL when invoking installcheck. We\nenforce the target in src/test/recovery/Makefile, still we could use\nthis opportunity to mark it with TAP_TESTS=1.\n--\nMichael",
"msg_date": "Fri, 19 Apr 2019 09:27:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: \"make installcheck\" fails in src/test/recovery"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> I am wondering if it would be better to just install automatically all\n> the paths listed in EXTRA_INSTALL when invoking installcheck.\n\nAbsolutely not! In the first place, \"make installcheck\" is supposed to\ntest the installed tree, not editorialize upon what's in it; and in the\nsecond place, you wouldn't necessarily have permissions to change that\ntree.\n\nIf we think that depending on pageinspect is worthwhile here, the right\nthing to do is just to fix the README to say that you need it. I'm\nmerely asking whether it's really worth the extra dependency.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Apr 2019 21:31:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: \"make installcheck\" fails in src/test/recovery"
},
{
"msg_contents": "On Thu, Apr 18, 2019 at 09:31:21PM -0400, Tom Lane wrote:\n> If we think that depending on pageinspect is worthwhile here, the right\n> thing to do is just to fix the README to say that you need it. I'm\n> merely asking whether it's really worth the extra dependency.\n\nThe dependency is not strongly mandatory for the test coverage. I'll\njust go remove it if that's an annoyance.\n--\nMichael",
"msg_date": "Fri, 19 Apr 2019 11:23:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: \"make installcheck\" fails in src/test/recovery"
},
{
"msg_contents": "On Fri, Apr 19, 2019 at 11:23:04AM +0900, Michael Paquier wrote:\n> The dependency is not strongly mandatory for the test coverage. I'll\n> just go remove it if that's an annoyance.\n\nAnd done.\n--\nMichael",
"msg_date": "Fri, 19 Apr 2019 15:57:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: \"make installcheck\" fails in src/test/recovery"
}
] |
[
{
"msg_contents": "when I fetch from holdable cursor, I found the fact is more complex than I\nexpected.\n\nsuppose we fetched 20 rows.\n\n1). It will fill a PortalStore, the dest is not the client, it is the\nDestTupleStore, called ExecutePlan once and receiveSlot will be call 20\ntimes.\n\n2). the portal for client then RunFromStore and send the result to client.\nthe receiveSlot will be call 20 times again.\n\n3). at last, when we HoldPortal, called ExecutePlan once again and\nreceiveSlot will be call 20 times\n\n```\n0 in ExecutePlan of execMain.c:1696\n1 in standard_ExecutorRun of execMain.c:366\n2 in ExecutorRun of execMain.c:309\n3 in PersistHoldablePortal of portalcmds.c:392\n4 in HoldPortal of portalmem.c:639\n5 in PreCommit_Portals of portalmem.c:733\n6 in CommitTransaction of xact.c:2007\n7 in CommitTransactionCommand of xact.c:2801\n8 in finish_xact_command of postgres.c:2529\n9 in exec_simple_query of postgres.c:1176\n10 in exec_docdb_simple_query of postgres.c:5069\n11 in _exec_query_with_intercept_exception of op_executor.c:38\n12 in exec_op_query of op_executor.c:102\n13 in exec_op_find of op_executor.c:204\n14 in run_op_find_common of op_find_common.c:42\n15 in _cmd_run_find of cmd_find.c:31\n16 in run_commands of commands.c:610\n17 in DocdbMain of postgres.c:4792\n18 in DocdbBackendRun of postmaster.c:4715\n19 in DocdbBackendStartup of postmaster.c:4196\n20 in ServerLoop of postmaster.c:1760\n21 in PostmasterMain of postmaster.c:1406\n22 in main of main.c:228\n```\n\nwhy the 3rd time is necessary and will the performance be bad due to this\ndesign?\n\nThanks for your help!",
"msg_date": "Thu, 18 Apr 2019 19:50:45 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Question about the holdable cursor"
},
{
"msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> when I fetch from holdable cursor, I found the fact is more complex than I\n> expected.\n> ...\n> why the 3rd time is necessary and will the performance be bad due to this\n> design?\n\nIf you read the whole cursor output, then close the transaction and\npersist the cursor, yes we'll read it twice, and yes it's bad for that\ncase. The design is intended to perform well in these other cases:\n\n1. The HOLD option isn't really being used, ie you just read and\nclose the cursor within the original transaction. This is important\nbecause applications are frequently sloppy about marking cursors as\nWITH HOLD.\n\n2. You declare the cursor and persist it before reading anything from it.\n(This is really the typical use-case for held cursors, IMV.)\n\nFWIW, I don't see any intermediate tuplestore in there when\ndealing with a PORTAL_ONE_SELECT query, which is the only\ncase that's possible with a cursor no?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Apr 2019 10:09:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about the holdable cursor"
},
{
"msg_contents": "On Thu, Apr 18, 2019 at 10:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > when I fetch from holdable cursor, I found the fact is more complex\n> than I\n> > expected.\n> > ...\n> > why the 3rd time is necessary and will the performance be bad due to this\n> > design?\n>\n> If you read the whole cursor output, then close the transaction and\n> persist the cursor, yes we'll read it twice, and yes it's bad for that\n> case. The design is intended to perform well in these other cases:\n>\n> Thanks you Tom for the reply!! Looks this situation is really hard to\nproduce but I just got there:( Please help me to confirm my\nunderstanding:\n\n1. we can have 2 methods to reproduce it:\n\nMethod 1:\na). begin; // begin the transaction explicitly\nb). declare c1 cursor WITH HOLD for select * from t; // declare the\ncursor with HOLD option.\nc). fetch n c1; // this will run ExecutePlan the first time.\nd). commit // commit the transaction explicitly, which caused the 2nd\nExecutePlan. Write \"ALL the records\" into tuplestore.\n\nMethod 2:\n\na). declare c1 cursor WITH HOLD for select * from t; fetch n c1; // send\n1 query with 2 statements, with implicitly transaction begin/commit;\n\n\n(even though, I don't know how to send \"declare c1 cursor WITH HOLD for\nselect * from t; fetch n c1; \" as one query in psql shell)\n\n\n2. with a bit of more normal case:\n\na). declare c1 cursor WITH HOLD for select * from t; // declare the cursor\nwith HOLD option. the transaction is started implicitly and commit\nimplicitly.\nduring the commit, \"ExecutePlan\" is called first time and \"GET ALL THE\nRECORDS\" and store ALL OF them (what if it is very big, write to file)?\n\nb). fetch 10 c1; // will not run ExecutePlan any more.\n\neven though, \"GET ALL THE RECORDS\" at the step 1 is expensive.\n\n3). without hold option\n\na) begin;\nb). declare c1 cursor for select * from t; .// without hold option.\nc). fetch 1 c1; // this only scan 1 row.\nd). commit;\n\nif so, the connection can't be used for other transactions until I commit\nthe transaction for cursor (which is something I dislike for now).\n\n\nCould you help to me confirm my understandings are correct regarding the 3\ntopics? Thanks\n\n\n1. The HOLD option isn't really being used, ie you just read and\n> close the cursor within the original transaction. This is important\n> because applications are frequently sloppy about marking cursors as\n> WITH HOLD.\n>\n> 2. You declare the cursor and persist it before reading anything from it.\n> (This is really the typical use-case for held cursors, IMV.)\n>\n> FWIW, I don't see any intermediate tuplestore in there when\n> dealing with a PORTAL_ONE_SELECT query, which is the only\n> case that's possible with a cursor no?\n>\n> regards, tom lane\n>",
"msg_date": "Fri, 19 Apr 2019 01:02:39 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Question about the holdable cursor"
}
] |
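Tom's two cases in the thread above can be made concrete with a toy model. The class below is an invented Python illustration of the behavior described there, not PostgreSQL code; it counts only plan-produced rows (the ExecutePlan passes), not the per-client receiveSlot calls. Reading the whole cursor and then committing WITH HOLD scans the plan twice, while persisting before reading scans it once:

```python
class ToyHoldableCursor:
    """Toy model of a WITH HOLD cursor: slot_calls counts rows
    produced by the (simulated) executor, one per plan-produced row."""

    def __init__(self, rows):
        self.rows = rows        # what the plan would produce
        self.pos = 0            # client-visible fetch position
        self.store = None       # tuplestore filled when persisted
        self.slot_calls = 0     # executor rows produced so far

    def fetch(self, n):
        src = self.store if self.store is not None else self.rows
        out = src[self.pos:self.pos + n]
        if self.store is None:
            self.slot_calls += len(out)   # rows came from the plan
        self.pos += n
        return out

    def commit(self):
        # Persisting a held cursor re-runs the plan from the start
        # and stores every row (cf. PersistHoldablePortal).
        if self.store is None:
            self.store = list(self.rows)
            self.slot_calls += len(self.rows)


# Case 1: fetch everything, then commit -> every row scanned twice.
c1 = ToyHoldableCursor(list(range(20)))
c1.fetch(20)
c1.commit()
print(c1.slot_calls)   # 40: 20 rows for the fetch + 20 more to persist

# Case 2: persist first (typical held-cursor use) -> plan runs once.
c2 = ToyHoldableCursor(list(range(20)))
c2.commit()
c2.fetch(20)
print(c2.slot_calls)   # 20: the fetch is served from the tuplestore
```

This is exactly the trade-off Tom describes: the design optimizes for cursors that are either never held across commit, or persisted before any rows are fetched.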
[
{
"msg_contents": "Hello. As mentioned before [1], read_page callback in\nXLogReaderState is a cause of headaches. Adding another\nremote-controlling stuff to xlog readers makes things messier [2].\n\nI refactored XLOG reading functions so that we don't need the\ncallback. In short, ReadRecord now calls XLogPageRead directly\nwith the attached patch set.\n\n| while (XLogReadRecord(xlogreader, RecPtr, &record, &errormsg)\n| == XLREAD_NEED_DATA)\n| XLogPageRead(xlogreader, fetching_ckpt, emode, randAccess);\n\nOn the other hand, XLogReadRecord became a bit complex. The patch\nmakes XLogReadRecord a state machine. I'm not confident that the\nadditional complexity is worth doing. Anyway I'll register this\nto the next CF.\n\n[1] https://www.postgresql.org/message-id/47215279-228d-f30d-35d1-16af695e53f3@iki.fi\n\n[2] https://www.postgresql.org/message-id/20190412.122711.158276916.horiguchi.kyotaro@lab.ntt.co.jp\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 18 Apr 2019 21:02:57 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n\n> Hello. As mentioned before [1], read_page callback in\n> XLogReaderState is a cause of headaches. Adding another\n> remote-controlling stuff to xlog readers makes things messier [2].\n\nThe patch I posted in thread [2] tries to solve another problem: it tries to\nmerge xlogutils.c:XLogRead(), walsender.c:XLogRead() and\npg_waldump.c:XLogDumpXLogRead() into a single function,\nxlogutils.c:XLogRead().\n\n> [2]\n> https://www.postgresql.org/message-id/20190412.122711.158276916.horiguchi.kyotaro@lab.ntt.co.jp\n\n> I refactored XLOG reading functions so that we don't need the\n> callback.\n\nI was curious about the patch, so I reviewed it:\n\n* xlogreader.c\n\n ** Comments mention \"opcode\", \"op\" and \"expression step\" - probably leftover\n from the executor, which seems to have inspired you.\n\n ** XLR_DISPATCH() seems to be unused\n\n ** Comment: \"duplicatedly\" -> \"repeatedly\" ?\n\n ** XLogReadRecord(): comment \"All internal state need ...\" -> \"needs\"\n\n ** XLogNeedData()\n\n *** shouldn't only the minimum amount of data needed (SizeOfXLogLongPHD)\n be requested here?\n\n\tstate->loadLen = XLOG_BLCKSZ;\n\tXLR_LEAVE(XLND_STATE_SEGHEAD, true);\n\nNote that ->loadLen is also set only to the minimum amount of data needed\nelsewhere.\n\n *** you still mention \"read_page callback\" in a comment.\n\n *** state->readLen is checked before one of the calls of XLR_LEAVE(), but\n I think it should happen before *each* call. Otherwise data can be read\n from the page even if it's already in the buffer.\n\n* xlogreader.h\n\n ** XLND_STATE_PAGEFULLHEAD - maybe LONG rather than FULL? And perhaps HEAD\n -> HDR, so it's clear that it's about (page) header, not e.g. list head.\n\n ** XLogReaderState.loadLen - why not reqLen? loadLen sounds to me like \"loaded\"\n as opposed to \"requested\". 
And assignment like this\n\n\tint reqLen\t= xlogreader->loadLen;\n\n will also be less confusing with ->reqLen.\n\n Maybe also ->loadPagePtr should be renamed to ->targetPagePtr.\n\n\n* trailing whitespace: xlogreader.h:130, xlogreader.c:1058\n\n\n* The 2nd argument of SimpleXLogPageRead() is \"private\", which seems too\n generic given that the function is no longer used as a callback. Since the\n XLogPageReadPrivate structure only has two fields, I think it'd be o.k. to\n pass them to the function directly.\n\n* I haven't found a CF entry for this patch.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Thu, 25 Apr 2019 13:58:20 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "Hello. Thank you for looking at this.\n\nAt Thu, 25 Apr 2019 13:58:20 +0200, Antonin Houska <ah@cybertec.at> wrote in <18581.1556193500@localhost>\n> Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> \n> > Hello. As mentioned before [1], read_page callback in\n> > XLogReaderState is a cause of headaches. Adding another\n> > remote-controlling stuff to xlog readers makes things messier [2].\n> \n> The patch I posted in thread [2] tries to solve another problem: it tries to\n> merge xlogutils.c:XLogRead(), walsender.c:XLogRead() and\n> pg_waldump.c:XLogDumpXLogRead() into a single function,\n> xlogutils.c:XLogRead().\n> \n> > [2]\n> > https://www.postgresql.org/message-id/20190412.122711.158276916.horiguchi.kyotaro@lab.ntt.co.jp\n> \n> > I refactored XLOG reading functions so that we don't need the\n> > callback.\n> \n> I was curious about the patch, so I reviewed it:\n\nThank you for the comments. (It's a shame that I might have made it more complex..)\n\n> * xlogreader.c\n> \n> ** Comments mention \"opcode\", \"op\" and \"expression step\" - probably leftover\n> from the executor, which seems to have inspired you.\n\nUggh. Yes, exactly. I believed I had changed them all. Fixed.\n\n> ** XLR_DISPATCH() seems to be unused\n\nRight. XLR_ macros are used to dispatch internally in a function\ndifferently from EEO_ macros, so I thought it useless but I\nhesitated to remove it. I removed it.\n\n> ** Comment: \"duplicatedly\" -> \"repeatedly\" ?\n\nIt aimed at reentrancy. But I noticed that it doesn't work when\nERROR-exiting.
So I removed the assertion and related code.\n\n> ** XLogReadRecord(): comment \"All internal state need ...\" -> \"needs\"\n\nFixed.\n\n> ** XLogNeedData()\n> \n> *** shouldn't only the minimum amount of data needed (SizeOfXLogLongPHD)\n> be requested here?\n> \n> \tstate->loadLen = XLOG_BLCKSZ;\n> \tXLR_LEAVE(XLND_STATE_SEGHEAD, true);\n> \n> Note that ->loadLen is also set only to the minimum amount of data needed\n> elsewhere.\n\nMaybe right, but it is existing behavior, so I preserved it while\nfocusing on refactoring.\n\n> *** you still mention \"read_page callback\" in a comment.\n\nThanks. \"the read_page callback\" was translated to \"the caller\",\nand that seems to be the last one.\n\n> *** state->readLen is checked before one of the calls of XLR_LEAVE(), but\n> I think it should happen before *each* call. Otherwise data can be read\n> from the page even if it's already in the buffer.\n\nThat doesn't happen, since XLogReadRecord doesn't LEAVE unless\nXLogNeedData returns true (that is, needs more data) and\nXLogNeedData returns true only when the requested data is not in the\nbuffer yet. (If I refactored it correctly, and it seems to me so.)\n\n> * xlogreader.h\n> \n> ** XLND_STATE_PAGEFULLHEAD - maybe LONG rather than FULL? And perhaps HEAD\n> -> HDR, so it's clear that it's about (page) header, not e.g. list head.\n\nPerhaps that's better. Thanks.\n\n> \n> ** XLogReaderState.loadLen - why not reqLen? loadLen sounds to me like \"loaded\"\n> as opposed to \"requested\". And assignemnt like this\n>\n> \tint reqLen\t= xlogreader->loadLen;\n> \n> will also be less confusing with ->reqLen.\n> \n> Maybe also ->loadPagePtr should be renamed to ->targetPagePtr.\n\nYeah, that's an annoyance. reqLen *was* actually the \"requested\"\nlength to XLogNeedData FKA ReadPageInternal, but in the current\nshape XLogNeedData makes a different request to the callers (when\nfetching the first page in a newly visited segment), so the two\n(req(uest)Len and (to be)loadLen) are different things.
At the\nsame time, targetPagePoint is different from loadPagePtr.\n\nOf course the naming is arguable.\n\n> * trailing whitespace: xlogreader.h:130, xlogreader.c:1058\n\nThanks, it has been fixed in my repo.\n\n> * The 2nd argument of SimpleXLogPageRead() is \"private\", which seems too\n> generic given that the function is no longer used as a callback. Since the\n> XLogPageReadPrivate structure only has two fields, I think it'd be o.k. to\n> pass them to the function directly.\n\nSounds reasonable. Fixed.\n\n> * I haven't found a CF entry for this patch.\n\nYeah, I'll register this, maybe the week after next week.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 26 Apr 2019 17:40:34 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n\n> Hello. Thank you for looking this.\n> ...\n> Yeah, I'll register this, maybe the week after next week.\n\nI've checked the new version. One more thing I noticed now is that XLR_STATE.j\nis initialized to zero, either by XLogReaderAllocate() which zeroes the whole\nreader state, or later by XLREAD_RESET. This special value then needs to be\nhandled here:\n\n#define XLR_SWITCH()\t\t\t\t\t\\\n\tdo {\t\t\t\t\t\t\\\n\t\tif ((XLR_STATE).j)\t\t\t\\\n\t\t\tgoto *((void *) (XLR_STATE).j);\t\\\n\t\tXLR_CASE(XLR_INIT_STATE);\t\t\\\n\t} while (0)\n\nI think it's better to set the label always to (&&XLR_INIT_STATE) so that\nXLR_SWITCH can perform the jump unconditionally.\n\nAttached is also an (unrelated) comment fix proposal.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Wed, 22 May 2019 13:53:23 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "Thank you for looking at this, Antonin.\n\nAt Wed, 22 May 2019 13:53:23 +0200, Antonin Houska <ah@cybertec.at> wrote in <25494.1558526003@spoje.net>\n> Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> \n> > Hello. Thank you for looking this.\n> > ...\n> > Yeah, I'll register this, maybe the week after next week.\n> \n> I've checked the new version. One more thing I noticed now is that XLR_STATE.j\n> is initialized to zero, either by XLogReaderAllocate() which zeroes the whole\n> reader state, or later by XLREAD_RESET. This special value then needs to be\n> handled here:\n> \n> #define XLR_SWITCH()\t\t\t\t\t\\\n> \tdo {\t\t\t\t\t\t\\\n> \t\tif ((XLR_STATE).j)\t\t\t\\\n> \t\t\tgoto *((void *) (XLR_STATE).j);\t\\\n> \t\tXLR_CASE(XLR_INIT_STATE);\t\t\\\n> \t} while (0)\n> \n> I think it's better to set the label always to (&&XLR_INIT_STATE) so that\n> XLR_SWITCH can perform the jump unconditionally.\n\nI thought the same, but did not do that since a label is\nfunction-scoped, so it cannot be referred to outside the defining\nfunction.\n\nI moved the state variable from XLogReaderState into a function-static\nvariable. It's not a problem since the functions are\nnon-reentrant in the first place.\n\n> Attached is also an (unrelated) comment fix proposal.\n\nSounds reasonable. I found another typo \"acutually\" there.\n\n- int32 readLen; /* bytes acutually read, must be larger than\n+ int32 readLen; /* bytes acutually read, must be at least\n\nI fixed it along with other typos found.\n\nv3-0001 : Changed macros as suggested.\n\nv3-0002, 0004: Fixed comments. Fixes following changes of\n macros. Renamed state symbols.\n\nv3-0003, 0005-0010: No substantial change from v2.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 24 May 2019 11:56:24 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n\n> v3-0001 : Changed macrosas suggested.\n\nThis attachment is missing, please send it too.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Tue, 28 May 2019 09:55:04 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "Hi,\n\nOn 2019-04-18 21:02:57 +0900, Kyotaro HORIGUCHI wrote:\n> Hello. As mentioned before [1], read_page callback in\n> XLogReaderState is a cause of headaches. Adding another\n> remote-controlling stuff to xlog readers makes things messier [2].\n> \n> I refactored XLOG reading functions so that we don't need the\n> callback. In short, ReadRecrod now calls XLogPageRead directly\n> with the attached patch set.\n> \n> | while (XLogReadRecord(xlogreader, RecPtr, &record, &errormsg)\n> | == XLREAD_NEED_DATA)\n> | XLogPageRead(xlogreader, fetching_ckpt, emode, randAccess);\n> \n> On the other hand, XLogReadRecord became a bit complex. The patch\n> makes XLogReadRecord a state machine. I'm not confident that the\n> additional complexity is worth doing. Anyway I'll gegister this\n> to the next CF.\n\nJust FYI, to me this doesn't clearly enough look like an improvement,\nfor a change of this size.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 28 May 2019 04:45:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "Hello. The patch gets disliked by my tool chain. Fixed the usage\nof PG_USED_FOR_ASSERTS_ONLY and rebased to bd56cd75d2.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 10 Jul 2019 13:18:10 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "At Tue, 28 May 2019 04:45:24 -0700, Andres Freund <andres@anarazel.de> wrote in <20190528114524.dvj6ymap2virlzro@alap3.anarazel.de>\n> Hi,\n> \n> On 2019-04-18 21:02:57 +0900, Kyotaro HORIGUCHI wrote:\n> > Hello. As mentioned before [1], read_page callback in\n> > XLogReaderState is a cause of headaches. Adding another\n> > remote-controlling stuff to xlog readers makes things messier [2].\n> > \n> > I refactored XLOG reading functions so that we don't need the\n> > callback. In short, ReadRecrod now calls XLogPageRead directly\n> > with the attached patch set.\n> > \n> > | while (XLogReadRecord(xlogreader, RecPtr, &record, &errormsg)\n> > | == XLREAD_NEED_DATA)\n> > | XLogPageRead(xlogreader, fetching_ckpt, emode, randAccess);\n> > \n> > On the other hand, XLogReadRecord became a bit complex. The patch\n> > makes XLogReadRecord a state machine. I'm not confident that the\n> > additional complexity is worth doing. Anyway I'll gegister this\n> > to the next CF.\n> \n> Just FYI, to me this doesn't clearly enough look like an improvement,\n> for a change of this size.\n\nThanks for the opinion. I kinda agree about the size, but it is a\ndecision between \"having multiple callbacks called under the\nhood\" vs \"just calling a series of functions\". I think the\npatched XLogReadRecord is easy to use in many situations.\n\nIt would be better if I could completely refactor the function\nwithout the syntax tricks, but I think the current patch is still\nsmaller and clearer than overhauling it.\n\nIf many of the folks think that adding a callback is better than\nthis refactoring, I will withdraw this..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 12 Jul 2019 16:10:16 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On 12/07/2019 10:10, Kyotaro Horiguchi wrote:\n> At Tue, 28 May 2019 04:45:24 -0700, Andres Freund <andres@anarazel.de> wrote in <20190528114524.dvj6ymap2virlzro@alap3.anarazel.de>\n>> Hi,\n>>\n>> On 2019-04-18 21:02:57 +0900, Kyotaro HORIGUCHI wrote:\n>>> Hello. As mentioned before [1], read_page callback in\n>>> XLogReaderState is a cause of headaches. Adding another\n>>> remote-controlling stuff to xlog readers makes things messier [2].\n>>>\n>>> I refactored XLOG reading functions so that we don't need the\n>>> callback. In short, ReadRecrod now calls XLogPageRead directly\n>>> with the attached patch set.\n>>>\n>>> | while (XLogReadRecord(xlogreader, RecPtr, &record, &errormsg)\n>>> | == XLREAD_NEED_DATA)\n>>> | XLogPageRead(xlogreader, fetching_ckpt, emode, randAccess);\n>>>\n>>> On the other hand, XLogReadRecord became a bit complex. The patch\n>>> makes XLogReadRecord a state machine. I'm not confident that the\n>>> additional complexity is worth doing. Anyway I'll gegister this\n>>> to the next CF.\n>>\n>> Just FYI, to me this doesn't clearly enough look like an improvement,\n>> for a change of this size.\n> \n> Thanks for the opiniton. I kinda agree about size but it is a\n> decision between \"having multiple callbacks called under the\n> hood\" vs \"just calling a series of functions\". I think the\n> patched XlogReadRecord is easy to use in many situations.\n> \n> It would be better if I could completely refactor the function\n> without the syntax tricks but I think the current patch is still\n> smaller and clearer than overhauling it.\n\nI like the idea of refactoring XLogReadRecord() to not use a callback, \nand return a XLREAD_NEED_DATA value instead. It feels like a nicer, \neasier-to-use, interface, given that all the page-read functions need \nquite a bit of state and internal logic themselves. 
I remember that I \nfelt that that would be a nicer interface when we originally extracted \nxlogreader.c into a reusable module, but I didn't want to make such big \nchanges to XLogReadRecord() at that point.\n\nI don't much like the \"continuation\" style of implementing the state \nmachine. Nothing wrong with such a style in principle, but we don't do \nthat anywhere else, and the macros seem like overkill, and turning the \nlocal variables static is pretty ugly. But I think XLogReadRecord() \ncould be rewritten into a more traditional state machine.\n\nI started hacking on that, to get an idea of what it would look like and \ncame up with the attached patch, to be applied on top of all your \npatches. It's still very messy, it needs quite a lot of cleanup before \nit can be committed, but I think the resulting switch-case state machine \nin XLogReadRecord() is quite straightforward at a high level, with four \nstates.\n\nI made some further changes to the XLogReadRecord() interface:\n\n* If you pass a valid ReadPtr (i.e. the starting point to read from) \nargument to XLogReadRecord(), it always restarts reading from that \nrecord, even if it was in the middle of reading another record \npreviously. (Perhaps it would be more convenient to provide a separate \nfunction to set the starting point, and remove the RecPtr argument from \nXLogReadRecord altogether?)\n\n* XLogReaderState->readBuf is now allocated and controlled by the \ncaller, not by xlogreader.c itself. When XLogReadRecord() needs data, \nthe caller makes the data available in readBuf, which can point to the \nsame buffer in all calls, or the caller may allocate a new buffer, or it \nmay point to a part of a larger buffer, whatever is convenient for the \ncaller. (Currently, all callers just allocate a BLCKSZ'd buffer, \nthough). The caller also sets readPagePtr, readLen and readPageTLI to \ntell XLogReadRecord() what's in the buffer. 
So all these read* fields \nare now set by the caller, XLogReadRecord() only reads them.\n\n* In your patch, if XLogReadRecord() was called with state->readLen == \n-1, XLogReadRecord() returned an error. That seemed a bit silly; if an \nerror happened while reading the data, why call XLogReadRecord() at all? \nYou could just report the error directly. So I removed that.\n\nI'm not sure how intelligible this patch is in its current state. But I \nthink the general idea is good. I plan to clean it up further next week, \nbut feel free to work on it before that, either based on this patch or \nby starting afresh from your patch set.\n\n- Heikki",
"msg_date": "Mon, 29 Jul 2019 22:39:57 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "Thank you for the suggestion, Heikki.\n\nAt Mon, 29 Jul 2019 22:39:57 +0300, Heikki Linnakangas <hlinnaka@iki.fi> wrote in <e1ecb53b-663d-98ed-2249-dfa30a74f8c1@iki.fi>\n> On 12/07/2019 10:10, Kyotaro Horiguchi wrote:\n> >> Just FYI, to me this doesn't clearly enough look like an improvement,\n> >> for a change of this size.\n> > Thanks for the opiniton. I kinda agree about size but it is a\n> > decision between \"having multiple callbacks called under the\n> > hood\" vs \"just calling a series of functions\". I think the\n> > patched XlogReadRecord is easy to use in many situations.\n> > It would be better if I could completely refactor the function\n> > without the syntax tricks but I think the current patch is still\n> > smaller and clearer than overhauling it.\n> \n> I like the idea of refactoring XLogReadRecord() to not use a callback,\n> and return a XLREAD_NEED_DATA value instead. It feels like a nicer,\n> easier-to-use, interface, given that all the page-read functions need\n> quite a bit of state and internal logic themselves. I remember that I\n> felt that that would be a nicer interface when we originally extracted\n> xlogreader.c into a reusable module, but I didn't want to make such\n> big changes to XLogReadRecord() at that point.\n> \n> I don't much like the \"continuation\" style of implementing the state\n> machine. Nothing wrong with such a style in principle, but we don't do\n> that anywhere else, and the macros seem like overkill, and turning the\n\nAgreed that it's kind of ugly. I could overhaul the logic to\nreduce state variables, but I thought that it would make the\npatch hard to review.\n\nThe \"continuation\" style was intended to change the main path's\nshape as little as possible. For the same reason I made variables\nstatic instead of using an individual state struct or reducing state\nvariables. (And the style was fun for me :p)\n\n> local variables static is pretty ugly. 
But I think XLogReadRecord()\n> could be rewritten into a more traditional state machine.\n> \n> I started hacking on that, to get an idea of what it would look like\n> and came up with the attached patch, to be applied on top of all your\n> patches. It's still very messy, it needs quite a lot of cleanup before\n> it can be committed, but I think the resulting switch-case state\n> machine in XLogReadRecord() is quite straightforward at high level,\n> with four states.\n\nSorry for the late reply. It seems less messy than I thought it could\nbe if I refactored it more aggressively.\n\n> I made some further changes to the XLogReadRecord() interface:\n> \n> * If you pass a valid ReadPtr (i.e. the starting point to read from)\n> * argument to XLogReadRecord(), it always restarts reading from that\n> * record, even if it was in the middle of reading another record\n> * previously. (Perhaps it would be more convenient to provide a separate\n> * function to set the starting point, and remove the RecPtr argument\n> * from XLogReadRecord altogther?)\n\nSeems reasonable. The randAccess property was replaced with\nstate.PrevRecPtr = Invalid. It is easier to understand for me.\n\n> * XLogReaderState->readBuf is now allocated and controlled by the\n> * caller, not by xlogreader.c itself. When XLogReadRecord() needs data,\n> * the caller makes the data available in readBuf, which can point to the\n> * same buffer in all calls, or the caller may allocate a new buffer, or\n> * it may point to a part of a larger buffer, whatever is convenient for\n> * the caller. (Currently, all callers just allocate a BLCKSZ'd buffer,\n> * though). The caller also sets readPagPtr, readLen and readPageTLI to\n> * tell XLogReadRecord() what's in the buffer. So all these read* fields\n> * are now set by the caller, XLogReadRecord() only reads them.\n\nThe caller knows how many bytes need to be read. So having the caller provide\nthe required buffer seems reasonable.\n\n> * In your patch, if XLogReadRecord() was called with state->readLen ==\n> * -1, XLogReadRecord() returned an error. That seemed a bit silly; if an\n> * error happened while reading the data, why call XLogReadRecord() at\n> * all? You could just report the error directly. So I removed that.\n\nAgreed. I forgot to move the error handling to a more proper location.\n\n> I'm not sure how intelligible this patch is in its current state. But\n> I think the general idea is good. I plan to clean it up further next\n> week, but feel free to work on it before that, either based on this\n> patch or by starting afresh from your patch set.\n\nI think your diff is intelligible enough for me. I'll take this if\nyou haven't done so. Anyway I'm staring at this.\n\nThanks!\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 22 Aug 2019 10:43:52 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On 22/08/2019 04:43, Kyotaro Horiguchi wrote:\n> At Mon, 29 Jul 2019 22:39:57 +0300, Heikki Linnakangas <hlinnaka@iki.fi> wrote in <e1ecb53b-663d-98ed-2249-dfa30a74f8c1@iki.fi>\n>> On 12/07/2019 10:10, Kyotaro Horiguchi wrote:\n>> * XLogReaderState->readBuf is now allocated and controlled by the\n>> * caller, not by xlogreader.c itself. When XLogReadRecord() needs data,\n>> * the caller makes the data available in readBuf, which can point to the\n>> * same buffer in all calls, or the caller may allocate a new buffer, or\n>> * it may point to a part of a larger buffer, whatever is convenient for\n>> * the caller. (Currently, all callers just allocate a BLCKSZ'd buffer,\n>> * though). The caller also sets readPagPtr, readLen and readPageTLI to\n>> * tell XLogReadRecord() what's in the buffer. So all these read* fields\n>> * are now set by the caller, XLogReadRecord() only reads them.\n> \n> The caller knows how many byes to be read. So the caller provides\n> the required buffer seems reasonable.\n\nI also had in mind that the caller could provide a larger buffer, \nspanning multiple pages, in one XLogReadRecord() call. It might be \nconvenient to load a whole WAL file in memory and pass it to \nXLogReadRecord() in one call, for example. I think the interface would \nnow allow that, but the code won't actually take advantage of that. \nXLogReadRecord() will always ask for one page at a time, and I think it \nwill ask the caller for more data between each page, even if the caller \nsupplies more than one page in one call.\n\n>> I'm not sure how intelligible this patch is in its current state. But\n>> I think the general idea is good. I plan to clean it up further next\n>> week, but feel free to work on it before that, either based on this\n>> patch or by starting afresh from your patch set.\n> \n> I think you diff is intelligible enough for me. I'll take this if\n> you haven't done. Anyway I'm staring on this.\n\nThanks! 
I did actually spend some time on this last week, but got \ndistracted by something else before finishing it up and posting a patch. \nHere's a snapshot of what I have in my local branch. It seems to pass \n\"make check-world\", but probably needs some more cleanup.\n\nMain changes since last version:\n\n* I changed the interface so that there is a new function to set the \nstarting position, XLogBeginRead(), and XLogReadRecord() always \ncontinues from where it left off. I think that's more clear than having \na starting point argument in XLogReadRecord(), which was only set on the \nfirst call. It makes the calling code more clear, too, IMO.\n\n* Refactored the implementation of XLogFindNextRecord(). \nXLogFindNextRecord() is now a sibling function of XLogBeginRead(). It \nsets the starting point like XLogBeginRead(). The difference is that \nwith XLogFindNextRecord(), the starting point doesn't need to point to a \nvalid record, it will \"fast forward\" to the next valid record after the \npoint. The \"fast forwarding\" is done in an extra state in the state \nmachine in XLogReadRecord().\n\n* I refactored XLogReadRecord() and the internal XLogNeedData() \nfunction. XLogNeedData() used to contain logic for verifying segment and \npage headers. That works quite differently now. Checking the segment \nheader is now a new state in the state machine, and the page header is \nverified at the top of XLogReadRecord(), whenever the caller provides \nnew data. I think that makes the code in XLogReadRecord() more clear.\n\n- Heikki",
"msg_date": "Thu, 22 Aug 2019 12:43:30 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "At Thu, 22 Aug 2019 10:43:52 +0900 (Tokyo Standard Time), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in <20190822.104352.26342272.horikyota.ntt@gmail.com>\n> I think you diff is intelligible enough for me. I'll take this if\n> you haven't done. Anyway I'm staring on this.\n\n\n- Reducing state variables\n\nIt was a problem for me that there seem to be more state\nvariables than required. So first I tried to reduce them.\n\nNow readPagePtr and readLen are used bidirectionally.\nXLogNeedData sets them as a request and the page reader sets readLen to\nthe actual length. Similarly verified* changes only when the page\nheader is verified, so I introduced page_verified instead of those\nvariables.\n\n\n- Changed calling convention of XLogReadRecord\n\nTo make the caller's loop simple, XLogReadRecord now allows specifying\nthe same valid value while reading a record. There is no longer a need\nto change lsn to invalid after the first call in the following\nreader loop.\n\n while (XLogReadRecord(state, lsn, &record, &errormsg) == XLREAD_NEED_DATA)\n {\n if (!page_reader(state))\n break;\n }\n\n- Frequent data request caused by seeing long page header.\n\nXLogNeedData now takes a fourth parameter, includes_page_header.\nTrue means the caller is requesting with a reqLen that does not\ncount the page header length. But it makes the function a bit more\ncomplex than expected. Blindly requesting in anticipation of a long page\nheader for a new page may prevent the page-reader from returning the\nbytes already at hand by waiting for bytes that won't come. To\nprevent such a case the function should request anticipating a short\npage header first for a new page, then make a re-request using\nSizeOfLongPHD if needed. Of course it is unlikely to happen for\nfile sources, and unlikely to harm physical replication (and the\nbehavior is not changed). Finally, the outcome is more or less\nthe same as just stashing the seemingly bogus retry from\nXLogReadRecord into XLogNeedData. If we are allowed to utilize the\nknowledge that a long page header is attached only to the first page\nof a segment, such complexity could be eliminated.\n\n\n- Moving page buffer allocation\n\nAs for page buffer allocation, I'm not sure it is meaningful, as\nthe reader assumes the buffer is the same as the page size,\nwhich is immutable system-wide. It would surely be meaningful if\nit were up to the caller to decide its own block size, or loading\nunit. Anyway it is in the third patch.\n\n- Restored early check-out of record header\n\nThe current record reader code seems to be designed to bail out\non a broken record header as early as possible, perhaps in order\nto prevent reads of impossible size. So I restored the\nbehavior.\n\n\n\nThe attached are the current status; it is separated into two\nsignificant parts plus one for readability.\n\nv5-0001-Move-callback-call-from-ReadPageInternal-to-XLogR.patch:\n\n ReadPageInternal part of the patch. Moves callback calls from\n ReadPageInternal up to XLogReadRecord. Some of the recovery tests\n fail applying only this one but I don't want to put more effort\n into making this state perfect.\n\nv5-0002-Move-page-reader-out-of-XLogReadRecord.patch\n\n The remaining part of the main work. Eliminates callback calls\n from XLogReadRecord. Applies to current master. Passes all\n regression and TAP tests.\n\nv5-0003-Change-policy-of-XLog-read-buffer-allocation.patch\n\n Separate patch to move page buffer allocation from\n XLogReaderAllocate to callers of XLogReadRecord.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 06 Sep 2019 16:33:18 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "Attached is a new version:\n\n- Rebased. Cleaned up\n\n- Rebased to the current master\n\n- Fixed a known bug in the first step patch. It caused\n timeline-following failure on a standby of a promoted primary.\n\n- Fixed confused naming and setting of the parameter\n includes_page_header.\n\n- Removed useless XLogNeedData call in\n XLREAD_NEED_CONTINUATION. The first call to the function\n ensures that all required data is loaded. Finally, every case\n block has just one XLogNeedData call.\n\n- Removed the label \"again\" in XLogReadRecord. It is now needed\n only to repeat the XLREAD_NEED_CONTINUATION state. It is naturally\n writable as a while loop.\n\n- Ensure record == NULL when XLogReadRecord returns other than\n XLREAD_SUCCESS. Previously the value was not changed in that\n case and it was not intuitive behavior for callers.\n\n- Renamed XLREAD_NEED_* to XLREAD_*.\n\n- Removed global variables readOff, readLen, readSegNo. (0003)\n\n Other similar variables like readFile/readSource are left alone\n as they are not common states of the page reader and not in\n XLogReaderState.\n\n\nThe attached are the current status; it is separated into two\nsignificant parts plus one for readability.\n\nv6-0001-Move-callback-call-from-ReadPageInternal-to-XLogR.patch:\n\n ReadPageInternal part of the patch. Moves callback calls from\n ReadPageInternal up to XLogReadRecord. Reworded commit message\n and fixed the bug in v5.\n\nv6-0002-Move-page-reader-out-of-XLogReadRecord.patch\n\n The remaining part of the main work. Eliminates callback calls\n from XLogReadRecord. Reworded commit message and fixed several\n bugs.\n\nv6-0003-Remove-globals-readSegNo-readOff-readLen.patch\n\n Separate patch to remove some globals that are duplicates of\n members of XLogReaderState.\n\nv6-0004-Change-policy-of-XLog-read-buffer-allocation.patch\n\n Separate patch to move page buffer allocation from\n XLogReaderAllocate to callers of XLogReadRecord.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 10 Sep 2019 17:40:54 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "Hello.\n\n709d003fbd hit this. Rebased.\n\nWorks fine but needs detailed verification and maybe further\ncosmetic fixes.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 25 Sep 2019 15:50:32 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "At Wed, 25 Sep 2019 15:50:32 +0900 (Tokyo Standard Time), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in <20190925.155032.13779064.horikyota.ntt@gmail.com>\n> 709d003fbd hit this. Rebased.\n\nOops! I found a silly silent bug: it doesn't verify the first\npage in new segments. Moreover it didn't load the first page in\na newly loaded segment.\n\n\n- Fixed a bug that it didn't load the first page once a new\n segment is loaded.\n\n- Fixed a bug that it didn't verify the first page of a segment\n if it is not the target page.\n\nSome fishy code remains, but I'll post the fixed\nversion once done.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 27 Sep 2019 12:07:26 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "Rebased.\n\nI intentionally left duplicate code in XLogNeedData but changed my\nmind and removed it. It makes the function smaller and clearer.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 24 Oct 2019 14:51:01 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "At Thu, 24 Oct 2019 14:51:01 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Rebased.\n\n0dc8ead463 hit this. Rebased.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 27 Nov 2019 12:09:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On Wed, Nov 27, 2019 at 12:09:23PM +0900, Kyotaro Horiguchi wrote:\n> 0dc8ead463 hit this. Rebased.\n\nNote: Moved to next CF.\n--\nMichael",
"msg_date": "Wed, 27 Nov 2019 12:57:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On 2019-Nov-27, Kyotaro Horiguchi wrote:\n\n> At Thu, 24 Oct 2019 14:51:01 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > Rebased.\n> \n> 0dc8ead463 hit this. Rebased.\n\nPlease review the pg_waldump.c hunks in 0001; they revert recent changes.\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 27 Nov 2019 01:11:40 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "At Wed, 27 Nov 2019 01:11:40 -0300, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> On 2019-Nov-27, Kyotaro Horiguchi wrote:\n> \n> > At Thu, 24 Oct 2019 14:51:01 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > > Rebased.\n> > \n> > 0dc8ead463 hit this. Rebased.\n> \n> Please review the pg_waldump.c hunks in 0001; they revert recent changes.\n\nUghhh! I'll check it. Thank you for noticing!!\n\nAt Wed, 27 Nov 2019 12:57:47 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> Note: Moved to next CF.\n\nThanks.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 28 Nov 2019 21:37:03 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "At Thu, 28 Nov 2019 21:37:03 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > > 0dc8ead463 hit this. Rebased.\n> > \n> > Please review the pg_waldump.c hunks in 0001; they revert recent changes.\n> \n> Ughhh! I'l check it. Thank you for noticing!!\n\nFixed that, re-rebased, and made small comment and cosmetic changes in this\nversion.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 29 Nov 2019 17:14:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On 29/11/2019 10:14, Kyotaro Horiguchi wrote:\n> At Thu, 28 Nov 2019 21:37:03 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>>>> 0dc8ead463 hit this. Rebased.\n>>>\n>>> Please review the pg_waldump.c hunks in 0001; they revert recent changes.\n>>\n>> Ughhh! I'l check it. Thank you for noticing!!\n> \n> Fixed that, re-rebased and small comment and cosmetic changes in this\n> version.\n\nThanks! I finally got around to look at this again. A lot has happened \nsince I last looked at this. Notably, commit 0dc8ead463 introduced \nanother callback function into the XLogReader interface. It's not \nentirely clear what the big picture with the new callback was and how \nthat interacts with the refactoring here. I'll have to spend some time \nto make myself familiar with those changes.\n\nEarlier in this thread, you wrote:\n> \n> - Changed calling convention of XLogReadRecord\n> \n> To make caller loop simple, XLogReadRecord now allows to specify\n> the same valid value while reading the a record. No longer need\n> to change lsn to invalid after the first call in the following\n> reader loop.\n> \n> while (XLogReadRecord(state, lsn, &record, &errormsg) == XLREAD_NEED_DATA)\n> {\n> if (!page_reader(state))\n> break;\n> }\n\nActually, I had also made a similar change in the patch version I posted \nat \nhttps://www.postgresql.org/message-id/4f7a5fad-fa04-b0a3-231b-56d200b646dc%40iki.fi. \nMaybe you missed that email? It looks like you didn't include any of the \nchanges from that in the patch series. In any case, clearly that idea \nhas some merit, since we both independently made a change in that \ncalling convention :-).\n\nI changed that by adding new function, XLogBeginRead(), that repositions \nthe reader, and removed the 'lsn' argument from XLogReadRecord() \naltogether. 
All callers except one in findLastCheckPoint() pg_rewind.c \npositioned the reader once, and then just read sequentially from there, \nso I think that API is more convenient. With that, the usage looks \nsomething like this:\n\nstate = XLogReaderAllocate (...)\nXLogBeginRead(state, start_lsn);\nwhile (ctx->reader->EndRecPtr < end_of_wal)\n{\n while (XLogReadRecord(state, &record, &errormsg) == XLREAD_NEED_DATA)\n {\n if (!page_reader(state))\n break;\n }\n /* do stuff */\n ....\n}\n\nActually, I propose that we make that change first, and then continue \nreviewing the rest of these patches. I think it's a more convenient \ninterface, independently of the callback refactoring. What do you think \nof the attached patch?\n\n- Heikki",
"msg_date": "Fri, 17 Jan 2020 20:14:36 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On 2020-Jan-17, Heikki Linnakangas wrote:\n\n> I changed that by adding new function, XLogBeginRead(), that repositions the\n> reader, and removed the 'lsn' argument from XLogReadRecord() altogether. All\n> callers except one in findLastCheckPoint() pg_rewind.c positioned the reader\n> once, and then just read sequentially from there, so I think that API is\n> more convenient.\n\nI like it. +1\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 17 Jan 2020 16:22:18 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "Thanks!\n\nAt Fri, 17 Jan 2020 20:14:36 +0200, Heikki Linnakangas <hlinnaka@iki.fi> wrote in \n> On 29/11/2019 10:14, Kyotaro Horiguchi wrote:\n> > At Thu, 28 Nov 2019 21:37:03 +0900 (JST), Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote in\n> >>>> 0dc8ead463 hit this. Rebased.\n> >>>\n> >>> Please review the pg_waldump.c hunks in 0001; they revert recent\n> >>> changes.\n> >>\n> >> Ughhh! I'l check it. Thank you for noticing!!\n> > Fixed that, re-rebased and small comment and cosmetic changes in this\n> > version.\n> \n> Thanks! I finally got around to look at this again. A lot has happened\n> since I last looked at this. Notably, commit 0dc8ead463 introduced\n> another callback function into the XLogReader interface. It's not\n> entirely clear what the big picture with the new callback was and how\n> that interacts with the refactoring here. I'll have to spend some time\n> to make myself familiar with those changes.\n> \n> Earlier in this thread, you wrote:\n> > - Changed calling convention of XLogReadRecord\n> > To make caller loop simple, XLogReadRecord now allows to specify\n> > the same valid value while reading the a record. No longer need\n> > to change lsn to invalid after the first call in the following\n> > reader loop.\n> > while (XLogReadRecord(state, lsn, &record, &errormsg) ==\n> > XLREAD_NEED_DATA)\n> > {\n> > if (!page_reader(state))\n> > break;\n> > }\n> \n> Actually, I had also made a similar change in the patch version I\n> posted at\n> https://www.postgresql.org/message-id/4f7a5fad-fa04-b0a3-231b-56d200b646dc%40iki.fi. Maybe\n> you missed that email? It looks like you didn't include any of the\n> changes from that in the patch series. In any case, clearly that idea\n> has some merit, since we both independently made a change in that\n> calling convention :-).\n\nI'm very sorry but I missed that...\n\n> Actually, I propose that we make that change first, and then continue\n> reviewing the rest of these patches. 
I think it's a more convenient\n> interface, independently of the callback refactoring. What do you\n> think of the attached patch?\n\nSeparating XLogBeginRead seems reasonable. The annoying recptr trick\nis no longer needed. In particular some logical-decoding stuff becomes\nfar cleaner with the patch.\n\nfetching_ckpt of ReadRecord is a bit annoying but that couldn't be\nmoved outside to, say, ReadCheckpointRecord in a clean way. (Anyway\nXLogBeginRead is not the place, of course).\n\nAs I looked through it, it looks good to me but give me some more time\nto look closer.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 20 Jan 2020 17:24:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "Hello.\n\nAt Mon, 20 Jan 2020 17:24:07 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Separating XLogBeginRead seems reasonable. The annoying recptr trick\n> is no longer needed. In particular some logical-decoding stuff become\n> far cleaner by the patch.\n> \n> fetching_ckpt of ReadRecord is a bit annoying but that coundn't be\n> moved outside to, say, ReadCheckpointRecord in a clean way. (Anyway\n> XLogBeginRead is not the place, of course).\n> \n> As I looked through it, it looks good to me but give me some more time\n> to look closer.\n\nIt seems to me that it works perfectly, and everything looks good to\nme except one point.\n\n-\t\t * In this case, the passed-in record pointer should already be\n+\t\t * In this case, EndRecPtr record pointer should already be\n\nI'm not confident but isn't the \"record pointer\" redundant? EndRecPtr\nseems to contain that meaning, and the other occurrences of that kind\nof variable name are not accompanied by that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 21 Jan 2020 19:45:10 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On Tue, 21 Jan 2020 at 18:46, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> Hello.\n>\n> At Mon, 20 Jan 2020 17:24:07 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > Separating XLogBeginRead seems reasonable. The annoying recptr trick\n> > is no longer needed. In particular some logical-decoding stuff become\n> > far cleaner by the patch.\n> >\n> > fetching_ckpt of ReadRecord is a bit annoying but that coundn't be\n> > moved outside to, say, ReadCheckpointRecord in a clean way. (Anyway\n> > XLogBeginRead is not the place, of course).\n> >\n> > As I looked through it, it looks good to me but give me some more time\n> > to look closer.\n>\n> It seems to me that it works perfectly, and everything looks good\n\nI seem to remember some considerable pain in this area when it came to\ntimeline switches. Especially with logical decoding and xlog records\nthat split across a segment boundary.\n\nMy first attempts at logical decoding timeline following appeared fine\nand passed tests until they were put under extended real world\nworkloads, at which point they exploded when they tripped corner cases\nlike this.\n\nI landed up writing ridiculous regression tests to trigger some of\nthese behaviours. I don't recall how many of them made it into the\nfinal patch to core but it's worth a look in the TAP test suite.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n",
"msg_date": "Tue, 21 Jan 2020 19:33:40 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On 21/01/2020 12:45, Kyotaro Horiguchi wrote:\n> At Mon, 20 Jan 2020 17:24:07 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>> Separating XLogBeginRead seems reasonable. The annoying recptr trick\n>> is no longer needed. In particular some logical-decoding stuff become\n>> far cleaner by the patch.\n>>\n>> fetching_ckpt of ReadRecord is a bit annoying but that coundn't be\n>> moved outside to, say, ReadCheckpointRecord in a clean way. (Anyway\n>> XLogBeginRead is not the place, of course).\n>>\n>> As I looked through it, it looks good to me but give me some more time\n>> to look closer.\n> \n> It seems to me that it works perfectly, and everything looks good to\n> me except one point.\n> \n> -\t\t * In this case, the passed-in record pointer should already be\n> +\t\t * In this case, EndRecPtr record pointer should already be\n> \n> I'm not confident but isn't the \"record pointer\" redundant? EndRecPtr\n> seems containing that meaning, and the other occurances of that kind\n> of variable names are not accompanied by that.\n\nI fixed that, fixed the XLogFindNextRecord() function so that it \npositions the reader like XLogBeginRead() does (I had already adjusted \nthe comments to say that, but the function didn't actually do it), and \npushed. Thanks for the review!\n\nI'll continue looking at the callback API changes on Monday.\n\n- Heikki\n\n\n",
"msg_date": "Sun, 26 Jan 2020 11:40:05 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On 21/01/2020 13:33, Craig Ringer wrote:\n> On Tue, 21 Jan 2020 at 18:46, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>> It seems to me that it works perfectly, and everything looks good\n> \n> I seem to remember some considerable pain in this area when it came to\n> timeline switches. Especially with logical decoding and xlog records\n> that split across a segment boundary.\n> \n> My first attempts at logical decoding timeline following appeared fine\n> and passed tests until they were put under extended real world\n> workloads, at which point they exploded when they tripped corner cases\n> like this.\n> \n> I landed up writing ridiculous regression tests to trigger some of\n> these behaviours. I don't recall how many of them made it into the\n> final patch to core but it's worth a look in the TAP test suite.\n\nYeah, the timeline switching stuff is complicated. The small \nXLogBeginRead() patch isn't really affected, but it's definitely \nsomething to watch out for in the callback API patch. If you happen to \nhave any extra ridiculous tests still lying around, would be nice to \nlook at them.\n\n- Heikki\n\n\n",
"msg_date": "Sun, 26 Jan 2020 11:42:32 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "At Tue, 21 Jan 2020 19:45:10 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Hello.\n> \n> At Mon, 20 Jan 2020 17:24:07 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > Separating XLogBeginRead seems reasonable. The annoying recptr trick\n> > is no longer needed. In particular some logical-decoding stuff become\n> > far cleaner by the patch.\n> > \n> > fetching_ckpt of ReadRecord is a bit annoying but that coundn't be\n> > moved outside to, say, ReadCheckpointRecord in a clean way. (Anyway\n> > XLogBeginRead is not the place, of course).\n> > \n> > As I looked through it, it looks good to me but give me some more time\n> > to look closer.\n> \n> It seems to me that it works perfectly, and everything looks good to\n> me except one point.\n> \n> -\t\t * In this case, the passed-in record pointer should already be\n> +\t\t * In this case, EndRecPtr record pointer should already be\n> \n> I'm not confident but isn't the \"record pointer\" redundant? EndRecPtr\n> seems containing that meaning, and the other occurances of that kind\n> of variable names are not accompanied by that.\n\nI rebased this on the 38a957316d and its follow-on patch 30012a04a6.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 28 Jan 2020 21:20:20 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "5d0c2d5eba shot out this.\n\nRebased.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 24 Mar 2020 18:24:13 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "I found this conficts with a7e8ece41cf7a96d8a9c4c037cdfef304d950831.\nRebased on it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 21 Apr 2020 17:04:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "At Tue, 21 Apr 2020 17:04:27 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n>\n\nMmm. The message body seems disappearing for uncertain reason.\n\ncd12323440 conflicts with this. Rebased.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 22 Apr 2020 10:12:46 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "At Wed, 22 Apr 2020 10:12:46 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> cd12323440 conflicts with this. Rebased.\n\nb060dbe000 is conflicting. I gave up isolating XLogOpenSegment from\nxlogreader.c, since the two are tightly coupled than I thought.\n\nThis patch removes all the three callbacks (open/close/page_read) in\nXL_ROUTINE from XLogReaderState. It only has \"cleanup\" callback\ninstead.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 26 May 2020 16:40:02 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On Tue, 26 May 2020, 15:40 Kyotaro Horiguchi, <horikyota.ntt@gmail.com>\nwrote:\n\n>\n> This patch removes all the three callbacks (open/close/page_read) in\n> XL_ROUTINE from XLogReaderState. It only has \"cleanup\" callback\n> instead.\n>\n\nI actually have a use in mind for these callbacks - to support reading WAL\nfor logical decoding from a restore_command like tool. So we can archive\nwal when it's no longer required for recovery and reduce the risk of\nfilling pg_wal if a standby lags.\n\nI don't object to your cleanup at all. I'd like it to be properly\npluggable, whereas right now it has hard coded callbacks that differ for\nlittle reason.\n\nJust noting that the idea of a callback here isn't a bad thing.\n\n>\n\nOn Tue, 26 May 2020, 15:40 Kyotaro Horiguchi, <horikyota.ntt@gmail.com> wrote:\n\nThis patch removes all the three callbacks (open/close/page_read) in\nXL_ROUTINE from XLogReaderState. It only has \"cleanup\" callback\ninstead.I actually have a use in mind for these callbacks - to support reading WAL for logical decoding from a restore_command like tool. So we can archive wal when it's no longer required for recovery and reduce the risk of filling pg_wal if a standby lags.I don't object to your cleanup at all. I'd like it to be properly pluggable, whereas right now it has hard coded callbacks that differ for little reason.Just noting that the idea of a callback here isn't a bad thing.",
"msg_date": "Tue, 26 May 2020 20:17:47 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "Thank you for the comment.\n\nAt Tue, 26 May 2020 20:17:47 +0800, Craig Ringer <craig@2ndquadrant.com> wrote in \n> On Tue, 26 May 2020, 15:40 Kyotaro Horiguchi, <horikyota.ntt@gmail.com>\n> wrote:\n> \n> >\n> > This patch removes all the three callbacks (open/close/page_read) in\n> > XL_ROUTINE from XLogReaderState. It only has \"cleanup\" callback\n> > instead.\n> >\n> \n> I actually have a use in mind for these callbacks - to support reading WAL\n> for logical decoding from a restore_command like tool. So we can archive\n> wal when it's no longer required for recovery and reduce the risk of\n> filling pg_wal if a standby lags.\n> \n> I don't object to your cleanup at all. I'd like it to be properly\n> pluggable, whereas right now it has hard coded callbacks that differ for\n> little reason.\n>\n> Just noting that the idea of a callback here isn't a bad thing.\n\nI agree that plugin is generally not bad as far as it were standalone,\nthat is, as far as it is not tightly cooperative with the opposite\nside of the caller of it. However, actually it seems to me that the\nxlogreader plugins are too-deeply coupled with the callers of\nxlogreader in many aspects involving error-handling and\nretry-mechanism.\n\nAs Alvaro mentioned we may have page-decrypt callback shortly as\nanother callback of xlogreader. Xlogreader could be more messy by\nintroducing such plugins, that actually have no business with\nxlogreader at all.\n\nEvidently xlogreader can be a bottom-end module (that is, a module\nthat doesn't depend on another module). It is I think a good thing to\nisolate xlogreader from the changes of its callers and correlated\nplugins.\n\nA major problem of this patch is that the state machine used there\nmight be another mess here, though.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 27 May 2020 10:06:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "cfbot is complaining as this is no longer applicable. Rebased.\n\nIn v14, some reference to XLogReaderState parameter to read_pages\nfunctions are accidentally replaced by the reference to the global\nvariable xlogreader. Fixed it, too.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 02 Jul 2020 13:53:30 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "Hi,\n\nI applied your v15 patchset to master\ned2c7f65bd9f15f8f7cd21ad61602f983b1e72e9. Here are three feedback points\nfor you:\n\n\n= 1. Build error when WAL_DEBUG is defined manually =\nHow to reproduce:\n\n $ sed -i -E -e 's|^/\\* #define WAL_DEBUG \\*/$|#define WAL_DEBUG|'\nsrc/include/pg_config_manual.h\n $ ./configure && make\n\nExpected: PostgreSQL is successfully made.\nActual: I got the following make error:\n\n>>>>>>>>\ngcc -Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla -Wendif-labels\n-Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n-Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard\n-Wno-format-truncation -Wno-stringop-truncation -O2\n-I../../../../src/include -D_GNU_SOURCE -c -o xlog.o xlog.c\nIn file included from /usr/include/x86_64-linux-gnu/bits/types/stack_t.h:23,\n from /usr/include/signal.h:303,\n from ../../../../src/include/storage/sinval.h:17,\n from ../../../../src/include/access/xact.h:22,\n from ../../../../src/include/access/twophase.h:17,\n from xlog.c:33:\nxlog.c: In function ‘XLogInsertRecord’:\nxlog.c:1219:56: error: called object is not a function or function pointer\n 1219 | debug_reader = XLogReaderAllocate(wal_segment_size, NULL NULL);\n | ^~~~\nxlog.c:1219:19: error: too few arguments to function ‘XLogReaderAllocate’\n 1219 | debug_reader = XLogReaderAllocate(wal_segment_size, NULL NULL);\n | ^~~~~~~~~~~~~~~~~~\nIn file included from ../../../../src/include/access/clog.h:14,\n from xlog.c:25:\n../../../../src/include/access/xlogreader.h:243:25: note: declared here\n 243 | extern XLogReaderState *XLogReaderAllocate(int wal_segment_size,\n | ^~~~~~~~~~~~~~~~~~\nmake[4]: *** [<builtin>: xlog.o] Error 1\n<<<<<<<<\n\nThe following chunk in 0002 seems to be the cause of the error. 
There is\nno comma between two NULLs.\n\n>>>>>>>>\ndiff --git a/src/backend/access/transam/xlog.c\nb/src/backend/access/transam/xlog.c\nindex e570e56a24..f9b0108602 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n(..snipped..)\n@@ -1225,8 +1218,7 @@ XLogInsertRecord(XLogRecData *rdata,\n appendBinaryStringInfo(&recordBuf, rdata->data, rdata->len);\n\n if (!debug_reader)\n- debug_reader = XLogReaderAllocate(wal_segment_size, NULL,\n- XL_ROUTINE(), NULL);\n+ debug_reader = XLogReaderAllocate(wal_segment_size, NULL NULL);\n\n if (!debug_reader)\n {\n<<<<<<<<\n\n\n= 2. readBuf allocation in XLogReaderAllocate =\nAFAIU, not XLogReaderAllocate() itself but its caller is now responsible\nfor allocating XLogReaderState->readBuf. However, the following code still\nremains in src/backend/access/transam/xlogreader.c:\n\n>>>>>>>>\n 74 XLogReaderState *\n 75 XLogReaderAllocate(int wal_segment_size, const char *waldir,\n 76 WALSegmentCleanupCB cleanup_cb)\n 77 {\n :\n 98 state->readBuf = (char *) palloc_extended(XLOG_BLCKSZ,\n 99 MCXT_ALLOC_NO_OOM);\n<<<<<<<<\n\nIs this okay?\n\n\n= 3. XLOG_FROM_ANY assigned to global readSource =\nRegarding the following chunk in 0003:\n\n>>>>>>>>\ndiff --git a/src/backend/access/transam/xlog.c\nb/src/backend/access/transam/xlog.c\nindex 6b42d9015f..bcb4ef270f 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -804,18 +804,14 @@ static XLogSegNo openLogSegNo = 0;\n * These variables are used similarly to the ones above, but for reading\n * the XLOG. Note, however, that readOff generally represents the offset\n * of the page just read, not the seek position of the FD itself, which\n- * will be just past that page. readLen indicates how much of the current\n- * page has been read into readBuf, and readSource indicates where we got\n- * the currently open file from.\n+ * will be just past that page. 
readSource indicates where we got the\n+ * currently open file from.\n * Note: we could use Reserve/ReleaseExternalFD to track consumption of\n * this FD too; but it doesn't currently seem worthwhile, since the XLOG is\n * not read by general-purpose sessions.\n */\n static int readFile = -1;\n-static XLogSegNo readSegNo = 0;\n-static uint32 readOff = 0;\n-static uint32 readLen = 0;\n-static XLogSource readSource = XLOG_FROM_ANY;\n+static XLogSource readSource = 0; /* XLOG_FROM_* code */\n\n /*\n * Keeps track of which source we're currently reading from. This is\n<<<<<<<<\n\nI think it is better to keep the line \"static XLogSource readSource =\nXLOG_FROM_ANY;\". XLOG_FROM_ANY is already defined as 0 in\nsrc/backend/access/transam/xlog.c.\n\n\nRegards,\nTakashi\n\n\n\n2020年7月2日(木) 13:53 Kyotaro Horiguchi <horikyota.ntt@gmail.com>:\n\n> cfbot is complaining as this is no longer applicable. Rebased.\n>\n> In v14, some reference to XLogReaderState parameter to read_pages\n> functions are accidentally replaced by the reference to the global\n> variable xlogreader. Fixed it, too.\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>\n\nHi,I applied your v15 patchset to master ed2c7f65bd9f15f8f7cd21ad61602f983b1e72e9. Here are three feedback points for you:= 1. 
Build error when WAL_DEBUG is defined manually =How to reproduce: $ sed -i -E -e 's|^/\\* #define WAL_DEBUG \\*/$|#define WAL_DEBUG|' src/include/pg_config_manual.h $ ./configure && makeExpected: PostgreSQL is successfully made.Actual: I got the following make error:>>>>>>>>gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../../../src/include -D_GNU_SOURCE -c -o xlog.o xlog.cIn file included from /usr/include/x86_64-linux-gnu/bits/types/stack_t.h:23, from /usr/include/signal.h:303, from ../../../../src/include/storage/sinval.h:17, from ../../../../src/include/access/xact.h:22, from ../../../../src/include/access/twophase.h:17, from xlog.c:33:xlog.c: In function ‘XLogInsertRecord’:xlog.c:1219:56: error: called object is not a function or function pointer 1219 | debug_reader = XLogReaderAllocate(wal_segment_size, NULL NULL); | ^~~~xlog.c:1219:19: error: too few arguments to function ‘XLogReaderAllocate’ 1219 | debug_reader = XLogReaderAllocate(wal_segment_size, NULL NULL); | ^~~~~~~~~~~~~~~~~~In file included from ../../../../src/include/access/clog.h:14, from xlog.c:25:../../../../src/include/access/xlogreader.h:243:25: note: declared here 243 | extern XLogReaderState *XLogReaderAllocate(int wal_segment_size, | ^~~~~~~~~~~~~~~~~~make[4]: *** [<builtin>: xlog.o] Error 1<<<<<<<<The following chunk in 0002 seems to be the cause of the error. 
There is no comma between two NULLs.>>>>>>>>diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.cindex e570e56a24..f9b0108602 100644--- a/src/backend/access/transam/xlog.c+++ b/src/backend/access/transam/xlog.c(..snipped..)@@ -1225,8 +1218,7 @@ XLogInsertRecord(XLogRecData *rdata, appendBinaryStringInfo(&recordBuf, rdata->data, rdata->len); if (!debug_reader)- debug_reader = XLogReaderAllocate(wal_segment_size, NULL,- XL_ROUTINE(), NULL);+ debug_reader = XLogReaderAllocate(wal_segment_size, NULL NULL); if (!debug_reader) {<<<<<<<<= 2. readBuf allocation in XLogReaderAllocate =AFAIU, not XLogReaderAllocate() itself but its caller is now responsible for allocating XLogReaderState->readBuf. However, the following code still remains in src/backend/access/transam/xlogreader.c:>>>>>>>> 74 XLogReaderState * 75 XLogReaderAllocate(int wal_segment_size, const char *waldir, 76 WALSegmentCleanupCB cleanup_cb) 77 { : 98 state->readBuf = (char *) palloc_extended(XLOG_BLCKSZ, 99 MCXT_ALLOC_NO_OOM);<<<<<<<<Is this okay?= 3. XLOG_FROM_ANY assigned to global readSource =Regarding the following chunk in 0003:>>>>>>>>diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.cindex 6b42d9015f..bcb4ef270f 100644--- a/src/backend/access/transam/xlog.c+++ b/src/backend/access/transam/xlog.c@@ -804,18 +804,14 @@ static XLogSegNo openLogSegNo = 0; * These variables are used similarly to the ones above, but for reading * the XLOG. Note, however, that readOff generally represents the offset * of the page just read, not the seek position of the FD itself, which- * will be just past that page. readLen indicates how much of the current- * page has been read into readBuf, and readSource indicates where we got- * the currently open file from.+ * will be just past that page. readSource indicates where we got the+ * currently open file from. 
* Note: we could use Reserve/ReleaseExternalFD to track consumption of * this FD too; but it doesn't currently seem worthwhile, since the XLOG is * not read by general-purpose sessions. */ static int readFile = -1;-static XLogSegNo readSegNo = 0;-static uint32 readOff = 0;-static uint32 readLen = 0;-static XLogSource readSource = XLOG_FROM_ANY;+static XLogSource readSource = 0; /* XLOG_FROM_* code */ /* * Keeps track of which source we're currently reading from. This is<<<<<<<<I think it is better to keep the line \"static XLogSource readSource = XLOG_FROM_ANY;\". XLOG_FROM_ANY is already defined as 0 in src/backend/access/transam/xlog.c.Regards,Takashi2020年7月2日(木) 13:53 Kyotaro Horiguchi <horikyota.ntt@gmail.com>:cfbot is complaining as this is no longer applicable. Rebased.\n\nIn v14, some reference to XLogReaderState parameter to read_pages\nfunctions are accidentally replaced by the reference to the global\nvariable xlogreader. Fixed it, too.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n-- Takashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Fri, 17 Jul 2020 14:14:44 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On Fri, Jul 17, 2020 at 02:14:44PM +0900, Takashi Menjo wrote:\n> I applied your v15 patchset to master\n> ed2c7f65bd9f15f8f7cd21ad61602f983b1e72e9. Here are three feedback points\n> for you:\n\nAnd the CF bot complains as well here. Horiguchi-san, this patch is\nwaiting on author for a couple of weeks now. Could you rebase the\npatch and comment on the points raised upthread?\n--\nMichael",
"msg_date": "Mon, 7 Sep 2020 11:51:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "Thank you for the comment, Menjo-san, and noticing me of that, Michael.\n\nSorry for late reply.\n\nAt Fri, 17 Jul 2020 14:14:44 +0900, Takashi Menjo <takashi.menjo@gmail.com> wrote in \n> Hi,\n> \n> I applied your v15 patchset to master\n> ed2c7f65bd9f15f8f7cd21ad61602f983b1e72e9. Here are three feedback points\n> for you:\n> \n> \n> = 1. Build error when WAL_DEBUG is defined manually =\n> How to reproduce:\n> \n> $ sed -i -E -e 's|^/\\* #define WAL_DEBUG \\*/$|#define WAL_DEBUG|'\n> src/include/pg_config_manual.h\n> $ ./configure && make\n> \n> Expected: PostgreSQL is successfully made.\n> Actual: I got the following make error:\n> \n> >>>>>>>>\n> gcc -Wall -Wmissing-prototypes -Wpointer-arith\n> -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n> -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n> -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard\n> -Wno-format-truncation -Wno-stringop-truncation -O2\n> -I../../../../src/include -D_GNU_SOURCE -c -o xlog.o xlog.c\n> In file included from /usr/include/x86_64-linux-gnu/bits/types/stack_t.h:23,\n> from /usr/include/signal.h:303,\n> from ../../../../src/include/storage/sinval.h:17,\n> from ../../../../src/include/access/xact.h:22,\n> from ../../../../src/include/access/twophase.h:17,\n> from xlog.c:33:\n> xlog.c: In function ‘XLogInsertRecord’:\n> xlog.c:1219:56: error: called object is not a function or function pointer\n> 1219 | debug_reader = XLogReaderAllocate(wal_segment_size, NULL NULL);\n> | ^~~~\n> xlog.c:1219:19: error: too few arguments to function ‘XLogReaderAllocate’\n> 1219 | debug_reader = XLogReaderAllocate(wal_segment_size, NULL NULL);\n> | ^~~~~~~~~~~~~~~~~~\n> In file included from ../../../../src/include/access/clog.h:14,\n> from xlog.c:25:\n> ../../../../src/include/access/xlogreader.h:243:25: note: declared here\n> 243 | extern XLogReaderState *XLogReaderAllocate(int wal_segment_size,\n> | ^~~~~~~~~~~~~~~~~~\n> make[4]: *** 
[<builtin>: xlog.o] Error 1\n> <<<<<<<<\n> \n> The following chunk in 0002 seems to be the cause of the error. There is\n> no comma between two NULLs.\n> \n> >>>>>>>>\n> diff --git a/src/backend/access/transam/xlog.c\n> b/src/backend/access/transam/xlog.c\n> index e570e56a24..f9b0108602 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> (..snipped..)\n> @@ -1225,8 +1218,7 @@ XLogInsertRecord(XLogRecData *rdata,\n> appendBinaryStringInfo(&recordBuf, rdata->data, rdata->len);\n> \n> if (!debug_reader)\n> - debug_reader = XLogReaderAllocate(wal_segment_size, NULL,\n> - XL_ROUTINE(), NULL);\n> + debug_reader = XLogReaderAllocate(wal_segment_size, NULL NULL);\n> \n> if (!debug_reader)\n> {\n> <<<<<<<<\n> \n> \n> = 2. readBuf allocation in XLogReaderAllocate =\n> AFAIU, not XLogReaderAllocate() itself but its caller is now responsible\n> for allocating XLogReaderState->readBuf. However, the following code still\n> remains in src/backend/access/transam/xlogreader.c:\n> \n> >>>>>>>>\n> 74 XLogReaderState *\n> 75 XLogReaderAllocate(int wal_segment_size, const char *waldir,\n> 76 WALSegmentCleanupCB cleanup_cb)\n> 77 {\n> :\n> 98 state->readBuf = (char *) palloc_extended(XLOG_BLCKSZ,\n> 99 MCXT_ALLOC_NO_OOM);\n> <<<<<<<<\n> \n> Is this okay?\n> \n> \n> = 3. XLOG_FROM_ANY assigned to global readSource =\n> Regarding the following chunk in 0003:\n> \n> >>>>>>>>\n> diff --git a/src/backend/access/transam/xlog.c\n> b/src/backend/access/transam/xlog.c\n> index 6b42d9015f..bcb4ef270f 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -804,18 +804,14 @@ static XLogSegNo openLogSegNo = 0;\n> * These variables are used similarly to the ones above, but for reading\n> * the XLOG. Note, however, that readOff generally represents the offset\n> * of the page just read, not the seek position of the FD itself, which\n> - * will be just past that page. 
readLen indicates how much of the current\n> - * page has been read into readBuf, and readSource indicates where we got\n> - * the currently open file from.\n> + * will be just past that page. readSource indicates where we got the\n> + * currently open file from.\n> * Note: we could use Reserve/ReleaseExternalFD to track consumption of\n> * this FD too; but it doesn't currently seem worthwhile, since the XLOG is\n> * not read by general-purpose sessions.\n> */\n> static int readFile = -1;\n> -static XLogSegNo readSegNo = 0;\n> -static uint32 readOff = 0;\n> -static uint32 readLen = 0;\n> -static XLogSource readSource = XLOG_FROM_ANY;\n> +static XLogSource readSource = 0; /* XLOG_FROM_* code */\n> \n> /*\n> * Keeps track of which source we're currently reading from. This is\n> <<<<<<<<\n> \n> I think it is better to keep the line \"static XLogSource readSource =\n> XLOG_FROM_ANY;\". XLOG_FROM_ANY is already defined as 0 in\n> src/backend/access/transam/xlog.c.\n> \n> \n> Regards,\n> Takashi\n> \n> \n> \n> 2020年7月2日(木) 13:53 Kyotaro Horiguchi <horikyota.ntt@gmail.com>:\n> \n> > cfbot is complaining as this is no longer applicable. Rebased.\n> >\n> > In v14, some reference to XLogReaderState parameter to read_pages\n> > functions are accidentally replaced by the reference to the global\n> > variable xlogreader. Fixed it, too.\n> >\n> > regards.\n> >\n> > --\n> > Kyotaro Horiguchi\n> > NTT Open Source Software Center\n> >\n> \n> \n> -- \n> Takashi Menjo <takashi.menjo@gmail.com>\n\n\n",
"msg_date": "Tue, 08 Sep 2020 11:56:29 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "At Tue, 08 Sep 2020 11:56:29 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Thank you for the comment, Menjo-san, and noticing me of that, Michael.\n\nI found why the message I was writing was gone from the draft folder..\n\nSorry for the garbage.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 08 Sep 2020 13:09:36 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "Thank you for the comment and sorry for late reply.\n\nAt Fri, 17 Jul 2020 14:14:44 +0900, Takashi Menjo <takashi.menjo@gmail.com> wrote in \n> Hi,\n> \n> I applied your v15 patchset to master\n> ed2c7f65bd9f15f8f7cd21ad61602f983b1e72e9. Here are three feedback points\n> for you:\n> \n> = 1. Build error when WAL_DEBUG is defined manually =\n..\n> Expected: PostgreSQL is successfully made.\n> Actual: I got the following make error:\n...\n> 1219 | debug_reader = XLogReaderAllocate(wal_segment_size, NULL NULL);\n\nAh, I completely forgot about WAL_DEBUG paths. Fixed.\n\n> = 2. readBuf allocation in XLogReaderAllocate =\n> AFAIU, not XLogReaderAllocate() itself but its caller is now responsible\n> for allocating XLogReaderState->readBuf. However, the following code still\n> remains in src/backend/access/transam/xlogreader.c:\n> \n> >>>>>>>>\n> 74 XLogReaderState *\n> 75 XLogReaderAllocate(int wal_segment_size, const char *waldir,\n> 76 WALSegmentCleanupCB cleanup_cb)\n> 77 {\n> :\n> 98 state->readBuf = (char *) palloc_extended(XLOG_BLCKSZ,\n> 99 MCXT_ALLOC_NO_OOM);\n> <<<<<<<<\n> \n> Is this okay?\n\nOops! That's silly. However, I put a rethink on this. The reason of\nthe moving of responsibility comes from the fact that the actual\nsubject that fills-in the buffer is the callers of xlogreader, who\nknows its size. In that light it's quite strange that xlogreader\nworks based on the fixed size of XLOG_BLCKSZ. I don't think it is\nuseful just now, but I changed 0004 so that XLOG_BLCKSZ is eliminated\nfrom xlogreader.c. Buffer allocation is restored to\nXLogReaderAllocate.\n\n(But, I'm not sure it's worth doing..)\n\n> = 3. XLOG_FROM_ANY assigned to global readSource =\n> Regarding the following chunk in 0003:\n...\n> -static XLogSource readSource = XLOG_FROM_ANY;\n> +static XLogSource readSource = 0; /* XLOG_FROM_* code */\n> \n> I think it is better to keep the line \"static XLogSource readSource =\n> XLOG_FROM_ANY;\". 
XLOG_FROM_ANY is already defined as 0 in\n> src/backend/access/transam/xlog.c.\n\nThat seems to be a mistake while past rebasding. XLOG_FROM_ANY is the\nright thing to use here.\n\nThe attached is the rebased, and fixed version.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 08 Sep 2020 16:35:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "A recent commot about LSN_FORMAT_ARGS conflicted this.\nJust rebased.\n\nregards\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 04 Mar 2021 11:28:53 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 3:29 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> A recent commot about LSN_FORMAT_ARGS conflicted this.\n> Just rebased.\n\nFYI I've been looking at this, and I think it's a very nice\nimprovement. I'll post some review comments and a rebase shortly.\n\n\n",
"msg_date": "Wed, 31 Mar 2021 10:00:02 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "At Wed, 31 Mar 2021 10:00:02 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Thu, Mar 4, 2021 at 3:29 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > A recent commot about LSN_FORMAT_ARGS conflicted this.\n> > Just rebased.\n> \n> FYI I've been looking at this, and I think it's a very nice\n> improvement. I'll post some review comments and a rebase shortly.\n\nThanks for looking at this!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 31 Mar 2021 15:17:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On Wed, Mar 31, 2021 at 7:17 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Wed, 31 Mar 2021 10:00:02 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in\n> > On Thu, Mar 4, 2021 at 3:29 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > A recent commot about LSN_FORMAT_ARGS conflicted this.\n> > > Just rebased.\n> >\n> > FYI I've been looking at this, and I think it's a very nice\n> > improvement. I'll post some review comments and a rebase shortly.\n>\n> Thanks for looking at this!\n\nI rebased and pgindent-ed the first three patches and did some\ntesting. I think it looks pretty good, though I still need to check\nthe code coverage when running the recovery tests. There are three\ncompiler warnings from GCC when not using --enable-cassert, including\nuninitialized variables: pageHeader and targetPagePtr. It looks like\nthey could be silenced as follows, or maybe you see a better way?\n\n- XLogPageHeader pageHeader;\n+ XLogPageHeader pageHeader = NULL;\n uint32 pageHeaderSize;\n- XLogRecPtr targetPagePtr;\n+ XLogRecPtr targetPagePtr =\nInvalidXLogRecPtr;\n\nTo summarise the patches:\n\n0001 + 0002 get rid of the callback interface and replace it with a\nstate machine, making it the client's problem to supply data when it\nreturns XLREAD_NEED_DATA. I found this interface nicer to work with,\nfor my WAL decoding buffer patch (CF 2410), and I understand that the\nencryption patch set can also benefit from it. Certainly when I\nrebased my project on this patch set, I prefered the result.\n\n0003 is nice global variable cleanup.\n\nI haven't looked at 0004.",
"msg_date": "Wed, 7 Apr 2021 05:09:53 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 5:09 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> 0001 + 0002 get rid of the callback interface and replace it with a\n> state machine, making it the client's problem to supply data when it\n> returns XLREAD_NEED_DATA. I found this interface nicer to work with,\n> for my WAL decoding buffer patch (CF 2410), and I understand that the\n> encryption patch set can also benefit from it. Certainly when I\n> rebased my project on this patch set, I prefered the result.\n\n+ state->readLen = pageHeaderSize;\n\nThis variable is used for the XLogPageReader to say how much data it\nwants, but also for the caller to indicate how much data is loaded.\nWouldn't it be better to split this into two variables: bytesWanted\nand bytesAvailable? (I admit that I spent a whole afternoon debugging\nafter confusing myself about that, when rebasing my WAL readahead\npatch recently).\n\nI wonder if it would be better to have the client code access these\nvalues through functions (even if they just access the variables in a\nstatic inline function), to create a bit more separation? Something\nlike XLogReaderGetWanted(&page_lsn, &bytes_wanted), and then\nXLogReaderSetAvailable(state, 42)? Just an idea.\n\n\n",
"msg_date": "Wed, 7 Apr 2021 10:57:04 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "Hi,\n\nOn 2021-04-07 05:09:53 +1200, Thomas Munro wrote:\n> From 560cdfa444a3b05a0e6b8054f3cfeadf56e059fc Mon Sep 17 00:00:00 2001\n> From: Kyotaro Horiguchi <horiguchi.kyotaro@lab.ntt.co.jp>\n> Date: Thu, 5 Sep 2019 20:21:55 +0900\n> Subject: [PATCH v18 1/3] Move callback-call from ReadPageInternal to\n> XLogReadRecord.\n> \n> The current WAL record reader reads page data using a call back\n> function. Redesign the interface so that it asks the caller for more\n> data when required. This model works better for proposed projects that\n> encryption, prefetching and other new features that would require\n> extending the callback interface for each case.\n> \n> As the first step of that change, this patch moves the page reader\n> function out of ReadPageInternal(), then the remaining tasks of the\n> function are taken over by the new function XLogNeedData().\n\n> -static int\n> +static bool\n> XLogPageRead(XLogReaderState *xlogreader, XLogRecPtr targetPagePtr, int reqLen,\n> \t\t\t XLogRecPtr targetRecPtr, char *readBuf)\n> {\n> @@ -12170,7 +12169,8 @@ retry:\n> \t\t\treadLen = 0;\n> \t\t\treadSource = XLOG_FROM_ANY;\n> \n> -\t\t\treturn -1;\n> +\t\t\txlogreader->readLen = -1;\n> +\t\t\treturn false;\n> \t\t}\n> \t}\n\nIt seems a bit weird to assign to XlogReaderState->readLen inside the\ncallbacks. I first thought it was just a transient state, but it's\nnot. I think it'd be good to wrap the xlogreader->readLen assignment an\nan inline function. 
That way we can add more asserts etc over time.\n\n\n\n> -/* pg_waldump's XLogReaderRoutine->page_read callback */\n> +/*\n> + * pg_waldump's WAL page rader, also used as page_read callback for\n> + * XLogFindNextRecord\n> + */\n>  static bool\n> -WALDumpReadPage(XLogReaderState *state, XLogRecPtr targetPagePtr, int reqLen,\n> -\t\t\t\tXLogRecPtr targetPtr, char *readBuff)\n> +WALDumpReadPage(XLogReaderState *state, void *priv)\n>  {\n> -\tXLogDumpPrivate *private = state->private_data;\n> +\tXLogRecPtr\ttargetPagePtr = state->readPagePtr;\n> +\tint\t\t\treqLen\t\t = state->readLen;\n> +\tchar\t *readBuff\t = state->readBuf;\n> +\tXLogDumpPrivate *private = (XLogDumpPrivate *) priv;\n\nIt seems weird to pass a void *priv to a function that now doesn't at\nall need the type punning anymore.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Apr 2021 16:09:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On 2021-Apr-07, Thomas Munro wrote:\n\n> I wonder if it would be better to have the client code access these\n> values through functions (even if they just access the variables in a\n> static inline function), to create a bit more separation?  Something\n> like XLogReaderGetWanted(&page_lsn, &bytes_wanted), and then\n> XLogReaderSetAvailable(state, 42)?  Just an idea.\n\nI think more opacity is good in this area, generally speaking.  There\nare way too many globals, and they interact in nontrivial ways across\nthe codebase.  Just look at the ThisTimeLineID recent disaster.  I\ndon't have this patch sufficiently paged-in to say that bytes_wanted/\nbytes_available is precisely the thing we need, but if it makes for a\ncleaner interface, I'm for it.  This module keeps some state inside\nitself, and other parts of the state are in its users; that's not good,\nand any cleanup on that is welcome.\n\nBTW it's funny that after these patches, \"xlogreader\" no longer reads\nanything.  It's more an \"xlog interpreter\" -- the piece of code that\nsplits individual WAL records from a stream of WAL bytes that's caller's\nresponsibility to obtain somehow.  But (and, again, I haven't read this\npatch recently) it still offers pieces that support a reader, in\naddition to its main interface as the interpreter.  Maybe it's not a\ntotally stupid idea to split it in even more different files.\n\n-- \nÁlvaro Herrera       39°49'30\"S 73°17'W\n\n\n",
"msg_date": "Tue, 6 Apr 2021 19:18:50 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 11:18 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> BTW it's funny that after these patches, \"xlogreader\" no longer reads\n> anything.  It's more an \"xlog interpreter\" -- the piece of code that\n> splits individual WAL records from a stream of WAL bytes that's caller's\n> responsibility to obtain somehow.  But (and, again, I haven't read this\n> patch recently) it still offers pieces that support a reader, in\n> addition to its main interface as the interpreter.  Maybe it's not a\n> totally stupid idea to split it in even more different files.\n\nYeah, I like \"decoder\", and it's already called that in some places...\n\n\n",
"msg_date": "Wed, 7 Apr 2021 11:37:11 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On 2021-Apr-07, Thomas Munro wrote:\n\n> On Wed, Apr 7, 2021 at 11:18 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > BTW it's funny that after these patches, \"xlogreader\" no longer reads\n> > anything.  It's more an \"xlog interpreter\" -- the piece of code that\n> > splits individual WAL records from a stream of WAL bytes that's caller's\n> > responsibility to obtain somehow.  But (and, again, I haven't read this\n> > patch recently) it still offers pieces that support a reader, in\n> > addition to its main interface as the interpreter.  Maybe it's not a\n> > totally stupid idea to split it in even more different files.\n> \n> Yeah, I like \"decoder\", and it's already called that in some places...\n\nYeah, that works ...\n\n-- \nÁlvaro Herrera       39°49'30\"S 73°17'W\n\"Los trabajadores menos efectivos son sistematicamente llevados al lugar\ndonde pueden hacer el menor daño posible: gerencia.\"  (El principio Dilbert)\n\n\n",
"msg_date": "Tue, 6 Apr 2021 19:49:07 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "At Tue, 6 Apr 2021 16:09:55 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> > XLogPageRead(XLogReaderState *xlogreader, XLogRecPtr targetPagePtr, int reqLen,\n> > \t\t\t XLogRecPtr targetRecPtr, char *readBuf)\n> > {\n> > @@ -12170,7 +12169,8 @@ retry:\n> > \t\t\treadLen = 0;\n> > \t\t\treadSource = XLOG_FROM_ANY;\n> > \n> > -\t\t\treturn -1;\n> > +\t\t\txlogreader->readLen = -1;\n> > +\t\t\treturn false;\n> > \t\t}\n> > \t}\n> \n> It seems a bit weird to assign to XlogReaderState->readLen inside the\n> callbacks. I first thought it was just a transient state, but it's\n> not. I think it'd be good to wrap the xlogreader->readLen assignment an\n> an inline function. That we can add more asserts etc over time.\n\nSounds reasonable. The variable is split up into request/result\nvariables and setting the result variable is wrapped by a\nfunction. (0005).\n\n> > -/* pg_waldump's XLogReaderRoutine->page_read callback */\n> > +/*\n> > + * pg_waldump's WAL page rader, also used as page_read callback for\n> > + * XLogFindNextRecord\n> > + */\n> > static bool\n> > -WALDumpReadPage(XLogReaderState *state, XLogRecPtr targetPagePtr, int reqLen,\n> > -\t\t\t\tXLogRecPtr targetPtr, char *readBuff)\n> > +WALDumpReadPage(XLogReaderState *state, void *priv)\n> > {\n> > -\tXLogDumpPrivate *private = state->private_data;\n> > +\tXLogRecPtr\ttargetPagePtr = state->readPagePtr;\n> > +\tint\t\t\treqLen\t\t = state->readLen;\n> > +\tchar\t *readBuff\t = state->readBuf;\n> > +\tXLogDumpPrivate *private = (XLogDumpPrivate *) priv;\n> \n> It seems weird to pass a void *priv to a function that now doesn't at\n> all need the type punning anymore.\n\nMmm. I omitted it since client code was somewhat out-of-scope. 
In the\nattached 0004 WALDumpReadPage() is no longer used as the callback of\nXLogFindNextRecord.\n\nWhile fixing them, I found that XLogReaderState.readPageTLI has\nbeen moved to XLogReaderState.seg.ws_tli, so I removed it from 0001.\n\nI haven't changed the name \"XLog reader\" to \"XLog decoder\". I'm doing\nthat but it affects somewhat wide range of code.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 07 Apr 2021 17:50:25 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 8:50 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> I haven't changed the name \"XLog reader\" to \"XLog decoder\". I'm doing\n> that but it affects somewhat wide range of code.\n\nThanks for the new patch set!  Let's not worry about renaming it for now.\n\nThis fails in check-world as seen on cfbot; I am not 100% sure but\nthis change fixes it:\n\n@@ -1231,7 +1231,7 @@ XLogFindNextRecord(XLogFindNextRecordState *state)\n                        {\n                                /* Rewind the reader to the beginning of the\nlast record. */\n                                state->currRecPtr = state->reader_state->ReadRecPtr;\n-                               XLogBeginRead(state->reader_state, found);\n+                               XLogBeginRead(state->reader_state, state->currRecPtr);\n\nThe variable \"found\" seems to be useless.\n\nI still see the 3 warnings mentioned earlier when compiling without\n--enable-cassert.\n\nThere is a stray elog(HOGE) :-)\n\n\n",
"msg_date": "Thu, 8 Apr 2021 10:04:26 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "I squashed the patch set into one because half of them were fixups,\nand the two main patches were really parts of the same change and\nshould go in together.\n\nI fixed a few compiler warnings (GCC 10.2 reported several\nuninitialised variables, comparisons that are always true, etc) and\nsome comments.  You can see these in the fixup patch.\n\n+static inline void\n+XLogReaderNotifySize(XLogReaderState *state, int32 len)\n\nI think maybe it should really be XLogReaderSetInputData(state,\ntli, data, size) in a later release.  In the meantime, I changed it to\nXLogReaderSetInputData(state, size), hope that name makes sense...",
"msg_date": "Thu, 8 Apr 2021 21:46:06 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 9:46 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I squashed the patch set into one because half of them were fixups,\n> and the two main patches were really parts of the same change and\n> should go in together.\n>\n> I fixed a few compiler warnings (GCC 10.2 reported several\n> uninitialised variables, comparisons that are always true, etc) and\n> some comments. You can see these in the fixup patch.\n\nPushed. Luckily there are plenty more improvements possible for\nXLogReader/XLogDecoder in the next cycle.\n\n\n",
"msg_date": "Thu, 8 Apr 2021 23:51:34 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "At Thu, 8 Apr 2021 23:51:34 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Thu, Apr 8, 2021 at 9:46 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I squashed the patch set into one because half of them were fixups,\n> > and the two main patches were really parts of the same change and\n> > should go in together.\n> >\n> > I fixed a few compiler warnings (GCC 10.2 reported several\n> > uninitialised variables, comparisons that are always true, etc) and\n> > some comments.  You can see these in the fixup patch.\n> \n> Pushed.  Luckily there are plenty more improvements possible for\n> XLogReader/XLogDecoder in the next cycle.\n\nI'm surprised to see this pushed this soon. Thanks for pushing this!\n\nAnd thanks for fixing the remaining mistakes including some stupid\nones..\n\nAt Thu, 8 Apr 2021 10:04:26 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \n> There is a stray elog(HOGE) :-)\n\nUgggghhhh! This looks like it got slipped in while investigating\nanother issue..  Thanks for preventing the repository from being\ncontaminated by such a thing..\n\nAt Thu, 8 Apr 2021 21:46:06 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \n> I think maybe it should really be XLogReaderSetInputData(state,\n> tli, data, size) in a later release.  In the meantime, I changed it to\n> XLogReaderSetInputData(state, size), hope that name makes sense...\n\nSounds better. I didn't like that page-readers are allowed to touch\nXLogReaderStats.seg directly. Anyway it would be a small change.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 09 Apr 2021 09:36:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "At Fri, 09 Apr 2021 09:36:59 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> I'm surprised to see this pushed this soon. Thanks for pushing this!\n\nThis has since been reverted. I'm not sure how to check whether the\npossible defect happens on that platform, but anyway I reverted the\nCF item to \"Needs Review\" and then moved it to the next CF.\n\nMaybe I will rebase it soon.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 30 Jun 2021 16:54:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On Wed, Jun 30, 2021 at 12:54 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Fri, 09 Apr 2021 09:36:59 +0900 (JST), Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com> wrote in\n> > I'm surprised to see this pushed this soon. Thanks for pushing this!\n>\n> Then this has been reverted. I'm not sure how to check for the\n> possible defect happens on that platform, but, anyways I reverted the\n> CF item to \"Needs Review\" then moved to the next CF.\n>\n> Maybe I will rebase it soon.\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n>\nYes, rebase is required, therefore I am changing the status to \"Waiting On\nAuthor\"\nhttp://cfbot.cputube.org/patch_33_2113.log\n\n\n-- \nIbrar Ahmed",
"msg_date": "Thu, 15 Jul 2021 00:39:52 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "At Thu, 15 Jul 2021 00:39:52 +0500, Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote in \n> On Wed, Jun 30, 2021 at 12:54 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> > Maybe I will rebase it soon.\n> >\n> > Yes, rebase is required, therefore I am changing the status to \"Waiting On\n> Author\"\n> http://cfbot.cputube.org/patch_33_2113.log\n\nGah... Thank you for noticing me. I thought that I have sent the\nrebased version. This is the rebased version on the current master.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 15 Jul 2021 13:48:06 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 4:48 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Gah... Thank you for noticing me. I thought that I have sent the\n> rebased version. This is the rebased version on the current master.\n\nHi Kyotaro,\n\nDid you see this?\n\nhttps://www.postgresql.org/message-id/20210429022553.4h5qii5jb5eclu4i%40alap3.anarazel.de\n\n\n",
"msg_date": "Mon, 27 Sep 2021 17:31:03 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "At Mon, 27 Sep 2021 17:31:03 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Thu, Jul 15, 2021 at 4:48 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > Gah... Thank you for noticing me. I thought that I have sent the\n> > rebased version. This is the rebased version on the current master.\n> \n> Hi Kyotaro,\n> \n> Did you see this?\n> \n> https://www.postgresql.org/message-id/20210429022553.4h5qii5jb5eclu4i%40alap3.anarazel.de\n\nThank you for pinging me. I hadn't noticed that.\nI'll check on that line.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 30 Sep 2021 09:40:06 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
},
{
"msg_contents": "At Thu, 30 Sep 2021 09:40:06 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Mon, 27 Sep 2021 17:31:03 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> > On Thu, Jul 15, 2021 at 4:48 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > Gah... Thank you for noticing me. I thought that I have sent the\n> > > rebased version. This is the rebased version on the current master.\n> > \n> > Hi Kyotaro,\n> > \n> > Did you see this?\n> > \n> > https://www.postgresql.org/message-id/20210429022553.4h5qii5jb5eclu4i%40alap3.anarazel.de\n> \n> Thank you for pinging me. I haven't noticed of that.\n> I'll check on that line.\n\nIt looks like XLogFindNextRecord was not finished. It should have\nbeen turned into a state machine.\n\nIn this version (v18):\n\nThis contains only page-reader refactoring stuff.\n\n- Rebased to the current master, including an additional change for\n  XLOG_OVERWRITE_CONTRECORD stuff. (This needed the new function\n  XLogTerminateRead.)\n\n- Finished XLogFindNextRecord, including the fixup from Thomas' v17.\n\n- Added a test for XLogFindNextRecord, covering the page-skipping\n  behavior when seeking the first record.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 07 Oct 2021 17:28:20 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove page-read callback from XLogReaderState."
}
] |
[
{
"msg_contents": "Hi,\n\nFor not the first time I was trying to remember why and when the whole\nnodeIndexscan.c:IndexNextWithReorder() business is needed. The comment\nabout reordering\n\n *\t\tIndexNextWithReorder\n *\n *\t\tLike IndexNext, but this version can also re-check ORDER BY\n *\t\texpressions, and reorder the tuples as necessary.\n\nor\n+ /* Initialize sort support, if we need to re-check ORDER BY exprs */\n\nor\n\n+ /*\n+ * If there are ORDER BY expressions, look up the sort operators for\n+ * their datatypes.\n+ */\n\n\nnor any other easy to spot ones really explain that. It's not even\nobvious that this isn't talking about an ordering by a column\n(expression could maybe be taken as a hint, but that's fairly thin)\n\nBy reading enough code one can stitch together that that's really only\nneeded for KNN like order bys with lossy distance functions. It'd be\ngood if one had to dig less for that.\n\n\nthat logic was (originally) added in:\n\ncommit 35fcb1b3d038a501f3f4c87c05630095abaaadab\nAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi>\nDate:   2015-05-15 14:26:51 +0300\n\n    Allow GiST distance function to return merely a lower-bound.\n\n\nbut I think some of the documentation & naming for related\ndatastructures was a bit hard to grasp before then too - it's e.g. IMO\ncertainly not obvious that IndexPath.indexorderbys isn't about plain\nORDER BYs.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 18 Apr 2019 17:30:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Comments for lossy ORDER BY are lacking"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-18 17:30:20 -0700, Andres Freund wrote:\n> For not the first time I was trying to remember why and when the whole\n> nodeIndexscan.c:IndexNextWithReorder() business is needed. The comment\n> about reordering\n> \n>  *\t\tIndexNextWithReorder\n>  *\n>  *\t\tLike IndexNext, but this version can also re-check ORDER BY\n>  *\t\texpressions, and reorder the tuples as necessary.\n> \n> or\n> + /* Initialize sort support, if we need to re-check ORDER BY exprs */\n> \n> or\n> \n> + /*\n> + * If there are ORDER BY expressions, look up the sort operators for\n> + * their datatypes.\n> + */\n\nSecondary point: has anybody actually checked whether the extra\nreordering infrastructure is a measurable overhead? It's obviously fine\nfor index scans that need reordering (i.e. lossy ones), but currently\nit's at least initialized for distance based order bys. I guess that's\nlargely because currently opclasses don't signal the fact that they\nmight return lossy amcanorderby results, but that seems like it could\nhave been fixed back then?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 18 Apr 2019 17:37:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Comments for lossy ORDER BY are lacking"
}
] |
[
{
"msg_contents": "Can we include the CustomScan node in the list of nodes that do not \nsupport projection?\nReason is that custom node can contain quite arbitrary logic that does \nnot guarantee projection support.\nSecondly, if the planner does not need a separate Result node, it just \nassigns the tlist to the subplan (i.e. changes the targetlist of the \ncustom node) and does not change the custom_scan_tlist.\nPerhaps I do not fully understand the logic of using the \ncustom_scan_tlist field, but if our custom node does not build its own \ncustom_scan_tlist in the PlanCustomPath() routine (maybe it will use \ntlist as the base for custom_scan_tlist), we will get errors in the \nset_customscan_references() call.\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 19 Apr 2019 09:35:44 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Do CustomScan as not projection capable node"
},
{
"msg_contents": "Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> Can we include the CustomScan node in the list of nodes that do not \n> support projection?\n\nThat seems like a pretty bad idea. Maybe it's a good thing for whatever\nunspecified extension you have in mind right now, but it's likely to be\na net negative for many more. As an example, if some custom extension has\na better way to calculate some output expression than the core code does,\na restriction like this would prevent that from being implemented.\n\n> Reason is that custom node can contain quite arbitrary logic that does \n> not guarantee projection support.\n\nI don't buy this for a minute. Where do you think projection is\ngoing to happen? There isn't any existing node type that *couldn't*\nsupport projection if we insisted that that be done across-the-board.\nI think it's mostly just a legacy thing that some don't.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Apr 2019 00:45:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Do CustomScan as not projection capable node"
},
{
"msg_contents": "On Fri, Apr 19, 2019 at 12:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I don't buy this for a minute. Where do you think projection is\n> going to happen? There isn't any existing node type that *couldn't*\n> support projection if we insisted that that be done across-the-board.\n> I think it's mostly just a legacy thing that some don't.\n\nI think there may actually be some good reasons for that. If\nsomething like an Append or Material node projects, it seems to me\nthat this means that we built the wrong tlist for its input(s).\n\nThat justification doesn't apply to custom scans, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 22 Apr 2019 09:40:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Do CustomScan as not projection capable node"
},
{
"msg_contents": "\n\nOn 22/04/2019 18:40, Robert Haas wrote:\n> On Fri, Apr 19, 2019 at 12:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I don't buy this for a minute.  Where do you think projection is\n>> going to happen?  There isn't any existing node type that *couldn't*\n>> support projection if we insisted that that be done across-the-board.\n>> I think it's mostly just a legacy thing that some don't.\n> \n> I think there may actually be some good reasons for that.  If\n> something like an Append or Material node projects, it seems to me\n> that this means that we built the wrong tlist for its input(s).\n> \n> That justification doesn't apply to custom scans, though.\nThe main reason for my question was incomplete information about the \nparameter custom_scan_tlist / fdw_scan_tlist.\nIn the process of testing my custom node, I encountered an error in \nsetrefs.c caused by optimization of the projection operation. In order \nto reliably understand how to properly use custom_scan_tlist, I had to \nstudy in detail the mechanics of the FDW plan generator and now the \nproblem is solved.\nWe have only three references to this parameter in the hackers mailing \nlist, a brief reference on postgresql.org, and limited comments in two \npatches: 1a8a4e5 and e7cb7ee.\nIt is possible that custom_scan_tlist is designed too nontrivially, and \nit is possible that it needs some comments describing in more detail how \nto use it.\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 22 Apr 2019 22:23:28 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Do CustomScan as not projection capable node"
},
{
"msg_contents": "Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> It is possible that custom_scan_tlist is designed too nontrivially, and \n> it is possible that it needs some comments describing in more detail how \n> to use it.\n\nI totally buy the argument that the custom scan stuff is\nunderdocumented :-(.\n\nFWIW, if we did have a use-case showing that somebody would like to make\ncustom scans that can't project, the way to do it would be to add a flag\nbit showing whether a particular CustomPath/CustomScan could project or\nnot. Not to assume that they all can't. This wouldn't be that much code\nreally, but I'd still like to see a plausible use-case before adding it,\nbecause it'd be a small API break for existing CustomPath providers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Apr 2019 14:06:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Do CustomScan as not projection capable node"
}
] |
[
{
"msg_contents": "Hi, Hackers\n\npg_logical_emit_message() can be used by any user,\nbut the following document says that it can be used by only superuser.\n\n> Table 9.88. Replication SQL Functions\n> Use of these functions is restricted to superusers.\n\nI think that pg_logical_emit_message() should be used by any user.\nTherefore, I attach the document patch.\n\nRyo\nMatsumura",
"msg_date": "Fri, 19 Apr 2019 06:21:14 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Patch: doc for pg_logical_emit_message()"
},
{
"msg_contents": "On Fri, Apr 19, 2019 at 3:21 PM Matsumura, Ryo\n<matsumura.ryo@jp.fujitsu.com> wrote:\n>\n> Hi, Hackers\n>\n> pg_logical_emit_message() can be used by any user,\n> but the following document says that it can be used by only superuser.\n>\n> > Table 9.88. Replication SQL Functions\n> > Use of these functions is restricted to superusers.\n>\n> I think that pg_logical_emit_message() should be used by any user.\n> Therefore, I attach the document patch.\n\nThanks for the patch!\n\nUse of not only pg_logical_emit_message() but also other replication\nfunctions in Table 9.88 is not restricted to superusers. For example,\na replication role can execute pg_create_physical_replication_slot().\nSo I think that the patch should also fix the description for those\nreplication functions. Thought?\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Tue, 23 Apr 2019 02:59:02 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch: doc for pg_logical_emit_message()"
},
{
"msg_contents": "On Tue. Apr. 23, 2019 at 02:59 AM Masao, Fujii\r\n<masao.fujii@gmail.com> wrote:\r\n\r\nThank you for the comment.\r\n\r\n> So I think that the patch should fix also the description for those\r\n> replication functions. Thought?\r\n\r\nI think so too.\r\nI attach a new patch.\r\n\r\nRegards\r\nRyo Matsumura",
"msg_date": "Wed, 24 Apr 2019 02:12:21 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Patch: doc for pg_logical_emit_message()"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 11:12 AM Matsumura, Ryo\n<matsumura.ryo@jp.fujitsu.com> wrote:\n>\n> On Tue. Apr. 23, 2019 at 02:59 AM Masao, Fujii\n> <masao.fujii@gmail.com> wrote:\n>\n> Thank you for the comment.\n>\n> > So I think that the patch should fix also the description for those\n> > replication functions. Thought?\n>\n> I think so too.\n> I attach a new patch.\n\nThanks for updating the patch!\n\n+ Use of functions for replication origin is restricted to superusers.\n+ Use of functions for replication slot is restricted to superusers\nand replication roles.\n\n\"replication role\" is a bit confusing. For example, we have\n\"replication role\" related to session_replication_role. So\nI think it's better to use something like \"users having\n<literal>REPLICATION</literal> privilege\".\n\n+ Only <function>pg_logical_emit_message</function> can be used by any users.\n\nNot any user, I think. For example, what about a user not having\nEXECUTE privilege on pg_logical_emit_message function?\nI don't think that this description only for pg_logical_emit_message\nis necessary.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Wed, 24 Apr 2019 23:39:32 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch: doc for pg_logical_emit_message()"
},
{
"msg_contents": "On Wed. Apr. 24, 2019 at 11:40 PM Masao, Fujii\r\n<masao.fujii@gmail.com> wrote:\r\n\r\nThank you for the comment.\r\nI understand about the REPLICATION privilege and noticed my unnecessary words.\r\nI updated the patch.\r\n\r\nRegards\r\nRyo Matsumura",
"msg_date": "Fri, 26 Apr 2019 01:51:53 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Patch: doc for pg_logical_emit_message()"
},
{
"msg_contents": "On Fri, Apr 26, 2019 at 10:52 AM Matsumura, Ryo\n<matsumura.ryo@jp.fujitsu.com> wrote:\n>\n> On Wed. Apr. 24, 2019 at 11:40 PM Masao, Fujii\n> <masao.fujii@gmail.com> wrote:\n>\n> Thank you for the comment.\n> I understand about REPLICATION privilege and notice my unecessary words.\n> I update the patch.\n\nThanks! Pushed.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 9 May 2019 01:48:21 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch: doc for pg_logical_emit_message()"
},
{
"msg_contents": "On Thu. May. 9, 2019 at 01:48 AM Masao, Fujii\r\n<masao.fujii@gmail.com> wrote:\r\n\r\n> Thanks! Pushed.\r\n\r\nThank you.\r\n\r\n\r\nRegards\r\nRyo Matsumura\r\n",
"msg_date": "Thu, 9 May 2019 00:18:14 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Patch: doc for pg_logical_emit_message()"
}
] |
[
{
"msg_contents": "\nHello Tom,\n\n>> Yep, but ISTM that it is down to 32 bits,\n>\n> Only on 32-bit-long machines, which are a dwindling minority (except\n> for Windows, which I don't really care about).\n>\n>> So the third short is now always 0. Hmmm. I'll propose another option over\n>> the week-end.\n>\n> I suppose we could put pg_strtouint64 somewhere where pgbench can use it,\n> but TBH I don't think it's worth the trouble. The set of people using\n> the --random-seed=int option at all is darn near empty, I suspect,\n> and the documentation only says you can write an int there.\n\nAlthough I agree it is not worth a lot of trouble, and even if I don't do \nWindows, I think it valuable that the behavior is the same on all \nplatforms. The attached patch shares pg_str2*int64 functions between \nfrontend and backend by moving them to \"common/\", which avoids some code \nduplication.\n\nThis is more refactoring, and it fixes the behavior change on 32 bit \narchitectures.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 20 Apr 2019 13:22:02 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "refactoring - share str2*int64 functions"
},
{
"msg_contents": "As usual, it is better with the attachment. Sorry for the noise.\n\n>>> Yep, but ISTM that it is down to 32 bits,\n>> \n>> Only on 32-bit-long machines, which are a dwindling minority (except\n>> for Windows, which I don't really care about).\n>> \n>>> So the third short is now always 0. Hmmm. I'll propose another option over\n>>> the week-end.\n>> \n>> I suppose we could put pg_strtouint64 somewhere where pgbench can use it,\n>> but TBH I don't think it's worth the trouble. The set of people using\n>> the --random-seed=int option at all is darn near empty, I suspect,\n>> and the documentation only says you can write an int there.\n>\n> Although I agree it is not worth a lot of trouble, and even if I don't do \n> Windows, I think it valuable that the behavior is the same on all platforms. \n> The attached patch shares pg_str2*int64 functions between frontend and \n> backend by moving them to \"common/\", which avoids some code duplication.\n>\n> This is more refactoring, and it fixes the behavior change on 32 bit \n> architectures.\n>\n>\n\n-- \nFabien.",
"msg_date": "Sat, 20 Apr 2019 13:23:42 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": ">> Although I agree it is not worth a lot of trouble, and even if I don't do \n>> Windows, I think it valuable that the behavior is the same on all platforms. \n>> The attached patch shares pg_str2*int64 functions between frontend and \n>> backend by moving them to \"common/\", which avoids some code duplication.\n>> \n>> This is more refactoring, and it fixes the behavior change on 32 bit \n>> architectures.\n\nV2 is a rebase.\n\n-- \nFabien.",
"msg_date": "Thu, 23 May 2019 17:23:14 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Fri, May 24, 2019 at 3:23 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> >> Although I agree it is not worth a lot of trouble, and even if I don't do\n> >> Windows, I think it valuable that the behavior is the same on all platform.\n> >> The attached match shares pg_str2*int64 functions between frontend and\n> >> backend by moving them to \"common/\", which avoids some code duplication.\n> >>\n> >> This is more refactoring, and it fixes the behavior change on 32 bit\n> >> architectures.\n>\n> V2 is a rebase.\n\nHi Fabien,\n\nHere's some semi-automated feedback, noted while going through\nfailures on cfbot.cputube.org. You have a stray editor file\nsrc/backend/parser/parse_node.c.~1~. Something is failing to compile\nwhile doing the temp-install in make check-world, which probably\nindicates that some test or contrib module is using the interface you\nchanged?\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Jul 2019 15:22:31 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Mon, Jul 8, 2019 at 3:22 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here's some semi-automated feedback, noted while going through\n> failures on cfbot.cputube.org. You have a stray editor file\n> src/backend/parser/parse_node.c.~1~. Something is failing to compile\n> while doing the temp-install in make check-world, which probably\n> indicates that some test or contrib module is using the interface you\n> changed?\n\nPlease disregard the comment about the \".~1~\" file, my mistake.\nAs for the check-world failure, it's here:\n\npg_stat_statements.c:1024:11: error: implicit declaration of function\n'pg_strtouint64' is invalid in C99\n[-Werror,-Wimplicit-function-declaration]\n rows = pg_strtouint64(completionTag + 5, NULL, 10);\n ^\nApparently it needs to include common/string.h.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sat, 13 Jul 2019 23:17:00 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Hello Thomas,\n\n> pg_stat_statements.c:1024:11: error: implicit declaration of function\n> 'pg_strtouint64' is invalid in C99\n> [-Werror,-Wimplicit-function-declaration]\n> rows = pg_strtouint64(completionTag + 5, NULL, 10);\n> ^\n> Apparently it needs to include common/string.h.\n\nYep, I gathered that as well, but did not act promptly on your help.\nThanks for it!\n\nHere is the updated patch on which I checked \"make check-world\".\n\n-- \nFabien.",
"msg_date": "Sat, 13 Jul 2019 17:22:05 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Sun, Jul 14, 2019 at 3:22 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> Here is the updated patch on which I checked \"make check-world\".\n\nThanks! So, we're moving pg_strtouint64() to a place where frontend\ncode can use it, and getting rid of some duplication. I like it. I\nwanted this once before myself[1].\n\n+extern bool pg_strtoint64(const char *str, bool errorOK, int64 *result);\n+extern uint64 pg_strtouint64(const char *str, char **endptr, int base);\n\nOne of these things is not like the other. Let's see... the int64\nversion is used only by pgbench and is being promoted to common where\nit can be used by more code. With a name like that, wouldn't it make\nsense to bring it into line with the uint64 interface, and then move\npgbench's error reporting stuff back into pgbench? The uint64 one\nderives its shape from the family of standard functions like strtol()\nso I think it wins.\n\n[1] https://www.postgresql.org/message-id/CAEepm=2KeC8xDbEWgDTDObXGqPHFW4kcD7BZXR6NMfiHjjnKhQ@mail.gmail.com\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Jul 2019 13:43:50 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Hello Thomas,\n\n> +extern bool pg_strtoint64(const char *str, bool errorOK, int64 *result);\n> +extern uint64 pg_strtouint64(const char *str, char **endptr, int base);\n>\n> One of these things is not like the other.\n\nIndeed.\n\nI agree that it is unfortunate, and it was bothering me a little as well.\n\n> Let's see... the int64 version is used only by pgbench and is being \n> promoted to common where it can be used by more code.\n\nNo and yes.\n\nThe pgbench code was a copy of server-side internal \"scanint8\", so it is \nused both by pgbench and the server-side handling of \"int8\", it is used \nsignificantly, taking advantage of its versatile error reporting feature \non both sides.\n\n> With a name like that, wouldn't it make sense to bring it into line with \n> the uint64 interface, and then move pgbench's error reporting stuff back \n> into pgbench?\n\nThat would need moving the server-side error handling as well, which I \nwould not really be happy with.\n\n> The uint64 one derives its shape from the family of standard functions \n> like strtol() so I think it wins.\n\nYep, it cannot be changed either.\n\nI do not think that changing the error handling capability is appropriate, \nit is really a feature of the function. The function could try to use an \ninternal pg_strtoint64 which would look like the other unsigned version, \nbut then it would not differentiate the various error conditions (out of \nrange vs syntax error).\n\nThe compromise I can offer is to change the name of the first one, say to \n\"pg_scanint8\" to reflect its former backend name. Attached a v4 which does \na renaming so as to avoid the name similarity but signature difference. I \nalso made both error messages identical.\n\n-- \nFabien.",
"msg_date": "Mon, 15 Jul 2019 07:08:42 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Mon, Jul 15, 2019 at 5:08 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> The compromise I can offer is to change the name of the first one, say to\n> \"pg_scanint8\" to reflect its former backend name. Attached a v4 which does\n> a renaming so as to avoid the name similarity but signature difference. I\n> also made both error messages identical.\n\nCool. I'm not exactly sure when we should include 'pg_' in identifier\nnames. It seems to be used for functions/macros that wrap or replace\nsomething else with a similar name, like pg_pwrite(),\npg_attribute_noreturn(), ... In this case it's just our own code that\nwe're moving, so I'm wondering if we should just call it scanint8().\n\nIf you agree, I think this is ready to commit.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Jul 2019 11:16:31 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On 2019-Jul-16, Thomas Munro wrote:\n\n> On Mon, Jul 15, 2019 at 5:08 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > The compromise I can offer is to change the name of the first one, say to\n> > \"pg_scanint8\" to reflect its former backend name. Attached a v4 which does\n> > a renaming so as to avoid the name similarity but signature difference. I\n> > also made both error messages identical.\n> \n> Cool. I'm not exactly sure when we should include 'pg_' in identifier\n> names. It seems to be used for functions/macros that wrap or replace\n> something else with a similar name, like pg_pwrite(),\n> pg_attribute_noreturn(), ... In this case it's just our own code that\n> we're moving, so I'm wondering if we should just call it scanint8().\n\nIsn't it annoying that pg_strtouint64() has an implementation that\nsuggests that it ought to be in src/port? The fact that the signatures\nare so different suggests to me that we should indeed put them separate.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 15 Jul 2019 19:44:09 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Tue, Jul 16, 2019 at 11:16:31AM +1200, Thomas Munro wrote:\n> Cool. I'm not exactly sure when we should include 'pg_' in identifier\n> names. It seems to be used for functions/macros that wrap or replace\n> something else with a similar name, like pg_pwrite(),\n> pg_attribute_noreturn(), ... In this case it's just our own code that\n> we're moving, so I'm wondering if we should just call it scanint8().\n\nFWIW, I was looking forward to putting my hands on this patch and try\nto get it merged so as we can get rid of those duplications. Here are\nsome comments.\n\n+#ifdef FRONTEND\n+ fprintf(stderr,\n+ \"invalid input syntax for type %s: \\\"%s\\\"\\n\",\n\"bigint\", str);\n+#else\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n+ errmsg(\"invalid input syntax for type %s: \\\"%s\\\"\",\n+ \"bigint\", str)));\n+#endif\nHave you looked at using the wrapper pg_log_error() here?\n\n+extern bool pg_scanint8(const char *str, bool errorOK, int64\n*result);\n+extern uint64 pg_strtouint64(const char *str, char **endptr, int\nbase);\nHmm. With this patch we have strtoint and pg_strtouint64, which makes\nthe whole set inconsistent.\n\n+\n #endif /* COMMON_STRING_H */\nNoise diff.. \n\nnumutils.c also has pg_strtoint16 and pg_strtoint32, so the locations\nbecome rather inconsistent with inconsistent APIs for the manipulation\nof int2 and int4 fields, and scanint8 is just a derivative of the same\nlogic. We have two categories of routines here:\n- The wrappers on top of strtol and strtoul & co, which are named\nrespectively strtoint and pg_strtouint64 with the patch. The naming\npart is inconsistent, and we only handle uint64 and int32. We don't\nneed to worry about int64 and uint32 because they are not used?\nThat's fine by me, but at least let's have a consistent naming.\nPrefixing the functions with pg_* is a better practice in my opinion\nas we will unlikely run into conflicts this way.\n- The str->integer conversion routines, which actually have very\nsimilar characteristics to the strtol families as they remove trailing\nwhitespaces first, check for a sign, etc, except that they work only\non base 10. And here we get into a state where pg_scanint8 should be\nactually called pg_strtoint64, with an interface inconsistent with its\nint32/int16 relatives now only in the backend. Could we consider more\nconsolidation here please? Like moving the whole set to src/common/?\n\n> If you agree, I think this is ready to commit.\n\nThomas, are you planning to look at this patch as committer? I had it\nin my agenda, and was planning to look at it sooner than later. Now\nif you are on it, I won't step on your toes.\n--\nMichael",
"msg_date": "Tue, 16 Jul 2019 16:11:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Tue, Jul 16, 2019 at 7:11 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Thomas, are you planning to look at this patch as committer? I had it\n> in my agenda, and was planning to look at it sooner than later. Now\n> if you are on it, I won't step on your toes.\n\nHi Michael, please go ahead, it's all yours.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Jul 2019 19:20:33 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "\nHello Thomas,\n\n> On Mon, Jul 15, 2019 at 5:08 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>> The compromise I can offer is to change the name of the first one, say to\n>> \"pg_scanint8\" to reflect its former backend name. Attached a v4 which does\n>> a renaming so as to avoid the name similarity but signature difference. I\n>> also made both error messages identical.\n>\n> Cool. I'm not exactly sure when we should include 'pg_' in identifier\n> names. It seems to be used for functions/macros that wrap or replace\n> something else with a similar name, like pg_pwrite(),\n> pg_attribute_noreturn(), ... In this case it's just our own code that\n> we're moving, so I'm wondering if we should just call it scanint8().\n\nI added the pg_ prefix as a poor man's namespace because the function can \nbe used by external tools (eg contribs), so as to avoid potential name \nconflicts.\n\nI agree that such conflicts are less probable if the name does not replace \nsomething existing.\n\n> If you agree, I think this is ready to commit.\n\nIt can be removed, or not. So you do as you feel.\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 16 Jul 2019 07:30:43 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Jul 16, 2019, at 3:30 AM, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n>> Cool. I'm not exactly sure when we should include 'pg_' in identifier\n>> names. It seems to be used for functions/macros that wrap or replace\n>> something else with a similar name, like pg_pwrite(),\n>> pg_attribute_noreturn(), ... In this case it's just our own code that\n>> we're moving, so I'm wondering if we should just call it scanint8().\n> \n> I added the pg_ prefix as a poor man's namespace because the function can be used by external tools (eg contribs), so as to avoid potential name conflicts.\n\nYeah, I think if we are going to expose it to front end code there is a good argument for some kind of prefix that makes it sound PostgreSQL-related.\n\n...Robert\n\n\n",
"msg_date": "Tue, 16 Jul 2019 08:39:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Jul 16, 2019, at 3:30 AM, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>>> Cool. I'm not exactly sure when we should include 'pg_' in identifier\n>>> names.\n\n>> I added the pg_ prefix as a poor man's namespace because the function can be used by external tools (eg contribs), so as to avoid potential name conflicts.\n\n> Yeah, I think if we are going to expose it to front end code there is a good argument for some kind of prefix that makes it sound PostgreSQL-related.\n\nYeah, I'd tend to err in favor of including \"pg_\". We might get away\nwithout that as long as the name is never exposed to non-PG code, but\nfor stuff that's going into src/common/ or src/port/ I think that's\na risky assumption to make.\n\nI'm also in agreement with Michael's comments in\n<20190716071144.GF1439@paquier.xyz> that this would be a good time\nto bring some consistency to the naming of related functions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Jul 2019 09:41:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-15 07:08:42 +0200, Fabien COELHO wrote:\n> I do not think that changing the error handling capability is appropriate,\n> it is really a feature of the function. The function could try to use an\n> internal pg_strtoint64 which would look like the other unsigned version, but\n> then it would not differentiate the various error conditions (out of range\n> vs syntax error).\n\n> The compromise I can offer is to change the name of the first one, say to\n> \"pg_scanint8\" to reflect its former backend name. Attached a v4 which does a\n> renaming so as to avoid the name similarity but signature difference. I also\n> made both error messages identical.\n\nI think the interface of that function is not that good, and the \"scan\"\nin the name isn't great for discoverability (for one it's a different\nnaming than pg_strtoint16 etc), and the *8 meaning 64bit is confusing\nenough in the backend, we definitely shouldn't extend that to frontend\ncode.\n\nReferencing \"bigint\" and \"input syntax\" from frontend code imo doesn't\nmake a lot of sense. And int8in is the only caller that uses\nerrorOK=False anyway, so there's currently no need for the frontend\nerror strings afaict.\n\nISTM that something like\n\ntypedef enum StrToIntConversion\n{\n STRTOINT_OK = 0,\n STRTOINT_SYNTAX_ERROR = 1,\n STRTOINT_OUT_OF_RANGE = 2\n} StrToIntConversion;\nStrToIntConversion pg_strtoint64(const char *str, int64 *result);\n\nwould make more sense.\n\nThere is the issue that there already is pg_strtoint16 and\npg_strtoint32, which do not have the option to not raise an error. I'd\nprobably name the non-error throwing ones something like pg_strtointNN_e\n(for extended, or error handling), and have pg_strtointNN wrappers that\njust handle the errors, or reverse the naming (which'd cause a bit of\nchurn, but not that much).\n\nThat'd also make the code for pg_strtointNN a bit nicer, because we'd\nnot need the gotos anymore, they're just there to avoid redundant error\nmessages - which'd not be an issue if the error handling were just a\nswitch in a separate function. E.g.\n\nint32\npg_strtoint32(const char *s)\n{\n int32 result;\n\n switch (pg_strtoint32_e(s, &result))\n {\n case STRTOINT_OK:\n return result;\n\n case STRTOINT_SYNTAX_ERROR:\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n errmsg(\"invalid input syntax for type %s: \\\"%s\\\"\",\n \"integer\", s)));\n\n case STRTOINT_OUT_OF_RANGE:\n ereport(ERROR,\n (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n errmsg(\"value \\\"%s\\\" is out of range for type %s\",\n s, \"integer\")));\n\n }\n\n return 0; /* keep compiler quiet */\n}\n\nwhich does seem nicer than what we have right now.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Jul 2019 13:04:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-16 16:11:44 +0900, Michael Paquier wrote:\n\n> numutils.c also has pg_strtoint16 and pg_strtoint32, so the locations\n> become rather inconsistent with inconsistent APIs for the manipulation\n> of int2 and int4 fields, and scanint8 is just a derivative of the same\n> logic. We have two categories of routines here:\n\n> - The wrappers on top of strtol and strtoul & co, which are named\n> respectively strtoint and pg_strtouint64 with the patch. The naming\n> part is inconsistent, and we only handle uint64 and int32. We don't\n> need to worry about int64 and uint32 because they are not used?\n> That's fine by me, but at least let's have a consistent naming.\n\nYea, consistent naming seems like a strong requirement\nhere. Additionally I think we should just provide a consistent set\nrather than what's needed just now. That'll just lead to people\ninventing their own again down the line.\n\n\n> Prefixing the functions with pg_* is a better practice in my opinion\n> as we will unlikely run into conflicts this way.\n\n+1\n\n\n> - The str->integer conversion routines, which actually have very\n> similar characteristics to the strtol families as they remove trailing\n> whitespaces first, check for a sign, etc, except that they work only\n> on base 10.\n\nThere's afaict neither a caller that needs the base argument at the\nmoment, nor one in the tree previously. I'd argue for just making\npg_strtouint64's API consistent.\n\nI'd probably also just use the implementation we have for signed\nintegers (minus the relevant negation and overflow checks, obviously) -\nit's a lot faster, and I think there's value in keeping the\nimplementations in sync.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Jul 2019 13:18:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Tue, Jul 16, 2019 at 01:18:38PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2019-07-16 16:11:44 +0900, Michael Paquier wrote:\n> Yea, consistent naming seems like a strong requirement\n> here. Additionally I think we should just provide a consistent set\n> rather than what's needed just now. That'll just lead to people\n> inventing their own again down the line.\n\nAgreed. The first versions of pg_rewind in the tree have been using\ncopy_file_range(), which has been introduced in Linux.\n\n> > - The str->integer conversion routines, which actually have very\n> > similar characteristics to the strtol families as they remove trailing\n> > whitespaces first, check for a sign, etc, except that they work only\n> > on base 10.\n> \n> There's afaict neither a caller that needs the base argument at the\n> moment, nor one in the tree previously. I'd argue for just making\n> pg_strtouint64's API consistent.\n\nGood point, indeed, this could be much more simplified. I have not\npaid attention at that part.\n\n> I'd probably also just use the implementation we have for signed\n> integers (minus the relevant negation and overflow checks, obviously) -\n> it's a lot faster, and I think there's value in keeping the\n> implementations in sync.\n\nYou mean that it is much faster than the set of wrappers for strtol\nthan we have? Is that because we don't care about the base?\n--\nMichael",
"msg_date": "Wed, 17 Jul 2019 12:04:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Tue, Jul 16, 2019 at 01:04:38PM -0700, Andres Freund wrote:\n> There is the issue that there already is pg_strtoint16 and\n> pg_strtoint32, which do not have the option to not raise an error. I'd\n> probably name the non-error throwing ones something like pg_strtointNN_e\n> (for extended, or error handling), and have pg_strtointNN wrappers that\n> just handle the errors, or reverse the naming (which'd cause a bit of\n> churn, but not that much).\n> \n> That'd also make the code for pg_strtointNN a bit nicer, because we'd\n> not need the gotos anymore, they're just there to avoid redundant error\n> messages - which'd not be an issue if the error handling were just a\n> switch in a separate function. E.g.\n\nAgreed on that. I am wondering if we should use a common wrapper for\nall the internal functions taking in input a set of bits16 flags to\ncontrol its behavior and put all that into common/string.c:\n- Issue an error.\n- Check for signedness.\n- Base length: 16, 32 or 64.\nThis would have the advantage to move the error string generation, the\ntrailing whitespace check, sign handling and such in a single place.\nWe could have the internal routine return uint64 which is casted\nafterwards to a proper result depending on what we use. (Perhaps\nthat's what you actually meant?)\n\nI would also rather not touch the strtol wrappers that we have able to\nhandle the base. There is nothing in the tree using it, but likely\nthere are extensions relying on it. Switching all the existing\ncallers in the tree to the new routines sounds good to me of course.\n\nConsolidating all that still needs more work, so for now I am\nswitching the patch as waiting on author.\n--\nMichael",
"msg_date": "Wed, 17 Jul 2019 12:18:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Bonjour Michaël,\n\n> FWIW, I was looking forward to putting my hands on this patch and try\n> to get it merged so as we can get rid of those duplications. Here are\n> some comments.\n>\n> +#ifdef FRONTEND\n> + fprintf(stderr,\n> + \"invalid input syntax for type %s: \\\"%s\\\"\\n\",\n> \"bigint\", str);\n> +#else\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n> + errmsg(\"invalid input syntax for type %s: \\\"%s\\\"\",\n> + \"bigint\", str)));\n> +#endif\n> Have you looked at using the wrapper pg_log_error() here?\n\nI have not.\n\nI have simply merged the two implementations (pgbench & backend) as they \nwere.\n\n> +extern bool pg_scanint8(const char *str, bool errorOK, int64 *result);\n> +extern uint64 pg_strtouint64(const char *str, char **endptr, int base);\n\n> Hmm. With this patch we have strtoint and pg_strtouint64, which makes\n> the whole set inconsistent.\n\nI understand that you mean bits vs bytes? Indeed it can bite!\n\n> +\n> #endif /* COMMON_STRING_H */\n> Noise diff..\n\nIndeed.\n\n> numutils.c also has pg_strtoint16 and pg_strtoint32, so the locations\n> become rather inconsistent with inconsistent APIs for the manipulation\n> of int2 and int4 fields, and scanint8 is just a derivative of the same\n> logic. We have two categories of routines here:\n\nYep, but the int2/4 functions are not used elsewhere.\n\n> - The wrappers on top of strtol and strtoul & co, which are named\n> respectively strtoint and pg_strtouint64 with the patch. The naming\n> part is inconsistent, and we only handle uint64 and int32. We don't\n> need to worry about int64 and uint32 because they are not used?\n\nIndeed, it seems that they are not needed/used by client code, AFAICT.\n\n> That's fine by me, but at least let's have a consistent naming.\n\nOk.\n\n> Prefixing the functions with pg_* is a better practice in my opinion\n> as we will unlikely run into conflicts this way.\n\nOk.\n\n> - The str->integer conversion routines, which actually have very\n> similar characteristics to the strtol families as they remove trailing\n> whitespaces first, check for a sign, etc, except that they work only\n> on base 10. And here we get into a state where pg_scanint8 should be\n> actually called pg_strtoint64,\n\nI just removed that:-)\n\nISTM that the issue is that the error handling of these functions is \npretty different.\n\n> with an interface inconsistent with its int32/int16 relatives now only \n> in the backend.\n\nWe can, but I'm not at ease with changing the error handling approach.\n\n> Could we consider more consolidation here please? Like moving the whole \n> set to src/common/?\n\nMy initial plan was simply to remove direct code duplications between \nfront-end and back-end, not to re-engineer the full set of string to int \nconversion functions:-)\n\nOn the re-engineering front: Given the various points on the thread, ISTM \nthat there should probably be two behaviors for str to signed/unsigned \nint{16,32,64}, and having only one kind of signature for all types would \nbe definitely better.\n\nOne low-level one that does the conversion or return an error.\n\nAnother higher-level one which possibly adds an error message (stderr for \nfront-end, log for back-end).\n\nOne choice is whether there are two functions (the higher one calling the \nlower one and adding the messages) or just one with a boolean to trigger \nthe messages. I do not have a strong opinion. Maybe one function would be \nsimpler. Andres' boolean-compatible enum return looks like a good option.\n\nOverall, this leads to something like:\n\nenum { STRTOINT_OK, STRTOINT_OVERFLOW_ERROR, STRTOINT_SYNTAX_ERROR }\n pg_strto{,u}int{8?,16,32,64}\n (const char * string, const TYPE * result, const bool verbose);\n\n-- \nFabien.",
"msg_date": "Wed, 17 Jul 2019 07:55:39 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 07:55:39AM +0000, Fabien COELHO wrote:\n>> numutils.c also has pg_strtoint16 and pg_strtoint32, so the locations\n>> become rather inconsistent with inconsistent APIs for the manipulation\n>> of int2 and int4 fields, and scanint8 is just a derivative of the same\n>> logic. We have two categories of routines here:\n> \n> Yep, but the int2/4 functions are not used elsewhere.\n\nThe worry is more about having people invent the same stuff all over\nagain. If we can get a clean interface, that would ease adoption.\nHopefully.\n\n> Overall, this leads to something like:\n> \n> enum { STRTOINT_OK, STRTOINT_OVERFLOW_ERROR, STRTOINT_SYNTAX_ERROR }\n> pg_strto{,u}int{8?,16,32,64}\n> (const char * string, const TYPE * result, const bool verbose);\n\nSomething like that. \"verbose\" may mean \"error_ok\" though. Not\nhaving 6 times the same trailing whitespace checks and such would be\nnice.\n\nActually, one thing which may be a problem is that we lack currently\nthe equivalents of pg_mul_s16_overflow and such for unsigned\nintegers. The point of contention comes from pgbench's\nset_random_seed() in this case as we can expect an unsigned seed as\nthe docs say. But if we give up on the signedness of the random seed\nwhich remains with 8 bytes, then we could let pg_strtouint64 as\nbackend-only and only worry about porting this set of APIs for signed\nintegers.\n--\nMichael",
"msg_date": "Wed, 17 Jul 2019 17:29:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-17 12:18:19 +0900, Michael Paquier wrote:\n> On Tue, Jul 16, 2019 at 01:04:38PM -0700, Andres Freund wrote:\n> > There is the issue that there already is pg_strtoint16 and\n> > pg_strtoint32, which do not have the option to not raise an error. I'd\n> > probably name the non-error throwing ones something like pg_strtointNN_e\n> > (for extended, or error handling), and have pg_strtointNN wrappers that\n> > just handle the errors, or reverse the naming (which'd cause a bit of\n> > churn, but not that much).\n> > \n> > That'd also make the code for pg_strtointNN a bit nicer, because we'd\n> > not need the gotos anymore, they're just there to avoid redundant error\n> > messages - which'd not be an issue if the error handling were just a\n> > switch in a separate function. E.g.\n> \n> Agreed on that. I am wondering if we should use a common wrapper for\n> all the internal functions taking in input a set of bits16 flags to\n> control its behavior and put all that into common/script.c:\n> - Issue an error.\n> - Check for signedness.\n> - Base length: 16, 32 or 64.\n\nThat'd be considerably slower, so I'm *strongly* against that. These\nconversion routines are *really* hot in a number of workloads,\ne.g. bulk-loading with COPY. Check e.g.\nhttps://www.postgresql.org/message-id/20171208214437.qgn6zdltyq5hmjpk%40alap3.anarazel.de\n\n\n> I would also rather not touch the strtol wrappers that we have able to\n> handle the base. There is nothing in the tree using it, but likely\n> there are extensions relying on it.\n\nI doubt it - it's not of that long-standing vintage (23a27b039d9,\n2016-03-12), and if so they are very likely to use base 10. We shouldn't\nkeep some barely tested function around, just for the hypothetical\nscenario that some extension uses it. Especially if that function is\nconsiderably slower than the potential replacement.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jul 2019 11:14:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-17 12:04:32 +0900, Michael Paquier wrote:\n> On Tue, Jul 16, 2019 at 01:18:38PM -0700, Andres Freund wrote:\n> > I'd probably also just use the implementation we have for signed\n> > integers (minus the relevant negation and overflow checks, obviously) -\n> > it's a lot faster, and I think there's value in keeping the\n> > implementations in sync.\n> \n> You mean that it is much faster than the set of wrappers for strtol\n> than we have? Is that because we don't care about the base?\n\nYes: https://www.postgresql.org/message-id/20171208214437.qgn6zdltyq5hmjpk%40alap3.anarazel.de\n\nNot caring about the base is one significant part, that removes a fair\nbit of branches and more importantly allows the compiler to replace\ndivisions with much faster code (glibc tries to avoid the division too,\nwith lookup tables, but that's still expensive). Additionally there's\nalso some locale awareness in strtoll etc that we don't need. It's also\nplainly not that well implemented at least in glibc and musl.\n\nHaving an implementation that reliably works the same across all\nplatforms is also advantageous.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jul 2019 11:21:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
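The hand-rolled base-10 conversion Andres argues for above can be illustrated with a minimal sketch. This is editorial illustration only: `my_strtoint64` is a hypothetical name, and its exact behavior (whitespace handling, strict trailing-garbage rejection) is an assumption, not PostgreSQL's committed implementation.

```c
#include <assert.h>
#include <ctype.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Minimal hand-rolled base-10 int64 parser: no base argument, no locale
 * handling, overflow checked per digit.  Accumulating in the negative
 * range lets the same loop accept INT64_MIN, whose absolute value does
 * not fit in a positive int64.
 */
static bool
my_strtoint64(const char *str, int64_t *result)
{
    const char *p = str;
    int64_t     acc = 0;        /* accumulated as a negative number */
    bool        neg = false;

    while (isspace((unsigned char) *p))
        p++;
    if (*p == '-')
    {
        neg = true;
        p++;
    }
    else if (*p == '+')
        p++;
    if (!isdigit((unsigned char) *p))
        return false;           /* syntax error */
    for (; isdigit((unsigned char) *p); p++)
    {
        int digit = *p - '0';

        if (acc < (INT64_MIN + digit) / 10)
            return false;       /* overflow */
        acc = acc * 10 - digit;
    }
    while (isspace((unsigned char) *p))
        p++;
    if (*p != '\0')
        return false;           /* trailing garbage */
    if (!neg)
    {
        if (acc == INT64_MIN)
            return false;       /* +9223372036854775808 overflows */
        acc = -acc;
    }
    *result = acc;
    return true;
}
```

Because the divisor is the constant 10, the compiler can turn the per-digit division into cheap multiply-shift code, which is part of why such loops beat strtol in the profiles cited above.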
{
"msg_contents": "Hi,\n\nOn 2019-07-17 07:55:39 +0000, Fabien COELHO wrote:\n> > - The str->integer conversion routines, which actually have very\n> > similar characteristics to the strtol families as they remove trailing\n> > whitespaces first, check for a sign, etc, except that they work only\n> > on base 10. And here we get into a state where pg_scanint8 should be\n> > actually called pg_strtoint64,\n> \n> I just removed that:-)\n\nWhat do you mean by that?\n\n\n> > with an interface inconsistent with its int32/int16 relatives now only\n> > in the backend.\n> \n> We can, but I'm not at ease with changing the error handling approach.\n\nWhy?\n\n\n> > Could we consider more consolidation here please? Like moving the whole\n> > set to src/common/?\n> \n> My initial plan was simply to remove direct code duplications between\n> front-end and back-end, not to re-engineer the full set of string to int\n> conversion functions:-)\n\nWell, if you expose functions to more places - e.g. now the whole\nfrontend - it becomes more important that they're reasonably designed.\n\n\n> On the re-engineering front: Given the various points on the thread, ISTM\n> that there should probably be two behaviors for str to signed/unsigned\n> int{16,32,64}, and having only one kind of signature for all types would be\n> definitely better.\n\nI don't understand why we'd want to have different behaviours for\nsigned/unsigned? Maybe I'm mis-understanding your sentence, and you just\nmean that there should be one that throws, and one that returns an\nerrorcode?\n\n\n> Another higher-level one which possibly adds an error message (stderr for\n> front-end, log for back-end).\n\nIs there actually any need for a non-backend one that has an\nerror-message? 
I'm not convinced that in the frontend it's very useful\nto have such a function that exits - in the backend we have a much more\ncomplete way to handle that, including pointing to the right location in\nthe query strings etc.\n\n\n> One choice is whether there are two functions (the higher one calling the\n> lower one and adding the messages) or just one with a boolean to trigger the\n> messages. I do not have a strong opinion. Maybe one function would be\n> simpler. Andres boolean-compatible enum return looks like a good option.\n\nThe boolean makes the calling code harder to understand, the function\nslower, and the code harder to grep.\n\n\n> Overall, this leads to something like:\n> \n> enum { STRTOINT_OK, STRTOINT_OVERFLOW_ERROR, STRTOINT_SYNTAX_ERROR }\n> pg_strto{,u}int{8?,16,32,64}\n> (const char * string, const TYPE * result, const bool verbose);\n\nWhat's with the const for the result? I assume that was just a copy&pasto?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jul 2019 11:28:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-17 17:29:58 +0900, Michael Paquier wrote:\n> Actually, one thing which may be a problem is that we lack currently\n> the equivalents of pg_mul_s16_overflow and such for unsigned\n> integers.\n\nIt's much simpler to implement them for unsigned than for signed,\nbecause unsigned overflow is well-defined. So I'd not be particularly\nworried about just adding them. E.g. comparing the \"slow\" version of\npg_mul_s64_overflow() with an untested implementation of\npg_mul_u64_overflow():\n\npg_mul_s64_overflow:\n\t/*\n\t * Overflow can only happen if at least one value is outside the range\n\t * sqrt(min)..sqrt(max) so check that first as the division can be quite a\n\t * bit more expensive than the multiplication.\n\t *\n\t * Multiplying by 0 or 1 can't overflow of course and checking for 0\n\t * separately avoids any risk of dividing by 0. Be careful about dividing\n\t * INT_MIN by -1 also, note reversing the a and b to ensure we're always\n\t * dividing it by a positive value.\n\t *\n\t */\n\tif ((a > PG_INT32_MAX || a < PG_INT32_MIN ||\n\t\t b > PG_INT32_MAX || b < PG_INT32_MIN) &&\n\t\ta != 0 && a != 1 && b != 0 && b != 1 &&\n\t\t((a > 0 && b > 0 && a > PG_INT64_MAX / b) ||\n\t\t (a > 0 && b < 0 && b < PG_INT64_MIN / a) ||\n\t\t (a < 0 && b > 0 && a < PG_INT64_MIN / b) ||\n\t\t (a < 0 && b < 0 && a < PG_INT64_MAX / b)))\n\t{\n\t\t*result = 0x5EED;\t\t/* to avoid spurious warnings */\n\t\treturn true;\n\t}\n\t*result = a * b;\n\treturn false;\n\npg_mul_u64_overflow:\n\n /*\n * Checking for unsigned overflow is simple, just check\n * if reversing the multiplication indicates that the\n * multiplication overflowed.\n */\n uint64 res = a * b;\n if (a != 0 && b != res / a)\n {\n\t\t*result = 0x5EED;\t\t/* to avoid spurious warnings */\n\t\treturn true;\n\t}\n\t*result = res;\n\treturn false;\n\n\nThe cases for addition/subtraction are even easier:\naddition:\nres = a + b;\nif (res < a)\n /* overflow */\n\nsubtraction:\nif (a < b)\n /* 
underflow */\nres = a - b;\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jul 2019 11:48:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
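The unsigned helpers sketched in the message above compile essentially as written. A self-contained version follows; the `my_*` names are placeholders, since the eventual naming was still under discussion in the thread.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Overflow-checked unsigned 64-bit helpers.  Unsigned wraparound is
 * well-defined in C, which is what makes these checks so simple compared
 * to the signed variants.  Return true on overflow, false on success
 * (mirroring pg_add_s64_overflow's convention).
 */
static bool
my_add_u64_overflow(uint64_t a, uint64_t b, uint64_t *result)
{
    uint64_t res = a + b;       /* wraps on overflow */

    if (res < a)
        return true;            /* overflowed */
    *result = res;
    return false;
}

static bool
my_sub_u64_overflow(uint64_t a, uint64_t b, uint64_t *result)
{
    if (a < b)
        return true;            /* would underflow below zero */
    *result = a - b;
    return false;
}

static bool
my_mul_u64_overflow(uint64_t a, uint64_t b, uint64_t *result)
{
    uint64_t res = a * b;

    /* reversing the multiplication detects wraparound */
    if (a != 0 && res / a != b)
        return true;
    *result = res;
    return false;
}
```

On compilers that provide them, `__builtin_add_overflow` and friends generate better code than the division in the multiply case; the portable fallback above matches the email's sketch.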
{
"msg_contents": "\n\n>>> - The str->integer conversion routines, which actually have very\n>>> similar characteristics to the strtol families as they remove trailing\n>>> whitespaces first, check for a sign, etc, except that they work only\n>>> on base 10. And here we get into a state where pg_scanint8 should be\n>>> actually called pg_strtoint64,\n>>\n>> I just removed that:-)\n>\n> What do you mean by that?\n\nThat I renamed something from a previous patch version and someone is \ncomplaining that I did.\n\n>>> with an interface inconsistent with its int32/int16 relatives now only\n>>> in the backend.\n>>\n>> We can, but I'm not at ease with changing the error handling approach.\n>\n> Why?\n\nIf a function reports an error to log, it should keep on doing it, \notherwise there would be a regression.\n\n>>> Could we consider more consolidation here please? Like moving the whole\n>>> set to src/common/?\n>>\n>> My initial plan was simply to remove direct code duplications between\n>> front-end and back-end, not to re-engineer the full set of string to int\n>> conversion functions:-)\n>\n> Well, if you expose functions to more places - e.g. now the whole\n> frontend - it becomes more important that they're reasonably designed.\n\nI can somehow only agree with that. Note that the contrapositive assumption \nthat badly designed functions would be okay for backend seems doubtful.\n\n>> On the re-engineering front: Given the various points on the thread, \n>> ISTM that there should probably be two behaviors for str to \n>> signed/unsigned int{16,32,64}, and having only one kind of signature \n>> for all types would be definitely better.\n>\n> I don't understand why we'd want to have different behaviours for\n> signed/unsigned? 
Maybe I'm mis-understanding your sentence, and you just\n> mean that there should be one that throws, and one that returns an\n> errorcode?\n\nYep for the backend (if reporting an error generates a longjump), for the \nfrontend there is no exception mechanism, so it is showing the error or \nnot to stderr, and returning whether it was ok.\n\n>> Another higher-level one which possibly adds an error message (stderr for\n>> front-end, log for back-end).\n>\n> Is there actually any need for a non-backend one that has an\n> error-message?\n\nPgbench uses it. If the function is shared and one is logging something, it \nlooks ok to send to stderr for front-end?\n\n> I'm not convinced that in the frontend it's very useful to have such a \n> function that exits - in the backend we have a much more complete way to \n> handle that, including pointing to the right location in the query \n> strings etc.\n\nSure. There is no exit though, just messages to stderr and return false.\n\n>> One choice is whether there are two functions (the higher one calling the\n>> lower one and adding the messages) or just one with a boolean to trigger the\n>> messages. I do not have a strong opinion. Maybe one function would be\n>> simpler. Andres boolean-compatible enum return looks like a good option.\n>\n> The boolean makes the calling code harder to understand, the function\n> slower,\n\nHmmm. So I understand that you would prefer 2 functions, one raw (fast) \none and the other with the better error reporting facility, \nand the user must choose the one they like. I'm fine with that as well.\n\n> and the code harder to grep.\n\n>> Overall, this leads to something like:\n>>\n>> enum { STRTOINT_OK, STRTOINT_OVERFLOW_ERROR, STRTOINT_SYNTAX_ERROR }\n>> pg_strto{,u}int{8?,16,32,64}\n>> (const char * string, const TYPE * result, const bool verbose);\n>\n> What's with hthe const for the result? I assume that was just a copy&pasto?\n\nYep. 
The pointer is constant, not the value pointed, maybe it should be \n\"TYPE * const result\" or something like that. Or no const at all on \nresult.\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 17 Jul 2019 22:59:01 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-17 22:59:01 +0000, Fabien COELHO wrote:\n> > > > with an interface inconsistent with its int32/int16 relatives now only\n> > > > in the backend.\n> > > \n> > > We can, but I'm not at ease with changing the error handling approach.\n> > \n> > Why?\n> \n> If a function reports an error to log, it should keep on doing it, otherwise\n> there would be a regression.\n\nErr, huh. Especially if we change the signature, I fail to see how it's\na regression if we change the behaviour.\n\n\n> > > Another higher-level one which possibly adds an error message (stderr for\n> > > front-end, log for back-end).\n> > \n> > Is there actually any need for a non-backend one that has an\n> > error-message?\n> \n> Pgbench uses it. If the function is shared and one is loging something, it\n> looks ok to send to stderr for front-end?\n\n> > I'm not convinced that in the frontend it's very useful to have such a\n> > function that exits - in the backend we have a much more complete way to\n> > handle that, including pointing to the right location in the query\n> > strings etc.\n> \n> Sure. There is not exit though, just messages to stderr and return false.\n\nI think it's a seriously bad idea to have a function whose behavior in\nthe error case depends on whether it's frontend or\nbackend code. We shouldn't do stuff like that, it just leads to bugs.\n\n\n> > > One choice is whether there are two functions (the higher one calling the\n> > > lower one and adding the messages) or just one with a boolean to trigger the\n> > > messages. I do not have a strong opinion. Maybe one function would be\n> > > simpler. Andres boolean-compatible enum return looks like a good option.\n> > \n> > The boolean makes the calling code harder to understand, the function\n> > slower,\n> \n> Hmmm. 
So I understand that you would prefer 2 functions, one raw (fast) one\n> and the other with the other with the better error reporting facity, and the\n> user must chose the one they like. I'm fine with that as well.\n\nWell, the one with error reporting would use the former.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jul 2019 16:17:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 11:14:28AM -0700, Andres Freund wrote:\n> That'd be considerably slower, so I'm *strongly* against that. These\n> conversion routines are *really* hot in a number of workloads,\n> e.g. bulk-loading with COPY. Check e.g.\n> https://www.postgresql.org/message-id/20171208214437.qgn6zdltyq5hmjpk%40alap3.anarazel.de\n\nThanks for the link. That makes sense! So stacking more function\ncalls could also be an issue. Even if using static inline for the\ninner wrapper? That may sound like a stupid question but you have\nlikely more experience than me regarding that with profiling.\n\n> I doubt it - it's not of that long-standing vintage (23a27b039d9,\n> 2016-03-12), and if so they are very likely to use base 10. We shouldn't\n> keep some barely tested function around, just for the hypothetical\n> scenario that some extension uses it. Especially if that function is\n> considerably slower than the potential replacement.\n\nOkay, I won't fight hard on that either.\n--\nMichael",
"msg_date": "Thu, 18 Jul 2019 09:28:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-18 09:28:28 +0900, Michael Paquier wrote:\n> On Wed, Jul 17, 2019 at 11:14:28AM -0700, Andres Freund wrote:\n> > That'd be considerably slower, so I'm *strongly* against that. These\n> > conversion routines are *really* hot in a number of workloads,\n> > e.g. bulk-loading with COPY. Check e.g.\n> > https://www.postgresql.org/message-id/20171208214437.qgn6zdltyq5hmjpk%40alap3.anarazel.de\n>\n> Thanks for the link. That makes sense! So stacking more function\n> calls could also be an issue. Even if using static inline for the\n> inner wrapper? That may sound like a stupid question but you have\n> likely more experience than me regarding that with profiling.\n\nA static inline would be fine, depending on how you do that. I'm not\nquite sure what you mean with \"inner wrapper\" - isn't a wrapper normally\noutside?\n\nI'd probably do something like\n\nstatic inline int64\nstrtoint64(const char *str)\n{\n int64 res;\n strtoint_error e;\n\n e = strtoint64_e(str, &res);\n if (likely(e == STRTOINT_OK))\n return res;\n else\n {\n report_strtoint_error(str, e, \"int64\");\n return 0; /* pacify compiler */\n }\n}\n\nand then have one non-inline report_strtoint_error() shared across the\nvarious functions. Even leaving code-duplication aside, not having the\nelog call itself in the inline function is nice, as that's quite a few\ninstructions.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jul 2019 18:16:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "\nHello Andres,\n\n>> If a function reports an error to log, it should keep on doing it, otherwise\n>> there would be a regression.\n>\n> Err, huh. Especially if we change the signature, I fail to see how it's\n> a regression if we change the behaviour.\n\nISTM that we do not understand one another, because I'm only trying to \nstate the obvious. Currently error messages on overflow or syntax error \nare displayed. I just mean that such messages must keep on being emitted \nsomehow otherwise there would be a regression.\n\n sql> SELECT INT8 '12345678901234567890';\n ERROR: value \"12345678901234567890\" is out of range for type bigint\n LINE 1: SELECT INT8 '12345678901234567890';\n\n>> Sure. There is not exit though, just messages to stderr and return false.\n>\n> I think it's a seriously bad idea to have a function that returns\n> depending in the error case depending on whether it's frontend or\n> backend code. We shouldn't do stuff like that, it just leads to bugs.\n\nOk, you really want two functions and getting rid of the errorOK boolean.\nI got that. The scanint8 already exists with its ereport ERROR vs return \nbased on a boolean, so the point is to get rid of this kind of signature.\n\n>>> The boolean makes the calling code harder to understand, the function\n>>> slower,\n>>\n>> Hmmm. So I understand that you would prefer 2 functions, one raw (fast) one\n>> and the other with the other with the better error reporting facity, and the\n>> user must chose the one they like. I'm fine with that as well.\n>\n> Well, the one with error reporting would use the former.\n\nYep, possibly two functions, no boolean.\n\nHaving a common function will not escape the fact that there is no \nexception mechanism for front-end, and none seems desirable just for \nparsing ints. 
So if you want NOT to have the same function either return \na status or raise an error, that would make three functions.\n\nOne basic int parsing in the fe/be common part.\n\n typedef enum { STRTOINT_OK, STRTOINT_OVERFLOW, STRTOINT_SYNTAX_ERROR }\n strtoint_error;\n\n strtoint_error pg_strtoint64(const char * str, int64 * result);\n\nThen for backend, probably in backend somewhere:\n\n // raises an exception on errors\n void pg_strtoint64_log(const char * str, int64 * result) {\n switch (pg_strtoint64(str, result)) {\n case STRTOINT_OK: return;\n case STRTOINT...: ereport(ERROR, ...);\n }\n }\n\n\nAnd for frontend, only in frontend somewhere:\n\n // print to stderr and return false on error\n bool pg_strtoint64_err(const char * str, int64 * result);\n\nI'm unhappy with the function names though, feel free to improve.\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 18 Jul 2019 07:57:41 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
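Fabien's layering above — one status-returning core shared by frontend and backend, plus thin reporting wrappers — can be sketched as compilable code. To stay short the core here merely wraps `strtoll`; the thread's actual plan is a hand-rolled base-10 loop, and the names follow the email's proposal rather than any committed API. The backend wrapper is omitted since `ereport` cannot compile standalone.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef enum
{
    STRTOINT_OK,
    STRTOINT_OVERFLOW,
    STRTOINT_SYNTAX_ERROR
} strtoint_error;

/*
 * Shared core: errors are reported only through the return code, so the
 * same code can live in src/common/ and be used by both frontend and
 * backend without any #ifdef FRONTEND business.
 */
static strtoint_error
pg_strtoint64_sketch(const char *str, int64_t *result)
{
    char       *end;
    long long   val;

    errno = 0;
    val = strtoll(str, &end, 10);
    if (end == str || *end != '\0')
        return STRTOINT_SYNTAX_ERROR;
    if (errno == ERANGE)
        return STRTOINT_OVERFLOW;
    *result = (int64_t) val;
    return STRTOINT_OK;
}

/*
 * Frontend flavor: print to stderr and return false on error.  A backend
 * flavor would instead ereport(ERROR, ...) on the non-OK cases.
 */
static bool
pg_strtoint64_err(const char *str, int64_t *result)
{
    switch (pg_strtoint64_sketch(str, result))
    {
        case STRTOINT_OK:
            return true;
        case STRTOINT_OVERFLOW:
            fprintf(stderr, "value \"%s\" is out of range for int64\n", str);
            return false;
        default:
            fprintf(stderr, "invalid integer: \"%s\"\n", str);
            return false;
    }
}
```

Each caller keeps its own error wording, which addresses Michael's point that every frontend will want its own message for each error case.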
{
"msg_contents": "On Thu, Jul 18, 2019 at 07:57:41AM +0000, Fabien COELHO wrote:\n> I'm unhappy with the function names though, feel free to improve.\n\nI would have something rather close to what you are suggesting, still\nnot exactly that because we just don't care about the error strings\ngenerated for the frontend. And my bet is that each frontend would\nlike to have their own error message depending on the error case.\n\nFWIW, I had a similar experience with pg_strong_random() not so long\nago, which required a backend-specific handling because the fallback\nrandom implementation needed some tweaks at postmaster startup (that's\nwhy we have an alias pg_backend_random in include/port.h). So I would\nrecommend the following, roughly:\n- One set of functions in src/port/ which return the status code for\nthe error handling, without any error reporting in it to avoid any\nifdef FRONTEND business, which have a generic name pg_strto[u]intXX\n(XX = {16,32,64}). And have all that in a new, separate file, say\nstrtoint.c?\n- One set of functions for the backend, called pg_stro[u]intXX_backend\nor pg_backend_stro[u]intXX which can take as extra argument error_ok,\ncalling the portions in src/port/, and move those functions in a new\nfile prefixed with \"backend_\" in src/backend/utils/misc/ with a name\nconsistent with the one in src/port/ (with the previous naming that\nwould be backend_strtoint.c)\n- We also need the unsigned-specific equivalents of\npg_mul_s64_overflow and such, so I would suggest putting that in a new\nheader, simply uint.h. If I finish by committing this stuff, I would\nhandle that in a separate commit.\n--\nMichael",
"msg_date": "Fri, 19 Jul 2019 12:21:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-19 12:21:27 +0900, Michael Paquier wrote:\n> On Thu, Jul 18, 2019 at 07:57:41AM +0000, Fabien COELHO wrote:\n> > I'm unhappy with the function names though, feel free to improve.\n> \n> I would have something rather close to what you are suggesting, still\n> not exactly that because we just don't care about the error strings\n> generated for the frontend. And my bet is that each frontend would\n> like to have their own error message depending on the error case.\n\nYea, the error messages pgbench is currently generating, for example,\ndon't make a lot of sense.\n\n> FWIW, I had a similar experience with pg_strong_random() not so long\n> ago, which required a backend-specific handling because the fallback\n> random implementation needed some tweaks at postmaster startup (that's\n> why we have an alias pg_backend_random in include/port.h). So I would\n> recommend the following, roughly:\n> - One set of functions in src/port/ which return the status code for\n> the error handling, without any error reporting in it to avoid any\n> ifdef FRONTEND business, which have a generic name pg_strto[u]intXX\n> (XX = {16,32,64}). And have all that in a new, separate file, say\n> strtoint.c?\n\nWhy not common? It's not a platform dependent bit. Could even be put\ninto the already existing string.c.\n\n\n> - One set of functions for the backend, called pg_stro[u]intXX_backend\n> or pg_backend_stro[u]intXX which can take as extra argument error_ok,\n> calling the portions in src/port/, and move those functions in a new\n> file prefixed with \"backend_\" in src/backend/utils/misc/ with a name\n> consistent with the one in src/port/ (with the previous naming that\n> would be backend_strtoint.c)\n\nI'm not following. What would be the point of any of this? The error_ok\nbit is unnecessary, because the function is exactly the same as the\ngeneric function. 
And the backend_ prefix would be pretty darn weird,\ngiven that that's already below src/backend.\n\n\n> - We also need the unsigned-specific equivalents of\n> pg_mul_s64_overflow and such, so I would suggest putting that in a new\n> header, simply uint.h. If I finish by committing this stuff, I would\n> handle that in a separate commit.\n\nWhy not the same header? I fail to see what we'd gain by splitting it\nup.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 18 Jul 2019 21:16:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Thu, Jul 18, 2019 at 09:16:22PM -0700, Andres Freund wrote:\n> On 2019-07-19 12:21:27 +0900, Michael Paquier wrote:\n> Why not common? It's not a platform dependent bit. Could even be put\n> into the already existing string.c.\n\nThat would be fine to me, it is not like this file is bloated now.\n\n>> - One set of functions for the backend, called pg_stro[u]intXX_backend\n>> or pg_backend_stro[u]intXX which can take as extra argument error_ok,\n>> calling the portions in src/port/, and move those functions in a new\n>> file prefixed with \"backend_\" in src/backend/utils/misc/ with a name\n>> consistent with the one in src/port/ (with the previous naming that\n>> would be backend_strtoint.c)\n> \n> I'm not following. What would be the point of any of this? The error_ok\n> bit is unnecessary, because the function is exactly the same as the\n> generic function. And the backend_ prefix would be pretty darn weird,\n> given that that's already below src/backend.\n\nDo you have a better idea of name for those functions?\n\n>> - We also need the unsigned-specific equivalents of\n>> pg_mul_s64_overflow and such, so I would suggest putting that in a new\n>> header, simply uint.h. If I finish by committing this stuff, I would\n>> handle that in a separate commit.\n> \n> Why not the same header? I fail to see what we'd gain by splitting it\n> up.\n\nNo objections to that at the end.\n\nFabien, are you planning to send an updated patch? This stuff has\nvalue.\n--\nMichael",
"msg_date": "Mon, 29 Jul 2019 10:48:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Bonjour Michaël,\n\n> Fabien, are you planning to send an updated patch? This stuff has \n> value.\n\nYep, I will try for this week.\n\n-- \nFabien.",
"msg_date": "Mon, 29 Jul 2019 07:04:09 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-29 10:48:41 +0900, Michael Paquier wrote:\n> On Thu, Jul 18, 2019 at 09:16:22PM -0700, Andres Freund wrote:\n> >> - One set of functions for the backend, called pg_stro[u]intXX_backend\n> >> or pg_backend_stro[u]intXX which can take as extra argument error_ok,\n> >> calling the portions in src/port/, and move those functions in a new\n> >> file prefixed with \"backend_\" in src/backend/utils/misc/ with a name\n> >> consistent with the one in src/port/ (with the previous naming that\n> >> would be backend_strtoint.c)\n> > \n> > I'm not following. What would be the point of any of this? The error_ok\n> > bit is unnecessary, because the function is exactly the same as the\n> > generic function. And the backend_ prefix would be pretty darn weird,\n> > given that that's already below src/backend.\n> \n> Do you have a better idea of name for those functions?\n\nI don't understand why they need any prefix. pg_strto[u]int{32,64}{,_checked}.\nThe unchecked variant would be for both frontend and backend. The checked one\neither for both frontend/backend, or just for backend. I also could live with\n_raises, _throws or such instead of _checked. Implement all of them in one\nfile in /common/, possibly hiding the ones not currently implemented for the\nfrontend.\n\nImo if _checked is implemented for both frontend/backend they'd need\ndifferent error messages. In my opinion\nout_of_range:\n\tif (!errorOK)\n\t\tfprintf(stderr,\n\t\t\t\t\"value \\\"%s\\\" is out of range for type bigint\\n\", str);\n\treturn false;\n\ninvalid_syntax:\n\tif (!errorOK)\n\t\tfprintf(stderr,\n\t\t\t\t\"invalid input syntax for type bigint: \\\"%s\\\"\\n\", str);\n\nis unsuitable for generic code. In fact, I'm doubtful that it's applicable for\nany use, except for int8in(), which makes me think it better ought to use a\nnon-checked variant, and include the errors directly. 
If we still want to\nhave _checked - which is reasonable imo - it should refer to 64bit integers or somesuch.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 28 Jul 2019 22:05:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Mon, Jul 29, 2019 at 07:04:09AM +0200, Fabien COELHO wrote:\n> Bonjour Michaël,\n> Yep, I will try for this week.\n\nPlease note that for now I have marked the patch as returned with\nfeedback as the CF is ending.\n--\nMichael",
"msg_date": "Thu, 1 Aug 2019 13:20:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Bonjour Michaël,\n\n>> Yep, I will try for this week.\n>\n> Please note that for now I have marked the patch as returned with\n> feedback as the CF is ending.\n\nOk.\n\nI have looked quickly at it, but I'm not sure that there is an agreement \nabout what should be done precisely, so the feedback is not clearly \nactionable.\n\n-- \nFabien.",
"msg_date": "Thu, 1 Aug 2019 09:00:41 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Thu, Aug 01, 2019 at 09:00:41AM +0200, Fabien COELHO wrote:\n> I have looked quickly at it, but I'm not sure that there is an agreement\n> about what should be done precisely, so the feedback is not clearly\n> actionable.\n\nPer the latest trends, it seems that the input of Andres was kind of\nthe most interesting piece. If you don't have room for it, I would\nnot mind doing the legwork myself.\n--\nMichael",
"msg_date": "Thu, 1 Aug 2019 17:23:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Michaël-san,\n\n>> I have looked quickly at it, but I'm not sure that there is an agreement\n>> about what should be done precisely, so the feedback is not clearly\n>> actionable.\n>\n> Per the latest trends, it seems that the input of Andres was kind of\n> the most interesting pieces.\n\nYes, definitely. I understood that we want in \"string.h\" something like\n(just the spirit):\n\n typedef enum {\n STRTOINT_OK, STRTOINT_RANGE_ERROR, STRTOINT_SYNTAX_ERROR\n } strtoint_status;\n\n strtoint_status pg_strtoint64(const char * str, int64 * result);\n\nHowever there is a contrary objective to have a unified interface,\nbut there also exists a:\n\n extern uint64 pg_strtouint64(const char *str, char **endptr, int base);\n\ncalled 3 times, always with base == 10. We have a similar name but a \ntotally different interface, so basically it would have to be replaced\nby something like the first interface.\n\n> If you don't have room for it, I would not mind doing the legwork \n> myself.\n\nI think that it would be quick if what is wanted is clear enough, so I can do \nit.\n\n-- \nFabien.",
"msg_date": "Thu, 1 Aug 2019 11:34:34 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
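The interface Fabien sketches above can be fleshed out as a compilable illustration. The names below follow the PG_-prefixed, upper-case convention the thread settles on later; the digit-accumulation logic is an editorial sketch (accumulating negated so that INT64_MIN parses cleanly), not code from any posted patch:

```c
#include <assert.h>
#include <ctype.h>
#include <limits.h>
#include <stdint.h>

typedef enum
{
    PG_STRTOINT_OK,
    PG_STRTOINT_RANGE_ERROR,
    PG_STRTOINT_SYNTAX_ERROR
} pg_strtoint_status;

/*
 * Convert a base-10 string to int64, reporting problems through the
 * return status rather than errno.  The accumulator is kept negative so
 * that INT64_MIN can be parsed without overflowing.
 */
static pg_strtoint_status
pg_strtoint64(const char *str, int64_t *result)
{
    const char *ptr = str;
    int64_t     acc = 0;
    int         neg = 0;

    while (isspace((unsigned char) *ptr))
        ptr++;
    if (*ptr == '-')
    {
        neg = 1;
        ptr++;
    }
    else if (*ptr == '+')
        ptr++;

    if (!isdigit((unsigned char) *ptr))
        return PG_STRTOINT_SYNTAX_ERROR;
    do
    {
        int         digit = *ptr++ - '0';

        /* would acc * 10 - digit fall below INT64_MIN? */
        if (acc < (INT64_MIN + digit) / 10)
            return PG_STRTOINT_RANGE_ERROR;
        acc = acc * 10 - digit;
    } while (isdigit((unsigned char) *ptr));

    if (*ptr != '\0')
        return PG_STRTOINT_SYNTAX_ERROR;
    if (!neg)
    {
        if (acc == INT64_MIN)
            return PG_STRTOINT_RANGE_ERROR;
        acc = -acc;
    }
    *result = acc;
    return PG_STRTOINT_OK;
}
```

Replacing the existing `pg_strtouint64(str, endptr, base)` would then mean dropping the `endptr`/`base` arguments in favor of this status-returning shape.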
{
"msg_contents": "On Thu, Aug 01, 2019 at 11:34:34AM +0200, Fabien COELHO wrote:\n> However there is a contrary objective to have a unified interface,\n> but there also exists a:\n> \n> extern uint64 pg_strtouint64(const char *str, char **endptr, int base);\n> \n> called 3 times, always with base == 10. We have a similar name but a totally\n> different interface, so basically it would have to be replaced\n> by something like the first interface.\n\nMy understanding on this one was to nuke the base argument and unify\nthe interface with our own, faster routines:\nhttps://www.postgresql.org/message-id/20190716201838.rwrd7xzbrybq7dop%40alap3.anarazel.de\n--\nMichael",
"msg_date": "Thu, 1 Aug 2019 20:47:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "\n>> extern uint64 pg_strtouint64(const char *str, char **endptr, int base);\n>>\n>> called 3 times, always with base == 10. We have a similar name but a totally\n>> different interface, so basically it would have to be replaced\n>> by something like the first interface.\n>\n> My understanding on this one was to nuke the base argument and unify\n> the interface with our own, faster routines:\n> https://www.postgresql.org/message-id/20190716201838.rwrd7xzbrybq7dop%40alap3.anarazel.de\n\nOk, so there is an agreement on reworking the unsigned function. I missed \nthis bit.\n\nSo I'll set out to write and use \"pg_strtou?int64\", i.e. 2 functions, and \nthen possibly generalize to lower sizes, 32, 16, depending on what is \nactually needed.\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 1 Aug 2019 16:48:35 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Hi Fabien,\n\nOn Thu, Aug 01, 2019 at 04:48:35PM +0200, Fabien COELHO wrote:\n> Ok, so there is an agreement on reworking the unsigned function. I missed\n> this bit.\n> \n> So I'll set out to write and use \"pg_strtou?int64\", i.e. 2 functions, and\n> then possibly generalize to lower sizes, 32, 16, depending on what is\n> actually needed.\n\nI am interested in this patch, and the next commit fest is close by.\nAre you working on an updated version? If not, do you mind if I work\non it and post a new version by the beginning of next week based on\nall the feedback gathered?\n--\nMichael",
"msg_date": "Mon, 26 Aug 2019 16:28:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Bonjour Michaël,\n\n>> So I'll set out to write and use \"pg_strtou?int64\", i.e. 2 functions, and\n>> then possibly generalize to lower sizes, 32, 16, depending on what is\n>> actually needed.\n>\n> I am interested in this patch, and the next commit fest is close by.\n> Are you working on an updated version? If not, do you mind if I work\n> on it and post a new version by the beginning of next week based on\n> all the feedback gathered?\n\nI have started to do something, and I can spend some time on that this \nweek, but I'm pretty unclear about what exactly should be done.\n\nThe error returning stuff is simple enough, but I'm unclear about what to \ndo with pg_uint64, which has a totally different signature. Should it be \naligned?\n\n-- \nFabien Coelho - CRI, MINES ParisTech",
"msg_date": "Mon, 26 Aug 2019 11:05:55 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Mon, Aug 26, 2019 at 11:05:55AM +0200, Fabien COELHO wrote:\n> I have started to do something, and I can spend some time on that this week,\n> but I'm pretty unclear about what exactly should be done.\n\nThanks.\n\n> The error returning stuff is simple enough, but I'm unclear about what to do\n> with pg_uint64, which has a totally different signature. Should it be\n> aligned?\n\nI am not sure what you mean with aligned here. If you mean\nconsistent, getting into a state where we have all functions for all\nthree sizes, signed and unsigned, would be nice.\n--\nMichael",
"msg_date": "Tue, 27 Aug 2019 13:05:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Bonjour Michaël,\n\n>> The error returning stuff is simple enough, but I'm unclear about what to do\n>> with pg_uint64, which has a totally different signature. Should it be\n>> aligned?\n>\n> I am not sure what you mean with aligned here.\n\nI meant same signature.\n\n> If you mean consistent, getting into a state where we have all functions \n> for all three sizes, signed and unsigned, would be nice.\n\nOk, I look into it.\n\n-- \nFabien.",
"msg_date": "Tue, 27 Aug 2019 08:59:18 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Bonjour Michaël,\n\n>> So I'll set out to write and use \"pg_strtou?int64\", i.e. 2 functions, and\n>> then possibly generalize to lower sizes, 32, 16, depending on what is\n>> actually needed.\n>\n> I am interested in this patch, and the next commit fest is close by.\n> Are you working on an updated version? If not, do you mind if I work\n> on it and post a new version by the beginning of next week based on\n> all the feedback gathered?\n\nHere is an updated patch for the u?int64 conversion functions.\n\nI have taken the liberty to optimize the existing int64 function by \nremoving spurious tests. I have not created uint64 specific inlined \noverflow functions.\n\nIf it looks ok, a separate patch could address the 32 & 16 versions.\n\n-- \nFabien.",
"msg_date": "Wed, 28 Aug 2019 08:51:29 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Wed, Aug 28, 2019 at 08:51:29AM +0200, Fabien COELHO wrote:\n> Here is an updated patch for the u?int64 conversion functions.\n\nThanks!\n\n> I have taken the liberty to optimize the existing int64 function by removing\n> spurious tests.\n\nWhich are?\n\n> I have not created uint64 specific inlined overflow functions.\n\nWhy? There is a comment below ;p\n\n> If it looks ok, a separate patch could address the 32 & 16 versions.\n\nI am surprised to see a negative diff actually just by doing that\n(adding the 32 and 16 parts will add much more code of course). At\nquick glance, I think that this is on the right track. Some comments\nI have on the top of my mind:\n- It would be good to have the unsigned equivalents of\npg_mul_s64_overflow, etc. These are simple enough, and per the\nfeedback from Andres they could live in common/int.h.\n- It is more consistent to use upper-case statuses in the enum\nstrtoint_status. Could it be renamed to pg_strtoint_status?\n--\nMichael",
"msg_date": "Wed, 28 Aug 2019 16:13:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Michaël,\n\n>> I have taken the liberty to optimize the existing int64 function by removing\n>> spurious tests.\n>\n> Which are?\n\n - *ptr && WHATEVER(*ptr)\n *ptr is redundant, WHATEVER yields false on '\\0', and it costs on each\n char but at the end. It might be debatable in some places, e.g. it is\n likely that there are no spaces in the string, but likely that there are\n more than one digit.\n\n If you want all/some *ptr added back, no problem.\n\n - isdigit repeated on if and following while, used if/do-while instead.\n\n>> I have not created uint64 specific inlined overflow functions.\n>\n> Why? There is a comment below ;p\n\nSee comment about comment below:-)\n\n>> If it looks ok, a separate patch could address the 32 & 16 versions.\n>\n> I am surprised to see a negative diff\n\nIs it? Long duplicate functions are factored out (this was my initial \nintent), one file is removed…\n\n> actually just by doing that (adding the 32 and 16 parts will add much \n> more code of course). At quick glance, I think that this is on the \n> right track. Some comments I have on the top of my mind:\n\n> - It would me good to have the unsigned equivalents of\n> pg_mul_s64_overflow, etc. These are simple enough,\n\nHmmm. Have you looked at the fallback cases when the corresponding \nbuiltins are not available?\n\nI'm unsure of a reliable way to detect a generic unsigned int overflow \nwithout expensive dividing back and having to care about zero…\n\nSo I was pretty happy with my two discreet, small and efficient tests.\n\n> and per the feedback from Andres they could live in common/int.h.\n\nCould be, however \"string.c\" already contains a string to int conversion \nfunction, so I put them together. Probably this function should be \nremoved in the end, though.\n\n> - It is more consistent to use upper-case statuses in the enum\n> strtoint_status.\n\nI thought of that, but first enum I found used lower case, so it did not \nseem obvious that pg style was to use upper case. Indeed, some enum \nconstants use upper cases.\n\n> Could it be renamed to pg_strtoint_status?\n\nSure. I also prefixed the enum constants for consistency.\n\nAttached patch uses a prefix and uppers constants. Waiting for further \ninput about other comments.\n\n-- \nFabien.",
"msg_date": "Wed, 28 Aug 2019 09:50:44 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
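The two division-free tests Fabien alludes to can be written out as follows. This is an editorial sketch of the idea only — wrap-around detection for addition, and a bound check against the constant base 10 for digit accumulation — not the code attached to any message:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Unsigned addition overflows exactly when the result wraps below an
 * operand; no division is involved. */
static bool
add_u64_overflows(uint64_t a, uint64_t b, uint64_t *result)
{
    uint64_t    res = a + b;

    if (res < a)
        return true;
    *result = res;
    return false;
}

/* acc * 10 + digit overflows exactly when acc exceeds
 * (UINT64_MAX - digit) / 10.  The divisor is the compile-time constant
 * 10, which the compiler strength-reduces to a multiply, unlike a
 * runtime division by an arbitrary value. */
static bool
accum_digit_u64(uint64_t *acc, unsigned int digit)
{
    if (*acc > (UINT64_MAX - digit) / 10)
        return true;
    *acc = *acc * 10 + digit;
    return false;
}
```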
{
"msg_contents": "On Wed, Aug 28, 2019 at 09:50:44AM +0200, Fabien COELHO wrote:\n> - *ptr && WHATEVER(*ptr)\n> *ptr is redundant, WHATEVER yields false on '\\0', and it costs on each\n> char but at the end. It might be debatable in some places, e.g. it is\n> likely that there are no spaces in the string, but likely that there are\n> more than one digit.\n\nStill this makes the checks less robust?\n\n> If you want all/some *ptr added back, no problem.\n> \n> - isdigit repeated on if and following while, used if/do-while instead.\n\nI see, you don't check twice if the first character is a digit this\nway.\n\n> Hmmm. Have you looked at the fallback cases when the corresponding builtins\n> are not available?\n>\n> I'm unsure of a reliable way to detect a generic unsigned int overflow\n> without expensive dividing back and having to care about zero…\n\nMr Freund has mentioned that here:\nhttps://www.postgresql.org/message-id/20190717184820.iqz7schxdbucmdmu@alap3.anarazel.de\n\n> So I was pretty happy with my two discreet, small and efficient tests.\n\nThat's also a matter of code and interface consistency IMHO.\n--\nMichael",
"msg_date": "Thu, 29 Aug 2019 09:22:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Bonjour Michaël,\n\n>> - *ptr && WHATEVER(*ptr)\n>> *ptr is redundant, WHATEVER yields false on '\\0', and it costs on each\n>> char but at the end. It might be debatable in some places, e.g. it is\n>> likely that there are no spaces in the string, but likely that there are\n>> more than one digit.\n>\n> Still this makes the checks less robust?\n\nI do not see any downside, but maybe I lack imagination.\n\nOn my laptop isWHATEVER is implemented through an array mapping characters \nto a bitfield saying whether each char is WHATEVER, depending on the bit. \nThis array is well defined for index 0 ('\\0').\n\nIf an implementation is based on comparisons, for isdigit it would be:\n\n c >= '0' && c <= '9'\n\nThen checking c != '\\0' is redundant with c >= '0'.\n\nGiven the way the character checks are used in the function, we do not go \nbeyond the end of the string because we only proceed further when a \ncharacter is actually recognized, else we return.\n\nSo I cannot see any robustness issue, just a few cycles saved.\n\n>> Hmmm. Have you looked at the fallback cases when the corresponding builtins\n>> are not available?\n>>\n>> I'm unsure of a reliable way to detect a generic unsigned int overflow\n>> without expensive dividing back and having to care about zero…\n>\n> Mr Freund has mentioned that here:\n> https://www.postgresql.org/message-id/20190717184820.iqz7schxdbucmdmu@alap3.anarazel.de\n\nYep, that is what I mean by expensive: there is an integer division, which \nis avoided if b is known to be 10, hence is not zero and the limit value \ncan be checked directly on the input without having to perform a division \neach time.\n\n>> So I was pretty happy with my two discreet, small and efficient tests.\n>\n> That's also a matter of code and interface consistency IMHO.\n\nPossibly.\n\nISTM that part of the motivation is to reduce costs for heavily used \nconversions, eg on COPY. Function \"scanf\" is overly expensive because it \nhas to interpret its format, and we have to check for overflows…\n\nAnyway, if we assume that the builtins exist and rely on efficient \nhardware check, maybe we do not care about the fallback cases which would \njust be slow but never executed.\n\nNote that all this is moot, as all instances of string to uint64 \nconversion do not check for errors.\n\nAttached v7 does create uint64 overflow inline functions. The stuff yet is \nnot moved to \"common/int.c\", a file which does not exist, even if \n\"common/int.h\" does.\n\n-- \nFabien.",
"msg_date": "Thu, 29 Aug 2019 08:14:54 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Thu, Aug 29, 2019 at 08:14:54AM +0200, Fabien COELHO wrote:\n> Attached v7 does create uint64 overflow inline functions. The stuff yet is\n> not moved to \"common/int.c\", a file which does not exists, even if\n> \"common/int.h\" does.\n\nThanks for the new version. I have begun reviewing your patch, and\nattached is a first cut that I would like to commit separately which\nadds all the compatibility overflow routines to int.h for uint16,\nuint32 and uint64 with all the fallback implementations (int128-based\nmethod added as well if available). I have also grouped at the top of\nthe file the comments about each routine's return policy to avoid\nduplication. For the fallback implementations of uint64 using int128,\nI think that it is cleaner to use uint128 so as there is no need to\ncheck after negative results for multiplications, and I have made the\nvarious expressions consistent for each size.\n\nAttached is a small module called \"overflow\" with various regression\ntests that I used to check each implementation. I don't propose that\nfor commit as I am not sure if that's worth the extra CPU, so let's\nconsider it as a toy for now.\n\nWhat do you think?\n--\nMichael",
"msg_date": "Fri, 30 Aug 2019 16:34:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
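For context, the two fallback styles being compared for uint64 multiplication can be sketched as below. This is illustrative, not the committed common/int.h code; `__SIZEOF_INT128__` stands in here for PostgreSQL's HAVE_INT128 configure test:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#if defined(__SIZEOF_INT128__)
/* Widen to unsigned 128 bits: the full product is exact, so overflow is
 * a simple range check, and using uint128 rather than int128 avoids any
 * need to check for negative results afterwards. */
static bool
mul_u64_overflows(uint64_t a, uint64_t b, uint64_t *result)
{
    unsigned __int128 res = (unsigned __int128) a * b;

    if (res > UINT64_MAX)
        return true;
    *result = (uint64_t) res;
    return false;
}
#else
/* Portable variant: a runtime 64-bit division, which the benchmarks
 * quoted elsewhere in the thread show to be much slower. */
static bool
mul_u64_overflows(uint64_t a, uint64_t b, uint64_t *result)
{
    if (a != 0 && b > UINT64_MAX / a)
        return true;
    *result = a * b;
    return false;
}
#endif
```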
{
"msg_contents": "Michaël,\n\n> attached is a first cut that I would like to commit separately which \n> adds all the compatibility overflow routines to int.h for uint16, uint32 \n> and uint64 with all the fallback implementations (int128-based method \n> added as well if available). I have also grouped at the top of the file \n> the comments about each routine's return policy to avoid duplication. \n> For the fallback implementations of uint64 using int128, I think that it \n> is cleaner to use uint128 so as there is no need to check after negative \n> results for multiplications, and I have made the various expressions \n> consistent for each size.\n\nPatch applies cleanly, compiles, \"make check\" ok, but the added functions \nare not used (yet).\n\nI think that factoring out comments is a good move.\n\nFor symmetry and efficiency, ISTM that uint16 mul overflow could use \nuint32 and uint32 could use uint64, and the division-based method be \ndropped in these cases.\n\nMaybe I would add a comment before each new section telling about the \ntype, eg:\n\n /*\n * UINT16\n */\n add/sub/mul uint16 functions…\n\nI would tend to commit working solutions per type rather than by \ninstallment, so that at least all added functions are actually used \nsomewhere, but it does not matter much, really.\n\nI was unsure that having int128 implies uint128 availability, so I did not \nuse it.\n\nThe checking extension is funny, but ISTM that these checks should be (are \nalready?) included in some standard sql test, which will test the macros \nfrom direct SQL operations:\n\n fabien=# SELECT INT2 '1234512434343';\n ERROR: value \"1234512434343\" is out of range for type smallint\n\nWell, a quick look at \"src/test/regress/sql/int2.sql\" suggests that\nthere the existing tests should be extended… :-(\n\n-- \nFabien.",
"msg_date": "Fri, 30 Aug 2019 10:06:11 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Fri, Aug 30, 2019 at 10:06:11AM +0200, Fabien COELHO wrote:\n> Patch applies cleanly, compiles, \"make check\" ok, but the added functions\n> are not used (yet).\n\nThanks.\n\n> I think that factoring out comments is a good move.\n> \n> For symmetry and efficiency, ISTM that uint16 mul overflow could use uint32\n> and uint32 could use uint64, and the division-based method be dropped in\n> these cases.\n\nYes, the division would be worse than the other. What do you think\nabout using the previous module I sent and measure how long it takes\nto evaluate the overflows in some specific cases N times in loops?\n\n> Maybe I would add a comment before each new section telling about the type,\n> eg:\n> \n> /*\n> * UINT16\n> */\n> add/sub/mul uint16 functions.\n\nLet's do as you suggest here.\n\n> I would tend to commit working solutions per type rather than by\n> installment, so that at least all added functions are actually used\n> somewhere, but it does not matter much, really.\n\nI prefer by section, with testing dedicated to each part properly\ndone so as we can move to the next parts.\n\n> I was unsure that having int128 implies uint128 availability, so I did not\n> use it.\n\nThe recent Ryu-floating point work has begun using them (see f2s.c and\nd2s.c).\n\n> The checking extension is funny, but ISTM that these checks should be (are\n> already?) included in some standard sql test, which will test the macros\n> from direct SQL operations:\n\nSure. But not for the unsigned part as there are no equivalent\nin-core data types, still it is possible to trick things with signed\ninteger arguments. I found my toy useful to test all\nimplementations consistently.\n\n> fabien=# SELECT INT2 '1234512434343';\n> ERROR: value \"1234512434343\" is out of range for type smallint\n> \n> Well, a quick look at \"src/test/regress/sql/int2.sql\" suggests that\n> there the existing tests should be extended… :-(\n\nWe can tackle that separately. -32768 is perfectly legal for\nsmallint, and the test is wrong here.\n--\nMichael",
"msg_date": "Fri, 30 Aug 2019 22:44:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Michaël,\n\n>> For symmetry and efficiency, ISTM that uint16 mul overflow could use uint32\n>> and uint32 could use uint64, and the division-based method be dropped in\n>> these cases.\n>\n> Yes, the division would be worse than the other. What do you think\n> about using the previous module I sent and measure how long it takes\n> to evaluate the overflows in some specific cases N times in loops?\n\nGiven the overheads of the SQL interpreter, I'm unsure about what you \nwould measure at the SQL level. You could just write a small standalone C \nprogram to test perf and accuracy. Maybe this is what you have in mind.\n\n>> I would tend to commit working solutions per type rather than by\n>> installment, so that at least all added functions are actually used\n>> somewhere, but it does not matter much, really.\n>\n> I prefer by section, with testing dedicated to each part properly\n> done so as we can move to the next parts.\n\nThis suggests that you will test twice: once when adding the inlined \nfunctions, once when calling from SQL.\n\n>> The checking extension is funny, but ISTM that these checks should be (are\n>> already?) included in some standard sql test, which will test the macros\n>> from direct SQL operations:\n>\n> Sure. But not for the unsigned part as there are no equivalent\n> in-core data types,\n\nYep, it bothered me sometimes, but not enough to propose to add them.\n\n> still it is possible to trick things with signed integer arguments.\n\nIs it?\n\n> I found my toy useful to check test all implementations consistently.\n\nOk.\n\n>> fabien=# SELECT INT2 '1234512434343';\n>> ERROR: value \"1234512434343\" is out of range for type smallint\n>>\n>> Well, a quick look at \"src/test/regress/sql/int2.sql\" suggests that\n>> there the existing tests should be extended… :-(\n>\n> We can tackle that separately. -32768 is perfectly legal for\n> smallint, and the test is wrong here.\n\nDo you mean:\n\n sql> SELECT -32768::INT2;\n ERROR: smallint out of range\n\nThis is not a negative constant but the reverse of a positive, which is \nindeed out of range, although the error message could help more.\n\n sql> SELECT (-32768)::INT2;\n -32768 # ok\n\n sql> SELECT INT2 '-32768';\n -32768 # ok\n\n-- \nFabien.",
"msg_date": "Fri, 30 Aug 2019 16:50:21 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Fri, Aug 30, 2019 at 04:50:21PM +0200, Fabien COELHO wrote:\n> Given the overheads of the SQL interpreter, I'm unsure about what you would\n> measure at the SQL level. You could just write a small standalone C program\n> to test perf and accuracy. Maybe this is what you have in mind.\n\nAfter a certain threshold, you can see the difference anyway by paying\nonce the overhead of the function. See for example the updated module\nattached that I used for my tests.\n\nI have been testing the various implementations, and doing 2B\niterations leads to roughly the following with a non-assert, -O2\nbuild using mul_u32:\n- __builtin_sub_overflow => 5s\n- cast to uint64 => 5.9s\n- division => 8s\nYou are right as well that having symmetry with the signed methods is\nmuch better. In order to see the difference, you can just do that\nwith the extension attached, after of course hijacking int.h with some\nundefs and recompiling the backend and the module:\nselect pg_overflow_check(10000, 10000, 2000000000, 'uint32', 'mul');\n\n>> still it is possible to trick things with signed integer arguments.\n> \n> Is it?\n\nIf you pass -1 and then you can fall back to the maximum of each 16,\n32 or 64 bits for the unsigned (see the regression tests I added with\nthe module).\n\n> Do you mean:\n> \n> sql> SELECT -32768::INT2;\n> ERROR: smallint out of range\n\nYou are incorrect here, as the minus sign is ignored by the cast.\nThis works though:\n=# SELECT (-32768)::INT2;\n int2\n--------\n -32768\n(1 row)\n\nIf you look at int2.sql, we do that:\n-- largest and smallest values\nINSERT INTO INT2_TBL(f1) VALUES ('32767');\nINSERT INTO INT2_TBL(f1) VALUES ('-32767');\nThat's the part I mean is wrong, as the minimum is actually -32768,\nbut the test fails to consider that. I'll go fix that after\ndouble-checking other similar tests for int4 and int8.\n\nAttached is an updated patch to complete the work for common/int.h,\nwith the following changes:\n- Changed the multiplication methods for uint16 and uint32 to not be\ndivision-based. uint64 can use that only if int128 exists.\n- Added comments on top of each sub-sections for the types checked.\n\nAttached is also an updated version of the module I used to validate\nthis stuff. Fabien, any thoughts?\n--\nMichael",
"msg_date": "Sun, 1 Sep 2019 14:11:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
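The "cast to uint64" variant benchmarked above — compute in the next wider unsigned type, then range-check — is the style the uint16/uint32 multiplications were switched to; a minimal illustrative sketch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* uint32 multiplication checked via the widening-cast trick: the 64-bit
 * product is exact, so overflow detection reduces to one comparison. */
static bool
mul_u32_overflows(uint32_t a, uint32_t b, uint32_t *result)
{
    uint64_t    res = (uint64_t) a * b;

    if (res > UINT32_MAX)
        return true;
    *result = (uint32_t) res;
    return false;
}
```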
{
"msg_contents": "Bonjour Michaël,\n\n> You are right as well that having symmetry with the signed methods is \n> much better. In order to see the difference, you can just do that with \n> the extension attached, after of course hijacking int.h with some undefs \n> and recompiling the backend and the module: select \n> pg_overflow_check(10000, 10000, 2000000000, 'uint32', 'mul');\n\nOk.\n\n>>> still it is possible to trick things with signed integer arguments.\n>>\n>> Is it?\n>\n> If you pass -1 and then you can fall back to the maximum of each 16,\n> 32 or 64 bits for the unsigned (see the regression tests I added with\n> the module).\n\n> Attached is also an updated version of the module I used to validate\n> this stuff. Fabien, any thoughts?\n\nPatch applies cleanly, compiles, \"make check\" ok (although changes \nare untested).\n\nI would put back unlikely() on overflow tests, as there are indeed \nunlikely to occur and it may help some compilers, and cannot be harmful. \nIt also helps the code reader to know that these paths are not expected to \nbe taken often.\n\nOn reflection, I'm not sure that add_u64 and sub_u64 overflow with uint128 \nare useful. The res < a or b > a tricks should suffice, just like for u16 \nand u32 cases, and it may cost a little less anyway.\n\nI would suggest keep the overflow extension as \"contrib/overflow_test\". \nFor mul tests, I'd suggest not to try only min/max values like add/sub, \nbut also \"standard\" multiplications that overflow or not. It would be good \nif \"make check\" could be made to work, for some reason it requires \n\"installcheck\".\n\nI could not test performance directly, loops are optimized out by the \ncompiler. I added \"volatile\" on input value declarations to work around \nthat. On 2B iterations I got on my laptop:\n\n int16: mul = 2770 ms, add = 1830 ms, sub = 1826 ms\n int32: mul = 1838 ms, add = 1835 ms, sub = 1840 ms\n int64: mul = 1836 ms, add = 1834 ms, sub = 1833 ms\n\n uint16: mul = 3670 ms, add = 1830 ms, sub = 2148 ms\n uint32: mul = 2438 ms, add = 1834 ms, sub = 1831 ms\n uint64: mul = 2139 ms, add = 1841 ms, sub = 1882 ms\n\nWhy int16 mul, uint* mul and uint16 sub are bad is unclear.\n\nWith fallback code triggered with:\n\n #undef HAVE__BUILTIN_OP_OVERFLOW\n\n int16: mul = 1433 ms, add = 1424 ms, sub = 1254 ms\n int32: mul = 1433 ms, add = 1425 ms, sub = 1443 ms\n int64: mul = 1430 ms, add = 1429 ms, sub = 1441 ms\n\n uint16: mul = 1445 ms, add = 1291 ms, sub = 1265 ms\n uint32: mul = 1419 ms, add = 1434 ms, sub = 1493 ms\n uint64: mul = 1266 ms, add = 1430 ms, sub = 1440 ms\n\nFor some unclear reason, 4 tests are significantly faster.\n\nForcing further down fallback code with:\n\n #undef HAVE_INT128\n\n int64: mul = 1424 ms, add = 1429 ms, sub = 1440 ms\n uint64: mul = 24145 ms, add = 1434 ms, sub = 1435 ms\n\nThere is no doubt that dividing 64 bits integers is a very bad idea, at \nleast on my architecture!\n\nNote that checks depend on value, so actual performance may vary \ndepending on actual val1 and val2 passed. I used 10000 10000 like your \nexample.\n\nThese results are definitely depressing because the fallback code is \nnearly twice as fast as the builtin overflow detection version. For the \nrecord: gcc 7.4.0 on ubuntu 18.04 LTS. Not sure what to advise, relying on \nthe builtin should be the better idea…\n\n-- \nFabien.",
"msg_date": "Sun, 1 Sep 2019 13:57:06 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
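The "volatile" trick Fabien describes for keeping the loop alive can be sketched as a self-contained kernel (operation and iteration counts are illustrative; the timing instrumentation is left out):

```c
#include <assert.h>
#include <stdint.h>

/* Benchmark kernel: 'volatile' forces the operands to be reloaded on
 * every iteration, so the compiler can neither constant-fold the checked
 * multiplication nor hoist it out of the loop.  The accumulator is
 * returned so the work is observably used and cannot be elided. */
static uint64_t
bench_checked_mul_u32(uint32_t x, uint32_t y, long iters)
{
    volatile uint32_t a = x;
    volatile uint32_t b = y;
    uint64_t    sink = 0;

    for (long i = 0; i < iters; i++)
    {
        uint64_t    res = (uint64_t) a * b;

        if (res <= UINT32_MAX)  /* overflow branch not taken for small inputs */
            sink += (uint32_t) res;
    }
    return sink;
}
```

Timing such a kernel over a large iteration count (for instance with clock()) gives per-operation costs comparable to the NOTICE timings quoted in the thread.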
{
"msg_contents": "On Sun, Sep 01, 2019 at 01:57:06PM +0200, Fabien COELHO wrote:\n> I would put back unlikely() on overflow tests, as there are indeed unlikely\n> to occur and it may help some compilers, and cannot be harmful. It also\n> helps the code reader to know that these path are not expected to be taken\n> often.\n\nHm. I don't agree about that part, and the original signed portions\ndon't do that. I think that we should let the callers of the routines\ndecide if a problem is unlikely to happen or not as we do now.\n\n> On reflection, I'm not sure that add_u64 and sub_u64 overflow with uint128\n> are useful. The res < a or b > a tricks should suffice, just like for u16\n> and u32 cases, and it may cost a little less anyway.\n\nActually, I agree and this is something I can see as well with some\nextra measurements. mul_u64 without int128 is twice slower, while\nadd_u64 and sub_u64 are 15~20% faster.\n\n> I would suggest keep the overflow extension as \"contrib/overflow_test\". For\n> mul tests, I'd suggest not to try only min/max values like add/sub, but also\n> \"standard\" multiplications that overflow or not. It would be good if \"make\n> check\" could be made to work\", for some reason it requires \"installcheck\".\n\nAny extensions out of core can only work with \"installcheck\", and\ncheck is not supported (see pgxs.mk). I am still not convinced that\nthis module is worth the extra cycles to justify its existence\nthough.\n\n> There is no doubt that dividing 64 bits integers is a very bad idea, at\n> least on my architecture!\n\nThat's surprising. I cannot reproduce that. Are you sure that you\ndidn't just undefine HAVE_INT128? This would cause\nHAVE__BUILTIN_OP_OVERFLOW to still be active in all the code paths.\nHere are a couple of results from my side with this query, FWIW, and\nsome numbers for all the compile flags (-O2 used):\nselect pg_overflow_check(10000, 10000, 2000000000, 'XXX', 'XXX');\n1) uint16:\n1-1) mul:\n- built-in: 5.5s\n- fallback: 5.5s\n1-2) sub:\n- built-in: 5.3s\n- fallback: 5.4s\n1-3) add:\n- built-in: 5.3s\n- fallback: 6.2s\n2) uint32:\n2-1) mul:\n- built-in: 5.1s\n- fallback: 5.9s\n2-2) sub:\n- built-in: 5.2s\n- fallback: 5.4s\n2-3) add:\n- built-in: 5.2s\n- fallback: 6.2s\n2) uint64:\n2-1) mul:\n- built-in: 5.1s\n- fallback (with uint128): 8.0s\n- fallback (without uint128): 18.1s\n2-2) sub:\n- built-in: 5.2s\n- fallback (with uint128): 7.1s\n- fallback (without uint128): 5.5s\n2-3) add:\n- built-in: 5.2s\n- fallback (with uint128): 7.1s\n- fallback (without uint128): 6.3s\n\nSo, the built-in option is always faster, and keeping the int128 path\nif available for the multiplication makes sense, but not for the\nsubtraction and the addition. I am wondering if we should review\nfurther the signed part for add and sub, but I'd rather not touch it\nin this patch.\n\n> Note that checks depends on value, so actual performance may vary depending\n> on actual val1 and val2 passed. I used 10000 10000 like your example.\n\nSure. Still that offers helpful hints as we do the same operations\nfor all code paths the same number of times.\n\nIf you have done any changes on my previous patch, or if you have a\nscript to share I could use to attempt to reproduce your results, I\nwould be happy to do so.\n\nSo, do you have more comments?\n--\nMichael",
"msg_date": "Sun, 1 Sep 2019 22:10:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Michaël,\n\n>> I would put back unlikely() on overflow tests, as there are indeed unlikely\n>> to occur and it may help some compilers, and cannot be harmful. It also\n>> helps the code reader to know that these path are not expected to be taken\n>> often.\n>\n> Hm. I don't agree about that part, and the original signed portions\n> don't do that. I think that we should let the callers of the routines\n> decide if a problem is unlikely to happen or not as we do now.\n\nHmmm. Maybe inlining propagates them, but otherwise they make sense from a \ncompiler perspective.\n\n> I am still not convinced that this module is worth the extra cycles to \n> justify its existence though.\n\nThey allow to quickly do performance tests, for me it is useful to keep it \naround, but you are the committer, you do as you feel.\n\n>> [...]\n>> There is no doubt that dividing 64 bits integers is a very bad idea, at\n>> least on my architecture!\n>\n> That's surprising. I cannot reproduce that.\n\nIt seems to me that somehow you can, you have a 5 to 18 seconds drop \nbelow. There are actual reasons why some processors are more expensive \nthan others, it is not just marketing:-)\n\n> 2-1) mul:\n> - built-in: 5.1s\n> - fallback (with uint128): 8.0s\n> - fallback (without uint128): 18.1s\n\n> So, the built-in option is always faster, and keeping the int128 path\n> if available for the multiplication makes sense, but not for the\n> subtraction and the addition.\n\nYep.\n\n> I am wondering if we should review further the signed part for add and \n> sub, but I'd rather not touch it in this patch.\n\nThe signed overflows are trickier even, I have not paid attention to the \nfallback code. I agree that it is better left untouched for now.\n\n> If you have done any changes on my previous patch, or if you have a\n> script to share I could use to attempt to reproduce your results, I\n> would be happy to do so.\n\nHmmm. I did manual tests really. Attached a psql script replicating them.\n\n # with builtin overflow detection\n sh> psql < oc.sql\n NOTICE: int 16 mul: 00:00:02.747269 # slow\n NOTICE: int 16 add: 00:00:01.83281\n NOTICE: int 16 sub: 00:00:01.8501\n NOTICE: uint 16 mul: 00:00:03.68362 # slower\n NOTICE: uint 16 add: 00:00:01.835294\n NOTICE: uint 16 sub: 00:00:02.136895 # slow\n NOTICE: int 32 mul: 00:00:01.828065\n NOTICE: int 32 add: 00:00:01.840269\n NOTICE: int 32 sub: 00:00:01.843557\n NOTICE: uint 32 mul: 00:00:02.447052 # slow\n NOTICE: uint 32 add: 00:00:01.849899\n NOTICE: uint 32 sub: 00:00:01.840773\n NOTICE: int 64 mul: 00:00:01.839051\n NOTICE: int 64 add: 00:00:01.839065\n NOTICE: int 64 sub: 00:00:01.838599\n NOTICE: uint 64 mul: 00:00:02.161346 # slow\n NOTICE: uint 64 add: 00:00:01.839404\n NOTICE: uint 64 sub: 00:00:01.838549\n DO\n\n> So, do you have more comments?\n\nNo other comments.\n\n-- \nFabien.",
"msg_date": "Sun, 1 Sep 2019 20:07:06 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Sun, Sep 01, 2019 at 08:07:06PM +0200, Fabien COELHO wrote:\n> They allow to quickly do performance tests, for me it is useful to keep it\n> around, but you are the committer, you do as you feel.\n\nIf there are more voices for having that in core, we could consider\nit. For now I have added that into my own plugin repository with all\nthe functions discussed on this thread:\nhttps://github.com/michaelpq/pg_plugins/\n\n> The signed overflows are trickier even, I have not paid attention to the\n> fallback code. I agree that it is better left untouched for know.\n\nThanks.\n\n> Hmmm. I did manual tests really. Attached a psql script replicating them.\n> \n> # with builtin overflow detection\n> sh> psql < oc.sql\n> NOTICE: int 16 mul: 00:00:02.747269 # slow\n> NOTICE: int 16 add: 00:00:01.83281\n> NOTICE: int 16 sub: 00:00:01.8501\n> NOTICE: uint 16 mul: 00:00:03.68362 # slower\n> NOTICE: uint 16 add: 00:00:01.835294\n> NOTICE: uint 16 sub: 00:00:02.136895 # slow\n> NOTICE: int 32 mul: 00:00:01.828065\n> NOTICE: int 32 add: 00:00:01.840269\n> NOTICE: int 32 sub: 00:00:01.843557\n> NOTICE: uint 32 mul: 00:00:02.447052 # slow\n> NOTICE: uint 32 add: 00:00:01.849899\n> NOTICE: uint 32 sub: 00:00:01.840773\n> NOTICE: int 64 mul: 00:00:01.839051\n> NOTICE: int 64 add: 00:00:01.839065\n> NOTICE: int 64 sub: 00:00:01.838599\n> NOTICE: uint 64 mul: 00:00:02.161346 # slow\n> NOTICE: uint 64 add: 00:00:01.839404\n> NOTICE: uint 64 sub: 00:00:01.838549\n\nActually that's much faster than a single core on my debian SID with\ngcc 9.2.1.\n\nHere are more results from me:\n Built-in undef Built-in\nint16 mul 00:00:05.425207 00:00:05.634417\nint16 add 00:00:05.389738 00:00:06.38885 \nint16 sub 00:00:05.446529 00:00:06.39569 \nuint16 mul 00:00:05.499066 00:00:05.541617\nuint16 add 00:00:05.281622 00:00:06.252511\nuint16 sub 00:00:05.366424 00:00:05.457148\nint32 mul 00:00:05.121209 00:00:06.154989\nint32 add 00:00:05.228722 00:00:06.344721\nint32 sub 
00:00:05.237594 00:00:06.323543\nuint32 mul 00:00:05.126339 00:00:05.921738\nuint32 add 00:00:05.212085 00:00:06.183031\nuint32 sub 00:00:05.201884 00:00:05.363667\nint64 mul 00:00:05.136129 00:00:06.148101\nint64 add 00:00:05.200201 00:00:06.329091\nint64 sub 00:00:05.218028 00:00:06.313114\nuint64 mul 00:00:05.444733 00:00:08.089742\nuint64 add 00:00:05.603978 00:00:06.377753\nuint64 sub 00:00:05.544838 00:00:05.490873\n\nThis part has been committed, now let's move to the next parts of your\npatch.\n--\nMichael",
"msg_date": "Mon, 2 Sep 2019 09:51:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "> This part has been committed, now let's move to the next parts of your\n> patch.\n\nAttached a rebased version which implements the int64/uint64 stuff. It is \nbasically the previous patch without the overflow inlined functions.\n\n-- \nFabien Coelho - CRI, MINES ParisTech",
"msg_date": "Tue, 3 Sep 2019 20:10:37 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Tue, Sep 03, 2019 at 08:10:37PM +0200, Fabien COELHO wrote:\n> Attached a rebased version which implements the int64/uint64 stuff. It is\n> basically the previous patch without the overflow inlined functions.\n\n- if (!strtoint64(yytext, true, &yylval->ival))\n+ if (unlikely(pg_strtoint64(yytext, &yylval->ival) != PG_STRTOINT_OK))\nIt seems to me that we should have unlikely() only within\npg_strtoint64(), pg_strtouint64(), etc.\n\n- /* skip leading spaces; cast is consistent with strtoint64 */\n- while (*ptr && isspace((unsigned char) *ptr))\n+ /* skip leading spaces; cast is consistent with pg_strtoint64 */\n+ while (isspace((unsigned char) *ptr))\n ptr++;\nWhat do you think about splitting this part in two? I would suggest\nto do the refactoring in a first patch, and the consider all the\noptimizations for the routines you have in mind afterwards.\n\nI think that we don't actually need is_an_int() and str2int64() at all\nin pgbench.c as we could just check for the return code of\npg_strtoint64() and switch to the double parsing only if we don't have\nPG_STRTOINT_OK.\n\nscanint8() only has one caller in the backend with your patch, and we\ndon't check after its return result in int8.c, so I would suggest to\nmove the error handling directly in int8in() and to remove scanint8().\n\nI think that we should also introduce the [u]int[16|32] flavors and\nexpand them in the code in a single patch, in a way consistent with\nwhat you have done for int64/uint64 as the state that we reach with\nthe patch is kind of weird as there are existing versions numutils.c.\n\nHave you done some performance testing of the patch? The COPY\nbulkload is a good example of that:\nhttps://www.postgresql.org/message-id/20190717181428.krqpmduejbqr4m6g%40alap3.anarazel.de\n--\nMichael",
"msg_date": "Wed, 4 Sep 2019 17:02:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-03 20:10:37 +0200, Fabien COELHO wrote:\n> @@ -113,7 +113,7 @@ parse_output_parameters(List *options, uint32 *protocol_version,\n> \t\t\t\t\t\t errmsg(\"conflicting or redundant options\")));\n> \t\t\tprotocol_version_given = true;\n>\n> -\t\t\tif (!scanint8(strVal(defel->arg), true, &parsed))\n> +\t\t\tif (unlikely(pg_strtoint64(strVal(defel->arg), &parsed) != PG_STRTOINT_OK))\n> \t\t\t\tereport(ERROR,\n> \t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> \t\t\t\t\t\t errmsg(\"invalid proto_version\")));\n\nUnexcited about adding unlikely() to any place that's not a hot path -\nwhich this certainly is not.\n\nBut I also wonder if we shouldn't just put the branch likelihood of\npg_strtoint64 not failing into one central location. E.g. something like\n\nstatic inline pg_strtoint_status\npg_strtoint64(const char *str, int64 *result)\n{\n pg_strtoint_status status;\n\n status = pg_strtoint64_impl(str, result);\n\n likely(status == PG_STRTOINT_OK);\n\n return status;\n}\n\nI've not tested this, but IIRC that should be sufficient to propagate\nthat knowledge up to callers.\n\n\n> +\tif (likely(stat == PG_STRTOINT_OK))\n> +\t\treturn true;\n> +\telse if (stat == PG_STRTOINT_RANGE_ERROR)\n> \t{\n> -\t\t/* could fail if input is most negative number */\n> -\t\tif (unlikely(tmp == PG_INT64_MIN))\n> -\t\t\tgoto out_of_range;\n> -\t\ttmp = -tmp;\n> -\t}\n> -\n> -\t*result = tmp;\n> -\treturn true;\n> -\n> -out_of_range:\n> -\tif (!errorOK)\n> \t\tereport(ERROR,\n> \t\t\t\t(errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n> \t\t\t\t errmsg(\"value \\\"%s\\\" is out of range for type %s\",\n> \t\t\t\t\t\tstr, \"bigint\")));\n> -\treturn false;\n> -\n> -invalid_syntax:\n> -\tif (!errorOK)\n> +\t\treturn false;\n> +\t}\n> +\telse if (stat == PG_STRTOINT_SYNTAX_ERROR)\n> +\t{\n> \t\tereport(ERROR,\n> \t\t\t\t(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n> \t\t\t\t errmsg(\"invalid input syntax for type %s: \\\"%s\\\"\",\n> 
\t\t\t\t\t\t\"bigint\", str)));\n> -\treturn false;\n> +\t\treturn false;\n> +\t}\n> +\telse\n> +\t\t/* cannot get here */\n> +\t\tAssert(0);\n\nI'd write this as a switch over the enum - that way we'll get a\ncompile-time warning if we're not handling a value.\n\n\n> +static bool\n> +str2int64(const char *str, int64 *val)\n> +{\n> +\tpg_strtoint_status\t\tstat = pg_strtoint64(str, val);\n> +\n\nI find it weird to have a wrapper that's named 'str2...' that then calls\n'strto' to implement itself.\n\n\n> +/*\n> + * pg_strtoint64 -- convert a string to 64-bit integer\n> + *\n> + * The function returns if the conversion failed, or\n> + * \"*result\" is set to the result.\n> + */\n> +pg_strtoint_status\n> +pg_strtoint64(const char *str, int64 *result)\n> +{\n> +\tconst char *ptr = str;\n> +\tint64\t\ttmp = 0;\n> +\tbool\t\tneg = false;\n> +\n> +\t/*\n> +\t * Do our own scan, rather than relying on sscanf which might be broken\n> +\t * for long long.\n> +\t *\n> +\t * As INT64_MIN can't be stored as a positive 64 bit integer, accumulate\n> +\t * value as a negative number.\n> +\t */\n> +\n> +\t/* skip leading spaces */\n> +\twhile (isspace((unsigned char) *ptr))\n> +\t\tptr++;\n> +\n> +\t/* handle sign */\n> +\tif (*ptr == '-')\n> +\t{\n> +\t\tptr++;\n> +\t\tneg = true;\n> +\t}\n> +\telse if (*ptr == '+')\n> +\t\tptr++;\n> +\n> +\t/* require at least one digit */\n> +\tif (unlikely(!isdigit((unsigned char) *ptr)))\n> +\t\treturn PG_STRTOINT_SYNTAX_ERROR;\n> +\n> +\t/* process digits, we know that we have one ahead */\n> +\tdo\n> +\t{\n> +\t\tint64\t\tdigit = (*ptr++ - '0');\n> +\n> +\t\tif (unlikely(pg_mul_s64_overflow(tmp, 10, &tmp)) ||\n> +\t\t\tunlikely(pg_sub_s64_overflow(tmp, digit, &tmp)))\n> +\t\t\treturn PG_STRTOINT_RANGE_ERROR;\n> +\t}\n> +\twhile (isdigit((unsigned char) *ptr));\n\nHm. 
If we're concerned about the cost of isdigit etc - and I think\nthat's reasonable, after looking at their implementation in glibc (it\nperforms a lookup in a global table to handle potential locale changes)\n- I think we ought to just not use the provided isdigit, and probably\nnot isspace either. This code effectively relies on the input being\nascii anyway, so we don't need any locale specific behaviour afaict\n(we'd e.g. get wrong results if isdigit() returned true for something\nother than the ascii chars).\n\nTo me the generated code looks considerably better if I use something\nlike\n\nstatic inline bool\npg_isdigit(char c)\n{\n\treturn c >= '0' && c <= '9';\n}\n\nstatic inline bool\npg_isspace(char c)\n{\n\treturn c == ' ';\n}\n\n(if we were to actually go for this, we'd probably want to add some docs\nthat we don't expect EOF, or make the code safe against that).\n\nI've not benchmarked that, but I'd be surprised if it didn't improve\nmatters.\n\nAnd once coded using the above, there's no point in the do/while\nconversion imo, as any compiler can trivially optimize redundant checks.\n\n\n> +/*\n> + * pg_strtouint64 -- convert a string to unsigned 64-bit integer\n> + *\n> + * The function returns if the conversion failed, or\n> + * \"*result\" is set to the result.\n> + */\n> +pg_strtoint_status\n> +pg_strtouint64(const char *str, uint64 *result)\n> +{\n> +\tconst char *ptr = str;\n> +\tuint64\t\ttmp = 0;\n> +\n> +\t/* skip leading spaces */\n> +\twhile (isspace((unsigned char) *ptr))\n> +\t\tptr++;\n> +\n> +\t/* handle sign */\n> +\tif (*ptr == '+')\n> +\t\tptr++;\n> +\telse if (unlikely(*ptr == '-'))\n> +\t\treturn PG_STRTOINT_SYNTAX_ERROR;\n\nHm. Seems like that should return PG_STRTOINT_RANGE_ERROR?\n\n\n> +typedef enum {\n\nPlease don't define anonymous types (the enum itself, which now is only\nreachable via the typedef). Also, there's a missing newline here.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 4 Sep 2019 02:08:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Bonjour Michaᅵl,\n\n>> Attached a rebased version which implements the int64/uint64 stuff. It is\n>> basically the previous patch without the overflow inlined functions.\n>\n> - if (!strtoint64(yytext, true, &yylval->ival))\n> + if (unlikely(pg_strtoint64(yytext, &yylval->ival) != PG_STRTOINT_OK))\n\n> It seems to me that we should have unlikely() only within\n> pg_strtoint64(), pg_strtouint64(), etc.\n\n From a compiler perspective, the (un)likely tip is potentially useful on \nany test. We know when parsing a that it is very unlikely that the string \nconversion would fail, so we can tell that, so that the compiler knows \nwhich branch it should optimize first.\n\nYou can argue against that if the functions are inlined, because maybe the \ncompiler would propagate the information, but for distinct functions \ncompiled separately the information is useful at each level.\n\n> - /* skip leading spaces; cast is consistent with strtoint64 */\n> - while (*ptr && isspace((unsigned char) *ptr))\n\n> + /* skip leading spaces; cast is consistent with pg_strtoint64 */\n> + while (isspace((unsigned char) *ptr))\n> ptr++;\n\n> What do you think about splitting this part in two? I would suggest\n> to do the refactoring in a first patch, and the consider all the\n> optimizations for the routines you have in mind afterwards.\n\nI would not bother.\n\n> I think that we don't actually need is_an_int() and str2int64() at all\n> in pgbench.c as we could just check for the return code of\n> pg_strtoint64() and switch to the double parsing only if we don't have\n> PG_STRTOINT_OK.\n\nYep, you are right, now that the conversion functions does not error out a \nmessage, its failure can be used as a test.\n\nThe version attached changes slightly the semantics, because on int \noverflows a double conversion will be attempted instead of erroring. 
I do \nnot think that it is worth the effort of preserving the previous semantics \nof erroring.\n\n> scanint8() only has one caller in the backend with your patch, and we\n> don't check its return result in int8.c, so I would suggest to\n> move the error handling directly in int8in() and to remove scanint8().\n\nOk.\n\n> I think that we should also introduce the [u]int[16|32] flavors and\n> expand them in the code in a single patch, in a way consistent with\n> what you have done for int64/uint64 as the state that we reach with\n> the patch is kind of weird as there are existing versions in numutils.c.\n\nBefore dealing with the 16/32 versions, which involve quite a significant \namount of changes, I would want a clear message that \"the 64 bit approach\" \nis the model to follow.\n\nMoreover, I'm unsure how to rename the existing pg_strtoint32 and others\nwhich call ereport, if the name is used for the common error returning \nversion.\n\n> Have you done some performance testing of the patch? The COPY\n> bulkload is a good example of that:\n> https://www.postgresql.org/message-id/20190717181428.krqpmduejbqr4m6g%40alap3.anarazel.de\n\nI have done no such thing for now.\n\nI would not expect any significant performance difference when loading \nint8 things because basically scanint8 has just been renamed pg_strtoint64 \nand made global, and that is more or less all. It might be a little bit \nslower because possibly the compiler cannot inline the conversion, but on \nthe other hand, the *likely hints and removed tests may marginally help \nperformance. I think that the only way to test performance significantly \nwould be to write a specific program that loops over a conversion.\n\n-- \nFabien.",
"msg_date": "Wed, 4 Sep 2019 12:49:17 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Wed, Sep 04, 2019 at 02:08:39AM -0700, Andres Freund wrote:\n>> +static bool\n>> +str2int64(const char *str, int64 *val)\n>> +{\n>> +\tpg_strtoint_status\t\tstat = pg_strtoint64(str, val);\n>> +\n> \n> I find it weird to have a wrapper that's named 'str2...' that then calls\n> 'strto' to implement itself.\n\nIt happens that this wrapper in pgbench.c is not actually needed.\n\n> Hm. If we're concerned about the cost of isdigit etc - and I think\n> that's reasonable, after looking at their implementation in glibc (it\n> performs a lookup in a global table to handle potential locale changes)\n> - I think we ought to just not use the provided isdigit, and probably\n> not isspace either. This code effectively relies on the input being\n> ascii anyway, so we don't need any locale specific behaviour afaict\n> (we'd e.g. get wrong results if isdigit() returned true for something\n> other than the ascii chars).\n\nYeah. It seems to me that we have more optimizations that could come\nin line here, and actually we have perhaps more refactoring at hand\nwith each one of the 6 functions we'd like to add at the end. I had\nin mind about first shaping the refactoring patch, consolidating all\nthe interfaces, and then evaluate the improvements we can come up with\nas after the refactoring we'd need to update only common/string.c.\n\n> I've not benchmarked that, but I'd be surprised if it didn't improve\n> matters.\n\nI think that you are right here, there is something to gain. Looking\nat their stuff this makes use of __isctype as told by ctype/ctype.h.\n--\nMichael",
"msg_date": "Thu, 5 Sep 2019 11:50:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Wed, Sep 04, 2019 at 12:49:17PM +0200, Fabien COELHO wrote:\n> From a compiler perspective, the (un)likely tip is potentially useful on any\n> test. We know when parsing a that it is very unlikely that the string\n> conversion would fail, so we can tell that, so that the compiler knows which\n> branch it should optimize first.\n> \n> You can argue against that if the functions are inlined, because maybe the\n> compiler would propagate the information, but for distinct functions\n> compiled separately the information is useful at each level.\n\nHmm. There has been an extra recent argument on the matter here:\nhttps://www.postgresql.org/message-id/20190904090839.stp3madovtynq3px@alap3.anarazel.de\n\nI am not sure that we should tackle that as part of the first\nrefactoring though, as what we'd want is first to put all the\ninterfaces in a single place we can deal with afterwards.\n\n> Yep, you are right, now that the conversion functions does not error out a\n> message, its failure can be used as a test.\n> \n> The version attached changes slightly the semantics, because on int\n> overflows a double conversion will be attempted instead of erroring. I do\n> not think that it is worth the effort of preserving the previous semantic of\n> erroring.\n\nYes. I would move things in this direction. 
I may reconsider this\npart again with more testing but switching from one to the other is\nsimple enough so let's keep the code as you suggest for now.\n\n>> scanint8() only has one caller in the backend with your patch, and we\n>> don't check after its return result in int8.c, so I would suggest to\n>> move the error handling directly in int8in() and to remove scanint8().\n> \n> Ok.\n\nAs per the extra comments of upthread, this should use a switch\nwithout a default.\n\n> Before dealing with the 16/32 versions, which involve quite a significant\n> amount of changes, I would want a clear message that \"the 64 bit approach\"\n> is the model to follow.\n> \n> Moreover, I'm unsure how to rename the existing pg_strtoint32 and others\n> which call ereport, if the name is used for the common error returning\n> version.\n\nRight, there was this part. This brings also the point of having one\ninterface for the backend as all the error messages for the backend\nare actually the same, with the most simple name being that:\npg_strtoint(value, size, error_ok).\n\nThis then calls all the sub-routines we have in src/common/. There\nwere more suggestions here:\nhttps://www.postgresql.org/message-id/20190729050539.d7mbjabcrlv7bxc3@alap3.anarazel.de\n\n> I would not expect any significant performance difference when loading int8\n> things because basically scanint8 has just been renamed pg_strtoint64 and\n> made global, and that is more or less all. It might be a little bit slower\n> because possible the compiler cannot inline the conversion, but on the other\n> hand, the *likely hints and removed tests may marginaly help performance. I\n> think that the only way to test performance significantly would be to write\n> a specific program that loops over a conversion.\n\nI would suspect a change for pg_strtouint64().\n--\nMichael",
"msg_date": "Thu, 5 Sep 2019 15:52:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Thu, Sep 05, 2019 at 03:52:48PM +0900, Michael Paquier wrote:\n> Right, there was this part. This brings also the point of having one\n> interface for the backend as all the error messages for the backend\n> are actually the same, with the most simple name being that:\n> pg_strtoint(value, size, error_ok).\n\nI have been looking at that for the last couple of days. First, I\nhave consolidated all the error strings in a single routine like this\none, except that error_ok is not really needed if you take this\napproach: callers that don't care about failures could just call the\nset of routines in common/string.c and be done with it.\n\nAttached is an update of my little module that I used to check that\nthe refactoring was done correctly (regression tests attached), it\nalso includes a couple of routines to check the performance difference\nbetween one approach and the other, with focus on two things:\n- Is pg_strtouint64 really improved?\n- How much do we lose by moving to a common interface in the backend\nwith pg_strtoint?\n\nThe good news is that removing strtol from pg_strtouint64 really\nimproves the performance as already reported, and with one billion\ncalls in a tight loop you see a clear difference:\n=# select pg_strtouint64_old_check('10000', 1000000000);\n pg_strtouint64_old_check\n--------------------------\n 10000\n(1 row)\nTime: 15576.539 ms (00:15.577)\n=# select pg_strtouint64_new_check('10000', 1000000000);\n pg_strtouint64_new_check\n--------------------------\n 10000\n(1 row)\nTime: 8820.544 ms (00:08.821)\n\nSo the new implementation is more than 40% faster with this\nmicro-benchmark on my Debian box.\n\nThe bad news is that a pg_strtoint() wrapper is not a good idea:\n=# select pg_strtoint_check('10000', 1000000000, 4);\n pg_strtoint_check\n-------------------\n 10000\n(1 row)\nTime: 11178.101 ms (00:11.178)\n=# select pg_strtoint32_check('10000', 1000000000);\n pg_strtoint32_check\n---------------------\n 10000\n(1 
row)\nTime: 9252.894 ms (00:09.253)\nSo trying to consolidate all error messages leads to a 15% hit with\nthis test, which sucks.\n\nSo, it seems to me that if we want to have a consolidation of\neverything, we basically need to have the generic error messages for\nthe backend directly into common/string.c where the routines are\nrefactored, and I think that we should do the following based on the\npatch attached:\n- Just remove pg_strtoint(), and move the error generation logic in\ncommon/string.c.\n- Add an error_ok flag for all the pg_strto[u]int{16|32|64} routines,\nwhich generate errors only when FRONTEND is not defined.\n- Use pg_log_error() for the frontend errors.\n\nIf those errors are added everywhere, we would basically have no code\npaths in the frontend or the backend (for the unsigned part only)\nwhich use them yet. Another possibility would be to ignore the\nerror_ok flag in those cases, but that's inconsistent. My take would\nbe here to have a more generic error message, like that:\n\"value \\\"%s\\\" is out of range for [unsigned] {16|32|64}-bit integer\" \n\"invalid input syntax for [unsigned] {16|32|64}-bit integer: \\\"%s\\\"\\n\"\n\nI do not suggest to change the messages for the backend for signed\nentries though, as these are linked to the data types used.\n\nAttached is an update of my toy module, and an updated patch with what\nI have done up to now. This stuff already does a lot, so for now I\nhave not worked on the removal of strtoint() in string.c yet. We could\njust do that with a follow-up patch and make it use the new conversion\nroutines once we are sure that they are in a correct shape. As\nstrtoint() makes use of strtol(), switching to the new routines will\nbe much faster anyway...\n\nFabien, any thoughts?\n--\nMichael",
"msg_date": "Mon, 9 Sep 2019 14:28:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-09 14:28:14 +0900, Michael Paquier wrote:\n> On Thu, Sep 05, 2019 at 03:52:48PM +0900, Michael Paquier wrote:\n> > Right, there was this part. This brings also the point of having one\n> > interface for the backend as all the error messages for the backend\n> > are actually the same, with the most simple name being that:\n> > pg_strtoint(value, size, error_ok).\n\nI *VEHEMENTLY* oppose the introduction of any future pseudo-generic\nroutines taking the type width as a parameter. They're weakening the\ntype system and they're unnecessarily inefficient.\n\n\nI don't really buy that saving a few copies of a strings is worth that\nmuch. But if you really want to do that, the right approach imo would be\nto move the error reporting into a separate function. I.e. something\n\nvoid pg_attribute_noreturn()\npg_strtoint_error(pg_strtoint_status status, const char *input, const char *type)\n\nthat you'd call in small wrappers. Something like\n\nstatic inline int32\npg_strtoint32_check(const char* s)\n{\n int32 result;\n pg_strtoint_status status = pg_strtoint32(s, &result);\n\n if (unlikely(status == PG_STRTOINT_OK))\n pg_strtoint_error(status, s, \"int32\");\n return result;\n}\n\n\n> So, it seems to me that if we want to have a consolidation of\n> everything, we basically need to have the generic error messages for\n> the backend directly into common/string.c where the routines are\n> refactored, and I think that we should do the following based on the\n> patch attached:\n\n> - Just remove pg_strtoint(), and move the error generation logic in\n> common/string.c.\n\nI'm not quite sure what you mean by moving the \"error generation logic\"?\n\n\n> - Add an error_ok flag for all the pg_strto[u]int{16|32|64} routines,\n> which generates errors only for FRONTEND is not defined.\n\nI think this is a bad idea.\n\n\n> - Use pg_log_error() for the frontend errors.\n>\n> If those errors are added everywhere, we would basically have no code\n> paths 
in the frontend or the backend (for the unsigned part only)\n> which use them yet. Another possibility would be to ignore the\n> error_ok flag in those cases, but that's inconsistent.\n\nYea, ignoring it would be terrible idea.\n\n> diff --git a/src/backend/libpq/pqmq.c b/src/backend/libpq/pqmq.c\n> index a9bd47d937..593a5ef54e 100644\n> --- a/src/backend/libpq/pqmq.c\n> +++ b/src/backend/libpq/pqmq.c\n> @@ -286,10 +286,10 @@ pq_parse_errornotice(StringInfo msg, ErrorData *edata)\n> \t\t\t\tedata->hint = pstrdup(value);\n> \t\t\t\tbreak;\n> \t\t\tcase PG_DIAG_STATEMENT_POSITION:\n> -\t\t\t\tedata->cursorpos = pg_strtoint32(value);\n> +\t\t\t\tedata->cursorpos = pg_strtoint(value, 4);\n> \t\t\t\tbreak;\n\nI'd be really upset if this type of change went in.\n\n\n\n> #include \"fmgr.h\"\n> #include \"miscadmin.h\"\n> +#include \"common/string.h\"\n> #include \"nodes/extensible.h\"\n> #include \"nodes/parsenodes.h\"\n> #include \"nodes/plannodes.h\"\n> @@ -80,7 +81,7 @@\n> #define READ_UINT64_FIELD(fldname) \\\n> \ttoken = pg_strtok(&length);\t\t/* skip :fldname */ \\\n> \ttoken = pg_strtok(&length);\t\t/* get field value */ \\\n> -\tlocal_node->fldname = pg_strtouint64(token, NULL, 10)\n> +\t(void) pg_strtouint64(token, &local_node->fldname)\n\nSeems like these actually could just ought to use the error-checked\nvariants. And I think it ought to change all of\nREAD_{INT,UINT,LONG,UINT64,OID}_FIELD, rather than just redirecting one\nof them to the new routines.\n\n\n> static void pcb_error_callback(void *arg);\n> @@ -496,7 +496,7 @@ make_const(ParseState *pstate, Value *value, int location)\n>\n> \t\tcase T_Float:\n> \t\t\t/* could be an oversize integer as well as a float ... */\n> -\t\t\tif (scanint8(strVal(value), true, &val64))\n> +\t\t\tif (pg_strtoint64(strVal(value), &val64) == PG_STRTOINT_OK)\n> \t\t\t{\n> \t\t\t\t/*\n> \t\t\t\t * It might actually fit in int32. 
Probably only INT_MIN can\n\nNot for this change, but we really ought to move away from this crappy\nlogic. It's really bonkers to have T_Float represent large integers and\nfloats.\n\n\n\n> +/*\n> + * pg_strtoint16\n> + *\n> + * Convert input string to a signed 16-bit integer. Allows any number of\n> + * leading or trailing whitespace characters.\n> + *\n> + * NB: Accumulate input as a negative number, to deal with two's complement\n> + * representation of the most negative number, which can't be represented as a\n> + * positive number.\n> + *\n> + * The function returns immediately if the conversion failed with a status\n> + * value to let the caller handle the error. On success, the result is\n> + * stored in \"*result\".\n> + */\n> +pg_strtoint_status\n> +pg_strtoint16(const char *s, int16 *result)\n> +{\n> +\tconst char *ptr = s;\n> +\tint16\t\ttmp = 0;\n> +\tbool\t\tneg = false;\n> +\n> +\t/* skip leading spaces */\n> +\twhile (likely(*ptr) && isspace((unsigned char) *ptr))\n> +\t\tptr++;\n> +\n> +\t/* handle sign */\n> +\tif (*ptr == '-')\n> +\t{\n> +\t\tptr++;\n> +\t\tneg = true;\n> +\t}\n> +\telse if (*ptr == '+')\n> +\t\tptr++;\n\n> +\t/* require at least one digit */\n> +\tif (unlikely(!isdigit((unsigned char) *ptr)))\n> +\t\treturn PG_STRTOINT_SYNTAX_ERROR;\n\nWonder if there's an argument for moving this behaviour to someplace\nelse - in most cases we really don't expect whitespace, and checking for\nit is unnecessary overhead.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Sep 2019 03:17:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Mon, Sep 09, 2019 at 03:17:38AM -0700, Andres Freund wrote:\n> On 2019-09-09 14:28:14 +0900, Michael Paquier wrote:\n> I *VEHEMENTLY* oppose the introduction of any future pseudo-generic\n> routines taking the type width as a parameter. They're weakening the\n> type system and they're unnecessarily inefficient.\n\nI saw that, using the previous wrapper increases the time of one call\nfrom roughly 0.9ms to 1.1ns.\n\n> I don't really buy that saving a few copies of a strings is worth that\n> much. But if you really want to do that, the right approach imo would be\n> to move the error reporting into a separate function. I.e. something\n> \n> void pg_attribute_noreturn()\n> pg_strtoint_error(pg_strtoint_status status, const char *input, const char *type)\n> \n> that you'd call in small wrappers. Something like\n\nI am not completely sure if we should do that either anyway. Another\napproach would be to try to make the callers of the routines generate\ntheir own error messages. The errors we have now are really linked to\nthe data types we have in core for signed integers (smallint, int,\nbigint). In most cases do they really make sense (varlena.c, pqmq.c,\netc.)? And for errors which should never happen we could just use\nelog(). 
For the input functions of int2/4/8 we still need the\nexisting errors of course.\n\n>> So, it seems to me that if we want to have a consolidation of\n>> everything, we basically need to have the generic error messages for\n>> the backend directly into common/string.c where the routines are\n>> refactored, and I think that we should do the following based on the\n>> patch attached:\n> \n>> - Just remove pg_strtoint(), and move the error generation logic in\n>> common/string.c.\n> \n> I'm not quite sure what you mean by moving the \"error generation logic\"?\n\nI was referring to the error messages we have on HEAD in scanint8()\nand pg_strtoint16() for bad inputs and overflows.\n\n> Seems like these actually could just ought to use the error-checked\n> variants. And I think it ought to change all of\n> READ_{INT,UINT,LONG,UINT64,OID}_FIELD, rather than just redirecting one\n\nRight.\n\n>> static void pcb_error_callback(void *arg);\n>> @@ -496,7 +496,7 @@ make_const(ParseState *pstate, Value *value, int location)\n>>\n>> \t\tcase T_Float:\n>> \t\t\t/* could be an oversize integer as well as a float ... */\n>> -\t\t\tif (scanint8(strVal(value), true, &val64))\n>> +\t\t\tif (pg_strtoint64(strVal(value), &val64) == PG_STRTOINT_OK)\n>> \t\t\t{\n>> \t\t\t\t/*\n>> \t\t\t\t * It might actually fit in int32. Probably only INT_MIN can\n> \n> Not for this change, but we really ought to move away from this crappy\n> logic. 
It's really bonkers to have T_Float represent large integers and\n> floats.\n\nI am not sure but what are you suggesting here?\n\n>> +\t/* skip leading spaces */\n>> +\twhile (likely(*ptr) && isspace((unsigned char) *ptr))\n>> +\t\tptr++;\n>> +\n>> +\t/* handle sign */\n>> +\tif (*ptr == '-')\n>> +\t{\n>> +\t\tptr++;\n>> +\t\tneg = true;\n>> +\t}\n>> +\telse if (*ptr == '+')\n>> +\t\tptr++;\n> \n>> +\t/* require at least one digit */\n>> +\tif (unlikely(!isdigit((unsigned char) *ptr)))\n>> +\t\treturn PG_STRTOINT_SYNTAX_ERROR;\n> \n> Wonder if there's an argument for moving this behaviour to someplace\n> else - in most cases we really don't expect whitespace, and checking for\n> it is unnecessary overhead.\n\nNot sure about that. I would keep the scope of the patch simple as of\nnow, where we make sure that we have the right interface for\neverything. There are a couple of extra improvements which could be\ndone afterwards, and if we move everything in the same place that\nshould be easier to move on with more improvements. Hopefully.\n--\nMichael",
"msg_date": "Mon, 9 Sep 2019 20:57:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-09 20:57:46 +0900, Michael Paquier wrote:\n> > I don't really buy that saving a few copies of a strings is worth that\n> > much. But if you really want to do that, the right approach imo would be\n> > to move the error reporting into a separate function. I.e. something\n> > \n> > void pg_attribute_noreturn()\n> > pg_strtoint_error(pg_strtoint_status status, const char *input, const char *type)\n> > \n> > that you'd call in small wrappers. Something like\n> \n> I am not completely sure if we should do that either anyway. Another\n> approach would be to try to make the callers of the routines generate\n> their own error messages. The errors we have now are really linked to\n> the data types we have in core for signed integers (smallint, int,\n> bigint). In most cases do they really make sense (varlena.c, pqmq.c,\n> etc.)?\n\nI think there's plenty places that ought to use the checked functions,\neven if they currently don't. And typically calling in the caller will\nactually be slightly less efficient, than an out-of-line function like I\nwas proposing above, because it'll be in-line code for infrequent code.\n\nBut ISTM all of them ought to just use the C types, rather than the SQL\ntypes however. Since in the above proposal the caller determines the\ntype names, if you want a different type - like the SQL input routines -\ncan just invoke pg_strtoint_error() themselves (or just have it open\ncoded).\n\n\n> And for errors which should never happen we could just use\n> elog(). 
For the input functions of int2/4/8 we still need the\n> existing errors of course.\n\nRight, there it makes sense to continue to refer the SQL level types.\n\n\n> >> So, it seems to me that if we want to have a consolidation of\n> >> everything, we basically need to have the generic error messages for\n> >> the backend directly into common/string.c where the routines are\n> >> refactored, and I think that we should do the following based on the\n> >> patch attached:\n> > \n> >> - Just remove pg_strtoint(), and move the error generation logic in\n> >> common/string.c.\n> > \n> > I'm not quite sure what you mean by moving the \"error generation logic\"?\n> \n> I was referring to the error messages we have on HEAD in scanint8()\n> and pg_strtoint16() for bad inputs and overflows.\n\nNot the right direction imo.\n\n\n\n> >> static void pcb_error_callback(void *arg);\n> >> @@ -496,7 +496,7 @@ make_const(ParseState *pstate, Value *value, int location)\n> >>\n> >> \t\tcase T_Float:\n> >> \t\t\t/* could be an oversize integer as well as a float ... */\n> >> -\t\t\tif (scanint8(strVal(value), true, &val64))\n> >> +\t\t\tif (pg_strtoint64(strVal(value), &val64) == PG_STRTOINT_OK)\n> >> \t\t\t{\n> >> \t\t\t\t/*\n> >> \t\t\t\t * It might actually fit in int32. Probably only INT_MIN can\n> > \n> > Not for this change, but we really ought to move away from this crappy\n> > logic. It's really bonkers to have T_Float represent large integers and\n> > floats.\n> \n> I am not sure but what are you suggesting here?\n\nI'm saying that we shouldn't need the whole logic of trying to parse the\nstring as an int, and then fail to float if it's not that. 
But that it's\nnot this patchset's task to fix this.\n\n\n> >> +\t/* skip leading spaces */\n> >> +\twhile (likely(*ptr) && isspace((unsigned char) *ptr))\n> >> +\t\tptr++;\n> >> +\n> >> +\t/* handle sign */\n> >> +\tif (*ptr == '-')\n> >> +\t{\n> >> +\t\tptr++;\n> >> +\t\tneg = true;\n> >> +\t}\n> >> +\telse if (*ptr == '+')\n> >> +\t\tptr++;\n> > \n> >> +\t/* require at least one digit */\n> >> +\tif (unlikely(!isdigit((unsigned char) *ptr)))\n> >> +\t\treturn PG_STRTOINT_SYNTAX_ERROR;\n> > \n> > Wonder if there's an argument for moving this behaviour to someplace\n> > else - in most cases we really don't expect whitespace, and checking for\n> > it is unnecessary overhead.\n> \n> Not sure about that. I would keep the scope of the patch simple as of\n> now, where we make sure that we have the right interface for\n> everything. There are a couple of extra improvements which could be\n> done afterwards, and if we move everything in the same place that\n> should be easier to move on with more improvements. Hopefully.\n\nThe only reason for thinking about it now is that we'd then avoid\nchanging the API twice.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Sep 2019 05:27:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Mon, Sep 09, 2019 at 03:17:38AM -0700, Andres Freund wrote:\n> On 2019-09-09 14:28:14 +0900, Michael Paquier wrote:\n>> @@ -80,7 +81,7 @@\n>> #define READ_UINT64_FIELD(fldname) \\\n>> \ttoken = pg_strtok(&length);\t\t/* skip :fldname */ \\\n>> \ttoken = pg_strtok(&length);\t\t/* get field value */ \\\n>> -\tlocal_node->fldname = pg_strtouint64(token, NULL, 10)\n>> +\t(void) pg_strtouint64(token, &local_node->fldname)\n> \n> Seems like these actually could just ought to use the error-checked\n> variants. And I think it ought to change all of\n> READ_{INT,UINT,LONG,UINT64,OID}_FIELD, rather than just redirecting one\n> of them to the new routines.\n\nOkay for these changes, except for READ_INT_FIELD where we have short\nvariables using it as well (for example StrategyNumber) so this\ngenerates a justified warning. I think that a correct solution\nhere would be to add a new READ_SHORT_FIELD which uses pg_strtoint16.\nI am not adding that for now.\n--\nMichael",
"msg_date": "Tue, 10 Sep 2019 11:22:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Mon, Sep 09, 2019 at 05:27:04AM -0700, Andres Freund wrote:\n> On 2019-09-09 20:57:46 +0900, Michael Paquier wrote:\n> But ISTM all of them ought to just use the C types, rather than the SQL\n> types however. Since in the above proposal the caller determines the\n> type names, if you want a different type - like the SQL input routines -\n> can just invoke pg_strtoint_error() themselves (or just have it open\n> coded).\n\nYep, that was my line of thoughts.\n\n>> And for errors which should never happen we could just use\n>> elog(). For the input functions of int2/4/8 we still need the\n>> existing errors of course.\n> \n> Right, there it makes sense to continue to refer the SQL level types.\n\nActually, I found your suggestion of using a noreturn function for the\nerror reporting to be a very clean alternative. I didn't know though\nthat gcc is not able to detect that a function does not return if you\ndon't have a default in the switch for all the status codes. And this\neven if all the values of the enum for the switch are listed.\n\n> I'm saying that we shouldn't need the whole logic of trying to parse the\n> string as an int, and then fail to float if it's not that. But that it's\n> not this patchset's task to fix this.\n\nAh, sure. Agreed.\n\n>> Not sure about that. I would keep the scope of the patch simple as of\n>> now, where we make sure that we have the right interface for\n>> everything. There are a couple of extra improvements which could be\n>> done afterwards, and if we move everything in the same place that\n>> should be easier to move on with more improvements. Hopefully.\n> \n> The only reason for thinking about it now is that we'd then avoid\n> changing the API twice.\n\nWhat I think we would be looking for here is an extra argument for the\nlow-level routines to control the behavior of the function in an\nextensible way, say a bits16 for a set of flags, with one flag to\nignore checks for trailing and leading whitespace. 
This feels a bit\nover-engineered though for this purpose.\n\nAttached is an updated patch. How does it look? I have left the\nparts of readfuncs.c for now as there are more issues behind that than\ndoing a single switch, short reads are one, long reads a second. And\nthe patch already does a lot. There could be also an argument for\nhaving extra _check wrappers for the unsigned portions but these would\nbe mostly unused in the backend code, so I have left that out on\npurpose.\n\nAfter all that stuff, there are still some issues which need more\ncare, in short:\n- the T_Float conversion.\n- removal of strtoint()\n- the part for readfuncs.c\n--\nMichael",
"msg_date": "Tue, 10 Sep 2019 12:05:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Tue, Sep 10, 2019 at 12:05:25PM +0900, Michael Paquier wrote:\n> Attached is an updated patch? How does it look? I have left the\n> parts of readfuncs.c for now as there are more issues behind that than\n> doing a single switch, short reads are one, long reads a second. And\n> the patch already does a lot. There could be also an argument for\n> having extra _check wrappers for the unsigned portions but these would\n> be mostly unused in the backend code, so I have left that out on\n> purpose.\n\nI have looked at this patch again today after letting it aside a\ncouple of days, and I quite like the resulting shape of the routines.\nDoes anybody else have any comments? Would it make sense to extend\nmore the string-to-int conversion routines with a set of control flags\nto bypass the removal of leading and trailing whitespaces?\n--\nMichael",
"msg_date": "Fri, 13 Sep 2019 13:30:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On 2019-09-10 12:05:25 +0900, Michael Paquier wrote:\n> On Mon, Sep 09, 2019 at 05:27:04AM -0700, Andres Freund wrote:\n> > On 2019-09-09 20:57:46 +0900, Michael Paquier wrote:\n> > But ISTM all of them ought to just use the C types, rather than the SQL\n> > types however. Since in the above proposal the caller determines the\n> > type names, if you want a different type - like the SQL input routines -\n> > can just invoke pg_strtoint_error() themselves (or just have it open\n> > coded).\n> \n> Yep, that was my line of thoughts.\n> \n> >> And for errors which should never happen we could just use\n> >> elog(). For the input functions of int2/4/8 we still need the\n> >> existing errors of course.\n> > \n> > Right, there it makes sense to continue to refer the SQL level types.\n> \n> Actually, I found your suggestion of using a noreturn function for the\n> error reporting to be a very clean alternative. I didn't know though\n> that gcc is not able to detect that a function does not return if you\n> don't have a default in the switch for all the status codes. And this\n> even if all the values of the enum for the switch are listed.\n\nAs I proposed they'd be in different translation units, so the compiler\nwouldn't see the definition of the function, just the declaration.\n\n\n> >> Not sure about that. I would keep the scope of the patch simple as of\n> >> now, where we make sure that we have the right interface for\n> >> everything. There are a couple of extra improvements which could be\n> >> done afterwards, and if we move everything in the same place that\n> >> should be easier to move on with more improvements. 
Hopefully.\n> > \n> > The only reason for thinking about it now is that we'd then avoid\n> > changing the API twice.\n> \n> What I think we would be looking for here is an extra argument for the\n> low-level routines to control the behavior of the function in an\n> extensible way, say a bits16 for a set of flags, with one flag to\n> ignore checks for trailing and leading whitespace.\n\nThat'd probably be a bad idea, for performance reasons.\n\n\n\n> Attached is an updated patch? How does it look? I have left the\n> parts of readfuncs.c for now as there are more issues behind that than\n> doing a single switch, short reads are one, long reads a second.\n\nHm? I don't know what you mean by those issues.\n\n\n> And the patch already does a lot. There could be also an argument for\n> having extra _check wrappers for the unsigned portions but these would\n> be mostly unused in the backend code, so I have left that out on\n> purpose.\n\nI'd value consistency higher here.\n\n\n\n\n> diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c\n> index 2c0ae395ba..8e75d52b06 100644\n> --- a/src/backend/executor/spi.c\n> +++ b/src/backend/executor/spi.c\n> @@ -21,6 +21,7 @@\n> #include \"catalog/heap.h\"\n> #include \"catalog/pg_type.h\"\n> #include \"commands/trigger.h\"\n> +#include \"common/string.h\"\n> #include \"executor/executor.h\"\n> #include \"executor/spi_priv.h\"\n> #include \"miscadmin.h\"\n> @@ -2338,8 +2339,7 @@ _SPI_execute_plan(SPIPlanPtr plan, ParamListInfo paramLI,\n> \t\t\t\t\tCreateTableAsStmt *ctastmt = (CreateTableAsStmt *) stmt->utilityStmt;\n> \n> \t\t\t\t\tif (strncmp(completionTag, \"SELECT \", 7) == 0)\n> -\t\t\t\t\t\t_SPI_current->processed =\n> -\t\t\t\t\t\t\tpg_strtouint64(completionTag + 7, NULL, 10);\n> +\t\t\t\t\t\t(void) pg_strtouint64(completionTag + 7, &_SPI_current->processed);\n\nI'd just use the checked version here, seems like a good thing to check\nfor, and I can't imagine it matters performance wise.\n\n\n> @@ -63,8 +63,16 
@@ Datum\n> int2in(PG_FUNCTION_ARGS)\n> {\n> \tchar\t *num = PG_GETARG_CSTRING(0);\n> +\tint16\t\tres;\n> +\tpg_strtoint_status status;\n> \n> -\tPG_RETURN_INT16(pg_strtoint16(num));\n> +\t/* Use a custom set of error messages here adapted to the data type */\n> +\tstatus = pg_strtoint16(num, &res);\n\nI don't know what that comment is supposed to mean?\n\n> +/*\n> + * pg_strtoint64_check\n> + *\n> + * Convert input string to a signed 64-bit integer.\n> + *\n> + * This throws ereport() upon bad input format or overflow.\n> + */\n> +int64\n> +pg_strtoint64_check(const char *s)\n> +{\n> +\tint64\t\tresult;\n> +\tpg_strtoint_status status = pg_strtoint64(s, &result);\n> +\n> +\tif (unlikely(status != PG_STRTOINT_OK))\n> +\t\tpg_strtoint_error(status, s, \"int64\");\n> +\treturn result;\n> +}\n\nI think I'd just put these as inlines in the header.\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 13 Sep 2019 18:38:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 06:38:31PM -0700, Andres Freund wrote:\n> On 2019-09-10 12:05:25 +0900, Michael Paquier wrote:\n>> On Mon, Sep 09, 2019 at 05:27:04AM -0700, Andres Freund wrote:\n>> Attached is an updated patch? How does it look? I have left the\n>> parts of readfuncs.c for now as there are more issues behind that than\n>> doing a single switch, short reads are one, long reads a second.\n> \n> Hm? I don't know what you mean by those issues.\n\nI think that we have more issues than it looks. For example:\n- Switching UINT to use pg_strtouint32() causes an incompatibility\nissue compared to atoui().\n- Switching INT to use pg_strtoint32() causes a set of warnings as for\nexample with AttrNumber:\n72 | (void) pg_strtoint32(token, &local_node->fldname)\n | ^~~~~~~~~~~~~~~~~~~~~\n | |\n | AttrNumber * {aka short int *}\nAnd it is not like we should use a cast either, as we could hide real\nissues.\t Hence it seems to me that we need to have a new routine\ndefinition for shorter integers and switch more flags to that.\n- Switching LONG to use pg_strtoint64() leads to another set of\nissues, particularly one could see an assertion failure related to Agg\nnodes. I am not sure either that we should use int64 here as the size\ncan be at least 32b.\n- Switching OID to use pg_strtoint32 causes a failure with initdb.\n\nSo while I agree with you that a switch should be doable, there is a\nlarge set of issues to ponder about here, and the patch already does a\nlot, so I really think that we had better do a closer lookup at those\nissues separately, once the basics are in place, and consider them if\nthey actually make sense. There is much more than just doing a direct\nswitch in this area with the family of ato*() system calls.\n\n>> And the patch already does a lot. 
There could be also an argument for\n>> having extra _check wrappers for the unsigned portions but these would\n>> be mostly unused in the backend code, so I have left that out on\n>> purpose.\n> \n> I'd value consistency higher here.\n\nOkay, no objections to that.\n\n>> @@ -2338,8 +2339,7 @@ _SPI_execute_plan(SPIPlanPtr plan, ParamListInfo paramLI,\n>> \t\t\t\t\tCreateTableAsStmt *ctastmt = (CreateTableAsStmt *) stmt->utilityStmt;\n>> \n>> \t\t\t\t\tif (strncmp(completionTag, \"SELECT \", 7) == 0)\n>> -\t\t\t\t\t\t_SPI_current->processed =\n>> -\t\t\t\t\t\t\tpg_strtouint64(completionTag + 7, NULL, 10);\n>> +\t\t\t\t\t\t(void) pg_strtouint64(completionTag + 7, &_SPI_current->processed);\n> \n> I'd just use the checked version here, seems like a good thing to check\n> for, and I can't imagine it matters performance wise.\n\nYeah, makes sense. I don't think it matters either for\npg_stat_statements in the same context. So changed that part as\nwell.\n\n>> @@ -63,8 +63,16 @@ Datum\n>> int2in(PG_FUNCTION_ARGS)\n>> {\n>> \tchar\t *num = PG_GETARG_CSTRING(0);\n>> +\tint16\t\tres;\n>> +\tpg_strtoint_status status;\n>> \n>> -\tPG_RETURN_INT16(pg_strtoint16(num));\n>> +\t/* Use a custom set of error messages here adapted to the data type */\n>> +\tstatus = pg_strtoint16(num, &res);\n> \n> I don't know what that comment is supposed to mean?\n\nI mean here that the _check equivalent cannot be used as any error\nmessages generated need to be consistent with the SQL data type. 
I\nhave updated the comment, does it look better now?\n\n>> +/*\n>> + * pg_strtoint64_check\n>> + *\n>> + * Convert input string to a signed 64-bit integer.\n>> + *\n>> + * This throws ereport() upon bad input format or overflow.\n>> + */\n>> +int64\n>> +pg_strtoint64_check(const char *s)\n>> +{\n>> +\tint64\t\tresult;\n>> +\tpg_strtoint_status status = pg_strtoint64(s, &result);\n>> +\n>> +\tif (unlikely(status != PG_STRTOINT_OK))\n>> +\t\tpg_strtoint_error(status, s, \"int64\");\n>> +\treturn result;\n>> +}\n> \n> I think I'd just put these as inlines in the header.\n\nI have not considered that. This bloats a bit more builtins.h. We\ncould live with that, or just move that into a separate header in\ninclude/utils/, say int.h? Even if common/int.h exists?\n\nAttached is an updated patch. Perhaps you have something else in\nmind?\n--\nMichael",
"msg_date": "Sat, 14 Sep 2019 15:02:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Bonjour Michaël,\n\n> - Switching INT to use pg_strtoint32() causes a set of warnings as for\n> example with AttrNumber:\n> 72 | (void) pg_strtoint32(token, &local_node->fldname)\n> | ^~~~~~~~~~~~~~~~~~~~~\n> | |\n> | AttrNumber * {aka short int *}\n\n> And it is not like we should use a cast either, as we could hide real\n> issues.\n\nIt should rather call pg_strtoint16? And possibly switch the \"short int\" \ndeclaration to int16?\n\nAbout batch v14: applies cleanly and compiles without warnings. \"make \ncheck\" ok.\n\nI do not think that \"pg_strtoint_error\" should be inlinable. The function \nis unlikely to be called, so it is not performance critical to inline it, \nand would enlarge the executable needlessly. However, the \n\"pg_strto*_check\" variants should be inlinable, as you have done.\n\nAbout the code, on several instances of:\n\n /* skip leading spaces */\n while (likely(*ptr) && isspace((unsigned char) *ptr)) …\n\nI would drop the \"likely(*ptr)\".\n\nAnd on several instances of:\n\n !unlikely(isdigit((unsigned char) *ptr)))\n\nISTM that we want \"unlikely(!isdigit((unsigned char) *ptr)))\". Parsing \n!unlikely leads to false conclusion and a headache:-)\n\nOtherwise, this batch of changes looks ok to me.\n\n-- \nFabien.",
"msg_date": "Sat, 14 Sep 2019 10:24:10 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Sat, Sep 14, 2019 at 10:24:10AM +0200, Fabien COELHO wrote:\n> It should rather call pg_strtoint16? And possibly switch the \"short int\"\n> declaration to int16?\n\nSure, but you get into other problems if using the 16-bit version for\nsome other fields, which is why it seems to me that we should add an\nextra routine for shorts. So for now I prefer discarding this part.\n\n> I do not think that \"pg_strtoint_error\" should be inlinable. The function is\n> unlikely to be called, so it is not performance critical to inline it, and\n> would enlarge the executable needlessly. However, the \"pg_strto*_check\"\n> variants should be inlinable, as you have done.\n\nMakes sense.\n\n> About the code, on several instances of:\n> \n> /* skip leading spaces */\n> while (likely(*ptr) && isspace((unsigned char) *ptr)) …\n> \n> I would drop the \"likely(*ptr)\".\n\nRight as well. There were two places out of six with that pattern.\n\n> And on several instances of:\n> \n> !unlikely(isdigit((unsigned char) *ptr)))\n> \n> ISTM that we want \"unlikely(!isdigit((unsigned char) *ptr)))\". Parsing\n> !unlikely leads to false conclusion and a headache:-)\n\nThat part was actually inconsistent with the rest.\n\n> Otherwise, this batch of changes looks ok to me.\n\nThanks.\n--\nMichael",
"msg_date": "Mon, 16 Sep 2019 19:18:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-14 15:02:36 +0900, Michael Paquier wrote:\n> On Fri, Sep 13, 2019 at 06:38:31PM -0700, Andres Freund wrote:\n> > On 2019-09-10 12:05:25 +0900, Michael Paquier wrote:\n> >> On Mon, Sep 09, 2019 at 05:27:04AM -0700, Andres Freund wrote:\n> >> Attached is an updated patch? How does it look? I have left the\n> >> parts of readfuncs.c for now as there are more issues behind that than\n> >> doing a single switch, short reads are one, long reads a second.\n> > \n> > Hm? I don't know what you mean by those issues.\n> \n> I think that we have more issues than it looks. For example:\n> - Switching UINT to use pg_strtouint32() causes an incompatibility\n> issue compared to atoui().\n\n\"An incompatibility\" is, uh, vague.\n\n\n> - Switching INT to use pg_strtoint32() causes a set of warnings as for\n> example with AttrNumber:\n> 72 | (void) pg_strtoint32(token, &local_node->fldname)\n> | ^~~~~~~~~~~~~~~~~~~~~\n> | |\n> | AttrNumber * {aka short int *}\n> And it is not like we should use a cast either, as we could hide real\n> issues.\t Hence it seems to me that we need to have a new routine\n> definition for shorter integers and switch more flags to that.\n\nYea.\n\n\n> - Switching LONG to use pg_strtoint64() leads to another set of\n> issues, particularly one could see an assertion failure related to Agg\n> nodes. I am not sure either that we should use int64 here as the size\n> can be at least 32b.\n\nThat seems pretty clearly something that needs to be debugged before\napplying this series. If there's such a failure, it indicates that\nthere's either a problem in this patchset, or a pre-existing problem in\nreadfuncs.\n\n\n> - Switching OID to use pg_strtoint32 causes a failure with initdb.\n\nNeeds to be debugged too. 
Although I suspect this might just be that you\nneed to use unsigned variant.\n\n\n> So while I agree with you that a switch should be doable, there is a\n> large set of issues to ponder about here, and the patch already does a\n> lot, so I really think that we had better do a closer lookup at those\n> issues separately, once the basics are in place, and consider them if\n> they actually make sense. There is much more than just doing a direct\n> switch in this area with the family of ato*() system calls.\n\nI have no problem applying this separately, but I really don't think\nit's wise to apply this before these problems have been debugged.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 16 Sep 2019 10:08:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Bonjour Michaël,\n\n>> Otherwise, this batch of changes looks ok to me.\n>\n> Thanks.\n\nAbout v15: applies cleanly, compiles, \"make check\" ok.\n\nWhile re-reading the patch, there are bit of repetitions on pg_strtou?int* \ncomments. I'm wondering whether it would make sense to write a global \ncomments before each 3-type series to avoid that.\n\n-- \nFabien.",
"msg_date": "Mon, 16 Sep 2019 19:17:27 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Mon, Sep 16, 2019 at 10:08:19AM -0700, Andres Freund wrote:\n> On 2019-09-14 15:02:36 +0900, Michael Paquier wrote:\n>> - Switching OID to use pg_strtoint32 causes a failure with initdb.\n> \n> Needs to be debugged too. Although I suspect this might just be that you\n> need to use unsigned variant.\n\nNo, that's not it. My last message had a typo as I used the unsigned \nvariant. Anyway, by switching the routines of readfuncs.c to use the\nnew _check wrappers it is easy to see what the actual issue is. And\nhere we go with one example:\n\"FATAL: invalid input syntax for type uint32: \"12089 :relkind v\"\n\nSo, the root of the problem is that the optimized conversion routines\nwould complain if the end of the string includes incorrect characters\nwhen converting a node from text, which is not something that strtoXX\ncomplains about. So our wrappers atooid() and atoui() accept more\ntypes of strings in input as they rely on the system's strtol(). And\nthat counts for the failures with UINT and OID. atoi() also is more\nflexible on that, which explains the failures for INT, as well as\natol() for LONG (this shows a failure in the regression tests, not at\ninitdb time though).\n\nOne may think that this restriction does not actually apply to\nREAD_UINT64_FIELD because the routine involves no other things than\nqueryId. However once I enable -DWRITE_READ_PARSE_PLAN_TREES\n-DWRITE_READ_PARSE_PLAN_TREES -DCOPY_PARSE_PLAN_TREES in the builds,\nqueryId parsing also complains with the patch. So except if we\nredesign the node reads we are bound to keep around the wrapper of\nstrtoXX on HEAD called pg_strtouint64() to avoid an incompatibility\nwhen parsing the 64-bit query ID. We could keep that isolated in\nreadfuncs.c close to the declarations of atoui() and strtobool()\nthough. 
This also points out that pg_strtouint64 of HEAD is\ninconsistent with its signed relatives in the treatment of the input\nstring.\n\nThe code paths of the patch calling pg_strtouint64_check to parse\ncompletion tags (spi.c and pg_stat_statements.c) should complain in\nthose cases as the format of the tags for SELECT and COPY is fixed.\n\nIn order to unify the parsing interface and put all the conversion\nroutines in a single place, I still think that the patch has value so\nI would still keep it (with a fix for the queryId parsing of course),\nbut there is much more to it.\n\n>> So while I agree with you that a switch should be doable, there is a\n>> large set of issues to ponder about here, and the patch already does a\n>> lot, so I really think that we had better do a closer lookup at those\n>> issues separately, once the basics are in place, and consider them if\n>> they actually make sense. There is much more than just doing a direct\n>> switch in this area with the family of ato*() system calls.\n> \n> I have no problem applying this separately, but I really don't think\n> it's wise to apply this before these problems have been debugged.\n\nSure. No problem with that line of reasoning.\n--\nMichael",
"msg_date": "Tue, 17 Sep 2019 11:29:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 11:29:13AM +0900, Michael Paquier wrote:\n> The code paths of the patch calling pg_strtouint64_check to parse\n> completion tags (spi.c and pg_stat_statements.c) should complain in\n> those cases as the format of the tags for SELECT and COPY is fixed.\n> \n> In order to unify the parsing interface and put all the conversion\n> routines in a single place, I still think that the patch has value so\n> I would still keep it (with a fix for the queryId parsing of course),\n> but there is much more to it.\n\nForgot to mention that another angle of attack would be of course to\nadd some control flags in the optimized parsing functions to make them\nmore permissive regarding the handling of the trailing characters, by\nnot considering as a syntax error the case where the last character is\nnot a zero-termination.\n--\nMichael",
"msg_date": "Tue, 17 Sep 2019 11:40:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 11:29:13AM +0900, Michael Paquier wrote:\n> In order to unify the parsing interface and put all the conversion\n> routines in a single place, I still think that the patch has value so\n> I would still keep it (with a fix for the queryId parsing of course),\n> but there is much more to it.\n\nAs of now, here is an updated patch which takes the path to not\ncomplicate the refactored APIs and fixes the issue with queryID in\nreadfuncs.c. Thoughts?\n--\nMichael",
"msg_date": "Wed, 18 Sep 2019 10:13:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Is there any specific reason for hard coding the *base* of a number\nrepresenting the string in strtouint64()? I understand that currently\nstrtouint64() is being used just to convert an input string to a decimal\nunsigned value, but what if we want it to be used for hexadecimal\nvalues or maybe some other values? In that case it can't be used.\nFurther, the function name is strtouint64() but the comments atop its\ndefinition say it's pg_strtouint64(). That needs to be corrected.\n\nAt a few places, I could see that the function call to\npg_strtoint32_check() is followed by error handling. Isn't that\nalready being done in the pg_strtoint32_check function itself? For e.g. in\nrefint.c the function call to pg_strtoint32_check is followed by an if\ncondition that checks for an error which I assume shouldn't be there\nas it is already being done by pg_strtoint32_check.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Wed, Sep 18, 2019 at 6:43 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Sep 17, 2019 at 11:29:13AM +0900, Michael Paquier wrote:\n> > In order to unify the parsing interface and put all the conversion\n> > routines in a single place, I still think that the patch has value so\n> > I would still keep it (with a fix for the queryId parsing of course),\n> > but there is much more to it.\n>\n> As of now, here is an updated patch which takes the path to not\n> complicate the refactored APIs and fixes the issue with queryID in\n> readfuncs.c. Thoughts?\n> --\n> Michael\n\n\n",
"msg_date": "Fri, 4 Oct 2019 14:27:44 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "Hi,\n\nOn 2019-10-04 14:27:44 +0530, Ashutosh Sharma wrote:\n> Is there any specific reason for hard coding the *base* of a number\n> representing the string in strtouint64(). I understand that currently\n> strtouint64() is being used just to convert an input string to decimal\n> unsigned value but what if we want it to be used for hexadecimal\n> values or may be some other values, in that case it can't be used.\n\nIt's a lot slower if the base is variable, because the compiler cannot\nreplace the division by shifts.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Oct 2019 08:28:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Fri, Oct 4, 2019 at 8:58 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-10-04 14:27:44 +0530, Ashutosh Sharma wrote:\n> > Is there any specific reason for hard coding the *base* of a number\n> > representing the string in strtouint64(). I understand that currently\n> > strtouint64() is being used just to convert an input string to decimal\n> > unsigned value but what if we want it to be used for hexadecimal\n> > values or may be some other values, in that case it can't be used.\n>\n> It's a lot slower if the base is variable, because the compiler cannot\n> replace the division by shifts.\n>\n\nThanks Andres for the reply. I didn't know that the compiler won't be\nable to replace division with shifts operator if the base is variable\nand it's true that it would make the things a lot slower.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 5 Oct 2019 08:09:27 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Fri, Oct 04, 2019 at 02:27:44PM +0530, Ashutosh Sharma wrote:\n> Is there any specific reason for hard coding the *base* of a number\n> representing the string in strtouint64(). I understand that currently\n> strtouint64() is being used just to convert an input string to decimal\n> unsigned value but what if we want it to be used for hexadecimal\n> values or may be some other values, in that case it can't be used.\n> Further, the function name is strtouint64() but the comments atop it's\n> definition says it's pg_strtouint64(). That needs to be corrected.\n\nPerformance, as Andres has already stated upthread. Moving away from\nstrtol gives roughly a 40% improvement with a call-to-call comparison:\nhttps://www.postgresql.org/message-id/20190909052814.GA26605@paquier.xyz\n\n> At few places, I could see that the function call to\n> pg_strtoint32_check() is followed by an error handling. Isn't that\n> already being done in pg_strtoint32_check function itself. For e.g. in\n> refint.c the function call to pg_strtoint32_check is followed by a if\n> condition that checks for an error which I assume shouldn't be there\n> as it is already being done by pg_strtoint32_check.\n\npg_strtoint32_check is used for a signed integer, so it would complain\nabout incorrect input syntax, but not when the parsed integer is less\nor equal than 0, which is what refint.c complains about.\n--\nMichael",
"msg_date": "Mon, 7 Oct 2019 16:38:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
},
{
"msg_contents": "On Wed, Sep 18, 2019 at 10:13:20AM +0900, Michael Paquier wrote:\n> As of now, here is an updated patch which takes the path to not\n> complicate the refactored APIs and fixes the issue with queryID in\n> readfuncs.c. Thoughts?\n\nFor now, and seeing that there is little interest in it. I have\nmarked the patch as returned with feedback in this CF. The thread has\ngone long, so if there is a new submission I would suggest using a new\nthread with a fresh start point.. Not sure if I'll work on that or\nnot.\n--\nMichael",
"msg_date": "Mon, 25 Nov 2019 16:21:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: refactoring - share str2*int64 functions"
}
] |
[
{
"msg_contents": "Dear PostgreSQL Community: I want to add a feature in PostgreSQL, and I need use map structure and set structure(like STL in C++). Do PostgreSQL have realized these structures? Where can I find the functions?\r\n What I need in the code is just like this:\r\n map<char*, set<char*> >\r\n set<char*>\r\nThank you,\r\nLiu Baozhu\nDear PostgreSQL Community: I want to add a feature in PostgreSQL, and I need use map structure and set structure(like STL in C++). Do PostgreSQL have realized these structures? Where can I find the functions? What I need in the code is just like this: map<char*, set<char*> > set<char*>Thank you,Liu Baozhu",
"msg_date": "Sun, 21 Apr 2019 20:32:30 +0800",
"msg_from": "\"=?gb18030?B?w87Cw8jL?=\" <liubaozhu1258@qq.com>",
"msg_from_op": true,
"msg_subject": "Do PostgreSQL have map and set structure(like STL in C++)?"
},
{
"msg_contents": "\nHello,\n\n> I want to add a feature in PostgreSQL, and I need use map structure and \n> set structure(like STL in C++). Do PostgreSQL have realized these \n> structures? Where can I find the functions? What I need in the code is \n> just like this: map<char*, set<char*>>, set<char*>\n\nYou are looking for a hash table, see under \"src/backend/utils/hash/\".\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 21 Apr 2019 19:22:20 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Do PostgreSQL have map and set structure(like STL in C++)?"
},
{
"msg_contents": "Hello Liu!\n\n> 21 апр. 2019 г., в 17:32, 梦旅人 <liubaozhu1258@qq.com> написал(а):\n> I want to add a feature in PostgreSQL, and I need use map structure and set structure(like STL in C++). Do PostgreSQL have realized these structures? Where can I find the functions?\n> What I need in the code is just like this:\n> map<char*, set<char*> >\n> set<char*>\n\nYou can use HTAB at utils/hsearch.h [0]\n\nIt is Larson's dynamic hashing, implementation is in backend/utils/hash/dynahash.c\nMostly like unordered_map. Accordingly, it lacks lower bound functionality as sorted sets do.\n\nAlso, you can user RB-tree in lib/rbtree.h [1] It's usual red-black tree.\n\nBest regards, Andrey Borodin.\n\n\n[0] https://github.com/postgres/postgres/blob/master/src/include/utils/hsearch.h\n[1] https://github.com/postgres/postgres/blob/master/src/include/lib/rbtree.h\n\n",
"msg_date": "Sun, 21 Apr 2019 22:22:34 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Do PostgreSQL have map and set structure(like STL in C++)?"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 5:22 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > 21 апр. 2019 г., в 17:32, 梦旅人 <liubaozhu1258@qq.com> написал(а):\n> > I want to add a feature in PostgreSQL, and I need use map structure and set structure(like STL in C++). Do PostgreSQL have realized these structures? Where can I find the functions?\n> > What I need in the code is just like this:\n> > map<char*, set<char*> >\n> > set<char*>\n>\n> You can use HTAB at utils/hsearch.h [0]\n>\n> It is Larson's dynamic hashing, implementation is in backend/utils/hash/dynahash.c\n> Mostly like unordered_map. Accordingly, it lacks lower bound functionality as sorted sets do.\n>\n> Also, you can user RB-tree in lib/rbtree.h [1] It's usual red-black tree.\n\nThere is also src/include/lib/simplehash.h. A bit like C++'s\nstd::unordered_map and std::unordered_set templates, it is specialised\nby type with inlined hash and eq functions (though it's done with\npreprocessor tricks). Unlike those it it uses a variant of Robin Hood\nalgorithm for collisions, while the C++ standard effectively requires\ncollision chains. Like dynahash (hsearch.h), it's both map and set at\nthe same time: instead of using a pair<key,value> it has a simple\nelement type and it's your job to decide if the whole thing or just\npart of it is the key. Unlike dynahash, it doesn't have a way to work\nin shared memory.\n\nThere is also src/include/lib/dshash.h. It uses collision chains, and\nworks in DSA shared memory (a strange allocator that can deal with\nmemory mapped at different addresses in different processes), and has\na lock/partitioning scheme to deal with sharing, but it's not very\nfeatureful. 
Unlike dynahash in shared memory mode, it can allocate\nmore memory as required, and can be created and destroyed any time\n(whereas dynahash in shared memory mode is trapped in a fixed sized\nregion of memory that lives as long as the cluster).\n\nAndrey mentioned Larson's dynamic hashing; that's quite interesting,\nand can be seen in our on-disk hash indexes, but dynahash does it for\nin-memory hash tables, whereas simplehash and dshash just rebuild the\nwhole hash table when one unlucky caller inserts the new element that\nexceeds the load factor target. AFAIK it's quite unusual to use the\ndynamic expansion trick for an in-memory-only hash table (?).\n\nSo yeah, that's three general purpose hash table/set implementations.\n\nI wouldn't mind if we had some more generic container and algorithm\ncode like simplehash.h in the tree; it's easy to show performance\nbenefits of inlining in microbenchmarks, and it's nicer to program\nwithout explicitly dealing in void pointers all over the place. I\nsuppose it would be theoretically possible to make some of these\ndesign choices into policies you could select when instantiating,\nrather than having separate library components. Another thing that is\non the radar for future development is concurrent lock-free extensible\nhash tables as seen in some other projects.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Apr 2019 11:29:18 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Do PostgreSQL have map and set structure(like STL in C++)?"
}
] |
[
{
"msg_contents": "Folks,\n\nAny interest in this?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sun, 21 Apr 2019 20:31:15 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "[PATCH v1] Add \\echo_stderr to psql"
},
{
"msg_contents": "ne 21. 4. 2019 v 20:31 odesílatel David Fetter <david@fetter.org> napsal:\n\n> Folks,\n>\n> Any interest in this?\n>\n\nhas sense\n\nPavel\n\n\n> Best,\n> David.\n> --\n> David Fetter <david(at)fetter(dot)org> http://fetter.org/\n> Phone: +1 415 235 3778\n>\n> Remember to vote!\n> Consider donating to Postgres: http://www.postgresql.org/about/donate\n>\n\nne 21. 4. 2019 v 20:31 odesílatel David Fetter <david@fetter.org> napsal:Folks,\n\nAny interest in this?has sensePavel\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sun, 21 Apr 2019 20:42:18 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] Add \\echo_stderr to psql"
},
{
"msg_contents": "\n> Any interest in this?\n\nYep, although I'm not sure of the suggested command name. More \nsuggestions:\n \\stderr ...\n \\err ...\n \\error ...\n \\warn ...\n \\warning ...\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 21 Apr 2019 21:31:16 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] Add \\echo_stderr to psql"
},
{
"msg_contents": "On Sun, Apr 21, 2019 at 09:31:16PM +0200, Fabien COELHO wrote:\n> > Any interest in this?\n> \n> Yep, although I'm not sure of the suggested command name. More suggestions:\n> \\stderr ...\n> \\err ...\n> \\error ...\n> \\warn ...\n> \\warning ...\n\nNaming Things is one of the two[1] hard problems in CS.\n\nI'm happy with whatever the community consensus comes out to be.\n\nBest,\nDavid.\n\n[1] The others are cache coherency and off-by-one\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sun, 21 Apr 2019 23:38:20 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] Add \\echo_stderr to psql"
},
{
"msg_contents": ">\n>\n> \\warn ...\n> \\warning ...\n>\n\nThese two seem about the best to me, drawing from the perl warn command.\n\nI suppose we could go the bash &2 route here, but I don't want to.\n\n \\warn ...\n \\warning ...These two seem about the best to me, drawing from the perl warn command.I suppose we could go the bash &2 route here, but I don't want to.",
"msg_date": "Sun, 21 Apr 2019 23:52:45 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] Add \\echo_stderr to psql"
},
{
"msg_contents": "\nHello Corey,\n\n>> \\warn ...\n>> \\warning ...\n>\n> These two seem about the best to me, drawing from the perl warn command.\n\nYep, I was thinking of perl & gmake. Maybe the 4 letter option is better \nbecause its the same length as \"echo\".\n\n> I suppose we could go the bash &2 route here, but I don't want to.\n\nI agree on this one.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 22 Apr 2019 09:04:08 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] Add \\echo_stderr to psql"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 09:04:08AM +0200, Fabien COELHO wrote:\n> \n> Hello Corey,\n> \n> > > \\warn ...\n> > > \\warning ...\n> > \n> > These two seem about the best to me, drawing from the perl warn command.\n> \n> Yep, I was thinking of perl & gmake. Maybe the 4 letter option is better\n> because its the same length as \"echo\".\n> \n> > I suppose we could go the bash &2 route here, but I don't want to.\n> \n> I agree on this one.\n\nPlease find attached v2, name is now \\warn.\n\nHow might we test this portably?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Mon, 22 Apr 2019 15:45:32 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] Add \\echo_stderr to psql"
},
{
"msg_contents": "\n>>>> \\warn ...\n>>>> \\warning ...\n>>>\n>>> These two seem about the best to me, drawing from the perl warn command.\n>>\n>> Yep, I was thinking of perl & gmake. Maybe the 4 letter option is better\n>> because its the same length as \"echo\".\n>>\n>>> I suppose we could go the bash &2 route here, but I don't want to.\n>>\n>> I agree on this one.\n>\n> Please find attached v2, name is now \\warn.\n>\n> How might we test this portably?\n\nTAP testing? see pgbench which has tap test which can test stdout & stderr \nby calling utility command_checks_all, the same could be done with psql.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 22 Apr 2019 19:46:18 +0200 (CEST)",
"msg_from": "Fabien COELHO <fabien.coelho@mines-paristech.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] Add \\echo_stderr to psql"
},
{
"msg_contents": "Hello David,\n\n> Please find attached v2, name is now \\warn.\n\nPatch applies cleanly, compiles, \"make check ok\", although there are no \ntests. Doc gen ok.\n\nCode is pretty straightforward.\n\nI'd put the commands in alphabetical order (echo, qecho, warn) instead of \ne/w/q in the condition.\n\nThe -n trick does not appear in the help lines, ISTM that it could fit, so \nmaybe it could be added, possibly something like:\n\n \\echo [-n] [TEXT] write string to stdout, possibly without trailing newline\n\nand same for \\warn and \\qecho?\n\n> How might we test this portably?\n\nHmmm... TAP tests are expected to be portable. Attached a simple POC, \nwhich could be extended to test many more things which are currently out \nof coverage (src/bin/psql stuff is covered around 40% only).\n\n-- \nFabien.",
"msg_date": "Sat, 27 Apr 2019 16:05:20 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] Add \\echo_stderr to psql"
},
{
"msg_contents": "On Sat, Apr 27, 2019 at 04:05:20PM +0200, Fabien COELHO wrote:\n> \n> Hello David,\n> \n> > Please find attached v2, name is now \\warn.\n> \n> Patch applies cleanly, compiles, \"make check ok\", although there are no\n> tests. Doc gen ok.\n> \n> Code is pretty straightforward.\n> \n> I'd put the commands in alphabetical order (echo, qecho, warn) instead of\n> e/w/q in the condition.\n\nDone.\n\n> The -n trick does not appear in the help lines, ISTM that it could fit, so\n> maybe it could be added, possibly something like:\n> \n> \\echo [-n] [TEXT] write string to stdout, possibly without trailing newline\n> \n> and same for \\warn and \\qecho?\n\nMakes sense, but I put it there just for \\echo to keep lines short.\n\n> > How might we test this portably?\n> \n> Hmmm... TAP tests are expected to be portable. Attached a simple POC, which\n> could be extended to test many more things which are currently out of\n> coverage (src/bin/psql stuff is covered around 40% only).\n\nThanks for putting this together. I've added this test, and agree that\nincreasing coverage is important for another patch.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sat, 27 Apr 2019 19:15:48 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] Add \\echo_stderr to psql"
},
{
"msg_contents": "\nHello David,\n\nAbout v3. Applies, compiles, global & local make check are ok. doc gen ok.\n\n>> I'd put the commands in alphabetical order (echo, qecho, warn) instead of\n>> e/w/q in the condition.\n>\n> Done.\n\nCannot see it:\n\n + else if (strcmp(cmd, \"echo\") == 0 || strcmp(cmd, \"warn\") == 0 || strcmp(cmd, \"qecho\") == 0)\n\n>> The -n trick does not appear in the help lines, ISTM that it could fit, so\n>> maybe it could be added, possibly something like:\n>>\n>> \\echo [-n] [TEXT] write string to stdout, possibly without trailing newline\n>>\n>> and same for \\warn and \\qecho?\n>\n> Makes sense, but I put it there just for \\echo to keep lines short.\n\nI think that putting together the 3 echo variants help makes sense, but \nmaybe someone will object about breaking the abc order.\n\n>>> How might we test this portably?\n>>\n>> Hmmm... TAP tests are expected to be portable. Attached a simple POC, which\n>> could be extended to test many more things which are currently out of\n>> coverage (src/bin/psql stuff is covered around 40% only).\n>\n> Thanks for putting this together. I've added this test, and agree that\n> increasing coverage is important for another patch.\n\nYep.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 27 Apr 2019 22:09:27 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] Add \\echo_stderr to psql"
},
{
"msg_contents": "On Sat, Apr 27, 2019 at 10:09:27PM +0200, Fabien COELHO wrote:\n> \n> Hello David,\n> \n> About v3. Applies, compiles, global & local make check are ok. doc gen ok.\n> \n> > > I'd put the commands in alphabetical order (echo, qecho, warn) instead of\n> > > e/w/q in the condition.\n> > \n> > Done.\n> \n> Cannot see it:\n> \n> + else if (strcmp(cmd, \"echo\") == 0 || strcmp(cmd, \"warn\") == 0 || strcmp(cmd, \"qecho\") == 0)\n\nMy mistake. I didn't think the order in which they were compared\nmattered much, but it makes sense on further reflection to keep things\ntidy in the code.\n\n> > > The -n trick does not appear in the help lines, ISTM that it could fit, so\n> > > maybe it could be added, possibly something like:\n> > > \n> > > \\echo [-n] [TEXT] write string to stdout, possibly without trailing newline\n> > > \n> > > and same for \\warn and \\qecho?\n> > \n> > Makes sense, but I put it there just for \\echo to keep lines short.\n> \n> I think that putting together the 3 echo variants help makes sense, but\n> maybe someone will object about breaking the abc order.\n\nHere's the alphabetical version.\n\n> > > > How might we test this portably?\n> > > \n> > > Hmmm... TAP tests are expected to be portable. Attached a simple POC, which\n> > > could be extended to test many more things which are currently out of\n> > > coverage (src/bin/psql stuff is covered around 40% only).\n> > \n> > Thanks for putting this together. I've added this test, and agree that\n> > increasing coverage is important for another patch.\n> \n> Yep.\n\nSpeaking of which, I'd like to see about getting your patch against\nTestlib.pm in so more tests of psql can also go in. 
It's not a new\nfeature /per se/, and it doesn't break any current scripts, so I'd\nmake the argument that it's OK for them to go in and possibly even be\nback-patched.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sun, 28 Apr 2019 16:58:13 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "[PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "\nHello David,\n\nAbout v4: applies, compiles, global & local \"make check\" ok. Doc gen ok.\n\nCode & help look ok.\n\nAbout the doc: I do not understand why the small program listing contains \nan \"\\echo :variable\". Also, the new entry should probably be between the \n\\w & \\watch entries instead of between \\echo & \\ef.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 28 Apr 2019 20:22:09 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "On Sun, Apr 28, 2019 at 08:22:09PM +0200, Fabien COELHO wrote:\n> \n> Hello David,\n> \n> About v4: applies, compiles, global & local \"make check\" ok. Doc gen ok.\n> \n> Code & help look ok.\n> \n> About the doc: I do not understand why the small program listing contains an\n> \"\\echo :variable\".\n\nIt no longer does.\n\n> Also, the new entry should probably be between the \\w &\n> \\watch entries instead of between \\echo & \\ef.\n\nMoved.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Mon, 29 Apr 2019 06:11:06 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "\nHello David,\n\nAbout v5: applies, compiles, global & local make check ok, doc gen ok.\n\nVery minor comment: \\qecho is just before \\o in the embedded help, where \nit should be just after. Sorry I did not see it on the preceding \nsubmission.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 29 Apr 2019 08:30:18 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "On Mon, Apr 29, 2019 at 08:30:18AM +0200, Fabien COELHO wrote:\n> \n> Hello David,\n> \n> About v5: applies, compiles, global & local make check ok, doc gen ok.\n> \n> Very minor comment: \\qecho is just before \\o in the embedded help, where it\n> should be just after. Sorry I did not see it on the preceding submission.\n\nDone.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Mon, 29 Apr 2019 16:17:10 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "\nHello David,\n\n>> About v5: applies, compiles, global & local make check ok, doc gen ok.\n>>\n>> Very minor comment: \\qecho is just before \\o in the embedded help, where it\n>> should be just after. Sorry I did not see it on the preceding submission.\n>\n> Done.\n\nPatch v6 applies, compiles, global & local make check ok, doc gen ok.\n\nThis is okay for me, marked as ready.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 29 Apr 2019 21:39:06 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "\n>>> About v5: applies, compiles, global & local make check ok, doc gen ok.\n>>> \n>>> Very minor comment: \\qecho is just before \\o in the embedded help, \n>>> where it should be just after. Sorry I did not see it on the preceding \n>>> submission.\n>\n> Unfortunately new TAP test doesn't pass on my machine. I'm not good at Perl \n> and didn't get the reason of the failure quickly.\n\nI guess that you have a verbose ~/.psqlrc.\n\nCan you try with adding -X to psql option when calling psql from the tap \ntest?\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 30 Apr 2019 15:46:28 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "(Unfortunately I accidentally sent my previous two messages using my personal\nemail address because of my email client configuration. This address is not\nverified by PostgreSQL.org services and messages didn't reach hackers mailing\nlists, so I recent latest message).\n\nOn Tue, Apr 30, 2019 at 4:46 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > Unfortunately new TAP test doesn't pass on my machine. I'm not good at Perl\n> > and didn't get the reason of the failure quickly.\n>\n> I guess that you have a verbose ~/.psqlrc.\n>\n> Can you try with adding -X to psql option when calling psql from the tap\n> test?\n\nAh, true. This patch works for me:\n\ndiff --git a/src/bin/psql/t/001_psql.pl b/src/bin/psql/t/001_psql.pl\nindex 32dd43279b..637baa94c9 100644\n--- a/src/bin/psql/t/001_psql.pl\n+++ b/src/bin/psql/t/001_psql.pl\n@@ -20,7 +20,7 @@ sub psql\n {\n local $Test::Builder::Level = $Test::Builder::Level + 1;\n my ($opts, $stat, $in, $out, $err, $name) = @_;\n- my @cmd = ('psql', split /\\s+/, $opts);\n+ my @cmd = ('psql', '-X', split /\\s+/, $opts);\n $node->command_checks_all(\\@cmd, $stat, $out, $err, $name, $in);\n return;\n }\n\n-- \nArthur Zakirov\nPostgres Professional: http://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Wed, 1 May 2019 10:05:44 +0300",
"msg_from": "Arthur Zakirov <a.zakirov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "On Wed, May 01, 2019 at 10:05:44AM +0300, Arthur Zakirov wrote:\n> (Unfortunately I accidentally sent my previous two messages using my personal\n> email address because of my email client configuration. This address is not\n> verified by PostgreSQL.org services and messages didn't reach hackers mailing\n> lists, so I recent latest message).\n> \n> On Tue, Apr 30, 2019 at 4:46 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > > Unfortunately new TAP test doesn't pass on my machine. I'm not good at Perl\n> > > and didn't get the reason of the failure quickly.\n> >\n> > I guess that you have a verbose ~/.psqlrc.\n> >\n> > Can you try with adding -X to psql option when calling psql from the tap\n> > test?\n> \n> Ah, true. This patch works for me:\n> \n> diff --git a/src/bin/psql/t/001_psql.pl b/src/bin/psql/t/001_psql.pl\n> index 32dd43279b..637baa94c9 100644\n> --- a/src/bin/psql/t/001_psql.pl\n> +++ b/src/bin/psql/t/001_psql.pl\n> @@ -20,7 +20,7 @@ sub psql\n> {\n> local $Test::Builder::Level = $Test::Builder::Level + 1;\n> my ($opts, $stat, $in, $out, $err, $name) = @_;\n> - my @cmd = ('psql', split /\\s+/, $opts);\n> + my @cmd = ('psql', '-X', split /\\s+/, $opts);\n> $node->command_checks_all(\\@cmd, $stat, $out, $err, $name, $in);\n> return;\n> }\n\nPlease find attached :)\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Wed, 1 May 2019 09:38:57 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "\n>>> I guess that you have a verbose ~/.psqlrc.\n>>>\n>>> Can you try with adding -X to psql option when calling psql from the tap\n>>> test?\n>>\n>> Ah, true. This patch works for me:\n>>\n>> diff --git a/src/bin/psql/t/001_psql.pl b/src/bin/psql/t/001_psql.pl\n>> index 32dd43279b..637baa94c9 100644\n>> --- a/src/bin/psql/t/001_psql.pl\n>> +++ b/src/bin/psql/t/001_psql.pl\n>> @@ -20,7 +20,7 @@ sub psql\n>> {\n>> local $Test::Builder::Level = $Test::Builder::Level + 1;\n>> my ($opts, $stat, $in, $out, $err, $name) = @_;\n>> - my @cmd = ('psql', split /\\s+/, $opts);\n>> + my @cmd = ('psql', '-X', split /\\s+/, $opts);\n>> $node->command_checks_all(\\@cmd, $stat, $out, $err, $name, $in);\n>> return;\n>> }\n>\n> Please find attached :)\n\nGood. Works for me, even with a verbose .psqlrc. Switched back to ready.\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 1 May 2019 12:02:48 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> [ v7-0001-Add-warn-to-psql.patch ]\n\nI took a look at this. I have no quibble with the proposed feature,\nand the implementation is certainly simple enough. But I'm unconvinced\nabout the proposed test scaffolding. Spinning up a new PG instance is a\n*hell* of a lot of overhead to pay for testing something that could be\ntested as per attached. Admittedly, the attached doesn't positively\nprove which pipe each output string went down, but that does not strike\nme as a concern large enough to justify adding a TAP test for.\n\nI'd be happier about adding TAP infrastructure if it looked like it'd\nbe usable to test some of the psql areas that are unreachable by the\nexisting test methodology, particularly tab-complete.c and prompt.c.\nBut I don't see anything here that looks like it'll work for that.\n\nI don't like what you did to command_checks_all, either --- it could\nhardly say \"bolted on after the fact\" more clearly if you'd written\nthat in <blink><red> text. If we need an input-stream argument,\nlet's just add it in a rational place and adjust the callers.\nThere aren't that many of 'em, nor has the subroutine been around\nall that long.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 02 Jul 2019 16:10:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "\nHello Tom,\n\n> I took a look at this. I have no quibble with the proposed feature,\n> and the implementation is certainly simple enough. But I'm unconvinced\n> about the proposed test scaffolding. Spinning up a new PG instance is a\n> *hell* of a lot of overhead to pay for testing something that could be\n> tested as per attached.\n\n\n> Admittedly, the attached doesn't positively prove which pipe each output \n> string went down, but that does not strike me as a concern large enough \n> to justify adding a TAP test for.\n\nSure.\n\nThe point is that there would be at least *one* TAP tests so that many \nother features of psql, although not all, can be tested. I have been \nreviewing quite a few patches without tests because of this lack of \ninfrastructure, and no one patch is ever going to justify a TAP test on \nits own. It has to start somewhere. Currently psql coverage is abysmal, \naround 40% of lines & functions are called by the whole non regression \ntests, despite the hundreds of psql-relying tests. Pg is around 80% \ncoverage overall.\n\nBasically, I really thing that one psql dedicated TAP test should be \nadded, not for \\warn per se, but for other features.\n\n> I'd be happier about adding TAP infrastructure if it looked like it'd\n> be usable to test some of the psql areas that are unreachable by the\n> existing test methodology, particularly tab-complete.c and prompt.c.\n> But I don't see anything here that looks like it'll work for that.\n\nThe tab complete and prompt are special interactive cases and probably \nrequire special infrastructure to make a test believe it is running \nagainst a tty while it is not. The point of this proposal is not to \naddress these special needs, but to lay a basic infra.\n\n> I don't like what you did to command_checks_all,\n\nYeah, probably my fault, not David.\n\n> either --- it could hardly say \"bolted on after the fact\" more clearly \n> if you'd written that in <blink><red> text. 
If we need an input-stream \n> argument, let's just add it in a rational place and adjust the callers. \n> There aren't that many of 'em, nor has the subroutine been around all \n> that long.\n\nI wanted to avoid breaking the function signature of it is used by some \nexternal packages. Not caring is an option.\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 3 Jul 2019 09:05:28 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> The point is that there would be at least *one* TAP tests so that many \n> other features of psql, although not all, can be tested. I have been \n> reviewing quite a few patches without tests because of this lack of \n> infrastructure, and no one patch is ever going to justify a TAP test on \n> its own. It has to start somewhere. Currently psql coverage is abysmal, \n> around 40% of lines & functions are called by the whole non regression \n> tests, despite the hundreds of psql-relying tests.\n\nYeah, but the point I was trying to make is that that's mostly down to\nlaziness. I see no reason that we couldn't be covering a lot of these\nfeatures in src/test/regress/sql/psql.sql, with far less overhead.\nThe interactive aspects of psql can't be tested that way ... but since\nthis patch doesn't actually provide any way to test those, it's not much\nof a proof-of-concept.\n\nIOW, the blocking factor here is not \"does src/bin/psql/t/ exist\",\nit's \"has somebody written a test that moves the coverage needle\nmeaningfully\". I'm not big on adding a bunch of overhead first and\njust hoping somebody will do something to make it worthwhile later.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Jul 2019 10:06:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "\nHello Tom,\n\n>> The point is that there would be at least *one* TAP tests so that many\n>> other features of psql, although not all, can be tested. [...]\n>\n> Yeah, but the point I was trying to make is that that's mostly down to\n> laziness.\n\nNot always.\n\nI agree that using TAP test if another simpler option is available is not \na good move.\n\nHowever, in the current state, as soon as there is some variation a test \nis removed and coverage is lost, but they could be kept if the check could \nbe against a regexp.\n\n> I see no reason that we couldn't be covering a lot of these features in \n> src/test/regress/sql/psql.sql, with far less overhead. The interactive \n> aspects of psql can't be tested that way ... but since this patch \n> doesn't actually provide any way to test those, it's not much of a \n> proof-of-concept.\n\nThe PoC is checking against a set of regexp instead of expecting an exact \noutput. Ok, it does not solve all possible test scenarii, that is life.\n\n> IOW, the blocking factor here is not \"does src/bin/psql/t/ exist\",\n> it's \"has somebody written a test that moves the coverage needle\n> meaningfully\". I'm not big on adding a bunch of overhead first and\n> just hoping somebody will do something to make it worthwhile later.\n\nI do intend to add coverage once a psql TAP test is available, as I have \ndone with pgbench. Ok, some of the changes are still in the long CF queue, \nbut at least pgbench coverage is around 90%.\n\nI also intend to direct submitted patches to use the TAP infra when \nappropriate, instead of \"no tests, too bad\".\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 3 Jul 2019 16:29:23 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "I wrote:\n> David Fetter <david@fetter.org> writes:\n>> [ v7-0001-Add-warn-to-psql.patch ]\n\n> I took a look at this. I have no quibble with the proposed feature,\n> and the implementation is certainly simple enough. But I'm unconvinced\n> about the proposed test scaffolding.\n\nI pushed this with the simplified test methodology.\n\nWhile I was fooling with it I noticed that the existing code for -n\nis buggy. The documentation says clearly that only the first\nargument is a candidate to be -n:\n\n If the first argument is an unquoted <literal>-n</literal> the trailing\n newline is not written.\n\nbut the actual implementation allows any argument to be recognized as\n-n:\n\nregression=# \\echo this -n should not be -n like this\nthis should not be like thisregression=# \n\nI fixed that, but I'm wondering if we should back-patch that fix\nor leave the back branches alone.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Jul 2019 12:38:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> I agree that using TAP test if another simpler option is available is not \n> a good move.\n\n> However, in the current state, as soon as there is some variation a test \n> is removed and coverage is lost, but they could be kept if the check could \n> be against a regexp.\n\nI'm fairly suspicious of using TAP tests just to get a regexp match.\nThe thing I don't like about TAP tests for this is that they won't\nnotice if the test case prints extra stuff beyond what you were\nexpecting --- at least, not without care that I don't think we usually\ntake.\n\nI've thought for some time that we should steal an idea from MySQL\nand extend pg_regress so that individual lines of an expected-file\ncould have regexp match patterns rather than being just exact matches.\nI'm not really sure how to do that without reimplementing diff(1)\nfor ourselves :-(, but that would be a very large step forward if\nwe could find a reasonable implementation.\n\nAnyway, my opinion about having TAP test(s) for psql remains that\nit'll be a good idea as soon as somebody submits a test that adds\na meaningful amount of code coverage that way (and the coverage\ncan't be gotten more simply). But we don't need a patch that is\njust trying to get the camel's nose under the tent.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Jul 2019 12:48:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "On Fri, Jul 05, 2019 at 12:38:02PM -0400, Tom Lane wrote:\n> I wrote:\n> > David Fetter <david@fetter.org> writes:\n> >> [ v7-0001-Add-warn-to-psql.patch ]\n> \n> > I took a look at this. I have no quibble with the proposed feature,\n> > and the implementation is certainly simple enough. But I'm unconvinced\n> > about the proposed test scaffolding.\n> \n> I pushed this with the simplified test methodology.\n\nThanks!\n\n> While I was fooling with it I noticed that the existing code for -n\n> is buggy. The documentation says clearly that only the first\n> argument is a candidate to be -n:\n> \n> If the first argument is an unquoted <literal>-n</literal> the trailing\n> newline is not written.\n> \n> but the actual implementation allows any argument to be recognized as\n> -n:\n> \n> regression=# \\echo this -n should not be -n like this\n> this should not be like thisregression=# \n> \n> I fixed that, but I'm wondering if we should back-patch that fix\n> or leave the back branches alone.\n\n+0.5 for back-patching.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Fri, 5 Jul 2019 23:29:03 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "On Fri, Jul 5, 2019 at 11:29:03PM +0200, David Fetter wrote:\n> > While I was fooling with it I noticed that the existing code for -n\n> > is buggy. The documentation says clearly that only the first\n> > argument is a candidate to be -n:\n> > \n> > If the first argument is an unquoted <literal>-n</literal> the trailing\n> > newline is not written.\n> > \n> > but the actual implementation allows any argument to be recognized as\n> > -n:\n> > \n> > regression=# \\echo this -n should not be -n like this\n> > this should not be like thisregression=# \n> > \n> > I fixed that, but I'm wondering if we should back-patch that fix\n> > or leave the back branches alone.\n> \n> +0.5 for back-patching.\n\nUh, if this was done in a major release I am thinking we have to mention\nthis as an incompatibility, which means we should probably not backpatch\nit.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 8 Jul 2019 21:29:17 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Fri, Jul 5, 2019 at 11:29:03PM +0200, David Fetter wrote:\n>>> I fixed that, but I'm wondering if we should back-patch that fix\n>>> or leave the back branches alone.\n\n>> +0.5 for back-patching.\n\n> Uh, if this was done in a major release I am thinking we have to mention\n> this as an incompatibility, which means we should probably not backpatch\n> it.\n\nHow is \"clearly doesn't match the documentation\" not a bug?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jul 2019 23:29:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "On Mon, Jul 8, 2019 at 11:29:00PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Fri, Jul 5, 2019 at 11:29:03PM +0200, David Fetter wrote:\n> >>> I fixed that, but I'm wondering if we should back-patch that fix\n> >>> or leave the back branches alone.\n> \n> >> +0.5 for back-patching.\n> \n> > Uh, if this was done in a major release I am thinking we have to mention\n> > this as an incompatibility, which means we should probably not backpatch\n> > it.\n> \n> How is \"clearly doesn't match the documentation\" not a bug?\n\nUh, it is a bug, but people might be expecting the existing behavior\nwithout consulting the documentation, and we don't expect people to be\ntesting minor releases.\n\nAnyway, it seems to be have been applied only to head so far.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 8 Jul 2019 23:35:28 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
},
{
"msg_contents": "On Mon, Jul 8, 2019 at 8:35 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Mon, Jul 8, 2019 at 11:29:00PM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > On Fri, Jul 5, 2019 at 11:29:03PM +0200, David Fetter wrote:\n> > >>> I fixed that, but I'm wondering if we should back-patch that fix\n> > >>> or leave the back branches alone.\n> >\n> > >> +0.5 for back-patching.\n> >\n> > > Uh, if this was done in a major release I am thinking we have to\n> mention\n> > > this as an incompatibility, which means we should probably not\n> backpatch\n> > > it.\n> >\n> > How is \"clearly doesn't match the documentation\" not a bug?\n>\n> Uh, it is a bug, but people might be expecting the existing behavior\n> without consulting the documentation, and we don't expect people to be\n> testing minor releases.\n>\n> Anyway, it seems to be have been applied only to head so far.\n>\n\nI would leave it at that. Won't Fix for released versions (neither code\nnor documentation) as we describe the intended usage so people do the right\nthing (which is highly likely anyway - though something like \"\\echo\n:content_to_echo -n\" wouldn't surprise me) but those that learned through\ntrial and error only experience a behavior change on a major release as\nthey would expect. This doesn't seem important enough to warrant breaking\nthe general rule. 
Though I'd give a +1 to v12; at least for me Beta is\ngenerally fair game.\n\nDavid J.",
"msg_date": "Mon, 8 Jul 2019 21:19:03 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v4] Add \\warn to psql"
}
] |
[
{
"msg_contents": "While fooling around with the patch shown at\n<12166.1555559689@sss.pgh.pa.us>, I noticed this rather strange\npre-existing behavior (tested on v11 as well as HEAD):\n\nregression=# create table idxpart (a int) partition by range (a);\nCREATE TABLE\nregression=# create table idxpart0 (like idxpart);\nCREATE TABLE\nregression=# alter table idxpart0 add unique(a);\nALTER TABLE\nregression=# alter table idxpart attach partition idxpart0 default;\nALTER TABLE\nregression=# \\d idxpart0\n Table \"public.idxpart0\"\n Column | Type | Collation | Nullable | Default \n--------+---------+-----------+----------+---------\n a | integer | | | \nPartition of: idxpart DEFAULT\nIndexes:\n \"idxpart0_a_key\" UNIQUE CONSTRAINT, btree (a)\n\nregression=# alter table only idxpart add primary key (a);\nALTER TABLE\nregression=# \\d idxpart0\n Table \"public.idxpart0\"\n Column | Type | Collation | Nullable | Default \n--------+---------+-----------+----------+---------\n a | integer | | not null | \nPartition of: idxpart DEFAULT\nIndexes:\n \"idxpart0_a_key\" UNIQUE CONSTRAINT, btree (a)\n\nIn other words, even though I said ALTER TABLE ONLY, this command went\nand modified the partition to have a not null constraint that it\ndidn't have before. (And yes, it scans the partition to verify that.)\n\nISTM that this is a bug, not a feature: if there's any point at\nall to saying ONLY in this context, it's that we're not supposed\nto be doing anything as expensive as adding a new constraint to\na child partition. No? So I think that this should have failed.\nWe need to require the partition(s) to already have attnotnull set.\nDoes anyone want to argue differently?\n\nNote that if we ever get around to tracking the inheritance status\nof attnotnull, this'd also require incrementing an inheritance counter\nfor attnotnull, meaning we'd need a stronger lock on the partition\nthan is needed just to check not-nullness. 
But we already need to\nincrement pg_constraint.coninhcount for the child's unique or pkey\nconstraint that we're co-opting, so that doesn't sound like a real\nproblem. (In practice it seems that this command takes AEL on\nthe child partition, which surprises me; shouldn't we be striving\nfor something less?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Apr 2019 15:16:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Odd behavior of partitioned ALTER TABLE ONLY ... ADD PRIMARY KEY"
},
{
"msg_contents": "On 2019-Apr-21, Tom Lane wrote:\n\n> ISTM that this is a bug, not a feature: if there's any point at\n> all to saying ONLY in this context, it's that we're not supposed\n> to be doing anything as expensive as adding a new constraint to\n> a child partition. No? So I think that this should have failed.\n\nHmm, yeah, this is not intentional and I agree that it shouldn't be\ndoing this.\n\n> We need to require the partition(s) to already have attnotnull set.\n\nSounds good to me, yes.\n\nDo you want me to see about this?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 21 Apr 2019 15:36:13 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Odd behavior of partitioned ALTER TABLE ONLY ... ADD PRIMARY KEY"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Apr-21, Tom Lane wrote:\n>> ISTM that this is a bug, not a feature: if there's any point at\n>> all to saying ONLY in this context, it's that we're not supposed\n>> to be doing anything as expensive as adding a new constraint to\n>> a child partition. No? So I think that this should have failed.\n\n> Hmm, yeah, this is not intentional and I agree that it shouldn't be\n> doing this.\n\n>> We need to require the partition(s) to already have attnotnull set.\n\n> Sounds good to me, yes.\n\n> Do you want me to see about this?\n\nIt's tied up in the other patch I'm working on, so I can deal with it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Apr 2019 15:40:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Odd behavior of partitioned ALTER TABLE ONLY ... ADD PRIMARY KEY"
}
] |
[
{
"msg_contents": "Hey everyone,\n\nI am writing a plpgsql function that (to greatly simplify) raises an\nexception with a formatted* message. Ideally, I should be able to call\nit with raise_exception('The person %I has only %I bananas.', 'Fred',\n8), which mimics the format(text, any[]) calling convention.\n\nHere is where I have encountered a limitation of PostgreSQL's design:\nhttps://www.postgresql.org/docs/11/datatype-pseudo.html mentions\nexplicitly that, \"At present most procedural languages forbid use of a\npseudo-type as an argument type\".\n\nMy reasoning is that I should be able to accept a value of some type if\nall I do is passing it to a function that accepts exactly that type,\nsuch as format(text, any[]). Given the technical reality, I assume that\nI wouldn't be able to do anything else with that value, but that is\nfine, since I don't have to do anything with it regardless.\n\nBR\nMichał \"phoe\" Herda\n\n*I do not want to use the obvious solution of\nraise_exception(format(...)) because the argument to that function is\nthe error ID that is then looked up in a table from which the error\nmessage and sqlstate are retrieved. My full code is in the attached SQL\nfile. Once it is executed:\n\nSELECT gateway_error('user_does_not_exist', '2'); -- works but is unnatural,\nSELECT gateway_error('user_does_not_exist', 2); -- is natural but\ndoesn't work.",
"msg_date": "Mon, 22 Apr 2019 00:03:06 +0200",
"msg_from": "=?UTF-8?Q?Micha=c5=82_=22phoe=22_Herda?= <phoe@disroot.org>",
"msg_from_op": true,
"msg_subject": "Allow any[] as input arguments for sql/plpgsql functions to mimic\n format()"
},
{
"msg_contents": "Hi\n\npo 22. 4. 2019 v 11:27 odesílatel Michał \"phoe\" Herda <phoe@disroot.org>\nnapsal:\n\n> Hey everyone,\n>\n> I am writing a plpgsql function that (to greatly simplify) raises an\n> exception with a formatted* message. Ideally, I should be able to call\n> it with raise_exception('The person %I has only %I bananas.', 'Fred',\n> 8), which mimics the format(text, any[]) calling convention.\n>\n> Here is where I have encountered a limitation of PostgreSQL's design:\n> https://www.postgresql.org/docs/11/datatype-pseudo.html mentions\n> explicitly that, \"At present most procedural languages forbid use of a\n> pseudo-type as an argument type\".\n>\n> My reasoning is that I should be able to accept a value of some type if\n> all I do is passing it to a function that accepts exactly that type,\n> such as format(text, any[]). Given the technical reality, I assume that\n> I wouldn't be able to do anything else with that value, but that is\n> fine, since I don't have to do anything with it regardless.\n>\n> BR\n> Michał \"phoe\" Herda\n>\n> *I do not want to use the obvious solution of\n> raise_exception(format(...)) because the argument to that function is\n> the error ID that is then looked up in a table from which the error\n> message and sqlstate are retrieved. My full code is in the attached SQL\n> file. Once it is executed:\n>\n> SELECT gateway_error('user_does_not_exist', '2'); -- works but is\n> unnatural,\n> SELECT gateway_error('user_does_not_exist', 2); -- is natural but\n> doesn't work.\n>\n\nIt is known problem, and fix is not easy.\n\nAny expressions inside plpgsql are simple queries like SELECT expr, and\nthey are executed same pipeline like queries.\n\nThe plans of these queries are stored and reused. Originally these plans\ndisallow any changes, now some changes are supported, but parameters should\nbe same all time. 
This is ensured by disallowing \"any\" type.\n\nOther polymorphic types are very static, so there is not described risk.\n\nProbably some enhancement can be in this are. The plan can be re-planed\nafter some change - but it can has lot of performance impacts. It is long\nopen topic. Some changes in direction to dynamic languages can increase\ncost of some future optimization to higher performance :-(.\n\nRegards\n\nPavel",
"msg_date": "Mon, 22 Apr 2019 12:09:55 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow any[] as input arguments for sql/plpgsql functions to mimic\n format()"
},
{
"msg_contents": "Hey!\n\nOK - thank you for the update and the explanation.\n\nMy reasoning in this case is - if we allow the any[] type to only be\npassed to other functions that accept any[], and disallow any kind of\nother operations on this array (such as retrieving its elements or\nmodifying it), I do not yet see any places where it might introduce a\nperformance regression. These arguments will literally be pass-only, and\nsince we are unable to interact with them in any other way, there will\nbe no possibility of type mismatches and therefore for performance\npenalties.\n\nThis approach puts all the heavy work on the plpgsql compiler - it will\nneed to ensure that, if there is a any[] or VARIADIC any variable in a\nfunction arglist, it must NOT be accessed in any way, and can only be\npassed to other functions which accept any[] or VARIADIC any.\n\nBR\n~phoe\n\nOn 22.04.2019 12:09, Pavel Stehule wrote:\n> Hi\n>\n> po 22. 4. 2019 v 11:27 odesílatel Michał \"phoe\" Herda\n> <phoe@disroot.org <mailto:phoe@disroot.org>> napsal:\n>\n> Hey everyone,\n>\n> I am writing a plpgsql function that (to greatly simplify) raises an\n> exception with a formatted* message. Ideally, I should be able to call\n> it with raise_exception('The person %I has only %I bananas.', 'Fred',\n> 8), which mimics the format(text, any[]) calling convention.\n>\n> Here is where I have encountered a limitation of PostgreSQL's design:\n> https://www.postgresql.org/docs/11/datatype-pseudo.html mentions\n> explicitly that, \"At present most procedural languages forbid use of a\n> pseudo-type as an argument type\".\n>\n> My reasoning is that I should be able to accept a value of some\n> type if\n> all I do is passing it to a function that accepts exactly that type,\n> such as format(text, any[]). 
Given the technical reality, I assume\n> that\n> I wouldn't be able to do anything else with that value, but that is\n> fine, since I don't have to do anything with it regardless.\n>\n> BR\n> Michał \"phoe\" Herda\n>\n> *I do not want to use the obvious solution of\n> raise_exception(format(...)) because the argument to that function is\n> the error ID that is then looked up in a table from which the error\n> message and sqlstate are retrieved. My full code is in the\n> attached SQL\n> file. Once it is executed:\n>\n> SELECT gateway_error('user_does_not_exist', '2'); -- works but is\n> unnatural,\n> SELECT gateway_error('user_does_not_exist', 2); -- is natural but\n> doesn't work.\n>\n>\n> It is known problem, and fix is not easy.\n>\n> Any expressions inside plpgsql are simple queries like SELECT expr,\n> and they are executed same pipeline like queries.\n>\n> The plans of these queries are stored and reused. Originally these\n> plans disallow any changes, now some changes are supported, but\n> parameters should be same all time. This is ensured by disallowing\n> \"any\" type.\n>\n> Other polymorphic types are very static, so there is not described risk.\n>\n> Probably some enhancement can be in this are. The plan can be\n> re-planed after some change - but it can has lot of performance\n> impacts. It is long open topic. Some changes in direction to dynamic\n> languages can increase cost of some future optimization to higher\n> performance :-(.\n>\n> Regards\n>\n> Pavel",
"msg_date": "Mon, 22 Apr 2019 19:20:02 +0200",
"msg_from": "=?UTF-8?Q?Micha=c5=82_=22phoe=22_Herda?= <phoe@disroot.org>",
"msg_from_op": true,
"msg_subject": "Re: Allow any[] as input arguments for sql/plpgsql functions to mimic\n format()"
},
{
"msg_contents": "Hi\n\npo 22. 4. 2019 v 19:20 odesílatel Michał \"phoe\" Herda <phoe@disroot.org>\nnapsal:\n\n> Hey!\n>\n> OK - thank you for the update and the explanation.\n>\n> My reasoning in this case is - if we allow the any[] type to only be\n> passed to other functions that accept any[], and disallow any kind of other\n> operations on this array (such as retrieving its elements or modifying it),\n> I do not yet see any places where it might introduce a performance\n> regression. These arguments will literally be pass-only, and since we are\n> unable to interact with them in any other way, there will be no possibility\n> of type mismatches and therefore for performance penalties.\n>\n> This approach puts all the heavy work on the plpgsql compiler - it will\n> need to ensure that, if there is a any[] or VARIADIC any variable in a\n> function arglist, it must NOT be accessed in any way, and can only be\n> passed to other functions which accept any[] or VARIADIC any.\n>\nPLpgSQL compiler knows nothing about a expressions - the compiler process\nonly plpgsql statements. Expressions are processed at runtime only by SQL\nparser and executor.\n\nIt is good to start with plpgsql codes -\nhttps://github.com/postgres/postgres/tree/master/src/pl/plpgsql/src\n\nyou can see there, so plpgsql is very different from other compilers. It\njust glue of SQL expressions or queries, that are black box for PLpgSQL\ncompiler and executor.\n\nJust any[] is not plpgsql way. 
For your case you should use overloading:\n\ncreate or replace function fx(fmt text, par text)\nreturns void as $$\nbegin\n raise notice '%', format(fmt, par);\nend;\n$$ language plpgsql;\n\ncreate or replace function fx(fmt text, par numeric)\nreturns void as $$\nbegin\n raise notice '%', format(fmt, par);\nend;\n$$ language plpgsql;\n\nThere is another limit: you cannot declare a function parameter type that\nenforces an explicit cast.\n\nIt could be nice (though it is a strange idea) to have some other flags for\narguments:\n\nCREATE OR REPLACE FUNCTION gateway_error(fmt text, par text FORCE EXPLICIT\nCAST)\n...\n\nRegards\n\nPavel\n\n\n> BR\n> ~phoe\n> On 22.04.2019 12:09, Pavel Stehule wrote:\n>\n> Hi\n>\n> On 22. 4. 2019 at 11:27, Michał \"phoe\" Herda <phoe@disroot.org>\n> wrote:\n>\n>> Hey everyone,\n>>\n>> I am writing a plpgsql function that (to greatly simplify) raises an\n>> exception with a formatted* message. Ideally, I should be able to call\n>> it with raise_exception('The person %I has only %I bananas.', 'Fred',\n>> 8), which mimics the format(text, any[]) calling convention.\n>>\n>> Here is where I have encountered a limitation of PostgreSQL's design:\n>> https://www.postgresql.org/docs/11/datatype-pseudo.html mentions\n>> explicitly that, \"At present most procedural languages forbid use of a\n>> pseudo-type as an argument type\".\n>>\n>> My reasoning is that I should be able to accept a value of some type if\n>> all I do is passing it to a function that accepts exactly that type,\n>> such as format(text, any[]). 
Given the technical reality, I assume that\n>> I wouldn't be able to do anything else with that value, but that is\n>> fine, since I don't have to do anything with it regardless.\n>>\n>> BR\n>> Michał \"phoe\" Herda\n>>\n>> *I do not want to use the obvious solution of\n>> raise_exception(format(...)) because the argument to that function is\n>> the error ID that is then looked up in a table from which the error\n>> message and sqlstate are retrieved. My full code is in the attached SQL\n>> file. Once it is executed:\n>>\n>> SELECT gateway_error('user_does_not_exist', '2'); -- works but is\n>> unnatural,\n>> SELECT gateway_error('user_does_not_exist', 2); -- is natural but\n>> doesn't work.\n>>\n>\n> It is known problem, and fix is not easy.\n>\n> Any expressions inside plpgsql are simple queries like SELECT expr, and\n> they are executed same pipeline like queries.\n>\n> The plans of these queries are stored and reused. Originally these plans\n> disallow any changes, now some changes are supported, but parameters should\n> be same all time. This is ensured by disallowing \"any\" type.\n>\n> Other polymorphic types are very static, so there is not described risk.\n>\n> Probably some enhancement can be in this are. The plan can be re-planed\n> after some change - but it can has lot of performance impacts. It is long\n> open topic. Some changes in direction to dynamic languages can increase\n> cost of some future optimization to higher performance :-(.\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n",
"msg_date": "Mon, 22 Apr 2019 19:53:29 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow any[] as input arguments for sql/plpgsql functions to mimic format()"
},
{
"msg_contents": "Michał \"phoe\" Herda <phoe@disroot.org> writes:\n> My reasoning in this case is - if we allow the any[] type to only be\n> passed to other functions that accept any[], and disallow any kind of\n> other operations on this array (such as retrieving its elements or\n> modifying it), I do not yet see any places where it might introduce a\n> performance regression.\n\nPerformance regressions are not the question here --- or at least, there\nare a lot of other questions to get past first.\n\n* plpgsql doesn't have any mechanism for restricting the use of a\nparameter in the way you suggest. It's not clear if it'd be practical\nto add one, given the arms-length way in which plpgsql does expression\nevaluation, and it seems likely that any such thing would be messy\nand bug-prone.\n\n* There's not actually any such type as any[]. There's anyarray,\nwhich is not what you're wishing for here because it'd constrain\nall the actual arguments to be the same type (or at least coercible\nto the same array element type). This is related to the next point...\n\n* format() isn't declared as taking any[]. It's really\n\nregression=# \\\df format\n                          List of functions\n   Schema   |  Name  | Result data type | Argument data types  | Type \n------------+--------+------------------+----------------------+------\n pg_catalog | format | text             | text                 | func\n pg_catalog | format | text             | text, VARIADIC \"any\" | func\n(2 rows)\n\n\"VARIADIC any\" is a very special hack, because unlike other VARIADIC\ncases, it doesn't result in collapsing the actual arguments into an\narray. 
(Again, it can't because they might not have a common type.)\nThe called function has to have special logic for examining its\narguments to find out how many there are and what their types are.\nformat() can do that because it's written in C, but a plpgsql function,\nnot so much.\n\n* We could imagine allowing a plpgsql function to be declared\n\"VARIADIC any\", and treating it as having N polymorphic arguments of\nindependent types, but then what? plpgsql has no notation for\naccessing such arguments (they wouldn't have names, to start with),\nnor for finding out how many there are, and it certainly has no\nnotation for passing the whole group of them on to some other function.\n\n\nI think the closest you'll be able to get here is to declare the\nplpgsql function as taking \"variadic text[]\" and then passing the\ntext array to format() with a VARIADIC marker. That should work\nmostly okay, although calls might need explicit casts to text\nin some cases.\n\nFWIW, it'd likely be easier to get around these problems in plperl\nor pltcl than in plpgsql, as those are both much less concerned\nwith the exact data types of their arguments than plpgsql, and\nmore accustomed to dealing with functions with variable argument\nlists. I don't know Python well enough to say whether the same\nis true of plpython.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Apr 2019 15:02:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow any[] as input arguments for sql/plpgsql functions to mimic format()"
}
] |
[
{
"msg_contents": "Andres has suggested that I work on teaching nbtree to accommodate\nvariable-width, logical table identifiers, such as those required for\nindirect indexes, or clustered indexes, where secondary indexes must\nuse a logical primary key value instead of a heap TID. I'm not\ncurrently committed to working on this as a project, but I definitely\ndon't want to make it any harder. This has caused me to think about\nthe problem as it relates to the new on-disk representation for v4\nnbtree indexes in Postgres 12. I do have a minor misgiving about one\nparticular aspect of what I came up with: The precise details of how\nwe represent heap TID in pivot tuples seems like it might make things\nharder than they need to be for a future logical/varwidth table\nidentifier project. This probably isn't worth doing anything about\nnow, but it seems worth discussing now, just in case.\n\nThe macro BTreeTupleGetHeapTID() can be used to get a pointer to an\nItemPointerData (an ItemPointer) for the heap TID column if any is\navailable, regardless of whether the tuple is a non-pivot tuple\n(points to the heap) or a pivot tuple (belongs in internal/branch\npages, and points to a block in the index, but needs to store heap TID\nas well). In the non-pivot case the ItemPointer points to the start of\nthe tuple (raw IndexTuple field), while in the pivot case it points to\nitup + IndexTupleSize() - sizeof(ItemPointerData). This interface\nseems like the right thing to me; it's simple, low-context, works just\nas well with INCLUDE indexes, and makes it fast to determine if there\nare any truncated suffix attributes. 
However, I don't like the way the\nalignment padding works -- there is often \"extra\" padding *between*\nthe last untrucated suffix attribute and the heap TID.\n\nIt seems like any MAXALIGN() padding should all be at the end -- the\nonly padding between tuples should be based on the *general*\nrequirement for the underlying data types, regardless of whether or\nnot we're dealing with the special heap TID representation in pivot\ntuples. We should eliminate what could be viewed as a special case.\nThis approach is probably going to be easier to generalize later.\nThere can be a design where the logical/varwidth attribute can be\naccessed either by using the usual index_getattr() stuff, or using an\ninterface like BTreeTupleGetHeapTID() to get to it quickly. We'd have\nto store an offset to the final/identifier attribute in the header to\nmake that work, because we couldn't simply assume a particular width\n(like 6 bytes), but that seems straightforward. (I imagine that\nthere'd be less difference between pivot and non-pivot tuples with\nvarwidth identifiers than there are currently with heap TID, since we\nwon't have to worry about pg_upgrade.)\n\nnbtinsert.c is very MAXALIGN()-heavy, and currently always represents\nthat index tuples have a MAXALIGN()'d size, but that doesn't seem\nnecessary or desirable to me. After all, we don't do that within\nheapam -- we can just rely on the bufpage.c routines to allocate a\nMAXALIGN()'d space for the whole tuple, while still making the lp_len\nfield in the line pointer use the original size (i.e. size with\nun-MAXALIGN()'ed tuple data area). I've found that it's quite possible\nto get the nbtree code to store the tuple size (lp_len and redundant\nIndexTupleSize() representation) this way, just like heapam always\nhas. 
This has some useful consequences: BTreeTupleGetHeapTID()\ncontinues to work with the special pivot tuple representation, while\n_bt_truncate() never \"puts padding in the wrong place\" when it must\nadd a heap TID due to there being many duplicates, and split point\nthat avoids doing that (that \"truncates the heap TID attribute\"). I\ncould make this work without breaking the regression tests in about 10\nminutes, which is at least encouraging (it was a bit tricky, though).\n\nThis also results in an immediate though small benefit for v4 nbtree\nindexes: _bt_truncate() produces smaller pivot tuples in a few cases.\nFor example, indexes with one or two boolean fields will have pivot\ntuples that are 15 bytes and 16 bytes in length respectively,\noccupying 16 bytes of tuple space on internal pages. The saving comes\nbecause we can use the alignment padding hole, that was empty in the\noriginal non-pivot index tuple that the new pivot tuple is to be\nformed from. Currently, the size of these pivot tuples would be 24\nbytes, so we're occasionally saving a MAXALIGN() quantum in space this\nway. It is unlikely that anyone would actually care very much about\nthese kinds of space savings, but at the same time it feels more\nelegant to me. The heap TID may not have a pg_attribute entry, but\nISTM that the on-disk representation should not have padding \"in the\nwrong place\", on general principle.\n\nThoughts?\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 21 Apr 2019 17:46:09 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Thoughts on nbtree with logical/varwidth table identifiers, v12 on-disk representation"
},
{
"msg_contents": "Greetings,\n\n* Peter Geoghegan (pg@bowt.ie) wrote:\n> Andres has suggested that I work on teaching nbtree to accommodate\n> variable-width, logical table identifiers, such as those required for\n> indirect indexes, or clustered indexes, where secondary indexes must\n> use a logical primary key value instead of a heap TID. I'm not\n> currently committed to working on this as a project, but I definitely\n> don't want to make it any harder. This has caused me to think about\n> the problem as it relates to the new on-disk representation for v4\n> nbtree indexes in Postgres 12. I do have a minor misgiving about one\n> particular aspect of what I came up with: The precise details of how\n> we represent heap TID in pivot tuples seems like it might make things\n> harder than they need to be for a future logical/varwidth table\n> identifier project. This probably isn't worth doing anything about\n> now, but it seems worth discussing now, just in case.\n\nThis seems like it would be helpful for global indexes as well, wouldn't\nit?\n\n> This also results in an immediate though small benefit for v4 nbtree\n> indexes: _bt_truncate() produces smaller pivot tuples in a few cases.\n> For example, indexes with one or two boolean fields will have pivot\n> tuples that are 15 bytes and 16 bytes in length respectively,\n> occupying 16 bytes of tuple space on internal pages. The saving comes\n> because we can use the alignment padding hole, that was empty in the\n> original non-pivot index tuple that the new pivot tuple is to be\n> formed from. Currently, the size of these pivot tuples would be 24\n> bytes, so we're occasionally saving a MAXALIGN() quantum in space this\n> way. It is unlikely that anyone would actually care very much about\n> these kinds of space savings, but at the same time it feels more\n> elegant to me. 
The heap TID may not have a pg_attribute entry, but\n> ISTM that the on-disk representation should not have padding \"in the\n> wrong place\", on general principle.\n> \n> Thoughts?\n\nI agree with trying to avoid having padding 'in the wrong place' and if\nit makes some indexes smaller, great, even if they're unlikely to be\ninteresting in the vast majority of cases, they may still exist out\nthere. Of course, this is provided that it doesn't overly complicate\nthe code, but it sounds like it wouldn't be too bad in this case.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 22 Apr 2019 11:36:09 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on nbtree with logical/varwidth table identifiers, v12 on-disk representation"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-21 17:46:09 -0700, Peter Geoghegan wrote:\n> Andres has suggested that I work on teaching nbtree to accommodate\n> variable-width, logical table identifiers, such as those required for\n> indirect indexes, or clustered indexes, where secondary indexes must\n> use a logical primary key value instead of a heap TID.\n\nI think it's two more cases:\n\n- table AMs that want to support tables that are bigger than 32TB. That\n used to be unrealistic, but it's not anymore. Especially when the need\n to VACUUM etc is largely removed / reduced.\n- global indexes (for cross-partition unique constraints and such),\n which need a partition identifier as part of the tid (or as part of\n the index key, but I think that actually makes interaction with\n indexam from other layers more complicated - the inside of the index\n maybe may want to represent it as a column, but to the outside that\n ought not to be visible)\n\n\n\n> Thoughts?\n\nSeems reasonable to me.\n\nI, more generally, wonder if there's not a case to squeeze out more\npadding than \"just\" what you describe (since we IIRC don't frequently\nkeep pointers into such tuples anyway, and definitely don't for byval\nattrs). But that's very likely better done separately.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Apr 2019 09:35:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on nbtree with logical/varwidth table identifiers, v12 on-disk representation"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 8:36 AM Stephen Frost <sfrost@snowman.net> wrote:\n> This seems like it would be helpful for global indexes as well, wouldn't\n> it?\n\nYes, though that should probably work by reusing what we already do\nwith heap TID (use standard IndexTuple fields on the leaf level for\nheap TID), plus an additional identifier for the partition number that\nis located at the physical end of the tuple. IOW, I think that this\nmight benefit from a design that is half way between what we already\ndo with heap TIDs and what we would be required to do to make varwidth\nlogical row identifiers in tables work -- the partition number is\nvarwidth, though often only a single byte.\n\n> I agree with trying to avoid having padding 'in the wrong place' and if\n> it makes some indexes smaller, great, even if they're unlikely to be\n> interesting in the vast majority of cases, they may still exist out\n> there. Of course, this is provided that it doesn't overly complicate\n> the code, but it sounds like it wouldn't be too bad in this case.\n\nHere is what it took:\n\n* Removed the \"conservative\" MAXALIGN() within index_form_tuple(),\nbringing it in line with heap_form_tuple(), which only MAXALIGN()s so\nthat the first attribute in the tuple's data area can safely be accessed\non alignment-picky platforms, but doesn't do the same with data_len.\n\n* Removed most of the MAXALIGN()s from nbtinsert.c, except one that\nconsiders if a page split is required.\n\n* Didn't change the nbtsplitloc.c code, because we need to assume\nMAXALIGN()'d space quantities there. We continue to not trust the\nreported tuple length to be MAXALIGN()'d, which is now essential\nrather than just defensive.\n\n* Removed MAXALIGN()s within _bt_truncate(), and SHORTALIGN()'d the\nwhole tuple size in the case where the new pivot tuple requires a heap TID\nrepresentation. 
We access TIDs as three 2-byte integers, so this is\nnecessary for alignment-picky platforms.\n\nI will pursue this as a project for PostgreSQL 13. It doesn't affect\non-disk compatibility, because BTreeTupleGetHeapTID() works just as\nwell with either the existing scheme, or this new one. Having the\n\"real\" tuple length available will make it easier to implement \"true\"\nsuffix truncation, where we truncate *within* a text attribute (i.e.\ngenerate a new, shorter value using new opclass infrastructure).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 22 Apr 2019 10:16:16 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on nbtree with logical/varwidth table identifiers, v12 on-disk representation"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 9:35 AM Andres Freund <andres@anarazel.de> wrote:\n> I, more generally, wonder if there's not a case to squeeze out more\n> padding than \"just\" what you describe (since we IIRC don't frequently\n> keep pointers into such tuples anyway, and definitely don't for byval\n> attrs). But that's very likely better done separately.\n\nThere is one way that that is kind of relevant here. The current\nrequirement seems to be that *any* sort of tuple header be\nMAXALIGN()'d, because in the worst case the first attribute needs to\nbe accessed at a MAXALIGN()'d boundary on an alignment-picky platform.\nThat isn't so bad today, since we usually find a reasonably good way\nto put those 8 bytes (or 23/24 bytes in the case of heap tuples) to\nuse. However, with varwidth table identifiers, the only use of those 8\nbytes that I can think of is the offset to the identifier (or perhaps\nits length), plus the usual t_info stuff. We'd almost invariably waste\n4 or 5 bytes, which seems like a problem to me.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 22 Apr 2019 10:24:24 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on nbtree with logical/varwidth table identifiers, v12 on-disk representation"
},
{
"msg_contents": "Greetings,\n\n* Peter Geoghegan (pg@bowt.ie) wrote:\n> On Mon, Apr 22, 2019 at 8:36 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > This seems like it would be helpful for global indexes as well, wouldn't\n> > it?\n> \n> Yes, though that should probably work by reusing what we already do\n> with heap TID (use standard IndexTuple fields on the leaf level for\n> heap TID), plus an additional identifier for the partition number that\n> is located at the physical end of the tuple. IOW, I think that this\n> might benefit from a design that is half way between what we already\n> do with heap TIDs and what we would be required to do to make varwidth\n> logical row identifiers in tables work -- the partition number is\n> varwidth, though often only a single byte.\n\nYes, global indexes for partitioned tables could potentially be simpler\nthan the logical row identifiers, but maybe it'd be useful to just have\none implementation based around logical row identifiers which ends up\nworking for global indexes as well as the other types of indexes and\ntable access methods.\n\nIf we thought that the 'single-byte' partition number covered enough\nuse-cases then we could possibly consider supporting them for partitions\nby just 'stealing' a byte from BlockIdData and having the per-partition\nsize be limited to 4TB when a global index exists on the partitioned\ntable. That's certainly not an ideal limitation but it might be\nappealing to some users who really would like global indexes and could\npossibly require less to implement, though there's a lot of other things\nthat would have to be done to have global indexes. 
Anyhow, just some\nrandom thoughts that I figured I'd share in case there might be\nsomething there worth thinking about.\n\n> > I agree with trying to avoid having padding 'in the wrong place' and if\n> > it makes some indexes smaller, great, even if they're unlikely to be\n> > interesting in the vast majority of cases, they may still exist out\n> > there. Of course, this is provided that it doesn't overly complicate\n> > the code, but it sounds like it wouldn't be too bad in this case.\n> \n> Here is what it took:\n> \n> * Removed the \"conservative\" MAXALIGN() within index_form_tuple(),\n> bringing it in line with heap_form_tuple(), which only MAXALIGN()s so\n> that the first attribute in tuple's data area can safely be accessed\n> on alignment-picky platforms, but doesn't do the same with data_len.\n> \n> * Removed most of the MAXALIGN()s from nbtinsert.c, except one that\n> considers if a page split is required.\n> \n> * Didn't change the nbtsplitloc.c code, because we need to assume\n> MAXALIGN()'d space quantities there. We continue to not trust the\n> reported tuple length to be MAXALIGN()'d, which is now essentially\n> rather than just defensive.\n> \n> * Removed MAXALIGN()s within _bt_truncate(), and SHORTALIGN()'d the\n> whole tuple size in the case where new pivot tuple requires a heap TID\n> representation. We access TIDs as 3 2 byte integers, so this is\n> necessary for alignment-picky platforms.\n> \n> I will pursue this as a project for PostgreSQL 13. It doesn't affect\n> on-disk compatibility, because BTreeTupleGetHeapTID() works just as\n> well with either the existing scheme, or this new one. Having the\n> \"real\" tuple length available will make it easier to implement \"true\"\n> suffix truncation, where we truncate *within* a text attribute (i.e.\n> generate a new, shorter value using new opclass infrastructure).\n\nThis sounds pretty good to me, though I'm not nearly as close to the\ncode there as you are.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 22 Apr 2019 13:32:04 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on nbtree with logical/varwidth table identifiers, v12 on-disk representation"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 10:32 AM Stephen Frost <sfrost@snowman.net> wrote:\n> Yes, global indexes for partitioned tables could potentially be simpler\n> than the logical row identifiers, but maybe it'd be useful to just have\n> one implementation based around logical row identifiers which ends up\n> working for global indexes as well as the other types of indexes and\n> table access methods.\n\nMaybe so. I think that we'd have to actually try it out to know for sure.\n\n> If we thought that the 'single-byte' partition number covered enough\n> use-cases then we could possibly consider supporting them for partitions\n> by just 'stealing' a byte from BlockIdData and having the per-partition\n> size be limited to 4TB when a global index exists on the partitioned\n> table.\n\nI don't think that that would make it any easier to implement.\n\n> This sounds pretty good to me, though I'm not nearly as close to the\n> code there as you are.\n\nI'm slightly concerned that I may have broken an index_form_tuple()\ncaller from some other access method, but they all seem to not trust\nindex_form_tuple() to have MAXALIGN()'d on their behalf, just like\nnbtree (nbtree won't when I'm done, though, because it will actively\ntry to preserve the \"real\" tuple size). It's convenient to me that no\ncaller seems to rely on the index_form_tuple() MAXALIGN() that I want\nto remove.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 22 Apr 2019 10:47:07 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on nbtree with logical/varwidth table identifiers, v12 on-disk representation"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 1:16 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Yes, though that should probably work by reusing what we already do\n> with heap TID (use standard IndexTuple fields on the leaf level for\n> heap TID), plus an additional identifier for the partition number that\n> is located at the physical end of the tuple. IOW, I think that this\n> might benefit from a design that is half way between what we already\n> do with heap TIDs and what we would be required to do to make varwidth\n> logical row identifiers in tables work -- the partition number is\n> varwidth, though often only a single byte.\n\nI think we're likely to have a problem with global indexes + DETACH\nPARTITION that is similar to the problem we now have with DROP COLUMN.\n\nIf you drop or detach a partition, you can either (a) perform, as part\nof that operation, a scan of every global index to remove all\nreferences to the former partition, or (b) tell each global indexes\nthat all references to that partition number ought to be regarded as\ndead index tuples. (b) makes detaching partitions faster and (a)\nseems hard to make rollback-safe, so I'm guessing we'll end up with\n(b).\n\nBut that means that if someone repeatedly attaches and detaches\npartitions, the partition numbers could get quite big. And even\nwithout that somebody could have a lot of partitions. So while I do\nnot disagree that the partition number could be variable-width and\nsometimes only 1 payload byte, I think we had better make sure to\ndesign the system in such a way that it scales to at least 4 payload\nbytes, because I have no faith that anything less will be sufficient\nfor our demanding user base.\n\nWe don't want people to be able to exhaust the supply of partition\nnumbers the way they can exhaust the supply of attribute numbers by\nadding and dropping columns repeatedly.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 24 Apr 2019 08:22:04 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on nbtree with logical/varwidth table identifiers, v12 on-disk representation"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 5:22 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> If you drop or detach a partition, you can either (a) perform, as part\n> of that operation, a scan of every global index to remove all\n> references to the former partition, or (b) tell each global indexes\n> that all references to that partition number ought to be regarded as\n> dead index tuples. (b) makes detaching partitions faster and (a)\n> seems hard to make rollback-safe, so I'm guessing we'll end up with\n> (b).\n\nI agree that (b) is the way to go.\n\n> We don't want people to be able to exhaust the supply of partition\n> numbers the way they can exhaust the supply of attribute numbers by\n> adding and dropping columns repeatedly.\n\nI agree that a partition numbering system needs to be able to\naccommodate arbitrarily-many partitions over time. It wouldn't have\noccurred to me to do it any other way. It is far far easier to make\nthis work than it would be to retrofit varwidth attribute numbers. We\nwon't have to worry about the HeapTupleHeaderGetNatts()\nrepresentation. At the same time, nothing stops us from representing\npartition numbers in a simpler though less space efficient way in\nsystem catalogs.\n\nThe main point of having global indexes is to be able to push down the\npartition number and use it during index scans. We can store the\npartition number at the end of the tuple on leaf pages, so that it's\neasily accessible (important for VACUUM), while continuing to use the\nIndexTuple fields for heap TID. On internal pages, the IndexTuple\nfields must be used for the downlink (block number of child), so both\npartition number and heap TID have to go towards the end of the tuples\n(this happens just with heap TID on Postgres 12). 
Of course, suffix\ntruncation will manage to consistently get rid of both in most cases,\nespecially when the global index is a unique index.\n\nThe hard part is how to do varwidth encoding for space-efficient\npartition numbers while continuing to use IndexTuple fields for heap\nTID on the leaf level, *and* also having a\nBTreeTupleGetHeapTID()-style macro to get partition number without\nwalking the entire index tuple. I suppose you could make the byte at\nthe end of the tuple indicate that there are in fact 31 bits total\nwhen its high bit is set -- otherwise it's a 7 bit integer. Something\nlike that may be the way to go. The alignment rules seem to make it\nworthwhile to keep the heap TID in the tuple header; it seems\ninherently necessary to have a MAXALIGN()'d tuple header, so finding a\nway to consistently put the first MAXALIGN() quantum to good use seems\nwise.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 24 Apr 2019 10:43:57 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on nbtree with logical/varwidth table identifiers, v12\n on-disk representation"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 10:43 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> The hard part is how to do varwidth encoding for space-efficient\n> partition numbers while continuing to use IndexTuple fields for heap\n> TID on the leaf level, *and* also having a\n> BTreeTupleGetHeapTID()-style macro to get partition number without\n> walking the entire index tuple. I suppose you could make the byte at\n> the end of the tuple indicate that there are in fact 31 bits total\n> when its high bit is set -- otherwise it's a 7 bit integer. Something\n> like that may be the way to go. The alignment rules seem to make it\n> worthwhile to keep the heap TID in the tuple header; it seems\n> inherently necessary to have a MAXALIGN()'d tuple header, so finding a\n> way to consistently put the first MAXALIGN() quantum to good use seems\n> wise.\n\nIt's even harder than that if you want to make it possible to walk the\ntuple from either direction, which also seems useful. You want to be\nable to jump straight to the end of the tuple to get the partition\nnumber, while at the same time being able to access it in the usual\nway, as if it was just another attribute.\n\nUgh, this is a mess. It would be so much easier if we had a tuple\nrepresentation that stored attribute offsets right in the header.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 24 Apr 2019 11:02:43 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on nbtree with logical/varwidth table identifiers, v12\n on-disk representation"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 9:35 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-04-21 17:46:09 -0700, Peter Geoghegan wrote:\n> > Andres has suggested that I work on teaching nbtree to accommodate\n> > variable-width, logical table identifiers, such as those required for\n> > indirect indexes, or clustered indexes, where secondary indexes must\n> > use a logical primary key value instead of a heap TID.\n\nI'm revisiting this thread now because it may have relevance to the\nnbtree deduplication patch. If nothing else, the patch further commits\nus to the current heap TID format by making assumptions about the\nwidth of posting lists with 6 byte TIDs. Though I suppose a posting\nlist almost has to have fixed width TIDs to perform acceptably.\nDatabase systems with a varwidth TID format probably don't support\nanything like posting lists.\n\n> I think it's two more cases:\n>\n> - table AMs that want to support tables that are bigger than 32TB. That\n> used to be unrealistic, but it's not anymore. Especially when the need\n> to VACUUM etc is largely removed / reduced.\n\nCan we steal some bits that are currently used for offset number\ninstead? 16 bits is far more than we ever need to use for heap offset\nnumbers in practice. (I wonder if this would also have benefits for\nthe representation of in-memory bitmaps?)\n\n> - global indexes (for cross-partition unique constraints and such),\n> which need a partition identifier as part of the tid (or as part of\n> the index key, but I think that actually makes interaction with\n> indexam from other layers more complicated - the inside of the index\n> maybe may want to represent it as a column, but to the outside that\n> ought not to be visible)\n\nCan we just use an implementation level attribute for this? Would it\nbe so bad if we weren't able to jump straight to the partition number\nwithout walking through the tuple when the tuple has varwidth\nattributes? 
(If that isn't acceptable, then we can probably make it\nwork for global indexes without having to generalize everything.)\n\nIn general, Postgres heap TIDs are not stable identifiers of rows\n(they only stably identify HOT chains). This is not the case in other\ndatabase systems, which may go to great trouble to make it possible to\nassume that TIDs are stable over time (Andy Pavlo says that our TIDs\nare physical while Oracle's are logical -- I don't like that\nterminology, but know what he means). Generalizing the nbtree AM to be\nable to work with an arbitrary type of table row identifier that isn't\nat all like a TID raises tricky definitional questions. It would have\nto work in a way that made the new variety of table row identifier\nstable, which is a significant new requirement (and one that zheap is\nclearly not interested in).\n\nIf we're willing to support something like a clustered index in\nnbtree, I wonder what we need to do to make things like numeric\ndisplay scale still work. For bonus points, describe how unique\nchecking works with a secondary unique index.\n\nI am not suggesting that these issues are totally insurmountable. What\nI am saying is this: If we already had \"stable logical\" TIDs instead\nof \"mostly physical TIDs\", then generalizing nbtree index tuples to\nstore arbitrary table row identifiers would more or less be all about\nthe data structure managed by nbtree. But that isn't the case, and\nthat strongly discourages me from working on this -- we shouldn't talk\nabout the problem as if it is mostly just a matter of settling on the\nbest index tuple format. Frankly I am not very enthusiastic about\nworking on a project that has unclear scope and unclear benefits for\nusers.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 30 Oct 2019 11:33:21 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on nbtree with logical/varwidth table identifiers, v12\n on-disk representation"
},
{
"msg_contents": "Hi,\n\nOn 2019-10-30 11:33:21 -0700, Peter Geoghegan wrote:\n> On Mon, Apr 22, 2019 at 9:35 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2019-04-21 17:46:09 -0700, Peter Geoghegan wrote:\n> > > Andres has suggested that I work on teaching nbtree to accommodate\n> > > variable-width, logical table identifiers, such as those required for\n> > > indirect indexes, or clustered indexes, where secondary indexes must\n> > > use a logical primary key value instead of a heap TID.\n> \n> I'm revisiting this thread now because it may have relevance to the\n> nbtree deduplication patch. If nothing else, the patch further commits\n> us to the current heap TID format by making assumptions about the\n> width of posting lists with 6 byte TIDs.\n\nI'd much rather not entrench this further, even leaving global indexes\naside. The 4 byte block number is a significant limitation for heap\ntables too, and we should lift that at some point not too far away.\nThen there are also other AMs that could really use a wider tid space.\n\n\n> Though I suppose a posting list almost has to have fixed width TIDs to\n> perform acceptably.\n\nHm. It's not clear to me why that is?\n\n\n> > I think it's two more cases:\n> >\n> > - table AMs that want to support tables that are bigger than 32TB. That\n> > used to be unrealistic, but it's not anymore. Especially when the need\n> > to VACUUM etc is largely removed / reduced.\n> \n> Can we steal some bits that are currently used for offset number\n> instead? 16 bits is far more than we ever need to use for heap offset\n> numbers in practice.\n\nI think that's a terrible idea. For one, some AMs will have significantly\nhigher limits, especially taking compression and larger block sizes into\naccount. Also not all AMs need identifiers tied so closely to a disk\nposition, e.g. zedstore does not. 
We shouldn't hack ever more\ninformation into the offset, given that background.\n\n\n> (I wonder if this would also have benefits for the representation of\n> in-memory bitmaps?)\n\nHm. Not sure how?\n\n\n> > - global indexes (for cross-partition unique constraints and such),\n> > which need a partition identifier as part of the tid (or as part of\n> > the index key, but I think that actually makes interaction with\n> > indexam from other layers more complicated - the inside of the index\n> > maybe may want to represent it as a column, but to the outside that\n> > ought not to be visible)\n> \n> Can we just use an implementation level attribute for this? Would it\n> be so bad if we weren't able to jump straight to the partition number\n> without walking through the tuple when the tuple has varwidth\n> attributes? (If that isn't acceptable, then we can probably make it\n> work for global indexes without having to generalize everything.)\n\nHaving to walk through the index tuple might be acceptable - in all\nlikelihood we'll have to do so anyway. It does however not *really*\nresolve the issue that we still need to pass something tid back from the\nindexam, so we can fetch the associated tuple from the heap, or add the\ntid to a bitmap. But that could be done separately from the index\ninternal data structures.\n\n\n> Generalizing the nbtree AM to be able to work with an arbitrary type\n> of table row identifier that isn't at all like a TID raises tricky\n> definitional questions. It would have to work in a way that made the\n> new variety of table row identifier stable, which is a significant new\n> requirement (and one that zheap is clearly not interested in).\n\nHm. I don't see why different types of TID would imply them being\nstable?\n\n\n> I am not suggesting that these issues are totally insurmountable. 
What\n> I am saying is this: If we already had \"stable logical\" TIDs instead\n> of \"mostly physical TIDs\", then generalizing nbtree index tuples to\n> store arbitrary table row identifiers would more or less be all about\n> the data structure managed by nbtree. But that isn't the case, and\n> that strongly discourages me from working on this -- we shouldn't talk\n> about the problem as if it is mostly just a matter of settling of the\n> best index tuple format.\n\n\n\n> Frankly I am not very enthusiastic about working on a project that has\n> unclear scope and unclear benefits for users.\n\nWhy would properly supporting AMs like zedstore, global indexes,\n\"indirect\" indexes etc benefit users?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 30 Oct 2019 12:02:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on nbtree with logical/varwidth table identifiers, v12\n on-disk representation"
},
{
"msg_contents": "On Wed, Oct 30, 2019 at 12:03 PM Andres Freund <andres@anarazel.de> wrote:\n> I'd much rather not entrench this further, even leaving global indexes\n> aside. The 4 byte block number is a significant limitation for heap\n> tables too, and we should lift that at some point not too far away.\n> Then there's also other AMs that could really use a wider tid space.\n\nI agree that that limitation is a problem that should be fixed before\ntoo long. But the solution probably shouldn't be a radical departure\nfrom what we have today. The vast majority of tables are not affected\nby the TID space limitation. Those tables that are can tolerate\nsupporting fixed width \"long\" TIDs (perhaps 8 bytes long?) that are\nused for the higher portion of the heap TID space alone.\n\nThe idea here is that TID is varwidth, but actually uses the existing\nheap TID format most of the time. For larger tables it uses a wider\nfixed width struct that largely works the same as the old 6 byte\nstruct.\n\n> > Though I suppose a posting list almost has to have fixed width TIDs to\n> > perform acceptably.\n>\n> Hm. It's not clear to me why that is?\n\nRandom access matters for things like determining the correct offset\nto split a posting list at. This is needed in the event of an\noverlapping insertion of a new duplicate tuple whose heap TID falls\nwithin the range of the posting list. Also, I want to be able to scan\nposting lists backwards for backwards scans. In general, fixed-width\nTIDs make the page space accounting fairly simple, which matters a lot\nin nbtree.\n\nI can support varwidth TIDs in the future pretty well if the TID\ndoesn't have to be *arbitrarily* wide. 
Individual posting lists can\nthemselves either use 6 byte or 8 byte TIDs, preserving the ability to\naccess a posting list entry at random using simple pointer arithmetic.\nThis makes converting over index AMs a lot less painful -- it'll be\npretty easy to avoid mixing together the 6 byte and 8 byte structs.\n\n> > Can we steal some bits that are currently used for offset number\n> > instead? 16 bits is far more than we ever need to use for heap offset\n> > numbers in practice.\n>\n> I think that's a terrible idea. For one, some AMs will have significant\n> higher limits, especially taking compression and larger block sizes into\n> account. Also not all AMs need identifiers tied so closely to a disk\n> position, e.g. zedstore does not. We shouldn't hack evermore\n> information into the offset, given that background.\n\nFair enough, but somebody needs to cut some scope here.\n\n> Having to walk through the index tuple might be acceptable - in all\n> likelihood we'll have to do so anyway. It does however not *really*\n> resolve the issue that we still need to pass something tid back from the\n> indexam, so we can fetch the associated tuple from the heap, or add the\n> tid to a bitmap. But that could be done separately from the index\n> internal data structures.\n\nI agree.\n\n> > Generalizing the nbtree AM to be able to work with an arbitrary type\n> > of table row identifier that isn't at all like a TID raises tricky\n> > definitional questions.\n\n> Hm. I don't see why a different types of TID would imply them being\n> stable?\n\nIt is unclear what it means. I would like to see a sketch of a design\nfor varwidth TIDs that balances everybody's concerns. I don't think\n\"indirect\" indexes are a realistic goal for Postgres. 
VACUUM is just\ntoo messy there (as is any other garbage collection mechanism).\nZedstore and Zheap don't change this.\n\n> > Frankly I am not very enthusiastic about working on a project that has\n> > unclear scope and unclear benefits for users.\n>\n> Why would properly supporting AMs like zedstore, global indexes,\n> \"indirect\" indexes etc benefit users?\n\nGlobal indexes seem doable.\n\nI don't see how \"indirect\" indexes can ever work in Postgres. I don't\nknow exactly what zedstore needs here, but maybe it can work well with\na less ambitious design for varwidth TIDs along the lines I've\nsketched.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 30 Oct 2019 12:37:50 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on nbtree with logical/varwidth table identifiers, v12\n on-disk representation"
},
{
"msg_contents": "Hi,\n\nOn 2019-10-30 12:37:50 -0700, Peter Geoghegan wrote:\n> On Wed, Oct 30, 2019 at 12:03 PM Andres Freund <andres@anarazel.de> wrote:\n> > I'd much rather not entrench this further, even leaving global indexes\n> > aside. The 4 byte block number is a significant limitation for heap\n> > tables too, and we should lift that at some point not too far away.\n> > Then there's also other AMs that could really use a wider tid space.\n> \n> I agree that that limitation is a problem that should be fixed before\n> too long. But the solution probably shouldn't be a radical departure\n> from what we have today. The vast majority of tables are not affected\n> by the TID space limitation. Those tables that are can tolerate\n> supporting fixed width \"long\" TIDs (perhaps 8 bytes long?) that are\n> used for the higher portion of the heap TID space alone.\n> \n> The idea here is that TID is varwidth, but actually uses the existing\n> heap TID format most of the time. For larger tables it uses a wider\n> fixed width struct that largely works the same as the old 6 byte\n> struct.\n\nI assume you mean that the index would dynamically recognize when it\nneeds the wider tids (\"for the higher portion\")? If so, yea, that makes\nsense to me. Would that need to encode the 6/8byteness of a tid on a\nper-element basis? Or are you thinking of being able to upconvert in a\ngeneral way?\n\n\n> > > Though I suppose a posting list almost has to have fixed width TIDs to\n> > > perform acceptably.\n> >\n> > Hm. It's not clear to me why that is?\n> \n> Random access matters for things like determining the correct offset\n> to split a posting list at. This is needed in the event of an\n> overlapping insertion of a new duplicate tuple whose heap TID falls\n> within the range of the posting list. Also, I want to be able to scan\n> posting lists backwards for backwards scans. 
In general, fixed-width\n> TIDs make the page space accounting fairly simple, which matters a lot\n> in nbtree.\n\nIf we had variable width tids varying by more than 2 bytes, it might be\nreasonable for cases like this to store all tids padded to the width of\nthe widest tid. I think that'd still be pretty OK, because you'd only\npay the price if you actually have long tids, and only on pages where\nsuch tids are referenced. Obviously that means that such a posting list\ncould grow more than by the size of the inserted tid upon insertion, but\nthat might still be OK? That'd require storing the width of the posting\nlist elements somewhere, unfortunately - not sure if that's a problem?\n\nISTM if we had varint style tids, we'd probably still save space on\naverage for today's heap that way. How important is it for you to be\nable to split out the \"block offset\" and \"page offset\" bits?\n\nI'm somewhat inclined to think that tids should just be a varint (or\nmaybe two, if we want to make it slightly simpler to keep compat to how\nthey look today), and that the AM internally makes sense of that.\n\n\n> I can support varwidth TIDs in the future pretty well if the TID\n> doesn't have to be *arbitrarily* wide.\n\nI think it'd be perfectly reasonable to have a relatively low upper\nlimit for tid width. Not 8 bytes, but also not 200 bytes.\n\n\n> Individual posting lists can themselves either use 6 byte or 8 byte\n> TIDs, preserving the ability to access a posting list entry at random\n> using simple pointer arithmetic. This makes converting over index AMs\n> a lot less painful -- it'll be pretty easy to avoid mixing together\n> the 6 byte and 8 byte structs.\n\nWith varint style tids as I suggested, that ought to be fairly simple?\n\n\n> I don't think \"indirect\" indexes are a realistic goal for\n> Postgres. VACUUM is just too messy there (as is any other garbage\n> collection mechanism). 
Zedstore and Zheap don't change this.\n\nHm, I don't think there's a fundamental problem, but let's leave that\naside, there's enough other reasons to improve this.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 30 Oct 2019 13:25:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on nbtree with logical/varwidth table identifiers, v12\n on-disk representation"
},
{
"msg_contents": "On Wed, Oct 30, 2019 at 1:25 PM Andres Freund <andres@anarazel.de> wrote:\n> I assume you mean that the index would dynamically recognize when it\n> needs the wider tids (\"for the higher portion\")? If so, yea, that makes\n> sense to me. Would that need to encode the 6/8byteness of a tid on a\n> per-element basis? Or are you thinking of being able to upconvert in a\n> general way?\n\nI think that index access methods should continue to control the\nlayout of TIDs/item pointers, while mostly just treating them as\nfixed-width integers. Maybe you just have 6 and 8 byte structs, or\nmaybe you also have a 16 byte struct, but it isn't really variable\nlength to the index AM (it's more like a union). It becomes the index\nAM's responsibility to remember which TID width applies at a per tuple\nlevel (or per posting list level, etc). In general, index AMs have to\n\"supply their own status bits\", which should be fine (the alternative\nis that they refuse to support anything other than the traditional 6\nbyte TID format).\n\nTable AMs don't get to supply their own operator class for sorting\nthese integers -- they had better be happy with the TID sort order\nthat is baked in (ascending int order) in the context of index scans\nthat return duplicates, and things like that. There is probably\ngeneric infrastructure for up-converting, too, but the index AM is\nfundamentally in the driving seat with this design.\n\n> If we had variable width tids varying by more than 2 bytes, it might be\n> reasonable for cases like this to store all tids padded to the width of\n> the widest tid. I think that'd still be pretty OK, because you'd only\n> pay the price if you actually have long tids, and only on pages where\n> such tids are referenced. Obviously that means that such a posting list\n> could grow more than by the size of the inserted tid upon insertion, but\n> that might still be OK? 
That'd require storing the width of the posting\n> list elements somewhere, unfortunately - not sure if that's a problem?\n\nMy solution is to require the index AM to look after itself. The index\nAM is responsible for not mixing them up. For nbtree with\ndeduplication, this means that having different width TIDs makes it\nimpossible to merge together posting lists/tuples. For GIN, this means\nthat we can expand the width of the TIDs in the posting list to match\nthe widest TID. We can then make it into a posting tree if necessary\n-- GIN has the benefit of always being able to fall back on the option\nof making a new posting tree (unlike nbtree with deduplication). GIN's\nB-Tree is very primitive in some ways (no deletion of items in the\nentry tree, no deletion of pages in the entry tree), which gives it\nthe ability to safely fall back on creating a new posting tree when it\nruns out of space.\n\n> ISTM if we had varint style tids, we'd probably still save space on\n> average for today's heap that way. How important is it for you to be\n> able to split out the \"block offset\" and \"page offset\" bits?\n\nPretty important. The nbtree deduplication patch is very compelling\nbecause it almost offers the best of both worlds -- the concurrency\ncharacteristics of today's nbtree, combined with very much improved\nspace efficiency. Keeping the space accounting as simple as possible\nseems like a big part of why this is possible at all. There is only\none new type of WAL record required for deduplication, and they're\npretty small. (Existing WAL records are changed to support posting\nlist splits, but these are small, low-overhead changes.)\n\n> I'm somewhat inclined to think that tids should just be a varint (or\n> maybe two, if we want to make it slightly simpler to keep compat to how\n> they look today), and that the AM internally makes sense of that.\n\nI am opposed to adding anything that is truly varint. 
The index access\nmethod ought to be able to have fine control over the layout, without\nbeing burdened by an overly complicated TID representation.\n\n> > I can support varwidth TIDs in the future pretty well if the TID\n> > doesn't have to be *arbitrarily* wide.\n>\n> I think it'd be perfectly reasonable to have a relatively low upper\n> limit for tid width. Not 8 bytes, but also not 200 bytes.\n\nMy point is that we should find a way to make TIDs continue to be an\narray of fixed width integers in any given context. Lots of index\naccess method code can be ported in a relatively straightforward\nfashion this way. This has some downsides, but I believe that they're\nworth it.\n\n> > Individual posting lists can themselves either use 6 byte or 8 byte\n> > TIDs, preserving the ability to access a posting list entry at random\n> > using simple pointer arithmetic. This makes converting over index AMs\n> > a lot less painful -- it'll be pretty easy to avoid mixing together\n> > the 6 byte and 8 byte structs.\n>\n> With varint style tids as I suggested, that ought to be fairly simple?\n\nnbtree probably won't be able to tolerate having to widen every TID in\nthe posting list all at once when new tuples are inserted that have\nTIDs that are one byte wider, that go in the same posting list (as I\nsaid, keeping the space accounting simple is particularly important\nfor nbtree). This even seems hard for GIN, which thinks of TIDs as an\narray of fixed width ints in many contexts. Also, BRIN revmap pages\nare also mostly just arrays of 6 byte item pointers, that rely on\nsimple pointer arithmetic to do random access.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 30 Oct 2019 14:59:16 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on nbtree with logical/varwidth table identifiers, v12\n on-disk representation"
}
]
[
{
"msg_contents": "Hi,\n\nISTM show_plan_tlist()'s rule of whether to the show range table prefix\nwith displayed variables contradicts the description of the VERBOSE option\nin EXPLAIN documentation, which is as follows:\n\n=======\nVERBOSE\n\nDisplay additional information regarding the plan. Specifically, include\nthe output column list for each node in the plan tree, schema-qualify\ntable and function names, always label variables in expressions with their\nrange table alias, and always print the name of each trigger for which\nstatistics are displayed. This parameter defaults to FALSE.\n=======\n\nSpecifically, the current behavior contradicts the part of the sentence\nthat says \"always label variables in expressions with their range table\nalias\". See this example:\n\ncreate table foo (a int);\ncreate table foo1 () inherits (foo);\n\n-- \"a\" is not labeled here\nexplain verbose select * from only foo order by 1;\n QUERY PLAN\n────────────────────────────────────────────────────────────────\n Sort (cost=0.01..0.02 rows=1 width=4)\n Output: a\n Sort Key: foo.a\n -> Seq Scan on public.foo (cost=0.00..0.00 rows=1 width=4)\n Output: a\n(5 rows)\n\n-- it's labeled in this case\nexplain verbose select * from foo order by 1;\n QUERY PLAN\n───────────────────────────────────────────────────────────────────────────\n Sort (cost=192.60..198.98 rows=2551 width=4)\n Output: foo.a\n Sort Key: foo.a\n -> Append (cost=0.00..48.26 rows=2551 width=4)\n -> Seq Scan on public.foo (cost=0.00..0.00 rows=1 width=4)\n Output: foo.a\n -> Seq Scan on public.foo1 (cost=0.00..35.50 rows=2550 width=4)\n Output: foo1.a\n(8 rows)\n\nSeeing that \"Sort Key\" is always displayed with the range table alias, I\nchecked explain.c to see why the discrepancy exists and it seems that\nshow_plan_tlist() (and show_tablesample()) use the following condition for\nwhether or not to use the range table prefix:\n\n useprefix = list_length(es->rtable) > 1;\n\nwhereas other functions, including 
show_sort_group_keys() that prints the\n\"Sort Key\", use the following condition:\n\n useprefix = (list_length(es->rtable) > 1 || es->verbose);\n\nI can think of two ways we could do:\n\n1. Change show_plan_tlist() and show_tablesample() to use the same rule as\nothers\n\n2. Change other functions to use the same rule as show_plan_tlist(), also\nupdating the documentation to note the exceptional case when column names\nare not prefixed\n\nThoughts?\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Mon, 22 Apr 2019 16:49:19 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "display of variables in EXPLAIN VERBOSE"
},
{
"msg_contents": "On Mon, 22 Apr 2019 at 19:49, Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> Seeing that \"Sort Key\" is always displayed with the range table alias, I\n> checked explain.c to see why the discrepancy exists and it seems that\n> show_plan_tlist() (and show_tablesample()) use the following condition for\n> whether or not to use the range table prefix:\n>\n> useprefix = list_length(es->rtable) > 1;\n>\n> whereas other functions, including show_sort_group_keys() that prints the\n> \"Sort Key\", use the following condition:\n>\n> useprefix = (list_length(es->rtable) > 1 || es->verbose);\n>\n> I can think of two ways we could do:\n>\n> 1. Change show_plan_tlist() and show_tablesample() to use the same rule as\n> others\n>\n> 2. Change other functions to use the same rule as show_plan_tlist(), also\n> updating the documentation to note the exceptional case when column names\n> are not prefixed\n\nI'd vote to make the code match the documentation, but probably\nimplement it by adding a new field to ExplainState and just calculate\nwhat to do once in ExplainQuery() instead of calculating what to do in\nvarious random places.\n\nI don't think we should backpatch this change, likely it would be\nbetter to keep the explain output as stable as possible in the back\nbranches, so that might mean a documentation tweak should be done for\nthem.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 23 Apr 2019 01:27:00 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: display of variables in EXPLAIN VERBOSE"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> I'd vote to make the code match the documentation, but probably\n> implement it by adding a new field to ExplainState and just calculate\n> what to do once in ExplainQuery() instead of calculating what to do in\n> various random places.\n\nYeah, this is none too consistent:\n\n$ grep -n 'useprefix =' explain.c\n2081: useprefix = list_length(es->rtable) > 1;\n2151: useprefix = (IsA(planstate->plan, SubqueryScan) ||es->verbose);\n2165: useprefix = (list_length(es->rtable) > 1 || es->verbose);\n2238: useprefix = (list_length(es->rtable) > 1 || es->verbose);\n2377: useprefix = (list_length(es->rtable) > 1 || es->verbose);\n2485: useprefix = list_length(es->rtable) > 1;\n\nIf we're going to mess with this, I'd also suggest that we not depend on\nlist_length(es->rtable) per se, as that counts RTEs that may have nothing\nto do with the plan. For instance, I've never been very happy about\nthis behavior:\n\nregression=# create table tt (f1 int, f2 int);\nCREATE TABLE\nregression=# explain verbose select * from tt;\n QUERY PLAN \n-------------------------------------------------------------\n Seq Scan on public.tt (cost=0.00..32.60 rows=2260 width=8)\n Output: f1, f2\n(2 rows)\n\nregression=# create view vv as select * from tt;\nCREATE VIEW\nregression=# explain verbose select * from vv;\n QUERY PLAN \n-------------------------------------------------------------\n Seq Scan on public.tt (cost=0.00..32.60 rows=2260 width=8)\n Output: tt.f1, tt.f2\n(2 rows)\n\nThe reason for the difference is the presence of the view's RTE\nin the plan, but why should that affect the printout? 
Maybe we\ncould make it depend on the number of RTE names assigned by\nselect_rtable_names_for_explain, instead.\n\nBTW, now that I look at this, I think the reason why I didn't make\ntlist printouts pay attention to VERBOSE for this purpose is that\nyou don't get them at all if not verbose:\n\nregression=# explain select * from tt;\n QUERY PLAN \n------------------------------------------------------\n Seq Scan on tt (cost=0.00..32.60 rows=2260 width=8)\n(1 row)\n\nSo if we were to be rigidly consistent with this point of the docs,\nthere would be no way to see a tlist without variable qualification,\nwhich doesn't really seem that nice.\n\nAlternatively, we could just leave this as-is. I do not think the\nquoted doc paragraph was ever meant as an exact specification\nof what EXPLAIN VERBOSE does, nor do I believe that making it so\nwould be helpful.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Apr 2019 11:58:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: display of variables in EXPLAIN VERBOSE"
},
{
"msg_contents": "On 2019/04/23 0:58, Tom Lane wrote:\n> BTW, now that I look at this, I think the reason why I didn't make\n> tlist printouts pay attention to VERBOSE for this purpose is that\n> you don't get them at all if not verbose:\n> \n> regression=# explain select * from tt;\n> QUERY PLAN \n> ------------------------------------------------------\n> Seq Scan on tt (cost=0.00..32.60 rows=2260 width=8)\n> (1 row)\n> \n> So if we were to be rigidly consistent with this point of the docs,\n> there would be no way to see a tlist without variable qualification,\n> which doesn't really seem that nice.\n\nHmm yes. Variables in sort keys, quals, etc., which are shown without\nVERBOSE, are qualified only if VERBOSE is specified. Variables in the\ntargetlists that are shown only in the VERBOSE output may be displayed\nwithout qualifications, which looks a bit inconsistent.\n\nexplain (verbose, costs off) select * from foo where a > 0 order by 1;\n QUERY PLAN\n──────────────────────────────\n Sort\n Output: a\n Sort Key: foo.a\n -> Seq Scan on public.foo\n Output: a\n Filter: (foo.a > 0)\n(6 rows)\n\nMaybe, targetlist variables should *always* be qualified given that they\nare considered VERBOSE information to begin with?\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Tue, 23 Apr 2019 13:54:51 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: display of variables in EXPLAIN VERBOSE"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nWith a master-standby setup configured on the same machine, I'm\ngetting a panic in tablespace test while running make installcheck.\n\n1. CREATE TABLESPACE regress_tblspacewith LOCATION 'blah';\n2. DROP TABLESPACE regress_tblspacewith;\n3. CREATE TABLESPACE regress_tblspace LOCATION 'blah';\n-- do some operations in this tablespace\n4. DROP TABLESPACE regress_tblspace;\n\nThe master panics at the last statement when standby has completed\napplying the WAL up to step 2 but hasn't started step 3.\nPANIC: could not fsync file\n\"pg_tblspc/16387/PG_12_201904072/16384/16446\": No such file or\ndirectory\n\nThe reason is both the tablespace points to the same location. When\nmaster tries to delete the new tablespace (and requests a checkpoint),\nthe corresponding folder is already deleted by the standby while\napplying WAL to delete the old tablespace. I'm able to reproduce the\nissue with the attached script.\n\nsh standby-server-setup.sh\nmake installcheck\n\nI accept that configuring master-standby on the same machine for this\ntest is not okay. But, can we avoid the PANIC somehow? Or, is this\nintentional and I should not include testtablespace in this case?\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 22 Apr 2019 15:52:59 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Regression test PANICs with master-standby setup on same machine"
},
{
"msg_contents": "Hello.\n\nAt Mon, 22 Apr 2019 15:52:59 +0530, Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote in <CAGz5QC+j1BDq7onp6H8Cye-ahD2zS1dLttp-dEuEoZStEjxq5Q@mail.gmail.com>\n> Hello hackers,\n> \n> With a master-standby setup configured on the same machine, I'm\n> getting a panic in tablespace test while running make installcheck.\n> \n> 1. CREATE TABLESPACE regress_tblspacewith LOCATION 'blah';\n> 2. DROP TABLESPACE regress_tblspacewith;\n> 3. CREATE TABLESPACE regress_tblspace LOCATION 'blah';\n> -- do some operations in this tablespace\n> 4. DROP TABLESPACE regress_tblspace;\n> \n> The master panics at the last statement when standby has completed\n> applying the WAL up to step 2 but hasn't started step 3.\n> PANIC: could not fsync file\n> \"pg_tblspc/16387/PG_12_201904072/16384/16446\": No such file or\n> directory\n> \n> The reason is both the tablespace points to the same location. When\n> master tries to delete the new tablespace (and requests a checkpoint),\n> the corresponding folder is already deleted by the standby while\n> applying WAL to delete the old tablespace. I'm able to reproduce the\n> issue with the attached script.\n> \n> sh standby-server-setup.sh\n> make installcheck\n> \n> I accept that configuring master-standby on the same machine for this\n> test is not okay. But, can we avoid the PANIC somehow? Or, is this\n> intentional and I should not include testtablespace in this case?\n\nIf you don't have a problem using TAP test suite, tablespace is\nallowed with a bit restricted steps using the first patch in my\njust posted patchset[1]. This will work for you if you are okay\nwith creating a standby after creating a tablespace. See the\nsecond patch in the patchset.\n\nIf you stick on shell script, the following steps allow tablespaces.\n\n1. Create tablespace directories for both master and standby.\n2. Create a master then start.\n3. Create tablespaces on the master.\n4. 
Create a standby using pg_basebackup --tablespace_mapping=<mstdir>=<sbydir>\n5. Start the standby.\n\n[1] https://www.postgresql.org/message-id/20190422.211933.156769089.horiguchi.kyotaro@lab.ntt.co.jp\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Mon, 22 Apr 2019 21:36:41 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same\n machine"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 6:07 PM Kyotaro HORIGUCHI\n<horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> If you don't have a problem using TAP test suite, tablespace is\n> allowed with a bit restricted steps using the first patch in my\n> just posted patchset[1]. This will work for you if you are okay\n> with creating a standby after creating a tablespace. See the\n> second patch in the patchset.\n>\n> If you stick on shell script, the following steps allow tablespaces.\n>\n> 1. Create tablespace directories for both master and standby.\n> 2. Create a master then start.\n> 3. Create tablespaces on the master.\n> 4. Create a standby using pg_basebackup --tablespace_mapping=<mstdir>=<sbydir>\n> 5. Start the standby.\n>\n> [1] https://www.postgresql.org/message-id/20190422.211933.156769089.horiguchi.kyotaro@lab.ntt.co.jp\n>\nThank you for the info. I'll try the same.\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 22 Apr 2019 18:21:00 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same machine"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 03:52:59PM +0530, Kuntal Ghosh wrote:\n> I accept that configuring master-standby on the same machine for this\n> test is not okay. But, can we avoid the PANIC somehow? Or, is this\n> intentional and I should not include testtablespace in this case?\n\nWell, it is a bit more than \"not okay\", as the primary and the\nstandby step on each other's toe because they are trying to use the\nsame tablespace path. The PANIC is also expected as that's what we\nwant with data_sync_retry = off, which is the default, and the wanted\nbehavior to PANIC immediately and enforce WAL recovery should a fsync\nfail. Obviously, not being able to have transparent tablespace\nhandling for multiple nodes on the same host is a problem, though this\nimplies grammar changes for CREATE TABLESPACE or having a sort of\nnode name handling which makes the experience trouble-less. Still\nthere is the argument that not all users would want both instances to\nuse the same tablespace path. So the problem is not as simple as it\nlooks, and the cost of solving it is not worth the use cases either.\n--\nMichael",
"msg_date": "Tue, 23 Apr 2019 11:27:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same machine"
},
{
"msg_contents": "At Tue, 23 Apr 2019 11:27:06 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190423022706.GG2712@paquier.xyz>\n> On Mon, Apr 22, 2019 at 03:52:59PM +0530, Kuntal Ghosh wrote:\n> > I accept that configuring master-standby on the same machine for this\n> > test is not okay. But, can we avoid the PANIC somehow? Or, is this\n> > intentional and I should not include testtablespace in this case?\n> \n> Well, it is a bit more than \"not okay\", as the primary and the\n> standby step on each other's toe because they are trying to use the\n> same tablespace path. The PANIC is also expected as that's what we\n> want with data_sync_retry = off, which is the default, and the wanted\n> behavior to PANIC immediately and enforce WAL recovery should a fsync\n> fail. Obviously, not being able to have transparent tablespace\n> handling for multiple nodes on the same host is a problem, though this\n> implies grammar changes for CREATE TABLESPACE or having a sort of\n> node name handling which makes the experience trouble-less. Still\n> there is the argument that not all users would want both instances to\n> use the same tablespace path. So the problem is not as simple as it\n> looks, and the cost of solving it is not worth the use cases either.\n\nWe could easily adopt a jail or chroot like feature to tablespace\npaths. Suppose a new GUC(!), say, tablespace_chroot and the value\ncan contain replacements like %a, %p, %h, we would set the\nvariable as:\n\ntablespace_chroot = '/somewhere/%p';\n\nthen the tablespace location is prefixed by '/somewhere/5432' for\nthe first server, '/somehwere/5433' for the second.\n\nI think this is rahter a testing or debugging feature. This can\nbe apply to all paths, so the variable might be \"path_prefix\" or\nsomething more generic than tablespace_chroot.\n\nDoes it make sense?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 23 Apr 2019 13:33:39 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same\n machine"
},
{
"msg_contents": "At Tue, 23 Apr 2019 13:33:39 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190423.133339.113770648.horiguchi.kyotaro@lab.ntt.co.jp>\n> At Tue, 23 Apr 2019 11:27:06 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190423022706.GG2712@paquier.xyz>\n> > On Mon, Apr 22, 2019 at 03:52:59PM +0530, Kuntal Ghosh wrote:\n> > > I accept that configuring master-standby on the same machine for this\n> > > test is not okay. But, can we avoid the PANIC somehow? Or, is this\n> > > intentional and I should not include testtablespace in this case?\n> > \n> > Well, it is a bit more than \"not okay\", as the primary and the\n> > standby step on each other's toe because they are trying to use the\n> > same tablespace path. The PANIC is also expected as that's what we\n> > want with data_sync_retry = off, which is the default, and the wanted\n> > behavior to PANIC immediately and enforce WAL recovery should a fsync\n> > fail. Obviously, not being able to have transparent tablespace\n> > handling for multiple nodes on the same host is a problem, though this\n> > implies grammar changes for CREATE TABLESPACE or having a sort of\n> > node name handling which makes the experience trouble-less. Still\n> > there is the argument that not all users would want both instances to\n> > use the same tablespace path. So the problem is not as simple as it\n> > looks, and the cost of solving it is not worth the use cases either.\n> \n> We could easily adopt a jail or chroot like feature to tablespace\n> paths. Suppose a new GUC(!), say, tablespace_chroot and the value\n> can contain replacements like %a, %p, %h, we would set the\n> variable as:\n> \n> tablespace_chroot = '/somewhere/%p';\n> \n> then the tablespace location is prefixed by '/somewhere/5432' for\n> the first server, '/somehwere/5433' for the second.\n> \n> I think this is rahter a testing or debugging feature. 
This can\n> be apply to all paths, so the variable might be \"path_prefix\" or\n\nAll paths out of the $PGDATA directory, that is: tablespace locations,\nlog_directory and stats_temp_directory?\n\n> something more generic than tablespace_chroot.\n> \n> Does it make sense?\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 23 Apr 2019 13:41:04 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same\n machine"
},
{
"msg_contents": "On Tue, Apr 23, 2019 at 01:33:39PM +0900, Kyotaro HORIGUCHI wrote:\n> I think this is rahter a testing or debugging feature. This can\n> be apply to all paths, so the variable might be \"path_prefix\" or\n> something more generic than tablespace_chroot.\n> \n> Does it make sense?\n\nA GUC which enforces object creation does not sound like a good idea\nto me, and what you propose would still bite back, for example two\nlocal nodes could use the same port, but a different Unix socket\npath.\n--\nMichael",
"msg_date": "Tue, 23 Apr 2019 14:53:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same machine"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-22 15:52:59 +0530, Kuntal Ghosh wrote:\n> Hello hackers,\n> \n> With a master-standby setup configured on the same machine, I'm\n> getting a panic in tablespace test while running make installcheck.\n> \n> 1. CREATE TABLESPACE regress_tblspacewith LOCATION 'blah';\n> 2. DROP TABLESPACE regress_tblspacewith;\n> 3. CREATE TABLESPACE regress_tblspace LOCATION 'blah';\n> -- do some operations in this tablespace\n> 4. DROP TABLESPACE regress_tblspace;\n> \n> The master panics at the last statement when standby has completed\n> applying the WAL up to step 2 but hasn't started step 3.\n> PANIC: could not fsync file\n> \"pg_tblspc/16387/PG_12_201904072/16384/16446\": No such file or\n> directory\n> \n> The reason is both the tablespace points to the same location. When\n> master tries to delete the new tablespace (and requests a checkpoint),\n> the corresponding folder is already deleted by the standby while\n> applying WAL to delete the old tablespace. I'm able to reproduce the\n> issue with the attached script.\n> \n> sh standby-server-setup.sh\n> make installcheck\n> \n> I accept that configuring master-standby on the same machine for this\n> test is not okay. But, can we avoid the PANIC somehow? Or, is this\n> intentional and I should not include testtablespace in this case?\n\nFWIW, I think the right fix for this is to simply drop the requirement\nthat tablespace paths need to be absolute. It's not buying us anything,\nit's just making things more complicated. We should just do a simple\ncheck against the tablespace being inside PGDATA, and leave it at\nthat. Yes, that can be tricked, but so can the current system.\n\nThat'd make both regression tests easier, as well as operations.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Apr 2019 23:00:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same machine"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 11:00:03PM -0700, Andres Freund wrote:\n> FWIW, I think the right fix for this is to simply drop the requirement\n> that tablespace paths need to be absolute. It's not buying us anything,\n> it's just making things more complicated. We should just do a simple\n> check against the tablespace being inside PGDATA, and leave it at\n> that. Yes, that can be tricked, but so can the current system.\n\nconvert_and_check_filename() checks after that already, mostly. For\nTAP tests I am not sure that this would help much though as all the\nnodes of a given test use the same root path for their data folders,\nso you cannot just use \"../hoge/\" as location. We already generate a\nwarning when a tablespace is in a data folder, as this causes issues\nwith recursion lookups of base backups. What do you mean in this\ncase? Forbidding the behavior? \n--\nMichael",
"msg_date": "Tue, 23 Apr 2019 16:08:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same machine"
},
{
"msg_contents": "At Tue, 23 Apr 2019 14:53:28 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190423055328.GK2712@paquier.xyz>\n> On Tue, Apr 23, 2019 at 01:33:39PM +0900, Kyotaro HORIGUCHI wrote:\n> > I think this is rahter a testing or debugging feature. This can\n> > be apply to all paths, so the variable might be \"path_prefix\" or\n> > something more generic than tablespace_chroot.\n> > \n> > Does it make sense?\n> \n> A GUC which enforces object creation does not sound like a good idea\n> to me, and what you propose would still bite back, for example two\n> local nodes could use the same port, but a different Unix socket\n> path.\n\nIt's not necessarily be port number, but I agree that it's not a\ngood idea. I prefer allowing relative paths for tablespaces.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 23 Apr 2019 17:27:29 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same\n machine"
},
{
"msg_contents": "At Tue, 23 Apr 2019 16:08:18 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190423070818.GM2712@paquier.xyz>\n> On Mon, Apr 22, 2019 at 11:00:03PM -0700, Andres Freund wrote:\n> > FWIW, I think the right fix for this is to simply drop the requirement\n> > that tablespace paths need to be absolute. It's not buying us anything,\n> > it's just making things more complicated. We should just do a simple\n> > check against the tablespace being inside PGDATA, and leave it at\n> > that. Yes, that can be tricked, but so can the current system.\n> \n> convert_and_check_filename() checks after that already, mostly. For\n> TAP tests I am not sure that this would help much though as all the\n> nodes of a given test use the same root path for their data folders,\n> so you cannot just use \"../hoge/\" as location. We already generate a\n> warning when a tablespace is in a data folder, as this causes issues\n> with recursion lookups of base backups. What do you mean in this\n> case? Forbidding the behavior? \n\nIsn't it good enough just warning when we see pg_tblspc twice\nwhile scanning? The check is not perfect for an \"abosolute path\"\nthat continas '/./' above pgdata directory.\n\nFor TAP tests, we can point generated temporary directory by\n\"../../<tmpdirsname>\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 23 Apr 2019 17:44:18 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same\n machine"
},
{
"msg_contents": "At Tue, 23 Apr 2019 17:44:18 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190423.174418.262292011.horiguchi.kyotaro@lab.ntt.co.jp>\n> At Tue, 23 Apr 2019 16:08:18 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190423070818.GM2712@paquier.xyz>\n> > On Mon, Apr 22, 2019 at 11:00:03PM -0700, Andres Freund wrote:\n> > > FWIW, I think the right fix for this is to simply drop the requirement\n> > > that tablespace paths need to be absolute. It's not buying us anything,\n> > > it's just making things more complicated. We should just do a simple\n> > > check against the tablespace being inside PGDATA, and leave it at\n> > > that. Yes, that can be tricked, but so can the current system.\n> > \n> > convert_and_check_filename() checks after that already, mostly. For\n> > TAP tests I am not sure that this would help much though as all the\n> > nodes of a given test use the same root path for their data folders,\n> > so you cannot just use \"../hoge/\" as location. We already generate a\n> > warning when a tablespace is in a data folder, as this causes issues\n> > with recursion lookups of base backups. What do you mean in this\n> > case? Forbidding the behavior? \n> \n> Isn't it good enough just warning when we see pg_tblspc twice\n> while scanning? The check is not perfect for an \"abosolute path\"\n> that continas '/./' above pgdata directory.\n\nI don't get basebackup recurse. How can I do that? basebackup\nrejects non-empty direcoty as a tablespace directory. I'm not\nsure about pg_upgrade but it's not a problem as far as we keep\nwaning on that kind of tablespace directory.\n\nSo I propose this:\n\n - Allow relative path as a tablespace direcotry in exchange for\n issueing WARNING.\n\n =# CREATE TABLESPACE ts1 LOCATION '../../hoge';\n \"WARNING: tablespace location should be in absolute path\"\n\n - For abosolute paths, we keep warning as before. Of course we\n don't bother '.' 
and '..'.\n\n =# CREATE TABLESPACE ts1 LOCATION '/home/.../data';\n \"WARNING: tablespace location should not be in the data directory\"\n\n\n> For TAP tests, we can point generated temporary directory by\n> \"../../<tmpdirsname>\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 23 Apr 2019 19:00:54 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same\n machine"
},
{
"msg_contents": "At Tue, 23 Apr 2019 19:00:54 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190423.190054.262966274.horiguchi.kyotaro@lab.ntt.co.jp>\n> > For TAP tests, we can point generated temporary directory by\n> > \"../../<tmpdirsname>\".\n\nWrong. A generating tmpdir (how?) in \"../\" (that is, in the node\ndirecotry) would work.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 23 Apr 2019 19:20:33 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same\n machine"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-23 16:08:18 +0900, Michael Paquier wrote:\n> On Mon, Apr 22, 2019 at 11:00:03PM -0700, Andres Freund wrote:\n> > FWIW, I think the right fix for this is to simply drop the requirement\n> > that tablespace paths need to be absolute. It's not buying us anything,\n> > it's just making things more complicated. We should just do a simple\n> > check against the tablespace being inside PGDATA, and leave it at\n> > that. Yes, that can be tricked, but so can the current system.\n> \n> convert_and_check_filename() checks after that already, mostly. For\n> TAP tests I am not sure that this would help much though as all the\n> nodes of a given test use the same root path for their data folders,\n> so you cannot just use \"../hoge/\" as location.\n\nI don't see the problem here. Putting the primary and standby PGDATAs\ninto a subdirectory that also can contain a relatively referenced\ntablespace seems trivial?\n\n> I'm not We already generate a warning when a tablespace is in a data\n> folder, as this causes issues with recursion lookups of base backups.\n> What do you mean in this case? Forbidding the behavior? -- Michael\n\nI mostly am talking about replacing\n\nOid\nCreateTableSpace(CreateTableSpaceStmt *stmt)\n{\n...\n\t/*\n\t * Allowing relative paths seems risky\n\t *\n\t * this also helps us ensure that location is not empty or whitespace\n\t */\n\tif (!is_absolute_path(location))\n\t\tereport(ERROR,\n\t\t\t\t(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n\t\t\t\t errmsg(\"tablespace location must be an absolute path\")));\n\nwith a check that forces relative paths to be outside of PGDATA (baring\nsymlinks). As far as I can tell convert_and_check_filename() would check\njust about the opposite.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Apr 2019 10:05:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same machine"
},
{
"msg_contents": "At Tue, 23 Apr 2019 10:05:03 -0700, Andres Freund <andres@anarazel.de> wrote in <20190423170503.uw5jxrujqlozg23l@alap3.anarazel.de>\n> Hi,\n> \n> On 2019-04-23 16:08:18 +0900, Michael Paquier wrote:\n> > On Mon, Apr 22, 2019 at 11:00:03PM -0700, Andres Freund wrote:\n> > > FWIW, I think the right fix for this is to simply drop the requirement\n> > > that tablespace paths need to be absolute. It's not buying us anything,\n> > > it's just making things more complicated. We should just do a simple\n> > > check against the tablespace being inside PGDATA, and leave it at\n> > > that. Yes, that can be tricked, but so can the current system.\n> > \n> > convert_and_check_filename() checks after that already, mostly. For\n> > TAP tests I am not sure that this would help much though as all the\n> > nodes of a given test use the same root path for their data folders,\n> > so you cannot just use \"../hoge/\" as location.\n> \n> I don't see the problem here. Putting the primary and standby PGDATAs\n> into a subdirectory that also can contain a relatively referenced\n> tablespace seems trivial?\n> \n> > I'm not We already generate a warning when a tablespace is in a data\n> > folder, as this causes issues with recursion lookups of base backups.\n> > What do you mean in this case? Forbidding the behavior? -- Michael\n> \n> I mostly am talking about replacing\n> \n> Oid\n> CreateTableSpace(CreateTableSpaceStmt *stmt)\n> {\n> ...\n> \t/*\n> \t * Allowing relative paths seems risky\n> \t *\n> \t * this also helps us ensure that location is not empty or whitespace\n> \t */\n> \tif (!is_absolute_path(location))\n> \t\tereport(ERROR,\n> \t\t\t\t(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n> \t\t\t\t errmsg(\"tablespace location must be an absolute path\")));\n> \n> with a check that forces relative paths to be outside of PGDATA (baring\n> symlinks). 
As far as I can tell convert_and_check_filename() would check\n> just about the opposite.\n\nWe need to adjust relative paths between PGDATA-based and\npg_tblspc-based ones. The attached first patch does that.\n\n- I'm not sure it is OK to use getcwd this way.\n\nThe second attachment is a TAP change to support tablespaces using\nrelative paths. One issue here is that is_in_data_directory\ncanonicalizes DataDir on-the-fly. It is needed when DataDir\ncontains '/./' or such. I think the canonicalization should be\ndone far earlier.\n\n- This is tentative, or a sample. I'll visit the current discussion thread.\n\nThe third is a test for this issue.\n\n- Tablespace handling gets easier.\n\nThe fourth is the fix for the issue here.\n\n- Not all possible similar issues are checked yet.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 24 Apr 2019 13:18:45 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same\n machine"
},
{
"msg_contents": "Sorry, I was in haste.\n\nAt Wed, 24 Apr 2019 13:18:45 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190424.131845.116224815.horiguchi.kyotaro@lab.ntt.co.jp>\n> At Tue, 23 Apr 2019 10:05:03 -0700, Andres Freund <andres@anarazel.de> wrote in <20190423170503.uw5jxrujqlozg23l@alap3.anarazel.de>\n> > Hi,\n> > \n> > On 2019-04-23 16:08:18 +0900, Michael Paquier wrote:\n> > > On Mon, Apr 22, 2019 at 11:00:03PM -0700, Andres Freund wrote:\n> > > > FWIW, I think the right fix for this is to simply drop the requirement\n> > > > that tablespace paths need to be absolute. It's not buying us anything,\n> > > > it's just making things more complicated. We should just do a simple\n> > > > check against the tablespace being inside PGDATA, and leave it at\n> > > > that. Yes, that can be tricked, but so can the current system.\n> > > \n> > > convert_and_check_filename() checks after that already, mostly. For\n> > > TAP tests I am not sure that this would help much though as all the\n> > > nodes of a given test use the same root path for their data folders,\n> > > so you cannot just use \"../hoge/\" as location.\n> > \n> > I don't see the problem here. Putting the primary and standby PGDATAs\n> > into a subdirectory that also can contain a relatively referenced\n> > tablespace seems trivial?\n> > \n> > > I'm not We already generate a warning when a tablespace is in a data\n> > > folder, as this causes issues with recursion lookups of base backups.\n> > > What do you mean in this case? Forbidding the behavior? 
-- Michael\n> > \n> > I mostly am talking about replacing\n> > \n> > Oid\n> > CreateTableSpace(CreateTableSpaceStmt *stmt)\n> > {\n> > ...\n> > \t/*\n> > \t * Allowing relative paths seems risky\n> > \t *\n> > \t * this also helps us ensure that location is not empty or whitespace\n> > \t */\n> > \tif (!is_absolute_path(location))\n> > \t\tereport(ERROR,\n> > \t\t\t\t(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n> > \t\t\t\t errmsg(\"tablespace location must be an absolute path\")));\n> > \n> > with a check that forces relative paths to be outside of PGDATA (baring\n> > symlinks). As far as I can tell convert_and_check_filename() would check\n> > just about the opposite.\n\nWe need to adjust relative paths between PGDATA-based and\npg_tblspc-based ones. The attached first patch does that.\n\n- I'm not sure it is OK to use getcwd this way. Another issue\n  here is that is_in_data_directory canonicalizes DataDir\n  on-the-fly. It is needed when DataDir contains '/./' or such. I\n  think the canonicalization should be done far earlier.\n\nThe second attachment is a TAP change to support tablespaces using\nrelative paths.\n\n- This is tentative, or a sample. I'll visit the current discussion thread.\n\nThe third is a test for this issue.\n\n- Tablespace handling gets easier.\n\nThe fourth is the fix for the issue here.\n\n- Not all possible similar issues are checked yet.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Wed, 24 Apr 2019 13:23:04 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same\n machine"
},
{
"msg_contents": "Hello.\n\nAt Wed, 24 Apr 2019 13:23:04 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190424.132304.40676137.horiguchi.kyotaro@lab.ntt.co.jp>\n> > > with a check that forces relative paths to be outside of PGDATA (baring\n> > > symlinks). As far as I can tell convert_and_check_filename() would check\n> > > just about the opposite.\n> \n> We need to adjust relative path between PGDATA-based and\n> pg_tblspc based. The attached first patch does that.\n> \n> - I'm not sure it is OK to use getcwd this way. Another issue\n> here is is_in_data_directory canonicalizes DataDir\n> on-the-fly. It is needed when DataDir contains '/./' or such. I\n> think the canonicalization should be done far earlier.\n> \n> The second attached is TAP change to support tablespaces using\n> relative tablespaces.\n> \n> - This is tentative or sample. I'll visit the current discussion thread.\n> \n> The third is test for this issue.\n> \n> - Tablespace handling gets easier.\n> \n> The fourth is the fix for the issue here.\n> \n> - Not all possible simliar issue is not checked.\n\nThis is new version. Adjusted pg_basebackup's behavior to allow\nrelative mappings. But..\n\nThis is apparently out of a bug fix. What should I do with it?\n\nShould we applying only 0004 (after further checking) or\nsomething as bug fix, then register the rest for v13?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 24 Apr 2019 17:02:28 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same\n machine"
},
{
"msg_contents": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:\n> At Wed, 24 Apr 2019 13:23:04 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190424.132304.40676137.horiguchi.kyotaro@lab.ntt.co.jp>\n>> We need to adjust relative path between PGDATA-based and\n>> pg_tblspc based. The attached first patch does that.\n\n> This is new version. Adjusted pg_basebackup's behavior to allow\n> relative mappings. But..\n\nI can't say that I like 0001 at all. It adds a bunch of complication and\nnew failure modes (e.g., having to panic on chdir failure) in order to do\nwhat exactly? I've not been following the thread closely, but the\noriginal problem is surely just a dont-do-that misconfiguration. I also\nsuspect that this is assuming way too much about the semantics of getcwd\n--- some filesystem configurations may have funny situations like multiple\npaths to the same place.\n\n0004 also seems like it's adding at least as many failure modes as\nit removes. Moreover, isn't it just postponing the failure a little?\nLater WAL might well try to touch the directory you skipped creation of.\nWe can't realistically decide that all WAL-application errors are\nignorable, but that seems like the direction this would have us go in.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Apr 2019 10:13:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same machine"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-24 10:13:09 -0400, Tom Lane wrote:\n> Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:\n> > At Wed, 24 Apr 2019 13:23:04 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190424.132304.40676137.horiguchi.kyotaro@lab.ntt.co.jp>\n> >> We need to adjust relative path between PGDATA-based and\n> >> pg_tblspc based. The attached first patch does that.\n> \n> > This is new version. Adjusted pg_basebackup's behavior to allow\n> > relative mappings. But..\n> \n> I can't say that I like 0001 at all. It adds a bunch of complication and\n> new failure modes (e.g., having to panic on chdir failure) in order to do\n> what exactly? I've not been following the thread closely, but the\n> original problem is surely just a dont-do-that misconfiguration. I also\n> suspect that this is assuming way too much about the semantics of getcwd\n> --- some filesystem configurations may have funny situations like multiple\n> paths to the same place.\n\nI'm not at all defending the concrete patch. But I think allowing\nrelative paths to tablespaces would solve a whole lot of practical\nproblems, while not meaningfully increasing failure modes. The inability\nto reasonably test master/standby setups on a single machine is pretty\njarring (yes, one can use basebackup tablespace maps - but that doesn't\nwork well for new tablespaces). And for a lot of production setups\nabsolute paths suck too - it's far from a given that primary / standby\ndatabases need to have the same exact path layout.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 Apr 2019 09:24:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same machine"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-24 17:02:28 +0900, Kyotaro HORIGUCHI wrote:\n> +/*\n> + * Check if the path is in the data directory strictly.\n> + */\n> +static bool\n> +is_in_data_directory(const char *path)\n> +{\n> +\tchar cwd[MAXPGPATH];\n> +\tchar abspath[MAXPGPATH];\n> +\tchar absdatadir[MAXPGPATH];\n> +\n> +\tgetcwd(cwd, MAXPGPATH);\n> +\tif (chdir(path) < 0)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t errmsg(\"invalid directory \\\"%s\\\": %m\", path)));\n> +\n> +\t/* getcwd is defined as returning absolute path */\n> +\tgetcwd(abspath, MAXPGPATH);\n> +\n> +\t/* DataDir needs to be canonicalized */\n> +\tif (chdir(DataDir))\n> +\t\tereport(FATAL,\n> +\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t errmsg(\"could not chdir to the data directory \\\"%s\\\": %m\",\n> +\t\t\t\t\t\tDataDir)));\n> +\tgetcwd(absdatadir, MAXPGPATH);\n> +\n> +\t/* this must succeed */\n> +\tif (chdir(cwd))\n> +\t\tereport(FATAL,\n> +\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t errmsg(\"could not chdir to the current working directory \\\"%s\\\": %m\",\n> +\t\t\t\t\t cwd)));\n> +\n> +\treturn path_is_prefix_of_path(absdatadir, abspath);\n> +}\n\nThis seems like a bad idea to me. Why don't we just use\nmake_absolute_path() on the proposed tablespace path, and then check\npath_is_prefix_of() or such? Sure, that can be tricked using symlinks\netc, but that's already the case.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 Apr 2019 09:30:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same machine"
},
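The containment check suggested here (make_absolute_path() plus a prefix test) hinges on comparing paths at component boundaries. PostgreSQL's actual helpers are make_absolute_path(), canonicalize_path(), and path_is_prefix_of_path() in src/port/path.c; the function below is only a hypothetical, simplified standalone sketch of the prefix test, assuming both arguments are already canonical absolute paths:

```c
#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical, simplified stand-in for path_is_prefix_of_path():
 * returns true when "path" is "prefix" itself or lies somewhere
 * below it.  Both arguments are assumed to be canonical absolute
 * paths (no ".", "..", doubled or trailing slashes).
 */
static bool
path_is_prefix(const char *prefix, const char *path)
{
	size_t		len = strlen(prefix);

	if (strncmp(prefix, path, len) != 0)
		return false;
	/* exact match, or the match must end at a component boundary */
	return path[len] == '\0' || path[len] == '/';
}
```

The boundary check on the last line is what a bare strncmp() comparison gets wrong: "/data/pg" must not count as a prefix of "/data/pgx".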
{
"msg_contents": "On Wed, Apr 24, 2019 at 9:25 AM Andres Freund <andres@anarazel.de> wrote:\n\n> The inability\n> to reasonably test master/standby setups on a single machine is pretty\n> jarring (yes, one can use basebackup tablespace maps - but that doesn't\n> work well for new tablespaces).\n\n\n+1, agree. A feature that can't be easily tested becomes a hurdle for\ndevelopment, and this is one of them. For reference, the bug reported in [1]\nis hard to test and fix without an easy way to set up master/standby\non the same node. We discussed a few ways to eliminate the issue in thread [2]\nbut I wasn't able to find a workable solution. It would be really helpful\nto lift this testing limitation.\n\n1]\nhttps://www.postgresql.org/message-id/flat/20190423.163949.36763221.horiguchi.kyotaro%40lab.ntt.co.jp#7fdeee86f3050df6315c04f5f6f93672\n2]\nhttps://www.postgresql.org/message-id/flat/CALfoeivGMTmCmSXRSWDf%3DujWS7L8QmoUoziv-A61f2R8DcmwiA%40mail.gmail.com#709b53c078ebe549cff2462c092a8f09",
"msg_date": "Wed, 24 Apr 2019 09:42:57 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same machine"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-24 10:13:09 -0400, Tom Lane wrote:\n>> I can't say that I like 0001 at all. It adds a bunch of complication and\n>> new failure modes (e.g., having to panic on chdir failure) in order to do\n>> what exactly? I've not been following the thread closely, but the\n>> original problem is surely just a dont-do-that misconfiguration. I also\n>> suspect that this is assuming way too much about the semantics of getcwd\n>> --- some filesystem configurations may have funny situations like multiple\n>> paths to the same place.\n\n> I'm not at all defending the conrete patch. But I think allowing\n> relative paths to tablespaces would solve a whole lot of practical\n> problems, while not meaningfully increasing failure modes.\n\nI'm not against allowing relative tablespace paths. But I did not like\nthe chdir and getcwd-semantics hazards --- why do we have to buy into\nall that to allow relative paths?\n\nI think it would likely be sufficient to state plainly in the docs\nthat a relative path had better point outside $PGDATA, and maybe\nhave some *simple* tests on the canonicalized form of the path to\nprevent obvious mistakes. Going further than that is likely to add\nmore problems than it removes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Apr 2019 13:02:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same machine"
},
{
"msg_contents": "At Wed, 24 Apr 2019 09:30:12 -0700, Andres Freund <andres@anarazel.de> wrote in <20190424163012.7wzdl6j2v73cufip@alap3.anarazel.de>\n> Hi,\n> \n> On 2019-04-24 17:02:28 +0900, Kyotaro HORIGUCHI wrote:\n> > +/*\n> > + * Check if the path is in the data directory strictly.\n> > + */\n> > +static bool\n> > +is_in_data_directory(const char *path)\n> > +{\n<hide the ugly part :p>\n> > +\t\t\t\t errmsg(\"could not chdir to the current working directory \\\"%s\\\": %m\",\n> > +\t\t\t\t\t cwd)));\n> > +\n> > +\treturn path_is_prefix_of_path(absdatadir, abspath);\n> > +}\n> \n> This seems like a bad idea to me. Why don't we just use\n> make_absolute_path() on the proposed tablespace path, and then check\n> path_is_prefix_of() or such? Sure, that can be tricked using symlinks\n> etc, but that's already the case.\n\nThanks for the suggestions, Tom, Andres. For clarity, as I\nmentioned in the post, I didn't like the getcwd in 0001. The\nreasons for the previous patch are:\n\n1. canonicalize_path doesn't process '/.' and '/..' in the middle\n of a path. That prevents correct checking of directory\n inclusiveness. Actually, the regression test suffers from that.\n\n2. I simply missed make_absolute_path..\n\nSo, I rewrote canonicalize_path to process '.' and '..'s not only\nat the end of a path but at all occurrences in a path. This makes\nthe strange chdir-getcwd loop useless. But the new\ncanonicalize_path is a bit complex.\n\nThe modified canonicalize_path works filesystem-access-free, so it\ndoesn't follow symlinks. Thus it can misjudge when two paths\nare in an inclusion relationship only after resolving symlinks in the\npaths. But I don't think we need to handle such a malicious\nsituation.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 25 Apr 2019 17:08:55 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same\n machine"
},
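The rewritten canonicalize_path described in the message above is essentially a lexical normalization: collapse "." and ".." components anywhere in the path without touching the filesystem (so, as noted, symlinks are deliberately not resolved). A hypothetical standalone sketch of that idea, for absolute paths only — an illustration, not the actual patch:

```c
#include <string.h>

/*
 * Hypothetical, simplified lexical canonicalization: collapses "."
 * and ".." components anywhere in an absolute path, purely by string
 * manipulation (no filesystem access, so symlinks are NOT resolved).
 * Modifies "path" in place; assumes it starts with '/'.
 */
static void
canonicalize_abs_path(char *path)
{
	char	   *out = path + 1; /* write position, past the leading '/' */
	char	   *in = path + 1;	/* read position */

	while (*in)
	{
		char	   *seg = in;
		size_t		len;

		while (*in && *in != '/')
			in++;
		len = in - seg;
		if (*in)
			in++;				/* skip the slash */

		if (len == 0 || (len == 1 && seg[0] == '.'))
			continue;			/* "//" or "/./": drop */
		if (len == 2 && seg[0] == '.' && seg[1] == '.')
		{
			/* "/../": pop the previously emitted component, if any */
			if (out > path + 1)
			{
				out--;			/* step back over the trailing '/' */
				while (out > path + 1 && out[-1] != '/')
					out--;
			}
			continue;
		}
		memmove(out, seg, len);
		out += len;
		*out++ = '/';
	}
	/* trim the trailing '/' except for the root itself */
	if (out > path + 1)
		out--;
	*out = '\0';
}
```

With something like this, a containment check can canonicalize both the data directory and the proposed tablespace location before comparing them, so a location such as "/home/pg/data/../ts" is no longer mistaken for something under the data directory.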
{
"msg_contents": "Hello.\n\nThe Win32 implementation cannot have a symbolic-link feature like Linux-like\nOSes do, due to some restrictions. (Windows 7 and 10 behave differently,\nas I heard.)\n\nSo the 0002 patch implements a \"fake\" symbolic link, as mentioned in\nits commit message.\n\nAlso I fixed 0001 slightly.\n\nregards.\n\nAt Thu, 25 Apr 2019 17:08:55 +0900 (Tokyo Standard Time), Kyotaro\nHORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in\n> Thanks for the suggestions, Tom, Andres. For clarity, as I\n> mentioned in the post, I didn't like the getcwd in 0001. The\n> reasons for the previous patch are:\n> \n> 1. canonicalize_path doesn't process '/.' and '/..' in the middle\n> of a path. That prevents correct checking of directory\n> inclusiveness. Actually, the regression test suffers from that.\n> \n> 2. I simply missed make_absolute_path..\n> \n> So, I rewrote canonicalize_path to process '.' and '..'s not only\n> at the end of a path but at all occurrences in a path. This makes\n> the strange chdir-getcwd loop useless. But the new\n> canonicalize_path is a bit complex.\n> \n> The modified canonicalize_path works filesystem-access-free, so it\n> doesn't follow symlinks. Thus it can misjudge when two paths\n> are in an inclusion relationship only after resolving symlinks in the\n> paths. But I don't think we need to handle such a malicious\n> situation.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 26 Apr 2019 17:29:56 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same\n machine"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-26 17:29:56 +0900, Kyotaro HORIGUCHI wrote:\n> The Win32 implementation cannot have a symbolic-link feature like Linux-like\n> OSes do, due to some restrictions. (Windows 7 and 10 behave differently,\n> as I heard.)\n> \n> So the 0002 patch implements a \"fake\" symbolic link, as mentioned in\n> its commit message.\n\nI'm confused - what does this have to do with the topic at hand? Also,\ndon't we already emulate symlinks with junction points?\n\n- Andres\n\n\n",
"msg_date": "Fri, 26 Apr 2019 12:25:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same machine"
},
{
"msg_contents": "Hello.\n\nAt Fri, 26 Apr 2019 12:25:10 -0700, Andres Freund <andres@anarazel.de> wrote in <20190426192510.dndtaxslneoh4rs5@alap3.anarazel.de>\n> On 2019-04-26 17:29:56 +0900, Kyotaro HORIGUCHI wrote:\n> > The Win32 implementation cannot have a symbolic-link feature like Linux-like\n> > OSes do, due to some restrictions. (Windows 7 and 10 behave differently,\n> > as I heard.)\n> > \n> > So the 0002 patch implements a \"fake\" symbolic link, as mentioned in\n> > its commit message.\n> \n> I'm confused - what does this have to do with the topic at hand? Also,\n> don't we already emulate symlinks with junction points?\n\nJust to find out how, or whether, we can have relative\ntablespaces on Windows. The answer to the second question is\n\"no\" for relative symbolic links.\n\nThe current implementation, based on reparse points, partly emulates *nix\nsymlinks. It uses a \"mount point\" (= junction point),\nwhich accepts only absolute paths (to a directory).\n\nWindows has an API CreateSymbolicLink(), but it needs\nadministrator privilege at least on Win7. Giving a flag allows\nunprivileged creation if the OS is running under \"Developer\nMode\". On Windows 10 (I don't have one), AFAIK\nCreateSymbolicLink() was changed not to need the privilege, but\nthere the flag in turn leads to an \"invalid flag\" error.\n\nA reparse point can also implement a symbolic link, but it needs\nadministrator privilege at least on Windows 7.\n\n\nThe fake symlinks need correction after the data directory and\ntablespace directory are moved. Maybe we need to call\nCorrectSymlink() or something at startup... Or should relative\ntablespaces be rejected on Windows?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 07 May 2019 10:16:54 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same\n machine"
},
{
"msg_contents": "On Tue, May 07, 2019 at 10:16:54AM +0900, Kyotaro HORIGUCHI wrote:\n> The fake symlinks need correction after the data directory and\n> tablespsce directory are moved. Maybe needs to call\n> CorrectSymlink() or something at startup... Or relative\n> tablespaces should be rejected on Windows?\n\nIt took enough sweat and tears to have an implementation with junction\npoints done correctly on Windows and we know that it works, so I am\nnot sure that we need an actual wrapper for readlink() and such for\nthe backend code to replace junction points. The issue with Windows\nis that perl's symlink() is not directly available on Windows.\n--\nMichael",
"msg_date": "Tue, 7 May 2019 10:55:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same machine"
},
{
"msg_contents": "At Tue, 7 May 2019 10:55:06 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190507015506.GC1499@paquier.xyz>\n> On Tue, May 07, 2019 at 10:16:54AM +0900, Kyotaro HORIGUCHI wrote:\n> > The fake symlinks need correction after the data directory and\n> > tablespace directory are moved. Maybe we need to call\n> > CorrectSymlink() or something at startup... Or should relative\n> > tablespaces be rejected on Windows?\n> \n> It took enough sweat and tears to have an implementation with junction\n> points done correctly on Windows and we know that it works, so I am\n\nIndeed. It is very ill-documented and complex.\n\n> not sure that we need an actual wrapper for readlink() and such for\n> the backend code to replace junction points. The issue with Windows\n> is that perl's symlink() is not directly available on Windows.\n\nUgh. If we want to run tablespace-related tests involving\nreplication on Windows, we need to make the tests use absolute\ntablespace paths. Period...?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Wed, 08 May 2019 17:09:03 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Regression test PANICs with master-standby setup on same\n machine"
}
]
[
{
"msg_contents": "It's deliberate that \\dt doesn't show toast tables.\n\\d shows them, but doesn't show their indices.\n\nIt seems to me that their indices should be shown, without having to think and\nknow to query pg_index.\n\npostgres=# \\d pg_toast.pg_toast_2600\nTOAST table \"pg_toast.pg_toast_2600\"\n Column | Type \n------------+---------\n chunk_id | oid\n chunk_seq | integer\n chunk_data | bytea\nIndexes:\n \"pg_toast_2600_index\" PRIMARY KEY, btree (chunk_id, chunk_seq)\n\nJustin",
"msg_date": "Mon, 22 Apr 2019 10:49:02 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "make \\d pg_toast.foo show its indices"
},
{
"msg_contents": "On Mon, 22 Apr 2019 at 17:49, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> It's deliberate that \\dt doesn't show toast tables.\n> \\d shows them, but doesn't show their indices.\n>\n> It seems to me that their indices should be shown, without having to think and\n> know to query pg_index.\n>\n> postgres=# \\d pg_toast.pg_toast_2600\n> TOAST table \"pg_toast.pg_toast_2600\"\n> Column | Type\n> ------------+---------\n> chunk_id | oid\n> chunk_seq | integer\n> chunk_data | bytea\n> Indexes:\n> \"pg_toast_2600_index\" PRIMARY KEY, btree (chunk_id, chunk_seq)\n>\n+1.\n\n\n-- \nRegards,\nRafia Sabih\n\n\n",
"msg_date": "Fri, 3 May 2019 14:55:47 +0200",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: make \\d pg_toast.foo show its indices"
},
{
"msg_contents": "On Fri, May 03, 2019 at 02:55:47PM +0200, Rafia Sabih wrote:\n> On Mon, 22 Apr 2019 at 17:49, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > It's deliberate that \\dt doesn't show toast tables.\n> > \\d shows them, but doesn't show their indices.\n> >\n> > It seems to me that their indices should be shown, without having to think and\n> > know to query pg_index.\n> >\n> > postgres=# \\d pg_toast.pg_toast_2600\n> > TOAST table \"pg_toast.pg_toast_2600\"\n> > Column | Type\n> > ------------+---------\n> > chunk_id | oid\n> > chunk_seq | integer\n> > chunk_data | bytea\n> > Indexes:\n> > \"pg_toast_2600_index\" PRIMARY KEY, btree (chunk_id, chunk_seq)\n>\n> +1.\n\nThanks - what about also showing the associated non-toast table ?\n\npostgres=# \\d pg_toast.pg_toast_2620\nTOAST table \"pg_toast.pg_toast_2620\"\n Column | Type\n------------+---------\n chunk_id | oid\n chunk_seq | integer\n chunk_data | bytea\nFOR TABLE: \"pg_catalog.pg_trigger\"\nIndexes:\n \"pg_toast_2620_index\" PRIMARY KEY, btree (chunk_id, chunk_seq)\n\nThat could be displayed differently, perhaps in the header, but I think this is\nmore consistent with other display.\n\nJustin",
"msg_date": "Fri, 3 May 2019 09:27:22 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: make \\d pg_toast.foo show its indices"
},
{
"msg_contents": "On Fri, 3 May 2019 at 16:27, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, May 03, 2019 at 02:55:47PM +0200, Rafia Sabih wrote:\n> > On Mon, 22 Apr 2019 at 17:49, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > It's deliberate that \\dt doesn't show toast tables.\n> > > \\d shows them, but doesn't show their indices.\n> > >\n> > > It seems to me that their indices should be shown, without having to think and\n> > > know to query pg_index.\n> > >\n> > > postgres=# \\d pg_toast.pg_toast_2600\n> > > TOAST table \"pg_toast.pg_toast_2600\"\n> > > Column | Type\n> > > ------------+---------\n> > > chunk_id | oid\n> > > chunk_seq | integer\n> > > chunk_data | bytea\n> > > Indexes:\n> > > \"pg_toast_2600_index\" PRIMARY KEY, btree (chunk_id, chunk_seq)\n> >\n> > +1.\n>\n> Thanks - what about also showing the associated non-toast table ?\n>\nIMHO, what makes more sense is to show the name of associated toast\ntable in the \\dt+ of the normal table.\n\n\n-- \nRegards,\nRafia Sabih\n\n\n",
"msg_date": "Mon, 6 May 2019 09:13:52 +0200",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: make \\d pg_toast.foo show its indices"
},
{
"msg_contents": "Rafia Sabih <rafia.pghackers@gmail.com> writes:\n> On Fri, 3 May 2019 at 16:27, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> Thanks - what about also showing the associated non-toast table ?\n\n> IMHO, what makes more sense is to show the name of associated toast\n> table in the \\dt+ of the normal table.\n\nI'm not for that: it's useless information in at least 99.44% of cases.\n\nPossibly it is useful in the other direction as Justin suggests.\nNot sure though --- generally, if you're looking at a specific\ntoast table, you already know which table is its parent. But\nmaybe confirmation is a good thing.\n\nThat seems off-topic for this thread though. I agree with the\nstated premise that \\d on a toast table should show all the same\ninformation \\d on a regular table would.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 May 2019 11:58:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make \\d pg_toast.foo show its indices"
},
{
"msg_contents": "On Mon, May 06, 2019 at 09:13:52AM +0200, Rafia Sabih wrote:\n> On Fri, 3 May 2019 at 16:27, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Fri, May 03, 2019 at 02:55:47PM +0200, Rafia Sabih wrote:\n> > > On Mon, 22 Apr 2019 at 17:49, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > >\n> > > > It's deliberate that \\dt doesn't show toast tables.\n> > > > \\d shows them, but doesn't show their indices.\n> > > >\n> > > > It seems to me that their indices should be shown, without having to think and\n> > > > know to query pg_index.\n> > > >\n> > > > postgres=# \\d pg_toast.pg_toast_2600\n> > > > TOAST table \"pg_toast.pg_toast_2600\"\n> > > > Column | Type\n> > > > ------------+---------\n> > > > chunk_id | oid\n> > > > chunk_seq | integer\n> > > > chunk_data | bytea\n> > > > Indexes:\n> > > > \"pg_toast_2600_index\" PRIMARY KEY, btree (chunk_id, chunk_seq)\n> > >\n> > > +1.\n> >\n> > Thanks - what about also showing the associated non-toast table ?\n> >\n> IMHO, what makes more sense is to show the name of associated toast\n> table in the \\dt+ of the normal table.\n\nPerhaps ... but TOAST is an implementation detail, and I think it should rarely\nbe important to know the toast table for a given table.\n\nI think it's more useful to go the other way (at least), to answer questions\nwhen a pg_toast.* table shows up in queries like these:\n\n - SELECT relpages, relname FROM pg_class ORDER BY 1 DESC;\n - SELECT COUNT(1), relname FROM pg_class c JOIN pg_buffercache b ON b.relfilenode=c.relfilenode GROUP BY 2 ORDER BY 1 DESC LIMIT 9;\n\nJustin\n\n\n",
"msg_date": "Mon, 6 May 2019 11:22:51 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: make \\d pg_toast.foo show its indices ; and, \\d toast show its\n main table"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-06 11:58:18 -0400, Tom Lane wrote:\n> Not sure though --- generally, if you're looking at a specific\n> toast table, you already know which table is its parent. But\n> maybe confirmation is a good thing.\n\nI'm not convinced by that. I've certainly many a time written queries\nagainst pg_class to figure out which relation a toast table belongs\nto. E.g. after looking at the largest relations in the system, looking\nat pg_stat_*_tables, after seeing an error in the logs, etc.\n\n\n> That seems off-topic for this thread though. I agree with the\n> stated premise that \\d on a toast table should show all the same\n> information \\d on a regular table would.\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 May 2019 09:26:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: make \\d pg_toast.foo show its indices"
},
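For reference, the catalog lookup described here is a one-liner — pg_class.reltoastrelid links each table to its TOAST table, so the reverse mapping (shown with the pg_toast_2620 example from this thread) is:

```sql
-- Which table does pg_toast.pg_toast_2620 belong to?
SELECT c.oid::regclass AS main_table
FROM pg_class c
WHERE c.reltoastrelid = 'pg_toast.pg_toast_2620'::regclass;
```

Per the example earlier in the thread, this returns pg_trigger — exactly the information the proposed \d output would surface directly.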
{
"msg_contents": "On 2019-May-06, Justin Pryzby wrote:\n\n> Perhaps ... but TOAST is an implementation detail, and I think it should rarely\n> be important to know the toast table for a given table.\n\nI'm with Andres -- while it's admittedly a rare need, it is a real one.\n\nSometimes I wish for \\d++ which would display internal details too obscure\nto show in the regular \\d+, such as the toast table name.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 6 May 2019 12:33:59 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: make \\d pg_toast.foo show its indices ; and, \\d toast show its\n main table"
},
{
"msg_contents": "On Mon, May 6, 2019 at 12:26 PM Andres Freund <andres@anarazel.de> wrote:\n> I'm not convinced by that. I've certainly many a time wrote queries\n> against pg_class to figure out which relation a toast table belongs\n> to. E.g. after looking at the largest relations in the system, looking\n> at pg_stat_*_tables, after seeing an error in the logs, etc.\n\n+1. I think it would be great for \\d on the TOAST table to show this\ninformation.\n\n> > That seems off-topic for this thread though. I agree with the\n> > stated premise that \\d on a toast table should show all the same\n> > information \\d on a regular table would.\n>\n> +1\n\nThat premise seems like a good one, too.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 6 May 2019 13:52:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: make \\d pg_toast.foo show its indices"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Rafia Sabih <rafia.pghackers@gmail.com> writes:\n> > On Fri, 3 May 2019 at 16:27, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >> Thanks - what about also showing the associated non-toast table ?\n> \n> > IMHO, what makes more sense is to show the name of associated toast\n> > table in the \\dt+ of the normal table.\n> \n> I'm not for that: it's useless information in at least 99.44% of cases.\n\nI don't think I'd put it in \\dt+, but the toast table is still\npg_toast.pg_toast_{relOid}, right? What about showing the OID of the\ntable in the \\d output, eg:\n\n=> \\d comments\n Table \"public.comments\" (50788)\n Column | Type | Collation | Nullable | Default\n\netc?\n\n> Possibly it is useful in the other direction as Justin suggests.\n> Not sure though --- generally, if you're looking at a specific\n> toast table, you already know which table is its parent. But\n> maybe confirmation is a good thing.\n\nAs mentioned elsewhere, there are certainly times when you don't know\nthat info and if you're looking at the definition of a TOAST table,\nwhich isn't terribly complex, it seems like a good idea to go ahead and\ninclude the table it's the TOAST table for.\n\n> That seems off-topic for this thread though. I agree with the\n> stated premise that \\d on a toast table should show all the same\n> information \\d on a regular table would.\n\n+1\n\nThanks!\n\nStephen",
"msg_date": "Tue, 7 May 2019 11:16:00 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: make \\d pg_toast.foo show its indices"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Rafia Sabih <rafia.pghackers@gmail.com> writes:\n>>> IMHO, what makes more sense is to show the name of associated toast\n>>> table in the \\dt+ of the normal table.\n\n>> I'm not for that: it's useless information in at least 99.44% of cases.\n\n> I don't think I'd put it in \\dt+, but the toast table is still\n> pg_toast.pg_toast_{relOid}, right? What about showing the OID of the\n> table in the \\d output, eg:\n> => \\d comments\n> Table \"public.comments\" (50788)\n\nNot unless you want to break every regression test that uses \\d.\nInstability of the output is also a reason not to show the\ntoast table's name in the parent's \\d[+].\n\n>> Possibly it is useful in the other direction as Justin suggests.\n>> Not sure though --- generally, if you're looking at a specific\n>> toast table, you already know which table is its parent. But\n>> maybe confirmation is a good thing.\n\n> As mentioned elsewhere, there are certainly times when you don't know\n> that info and if you're looking at the definition of a TOAST table,\n> which isn't terribly complex, it seems like a good idea to go ahead and\n> include the table it's the TOAST table for.\n\nI'm not against putting that info into the result of \\d on the toast\ntable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 11:24:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make \\d pg_toast.foo show its indices"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> Rafia Sabih <rafia.pghackers@gmail.com> writes:\n> >>> IMHO, what makes more sense is to show the name of associated toast\n> >>> table in the \\dt+ of the normal table.\n> \n> >> I'm not for that: it's useless information in at least 99.44% of cases.\n> \n> > I don't think I'd put it in \\dt+, but the toast table is still\n> > pg_toast.pg_toast_{relOid}, right? What about showing the OID of the\n> > table in the \\d output, eg:\n> > => \\d comments\n> > Table \"public.comments\" (50788)\n> \n> Not unless you want to break every regression test that uses \\d.\n> Instability of the output is also a reason not to show the\n> toast table's name in the parent's \\d[+].\n\nSo we need a way to turn it off. That doesn't seem like it'd be hard to\nimplement and the information is certainly quite useful.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 7 May 2019 11:30:06 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: make \\d pg_toast.foo show its indices"
},
{
"msg_contents": "On Tue, May 7, 2019 at 11:30 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > Not unless you want to break every regression test that uses \\d.\n> > Instability of the output is also a reason not to show the\n> > toast table's name in the parent's \\d[+].\n>\n> So we need a way to turn it off. That doesn't seem like it'd be hard to\n> implement and the information is certainly quite useful.\n\nUgh. It's not really worth it if we have to go to such lengths.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 7 May 2019 16:44:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: make \\d pg_toast.foo show its indices"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Tue, May 7, 2019 at 11:30 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > > Not unless you want to break every regression test that uses \\d.\n> > > Instability of the output is also a reason not to show the\n> > > toast table's name in the parent's \\d[+].\n> >\n> > So we need a way to turn it off. That doesn't seem like it'd be hard to\n> > implement and the information is certainly quite useful.\n> \n> Ugh. It's not really worth it if we have to go to such lengths.\n\nI don't think I agree.. We've gone to pretty great lengths to have\nthings that can be turned on and off for explain because they're useful\nto have but not something that's predictible in the regression tests.\nThis doesn't strike me as all that different (indeed, if anything it\nseems like it should be less of an issue since it's entirely client\nside...).\n\nHaving our test framework deny us useful features just strikes me as\nbizarre.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 7 May 2019 16:48:09 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: make \\d pg_toast.foo show its indices"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> Having our test framework deny us useful features just strikes me as\n> bizarre.\n\nThis is presuming that it's useful, which is debatable IMO.\nI think most people will find it useless noise almost all of the time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 17:49:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make \\d pg_toast.foo show its indices"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > Having our test framework deny us useful features just strikes me as\n> > bizarre.\n> \n> This is presuming that it's useful, which is debatable IMO.\n> I think most people will find it useless noise almost all of the time.\n\nAlright, maybe I'm not the best representation of our user base, but I\nsure type 'select oid,* from pg_class where relname = ...' with some\nregularity, mostly to get the oid to then go do something else. Having\nthe relfilenode would be nice too, now that I think about it, and\nreltuples. There's ways to get *nearly* everything that's in pg_class\nand friends out of various \\d incantations, but not quite everything,\nwhich seems unfortunate.\n\nIn any case, I can understand an argument that the code it requires is\ntoo much to maintain for a relatively minor feature (though it hardly\nseems like it would be...) or that it would be confusing or unhelpful to\nusers (aka \"noise\") much of the time, so I'll leave it to others to\ncomment on if they think any of these ideas be a useful addition or not.\n\nI just don't think we should be voting down a feature because it'd take\na bit of extra effort to make our regression tests work with it, which\nis all I was intending to get at here.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 7 May 2019 18:03:29 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: make \\d pg_toast.foo show its indices"
},
{
"msg_contents": "On Tue, May 7, 2019 at 6:03 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Alright, maybe I'm not the best representation of our user base, but I\n> sure type 'select oid,* from pg_class where relname = ...' with some\n> regularity, mostly to get the oid to then go do something else. Having\n> the relfilenode would be nice too, now that I think about it, and\n> reltuples. There's ways to get *nearly* everything that's in pg_class\n> and friends out of various \\d incantations, but not quite everything,\n> which seems unfortunate.\n>\n> In any case, I can understand an argument that the code it requires is\n> too much to maintain for a relatively minor feature (though it hardly\n> seems like it would be...) or that it would be confusing or unhelpful to\n> users (aka \"noise\") much of the time, so I'll leave it to others to\n> comment on if they think any of these ideas be a useful addition or not.\n>\n> I just don't think we should be voting down a feature because it'd take\n> a bit of extra effort to make our regression tests work with it, which\n> is all I was intending to get at here.\n\nI think it's unjustifiable to show this in \\d output. But maybe in\n\\d+ output it could be justified, or perhaps in the \\d++ which I seem\nto recall Alvaro proposing someplace recently.\n\nI think if we're going to show it, it should be on its own line, with\na clear label, not just in the table header as you proposed.\nOtherwise, people won't know what it is.\n\nI suppose the work we'd need to make it work with the regression tests\nis no worse than the hide_tableam crock which Andres recently added.\nThat is certainly a crock, but I can testify that it's a very useful\ncrock for zheap development.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 8 May 2019 10:18:13 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: make \\d pg_toast.foo show its indices"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I think it's unjustifiable to show this in \\d output. But maybe in\n> \\d+ output it could be justified, or perhaps in the \\d++ which I seem\n> to recall Alvaro proposing someplace recently.\n\nYeah, if we're going to do that (show a table's toast table) I would\nwant to bury it in \\d++ or some other not-currently-used notation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 May 2019 10:28:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make \\d pg_toast.foo show its indices"
},
{
"msg_contents": "I'm continuing this thread with an additional change to slash dee for\npartitioned indexes.\n\npostgres=# \\d ttz_i_idx\n Partitioned index \"public.ttz_i_idx\"\n Column | Type | Key? | Definition \n--------+---------+------+------------\n i | integer | yes | i\nbtree, for table \"public.ttz\"\nNumber of partitions: 2 (Use \\d+ to list them.)\n\npostgres=# \\d+ ttz_i_idx\n Partitioned index \"public.ttz_i_idx\"\n Column | Type | Key? | Definition | Storage | Stats target \n--------+---------+------+------------+---------+--------------\n i | integer | yes | i | plain | \nbtree, for table \"public.ttz\"\nPartitions: ttz1_i_idx,\n ttz2_i_idx, PARTITIONED\n\nShowing the list of index partitions is probably not frequently useful, but\nconsider the case of non-default names, for example due to truncation.\n\nI didn't update regression output; note that this patch also, by chance, causes\ntablespace of partitioned indexes to be output, which I think is good and an\noversight that it isn't currently shown.\n\nI added CF entry and including previous two patches for CFBOT purposes.\n\nRecap: Tom, Andreas, Robert, Stephen and I agree that \\d toast should show the\nmain table. Rafia and Alvaro think that \\d on the main table should (also?)\nshow its toast.\n\nJustin",
"msg_date": "Sun, 23 Jun 2019 09:25:47 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: make \\d pg_toast.foo show its indices ; and, \\d toast show its\n main table ; and \\d relkind=I show its partitions (and tablespace)"
},
{
"msg_contents": "My previous patch missed a 1-line hunk, so resending.",
"msg_date": "Thu, 27 Jun 2019 18:02:02 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: make \\d pg_toast.foo show its indices ; and, \\d toast show its\n main table ; and \\d relkind=I show its partitions (and tablespace)"
},
{
"msg_contents": "\nThere are 3 independent patches associated to one thread and one CF entry.\n\n\n*** About toast table v3:\n\nPatch applies cleanly, compiles, works for me.\n\nISTM that the he query should be unambiguous: pg_catalog.pg_class instead \nof pg_class, add an alias (eg c), use c.FIELD to access an attribute. In \nits current form \"pg_class\" could resolve to another table depending on \nthe search path.\n\nC style is broken. On \"if () {\", brace must be on next line. On \"1 != \nPQntuples(result)\", I would exchange operands.\n\nPQclear must be called on the main path.\n\nIf the table name contains a \", the result looks awkward:\n\n \tFor table: \"public.foo\"bla\"\n\nI'm wondering whether some escaping should be done. Well, it is not done \nfor other simular entries, so probably this is bad but okay:-)\n\nThere are no tests:-(\n\n\n*** About toast index v3\n\nPatch applies cleanly, compiles, works for me.\n\nThere are no tests:-(\n\n*** About the next one, v4\n\nPatch applies cleanly, compiles. Not sure how to test it.\n\n\"switch (*PQgetvalue(result, i, 2))\": I understand that relkind is a must \nadmit I do not like this style much, an intermediate variable would \nimprove readability. Also, a simple if instead of a swich might be more \nappropriate, and be closer to the previous implementation.\n\nThere are no tests:-(\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 30 Jun 2019 10:26:28 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: make \\d pg_toast.foo show its indices ; and, \\d toast show its\n main table ; and \\d relkind=I show its partitions (and tablespace)"
},
{
"msg_contents": "Sorry, I missed this until now.\n\nOn Sun, Jun 30, 2019 at 10:26:28AM +0200, Fabien COELHO wrote:\n> *** About toast table v3:\n> \n> Patch applies cleanly, compiles, works for me.\n> \n> ISTM that the he query should be unambiguous: pg_catalog.pg_class instead of\n> pg_class, add an alias (eg c), use c.FIELD to access an attribute. In its\n> current form \"pg_class\" could resolve to another table depending on the\n> search path.\n\nThanks for noticing, fixed.\n\n> C style is broken. On \"if () {\", brace must be on next line. On \"1 !=\n> PQntuples(result)\", I would exchange operands.\n> \n> PQclear must be called on the main path.\n\nDone\n\n> There are no tests:-(\n\n\"show-childs\" caused plenty of tests fail; actually..it looks like my previous\npatch duplicated \"tablespace\" line for indices (and I managed to not notice the\noriginal one, and claimed my patch fixed that omission, sigh). I added test\nthat it shows its partitions, too.\n\nIt seems like an obviously good idea to add tests for \\d toast; it's not clear\nto me how to do run \\d for a toast table, which is named after a user table's\nOID... (I tested that \\gexec doesn't work for this).\n\nSo for now I used \\d pg_toast.pg_toast_2619\n\n> If the table name contains a \", the result looks awkward:\n> \n> \tFor table: \"public.foo\"bla\"\n> \n> I'm wondering whether some escaping should be done. Well, it is not done for\n> other simular entries, so probably this is bad but okay:-)\n\nLeaving this for another commit-day.\n\nThanks for testing.\n\nJustin",
"msg_date": "Tue, 16 Jul 2019 00:01:24 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: make \\d pg_toast.foo show its indices ; and, \\d toast show its\n main table ; and \\d relkind=I show its partitions (and tablespace)"
},
{
"msg_contents": "I realized that the test added to show-childs patch was listing partitioned\ntables not indices..fixed.",
"msg_date": "Tue, 16 Jul 2019 09:06:36 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: make \\d pg_toast.foo show its indices ; and, \\d toast show its\n main table ; and \\d relkind=I show its partitions"
},
{
"msg_contents": "Find attached updated patches which also work against old servers.\n\n1) avoid ::regnamespace; 2) don't PQgetvalue() fields which don't exist and then crash.",
"msg_date": "Wed, 17 Jul 2019 01:26:34 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: make \\d pg_toast.foo show its indices ; and, \\d toast show its\n main table ; and \\d relkind=I show its partitions"
},
{
"msg_contents": "\n> Find attached updated patches which also work against old servers.\n\nI can't check that for sure.\n\n* About toast table addition v7:\n\nPatch applies cleanly, compiles, make check ok, no doc.\n\nThis addition show the main table of a toast table, which is useful.\n\nField relnamespace oid in pg_class appears with pg 7.3, maybe it would be \nappropriate to guard agains older versions, with \"pset.sversion >= 70300\". \nIt seems that there are other unguarded instances in \"describe.c\", so \nmaybe this is considered too old.\n\nTest is ok.\n\n* About toast index v7:\n\nPatch applies cleanly on top of previous, compiles, make check ok, no doc.\n\nThis patch simply enables an existing query on toast tables so as to show \ncorresponding indices.\n\nTest is ok.\n\n* About toast part v7.\n\nPatch applies cleanly, compiles, make check ok, no doc.\n\nIt gives the partition info about an index as it is shown about a table, \nwhich is useful.\n\nThere are some changes in the query on older systems, which seem harmless.\nThe code is rather simplified because a special case is removed, which is \na good thing.\n\nTest is ok.\n\nMarked as ready.\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 23 Jul 2019 07:19:00 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: make \\d pg_toast.foo show its indices ; and, \\d toast show its\n main table ; and \\d relkind=I show its partitions"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> Field relnamespace oid in pg_class appears with pg 7.3, maybe it would be \n> appropriate to guard agains older versions, with \"pset.sversion >= 70300\". \n> It seems that there are other unguarded instances in \"describe.c\", so \n> maybe this is considered too old.\n\nPer the comment at the head of describe.c, we only expect it to work\nback to 7.4. I tested against a 7.4 server, the modified queries\nseem fine.\n\n> Marked as ready.\n\nPushed with minor fiddling with the toast-table code, and rather\nmore significant hacking on the partitioned-index code. Notably,\n0003 had broken output of Tablespace: footers for everything except\nindexes. It's possibly not Justin's fault that that wasn't noticed,\nbecause we had no regression tests covering it :-(. We do now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Jul 2019 17:08:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make \\d pg_toast.foo show its indices ;\n and, \\d toast show its main table ; and \\d relkind=I show its partitions"
},
{
"msg_contents": "\n> Pushed with minor fiddling with the toast-table code, and rather\n> more significant hacking on the partitioned-index code. Notably,\n> 0003 had broken output of Tablespace: footers for everything except\n> indexes.\n\nArgh, sorry for the review miss.\n\n> It's possibly not Justin's fault that that wasn't noticed,\n> because we had no regression tests covering it :-(. We do now.\n\nThanks.\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 23 Jul 2019 22:06:58 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: make \\d pg_toast.foo show its indices ; and, \\d toast show its\n main table ; and \\d relkind=I show its partitions"
}
] |