[ { "msg_contents": "In a very common operation of accidentally specifying a recycled\nsegment, pg_waldump often returns the following obscure message.\n\n$ pg_waldump 00000001000000000000002D\npg_waldump: fatal: could not find a valid record after 0/2D000000\n\nThe more detailed message is generated internally and we can use it.\nThat looks like the following.\n\n$ pg_waldump 00000001000000000000002D\npg_waldump: fatal: unexpected pageaddr 0/24000000 in log segment 00000001000000000000002D, offset 0\n\nIs it worth doing?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 04 Jun 2021 17:35:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "detailed error message of pg_waldump" }, { "msg_contents": "On Fri, Jun 4, 2021 at 5:35 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> In a very common operation of accidentally specifying a recycled\n> segment, pg_waldump often returns the following obscure message.\n>\n> $ pg_waldump 00000001000000000000002D\n> pg_waldump: fatal: could not find a valid record after 0/2D000000\n>\n> The more detailed message is generated internally and we can use it.\n> That looks like the following.\n>\n> $ pg_waldump 00000001000000000000002D\n> pg_waldump: fatal: unexpected pageaddr 0/24000000 in log segment 00000001000000000000002D, offset 0\n>\n> Is it worth doing?\n\nPerhaps we need both? The current message describes where the error\nhappened and the message internally generated describes the details.\nIt seems to me that both are useful. 
For example, if we find an error\nduring XLogReadRecord(), we show both as follows:\n\n if (errormsg)\n fatal_error(\"error in WAL record at %X/%X: %s\",\n LSN_FORMAT_ARGS(xlogreader_state->ReadRecPtr),\n errormsg);\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 16 Jun 2021 16:52:11 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: detailed error message of pg_waldump" }, { "msg_contents": "Thanks!\n\nAt Wed, 16 Jun 2021 16:52:11 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> On Fri, Jun 4, 2021 at 5:35 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > In a very common operation of accidentally specifying a recycled\n> > segment, pg_waldump often returns the following obscure message.\n> >\n> > $ pg_waldump 00000001000000000000002D\n> > pg_waldump: fatal: could not find a valid record after 0/2D000000\n> >\n> > The more detailed message is generated internally and we can use it.\n> > That looks like the following.\n> >\n> > $ pg_waldump 00000001000000000000002D\n> > pg_waldump: fatal: unexpected pageaddr 0/24000000 in log segment 00000001000000000000002D, offset 0\n> >\n> > Is it worth doing?\n> \n> Perhaps we need both? The current message describes where the error\n> happened and the message internally generated describes the details.\n> It seems to me that both are useful. For example, if we find an error\n> during XLogReadRecord(), we show both as follows:\n> \n> if (errormsg)\n> fatal_error(\"error in WAL record at %X/%X: %s\",\n> LSN_FORMAT_ARGS(xlogreader_state->ReadRecPtr),\n> errormsg);\n\nYeah, I thought that it might be a bit verbose and lengthy but actually\nwe have another place where we do that. One more point is whether we\nhave a case where first_record is invalid but errormsg is NULL\nthere. 
WALDumpReadPage immediately exits so we should always have a\nmessage in that case according to the comment in ReadRecord.\n\n> * We only end up here without a message when XLogPageRead()\n> * failed - in that case we already logged something. In\n> * StandbyMode that only happens if we have been triggered, so we\n> * shouldn't loop anymore in that case.\n\nSo that can be an assertion.\n\nNow the message looks like this.\n\n$ pg_waldump /home/horiguti/data/data_work/pg_wal/000000020000000000000010 \npg_waldump: fatal: could not find a valid record after 0/0: unexpected pageaddr 0/9000000 in log segment 000000020000000000000010, offset 0\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 16 Jun 2021 17:35:58 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: detailed error message of pg_waldump" }, { "msg_contents": "On Wed, Jun 16, 2021 at 5:36 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Thanks!\n>\n> At Wed, 16 Jun 2021 16:52:11 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n> > On Fri, Jun 4, 2021 at 5:35 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > In a very common operation of accidentally specifying a recycled\n> > > segment, pg_waldump often returns the following obscure message.\n> > >\n> > > $ pg_waldump 00000001000000000000002D\n> > > pg_waldump: fatal: could not find a valid record after 0/2D000000\n> > >\n> > > The more detailed message is generated internally and we can use it.\n> > > That looks like the following.\n> > >\n> > > $ pg_waldump 00000001000000000000002D\n> > > pg_waldump: fatal: unexpected pageaddr 0/24000000 in log segment 00000001000000000000002D, offset 0\n> > >\n> > > Is it worth doing?\n> >\n> > Perhaps we need both? The current message describes where the error\n> > happened and the message internally generated describes the details.\n> > It seems to me that both are useful. 
For example, if we find an error\n> > during XLogReadRecord(), we show both as follows:\n> >\n> > if (errormsg)\n> > fatal_error(\"error in WAL record at %X/%X: %s\",\n> > LSN_FORMAT_ARGS(xlogreader_state->ReadRecPtr),\n> > errormsg);\n>\n> Yeah, I thought that it might be a bit verbose and lengthy but actually\n> we have another place where we do that. One more point is whether we\n> have a case where first_record is invalid but errormsg is NULL\n> there. WALDumpReadPage immediately exits so we should always have a\n> message in that case according to the comment in ReadRecord.\n>\n> > * We only end up here without a message when XLogPageRead()\n> > * failed - in that case we already logged something. In\n> > * StandbyMode that only happens if we have been triggered, so we\n> > * shouldn't loop anymore in that case.\n>\n> So that can be an assertion.\n>\n> Now the message looks like this.\n>\n> $ pg_waldump /home/horiguti/data/data_work/pg_wal/000000020000000000000010\n> pg_waldump: fatal: could not find a valid record after 0/0: unexpected pageaddr 0/9000000 in log segment 000000020000000000000010, offset 0\n>\n\nThank you for updating the patch!\n\n+ *\n+ * The returned pointer (or *errormsg) points to an internal buffer that's\n+ * valid until the next call to XLogFindNextRecord or XLogReadRecord.\n */\n\nThe comment of XLogReadRecord() also has a similar description. Should\nwe update it as well?\n\nBTW is this patch registered to the current commitfest? I could not find it.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 5 Jul 2021 16:04:27 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: detailed error message of pg_waldump" } ]
[ { "msg_contents": ">Just a note here. After examining the core dump I did notice something.\n\n>While in XidInMVCCSnapshot call the snapshot->suboverflowed is set true\n>although subxip == NULL and subxcnt == 0. As far as I understand,\n>snapshot->suboverflowed is set true in the GetRunningTransactionData\n>call.\n\n>And then I decided to put elog around CurrentRunningXacts->subxcnt's\n>assignment.\n>diff --git a/src/backend/storage/ipc/procarray.c\n>b/src/backend/storage/ipc/procarray.c\n>index 42a89fc5dc9..3d2db02f580 100644\n>--- a/src/backend/storage/ipc/procarray.c\n>+++ b/src/backend/storage/ipc/procarray.c\n>@@ -2781,6 +2781,9 @@ GetRunningTransactionData(void)\n> * increases if slots do.\n> */\n\n>+ if (suboverflowed)\n>+ elog(WARNING, \" >>> CurrentRunningXacts->subxid_overflow\n>is true\");\n>+\n> CurrentRunningXacts->xcnt = count - subcount;\n> CurrentRunningXacts->subxcnt = subcount;\n> CurrentRunningXacts->subxid_overflow = suboverflowed;\n\n>... and did get a bunch of messages. I.e. subxid_overflow is set true\n>very often.\n\n>I've increased the value of PGPROC_MAX_CACHED_SUBXIDS. Once it becomes\n>more than 120 there are no messages and no failed assertions are\n>provided any more.\n\nPlease, avoid using decimal based values.\n\n128 is a multiple of 64.\n\nSee :\n\nhttps://github.com/trevstanhope/scratch/blob/master/C/docs/O%27Reilly%20-%20Practical%20C%20Programming%203rd%20Edition.pdf\n\n15.6.1 The Power of Powers of 2\n\nregards,\n\nRanier Vilela", "msg_date": "Fri, 4 Jun 2021 11:47:52 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": ">\n> Please, avoid using decimal based values.\n>\n> 128 is a multiple of 64.\n>\nIt's true that 128 is better to use than 120 but the main problem is not in\nthe value but in the fact we never get\nCurrentRunningXacts->subxid_overflow = suboverflowed; with value more than\n120. This solves the problem but it doesn't seem the right way to fix the\nissue. Instead it's better to handle the suboverflowed state, which is legit\nin itself, rather than crashing on the Assert. 
So the discussion of\n\"better\" value doesn't seem related to the problem. It is for demonstration\nonly.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Fri, 4 Jun 2021 19:07:02 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Fri, Jun 4, 2021 at 12:07 PM Pavel Borisov <pashkin.elfe@gmail.com>\nwrote:\n\n> Please, avoid using decimal based values.\n>>\n>> 128 is a multiple of 64.\n>>\n> It's true that 128 is better to use than 120 but the main problem is not\n> in the value but in the fact we never get\n> CurrentRunningXacts->subxid_overflow = suboverflowed; with value more\n> than 120. 
This solves the problem but it doesn't seem the right way to fix\n> the issue.\n>\nIt seems to me a solution too.\n\nInstead it's better to handle the suboverflowed state, which is legit\n> in itself, rather than crashing on the Assert.\n>\nOf course it would be great to find the root of the problem.\n\n\n> So the discussion of \"better\" value doesn't seem related to the problem.\n> It is for demonstration only.\n>\nIMHO, you could propose a patch, documenting this whole situation and\nproposing this workaround.\nI've been studying commits, and on several occasions problems have been\nfixed like this.\nBut what's important is documenting the problem.\n\nbest regards,\nRanier Vilela", "msg_date": "Fri, 4 Jun 2021 15:08:22 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" } ]
[ { "msg_contents": "Here's a completely trivial command to turn off echoing of a couple of\nWindows commands pg_upgrade writes to cleanup scripts. This makes them\nbehave more like the Unix equivalents.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Fri, 4 Jun 2021 12:10:47 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "pg_upgrade don't echo windows commands" }, { "msg_contents": "On Fri, Jun 04, 2021 at 12:10:47PM -0400, Andrew Dunstan wrote:\n> Here's a completely trivial command to turn off echoing of a couple of\n> Windows commands pg_upgrade writes to cleanup scripts. This makes them\n> behave more like the Unix equivalents.\n\nWhy not. Perhaps you should add a comment to mention that prefixing\nthose commands with @ disables echo. That's not obvious for the reader.\n--\nMichael", "msg_date": "Mon, 26 Jul 2021 16:13:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade don't echo windows commands" } ]
[ { "msg_contents": "Hi all,\n\nAs said in $subject, installcheck fails once I set up a server with\ndefault_toast_compression = lz4 in the test indirect_toast. Please\nsee the attached for the diffs.\n\nThe issue is that the ordering of the tuples returned by UPDATE\nRETURNING is not completely stable. Perhaps we should just enforce\nthe order of those tuples by wrapping the DMLs into a CTE and use an\nORDER BY in the outer query.\n\nOther ideas?\n--\nMichael", "msg_date": "Sat, 5 Jun 2021 09:20:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "installcheck failure in indirect_toast with\n default_toast_compression = lz4" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> The issue is that the ordering of the tuples returned by UPDATE\n> RETURNING is not completely stable. Perhaps we should just enforce\n> the order of those tuples by wrapping the DMLs into a CTE and use an\n> ORDER BY in the outer query.\n\nHmm. I'm not very clear on what that test is intending to test,\nbut maybe it's dependent on pglz compression, in which case the\nright fix would be to force default_toast_compression = pglz\nfor the duration of the test.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Jun 2021 20:28:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: installcheck failure in indirect_toast with\n default_toast_compression = lz4" }, { "msg_contents": "On Fri, Jun 04, 2021 at 08:28:59PM -0400, Tom Lane wrote:\n> Hmm. I'm not very clear on what that test is intending to test,\n> but maybe it's dependent on pglz compression, in which case the\n> right fix would be to force default_toast_compression = pglz\n> for the duration of the test.\n\nSupport for external toast datums, as of 36820250, so that should be\nindependent of the compression method used, no? 
I was just sticking\nsome checks based on pg_column_compression() all over the test, and\nall the values are correctly getting compressed and decompressed as\nfar as I can see.\n\nI got to wonder whether this is not pointing at an actual issue, and\nwhether it may be better to not make this test rely only on pglz, but\nI have not put much thoughts into it TBH.\n--\nMichael", "msg_date": "Sat, 5 Jun 2021 10:41:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: installcheck failure in indirect_toast with\n default_toast_compression = lz4" }, { "msg_contents": "On Sat, Jun 05, 2021 at 09:20:43AM +0900, Michael Paquier wrote:\n> As said in $subject, installcheck fails once I set up a server with\n> default_toast_compression = lz4 in the test indirect_toast. Please\n> see the attached for the diffs.\n> \n> The issue is that the ordering of the tuples returned by UPDATE\n> RETURNING is not completely stable. Perhaps we should just enforce\n> the order of those tuples by wrapping the DMLs into a CTE and use an\n> ORDER BY in the outer query.\n\nSee also a prior discussion:\nhttps://www.postgresql.org/message-id/CAFiTN-sm8Dpx3q92g5ohTdZu1_wKsw96-KiEMf3SoK8DhRPfWw%40mail.gmail.com\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 6 Jun 2021 15:52:57 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: installcheck failure in indirect_toast with\n default_toast_compression = lz4" }, { "msg_contents": "On Sun, Jun 06, 2021 at 03:52:57PM -0500, Justin Pryzby wrote:\n> See also a prior discussion:\n> https://www.postgresql.org/message-id/CAFiTN-sm8Dpx3q92g5ohTdZu1_wKsw96-KiEMf3SoK8DhRPfWw%40mail.gmail.com\n\nAh, thanks for the reference. So this was discussed but not actually\nfixed. I can see the data getting stored inline rather than\nexternalized with lz4. 
So, as the goal of the test is to stress the\ncase of externalized values, we'd better make sure that pglz is used.\nI'll push something doing that with more comments added to the test.\n--\nMichael", "msg_date": "Mon, 7 Jun 2021 18:03:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: installcheck failure in indirect_toast with\n default_toast_compression = lz4" } ]
[ { "msg_contents": "Hi,\n\nDuring a recent cleanup of brin_minmax_multi.c I noticed a few typos.\nI've attached a patch to fix these.\n\nI originally buried this in [1], but think it's likely better to have\na proper thread for it.\n\nThe patch does change some comments which reference parameter or\nvariable names. I hope that I've not misunderstood something. It\nwould be good if Tomas could have a look over it just in case I have.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvqPKwbUn8Z74BZrDmLoZzcP4s%2BLZTxNuUi3P3OxbieT1Q%40mail.gmail.com", "msg_date": "Sat, 5 Jun 2021 16:33:23 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Fix a few typos in brin_minmax_multi.c" }, { "msg_contents": "On Sat, 5 Jun 2021 at 16:33, David Rowley <dgrowleyml@gmail.com> wrote:\n> During a recent cleanup of brin_minmax_multi.c I noticed a few typos.\n> I've attached a patch to fix these.\n\nI ended up finding a few more in mcv.c and push them.\n\nDavid\n\n\n", "msg_date": "Thu, 10 Jun 2021 20:14:57 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix a few typos in brin_minmax_multi.c" }, { "msg_contents": "On 6/10/21 10:14 AM, David Rowley wrote:\n> On Sat, 5 Jun 2021 at 16:33, David Rowley <dgrowleyml@gmail.com> wrote:\n>> During a recent cleanup of brin_minmax_multi.c I noticed a few typos.\n>> I've attached a patch to fix these.\n> \n> I ended up finding a few more in mcv.c and push them.\n> \n\nThanks!\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 10 Jun 2021 13:51:01 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Fix a few typos in brin_minmax_multi.c" } ]
[ { "msg_contents": "Hi,\nI have observed the following behavior with PostgreSQL 13.3.\n\nThe WAL sender process sends approximately 500 keepalive messages per\nsecond to pg_recvlogical.\nThese keepalive messages are totally un-necessary.\nKeepalives should be sent only if there is no network traffic and a certain\ntime (half of wal_sender_timeout) passes.\nThese keepalive messages not only choke the network but also impact the\nperformance of the receiver,\nbecause the receiver has to process the received message and then decide\nwhether to reply to it or not.\nThe receiver remains busy doing this activity 500 times a second.\n\nOn investigation it is revealed that the following code fragment in\nfunction WalSndWaitForWal in file walsender.c is responsible for sending\nthese frequent keepalives:\n\nif (MyWalSnd->flush < sentPtr &&\n MyWalSnd->write < sentPtr &&\n !waiting_for_ping_response)\n WalSndKeepalive(false);\n\nwaiting_for_ping_response is normally false, and flush and write will\nalways be less than sentPtr (Receiver's LSNs cannot advance server's LSNs)\n\nHere are the steps to reproduce:\n1. Start the database server.\n2. Setup pgbench tables.\n ./pgbench -i -s 50 -h 192.168.5.140 -p 7654 -U abbas postgres\n3. Create a logical replication slot.\n SELECT * FROM pg_create_logical_replication_slot('my_slot',\n'test_decoding');\n4. Start pg_recvlogical.\n ./pg_recvlogical --slot=my_slot --verbose -d postgres -h 192.168.5.140 -p\n7654 -U abbas --start -f -\n5. Run pgbench\n ./pgbench -U abbas -h 192.168.5.140 -p 7654 -c 2 -j 2 -T 1200 -n postgres\n6. 
Observe network traffic to find the keepalive flood.\n\nAlternately modify the above code fragment to see approx 500 keepalive log\nmessages a second\n\nif (MyWalSnd->flush < sentPtr &&\n MyWalSnd->write < sentPtr &&\n !waiting_for_ping_response)\n{\n elog(LOG, \"[Keepalive] wrt ptr %X/%X snt ptr %X/%X \",\n (uint32) (MyWalSnd->write >> 32),\n (uint32) MyWalSnd->write,\n (uint32) (sentPtr >> 32),\n (uint32) sentPtr);\n WalSndKeepalive(false);\n}\n\nOpinions?\n\n-- \n-- \n*Abbas*\nSenior Architect\n\n\nPh: 92.334.5100153\nSkype ID: gabbasb\nedbpostgres.com\n\n*Follow us on Twitter*\n@EnterpriseDB", "msg_date": "Sat, 5 Jun 2021 16:08:00 +0500", "msg_from": "Abbas Butt <abbas.butt@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Logical replication keepalive flood" }, { "msg_contents": "At Sat, 5 Jun 2021 16:08:00 +0500, Abbas Butt <abbas.butt@enterprisedb.com> wrote in \n> Hi,\n> I have observed the following behavior with PostgreSQL 13.3.\n> \n> The WAL sender process sends approximately 500 keepalive messages per\n> second to pg_recvlogical.\n> These keepalive messages are totally un-necessary.\n> Keepalives should be sent only if there is no network traffic and a certain\n> time (half of wal_sender_timeout) passes.\n> These keepalive messages not only choke the network but also impact the\n> performance of the receiver,\n> because the receiver has to process the received message and then decide\n> whether to reply to it or not.\n> The receiver remains busy doing this activity 500 times a second.\n\nI can reproduce the problem.\n\n> On investigation it is revealed that the following code fragment in\n> function WalSndWaitForWal in file walsender.c is responsible for sending\n> these frequent keepalives:\n> \n> if (MyWalSnd->flush < sentPtr &&\n> 
MyWalSnd->write < sentPtr &&\n> !waiting_for_ping_response)\n> WalSndKeepalive(false);\n\nThe immediate cause is pg_recvlogical doesn't send a reply before\nsleeping. Currently it sends replies at 10-second intervals.\n\nSo the attached first patch stops the flood.\n\nThat said, I don't think it is intended that logical walsender\nsends keep-alive packets with such a high frequency. It happens\nbecause walsender actually doesn't wait at all because it waits on\nWL_SOCKET_WRITEABLE because the keep-alive packet inserted just before\nis always pending.\n\nSo as the attached second, we should try to flush out the keep-alive\npackets if possible before checking pg_is_send_pending().\n\nEither one can \"fix\" the issue, but I think each of them is reasonable by\nitself.\n\nAny thoughts, suggestions and/or opinions?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 07 Jun 2021 16:23:53 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Mon, Jun 7, 2021 at 12:54 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Sat, 5 Jun 2021 16:08:00 +0500, Abbas Butt <abbas.butt@enterprisedb.com> wrote in\n> > Hi,\n> > I have observed the following behavior with PostgreSQL 13.3.\n> >\n> > The WAL sender process sends approximately 500 keepalive messages per\n> > second to pg_recvlogical.\n> > These keepalive messages are totally un-necessary.\n> > Keepalives should be sent only if there is no network traffic and a certain\n> > time (half of wal_sender_timeout) passes.\n> > These keepalive messages not only choke the network but also impact the\n> > performance of the receiver,\n> > because the receiver has to process the received message and then decide\n> > whether to reply to it or not.\n> > The receiver remains busy doing this activity 500 times a second.\n>\n> I can reproduce the problem.\n>\n> > On 
investigation it is revealed that the following code fragment in\n> > function WalSndWaitForWal in file walsender.c is responsible for sending\n> > these frequent keepalives:\n> >\n> > if (MyWalSnd->flush < sentPtr &&\n> > MyWalSnd->write < sentPtr &&\n> > !waiting_for_ping_response)\n> > WalSndKeepalive(false);\n>\n> The immediate cause is pg_recvlogical doesn't send a reply before\n> sleeping. Currently it sends replies every 10 seconds intervals.\n>\n\nYeah, but one can use -s option to send it at lesser intervals.\n\n> So the attached first patch stops the flood.\n>\n\nI am not sure sending feedback every time before sleep is a good idea,\nthis might lead to unnecessarily sending more messages. Can we try by\nusing one-second interval with -s option to see how it behaves? As a\nmatter of comparison the similar logic in workers.c uses\nwal_receiver_timeout to send such an update message rather than\nsending it every time before sleep.\n\n> That said, I don't think it is not intended that logical walsender\n> sends keep-alive packets with such a high frequency. It happens\n> because walsender actually doesn't wait at all because it waits on\n> WL_SOCKET_WRITEABLE because the keep-alive packet inserted just before\n> is always pending.\n>\n> So as the attached second, we should try to flush out the keep-alive\n> packets if possible before checking pg_is_send_pending().\n>\n\n/* Send keepalive if the time has come */\n WalSndKeepaliveIfNecessary();\n\n+ /* We may have queued a keep alive packet. flush it before sleeping. 
*/\n+ pq_flush_if_writable();\n\nWe already call pq_flush_if_writable() from WalSndKeepaliveIfNecessary\nafter sending the keep-alive message, so not sure how this helps?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 7 Jun 2021 15:43:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Mon, Jun 7, 2021 at 3:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Mon, Jun 7, 2021 at 12:54 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Sat, 5 Jun 2021 16:08:00 +0500, Abbas Butt <\n> abbas.butt@enterprisedb.com> wrote in\n> > > Hi,\n> > > I have observed the following behavior with PostgreSQL 13.3.\n> > >\n> > > The WAL sender process sends approximately 500 keepalive messages per\n> > > second to pg_recvlogical.\n> > > These keepalive messages are totally un-necessary.\n> > > Keepalives should be sent only if there is no network traffic and a\n> certain\n> > > time (half of wal_sender_timeout) passes.\n> > > These keepalive messages not only choke the network but also impact the\n> > > performance of the receiver,\n> > > because the receiver has to process the received message and then\n> decide\n> > > whether to reply to it or not.\n> > > The receiver remains busy doing this activity 500 times a second.\n> >\n> > I can reproduce the problem.\n> >\n> > > On investigation it is revealed that the following code fragment in\n> > > function WalSndWaitForWal in file walsender.c is responsible for\n> sending\n> > > these frequent keepalives:\n> > >\n> > > if (MyWalSnd->flush < sentPtr &&\n> > > MyWalSnd->write < sentPtr &&\n> > > !waiting_for_ping_response)\n> > > WalSndKeepalive(false);\n> >\n> > The immediate cause is pg_recvlogical doesn't send a reply before\n> > sleeping. 
Currently it sends replies every 10 seconds intervals.\n> >\n>\n> Yeah, but one can use -s option to send it at lesser intervals.\n>\n\nThat option can impact pg_recvlogical, it will not impact the server\nsending keepalives too frequently.\nBy default the status interval is 10 secs, still we are getting 500\nkeepalives a second from the server.\n\n\n>\n> > So the attached first patch stops the flood.\n> >\n>\n> I am not sure sending feedback every time before sleep is a good idea,\n> this might lead to unnecessarily sending more messages. Can we try by\n> using one-second interval with -s option to see how it behaves? As a\n> matter of comparison the similar logic in workers.c uses\n> wal_receiver_timeout to send such an update message rather than\n> sending it every time before sleep.\n>\n> > That said, I don't think it is not intended that logical walsender\n> > sends keep-alive packets with such a high frequency. It happens\n> > because walsender actually doesn't wait at all because it waits on\n> > WL_SOCKET_WRITEABLE because the keep-alive packet inserted just before\n> > is always pending.\n> >\n> > So as the attached second, we should try to flush out the keep-alive\n> > packets if possible before checking pg_is_send_pending().\n> >\n>\n> /* Send keepalive if the time has come */\n> WalSndKeepaliveIfNecessary();\n>\n> + /* We may have queued a keep alive packet. flush it before sleeping. 
*/\n> + pq_flush_if_writable();\n>\n> We already call pq_flush_if_writable() from WalSndKeepaliveIfNecessary\n> after sending the keep-alive message, so not sure how this helps?\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n\n\n-- \n-- \n*Abbas*\nSenior Architect\n\n\nPh: 92.334.5100153\nSkype ID: gabbasb\nedbpostgres.com\n\n*Follow us on Twitter*\n@EnterpriseDB", "msg_date": "Mon, 7 Jun 2021 15:26:05 +0500", "msg_from": "Abbas Butt <abbas.butt@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "At Mon, 7 Jun 2021 15:26:05 +0500, Abbas Butt <abbas.butt@enterprisedb.com> wrote in \n> On Mon, Jun 7, 2021 at 3:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > The immediate cause is pg_recvlogical doesn't send a reply before\n> > > sleeping. Currently it sends replies every 10 seconds intervals.\n> > >\n> >\n> > Yeah, but one can use -s option to send it at lesser intervals.\n> >\n> \n> That option can impact pg_recvlogical, it will not impact the server\n> sending keepalives too frequently.\n> By default the status interval is 10 secs, still we are getting 500\n> keepalives a second from the server.\n>\n> > > So the attached first patch stops the flood.\n> > >\n> >\n> > I am not sure sending feedback every time before sleep is a good idea,\n> > this might lead to unnecessarily sending more messages. Can we try by\n> > using one-second interval with -s option to see how it behaves? As a\n> > matter of comparison the similar logic in workers.c uses\n> > wal_receiver_timeout to send such an update message rather than\n> > sending it every time before sleep.\n\nLogical walreceiver sends a feedback when walrcv_eceive() doesn't\nreceive a byte. If its' not good that pg_recvlogical does the same\nthing, do we need to improve logical walsender's behavior as well?\n\n> > > That said, I don't think it is not intended that logical walsender\n> > > sends keep-alive packets with such a high frequency.
It happens\n> > > because walsender actually doesn't wait at all because it waits on\n> > > WL_SOCKET_WRITEABLE because the keep-alive packet inserted just before\n> > > is always pending.\n> > >\n> > > So as the attached second, we should try to flush out the keep-alive\n> > > packets if possible before checking pg_is_send_pending().\n> > >\n> >\n> > /* Send keepalive if the time has come */\n> > WalSndKeepaliveIfNecessary();\n> >\n> > + /* We may have queued a keep alive packet. flush it before sleeping. */\n> > + pq_flush_if_writable();\n> >\n> > We already call pq_flush_if_writable() from WalSndKeepaliveIfNecessary\n> > after sending the keep-alive message, so not sure how this helps?\n\nNo. WalSndKeepaliveIfNecessary calls it only when walreceiver does not\nreceive a reply message for a long time. So the keepalive sent by the\ndirect call to WalSndKeepalive() from WalSndWaitForWal is not flushed\nout in most cases, which causes the flood.\n\nI rechecked all callers of WalSndKeepalive().\n\nWalSndKeepalive()\n+- *WalSndWaitForWal\n+- ProcessStandbyReplyMessage\n|+- ProcessStandbyMessage\n| +- ProcessRepliesIfAny\n| +- $WalSndWriteData\n| +- *WalSndWaitForWal\n| +- WalSndLoop\n| (calls pq_flush_if_writable() after sending the packet, but the\n| keepalive packet prevents following stream data from being sent\n| since the pending keepalive-packet causes pq_is_send_pending()\n| return (falsely) true.)\n+- WalSndDone\n +- *WalSndLoop\n+- WalSndKeepaliveIfNecessary\n (calls pq_flush_if_writable always only after calling WalSndKeepalive())\n\nThe callers prefixed by '*' above wrongly conclude that some of the data\nthey sent is still pending even when the only pending bytes are the\nkeepalive packet. Of course the keepalive packets should be sent\n*before* sleeping, but the unsent keepalive packet prevents the callers\nfrom sleeping, so they immediately retry sending another keepalive\npacket and repeat that until the condition changes.
(The callers\nprefixed by \"$\" also enter a sleep before flushing but don't repeat\nsending keepalives.)\n\nThe caller is forgetting that a keepalive packet may be queued but not\nflushed after calling WalSndKeepalive. So a more sensible fix would be\ncalling pq_flush_if_writable in WalSndKeepalive?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 08 Jun 2021 10:05:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "At Tue, 08 Jun 2021 10:05:36 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Mon, 7 Jun 2021 15:26:05 +0500, Abbas Butt <abbas.butt@enterprisedb.com> wrote in \n> > On Mon, Jun 7, 2021 at 3:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > I am not sure sending feedback every time before sleep is a good idea,\n> > > this might lead to unnecessarily sending more messages. Can we try by\n> > > using one-second interval with -s option to see how it behaves? As a\n> > > matter of comparison the similar logic in workers.c uses\n> > > wal_receiver_timeout to send such an update message rather than\n> > > sending it every time before sleep.\n> \n> Logical walreceiver sends a feedback when walrcv_eceive() doesn't\n> receive a byte. If its' not good that pg_recvlogical does the same\n> thing, do we need to improve logical walsender's behavior as well?\n\nFor clarity, only the change in the walsender side can stop the\nflood.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 08 Jun 2021 14:09:28 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "Hi Kyotaro,\nI have tried to test your patches.
Unfortunately even after applying the\npatches\nthe WAL Sender is still sending too frequent keepalive messages.\nIn my opinion the fix is to make sure that wal_sender_timeout/2 has passed\nbefore sending\nthe keepalive message in the code fragment I had shared earlier.\nIn other words we should replace the call to\nWalSndKeepalive(false);\nwith\nWalSndKeepaliveIfNecessary(false);\n\nDo you agree with the suggested fix?\n\nOn Tue, Jun 8, 2021 at 10:09 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Tue, 08 Jun 2021 10:05:36 +0900 (JST), Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com> wrote in\n> > At Mon, 7 Jun 2021 15:26:05 +0500, Abbas Butt <\n> abbas.butt@enterprisedb.com> wrote in\n> > > On Mon, Jun 7, 2021 at 3:13 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > > > I am not sure sending feedback every time before sleep is a good\n> idea,\n> > > > this might lead to unnecessarily sending more messages. Can we try by\n> > > > using one-second interval with -s option to see how it behaves? As a\n> > > > matter of comparison the similar logic in workers.c uses\n> > > > wal_receiver_timeout to send such an update message rather than\n> > > > sending it every time before sleep.\n> >\n> > Logical walreceiver sends a feedback when walrcv_eceive() doesn't\n> > receive a byte. If its' not good that pg_recvlogical does the same\n> > thing, do we need to improve logical walsender's behavior as well?\n>\n> For the clarity, only the change in the walsender side can stop the\n> flood.\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n\n\n-- \n-- \n*Abbas*\nSenior Architect\n\n\nPh: 92.334.5100153\nSkype ID: gabbasb\nedbpostgres.com\n\n*Follow us on Twitter*\n@EnterpriseDB", "msg_date": "Tue, 8 Jun 2021 17:21:56 +0500", "msg_from": "Abbas Butt <abbas.butt@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "Hi.\n\nOn 2021/06/08 21:21, Abbas Butt wrote:\n> Hi Kyotaro,\n> I have tried to test your patches.
Unfortunately even after applying the\n> patches\n> the WAL Sender is still sending too frequent keepalive messages.\n\nSorry for the bogus patch.  I must have seen something impossible.\n\nThe keep-alive packet is immediately flushed explicitly, so Amit is\nright: the additional pq_flush_if_writable() is not needed.\n\n> In my opinion the fix is to make sure that wal_sender_timeout/2 has passed\n> before sending\n> the keepalive message in the code fragment I had shared earlier.\n> In other words we should replace the call to\n> WalSndKeepalive(false);\n> with\n> WalSndKeepaliveIfNecessary(false);\n>\n> Do you agree with the suggested fix?\n\nI'm afraid not. The same is done just after, unconditionally.\n\nThe issue - if actually it is - we send a keep-alive packet before a\nquite short sleep.\n\nWe really want to send it if the sleep gets long but we cannot predict\nthat before entering a sleep.\n\nLet me think a little more on this..\n\nregards.\n\n\n\n\n", "msg_date": "Wed, 9 Jun 2021 11:21:55 +0900", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "At Wed, 9 Jun 2021 11:21:55 +0900, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> The issue - if actually it is - we send a keep-alive packet before a\n> quite short sleep.\n> \n> We really want to send it if the sleep gets long but we cannot predict\n> that before entering a sleep.\n> \n> Let me think a little more on this..\n\nAfter some investigation, I found out that the keepalives are sent\nalmost always after XLogSendLogical requests for the *next* record. In\nmost of the cases the record is not yet inserted at the request time\nbut inserted very soon (in 1-digit milliseconds).
It doesn't seem to be\nexpected that that happens with such a high frequency when\nXLogSendLogical is keeping up-to-date with the bleeding edge of WAL\nrecords.\n\nIt is completely unpredictable when the next record comes, so we\ncannot decide whether to send a keepalive or not at the current\ntiming.\n\nSince we want to send a keepalive when we have nothing to send for a\nwhile, it is a bit different to keep sending keepalives at some\nintervals while the loop is busy.\n\nAs a possible solution, the attached patch splits the sleep into two\npieces. If the first sleep reaches the timeout then send a keepalive\nthen sleep for the remaining time. The first timeout is quite\narbitrary but keepalive of 4Hz at maximum doesn't look so bad to me.\n\nIs it acceptable?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 09 Jun 2021 17:17:51 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Wed, Jun 9, 2021 at 1:47 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 9 Jun 2021 11:21:55 +0900, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > The issue - if actually it is - we send a keep-alive packet before a\n> > quite short sleep.\n> >\n> > We really want to send it if the sleep gets long but we cannot predict\n> > that before entering a sleep.\n> >\n> > Let me think a little more on this..\n>\n> After some investigation, I find out that the keepalives are sent\n> almost always after XLogSendLogical requests for the *next* record.\n>\n\nDoes these keepalive messages are sent at the same frequency even for\nsubscribers? Basically, I wanted to check if we have logical\nreplication set up between 2 nodes then do we send these keep-alive\nmessages flood? If not, then why is it different in the case of\npg_recvlogical? 
Is it possible that the write/flush location is not\nupdated at the pace at which we expect? Please see commit 41d5f8ad73\nwhich seems to be talking about a similar problem.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 9 Jun 2021 14:59:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "Hi,\n\nOn Wed, Jun 9, 2021 at 2:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Wed, Jun 9, 2021 at 1:47 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Wed, 9 Jun 2021 11:21:55 +0900, Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com> wrote in\n> > > The issue - if actually it is - we send a keep-alive packet before a\n> > > quite short sleep.\n> > >\n> > > We really want to send it if the sleep gets long but we cannot predict\n> > > that before entering a sleep.\n> > >\n> > > Let me think a little more on this..\n> >\n> > After some investigation, I find out that the keepalives are sent\n> > almost always after XLogSendLogical requests for the *next* record.\n> >\n>\n> Does these keepalive messages are sent at the same frequency even for\n> subscribers?\n\n\nYes, I have tested it with one publisher and one subscriber.\nThe moment I start pgbench session I can see keepalive messages sent and\nreplied by the subscriber with same frequency.\n\n\n> Basically, I wanted to check if we have logical\n> replication set up between 2 nodes then do we send these keep-alive\n> messages flood?\n\n\nYes we do.\n\n\n> If not, then why is it different in the case of\n> pg_recvlogical?\n\n\nNothing, the WAL sender behaviour is same in both cases.\n\n\n> Is it possible that the write/flush location is not\n> updated at the pace at which we expect?\n\n\nWell, it is async replication. 
The receiver can choose to update LSNs at\nits own will, say after 10 mins interval.\nIt should only impact the size of WAL retained by the server.\n\nPlease see commit 41d5f8ad73\n> which seems to be talking about a similar problem.\n>\n\nThat commit does not address this problem.\n\n\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n\n\n-- \n-- \n*Abbas*\nSenior Architect\n\n\nPh: 92.334.5100153\nSkype ID: gabbasb\nedbpostgres.com\n\n*Follow us on Twitter*\n@EnterpriseDB", "msg_date": "Wed, 9 Jun 2021 17:32:25 +0500", "msg_from": "Abbas Butt <abbas.butt@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "At Wed, 9 Jun 2021 17:32:25 +0500, Abbas Butt <abbas.butt@enterprisedb.com> wrote in \n> \n> On Wed, Jun 9, 2021 at 2:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Does these keepalive messages are sent at the same frequency even for\n> > subscribers?\n> \n> Yes, I have tested it with one publisher and one subscriber.\n> The moment I start pgbench session I can see keepalive messages sent and\n> replied by the subscriber with same frequency.\n> \n> > Basically, I wanted to check if we have logical\n> > replication set up between 2 nodes then do we send these keep-alive\n> > messages flood?\n> \n> Yes we do.\n> \n> > If not, then why is it different in the case of\n> > pg_recvlogical?\n> \n> Nothing, the WAL sender behaviour is same in both cases.\n> \n> \n> > Is it possible that the write/flush location is not\n> > updated at the pace at which we expect?\n\nYes. MyWalSnd->flush/write are updated far frequently but still\nMyWalSnd->write is behind sentPtr by from thousands of bytes up to\nless than 1 block (1block = 8192 bytes).
(Flush lags are larger than\nwrite lags, of course.)\n\nI counted how many times keepalives are sent for each request length\nto logical_read_xlog_page() for 10 seconds pgbench run and replicating\npgbench_history, using the attached change.\n\nsize: sent /notsent/ calls: write lag/ flush lag\n 8: 3 / 6 / 3: 5960 / 348962\n 16: 1 / 2 / 1: 520 / 201096\n 24: 2425 / 4852 / 2461: 5259 / 293569\n 98: 2 / 0 / 54: 5 / 1050\n 187: 2 / 0 / 94: 0 / 1060\n4432: 1 / 0 / 1: 410473592 / 410473592\n7617: 2 / 0 / 27: 317 / 17133\n8280: 1 / 2 / 4: 390 / 390\n\nWhere,\n\nsize is requested data length to logical_read_xlog_page()\n\nsent is the number of keepalives sent in the loop in WalSndWaitForWal\n\nnotsent is the number of runs of the loop in WalSndWaitForWal without\n\t\tsending a keepalive\n\ncalls is the number of calls to WalSndWaitForWal\n\nwrite lag is the bytes MyWalSnd->write is behind from sentPtr at the\n first run of the loop per call to logical_read_xlog_page.\n\nflush lag is the the same to the above for MyWalSnd->flush.\n\nMaybe the line of size=4432 is the first time fetch of WAL.\n\nSo this numbers show that WalSndWaitForWal is called almost only at\nstarting to fetching a record, and in that case the function runs the\nloop three times and sends one keepalive by average.\n\n> Well, it is async replication. 
The receiver can choose to update LSNs at\n> its own will, say after 10 mins interval.\n> It should only impact the size of WAL retained by the server.\n> \n> Please see commit 41d5f8ad73\n> > which seems to be talking about a similar problem.\n> >\n> \n> That commit does not address this problem.\n\nYeah, at least for me, WalSndWaitForWal send a keepalive per one call.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c\nindex 42738eb940..ee78116e79 100644\n--- a/src/backend/access/transam/xlogreader.c\n+++ b/src/backend/access/transam/xlogreader.c\n@@ -571,6 +571,7 @@ err:\n * We fetch the page from a reader-local cache if we know we have the required\n * data and if there hasn't been any error since caching the data.\n */\n+int hogestate = -1;\n static int\n ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)\n {\n@@ -605,6 +606,7 @@ ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)\n \t{\n \t\tXLogRecPtr\ttargetSegmentPtr = pageptr - targetPageOff;\n \n+\t\thogestate = pageptr + XLOG_BLCKSZ - state->currRecPtr;\n \t\treadLen = state->routine.page_read(state, targetSegmentPtr, XLOG_BLCKSZ,\n \t\t\t\t\t\t\t\t\t\t state->currRecPtr,\n \t\t\t\t\t\t\t\t\t\t state->readBuf);\n@@ -623,6 +625,7 @@ ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)\n \t * First, read the requested data length, but at least a short page header\n \t * so that we can validate it.\n \t */\n+\thogestate = pageptr + Max(reqLen, SizeOfXLogShortPHD) - state->currRecPtr;\n \treadLen = state->routine.page_read(state, pageptr, Max(reqLen, SizeOfXLogShortPHD),\n \t\t\t\t\t\t\t\t\t state->currRecPtr,\n \t\t\t\t\t\t\t\t\t state->readBuf);\n@@ -642,6 +645,7 @@ ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)\n \t/* still not enough */\n \tif (readLen < XLogPageHeaderSize(hdr))\n \t{\n+\t\thogestate 
= pageptr + XLogPageHeaderSize(hdr) - state->currRecPtr;\n \t\treadLen = state->routine.page_read(state, pageptr, XLogPageHeaderSize(hdr),\n \t\t\t\t\t\t\t\t\t\t state->currRecPtr,\n \t\t\t\t\t\t\t\t\t\t state->readBuf);\n@@ -649,6 +653,7 @@ ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)\n \t\t\tgoto err;\n \t}\n \n+\thogestate = -1;\n \t/*\n \t * Now that we know we have the full header, validate it.\n \t */\ndiff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c\nindex 109c723f4e..0de10c4a31 100644\n--- a/src/backend/replication/walsender.c\n+++ b/src/backend/replication/walsender.c\n@@ -1363,17 +1363,45 @@ WalSndUpdateProgress(LogicalDecodingContext *ctx, XLogRecPtr lsn, TransactionId\n * if we detect a shutdown request (either from postmaster or client)\n * we will return early, so caller must always check.\n */\n+unsigned long counts[32768][3] = {0};\n+unsigned long lagw[32768] = {0};\n+unsigned long lagf[32768] = {0};\n+\n+void\n+PrintCounts(void)\n+{\n+\tint i = 0;\n+\tfor (i = 0 ; i < 32768 ; i++)\n+\t{\n+\t\tif (counts[i][0] + counts[i][1] + counts[i][2] > 0)\n+\t\t{\n+\t\t\tunsigned long wl = 0, fl = 0;\n+\t\t\tif (counts[i][1] > 0)\n+\t\t\t{\n+\t\t\t\twl = lagw[i] / counts[i][0];\n+\t\t\t\tfl = lagf[i] / counts[i][0];\n+\t\t\t\n+\t\t\t\tereport(LOG, (errmsg (\"[%5d]: %5lu / %5lu / %5lu: %5lu %5lu\",\n+\t\t\t\t\t\t\t\t\t i, counts[i][1], counts[i][2], counts[i][0], wl, fl), errhidestmt(true)));\n+\t\t\t}\n+\t\t}\n+\t}\n+}\n+\n static XLogRecPtr\n WalSndWaitForWal(XLogRecPtr loc)\n {\n \tint\t\t\twakeEvents;\n \tstatic XLogRecPtr RecentFlushPtr = InvalidXLogRecPtr;\n+\textern int hogestate;\n+\tbool\t\tlagtaken = false;\n \n \t/*\n \t * Fast path to avoid acquiring the spinlock in case we already know we\n \t * have enough WAL available. 
This is particularly interesting if we're\n \t * far behind.\n \t */\n+\tcounts[hogestate][0]++;\n \tif (RecentFlushPtr != InvalidXLogRecPtr &&\n \t\tloc <= RecentFlushPtr)\n \t\treturn RecentFlushPtr;\n@@ -1439,7 +1467,39 @@ WalSndWaitForWal(XLogRecPtr loc)\n \t\tif (MyWalSnd->flush < sentPtr &&\n \t\t\tMyWalSnd->write < sentPtr &&\n \t\t\t!waiting_for_ping_response)\n+\t\t{\n+\t\t\tif (hogestate >= 0)\n+\t\t\t{\n+\t\t\t\tcounts[hogestate][1]++;\n+\t\t\t\tif (!lagtaken)\n+\t\t\t\t{\n+\t\t\t\t\tlagf[hogestate] += sentPtr - MyWalSnd->flush;\n+\t\t\t\t\tlagw[hogestate] += sentPtr - MyWalSnd->write;\n+\t\t\t\t\tlagtaken = true;\n+\t\t\t\t}\n+\t\t\t}\n+//\t\t\tereport(LOG, (errmsg (\"KA[%lu/%lu/%lu]: %X/%X %X/%X %X/%X %d: %ld\",\n+//\t\t\t\t\t\t\t\t ka, na, ka + na,\n+//\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(MyWalSnd->flush),\n+//\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(MyWalSnd->write),\n+//\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(sentPtr),\n+//\t\t\t\t\t\t\t\t waiting_for_ping_response,\n+//\t\t\t\t\t\t\t\t sentPtr - MyWalSnd->write)));\n \t\t\tWalSndKeepalive(false);\n+\t\t}\n+\t\telse\n+\t\t{\n+\t\t\tif (hogestate >= 0)\n+\t\t\t\tcounts[hogestate][2]++;\n+\n+//\t\t\tereport(LOG, (errmsg (\"kap[%lu/%lu/%lu]: %X/%X %X/%X %X/%X %d: %ld\",\n+//\t\t\t\t\t\t\t\t ka, na, ka + na,\n+//\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(MyWalSnd->flush),\n+//\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(MyWalSnd->write),\n+//\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(sentPtr),\n+//\t\t\t\t\t\t\t\t waiting_for_ping_response,\n+//\t\t\t\t\t\t\t\t sentPtr - MyWalSnd->write)));\n+\t\t}\n \n \t\t/* check whether we're done */\n \t\tif (loc <= RecentFlushPtr)", "msg_date": "Thu, 10 Jun 2021 15:00:16 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "At Thu, 10 Jun 2021 15:00:16 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 9 Jun 2021 17:32:25 +0500, Abbas Butt <abbas.butt@enterprisedb.com> 
wrote in \n> > \n> > On Wed, Jun 9, 2021 at 2:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Is it possible that the write/flush location is not\n> > > updated at the pace at which we expect?\n> \n> Yes. MyWalSnd->flush/write are updated far frequently but still\n> MyWalSnd->write is behind sentPtr by from thousands of bytes up to\n> less than 1 block (1block = 8192 bytes). (Flush lags are larger than\n> write lags, of course.)\n\nFor more clarity, I changed the previous patch a bit and retook numbers.\n\nTotal records: 19476\n 8: 2 / 4 / 2: 4648 / 302472\n 16: 5 / 10 / 5: 5427 / 139872\n 24: 3006 / 6015 / 3028: 4739 / 267215\n187: 2 / 0 / 50: 1 / 398\n\nWhile a 10 seconds run of pgbench, it walsender reads 19476 records\nand calls logical_read_xlog_page() 3028 times, and the mean of write\nlag is 4739 bytes and flush lag is 267215 bytes (really?), as the\nresult most of the record fetch causes a keep alive. (The WAL contains\nmany FPIs).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c\nindex 42738eb940..ee78116e79 100644\n--- a/src/backend/access/transam/xlogreader.c\n+++ b/src/backend/access/transam/xlogreader.c\n@@ -571,6 +571,7 @@ err:\n * We fetch the page from a reader-local cache if we know we have the required\n * data and if there hasn't been any error since caching the data.\n */\n+int hogestate = -1;\n static int\n ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)\n {\n@@ -605,6 +606,7 @@ ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)\n \t{\n \t\tXLogRecPtr\ttargetSegmentPtr = pageptr - targetPageOff;\n \n+\t\thogestate = pageptr + XLOG_BLCKSZ - state->currRecPtr;\n \t\treadLen = state->routine.page_read(state, targetSegmentPtr, XLOG_BLCKSZ,\n \t\t\t\t\t\t\t\t\t\t state->currRecPtr,\n \t\t\t\t\t\t\t\t\t\t state->readBuf);\n@@ -623,6 +625,7 @@ ReadPageInternal(XLogReaderState 
*state, XLogRecPtr pageptr, int reqLen)\n \t * First, read the requested data length, but at least a short page header\n \t * so that we can validate it.\n \t */\n+\thogestate = pageptr + Max(reqLen, SizeOfXLogShortPHD) - state->currRecPtr;\n \treadLen = state->routine.page_read(state, pageptr, Max(reqLen, SizeOfXLogShortPHD),\n \t\t\t\t\t\t\t\t\t state->currRecPtr,\n \t\t\t\t\t\t\t\t\t state->readBuf);\n@@ -642,6 +645,7 @@ ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)\n \t/* still not enough */\n \tif (readLen < XLogPageHeaderSize(hdr))\n \t{\n+\t\thogestate = pageptr + XLogPageHeaderSize(hdr) - state->currRecPtr;\n \t\treadLen = state->routine.page_read(state, pageptr, XLogPageHeaderSize(hdr),\n \t\t\t\t\t\t\t\t\t\t state->currRecPtr,\n \t\t\t\t\t\t\t\t\t\t state->readBuf);\n@@ -649,6 +653,7 @@ ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)\n \t\t\tgoto err;\n \t}\n \n+\thogestate = -1;\n \t/*\n \t * Now that we know we have the full header, validate it.\n \t */\ndiff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c\nindex 109c723f4e..62f5f09fee 100644\n--- a/src/backend/replication/walsender.c\n+++ b/src/backend/replication/walsender.c\n@@ -1363,17 +1363,49 @@ WalSndUpdateProgress(LogicalDecodingContext *ctx, XLogRecPtr lsn, TransactionId\n * if we detect a shutdown request (either from postmaster or client)\n * we will return early, so caller must always check.\n */\n+unsigned long counts[32768][3] = {0};\n+unsigned long lagw[32768] = {0};\n+unsigned long lagf[32768] = {0};\n+unsigned long nrec = 0;\n+void\n+PrintCounts(void)\n+{\n+\tint i = 0;\n+\tereport(LOG, (errmsg (\"Total records: %lu\", nrec), errhidestmt(true)));\n+\tnrec = 0;\n+\n+\tfor (i = 0 ; i < 32768 ; i++)\n+\t{\n+\t\tif (counts[i][0] + counts[i][1] + counts[i][2] > 0)\n+\t\t{\n+\t\t\tunsigned long wl = 0, fl = 0;\n+\t\t\tif (counts[i][1] > 0)\n+\t\t\t{\n+\t\t\t\twl = lagw[i] / counts[i][0];\n+\t\t\t\tfl = 
lagf[i] / counts[i][0];\n+\t\t\t\n+\t\t\t\tereport(LOG, (errmsg (\"%5d: %5lu / %5lu / %5lu: %7lu / %7lu\",\n+\t\t\t\t\t\t\t\t\t i, counts[i][1], counts[i][2], counts[i][0], wl, fl), errhidestmt(true)));\n+\t\t\t}\n+\t\t\tcounts[i][0] = counts[i][1] = counts[i][2] = lagw[i] = lagf[i] = 0;\n+\t\t}\n+\t}\n+}\n+\n static XLogRecPtr\n WalSndWaitForWal(XLogRecPtr loc)\n {\n \tint\t\t\twakeEvents;\n \tstatic XLogRecPtr RecentFlushPtr = InvalidXLogRecPtr;\n+\textern int hogestate;\n+\tbool\t\tlagtaken = false;\n \n \t/*\n \t * Fast path to avoid acquiring the spinlock in case we already know we\n \t * have enough WAL available. This is particularly interesting if we're\n \t * far behind.\n \t */\n+\tcounts[hogestate][0]++;\n \tif (RecentFlushPtr != InvalidXLogRecPtr &&\n \t\tloc <= RecentFlushPtr)\n \t\treturn RecentFlushPtr;\n@@ -1439,7 +1471,39 @@ WalSndWaitForWal(XLogRecPtr loc)\n \t\tif (MyWalSnd->flush < sentPtr &&\n \t\t\tMyWalSnd->write < sentPtr &&\n \t\t\t!waiting_for_ping_response)\n+\t\t{\n+\t\t\tif (hogestate >= 0)\n+\t\t\t{\n+\t\t\t\tcounts[hogestate][1]++;\n+\t\t\t\tif (!lagtaken)\n+\t\t\t\t{\n+\t\t\t\t\tlagf[hogestate] += sentPtr - MyWalSnd->flush;\n+\t\t\t\t\tlagw[hogestate] += sentPtr - MyWalSnd->write;\n+\t\t\t\t\tlagtaken = true;\n+\t\t\t\t}\n+\t\t\t}\n+//\t\t\tereport(LOG, (errmsg (\"KA[%lu/%lu/%lu]: %X/%X %X/%X %X/%X %d: %ld\",\n+//\t\t\t\t\t\t\t\t ka, na, ka + na,\n+//\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(MyWalSnd->flush),\n+//\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(MyWalSnd->write),\n+//\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(sentPtr),\n+//\t\t\t\t\t\t\t\t waiting_for_ping_response,\n+//\t\t\t\t\t\t\t\t sentPtr - MyWalSnd->write)));\n \t\t\tWalSndKeepalive(false);\n+\t\t}\n+\t\telse\n+\t\t{\n+\t\t\tif (hogestate >= 0)\n+\t\t\t\tcounts[hogestate][2]++;\n+\n+//\t\t\tereport(LOG, (errmsg (\"kap[%lu/%lu/%lu]: %X/%X %X/%X %X/%X %d: %ld\",\n+//\t\t\t\t\t\t\t\t ka, na, ka + na,\n+//\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(MyWalSnd->flush),\n+//\t\t\t\t\t\t\t\t 
LSN_FORMAT_ARGS(MyWalSnd->write),\n+//\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(sentPtr),\n+//\t\t\t\t\t\t\t\t waiting_for_ping_response,\n+//\t\t\t\t\t\t\t\t sentPtr - MyWalSnd->write)));\n+\t\t}\n \n \t\t/* check whether we're done */\n \t\tif (loc <= RecentFlushPtr)\n@@ -2843,6 +2907,7 @@ XLogSendLogical(void)\n {\n \tXLogRecord *record;\n \tchar\t *errm;\n+\textern unsigned long nrec;\n \n \t/*\n \t * We'll use the current flush point to determine whether we've caught up.\n@@ -2860,6 +2925,7 @@ XLogSendLogical(void)\n \t */\n \tWalSndCaughtUp = false;\n \n+\tnrec++;\n \trecord = XLogReadRecord(logical_decoding_ctx->reader, &errm);\n \n \t/* xlog record was invalid */", "msg_date": "Thu, 10 Jun 2021 15:12:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Thu, Jun 10, 2021 at 11:42 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 10 Jun 2021 15:00:16 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > At Wed, 9 Jun 2021 17:32:25 +0500, Abbas Butt <abbas.butt@enterprisedb.com> wrote in\n> > >\n> > > On Wed, Jun 9, 2021 at 2:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > Is it possible that the write/flush location is not\n> > > > updated at the pace at which we expect?\n> >\n> > Yes. MyWalSnd->flush/write are updated far frequently but still\n> > MyWalSnd->write is behind sentPtr by from thousands of bytes up to\n> > less than 1 block (1block = 8192 bytes). 
(Flush lags are larger than\n> > write lags, of course.)\n>\n> For more clarity, I changed the previous patch a bit and retook numbers.\n>\n> Total records: 19476\n> 8: 2 / 4 / 2: 4648 / 302472\n> 16: 5 / 10 / 5: 5427 / 139872\n> 24: 3006 / 6015 / 3028: 4739 / 267215\n> 187: 2 / 0 / 50: 1 / 398\n>\n> While a 10 seconds run of pgbench, it walsender reads 19476 records\n> and calls logical_read_xlog_page() 3028 times, and the mean of write\n> lag is 4739 bytes and flush lag is 267215 bytes (really?), as the\n> result most of the record fetch causes a keep alive. (The WAL contains\n> many FPIs).\n>\n\nGood analysis. I think this analysis has shown that walsender is\nsending messages at top speed as soon as they are generated. So, I am\nwondering why there is any need to wait/sleep in such a workload. One\npossibility that occurred to me RecentFlushPtr is not updated and or\nwe are not checking it aggressively. To investigate on that lines, can\nyou check the behavior with the attached patch? This is just a quick\nhack patch to test whether we need to really wait for WAL a bit\naggressively.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 10 Jun 2021 12:18:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "At Thu, 10 Jun 2021 12:18:00 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> Good analysis. I think this analysis has shown that walsender is\n> sending messages at top speed as soon as they are generated. So, I am\n> wondering why there is any need to wait/sleep in such a workload. One\n> possibility that occurred to me RecentFlushPtr is not updated and or\n> we are not checking it aggressively. To investigate on that lines, can\n> you check the behavior with the attached patch? 
This is just a quick\n> hack patch to test whether we need to really wait for WAL a bit\n> aggressively.\n\nYeah, anyway the comment for the caller site of WalSndKeepalive tells\nthat exiting out of the function *after* there is somewhat wrong.\n\n> * possibly are waiting for a later location. So, before sleeping, we\n> * send a ping containing the flush location. If the receiver is\n\nBut nothing changed by moving the keepalive check to after the exit\ncheck. (loc <= RecentFlushPtr).\n\nAnd the patch also doesn't change the situation so much. The average\nnumber of loops is reduced from 3 to 2 per call but the ratio between\ntotal records and keepalives doesn't change.\n\nprevious: A=#total-rec = 19476, B=#keepalive=3006, B/A = 0.154\nthis time: A=#total-rec = 13208, B=#keepalive=1988, B/A = 0.151\n\nTotal records: 13208\nreqsz: #sent/ #!sent/ #call: wr lag / fl lag\n 8: 4 / 4 / 4: 6448 / 268148\n 16: 1 / 1 / 1: 8688 / 387320\n 24: 1988 / 1987 / 1999: 6357 / 226163\n 195: 1 / 0 / 20: 408 / 1647\n7477: 2 / 0 / 244: 68 / 847\n8225: 1 / 1 / 1: 7208 / 7208\n\nSo I checked how many bytes RecentFlushPtr is behind requested loc if\nit is not advanced enough.\n\nTotal records: 15128\nreqsz: #sent/ #!sent/ #call: wr lag / fl lag / RecentFlushPtr lag\n 8: 2 / 2 / 2: 520 / 60640 / 8\n 16: 1 / 1 / 1: 8664 / 89336 / 16\n 24: 2290 / 2274 / 2302: 5677 / 230583 / 23\n 187: 1 / 0 / 40: 1 / 6118 / 1\n 7577: 1 / 0 / 69: 120 / 3733 / 65\n 8177: 1 / 1 / 1: 8288 / 8288 / 2673\n\nSo it's not a matter of RecentFlushPtr check. 
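Condensed into a standalone form, the trigger for these keepalives is just the pre-sleep check in WalSndWaitForWal(). The sketch below is a simplified, self-contained rendition (WalSndProgress is a hypothetical stand-in for the MyWalSnd fields involved, not the real struct):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

/* Hypothetical stand-in for the receiver-confirmed positions that the
 * real walsender reads from MyWalSnd under its spinlock. */
typedef struct
{
    XLogRecPtr write;           /* last write position reported back */
    XLogRecPtr flush;           /* last flush position reported back */
} WalSndProgress;

/*
 * The condition under which a keepalive is sent before sleeping: the
 * receiver has confirmed neither write nor flush up to sentPtr, and no
 * ping is already outstanding.
 */
static bool
keepalive_needed(const WalSndProgress *p, XLogRecPtr sentPtr,
                 bool waiting_for_ping_response)
{
    return p->flush < sentPtr &&
           p->write < sentPtr &&
           !waiting_for_ping_response;
}
```

Because the reported write position trails sentPtr by up to a block on almost every page read here, this condition evaluates true at the top of nearly every wait, which matches the keepalive-per-record behavior in the numbers above.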
(Almost) Always when\nWalSndWakeupRequest feels to need to send a keepalive, the function is\ncalled before the record begins to be written.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 11 Jun 2021 10:37:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Fri, Jun 11, 2021 at 7:07 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 10 Jun 2021 12:18:00 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > Good analysis. I think this analysis has shown that walsender is\n> > sending messages at top speed as soon as they are generated. So, I am\n> > wondering why there is any need to wait/sleep in such a workload. One\n> > possibility that occurred to me RecentFlushPtr is not updated and or\n> > we are not checking it aggressively. To investigate on that lines, can\n> > you check the behavior with the attached patch? This is just a quick\n> > hack patch to test whether we need to really wait for WAL a bit\n> > aggressively.\n>\n> Yeah, anyway the comment for the caller site of WalSndKeepalive tells\n> that exiting out of the function *after* there is somewhat wrong.\n>\n> > * possibly are waiting for a later location. So, before sleeping, we\n> > * send a ping containing the flush location. If the receiver is\n>\n> But I nothing changed by moving the keepalive check to after the exit\n> check. (loc <= RecentFlushPtr).\n>\n> And the patch also doesn't change the situation so much. 
The average\n> number of loops is reduced from 3 to 2 per call but the ratio between\n> total records and keepalives doesn't change.\n>\n> previsous: A=#total-rec = 19476, B=#keepalive=3006, B/A = 0.154\n> this time: A=#total-rec = 13208, B=#keepalive=1988, B/A = 0.151\n>\n> Total records: 13208\n> reqsz: #sent/ #!sent/ #call: wr lag / fl lag\n> 8: 4 / 4 / 4: 6448 / 268148\n> 16: 1 / 1 / 1: 8688 / 387320\n> 24: 1988 / 1987 / 1999: 6357 / 226163\n> 195: 1 / 0 / 20: 408 / 1647\n> 7477: 2 / 0 / 244: 68 / 847\n> 8225: 1 / 1 / 1: 7208 / 7208\n>\n> So I checked how many bytes RecentFlushPtr is behind requested loc if\n> it is not advanced enough.\n>\n> Total records: 15128\n> reqsz: #sent/ #!sent/ #call: wr lag / fl lag / RecentFlushPtr lag\n> 8: 2 / 2 / 2: 520 / 60640 / 8\n> 16: 1 / 1 / 1: 8664 / 89336 / 16\n> 24: 2290 / 2274 / 2302: 5677 / 230583 / 23\n> 187: 1 / 0 / 40: 1 / 6118 / 1\n> 7577: 1 / 0 / 69: 120 / 3733 / 65\n> 8177: 1 / 1 / 1: 8288 / 8288 / 2673\n>\n\nDoes this data indicate that when the request_size is 187 or 7577,\neven though we have called WalSndWaitForWal() 40 and 69 times\nrespectively but keepalive is sent just once? Why such a behavior\nshould depend upon request size?\n\n> So it's not a matter of RecentFlushPtr check. (Almost) Always when\n> WalSndWakeupRequest feels to need to send a keepalive, the function is\n> called before the record begins to be written.\n>\n\nI think we always wake up walsender after we have flushed the WAL via\nWalSndWakeupProcessRequests(). I think here the reason why we are\nseeing keepalives is that we always send it before sleeping. So, it\nseems each time we try to read a new page, we call WalSndWaitForWal\nwhich sends at least one keepalive message. 
I am not sure what is an\nappropriate way to reduce the frequency of these keepalive messages.\nAndres might have some ideas?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 12 Jun 2021 17:21:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "Hi.\n\nBy using Kyotaro's \"counting\" patch I was able to reproduce very\nsimilar results to what he had earlier posted [1].\n\nAFAIK I have the same test scenario that he was using.\n\nTest setup:\n- using async pub/sub\n- subscription is for the pgbench_history table\n- pgbench is run for 10 seconds\n- config for all the wal_sender/receiver timeout GUCs are just default values\n\n\nResults (HEAD + Kyotaro counting patch)\n=======================================\n\n[postgres@CentOS7-x64 ~]$ 2021-08-10 16:36:23.733 AEST [32436] LOG:\nTotal records: 18183\n2021-08-10 16:36:23.733 AEST [32436] LOG: 8: 2 / 0 /\n1: 440616 / 580320\n2021-08-10 16:36:23.733 AEST [32436] LOG: 16: 4 / 8 /\n4: 4524 / 288688\n2021-08-10 16:36:23.733 AEST [32436] LOG: 24: 2916 / 5151 /\n2756: 31227 / 323190\n2021-08-10 16:36:23.733 AEST [32436] LOG: 187: 2 / 0 /\n51: 157 / 10629\n2021-08-10 16:36:23.733 AEST [32436] LOG: 2960: 1 / 0 /\n1: 49656944 / 49656944\n2021-08-10 16:36:23.733 AEST [32436] LOG: 7537: 2 / 0 /\n36: 231 / 7028\n2021-08-10 16:36:23.733 AEST [32436] LOG: 7577: 1 / 2 /\n78: 106 / 106\n2021-08-10 16:36:23.733 AEST [32436] LOG: 8280: 1 / 2 /\n3: 88 / 88\n\n//////\n\nThat base data is showing there are similar numbers of keepalives sent\nas there are calls made to WalSndWaitForWal. 
IIUC it means that mostly\nthe loop is sending the special keepalives on the *first* iteration,\nbut by the time of the *second* iteration the ProcessRepliesIfAny()\nwill have some status already received, and so mostly sending another\nkeepalive will be deemed unnecessary.\n\nBased on this, our idea was to simply skip sending the\nWalSndKeepalive(false) for the FIRST iteration of the loop only! PSA\nthe patch 0002 which does this skip.\n\nWith this skip patch (v1-0002) applied the same pgbench tests were run\nagain. The results look like below.\n\nResults (HEAD + Kyotaro patch + Skip-first keepalive patch)\n===========================================================\n\nRUN #1\n------\n[postgres@CentOS7-x64 ~]$ 2021-08-11 16:32:59.827 AEST [20339] LOG:\nTotal records: 19367\n2021-08-11 16:32:59.827 AEST [20339] LOG: 24: 10 / 9232 /\n3098: 19 / 440\n2021-08-11 16:32:59.827 AEST [20339] LOG: 102: 1 / 1 /\n32: 257 / 16828\n2021-08-11 16:32:59.827 AEST [20339] LOG: 187: 1 / 1 /\n52: 155 / 9541\n\nRUN #2\n------\n[postgres@CentOS7-x64 ~]$ 2021-08-11 16:36:03.983 AEST [25513] LOG:\nTotal records: 17815\n2021-08-11 16:36:03.983 AEST [25513] LOG: 24: 73 / 8683 /\n2958: 1647 / 3290\n2021-08-11 16:36:03.983 AEST [25513] LOG: 8280: 1 / 1 /\n3: 88 / 88\n\nRUN #3\n------\n[postgres@CentOS7-x64 ~]$ 2021-08-11 16:39:27.655 AEST [31061] LOG:\nTotal records: 19906\n2021-08-11 16:39:27.655 AEST [31061] LOG: 24: 18 / 8546 /\n2890: 61 / 1530\n2021-08-11 16:39:27.655 AEST [31061] LOG: 83: 1 / 3 /\n1: 8664 / 8664\n\n~~\n\nThis data shows the special keepalives are now greatly reduced from\n1000s to just 10s.\n\nThoughts?\n\n------\n[1] https://www.postgresql.org/message-id/20210610.150016.1709823354377067679.horikyota.ntt%40gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 12 Aug 2021 12:32:52 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "FYI - 
Here are some more counter results with/without the skip patch\n[1] applied.\n\nThis is the same test setup as before except now using *synchronous* pub/sub.\n\n//////////\n\nTest setup:\n- using synchronous pub/sub\n- subscription is for the pgbench_history table\n- pgbench is run for 10 seconds\n- config for all the wal_sender/receiver timeout GUCs are just default values\n\nWITHOUT the skip-first patch applied\n=====================================\n\nRUN #1\n------\nLOG: Total records: 310\nLOG: 24: 49 / 131 / 49: 8403 / 9270\nLOG: 944: 1 / 0 / 1: 159693904 / 159693904\nLOG: 8280: 1 / 2 / 2: 480 / 480\n\nRUN #2\n------\nLOG: Total records: 275\nLOG: 24: 45 / 129 / 46: 8580 / 8766\nLOG: 5392: 1 / 0 / 1: 160107248 / 160107248\n\nRUN #3\n------\nLOG: Total records: 330\nLOG: 24: 50 / 144 / 51: 8705 / 8705\nLOG: 3704: 1 / 0 / 1: 160510344 / 160510344\nLOG: 8280: 1 / 2 / 2: 468 / 468\n\nWITH the skip-first patch applied\n=================================\n\nRUN #1\n------\nLOG: Total records: 247\nLOG: 24: 5 / 172 / 44: 3601700 / 3601700\nLOG: 8280: 1 / 1 / 1: 1192 / 1192\n\nRUN #2\n------\nLOG: Total records: 338\nLOG: 24: 8 / 199 / 55: 1335 / 1335\nLOG: 7597: 1 / 1 / 1: 11712 / 11712\nLOG: 8280: 1 / 1 / 2: 480 / 480\n\nRUN #3\n------\nLOG: Total records: 292\nLOG: 24: 4 / 184 / 49: 719 / 719\n\n//////////\n\nAs before there is a big % reduction of keepalives after the patch,\nexcept here there was never really much of a \"flood\" in the first\nplace.\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPtyMBzweYUpb_QazVL6Uze2Yc5M5Ti2Xwee_eWM3Jrbog%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 13 Aug 2021 16:45:08 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Thu, Aug 12, 2021 at 12:33 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> This data shows the special keepalives are now greatly 
reduced from\n> 1000s to just 10s.\n>\n> Thoughts?\n>\n\nI could easily see the flood of keepalives with the test setup\ndescribed by the original problem reporter (Abbas Butt).\nI found that the\n\"v1-0002-WIP-skip-the-keepalive-on-FIRST-loop-iteration.patch\" patch\nreduced the keepalives by about 50% in this case.\nI also tried the pub/sub setup with the publication on the\npgbench_history table.\nWith this pub/sub setup, I found that the patch dramatically reduced\nthe keepalives sent, similar to that reported by Peter.\nResults (using Kyotoro’s keepalive counting patch) are below:\n\nPUB/SUB, publishing the pgbench_history table\n\n(1) without patch, 10s pgbench run:\n\n2021-09-08 15:21:56.643 AEST [63720] LOG: Total records: 47019\n2021-09-08 15:21:56.643 AEST [63720] LOG: 8: 8 / 16 /\n8: 8571 / 882048\n2021-09-08 15:21:56.643 AEST [63720] LOG: 16: 5 / 10 /\n5: 3649 / 764892\n2021-09-08 15:21:56.643 AEST [63720] LOG: 24: 6271 / 12561 /\n6331: 113989 / 886115\n2021-09-08 15:21:56.643 AEST [63720] LOG: 195: 2 / 0 /\n112: 72 / 10945\n2021-09-08 15:21:56.643 AEST [63720] LOG: 6856: 1 / 0 /\n1: 666232176 / 666232176\n2021-09-08 15:21:56.643 AEST [63720] LOG: 7477: 2 / 0 /\n298: 27 / 3303\n2021-09-08 15:21:56.643 AEST [63720] LOG: 8159: 19 / 32 /\n6073: 15 / 1869\n\n(2) with patch, 10s pgbench run\n\n2021-09-08 15:39:14.008 AEST [71431] LOG: Total records: 45858\n2021-09-08 15:39:14.008 AEST [71431] LOG: 24: 61 / 18278 /\n6168: 108034 / 115228\n2021-09-08 15:39:14.008 AEST [71431] LOG: 84: 1 / 1 /\n7: 2256 / 295230\n2021-09-08 15:39:14.008 AEST [71431] LOG: 110: 1 / 1 /\n3: 10629 / 708293\n2021-09-08 15:39:14.008 AEST [71431] LOG: 7477: 18 / 18 /\n4577: 53 / 7850\n\n\nWhere columns are:\n\nsize: sent /notsent/ calls: write lag/ flush lag\n\n- size is requested data length to logical_read_xlog_page()\n- sent is the number of keepalives sent in the loop in WalSndWaitForWal\n- notsent is the number of runs of the loop in WalSndWaitForWal\nwithout sending a 
keepalive\n- calls is the number of calls to WalSndWaitForWal\n- write lag is the bytes MyWalSnd->write is behind from sentPtr at the\nfirst run of the loop per call to logical_read_xlog_page.\n- flush lag is the same as the above, but for MyWalSnd->flush.\n\n\nHowever, the problem I found is that, with the patch applied, there is\na test failure when running “make check-world”:\n\n t/006_logical_decoding.pl ............ 4/14\n# Failed test 'pg_recvlogical acknowledged changes'\n# at t/006_logical_decoding.pl line 117.\n# got: 'BEGIN\n# table public.decoding_test: INSERT: x[integer]:5 y[text]:'5''\n# expected: ''\n# Looks like you failed 1 test of 14.\nt/006_logical_decoding.pl ............ Dubious, test returned 1 (wstat\n256, 0x100) Failed 1/14 subtests\n\n\nTo investigate this, I added some additional logging to\npg_recvlogical.c and PostgresNode.pm and re-ran\n006_logical_decoding.pl without and with the patch (logs attached).\n\nWhen the patch is NOT applied, and when pg_recvlogical is invoked by\nthe test for a 2nd time with the same \"--endpos\" LSN, it gets a\nkeepalive, detects walEnd>=endpos, and thus returns an empty record.\nThe test is expecting an empty record, so all is OK.\nWhen the patch is applied, and when pg_recvlogical is invoked by the\ntest for a 2nd time with the same \"--endpos\" LSN, it gets a WAL record\nwith THE SAME LSN (== endpos) as previously obtained by the last WAL\nrecord when it was invoked the 1st time, but the record data is\nactually the first row of some records written after endpos, that it\nwasn't meant to read.\nThis doesn't seem right to me - how can pg_recvlogical receive two\ndifferent WAL records with the same LSN?\nWith the patch applied, I was expecting pg_recvlogical to get WAL\nrecords with LSN>endpos, but this doesn't happen.\nI'm thinking that the patch must have broken walsender in some way,\npossibly by missing out on calls to ProcessStandbyReplyMessage()\nbecause the sending of some keepalives is avoided (see 
below), so\nthat the MyWalSnd flush and write location are not kept up-to-date.\nThe code comments below seem to hint about this. I don't really like\nthe way keepalives are used for this, but this seems to be the\nexisting functionality. Maybe someone else can confirm that this could\nindeed break walsender?\n\nwalsender.c\nWalSndWaitForWal()\n\n /*\n * We only send regular messages to the client for full decoded\n * transactions, but a synchronous replication and walsender shutdown\n * possibly are waiting for a later location. So, before sleeping, we\n * send a ping containing the flush location. If the receiver is\n * otherwise idle, this keepalive will trigger a reply. Processing the\n * reply will update these MyWalSnd locations.\n */\n if (!loop_first_time && /* avoid keepalive on first iteration\n*/ <--- added by the patch\n MyWalSnd->flush < sentPtr &&\n MyWalSnd->write < sentPtr &&\n !waiting_for_ping_response)\n {\n WalSndKeepalive(false);\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Tue, 14 Sep 2021 15:39:20 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "From Tuesday, September 14, 2021 1:39 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\r\n> However, the problem I found is that, with the patch applied, there is\r\n> a test failure when running “make check-world”:\r\n> \r\n> t/006_logical_decoding.pl ............ 4/14\r\n> # Failed test 'pg_recvlogical acknowledged changes'\r\n> # at t/006_logical_decoding.pl line 117.\r\n> # got: 'BEGIN\r\n> # table public.decoding_test: INSERT: x[integer]:5 y[text]:'5''\r\n> # expected: ''\r\n> # Looks like you failed 1 test of 14.\r\n> t/006_logical_decoding.pl ............ 
Dubious, test returned 1 (wstat\r\n> 256, 0x100) Failed 1/14 subtests\r\n> \r\n> \r\n\r\nAfter applying the patch,\r\nI saw the same problem and can reproduce it by the following steps:\r\n\r\n1) execute the SQLs.\r\n-----------SQL-----------\r\nCREATE TABLE decoding_test(x integer, y text);\r\nSELECT pg_create_logical_replication_slot('test_slot', 'test_decoding');\r\nINSERT INTO decoding_test(x,y) SELECT s, s::text FROM generate_series(1,4) s;\r\n\r\n-- use the lsn here to execute pg_recvlogical later\r\nSELECT lsn FROM pg_logical_slot_peek_changes('test_slot', NULL, NULL) ORDER BY lsn DESC LIMIT 1;\r\nINSERT INTO decoding_test(x,y) SELECT s, s::text FROM generate_series(5,50) s;\r\n----------------------\r\n\r\n2) Then, if I execute the following command twice:\r\n# pg_recvlogical -E lsn -d postgres -S 'test_slot' --start --no-loop -f -\r\nBEGIN 708\r\ntable public.decoding_test: INSERT: x[integer]:1 y[text]:'1'\r\ntable public.decoding_test: INSERT: x[integer]:2 y[text]:'2'\r\ntable public.decoding_test: INSERT: x[integer]:3 y[text]:'3'\r\ntable public.decoding_test: INSERT: x[integer]:4 y[text]:'4'\r\nCOMMIT 708\r\n\r\n# pg_recvlogical -E lsn -d postgres -S 'test_slot' --start --no-loop -f -\r\nBEGIN 709\r\n\r\nIt still generated ' BEGIN 709' in the second time execution.\r\nBut it will output nothing in the second time execution without the patch.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n\r\n\r\n", "msg_date": "Thu, 16 Sep 2021 00:59:37 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Logical replication keepalive flood" }, { "msg_contents": "On Thu, Sep 16, 2021 at 6:29 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> From Tuesday, September 14, 2021 1:39 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > However, the problem I found is that, with the patch applied, there is\n> > a test failure when running “make check-world”:\n> >\n> > t/006_logical_decoding.pl ............ 
4/14\n> > # Failed test 'pg_recvlogical acknowledged changes'\n> > # at t/006_logical_decoding.pl line 117.\n> > # got: 'BEGIN\n> > # table public.decoding_test: INSERT: x[integer]:5 y[text]:'5''\n> > # expected: ''\n> > # Looks like you failed 1 test of 14.\n> > t/006_logical_decoding.pl ............ Dubious, test returned 1 (wstat\n> > 256, 0x100) Failed 1/14 subtests\n> >\n> >\n>\n> After applying the patch,\n> I saw the same problem and can reproduce it by the following steps:\n>\n> 1) execute the SQLs.\n> -----------SQL-----------\n> CREATE TABLE decoding_test(x integer, y text);\n> SELECT pg_create_logical_replication_slot('test_slot', 'test_decoding');\n> INSERT INTO decoding_test(x,y) SELECT s, s::text FROM generate_series(1,4) s;\n>\n> -- use the lsn here to execute pg_recvlogical later\n> SELECT lsn FROM pg_logical_slot_peek_changes('test_slot', NULL, NULL) ORDER BY lsn DESC LIMIT 1;\n> INSERT INTO decoding_test(x,y) SELECT s, s::text FROM generate_series(5,50) s;\n> ----------------------\n>\n> 2) Then, if I execute the following command twice:\n> # pg_recvlogical -E lsn -d postgres -S 'test_slot' --start --no-loop -f -\n> BEGIN 708\n> table public.decoding_test: INSERT: x[integer]:1 y[text]:'1'\n> table public.decoding_test: INSERT: x[integer]:2 y[text]:'2'\n> table public.decoding_test: INSERT: x[integer]:3 y[text]:'3'\n> table public.decoding_test: INSERT: x[integer]:4 y[text]:'4'\n> COMMIT 708\n>\n> # pg_recvlogical -E lsn -d postgres -S 'test_slot' --start --no-loop -f -\n> BEGIN 709\n>\n> It still generated ' BEGIN 709' in the second time execution.\n> But it will output nothing in the second time execution without the patch.\n>\n\nI think here the reason is that the first_lsn of a transaction is\nalways equal to end_lsn of the previous transaction (See comments\nabove first_lsn and end_lsn fields of ReorderBufferTXN). 
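To make the consequence concrete, the two exit checks at issue can be sketched as below. This is a simplified illustration of the comparisons described in this thread for pg_recvlogical's StreamLogicalLog(), not the exact source:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

/* 'w' (data) message: the stream only stops once the record LSN is
 * strictly past endpos, so a record at exactly endpos still gets printed. */
static bool
data_msg_past_endpos(XLogRecPtr cur_record_lsn, XLogRecPtr endpos)
{
    return cur_record_lsn > endpos;
}

/* 'k' (keepalive) message: reaching endpos is enough to stop. */
static bool
keepalive_reaches_endpos(XLogRecPtr walEnd, XLogRecPtr endpos)
{
    return walEnd >= endpos;
}
```

With the next transaction's first_lsn equal to the previous end_lsn, a data message can arrive carrying exactly endpos: the strict comparison does not fire, while a keepalive at the same position would have ended the stream.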
I have not\ndebugged but I think in StreamLogicalLog() the cur_record_lsn after\nreceiving 'w' message, in this case, will be equal to endpos whereas\nwe expect to be greater than endpos to exit. Before the patch, it will\nalways get the 'k' message where we expect the received lsn to be\nequal to endpos to conclude that we can exit. Do let me know if your\nanalysis differs?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 16 Sep 2021 18:06:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Thu, Aug 12, 2021 at 8:03 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> That base data is showing there are similar numbers of keepalives sent\n> as there are calls made to WalSndWaitForWal. IIUC it means that mostly\n> the loop is sending the special keepalives on the *first* iteration,\n> but by the time of the *second* iteration the ProcessRepliesIfAny()\n> will have some status already received, and so mostly sending another\n> keepalive will be deemed unnecessary.\n>\n> Based on this, our idea was to simply skip sending the\n> WalSndKeepalive(false) for the FIRST iteration of the loop only! PSA\n> the patch 0002 which does this skip.\n>\n\nI think we should also keep in mind that there are cases where it\nseems we are not able to send keep-alive at the appropriate frequency.\nSee the discussion [1]. 
This is to ensure that we shouldn't\nunintentionally hamper some other workloads by fixing the workload\nbeing discussed here.\n\n[1] - https://www.postgresql.org/message-id/20210913.103107.813489310351696839.horikyota.ntt%40gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 17 Sep 2021 10:35:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Thursday, September 16, 2021 8:36 PM Amit Kapila <amit.kapila16@gmail.com>:\r\n> On Thu, Sep 16, 2021 at 6:29 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > After applying the patch,\r\n> > I saw the same problem and can reproduce it by the following steps:\r\n> >\r\n> > 1) execute the SQLs.\r\n> > -----------SQL-----------\r\n> > CREATE TABLE decoding_test(x integer, y text);\r\n> > SELECT pg_create_logical_replication_slot('test_slot', 'test_decoding');\r\n> > INSERT INTO decoding_test(x,y) SELECT s, s::text FROM generate_series(1,4)\r\n> s;\r\n> >\r\n> > -- use the lsn here to execute pg_recvlogical later\r\n> > SELECT lsn FROM pg_logical_slot_peek_changes('test_slot', NULL, NULL)\r\n> ORDER BY lsn DESC LIMIT 1;\r\n> > INSERT INTO decoding_test(x,y) SELECT s, s::text FROM\r\n> generate_series(5,50) s;\r\n> > ----------------------\r\n> >\r\n> > 2) Then, if I execute the following command twice:\r\n> > # pg_recvlogical -E lsn -d postgres -S 'test_slot' --start --no-loop -f -\r\n> > BEGIN 708\r\n> > table public.decoding_test: INSERT: x[integer]:1 y[text]:'1'\r\n> > table public.decoding_test: INSERT: x[integer]:2 y[text]:'2'\r\n> > table public.decoding_test: INSERT: x[integer]:3 y[text]:'3'\r\n> > table public.decoding_test: INSERT: x[integer]:4 y[text]:'4'\r\n> > COMMIT 708\r\n> >\r\n> > # pg_recvlogical -E lsn -d postgres -S 'test_slot' --start --no-loop -f -\r\n> > BEGIN 709\r\n> >\r\n> > It still generated ' BEGIN 709' in the second time execution.\r\n> > But 
it will output nothing in the second time execution without the patch.\r\n> >\r\n> \r\n> I think here the reason is that the first_lsn of a transaction is\r\n> always equal to end_lsn of the previous transaction (See comments\r\n> above first_lsn and end_lsn fields of ReorderBufferTXN). I have not\r\n> debugged but I think in StreamLogicalLog() the cur_record_lsn after\r\n> receiving 'w' message, in this case, will be equal to endpos whereas\r\n> we expect to be greater than endpos to exit. Before the patch, it will\r\n> always get the 'k' message where we expect the received lsn to be\r\n> equal to endpos to conclude that we can exit. Do let me know if your\r\n> analysis differs?\r\n\r\nAfter debugging it, I agree with your analysis.\r\n\r\nWITH the patch:\r\nin function StreamLogicalLog(), I can see the cur_record_lsn is equal\r\nto endpos which result in unexpected record.\r\n\r\nWITHOUT the patch:\r\nIn function StreamLogicalLog(), it first received a 'k' message which will break the\r\nloop by the following code.\r\n\r\n\t\t\tif (endposReached)\r\n\t\t\t{\r\n\t\t\t\tprepareToTerminate(conn, endpos, true, InvalidXLogRecPtr);\r\n\t\t\t\ttime_to_abort = true;\r\n\t\t\t\tbreak;\r\n\t\t\t}\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Fri, 17 Sep 2021 06:22:14 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Logical replication keepalive flood" }, { "msg_contents": "On Thu, Sep 16, 2021 at 10:59 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> From Tuesday, September 14, 2021 1:39 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > However, the problem I found is that, with the patch applied, there is\n> > a test failure when running “make check-world”:\n> >\n> > t/006_logical_decoding.pl ............ 
4/14\n> > # Failed test 'pg_recvlogical acknowledged changes'\n> > # at t/006_logical_decoding.pl line 117.\n> > # got: 'BEGIN\n> > # table public.decoding_test: INSERT: x[integer]:5 y[text]:'5''\n> > # expected: ''\n> > # Looks like you failed 1 test of 14.\n> > t/006_logical_decoding.pl ............ Dubious, test returned 1 (wstat\n> > 256, 0x100) Failed 1/14 subtests\n> >\n> >\n>\n> After applying the patch,\n> I saw the same problem and can reproduce it by the following steps:\n>\n> 1) execute the SQLs.\n> -----------SQL-----------\n> CREATE TABLE decoding_test(x integer, y text);\n> SELECT pg_create_logical_replication_slot('test_slot', 'test_decoding');\n> INSERT INTO decoding_test(x,y) SELECT s, s::text FROM generate_series(1,4) s;\n>\n> -- use the lsn here to execute pg_recvlogical later\n> SELECT lsn FROM pg_logical_slot_peek_changes('test_slot', NULL, NULL) ORDER BY lsn DESC LIMIT 1;\n> INSERT INTO decoding_test(x,y) SELECT s, s::text FROM generate_series(5,50) s;\n> ----------------------\n>\n> 2) Then, if I execute the following command twice:\n> # pg_recvlogical -E lsn -d postgres -S 'test_slot' --start --no-loop -f -\n> BEGIN 708\n> table public.decoding_test: INSERT: x[integer]:1 y[text]:'1'\n> table public.decoding_test: INSERT: x[integer]:2 y[text]:'2'\n> table public.decoding_test: INSERT: x[integer]:3 y[text]:'3'\n> table public.decoding_test: INSERT: x[integer]:4 y[text]:'4'\n> COMMIT 708\n>\n> # pg_recvlogical -E lsn -d postgres -S 'test_slot' --start --no-loop -f -\n> BEGIN 709\n>\n> It still generated ' BEGIN 709' in the second time execution.\n> But it will output nothing in the second time execution without the patch.\n>\n\nHello Hous-san, thanks for including the steps. Unfortunately, no\nmatter what I tried, I could never get the patch to display the\nproblem \"BEGIN 709\" for the 2nd time execution of pg_recvlogical\n\nAfter discussion offline (thanks Greg!) 
it was found that the\npg_recvlogical step 2 posted above is not quite identical to what the\nTAP 006 test is doing.\n\nSpecifically, the TAP test also includes some other options (-o\ninclude-xids=0 -o skip-empty-xacts=1) which are not in your step.\n\nIf I include these options then I can reproduce the problem.\n-----------------------------------------\n[postgres@CentOS7-x64 ~]$ pg_recvlogical -E '0/150B5E0' -d postgres\n-S 'test_slot' --start --no-loop -o include-xids=0 -o\nskip-empty-xacts=1 -f -\nBEGIN\ntable public.decoding_test: INSERT: x[integer]:5 y[text]:'5'\n-----------------------------------------\n\nI don't know why these options should make any difference but they do.\nPerhaps they cause a fluke of millisecond timing differences in our\ndifferent VM environments.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 17 Sep 2021 17:11:56 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Fri, Sep 17, 2021 at 12:42 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, Sep 16, 2021 at 10:59 AM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n>\n> Hello Hous-san, thanks for including the steps. Unfortunately, no\n> matter what I tried, I could never get the patch to display the\n> problem \"BEGIN 709\" for the 2nd time execution of pg_recvlogical\n>\n> After discussion offline (thanks Greg!) 
it was found that the\n> pg_recvlogical step 2 posted above is not quite identical to what the\n> TAP 006 test is doing.\n>\n> Specifically, the TAP test also includes some other options (-o\n> include-xids=0 -o skip-empty-xacts=1) which are not in your step.\n>\n> If I include these options then I can reproduce the problem.\n> -----------------------------------------\n> [postgres@CentOS7-x64 ~]$ pg_recvlogical -E '0/150B5E0' -d postgres\n> -S 'test_slot' --start --no-loop -o include-xids=0 -o\n> skip-empty-xacts=1 -f -\n> BEGIN\n> table public.decoding_test: INSERT: x[integer]:5 y[text]:'5'\n> -----------------------------------------\n>\n> I don't know why these options should make any difference but they do.\n>\n\nI think there is a possibility that skip-empty-xacts = 1 is making\ndifference. Basically, if there is some empty transaction say via\nautovacuum, it would skip it and possibly send keep-alive message\nbefore sending transaction id 709. Then you won't see the problem with\nHou-San's test. Can you try by keeping autovacuum = off and by not\nusing skip-empty-xact option?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 17 Sep 2021 14:47:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Thu, Sep 16, 2021 at 10:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I think here the reason is that the first_lsn of a transaction is\n> always equal to end_lsn of the previous transaction (See comments\n> above first_lsn and end_lsn fields of ReorderBufferTXN).\n\nThat may be the case, but those comments certainly don't make this clear.\n\n>I have not\n> debugged but I think in StreamLogicalLog() the cur_record_lsn after\n> receiving 'w' message, in this case, will be equal to endpos whereas\n> we expect to be greater than endpos to exit. 
Before the patch, it will\n> always get the 'k' message where we expect the received lsn to be\n> equal to endpos to conclude that we can exit. Do let me know if your\n> analysis differs?\n>\n\nYes, pg_recvlogical seems to be relying on receiving a keepalive for\nits \"--endpos\" logic to work (and the 006 test is relying on '' record\noutput from pg_recvlogical in this case).\nBut is it correct to be relying on a keepalive for this?\nAs I already pointed out, there's also code which seems to be relying\non replies from sending keepalives, to update flush and write\nlocations related to LSN.\nThe original problem reporter measured 500 keepalives per second being\nsent by walsender (which I also reproduced, for pg_recvlogical and\npub/sub cases).\nNone of these cases appear to be traditional uses of \"keepalive\" type\nmessages to me.\nAm I missing something? Documentation?\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Fri, 17 Sep 2021 19:32:54 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Fri, Sep 17, 2021 at 3:03 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Thu, Sep 16, 2021 at 10:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I think here the reason is that the first_lsn of a transaction is\n> > always equal to end_lsn of the previous transaction (See comments\n> > above first_lsn and end_lsn fields of ReorderBufferTXN).\n>\n> That may be the case, but those comments certainly don't make this clear.\n>\n> >I have not\n> > debugged but I think in StreamLogicalLog() the cur_record_lsn after\n> > receiving 'w' message, in this case, will be equal to endpos whereas\n> > we expect to be greater than endpos to exit. Before the patch, it will\n> > always get the 'k' message where we expect the received lsn to be\n> > equal to endpos to conclude that we can exit. 
Do let me know if your\n> > analysis differs?\n> >\n>\n> Yes, pg_recvlogical seems to be relying on receiving a keepalive for\n> its \"--endpos\" logic to work (and the 006 test is relying on '' record\n> output from pg_recvlogical in this case).\n> But is it correct to be relying on a keepalive for this?\n>\n\nI don't think this experiment/test indicates that pg_recvlogical's\n\"--endpos\" relies on keepalive. It would just print the records till\n--endpos and then exit. In the test under discussion, as per\nconfirmation by Hou-San, the BEGIN record received has the same LSN as\n--endpos, so it would just output that and exit which is what is\nmentioned in pg_recvlogical docs as well (If there's a record with LSN\nexactly equal to lsn, the record will be output).\n\nI think here the test case could be a culprit. In the original commit\neb2a6131be [1], where this test of the second time using\npg_recvlogical was added there were no additional Inserts (# Insert\nsome rows after $endpos, which we won't read.) which were later added\nby a different commit 8222a9d9a1 [2]. I am not sure if the test added\nby commit [2] was a good idea. 
It seems to be working due to the way\nkeepalives are being sent but otherwise, it can fail as per the\ncurrent design of pg_recvlogical.\n\n[1]:\ncommit eb2a6131beccaad2b39629191508062b70d3a1c6\nAuthor: Simon Riggs <simon@2ndQuadrant.com>\nDate: Tue Mar 21 14:04:49 2017 +0000\n\n Add a pg_recvlogical wrapper to PostgresNode\n\n[2]:\ncommit 8222a9d9a12356349114ec275b01a1a58da2b941\nAuthor: Noah Misch <noah@leadboat.com>\nDate: Wed May 13 20:42:09 2020 -0700\n\n In successful pg_recvlogical, end PGRES_COPY_OUT cleanly.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sun, 19 Sep 2021 11:16:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Sun, Sep 19, 2021 at 3:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> >\n> > Yes, pg_recvlogical seems to be relying on receiving a keepalive for\n> > its \"--endpos\" logic to work (and the 006 test is relying on '' record\n> > output from pg_recvlogical in this case).\n> > But is it correct to be relying on a keepalive for this?\n> >\n>\n> I don't think this experiment/test indicates that pg_recvlogical's\n> \"--endpos\" relies on keepalive. It would just print the records till\n> --endpos and then exit. In the test under discussion, as per\n> confirmation by Hou-San, the BEGIN record received has the same LSN as\n> --endpos, so it would just output that and exit which is what is\n> mentioned in pg_recvlogical docs as well (If there's a record with LSN\n> exactly equal to lsn, the record will be output).\n>\n\nIt seems to be relying on keepalive to get ONE specific record per\n--endpos value, because once we apply the\n\"v1-0002-WIP-skip-the-keepalive-on-FIRST-loop-iteration.patch\" patch,\nthen when pg_recvlogical is invoked for a second time with the same\n--endpos, it outputs the next record (BEGIN) too. 
So now for the same\n--endpos value, we've had two different records output by\npg_recvlogical.\nI have not seen this described in the documentation, so I think it\nwill need to be updated, should keepalives be reduced as per the\npatch. The current documentation seems to be implying that one\nparticular record will be returned for a given --endpos (at least,\nthere is no mention of the possibility of different records being\noutput for the one --endpos, or that the first_lsn of a transaction is\nalways equal to end_lsn of the previous transaction).\n\n--endpos=lsn\n\n In --start mode, automatically stop replication and exit with\nnormal exit status 0 when receiving reaches the specified LSN. If\nspecified when not in --start mode, an error is raised.\n\n If there's a record with LSN exactly equal to lsn, the record will be output.\n\n The --endpos option is not aware of transaction boundaries and may\ntruncate output partway through a transaction. Any partially output\ntransaction will not be consumed and will be replayed again when the\nslot is next read from. Individual messages are never truncated.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Mon, 20 Sep 2021 11:41:02 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Thu, Sep 30, 2021 at 8:49 AM Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com> wrote:\n>\n> I noticed another patch that Horiguchi-San posted earlier[1].\n>\n> The approach in that patch is to splits the sleep into two\n> pieces. 
If the first sleep reaches the timeout then send a keepalive\n> then sleep for the remaining time.\n>\n> I tested that patch and can see the keepalive message is reduced and\n> the patch won't cause the current regression test fail.\n>\n> Since I didn't find some comments about that patch,\n> I wonder did we find some problems about that patch ?\n>\n\nI am not able to understand some parts of that patch.\n\n+ If the sleep is shorter\n+ * than KEEPALIVE_TIMEOUT milliseconds, we skip sending a keepalive to\n+ * prevent it from getting too-frequent.\n+ */\n+ if (MyWalSnd->flush < sentPtr &&\n+ MyWalSnd->write < sentPtr &&\n+ !waiting_for_ping_response)\n+ {\n+ if (sleeptime > KEEPALIVE_TIMEOUT)\n+ {\n+ int r;\n+\n+ r = WalSndWait(wakeEvents, KEEPALIVE_TIMEOUT,\n+ WAIT_EVENT_WAL_SENDER_WAIT_WAL);\n+\n+ if (r != 0)\n+ continue;\n+\n+ sleeptime -= KEEPALIVE_TIMEOUT;\n+ }\n+\n+ WalSndKeepalive(false);\n\nIt claims to skip sending keepalive if the sleep time is shorter than\nKEEPALIVE_TIMEOUT (a new threshold) but the above code seems to\nsuggest it sends in both cases. What am I missing?\n\nAlso, more to the point this special keep_alive seems to be sent for\nsynchronous replication and walsender shutdown as they can expect\nupdated locations. You haven't given any reason (theory) why those two\nwon't be impacted due to this change? I am aware that for synchronous\nreplication, we wake waiters while ProcessStandbyReplyMessage but I am\nnot sure how it helps with wal sender shutdown? 
I think we need to\nknow the reasons for this message and then need to see if the change\nhas any impact on the same.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 30 Sep 2021 11:25:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Thu, Sep 30, 2021 at 3:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I am not able to understand some parts of that patch.\n>\n> + If the sleep is shorter\n> + * than KEEPALIVE_TIMEOUT milliseconds, we skip sending a keepalive to\n> + * prevent it from getting too-frequent.\n> + */\n> + if (MyWalSnd->flush < sentPtr &&\n> + MyWalSnd->write < sentPtr &&\n> + !waiting_for_ping_response)\n> + {\n> + if (sleeptime > KEEPALIVE_TIMEOUT)\n> + {\n> + int r;\n> +\n> + r = WalSndWait(wakeEvents, KEEPALIVE_TIMEOUT,\n> + WAIT_EVENT_WAL_SENDER_WAIT_WAL);\n> +\n> + if (r != 0)\n> + continue;\n> +\n> + sleeptime -= KEEPALIVE_TIMEOUT;\n> + }\n> +\n> + WalSndKeepalive(false);\n>\n> It claims to skip sending keepalive if the sleep time is shorter than\n> KEEPALIVE_TIMEOUT (a new threshold) but the above code seems to\n> suggest it sends in both cases. What am I missing?\n>\n\nThe comment does seem to be wrong.\nThe way I see it, if the calculated sleeptime is greater than\nKEEPALIVE_TIMEOUT, then it first sleeps for up to KEEPALIVE_TIMEOUT\nmilliseconds and skips sending a keepalive if something happens (i.e.\nthe socket becomes readable/writeable) during that time (WalSendWait\nwill return non-zero in that case). Otherwise, it sends a keepalive\nand sleeps for (sleeptime - KEEPALIVE_TIMEOUT), then loops around\nagain ...\n\n> Also, more to the point this special keep_alive seems to be sent for\n> synchronous replication and walsender shutdown as they can expect\n> updated locations. You haven't given any reason (theory) why those two\n> won't be impacted due to this change? 
I am aware that for synchronous\n> replication, we wake waiters while ProcessStandbyReplyMessage but I am\n> not sure how it helps with wal sender shutdown? I think we need to\n> know the reasons for this message and then need to see if the change\n> has any impact on the same.\n>\n\nYes, I'm not sure about the possible impacts, still looking at it.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 30 Sep 2021 16:21:25 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Thu, Sep 30, 2021 at 1:19 PM Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com> wrote:\n>\n> I tested that patch and can see the keepalive message is reduced and\n> the patch won't cause the current regression test fail.\n>\n\nActually, with the patch applied, I find that \"make check-world\" fails\n(006_logical_decoding, test 7).\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 30 Sep 2021 17:21:03 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "At Thu, 30 Sep 2021 16:21:25 +1000, Greg Nancarrow <gregn4422@gmail.com> wrote in \n> On Thu, Sep 30, 2021 at 3:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > It claims to skip sending keepalive if the sleep time is shorter than\n> > KEEPALIVE_TIMEOUT (a new threshold) but the above code seems to\n> > suggest it sends in both cases. What am I missing?\n> >\n> \n> The comment does seem to be wrong.\n\nMmm. Indeed the comment does say something like that... Looking the\npatch name together, I might have confused something. 
However, the\npatch looks like working for the purpose.\n\n> The way I see it, if the calculated sleeptime is greater than\n> KEEPALIVE_TIMEOUT, then it first sleeps for up to KEEPALIVE_TIMEOUT\n> milliseconds and skips sending a keepalive if something happens (i.e.\n> the socket becomes readable/writeable) during that time (WalSendWait\n> will return non-zero in that case). Otherwise, it sends a keepalive\n> and sleeps for (sleeptime - KEEPALIVE_TIMEOUT), then loops around\n> again ...\n\nThe main point of the patch is moving of the timing of sending the\nbefore-sleep keepalive. It seems to me that currently\nWalSndWaitForWal may send \"before-sleep\" keepalive every run of the\nloop under a certain circumstance. I suspect this happen in this case.\n\nAfter the patch applied, that keepalive is sent only when the loop is\nactually going to sleep some time. In case the next WAL doesn't come\nfor KEEPALIVE_TIMEOUT milliseconds, it sends a keepalive. There's a\ndubious behavior when sleeptime <= KEEPALIVE_TIMEOUT that it sends a\nkeepalive immediately. It was (as far as I recall) intentional in\norder to make the code simpler. However, on second thought, we will\nhave the next chance to send keepalive in that case, and intermittent\nfrequent keepalives can happen with that behavior. So I came to think\nthat we can omit keepalives at all that case.\n\n(I myself haven't see the keepalive flood..)\n\n> > Also, more to the point this special keep_alive seems to be sent for\n> > synchronous replication and walsender shutdown as they can expect\n> > updated locations. You haven't given any reason (theory) why those two\n> > won't be impacted due to this change? I am aware that for synchronous\n> > replication, we wake waiters while ProcessStandbyReplyMessage but I am\n> > not sure how it helps with wal sender shutdown? 
I think we need to\n> > know the reasons for this message and then need to see if the change\n> > has any impact on the same.\n> >\n> \n> Yes, I'm not sure about the possible impacts, still looking at it.\n\nIf the comment describes the objective correctly, the only possible\nimpact would be that there may be a case where server responds a bit\nslowly for a shutdown request. But I'm not sure it is definitely\ntrue.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 30 Sep 2021 16:56:29 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "At Thu, 30 Sep 2021 17:21:03 +1000, Greg Nancarrow <gregn4422@gmail.com> wrote in \n> Actually, with the patch applied, I find that \"make check-world\" fails\n> (006_logical_decoding, test 7).\n\nMmm..\n\nt/006_logical_decoding.pl .. 4/14 \n# Failed test 'pg_recvlogical acknowledged changes'\n# at t/006_logical_decoding.pl line 117.\n# got: 'BEGIN\n# table public.decoding_test: INSERT: x[integer]:5 y[text]:'5''\n# expected: ''\n\nI'm not sure what the test is checking for now, though.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 30 Sep 2021 17:08:35 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Thu, Sep 30, 2021 at 6:08 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 30 Sep 2021 17:21:03 +1000, Greg Nancarrow <gregn4422@gmail.com> wrote in\n> > Actually, with the patch applied, I find that \"make check-world\" fails\n> > (006_logical_decoding, test 7).\n>\n> Mmm..\n>\n> t/006_logical_decoding.pl .. 
4/14 \n# Failed test 'pg_recvlogical acknowledged changes'\n# at t/006_logical_decoding.pl line 117.\n# got: 'BEGIN\n# table public.decoding_test: INSERT: x[integer]:5 y[text]:'5''\n# expected: ''\n\nI'm not sure what the test is checking for now, though.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 30 Sep 2021 17:08:35 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Thu, Sep 30, 2021 at 6:08 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 30 Sep 2021 17:21:03 +1000, Greg Nancarrow <gregn4422@gmail.com> wrote in\n> > Actually, with the patch applied, I find that \"make check-world\" fails\n> > (006_logical_decoding, test 7).\n>\n> Mmm..\n>\n> t/006_logical_decoding.pl .. 4/14\n> # Failed test 'pg_recvlogical acknowledged changes'\n> # at t/006_logical_decoding.pl line 117.\n> # got: 'BEGIN\n> # table public.decoding_test: INSERT: x[integer]:5 y[text]:'5''\n> # expected: ''\n>\n> I'm not sure what the test is checking for now, though.\n>\n\nI think it's trying to check that pg_recvlogical doesn't read \"past\" a\nspecified \"--endpos\" LSN. The test is invoking pg_recvlogical with the\nsame --endpos LSN value multiple times.\nAfter first getting the LSN (to use for the --endpos value) after 4\nrows are inserted, some additional rows are inserted which the test\nexpects pg_recvlogical won't read because it shouldn't read past\n--endpos.\nProblem is, the test seems to be relying on a keepalive between the\nWAL record of the first transaction and the WAL record of the next\ntransaction.\nAs Amit previously explained on this thread \"I think here the reason\nis that the first_lsn of a transaction is always equal to end_lsn of\nthe previous transaction (See comments\nabove first_lsn and end_lsn fields of ReorderBufferTXN).\"\nWhen the patch is applied, pg_recvlogical doesn't read a keepalive\nwhen it is invoked with the same --endpos for a second time here, and\nit ends up reading the first WAL record for the next transaction\n(those additional rows that the test expects it won't read).\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 30 Sep 2021 19:51:14 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "At Thu, 30 Sep 2021 17:08:35 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 30 Sep 2021 17:21:03 +1000, Greg Nancarrow <gregn4422@gmail.com> wrote in \n> > Actually, with the patch applied, I find that \"make check-world\" fails\n> > (006_logical_decoding, test 7).\n> \n> Mmm..\n> \n> t/006_logical_decoding.pl .. 
4/14 \n> # Failed test 'pg_recvlogical acknowledged changes'\n> # at t/006_logical_decoding.pl line 117.\n> # got: 'BEGIN\n> # table public.decoding_test: INSERT: x[integer]:5 y[text]:'5''\n> # expected: ''\n> \n> I'm not sure what the test is checking for now, though.\n\nIt's checking that endpos works correctly? The logical decoded WALs\nlooks like this.\n\n0/1528F10|table public.decoding_test: INSERT: x[integer]:1 y[text]:'1'\n0/15290F8|table public.decoding_test: INSERT: x[integer]:2 y[text]:'2'\n0/1529138|table public.decoding_test: INSERT: x[integer]:3 y[text]:'3'\n0/1529178|table public.decoding_test: INSERT: x[integer]:4 y[text]:'4'\n0/15291E8|COMMIT 709\n0/15291E8|BEGIN 710\n0/15291E8|table public.decoding_test: INSERT: x[integer]:5 y[text]:'5'\n0/1529228|table public.decoding_test: INSERT: x[integer]:6 y[text]:'6'\n\nThe COMMIT and BEGIN shares the same LSN, which I don't understand how come.\n\nThe previous read by pg_recvlogical proceeded up to the COMMIT record, and\nthe following command runs after that behaves differently.\n\npg_recvlogical -S test_slot --dbname postgres --endpos '0/15291E8' -f - --no-loop --start\n\nBefore the patch it ends before reading a record, and after the patch\nit reads into the \"table ...\" line. pg_recvlogical seems using the\nendpos as the beginning of the last record. In that meaning the three\nlines (COMMIT 709/BEGIN 710/table ...'5') are falls under the end of\ndata.\n\nThe difference seems coming from the timing keepalive\ncomes. pg_recvlogical checks the endpos only when keepalive comes. In\nother words, it needs keepalive for every data line so that it stops\nexactly at the specified endpos.\n\n1. Is it the correct behavior that the three data lines share the same\n LSN? I think BEGIN and the next line should do, but COMMIT and next\n BEGIN shouldn't.\n\n2. Is it the designed behavior that pg_recvlogical does check endpos\n only when a keepalive comes? 
If it is the correct behavior, we\n shouldn't avoid the keepalive flood.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 30 Sep 2021 19:11:16 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Thu, Sep 30, 2021 at 1:26 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 30 Sep 2021 16:21:25 +1000, Greg Nancarrow <gregn4422@gmail.com> wrote in\n> > On Thu, Sep 30, 2021 at 3:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n>\n> > > Also, more to the point this special keep_alive seems to be sent for\n> > > synchronous replication and walsender shutdown as they can expect\n> > > updated locations. You haven't given any reason (theory) why those two\n> > > won't be impacted due to this change? I am aware that for synchronous\n> > > replication, we wake waiters while ProcessStandbyReplyMessage but I am\n> > > not sure how it helps with wal sender shutdown? I think we need to\n> > > know the reasons for this message and then need to see if the change\n> > > has any impact on the same.\n> > >\n> >\n> > Yes, I'm not sure about the possible impacts, still looking at it.\n>\n> If the comment describes the objective correctly, the only possible\n> impact would be that there may be a case where server responds a bit\n> slowly for a shutdown request. But I'm not sure it is definitely\n> true.\n>\n\nSo, we should try to find how wal sender shutdown is dependent on\nsending keep alive and second thing is what about sync rep case? I\nthink in the worst case that also might delay. 
Why do you think that\nwould be acceptable?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 30 Sep 2021 17:07:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Thu, Sep 30, 2021 at 3:41 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 30 Sep 2021 17:08:35 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > At Thu, 30 Sep 2021 17:21:03 +1000, Greg Nancarrow <gregn4422@gmail.com> wrote in\n> > > Actually, with the patch applied, I find that \"make check-world\" fails\n> > > (006_logical_decoding, test 7).\n> >\n> > Mmm..\n> >\n> > t/006_logical_decoding.pl .. 4/14\n> > # Failed test 'pg_recvlogical acknowledged changes'\n> > # at t/006_logical_decoding.pl line 117.\n> > # got: 'BEGIN\n> > # table public.decoding_test: INSERT: x[integer]:5 y[text]:'5''\n> > # expected: ''\n> >\n> > I'm not sure what the test is checking for now, though.\n>\n> It's checking that endpos works correctly? The logical decoded WALs\n> looks like this.\n>\n> 0/1528F10|table public.decoding_test: INSERT: x[integer]:1 y[text]:'1'\n> 0/15290F8|table public.decoding_test: INSERT: x[integer]:2 y[text]:'2'\n> 0/1529138|table public.decoding_test: INSERT: x[integer]:3 y[text]:'3'\n> 0/1529178|table public.decoding_test: INSERT: x[integer]:4 y[text]:'4'\n> 0/15291E8|COMMIT 709\n> 0/15291E8|BEGIN 710\n> 0/15291E8|table public.decoding_test: INSERT: x[integer]:5 y[text]:'5'\n> 0/1529228|table public.decoding_test: INSERT: x[integer]:6 y[text]:'6'\n>\n> The COMMIT and BEGIN shares the same LSN, which I don't understand how come.\n>\n\nThis is because endlsn is always commit record + 1 which makes it\nequal to start of next record and we use endlsn here for commit. 
See\nbelow comments in code.\n/*\n* LSN pointing to the end of the commit record + 1.\n*/\nXLogRecPtr end_lsn;\n\n> The previous read by pg_recvlocal prceeded upto the COMMIT record. and\n> the following command runs after that behaves differently.\n>\n> pg_recvlogical -S test_slot --dbname postgres --endpos '0/15291E8' -f - --no-loop --start\n>\n> Before the patch it ends before reading a record, and after the patch\n> it reads into the \"table ...\" line. pg_recvlogical seems using the\n> endpos as the beginning of the last record. In that meaning the three\n> lines (COMMIT 709/BEGIN 710/table ...'5') are falls under the end of\n> data.\n>\n> The difference seems coming from the timing keepalive\n> comes. pg_recvlogical checks the endpos only when keepalive comes. In\n> other words, it needs keepalive for every data line so that it stops\n> exactly at the specified endpos.\n>\n> 1. Is it the correct behavior that the three data lines share the same\n> LSN? I think BEGIN and the next line should do, but COMMIT and next\n> BEGIN shouldn't.\n>\n> 2. Is it the designed behavior that pg_recvlogical does check endpos\n> only when a keepalive comes? If it is the correct behavior, we\n> shouldn't avoid the keepalive flood.\n>\n\nIf anything, I think this is a testcase issue as explained by me in email [1]\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1Ja2XmK59Czv1V%2BtfOgU4mcFfDwTtTgO02Wd%3DrO02JbiQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 30 Sep 2021 17:15:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "On Thu, Sep 30, 2021 at 5:56 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> After the patch applied, that keepalive is sent only when the loop is\n> actually going to sleep some time. In case the next WAL doesn't come\n> for KEEPALIVE_TIMEOUT milliseconds, it sends a keepalive. 
There's a\n> dubious behavior when sleeptime <= KEEPALIVE_TIMEOUT that it sends a\n> keepalive immediately. It was (as far as I recall) intentional in\n> order to make the code simpler. However, on second thought, we will\n> have the next chance to send keepalive in that case, and intermittent\n> frequent keepalives can happen with that behavior. So I came to think\n> that we can omit keepalives at all that case.\n>\n> (I myself haven't see the keepalive flood..)\n>\n\nI tried your updated patch\n(avoid_keepalive_flood_at_bleeding_edge_of_wal.patch, rebased) and\nalso manually applied your previous keepalive-counting code\n(count_keepalives2.diff.txt), adapted to the code updates.\nI tested both the problem originally reported (which used\npg_recvlogical) and similarly using pub/sub of the pgbench_history\ntable, and in both cases I found that your patch very significantly\nreduced the keepalives, so the keepalive flood is no longer seen.\nI am still a little unsure about the impact on pg_recvlogical --endpos\nfunctionality, which is detected by the regression test failure. I did\ntry to update pg_recvlogical, to not rely on a keepalive for --endpos,\nbut so far haven't been successful in doing that. If the test is\naltered/removed then I think that the documentation for pg_recvlogical\n--endpos will need updating in some way.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Fri, 1 Oct 2021 18:14:22 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" }, { "msg_contents": "At Thu, 30 Sep 2021 17:07:08 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Thu, Sep 30, 2021 at 1:26 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > If the comment describes the objective correctly, the only possible\n> > impact would be that there may be a case where server responds a bit\n> > slowly for a shutdown request. 
But I'm not sure it is definitely\n> > true.\n> >\n> \n> So, we should try to find how wal sender shutdown is dependent on\n> sending keep alive and second thing is what about sync rep case? I\n> think in the worst case that also might delay. Why do you think that\n> would be acceptable?\n\nMmm. AFAICS including the history of the code, the problem to fix\nseems to be that the logical wal receiver doesn't send a flush\nresponse spontaneously. As long as the receiver doesn't do that, and unless\nwe allow some delay of the response, the sender inevitably needs to ping\nfrequently until the wanted response returns.\n\nIt seems to me that it is better to make the receiver send a response\nat flush LSN movement spontaneously rather than tweaking the keepalive\nsending mechanism. But letting XLogFlush trigger lsn_mapping\nprocessing does not seem simple..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 01 Oct 2021 18:12:05 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication keepalive flood" } ]
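The rule the thread converges on, that the walsender should ping only when it is actually about to sleep, and that pinging is pointless while a reply to an earlier ping is still outstanding, can be sketched as a small stand-alone decision function. This is a simplified model under invented names (`should_send_keepalive`, `KEEPALIVE_TIMEOUT_MS`), not the actual walsender code:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of the throttling rule discussed in this thread:
 * ping only when the sender is about to sleep, and never while a reply
 * to a previous ping is still pending.  The timeout value and all names
 * are invented for illustration. */
#define KEEPALIVE_TIMEOUT_MS 250

typedef struct KeepaliveState
{
    bool reply_pending;        /* a ping was sent and no reply arrived yet */
    long ms_since_last_send;   /* time since the last outgoing message */
} KeepaliveState;

/* Decide whether a keepalive should be emitted before sleeping. */
static bool
should_send_keepalive(const KeepaliveState *st, bool about_to_sleep)
{
    if (!about_to_sleep)
        return false;          /* still busy streaming WAL: no ping needed */
    if (st->reply_pending)
        return false;          /* wait for the previous reply first */
    return st->ms_since_last_send >= KEEPALIVE_TIMEOUT_MS;
}
```

Under this model a sender that streams continuously never pings at all, which is the behavior the patch in this thread aims for.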
[ { "msg_contents": "> Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org> writes:\n\n> > Now, while this patch does seem to work correctly, it raises a number of\n> > weird cpluspluscheck warnings, which I think are attributable to the\n> > new macro definitions. I didn't look into it closely, but I suppose it\n> > should be fixable given sufficient effort:\n>\n> Didn't test, but the first one is certainly fixable by adding a cast,\n> and I guess the others might be as well.\n\n>I get no warnings with this one. I'm a bit wary of leaving\n>VARDATA_COMPRESSED_GET_EXTSIZE unchanged, but at least nothing in this\n>patch requires a cast there.\n\nHi Alvaro.\n\nPlease, would you mind testing with these changes.\nI'm curious to see if anything improves or not.\n1. Add a const to the attr parameter.\n2. Remove the cmid variable (and store it).\n3. Add tail cut.\n\nregards,\n\nRanier Vilela", "msg_date": "Sat, 5 Jun 2021 11:16:20 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?" }, { "msg_contents": "On Sat, Jun 5, 2021 at 11:16 AM Ranier Vilela <ranier.vf@gmail.com>\nwrote:\n\n> > Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org> writes:\n>\n> > > Now, while this patch does seem to work correctly, it raises a number\n> of\n> > > weird cpluspluscheck warnings, which I think are attributable to the\n> > > new macro definitions. I didn't look into it closely, but I suppose it\n> > > should be fixable given sufficient effort:\n> >\n> > Didn't test, but the first one is certainly fixable by adding a cast,\n> > and I guess the others might be as well.\n>\n> >I get no warnings with this one. 
I'm a bit wary of leaving\n> >VARDATA_COMPRESSED_GET_EXTSIZE unchanged, but at least nothing in this\n> >patch requires a cast there.\n>\n> Hi Alvaro.\n>\n> Please, would you mind testing with these changes.\n> I'm curious to see if anything improves or not.\n> 1. Add a const to the attr parameter.\n> 2. Remove the cmid variable (and store it).\n> 3. Add tail cut.\n>\nI think that\nhttps://github.com/postgres/postgres/commit/e6241d8e030fbd2746b3ea3f44e728224298f35b#diff-640a50de37a0dc027d9d1c7239e34aed53b184a8ec6e1f653694e458376b19fa\nstill has space for improvements.\n\nIf attr->attcompression is invalid, it doesn't matter, it's better to\ndecompress.\nBesides if it is invalid, currently default_toast_compression is not stored\nin attr->attcompression.\nI believe this version should be a little faster.\nSince it won't double-scan the attributes if it's not really necessary.\n\nregards,\nRanier Vilela", "msg_date": "Sun, 6 Jun 2021 18:34:53 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?" } ]
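The behavior Ranier is pointing at, an invalid `attr->attcompression` falling back to `default_toast_compression` rather than being treated as an error, can be modeled in miniature. This is a sketch under invented constant names and values, not the actual PostgreSQL definitions:

```c
#include <assert.h>

/* Miniature model of the fallback discussed above: if a column's stored
 * compression method is invalid/unset, use the configured default rather
 * than failing.  The constant names and values are invented for
 * illustration; in PostgreSQL the per-column setting lives in
 * pg_attribute.attcompression alongside the default_toast_compression GUC. */
#define COMPRESSION_INVALID '\0'
#define COMPRESSION_PGLZ    'p'
#define COMPRESSION_LZ4     'l'

static char default_toast_compression = COMPRESSION_PGLZ;

/* Resolve the compression method actually used for a column. */
static char
effective_compression(char attcompression)
{
    if (attcompression == COMPRESSION_INVALID)
        return default_toast_compression;  /* unset: fall back to default */
    return attcompression;
}
```

The point of resolving the method lazily like this is that an unset per-column value never has to be backfilled when the default changes.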
[ { "msg_contents": "Hi all,\n\nWhile reading the code of pg_log_backend_memory_contexts(), I have\nbeen surprised to see that the code would attempt to look at a PROC\nentry based on the given input PID *before* checking if the function\nhas been called by a superuser. This does not strike me as a good\nidea as this allows any users to call this function and to take\nProcArrayLock in shared mode, freely.\n\nIt seems to me that we had better check for a superuser at the\nbeginning of the function, like in the attached.\n\nThanks,\n--\nMichael", "msg_date": "Sun, 6 Jun 2021 15:53:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Misplaced superuser check in pg_log_backend_memory_contexts()" }, { "msg_contents": "On Sun, Jun 06, 2021 at 03:53:10PM +0900, Michael Paquier wrote:\n> \n> While reading the code of pg_log_backend_memory_contexts(), I have\n> been surprised to see that the code would attempt to look at a PROC\n> entry based on the given input PID *before* checking if the function\n> has been called by a superuser. This does not strike me as a good\n> idea as this allows any users to call this function and to take\n> ProcArrayLock in shared mode, freely.\n\nIt doesn't seem like a huge problem as at least GetSnapshotData also acquires\nProcArrayLock in shared mode. 
Knowing if a specific pid is a postgres backend\nor not isn't privileged information either, and anyone can check that using\npg_stat_activity as an unprivileged user (which will also acquire ProcArrayLock\nin shared mode).\n> \n> It seems to me that we had better check for a superuser at the\n> beginning of the function, like in the attached.\n\nHowever +1 for the patch, as it seems more consistent to always get a\npermission failure if you're not a superuser.\n\n\n", "msg_date": "Sun, 6 Jun 2021 15:13:12 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Misplaced superuser check in pg_log_backend_memory_contexts()" }, { "msg_contents": "On Sun, Jun 6, 2021 at 12:23 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi all,\n>\n> While reading the code of pg_log_backend_memory_contexts(), I have\n> been surprised to see that the code would attempt to look at a PROC\n> entry based on the given input PID *before* checking if the function\n> has been called by a superuser. This does not strike me as a good\n> idea as this allows any users to call this function and to take\n> ProcArrayLock in shared mode, freely.\n>\n> It seems to me that we had better check for a superuser at the\n> beginning of the function, like in the attached.\n\npg_signal_backend still locks ProcArrayLock in shared mode first and then\nchecks for the superuser permissions. Of course, it does that for the\nroleId i.e. superuser_arg(proc->roleId), but there's also superuser() check.\n\nWith Regards,\nBharath Rupireddy.", "msg_date": "Sun, 6 Jun 2021 19:03:46 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Misplaced superuser check in pg_log_backend_memory_contexts()" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sun, Jun 06, 2021 at 03:53:10PM +0900, Michael Paquier wrote:\n>> It seems to me that we had better check for a superuser at the\n>> beginning of the function, like in the attached.\n\n> However +1 for the patch, as it seems more consistent to always get a\n> permission failure if you're not a superuser.\n\nYeah, it's just weird if such a check is not the first thing\nin the function. Even if you can convince yourself that the\nactions taken before that don't create any security issue today,\nit's not hard to imagine that innocent future code rearrangements\ncould break that argument. 
What's the value of postponing the\ncheck anyway?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 06 Jun 2021 11:13:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Misplaced superuser check in pg_log_backend_memory_contexts()" }, { "msg_contents": "On Sun, Jun 06, 2021 at 11:13:40AM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n>> However +1 for the patch, as it seems more consistent to always get a\n>> permission failure if you're not a superuser.\n> \n> Yeah, it's just weird if such a check is not the first thing\n> in the function. Even if you can convince yourself that the\n> actions taken before that don't create any security issue today,\n> it's not hard to imagine that innocent future code rearrangements\n> could break that argument. What's the value of postponing the\n> check anyway?\n\nThanks for the input, I have applied the patch.\n--\nMichael", "msg_date": "Tue, 8 Jun 2021 11:49:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Misplaced superuser check in pg_log_backend_memory_contexts()" }, { "msg_contents": "\n\nOn 2021/06/08 11:49, Michael Paquier wrote:\n> On Sun, Jun 06, 2021 at 11:13:40AM -0400, Tom Lane wrote:\n>> Julien Rouhaud <rjuju123@gmail.com> writes:\n>>> However +1 for the patch, as it seems more consistent to always get a\n>>> permission failure if you're not a superuser.\n>>\n>> Yeah, it's just weird if such a check is not the first thing\n>> in the function. Even if you can convince yourself that the\n>> actions taken before that don't create any security issue today,\n>> it's not hard to imagine that innocent future code rearrangements\n>> could break that argument. 
What's the value of postponing the\n>> check anyway?\n> \n> Thanks for the input, I have applied the patch.\n\nThanks a lot!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 8 Jun 2021 23:30:00 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Misplaced superuser check in pg_log_backend_memory_contexts()" }, { "msg_contents": "On 2021-06-08 11:49, Michael Paquier wrote:\n> On Sun, Jun 06, 2021 at 11:13:40AM -0400, Tom Lane wrote:\n>> Julien Rouhaud <rjuju123@gmail.com> writes:\n>>> However +1 for the patch, as it seems more consistent to always get a\n>>> permission failure if you're not a superuser.\n>> \n>> Yeah, it's just weird if such a check is not the first thing\n>> in the function. Even if you can convince yourself that the\n>> actions taken before that don't create any security issue today,\n>> it's not hard to imagine that innocent future code rearrangements\n>> could break that argument. 
What's the value of postponing the\n>> check anyway?\n> \n> Thanks for the input, I have applied the patch.\n\nThanks for your modification!\n\nBTW, I did the same thing in another patch I'm proposing[1], so I'll fix \nthat as well.\n\n[1] \nhttps://www.postgresql.org/message-id/c6682a25f3f0e9bd520707342219eac5%40oss.nttdata.com\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 09 Jun 2021 00:25:51 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Misplaced superuser check in pg_log_backend_memory_contexts()" }, { "msg_contents": "On Wed, Jun 09, 2021 at 12:25:51AM +0900, torikoshia wrote:\n> BTW, I did the same thing in another patch I'm proposing[1], so I'll fix\n> that as well.\n\nYes, it would be better to be consistent here.\n--\nMichael", "msg_date": "Wed, 9 Jun 2021 10:37:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Misplaced superuser check in pg_log_backend_memory_contexts()" } ]
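The fix applied in this thread boils down to an ordering rule: do the cheap permission check before touching any shared state. A minimal stand-alone illustration of that rule follows, with all names invented and a flag standing in for the ProcArrayLock acquisition; the real code is `pg_log_backend_memory_contexts()`:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the ordering fix applied in this thread: perform the
 * cheap privilege check first, and only then touch shared state (a flag
 * stands in for the ProcArrayLock work).  All names are invented for
 * illustration. */
typedef struct FakeSession
{
    bool is_superuser;
    bool touched_shared_state;   /* records whether we "took the lock" */
} FakeSession;

/* Returns false (permission failure) without touching shared state when
 * the caller is not a superuser; true once the privileged work is done. */
static bool
log_memory_contexts(FakeSession *s)
{
    if (!s->is_superuser)
        return false;            /* reject before acquiring anything */

    s->touched_shared_state = true;  /* stand-in for the privileged work */
    return true;
}
```

As Tom Lane notes above, putting the check first also keeps innocent future code rearrangements from silently moving work ahead of it.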
[ { "msg_contents": "Hi,\n\nThere seems to be a weird bug in Postgres (last tested 11.12) where it\nallows an INSERT into a table with a UNIQUE / UNIQUE CONSTRAINT index\non a TEXT/VARCHAR column when there's already a value present in that index,\nbut only for UTF-8 input.\n\nI just had this happen on our user table and it somehow made it so\nthat Postgres returned no results for *any* SELECT ... FROM x WHERE\nunique_col = 'x', which unfortunately meant no one could log in to our\nservice.\n\nI had to:\n\nSET enable_indexscan = off;\nSET enable_bitmapscan = off;\n\nAnd then the data was returned properly. I thought maybe the index was\ncorrupt somehow, so I tried to reindex the unique index, which failed\nbecause \"nur\" was present twice.\n\nI modified the value in that column by the primary key (which is an\ninteger), and that allowed me to reindex, after which queries against\nthe column started working properly again.\n\nMy collation settings:\n\n postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n\nI've had this happen before on a different table with Cyrillic UTF-8\ninput, but didn't really have much to go on debugging-wise.\n\nWhat I sort of don't get is... before we insert anything into these\ntables, we always check to see if a value already exists. And Postgres\nmust be returning no results for some reason. So it goes to insert a\nduplicate value which somehow succeeds despite the unique index, but\nthen a reindex says it's a duplicate. 
Pretty weird.\n\nRegards,\nOmar\n\n\n", "msg_date": "Sun, 6 Jun 2021 03:54:48 -0700", "msg_from": "Omar Kilani <omar.kilani@gmail.com>", "msg_from_op": true, "msg_subject": "Strangeness with UNIQUE indexes and UTF-8" }, { "msg_contents": "On Sun, 2021-06-06 at 03:54 -0700, Omar Kilani wrote:\n> There seems to be a weird bug in Postgres (last tested 11.12) where it\n> allows an INSERT into a table with a UNIQUE / UNIQUE CONSTRAINT index\n> on a TEXT/VARCHAR when there's already a value present in that index,\n> but only for UTF-8 input.\n> \n> I just had this happen on our user table and it somehow made it so\n> that Postgres returned no results for *any* SELECT ... FROM x WHERE\n> unique_col = 'x', which unfortunately meant no one could login to our\n> service.\n> \n> I had to:\n> \n> SET enable_indexscan = off;\n> SET enable_bitmapscan = off;\n> \n> And then the data was returned properly.\n\nSounds like data corruption.\nREINDEX the index and see if that fixes the problem.\nTry to figure out the cause (bad hardware?).\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Sun, 06 Jun 2021 13:46:17 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Strangeness with UNIQUE indexes and UTF-8" }, { "msg_contents": "On Sun, 6 Jun 2021 at 22:55, Omar Kilani <omar.kilani@gmail.com> wrote:\n> There seems to be a weird bug in Postgres (last tested 11.12) where it\n> allows an INSERT into a table with a UNIQUE / UNIQUE CONSTRAINT index\n> on a TEXT/VARCHAR when there's already a value present in that index,\n> but only for UTF-8 input.\n\nIt would be good to know a bit about the history of this instance.\nWas the initdb done on 11.12? Or some other 11.x version? 
Or was this\ninstance pg_upgraded from some previous major version?\n\nThere was a bug fixed in 11.11 that caused CREATE INDEX CONCURRENTLY\npossibly to miss rows that were inserted by a prepared transaction.\nWas this index created with CREATE INDEX CONCURRENTLY?\n\n> What I sort of don't get is... before we insert anything into these\n> tables, we always check to see if a value already exists. And Postgres\n> must be returning no results for some reason. So it goes to insert a\n> duplicate value which somehow succeeds despite the unique index, but\n> then a reindex says it's a duplicate. Pretty weird.\n\nThat does not seem that weird to me. If the index is corrupt and\nfails to find the record you're searching for using a scan of that\nindex, then it seems pretty likely that the record would also not be\nfound in the index when doing the INSERT.\n\nThe reindex will catch the problem because it uses the heap as the\nsource of truth to build the new index. It simply sounds like there\nare two records in the heap because a subsequent one was added and a\ncorrupt index didn't find the original record either because it was\neither missing from the index or because the index was corrupt in some\nway that the record was just not found.\n\nDavid\n\n\n", "msg_date": "Sun, 6 Jun 2021 23:59:39 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Strangeness with UNIQUE indexes and UTF-8" }, { "msg_contents": "Hey David,\n\nHmmm… it wasn’t init on 11.x.\n\nThis is a very old database (2004) that has moved forward via pg_upgrade. I\nthink we did a pg_dump and pg_restore every time we hit some major\nincompatibility like float vs integer date times.\n\nThe current DB started as a pg_restore into 10.x. Then was pg_upgrade’d to\n11.2. Has been minor upgraded a bunch of times since and we upgraded to\n11.12… just before this happened.\n\nAs in, we just restarted our cluster on 11.12. 
Everything was working fine\n(and the index was working) and then the INSERT happened.\n\nI have checksums on and I did a VACUUM on the table just before the REINDEX.\n\nI’m 99.99999% confident the hardware isn’t bad.\n\nThe only time we’ve seen this is with Unicode input.\n\nRegards,\nOmar\n\nOn Sun, Jun 6, 2021 at 4:59 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Sun, 6 Jun 2021 at 22:55, Omar Kilani <omar.kilani@gmail.com> wrote:\n> > There seems to be a weird bug in Postgres (last tested 11.12) where it\n> > allows an INSERT into a table with a UNIQUE / UNIQUE CONSTRAINT index\n> > on a TEXT/VARCHAR when there's already a value present in that index,\n> > but only for UTF-8 input.\n>\n> It would be good to know a bit about the history of this instance.\n> Was the initdb done on 11.12? Or some other 11.x version? Or was this\n> instance pg_upgraded from some previous major version?\n>\n> There was a bug fixed in 11.11 that caused CREATE INDEX CONCURRENTLY\n> possibly to miss rows that were inserted by a prepared transaction.\n> Was this index created with CREATE INDEX CONCURRENTLY?\n>\n> > What I sort of don't get is... before we insert anything into these\n> > tables, we always check to see if a value already exists. And Postgres\n> > must be returning no results for some reason. So it goes to insert a\n> > duplicate value which somehow succeeds despite the unique index, but\n> > then a reindex says it's a duplicate. Pretty weird.\n>\n> That does not seem that weird to me. If the index is corrupt and\n> fails to find the record you're searching for using a scan of that\n> index, then it seems pretty likely that the record would also not be\n> found in the index when doing the INSERT.\n>\n> The reindex will catch the problem because it uses the heap as the\n> source of truth to build the new index. 
It simply sounds like there\n> are two records in the heap because a subsequent one was added and a\n> corrupt index didn't find the original record either because it was\n> either missing from the index or because the index was corrupt in some\n> way that the record was just not found.\n>\n> David\n>\n\nHey David,Hmmm… it wasn’t init on 11.x.This is a very old database (2004) that has moved forward via pg_upgrade. I think we did a pg_dump and pg_restore every time we hit some major incompatibility like float vs integer date times.The current DB started as a pg_restore into 10.x. Then was pg_upgrade’d to 11.2. Has been minor upgraded a bunch of times since and we upgraded to 11.12… just before this happened.As in, we just restarted our cluster on 11.12. Everything was working fine (and the index was working) and then the INSERT happened.I have checksums on and I did a VACUUM on the table just before the REINDEX.I’m 99.99999% confident the hardware isn’t bad.The only time we’ve seen this is with Unicode input.Regards,OmarOn Sun, Jun 6, 2021 at 4:59 AM David Rowley <dgrowleyml@gmail.com> wrote:On Sun, 6 Jun 2021 at 22:55, Omar Kilani <omar.kilani@gmail.com> wrote:\n> There seems to be a weird bug in Postgres (last tested 11.12) where it\n> allows an INSERT into a table with a UNIQUE / UNIQUE CONSTRAINT index\n> on a TEXT/VARCHAR when there's already a value present in that index,\n> but only for UTF-8 input.\n\nIt would be good to know a bit about the history of this instance.\nWas the initdb done on 11.12? Or some other 11.x version?  Or was this\ninstance pg_upgraded from some previous major version?\n\nThere was a bug fixed in 11.11 that caused CREATE INDEX CONCURRENTLY\npossibly to miss rows that were inserted by a prepared transaction.\nWas this index created with CREATE INDEX CONCURRENTLY?\n\n> What I sort of don't get is... before we insert anything into these\n> tables, we always check to see if a value already exists. 
And Postgres\n> must be returning no results for some reason. So it goes to insert a\n> duplicate value which somehow succeeds despite the unique index, but\n> then a reindex says it's a duplicate. Pretty weird.\n\nThat does not seem that weird to me.  If the index is corrupt and\nfails to find the record you're searching for using a scan of that\nindex, then it seems pretty likely that the record would also not be\nfound in the index when doing the INSERT.\n\nThe reindex will catch the problem because it uses the heap as the\nsource of truth to build the new index.  It simply sounds like there\nare two records in the heap because a subsequent one was added and a\ncorrupt index didn't find the original record either because it was\neither missing from the index or because the index was corrupt in some\nway that the record was just not found.\n\nDavid", "msg_date": "Sun, 6 Jun 2021 06:41:52 -0700", "msg_from": "Omar Kilani <omar.kilani@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strangeness with UNIQUE indexes and UTF-8" }, { "msg_contents": "I just remembered, I have… many… snapshots of the on disk data prior to\nstarting 11.12.\n\nIt should be possible to start at a specific point in time with the index\nin the state it was in prior to the insert.\n\nHow do I prove or disprove… hardware issues?\n\nAlso… I ran the select on 3 of our standby servers and they all had the\nsame issue. Presumably the index would be “corrupt” in the same way across\nmultiple very different machines from WAL apply?\n\nOn Sun, Jun 6, 2021 at 6:41 AM Omar Kilani <omar.kilani@gmail.com> wrote:\n\n> Hey David,\n>\n> Hmmm… it wasn’t init on 11.x.\n>\n> This is a very old database (2004) that has moved forward via pg_upgrade.\n> I think we did a pg_dump and pg_restore every time we hit some major\n> incompatibility like float vs integer date times.\n>\n> The current DB started as a pg_restore into 10.x. Then was pg_upgrade’d to\n> 11.2. 
Has been minor upgraded a bunch of times since and we upgraded to\n> 11.12… just before this happened.\n>\n> As in, we just restarted our cluster on 11.12. Everything was working fine\n> (and the index was working) and then the INSERT happened.\n>\n> I have checksums on and I did a VACUUM on the table just before the\n> REINDEX.\n>\n> I’m 99.99999% confident the hardware isn’t bad.\n>\n> The only time we’ve seen this is with Unicode input.\n>\n> Regards,\n> Omar\n>\n> On Sun, Jun 6, 2021 at 4:59 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n>> On Sun, 6 Jun 2021 at 22:55, Omar Kilani <omar.kilani@gmail.com> wrote:\n>> > There seems to be a weird bug in Postgres (last tested 11.12) where it\n>> > allows an INSERT into a table with a UNIQUE / UNIQUE CONSTRAINT index\n>> > on a TEXT/VARCHAR when there's already a value present in that index,\n>> > but only for UTF-8 input.\n>>\n>> It would be good to know a bit about the history of this instance.\n>> Was the initdb done on 11.12? Or some other 11.x version? Or was this\n>> instance pg_upgraded from some previous major version?\n>>\n>> There was a bug fixed in 11.11 that caused CREATE INDEX CONCURRENTLY\n>> possibly to miss rows that were inserted by a prepared transaction.\n>> Was this index created with CREATE INDEX CONCURRENTLY?\n>>\n>> > What I sort of don't get is... before we insert anything into these\n>> > tables, we always check to see if a value already exists. And Postgres\n>> > must be returning no results for some reason. So it goes to insert a\n>> > duplicate value which somehow succeeds despite the unique index, but\n>> > then a reindex says it's a duplicate. Pretty weird.\n>>\n>> That does not seem that weird to me. 
If the index is corrupt and\n>> fails to find the record you're searching for using a scan of that\n>> index, then it seems pretty likely that the record would also not be\n>> found in the index when doing the INSERT.\n>>\n>> The reindex will catch the problem because it uses the heap as the\n>> source of truth to build the new index. It simply sounds like there\n>> are two records in the heap because a subsequent one was added and a\n>> corrupt index didn't find the original record either because it was\n>> either missing from the index or because the index was corrupt in some\n>> way that the record was just not found.\n>>\n>> David\n>>\n>\n\nI just remembered, I have… many… snapshots of the on disk data prior to starting 11.12.It should be possible to start at a specific point in time with the index in the state it was in prior to the insert.How do I prove or disprove… hardware issues?Also… I ran the select on 3 of our standby servers and they all had the same issue. Presumably the index would be “corrupt” in the same way across multiple very different machines from WAL apply?On Sun, Jun 6, 2021 at 6:41 AM Omar Kilani <omar.kilani@gmail.com> wrote:Hey David,Hmmm… it wasn’t init on 11.x.This is a very old database (2004) that has moved forward via pg_upgrade. I think we did a pg_dump and pg_restore every time we hit some major incompatibility like float vs integer date times.The current DB started as a pg_restore into 10.x. Then was pg_upgrade’d to 11.2. Has been minor upgraded a bunch of times since and we upgraded to 11.12… just before this happened.As in, we just restarted our cluster on 11.12. 
Everything was working fine (and the index was working) and then the INSERT happened.I have checksums on and I did a VACUUM on the table just before the REINDEX.I’m 99.99999% confident the hardware isn’t bad.The only time we’ve seen this is with Unicode input.Regards,OmarOn Sun, Jun 6, 2021 at 4:59 AM David Rowley <dgrowleyml@gmail.com> wrote:On Sun, 6 Jun 2021 at 22:55, Omar Kilani <omar.kilani@gmail.com> wrote:\n> There seems to be a weird bug in Postgres (last tested 11.12) where it\n> allows an INSERT into a table with a UNIQUE / UNIQUE CONSTRAINT index\n> on a TEXT/VARCHAR when there's already a value present in that index,\n> but only for UTF-8 input.\n\nIt would be good to know a bit about the history of this instance.\nWas the initdb done on 11.12? Or some other 11.x version?  Or was this\ninstance pg_upgraded from some previous major version?\n\nThere was a bug fixed in 11.11 that caused CREATE INDEX CONCURRENTLY\npossibly to miss rows that were inserted by a prepared transaction.\nWas this index created with CREATE INDEX CONCURRENTLY?\n\n> What I sort of don't get is... before we insert anything into these\n> tables, we always check to see if a value already exists. And Postgres\n> must be returning no results for some reason. So it goes to insert a\n> duplicate value which somehow succeeds despite the unique index, but\n> then a reindex says it's a duplicate. Pretty weird.\n\nThat does not seem that weird to me.  If the index is corrupt and\nfails to find the record you're searching for using a scan of that\nindex, then it seems pretty likely that the record would also not be\nfound in the index when doing the INSERT.\n\nThe reindex will catch the problem because it uses the heap as the\nsource of truth to build the new index.  
It simply sounds like there\nare two records in the heap because a subsequent one was added and a\ncorrupt index didn't find the original record either because it was\neither missing from the index or because the index was corrupt in some\nway that the record was just not found.\n\nDavid", "msg_date": "Sun, 6 Jun 2021 06:53:14 -0700", "msg_from": "Omar Kilani <omar.kilani@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strangeness with UNIQUE indexes and UTF-8" }, { "msg_contents": "Omar Kilani <omar.kilani@gmail.com> writes:\n> This is a very old database (2004) that has moved forward via pg_upgrade. I\n> think we did a pg_dump and pg_restore every time we hit some major\n> incompatibility like float vs integer date times.\n\nIf it's that old, it's likely also survived multiple OS upgrades.\nIt seems clear that this index has been corrupt for awhile, and\nI'm wondering whether the corruption was brought on by an OS\nlocale change. There's useful info at\n\nhttps://wiki.postgresql.org/wiki/Locale_data_changes\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 06 Jun 2021 11:08:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strangeness with UNIQUE indexes and UTF-8" }, { "msg_contents": "I was able to restore a snapshot where the database was fully consistent.\n\n2021-06-06 14:52:34.748 UTC [0/48529] LOG: database system was\ninterrupted while in recovery at log time 2021-06-06 06:57:27 UTC\n2021-06-06 14:52:34.748 UTC [0/48529] HINT: If this has occurred more\nthan once some data might be corrupted and you might need to choose an\nearlier recovery target.\n2021-06-06 14:52:40.847 UTC [0/48529] LOG: database system was not\nproperly shut down; automatic recovery in progress\n2021-06-06 14:52:40.856 UTC [0/48529] LOG: invalid record length at\n8849/32000098: wanted 24, got 0\n2021-06-06 14:52:40.856 UTC [0/48529] LOG: redo is not required\n2021-06-06 14:52:40.865 UTC [0/48529] LOG: checkpoint 
starting:\nend-of-recovery immediate\n2021-06-06 14:52:40.909 UTC [0/48529] LOG: checkpoint complete: wrote\n0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled;\nwrite=0.006 s, sync=0.001 s, total=0.050 s; sync files=0,\nlongest=0.000 s, average=0.000 s; distance=0 kB, estimate=0 kB\n2021-06-06 14:52:40.964 UTC [0/48527] LOG: database system is ready\nto accept connections\n\nI'm running pg_verify_checksums on the cluster, but the database is\nmany TB so it'll be a bit.\n\nI missed one of your questions before -- no, it wasn't created with\nCREATE INDEX CONCURRENTLY. That index was created by 11.2's pg_restore\nroughly 2 years ago.\n\nOn Sun, Jun 6, 2021 at 6:53 AM Omar Kilani <omar.kilani@gmail.com> wrote:\n>\n> I just remembered, I have… many… snapshots of the on disk data prior to starting 11.12.\n>\n> It should be possible to start at a specific point in time with the index in the state it was in prior to the insert.\n>\n> How do I prove or disprove… hardware issues?\n>\n> Also… I ran the select on 3 of our standby servers and they all had the same issue. Presumably the index would be “corrupt” in the same way across multiple very different machines from WAL apply?\n>\n> On Sun, Jun 6, 2021 at 6:41 AM Omar Kilani <omar.kilani@gmail.com> wrote:\n>>\n>> Hey David,\n>>\n>> Hmmm… it wasn’t init on 11.x.\n>>\n>> This is a very old database (2004) that has moved forward via pg_upgrade. I think we did a pg_dump and pg_restore every time we hit some major incompatibility like float vs integer date times.\n>>\n>> The current DB started as a pg_restore into 10.x. Then was pg_upgrade’d to 11.2. Has been minor upgraded a bunch of times since and we upgraded to 11.12… just before this happened.\n>>\n>> As in, we just restarted our cluster on 11.12. 
Everything was working fine (and the index was working) and then the INSERT happened.\n>>\n>> I have checksums on and I did a VACUUM on the table just before the REINDEX.\n>>\n>> I’m 99.99999% confident the hardware isn’t bad.\n>>\n>> The only time we’ve seen this is with Unicode input.\n>>\n>> Regards,\n>> Omar\n>>\n>> On Sun, Jun 6, 2021 at 4:59 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>>>\n>>> On Sun, 6 Jun 2021 at 22:55, Omar Kilani <omar.kilani@gmail.com> wrote:\n>>> > There seems to be a weird bug in Postgres (last tested 11.12) where it\n>>> > allows an INSERT into a table with a UNIQUE / UNIQUE CONSTRAINT index\n>>> > on a TEXT/VARCHAR when there's already a value present in that index,\n>>> > but only for UTF-8 input.\n>>>\n>>> It would be good to know a bit about the history of this instance.\n>>> Was the initdb done on 11.12? Or some other 11.x version? Or was this\n>>> instance pg_upgraded from some previous major version?\n>>>\n>>> There was a bug fixed in 11.11 that caused CREATE INDEX CONCURRENTLY\n>>> possibly to miss rows that were inserted by a prepared transaction.\n>>> Was this index created with CREATE INDEX CONCURRENTLY?\n>>>\n>>> > What I sort of don't get is... before we insert anything into these\n>>> > tables, we always check to see if a value already exists. And Postgres\n>>> > must be returning no results for some reason. So it goes to insert a\n>>> > duplicate value which somehow succeeds despite the unique index, but\n>>> > then a reindex says it's a duplicate. Pretty weird.\n>>>\n>>> That does not seem that weird to me. If the index is corrupt and\n>>> fails to find the record you're searching for using a scan of that\n>>> index, then it seems pretty likely that the record would also not be\n>>> found in the index when doing the INSERT.\n>>>\n>>> The reindex will catch the problem because it uses the heap as the\n>>> source of truth to build the new index. 
It simply sounds like there\n>>> are two records in the heap because a subsequent one was added and a\n>>> corrupt index didn't find the original record either because it was\n>>> either missing from the index or because the index was corrupt in some\n>>> way that the record was just not found.\n>>>\n>>> David\n\n\n", "msg_date": "Sun, 6 Jun 2021 08:08:51 -0700", "msg_from": "Omar Kilani <omar.kilani@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strangeness with UNIQUE indexes and UTF-8" }, { "msg_contents": "Hey Tom,\n\nThe database was pg_dump'ed out of 10.4 and pg_restore'd into 11.2 on\na RHEL 7.x machine.\n\nThe only other upgrade has been to RHEL 8.x. So the locale data change\nmight have changed something -- thanks for that information.\n\nWe've seen this issue on a different table before upgrading to RHEL\n8.x, though. And only on that table, because it's user-generated and\ngets a bunch of Unicode data input into a UNIQUE index.\n\nI'm not saying the index isn't corrupt as in something's not wrong\nwith it. I'm saying that during normal Postgres operation the index\nhas somehow got itself into this state, and I'm fairly sure it's not\nthe hardware.\n\nThanks again.\n\nRegards,\nOmar\n\nOn Sun, Jun 6, 2021 at 8:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Omar Kilani <omar.kilani@gmail.com> writes:\n> > This is a very old database (2004) that has moved forward via pg_upgrade. I\n> > think we did a pg_dump and pg_restore every time we hit some major\n> > incompatibility like float vs integer date times.\n>\n> If it's that old, it's likely also survived multiple OS upgrades.\n> It seems clear that this index has been corrupt for awhile, and\n> I'm wondering whether the corruption was brought on by an OS\n> locale change. 
There's useful info at\n>\n> https://wiki.postgresql.org/wiki/Locale_data_changes\n>\n> regards, tom lane\n\n\n", "msg_date": "Sun, 6 Jun 2021 08:18:51 -0700", "msg_from": "Omar Kilani <omar.kilani@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strangeness with UNIQUE indexes and UTF-8" }, { "msg_contents": "Hmmm.\n\nIs it possible that in some version of 11.x, the corrupt index stopped\n\"working\"? As in, yes, it may have been corrupt but still returned\ndata on version 11.y, whereas on version 11.z it's no longer working\nand returns nothing?\n\nDavid mentions that change in 11.11...?\n\nI guess I can try some older versions of 11.x on this cluster for\ncompleteness' sake.\n\nRegards,\nOmar\n\nOn Sun, Jun 6, 2021 at 8:18 AM Omar Kilani <omar.kilani@gmail.com> wrote:\n>\n> Hey Tom,\n>\n> The database was pg_dump'ed out of 10.4 and pg_restore'd into 11.2 on\n> a RHEL 7.x machine.\n>\n> The only other upgrade has been to RHEL 8.x. So the locale data change\n> might have changed something -- thanks for that information.\n>\n> We've seen this issue on a different table before upgrading to RHEL\n> 8.x, though. And only on that table, because it's user-generated and\n> gets a bunch of Unicode data input into a UNIQUE index.\n>\n> I'm not saying the index isn't corrupt as in something's not wrong\n> with it. I'm saying that during normal Postgres operation the index\n> has somehow got itself into this state, and I'm fairly sure it's not\n> the hardware.\n>\n> Thanks again.\n>\n> Regards,\n> Omar\n>\n> On Sun, Jun 6, 2021 at 8:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Omar Kilani <omar.kilani@gmail.com> writes:\n> > > This is a very old database (2004) that has moved forward via pg_upgrade. 
I\n> > > think we did a pg_dump and pg_restore every time we hit some major\n> > > incompatibility like float vs integer date times.\n> >\n> > If it's that old, it's likely also survived multiple OS upgrades.\n> > It seems clear that this index has been corrupt for awhile, and\n> > I'm wondering whether the corruption was brought on by an OS\n> > locale change. There's useful info at\n> >\n> > https://wiki.postgresql.org/wiki/Locale_data_changes\n> >\n> > regards, tom lane\n\n\n", "msg_date": "Sun, 6 Jun 2021 08:38:33 -0700", "msg_from": "Omar Kilani <omar.kilani@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strangeness with UNIQUE indexes and UTF-8" }, { "msg_contents": "On Sun, Jun 6, 2021 at 5:19 PM Omar Kilani <omar.kilani@gmail.com> wrote:\n>\n> Hey Tom,\n>\n> The database was pg_dump'ed out of 10.4 and pg_restore'd into 11.2 on\n> a RHEL 7.x machine.\n>\n> The only other upgrade has been to RHEL 8.x. So the locale data change\n> might have changed something -- thanks for that information.\n\nThere is no might -- if you upgraded from RHEL 7 to RHEL 8 without\ndoing a reindex or a dump/reload there, you are pretty much guaranteed\nto have corrupt text indexes from that. Regardless of PostgreSQL\nversions, this was about the RHEL upgrade not the Postgres one.\n\n\n\n> We've seen this issue on a different table before upgrading to RHEL\n> 8.x, though. And only on that table, because it's user-generated and\n> gets a bunch of Unicode data input into a UNIQUE index.\n\nThis indicates you may have more than one problem.\n\nBut that doesn't mean it's not both, sadly.\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Sun, 6 Jun 2021 18:14:54 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Strangeness with UNIQUE indexes and UTF-8" }, { "msg_contents": "Hey Magnus,\n\nHmmm, okay -- that's unfortunate. 
:)\n\nI apparently wrote a script in 2017 to find duplicates from this issue\non the other table and fix them up. Maybe a similar locale thing\nhappened back then?\n\nAnyway, for what it's worth:\n\nChecksum scan completed\nData checksum version: 1\nFiles scanned: 7068\nBlocks scanned: 524565247\nBad checksums: 0\n\nRegards,\nOmar\n\nOn Sun, Jun 6, 2021 at 9:15 AM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Sun, Jun 6, 2021 at 5:19 PM Omar Kilani <omar.kilani@gmail.com> wrote:\n> >\n> > Hey Tom,\n> >\n> > The database was pg_dump'ed out of 10.4 and pg_restore'd into 11.2 on\n> > a RHEL 7.x machine.\n> >\n> > The only other upgrade has been to RHEL 8.x. So the locale data change\n> > might have changed something -- thanks for that information.\n>\n> There is no might -- if you upgraded from RHEL 7 to RHEL 8 without\n> doing a reindex or a dump/reload there, you are pretty much guaranteed\n> to have corrupt text indexes from that. Regardless of PostgreSQL\n> versions, this was about the RHEL upgrade not the Postgres one.\n>\n>\n>\n> > We've seen this issue on a different table before upgrading to RHEL\n> > 8.x, though. And only on that table, because it's user-generated and\n> > gets a bunch of Unicode data input into a UNIQUE index.\n>\n> This indicates you may have more than one problem.\n>\n> But that doesn't mean it's not both, sadly.\n>\n>\n> --\n> Magnus Hagander\n> Me: https://www.hagander.net/\n> Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Sun, 6 Jun 2021 09:24:52 -0700", "msg_from": "Omar Kilani <omar.kilani@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strangeness with UNIQUE indexes and UTF-8" }, { "msg_contents": "On 06/06/21 11:08, Omar Kilani wrote:\n> I'm running pg_verify_checksums on the cluster, but the database is\n> many TB so it'll be a bit.\n\nIndex corruption because of a locale change would not be the sort of thing\nchecksums would detect. 
Entries would be put into the index in the correct\norder according to the old collation. The same entries can be still there,\nintact, just fine according to the checksums, only the new collation would\nhave put them in a different order. Index search algorithms that are fast,\nbecause they assume the entries to be correctly ordered, will skip regions\nof the index where the desired key \"couldn't possibly be\", and if that's\nwhere the old ordering put it, it won't be found.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sun, 6 Jun 2021 12:36:26 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Strangeness with UNIQUE indexes and UTF-8" }, { "msg_contents": "Hey Chap,\n\nYeah, I understand. Just ruling out the bad hardware scenario.\n\nPlus the next person to Google this will hopefully stumble upon this\nthread. :)\n\nRegards,\nOmar\n\nOn Sun, Jun 6, 2021 at 9:36 AM Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 06/06/21 11:08, Omar Kilani wrote:\n> > I'm running pg_verify_checksums on the cluster, but the database is\n> > many TB so it'll be a bit.\n>\n> Index corruption because of a locale change would not be the sort of thing\n> checksums would detect. Entries would be put into the index in the correct\n> order according to the old collation. The same entries can be still there,\n> intact, just fine according to the checksums, only the new collation would\n> have put them in a different order. Index search algorithms that are fast,\n> because they assume the entries to be correctly ordered, will skip regions\n> of the index where the desired key \"couldn't possibly be\", and if that's\n> where the old ordering put it, it won't be found.\n>\n> Regards,\n> -Chap\n>\n
", "msg_date": "Sun, 6 Jun 2021 09:38:29 -0700", "msg_from": "Omar Kilani <omar.kilani@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strangeness with UNIQUE indexes and UTF-8" }, { "msg_contents": "On Sun, Jun 06, 2021 at 03:54:48AM -0700, Omar Kilani wrote:\n> What I sort of don't get is... before we insert anything into these\n> tables, we always check to see if a value already exists. And Postgres\n> must be returning no results for some reason. So it goes to insert a\n> duplicate value which somehow succeeds despite the unique index, but\n> then a reindex says it's a duplicate. Pretty weird.\n\nIn addition to the other issues, this is racy.\n\nYou 1) check if a key exists, and if not then 2) INSERT (or maybe you UPDATE if\nit did exist).\n\nhttps://en.wikipedia.org/wiki/Time-of-check_to_time-of-use\n\nMaybe you'll say that \"this process only runs once\", but it's not hard to\nimagine that might be violated. For example, if you restart a multi-threaded\nprocess, does the parent make sure that the child processes die before itself\ndying? 
Do you create a pidfile, and do you make sure the children are dead\nbefore removing the pidfile ?\n\nThe right way to do this since v9.6 is INSERT ON CONFLICT, which is also more\nefficient in a couple ways.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 6 Jun 2021 16:03:46 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Strangeness with UNIQUE indexes and UTF-8" }, { "msg_contents": "We do use ON CONFLICT… it doesn’t work because the index is both “good” and\n“bad” at the same time.\n\nOn Sun, Jun 6, 2021 at 2:03 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Sun, Jun 06, 2021 at 03:54:48AM -0700, Omar Kilani wrote:\n> > What I sort of don't get is... before we insert anything into these\n> > tables, we always check to see if a value already exists. And Postgres\n> > must be returning no results for some reason. So it goes to insert a\n> > duplicate value which somehow succeeds despite the unique index, but\n> > then a reindex says it's a duplicate. Pretty weird.\n>\n> In addition to the other issues, this is racy.\n>\n> You 1) check if a key exists, and if not then 2) INSERT (or maybe you\n> UPDATE if\n> it did exist).\n>\n> https://en.wikipedia.org/wiki/Time-of-check_to_time-of-use\n>\n> Maybe you'll say that \"this process only runs once\", but it's not hard to\n> imagine that might be violated. For example, if you restart a\n> multi-threaded\n> process, does the parent make sure that the child processes die before\n> itself\n> dying? 
Do you create a pidfile, and do you make sure the children are dead\n> before removing the pidfile ?\n>\n> The right way to do this since v9.6 is INSERT ON CONFLICT, which is also\n> more\n> efficient in a couple ways.\n>\n> --\n> Justin\n>\n", "msg_date": "Sun, 6 Jun 2021 14:06:04 -0700", "msg_from": "Omar Kilani <omar.kilani@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strangeness with UNIQUE indexes and UTF-8" }, { "msg_contents": "I mean, maybe it's because I've been awake since... 
7am yesterday, but\nit seems to me that if Postgres fails catastrophically silently (and I\nwould say \"it looks like all your data in this table disappeared\nbecause of some arcane locale / btree issue that no one except Tom\nLane even knows exists\" -- see the replies about hardware issues and\nON CONFLICT as an example) -- then maybe that is... not good, and\nPostgres shouldn't do that?\n\nNot only that, it's only indices which have non-ASCII or whatever in\nthem that silently fail, so it's like 95% of your indices work just\nfine, but the ones that don't... look fine. They're not corrupt on\ndisk, they have their full size, etc.\n\nHow is anyone supposed to know about this issue? I've been using\nPostgres since 1999, built the Postgres website, worked with Neil and\nGavin on Postgres, submitted patches to Postgres and various\nPostgres-related projects, and this is the first time I've become\naware of it. I mean, maybe I'm dumb, and... fine. But your average\nuser is going to have no idea about this.\n\nWhy can't some \"locale signature\" or something be encoded into the\nindex so Postgres can at least warn you? Or not use the messed up\nindex altogether instead of silently returning no data?\n\nOn Sun, Jun 6, 2021 at 2:06 PM Omar Kilani <omar.kilani@gmail.com> wrote:\n>\n> We do use ON CONFLICT… it doesn’t work because the index is both “good” and “bad” at the same time.\n>\n> On Sun, Jun 6, 2021 at 2:03 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>\n>> On Sun, Jun 06, 2021 at 03:54:48AM -0700, Omar Kilani wrote:\n>> > What I sort of don't get is... before we insert anything into these\n>> > tables, we always check to see if a value already exists. And Postgres\n>> > must be returning no results for some reason. So it goes to insert a\n>> > duplicate value which somehow succeeds despite the unique index, but\n>> > then a reindex says it's a duplicate. 
Pretty weird.\n>>\n>> In addition to the other issues, this is racy.\n>>\n>> You 1) check if a key exists, and if not then 2) INSERT (or maybe you UPDATE if\n>> it did exist).\n>>\n>> https://en.wikipedia.org/wiki/Time-of-check_to_time-of-use\n>>\n>> Maybe you'll say that \"this process only runs once\", but it's not hard to\n>> imagine that might be violated. For example, if you restart a multi-threaded\n>> process, does the parent make sure that the child processes die before itself\n>> dying? Do you create a pidfile, and do you make sure the children are dead\n>> before removing the pidfile ?\n>>\n>> The right way to do this since v9.6 is INSERT ON CONFLICT, which is also more\n>> efficient in a couple ways.\n>>\n>> --\n>> Justin\n\n\n", "msg_date": "Sun, 6 Jun 2021 14:20:10 -0700", "msg_from": "Omar Kilani <omar.kilani@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strangeness with UNIQUE indexes and UTF-8" }, { "msg_contents": "Omar Kilani <omar.kilani@gmail.com> writes:\n> How is anyone supposed to know about this issue?\n\nWe're working on infrastructure to help detect OS locale changes,\nbut it's not shipped yet.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 06 Jun 2021 17:29:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strangeness with UNIQUE indexes and UTF-8" }, { "msg_contents": "On Sun, Jun 6, 2021 at 11:20 PM Omar Kilani <omar.kilani@gmail.com> wrote:\n>\n> I mean, maybe it's because I've been awake since... 7am yesterday, but\n> it seems to me that if Postgres fails catastrophically silently (and I\n> would say \"it looks like all your data in this table disappeared\n> because of some arcane locale / btree issue that no one except Tom\n> Lane even knows exists\" -- see the replies about hardware issues and\n> ON CONFLICT as an example) -- then maybe that is... 
not good, and\n> Postgres shouldn't do that?\n\nIt is most definitely not an \"arcane issue no one except Tom lane even\nknows exists\". I would assume most people who work with consulting or\nsupport around PostgreSQL know it exists, because some of their\ncustomers have hit it :/\n\nI think it's more in the other direction -- those people are more\nlikely to dismiss that issue as \"the person reporting this will\nalready have checked this, it must be something else\"...\n\n\n\n> Not only that, it's only indices which have non-ASCII or whatever in\n> them that silently fail, so it's like 95% of your indices work just\n> fine, but the ones that don't... look fine. They're not corrupt on\n> disk, they have their full size, etc.\n\nNo it's not. ASCII will also fail in many cases. Did you read the page\nthat you were linked to? It even includes an example of why ASCII\ncases will also fail.\n\nIt's only non-text indexes that are \"safe\".\n\n\n> How is anyone supposed to know about this issue? I've been using\n> Postgres since 1999, built the Postgres website, worked with Neil and\n> Gavin on Postgres, submitted patches to Postgres and various\n> Postgres-related projects, and this is the first time I've become\n> aware of it. I mean, maybe I'm dumb, and... fine. But your average\n> user is going to have no idea about this.\n\nThis problem has been around before, just usually doesn't affect the\nEnglish locale. Surely if you've spent that much time around Postgres\nand in the community you must've heard about it before?\n\nAnd this particular issue has been written about numerous times, which\nhas been published through the postgres website and blog aggregators.\n\nIt is definitely a weakness in how PostgreSQL does things, but it's a\npretty well known weakness by now.\n\n\n> Why can't some \"locale signature\" or something be encoded into the\n> index so Postgres can at least warn you? 
Or not use the messed up\n> index altogether instead of silently returning no data?\n\nIf you use ICU for your text indexes, it does exactly that. The page\nat https://www.postgresql.org/docs/13/sql-altercollation.html shows\nyou examples of what would happen in that case. (This page also\ndocuments that there is no version tracking for the built-in\ncollations, but I definitely agree that's pretty well hidden-away by\nbeing on the reference page of alter collation..)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 7 Jun 2021 15:13:46 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Strangeness with UNIQUE indexes and UTF-8" } ]
[ { "msg_contents": "An internal instance was rejecting connections with \"too many clients\".\nI found a bunch of processes waiting on a futex and I was going to upgrade the\nkernel (3.10.0-514) and dismiss the issue.\n\nHowever, I also found an autovacuum chewing 100% CPU, and it appears the\nproblem is actually because autovacuum has locked a page of pg-statistic, and\nevery other process then gets stuck waiting in the planner. I checked a few\nand found these:\n\n#13 0x0000000000961908 in SearchSysCache3 (cacheId=cacheId@entry=59, key1=key1@entry=2610, key2=key2@entry=2, key3=key3@entry=0) at syscache.c:1156\n\nAs for the autovacuum:\n\n$ ps -wwf 18950\nUID PID PPID C STIME TTY STAT TIME CMD\npostgres 18950 7179 93 Jun04 ? ts 2049:20 postgres: autovacuum worker ts\n\n(gdb)\n#0 0x00000000004f995c in heap_prune_satisfies_vacuum (prstate=prstate@entry=0x7ffe7a0cd0c0, tup=tup@entry=0x7ffe7a0cce50, buffer=buffer@entry=14138) at pruneheap.c:423\n#1 0x00000000004fa154 in heap_prune_chain (prstate=0x7ffe7a0cd0c0, rootoffnum=11, buffer=14138) at pruneheap.c:644\n#2 heap_page_prune (relation=relation@entry=0x7f0349466d28, buffer=buffer@entry=14138, vistest=vistest@entry=0xe7bcc0 <GlobalVisCatalogRels>, old_snap_xmin=old_snap_xmin@entry=0,\n old_snap_ts=old_snap_ts@entry=0, report_stats=report_stats@entry=false, off_loc=<optimized out>, off_loc@entry=0x1d1b3fc) at pruneheap.c:278\n#3 0x00000000004fd9bf in lazy_scan_prune (vacrel=vacrel@entry=0x1d1b390, buf=buf@entry=14138, blkno=blkno@entry=75, page=page@entry=0x2aaab2089e00 \"G\\f\",\n vistest=vistest@entry=0xe7bcc0 <GlobalVisCatalogRels>, prunestate=prunestate@entry=0x7ffe7a0ced80) at vacuumlazy.c:1712\n#4 0x0000000000500263 in lazy_scan_heap (aggressive=<optimized out>, params=0x1c77b7c, vacrel=<optimized out>) at vacuumlazy.c:1347\n#5 heap_vacuum_rel (rel=0x7f0349466d28, params=0x1c77b7c, bstrategy=<optimized out>) at vacuumlazy.c:612\n#6 0x000000000067418a in table_relation_vacuum (bstrategy=<optimized out>, 
params=0x1c77b7c, rel=0x7f0349466d28) at ../../../src/include/access/tableam.h:1678\n#7 vacuum_rel (relid=2619, relation=<optimized out>, params=params@entry=0x1c77b7c) at vacuum.c:2001\n#8 0x000000000067556e in vacuum (relations=0x1cc5008, params=params@entry=0x1c77b7c, bstrategy=<optimized out>, bstrategy@entry=0x1c77400, isTopLevel=isTopLevel@entry=true) at vacuum.c:461\n#9 0x0000000000783c13 in autovacuum_do_vac_analyze (bstrategy=0x1c77400, tab=0x1c77b78) at autovacuum.c:3284\n#10 do_autovacuum () at autovacuum.c:2537\n#11 0x0000000000784073 in AutoVacWorkerMain (argv=0x0, argc=0) at autovacuum.c:1715\n#12 0x00000000007841c9 in StartAutoVacWorker () at autovacuum.c:1500\n#13 0x0000000000792b9c in StartAutovacuumWorker () at postmaster.c:5547\n#14 sigusr1_handler (postgres_signal_arg=<optimized out>) at postmaster.c:5251\n#15 <signal handler called>\n#16 0x00007f0346c56783 in __select_nocancel () from /lib64/libc.so.6\n#17 0x000000000048ee7d in ServerLoop () at postmaster.c:1709\n#18 0x0000000000793e98 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x1bed280) at postmaster.c:1417\n#19 0x0000000000491272 in main (argc=3, argv=0x1bed280) at main.c:209\n\nheap_page_prune() is being called repeatedly, with (I think) the same arguments.\n\n(gdb) c\nContinuing.\n\nBreakpoint 3, heap_page_prune (relation=relation@entry=0x7f0349466d28, buffer=buffer@entry=14138, vistest=vistest@entry=0xe7bcc0 <GlobalVisCatalogRels>, old_snap_xmin=old_snap_xmin@entry=0, \n old_snap_ts=old_snap_ts@entry=0, report_stats=report_stats@entry=false, off_loc=off_loc@entry=0x1d1b3fc) at pruneheap.c:225\n225 in pruneheap.c\n(gdb) \nContinuing.\n\nBreakpoint 3, heap_page_prune (relation=relation@entry=0x7f0349466d28, buffer=buffer@entry=14138, vistest=vistest@entry=0xe7bcc0 <GlobalVisCatalogRels>, old_snap_xmin=old_snap_xmin@entry=0, \n old_snap_ts=old_snap_ts@entry=0, report_stats=report_stats@entry=false, off_loc=off_loc@entry=0x1d1b3fc) at pruneheap.c:225\n225 in pruneheap.c\n(gdb) 
\nContinuing.\n\nBreakpoint 3, heap_page_prune (relation=relation@entry=0x7f0349466d28, buffer=buffer@entry=14138, vistest=vistest@entry=0xe7bcc0 <GlobalVisCatalogRels>, old_snap_xmin=old_snap_xmin@entry=0, \n old_snap_ts=old_snap_ts@entry=0, report_stats=report_stats@entry=false, off_loc=off_loc@entry=0x1d1b3fc) at pruneheap.c:225\n225 in pruneheap.c\n(gdb) \nContinuing.\n\nBreakpoint 3, heap_page_prune (relation=relation@entry=0x7f0349466d28, buffer=buffer@entry=14138, vistest=vistest@entry=0xe7bcc0 <GlobalVisCatalogRels>, old_snap_xmin=old_snap_xmin@entry=0, \n old_snap_ts=old_snap_ts@entry=0, report_stats=report_stats@entry=false, off_loc=off_loc@entry=0x1d1b3fc) at pruneheap.c:225\n225 in pruneheap.c\n\n(gdb) p *vacrel\n$3 = {rel = 0x7f0349466d28, indrels = 0x1d1b500, nindexes = 1, do_index_vacuuming = true, do_index_cleanup = true, do_failsafe = false, bstrategy = 0x1c77400, lps = 0x0, old_rel_pages = 80, \n old_live_tuples = 1101, relfrozenxid = 909081649, relminmxid = 53341561, OldestXmin = 913730329, FreezeLimit = 863730329, MultiXactCutoff = 48553302, relnamespace = 0x1d1b520 \"pg_catalog\", \n relname = 0x1d1b548 \"pg_statistic\", indname = 0x0, blkno = 75, offnum = 15, phase = VACUUM_ERRCB_PHASE_SCAN_HEAP, dead_tuples = 0x1ccef10, rel_pages = 85, scanned_pages = 76, \n pinskipped_pages = 0, frozenskipped_pages = 0, tupcount_pages = 76, pages_removed = 0, lpdead_item_pages = 65, nonempty_pages = 75, lock_waiter_detected = false, new_rel_tuples = 0, \n new_live_tuples = 0, indstats = 0x1d1b590, num_index_scans = 0, tuples_deleted = 757, lpdead_items = 1103, new_dead_tuples = 0, num_tuples = 973, live_tuples = 973}\n\n(gdb) p *rel\n$2 = {rd_node = {spcNode = 1663, dbNode = 16886, relNode = 107230415}, rd_smgr = 0x1d0a670, rd_refcnt = 1, rd_backend = -1, rd_islocaltemp = false, rd_isnailed = false, rd_isvalid = true, \n rd_indexvalid = true, rd_statvalid = false, rd_createSubid = 0, rd_newRelfilenodeSubid = 0, rd_firstRelfilenodeSubid = 0, rd_droppedSubid = 
0, rd_rel = 0x7f0349466f40, rd_att = 0x7f0349467058, \n rd_id = 2619, rd_lockInfo = {lockRelId = {relId = 2619, dbId = 16886}}, rd_rules = 0x0, rd_rulescxt = 0x0, trigdesc = 0x0, rd_rsdesc = 0x0, rd_fkeylist = 0x0, rd_fkeyvalid = false, \n rd_partkey = 0x0, rd_partkeycxt = 0x0, rd_partdesc = 0x0, rd_pdcxt = 0x0, rd_partdesc_nodetached = 0x0, rd_pddcxt = 0x0, rd_partdesc_nodetached_xmin = 0, rd_partcheck = 0x0, \n rd_partcheckvalid = false, rd_partcheckcxt = 0x0, rd_indexlist = 0x7f0349498e40, rd_pkindex = 2696, rd_replidindex = 0, rd_statlist = 0x0, rd_indexattr = 0x7f0349498ee8, \n rd_keyattr = 0x7f0349498e98, rd_pkattr = 0x7f0349498ec0, rd_idattr = 0x0, rd_pubactions = 0x0, rd_options = 0x0, rd_amhandler = 3, rd_tableam = 0x9ccfc0 <heapam_methods>, rd_index = 0x0, \n rd_indextuple = 0x0, rd_indexcxt = 0x0, rd_indam = 0x0, rd_opfamily = 0x0, rd_opcintype = 0x0, rd_support = 0x0, rd_supportinfo = 0x0, rd_indoption = 0x0, rd_indexprs = 0x0, rd_indpred = 0x0, \n rd_exclops = 0x0, rd_exclprocs = 0x0, rd_exclstrats = 0x0, rd_indcollation = 0x0, rd_opcoptions = 0x0, rd_amcache = 0x0, rd_fdwroutine = 0x0, rd_toastoid = 0, pgstat_info = 0x1c88940}\n\n(gdb) p *relation->rd_rel\n$8 = {oid = 2619, relname = {data = \"pg_statistic\", '\\000' <repeats 51 times>}, relnamespace = 11, reltype = 13029, reloftype = 0, relowner = 10, relam = 2, relfilenode = 107230415, \n reltablespace = 0, relpages = 80, reltuples = 1101, relallvisible = 11, reltoastrelid = 2840, relhasindex = true, relisshared = false, relpersistence = 112 'p', relkind = 114 'r', relnatts = 31, \n relchecks = 0, relhasrules = false, relhastriggers = false, relhassubclass = false, relrowsecurity = false, relforcerowsecurity = false, relispopulated = true, relreplident = 110 'n', \n relispartition = false, relrewrite = 0, relfrozenxid = 909081649, relminmxid = 53341561}\n\n(gdb) info locals\nrel = 0x7f0349466d28\noffnum = 4\nmaxoff = 21\nitemid = 0x2aaab2089e24\ntuple = {t_len = 564, t_self = {ip_blkid = {bi_hi = 0, 
bi_lo = 75}, ip_posid = 4}, t_tableOid = 2619, t_data = 0x2aaab208bbc8}\n\n(gdb) p *itemid\n$2 = {lp_off = 7624, lp_flags = 1, lp_len = 564}\n\nI'll leave the instance running for a little bit before restarting (or kill-9)\nin case someone requests more info.\n\nSee also:\nhttps://www.postgresql.org/message-id/2591376.1621196582@sss.pgh.pa.us\n\nThese commits may be relevant.\n\ncommit 3c3b8a4b26891892bccf3d220580a7f413c0b9ca\nAuthor: Peter Geoghegan <pg@bowt.ie>\nDate: Wed Apr 7 08:47:15 2021 -0700\n\n Truncate line pointer array during VACUUM.\n\ncommit 7ab96cf6b312cfcd79cdc1a69c6bdb75de0ed30f\nAuthor: Peter Geoghegan <pg@bowt.ie>\nDate: Tue Apr 6 07:49:39 2021 -0700\n\n Refactor lazy_scan_heap() loop.\n\ncommit 8523492d4e349c4714aa2ab0291be175a88cb4fc\nAuthor: Peter Geoghegan <pg@bowt.ie>\nDate: Tue Apr 6 08:49:22 2021 -0700\n\n Remove tupgone special case from vacuumlazy.c.\n\ncommit dc7420c2c9274a283779ec19718d2d16323640c0\nAuthor: Andres Freund <andres@anarazel.de>\nDate: Wed Aug 12 16:03:49 2020 -0700\n\n snapshot scalability: Don't compute global horizons while building snapshots.\n\n\n", "msg_date": "Sun, 6 Jun 2021 11:35:31 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Sun, 6 Jun 2021 at 18:35, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> An internal instance was rejecting connections with \"too many clients\".\n> I found a bunch of processes waiting on a futex and I was going to upgrade the\n> kernel (3.10.0-514) and dismiss the issue.\n>\n> However, I also found an autovacuum chewing 100% CPU, and it appears the\n> problem is actually because autovacuum has locked a page of pg-statistic, and\n> every other process then gets stuck waiting in the planner. 
I checked a few\n> and found these:\n\nMy suspicion is that for some tuple on that page\nHeapTupleSatisfiesVacuum() returns HEAPTUPLE_DEAD for a tuple that it\nthinks should have been cleaned up by heap_page_prune, but isn't. This\nwould result in an infinite loop in lazy_scan_prune where the\ncondition on vacuumlazy.c:1800 will always be true, but the retry will\nnot do the job it's expected to do.\n\nApart from reporting this suspicion, I sadly can't help you much\nfurther, as my knowledge and experience on vacuum and snapshot\nhorizons is limited and probably won't help you in this.\n\nI think it would be helpful for further debugging if we would have the\nstate of all the tuples on that page (well, the tuple headers with\ntheir transaction IDs and their line pointers), as that would help with\ndetermining if my suspicion could be correct.\n\n\nWith regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Sun, 6 Jun 2021 19:26:22 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> On Sun, 6 Jun 2021 at 18:35, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> However, I also found an autovacuum chewing 100% CPU, and it appears the\n>> problem is actually because autovacuum has locked a page of pg-statistic, and\n>> every other process then gets stuck waiting in the planner. I checked a few\n>> and found these:\n\n> My suspicion is that for some tuple on that page\n> HeapTupleSatisfiesVacuum() returns HEAPTUPLE_DEAD for a tuple that it\n> thinks should have been cleaned up by heap_page_prune, but isn't. 
This\n> would result in an infinite loop in lazy_scan_prune where the\n> condition on vacuumlazy.c:1800 will always be true, but the retry will\n> not do the job it's expected to do.\n\nSince Justin's got a debugger on the process already, it probably\nwouldn't be too hard to confirm or disprove that theory by stepping\nthrough the code.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 06 Jun 2021 13:59:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Sun, Jun 6, 2021 at 9:35 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I'll leave the instance running for a little bit before restarting (or kill-9)\n> in case someone requests more info.\n\nHow about dumping the page image out, and sharing it with the list?\nThis procedure should work fine from gdb:\n\nhttps://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD#Dumping_a_page_image_from_within_GDB\n\nI suggest that you dump the \"page\" pointer inside lazy_scan_prune(). I\nimagine that you have the instance already stuck in an infinite loop,\nso what we'll probably see from the page image is the page after the\nfirst prune and another no-progress prune.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 6 Jun 2021 11:00:38 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "Hi, \n\nOn Sun, Jun 6, 2021, at 10:59, Tom Lane wrote:\n> Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> > On Sun, 6 Jun 2021 at 18:35, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >> However, I also found an autovacuum chewing 100% CPU, and it appears the\n> >> problem is actually because autovacuum has locked a page of pg-statistic, and\n> >> every other process then gets stuck waiting in the planner. 
I checked a few\n> >> and found these:\n> \n> > My suspicion is that for some tuple on that page\n> > HeapTupleSatisfiesVacuum() returns HEAPTUPLE_DEAD for a tuple that it\n> > thinks should have been cleaned up by heap_page_prune, but isn't. This\n> > would result in an infinite loop in lazy_scan_prune where the\n> > condition on vacuumlazy.c:1800 will always be true, but the retry will\n> > not do the job it's expected to do.\n> \n> Since Justin's got a debugger on the process already, it probably\n> wouldn't be too hard to confirm or disprove that theory by stepping\n> through the code.\n\nIf that turns out to be the issue, it'd be good to check what prevents the tuple from being considered fully dead, by stepping through the visibility test...\n\nAndres\n\n\n", "msg_date": "Sun, 06 Jun 2021 11:01:54 -0700", "msg_from": "\"Andres Freund\" <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Sun, Jun 06, 2021 at 11:00:38AM -0700, Peter Geoghegan wrote:\n> On Sun, Jun 6, 2021 at 9:35 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I'll leave the instance running for a little bit before restarting (or kill-9)\n> > in case someone requests more info.\n> \n> How about dumping the page image out, and sharing it with the list?\n> This procedure should work fine from gdb:\n\nSorry, but I already killed the process to try to follow Matthias' suggestion.\nI have a core file from \"gcore\" but it looks like it's incomplete and the\naddress is now \"out of bounds\"...\n\n#2 0x00000000004fd9bf in lazy_scan_prune (vacrel=vacrel@entry=0x1d1b390, buf=buf@entry=14138, blkno=blkno@entry=75, page=page@entry=0x2aaab2089e00 <Address 0x2aaab2089e00 out of bounds>,\n\nI saved a copy of the datadir, but a manual \"vacuum\" doesn't trigger the\nproblem. So if Matthias' theory is right, it seems like there may be a race\ncondition. 
Maybe that goes without saying.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 6 Jun 2021 13:43:11 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Sun, Jun 6, 2021 at 11:43 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Sorry, but I already killed the process to try to follow Matthias' suggestion.\n> I have a core file from \"gcore\" but it looks like it's incomplete and the\n> address is now \"out of bounds\"...\n\nBased on what you said about ending up back in lazy_scan_prune()\nalone, I think he's right. That is, I agree that it's very likely that\nthe stuck VACUUM would not have become stuck had the \"goto retry on\nHEAPTUPLE_DEAD inside lazy_scan_prune\" thing not been added by commit\n8523492d4e3. But that in itself doesn't necessarily implicate commit\n8523492d4e3.\n\nThe interesting question is: Why doesn't heap_page_prune() ever agree\nwith HeapTupleSatisfiesVacuum() calls made from lazy_scan_prune(), no\nmatter how many times the call to heap_page_prune() is repeated? (It's\nrepeated to try to resolve the disagreement that aborted xacts can\nsometimes cause.)\n\nIf I had to guess I'd say that the underlying problem has something to\ndo with heap_prune_satisfies_vacuum() not agreeing with\nHeapTupleSatisfiesVacuum(), perhaps only when GlobalVisCatalogRels is\nused. 
But that's a pretty wild guess at this point.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 6 Jun 2021 12:03:53 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Sun, Jun 06, 2021 at 07:26:22PM +0200, Matthias van de Meent wrote:\n> I think it would be helpful for further debugging if we would have the\n> state of the all tuples on that page (well, the tuple headers with\n> their transactionids and their line pointers), as that would help with\n> determining if my suspicion could be correct.\n\nThis is the state of the page after I killed the cluster and started a\npostmaster on a copy of its datadir, with autovacuum=off:\n\nSELECT lp, lp_off, lp_flags, lp_len, t_xmin, t_xmax, t_field3, t_ctid, t_infomask2, t_infomask, t_hoff, t_bits, t_oid FROM heap_page_items(get_raw_page('pg_statistic', 75)) ;\n lp | lp_off | lp_flags | lp_len | t_xmin | t_xmax | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff | t_bits | t_oid \n----+--------+----------+--------+-----------+-----------+----------+---------+-------------+------------+--------+----------------------------------+-------\n 1 | 0 | 3 | 0 | | | | | | | | | \n 2 | 0 | 3 | 0 | | | | | | | | | \n 3 | 0 | 3 | 0 | | | | | | | | | \n 4 | 7624 | 1 | 564 | 913726913 | 913730328 | 0 | (83,19) | 31 | 8451 | 32 | 11111111111111111111101000100000 | \n 5 | 6432 | 1 | 1188 | 913726913 | 913730328 | 0 | (83,20) | 31 | 8451 | 32 | 11111111111111111111110100110000 | \n 6 | 6232 | 1 | 195 | 913726913 | 913730328 | 0 | (83,21) | 31 | 8451 | 32 | 11111111111111111111111000100000 | \n 7 | 6032 | 1 | 195 | 913726913 | 913730328 | 0 | (83,22) | 31 | 8451 | 32 | 11111111111111111111111000100000 | \n 8 | 5848 | 1 | 181 | 913726913 | 913730328 | 0 | (83,23) | 31 | 8451 | 32 | 11111111111111111111111000100000 | \n 9 | 5664 | 1 | 181 | 913726913 | 913730328 | 0 | (81,13) | 31 | 8451 | 32 | 
11111111111111111111111000100000 | \n 10 | 5488 | 1 | 176 | 913726913 | 913730328 | 0 | (81,14) | 31 | 8451 | 32 | 11111111111111111111111000100000 | \n 11 | 5312 | 1 | 176 | 913726913 | 913730328 | 0 | (82,13) | 31 | 8451 | 32 | 11111111111111111111111000100000 | \n 12 | 5128 | 1 | 181 | 913726913 | 913730328 | 0 | (79,57) | 31 | 8451 | 32 | 11111111111111111111111000100000 | \n 13 | 4952 | 1 | 176 | 913726913 | 913730328 | 0 | (80,25) | 31 | 8451 | 32 | 11111111111111111111111000100000 | \n 14 | 4776 | 1 | 176 | 913726913 | 913730328 | 0 | (80,26) | 31 | 8451 | 32 | 11111111111111111111111000100000 | \n 15 | 4600 | 1 | 176 | 913726913 | 913730328 | 0 | (84,1) | 31 | 8451 | 32 | 11111111111111111111111000100000 | \n 16 | 4424 | 1 | 176 | 913726913 | 913730328 | 0 | (84,2) | 31 | 8451 | 32 | 11111111111111111111111000100000 | \n 17 | 4248 | 1 | 176 | 913726913 | 913730328 | 0 | (84,3) | 31 | 8451 | 32 | 11111111111111111111111000100000 | \n 18 | 2880 | 1 | 1364 | 913726913 | 913730328 | 0 | (84,4) | 31 | 8451 | 32 | 11111111111111111111110100110000 | \n 19 | 2696 | 1 | 179 | 913726914 | 0 | 0 | (75,19) | 31 | 10499 | 32 | 11111111111111111111111000100000 | \n 20 | 2520 | 1 | 176 | 913726914 | 0 | 0 | (75,20) | 31 | 10499 | 32 | 11111111111111111111111000100000 | \n 21 | 2336 | 1 | 179 | 913726914 | 0 | 0 | (75,21) | 31 | 10499 | 32 | 11111111111111111111111000100000 | \n(21 rows)\n\n(In the interest of full disclosure, this was a dumb cp -a of the live datadir\nwhere the processes had been stuck for a day, and where I was unable to open a\nclient session).\n\n8451 = HEAP_KEYS_UPDATED + 259 atts?\n\nNote that after I vacuum pg_statistic, it looks like this:\n\nts=# SELECT lp, lp_off, lp_flags, lp_len, t_xmin, t_xmax, t_field3, t_ctid, t_infomask2, t_infomask, t_hoff, t_bits, t_oid FROM heap_page_items(get_raw_page('pg_statistic', 75));\n lp | lp_off | lp_flags | lp_len | t_xmin | t_xmax | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff | t_bits | t_oid 
\n----+--------+----------+--------+-----------+--------+----------+---------+-------------+------------+--------+----------------------------------+-------\n 1 | 0 | 0 | 0 | | | | | | | | |\n...\n 18 | 0 | 0 | 0 | | | | | | | | |\n 19 | 8008 | 1 | 179 | 913726914 | 0 | 0 | (75,19) | 31 | 10499 | 32 | 11111111111111111111111000100000 | \n 20 | 7832 | 1 | 176 | 913726914 | 0 | 0 | (75,20) | 31 | 10499 | 32 | 11111111111111111111111000100000 | \n 21 | 7648 | 1 | 179 | 913726914 | 0 | 0 | (75,21) | 31 | 10499 | 32 | 11111111111111111111111000100000 | \n\nts=# VACUUM VERBOSE pg_statistic;\n|INFO: vacuuming \"pg_catalog.pg_statistic\"\n|INFO: scanned index \"pg_statistic_relid_att_inh_index\" to remove 278403 row versions\n|DETAIL: CPU: user: 0.10 s, system: 0.00 s, elapsed: 0.14 s\n|INFO: \"pg_statistic\": removed 278403 dead item identifiers in 4747 pages\n|DETAIL: CPU: user: 0.12 s, system: 0.07 s, elapsed: 1.99 s\n|INFO: index \"pg_statistic_relid_att_inh_index\" now contains 1101 row versions in 758 pages\n|DETAIL: 277271 index row versions were removed.\n|747 index pages were newly deleted.\n|747 index pages are currently deleted, of which 0 are currently reusable.\n|CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\n|INFO: \"pg_statistic\": found 277216 removable, 1101 nonremovable row versions in 4758 out of 4758 pages\n|DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 920282115\n|0 pages removed.\n|Skipped 0 pages due to buffer pins, 0 frozen pages.\n|CPU: user: 1.75 s, system: 0.18 s, elapsed: 6.52 s.\n|INFO: \"pg_statistic\": truncated 4758 to 96 pages\n|DETAIL: CPU: user: 0.02 s, system: 0.02 s, elapsed: 0.06 s\n|INFO: vacuuming \"pg_toast.pg_toast_2619\"\n|INFO: scanned index \"pg_toast_2619_index\" to remove 30 row versions\n|DETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n|INFO: \"pg_toast_2619\": removed 30 dead item identifiers in 11 pages\n|DETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n|INFO: index 
\"pg_toast_2619_index\" now contains 115 row versions in 2 pages\n|DETAIL: 3 index row versions were removed.\n|0 index pages were newly deleted.\n|0 index pages are currently deleted, of which 0 are currently reusable.\n|CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\n|INFO: \"pg_toast_2619\": found 29 removable, 115 nonremovable row versions in 43 out of 43 pages\n|DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 920282115\n|0 pages removed.\n|Skipped 0 pages due to buffer pins, 0 frozen pages.\n|CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.03 s.\n|VACUUM\n\nBefore:\n pg_catalog | pg_statistic | table | postgres | permanent | heap | 38 MB | \nAfter:\n pg_catalog | pg_statistic | table | postgres | permanent | heap | 1192 kB | \n\nI also have nearly-intact bt f from the partial core:\n\n#0 0x00000000004fa063 in heap_prune_chain (prstate=0x7ffe7a0cd0c0, rootoffnum=4, buffer=14138) at pruneheap.c:592\n lp = <optimized out>\n tupdead = <optimized out>\n recent_dead = <optimized out>\n rootlp = 0x2aaab2089e24\n nchain = 0\n tup = {t_len = 564, t_self = {ip_blkid = {bi_hi = 0, bi_lo = 75}, ip_posid = 4}, t_tableOid = 2619, t_data = 0x2aaab208bbc8}\n ndeleted = 0\n priorXmax = 0\n htup = <optimized out>\n maxoff = 21\n offnum = 4\n...\n#1 heap_page_prune (relation=relation@entry=0x7f0349466d28, buffer=buffer@entry=14138, vistest=vistest@entry=0xe7bcc0 <GlobalVisCatalogRels>, old_snap_xmin=old_snap_xmin@entry=0, old_snap_ts=old_snap_ts@entry=0,\n report_stats=report_stats@entry=false, off_loc=<optimized out>, off_loc@entry=0x1d1b3fc) at pruneheap.c:278\n itemid = 0x2aaab2089e24\n ndeleted = 0\n page = 0x2aaab2089e00 <Address 0x2aaab2089e00 out of bounds>\n offnum = 4\n maxoff = 21\n...\n#2 0x00000000004fd9bf in lazy_scan_prune (vacrel=vacrel@entry=0x1d1b390, buf=buf@entry=14138, blkno=blkno@entry=75, page=page@entry=0x2aaab2089e00 <Address 0x2aaab2089e00 out of bounds>,\n vistest=vistest@entry=0xe7bcc0 <GlobalVisCatalogRels>, 
prunestate=prunestate@entry=0x7ffe7a0ced80) at vacuumlazy.c:1712\n rel = 0x7f0349466d28\n offnum = 4\n maxoff = 21\n itemid = 0x2aaab2089e24\n tuple = {t_len = 564, t_self = {ip_blkid = {bi_hi = 0, bi_lo = 75}, ip_posid = 4}, t_tableOid = 2619, t_data = 0x2aaab208bbc8}\n res = <optimized out>\n tuples_deleted = 0\n lpdead_items = 0\n new_dead_tuples = 0\n num_tuples = 0\n live_tuples = 0\n nfrozen = 0\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 6 Jun 2021 15:06:23 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Sun, Jun 06, 2021 at 11:00:38AM -0700, Peter Geoghegan wrote:\n> On Sun, Jun 6, 2021 at 9:35 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I'll leave the instance running for a little bit before restarting (or kill-9)\n> > in case someone requests more info.\n> \n> How about dumping the page image out, and sharing it with the list?\n> This procedure should work fine from gdb:\n> \n> https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD#Dumping_a_page_image_from_within_GDB\n\n> I suggest that you dump the \"page\" pointer inside lazy_scan_prune(). I\n> imagine that you have the instance already stuck in an infinite loop,\n> so what we'll probably see from the page image is the page after the\n> first prune and another no-progress prune.\n\nThe cluster was again rejecting with \"too many clients already\".\n\nI was able to open a shell this time, but it immediately froze when I tried to\ntab complete \"pg_stat_acti\"...\n\nI was able to dump the page image, though - attached. I can send you its\n\"data\" privately, if desirable. 
I'll also try to step through this.\n\npostgres=# SELECT lp, lp_off, lp_flags, lp_len, t_xmin, t_xmax, t_field3, t_ctid, t_infomask2, t_infomask, t_hoff, t_bits, t_oid FROM heap_page_items(pg_read_binary_file('/tmp/dump_block.page'));\n lp | lp_off | lp_flags | lp_len | t_xmin | t_xmax | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff | t_bits | t_oid \n----+--------+----------+--------+-----------+-----------+----------+--------+-------------+------------+--------+----------------------------------+-------\n 1 | 1320 | 1 | 259 | 926025112 | 0 | 0 | (1,1) | 32799 | 10499 | 32 | 11111111111111111111111000100000 | \n 2 | 8088 | 1 | 104 | 926018702 | 0 | 0 | (1,2) | 32799 | 10497 | 32 | 11111111111111111111100000000000 | \n 3 | 0 | 0 | 0 | | | | | | | | | \n 4 | 7904 | 1 | 179 | 926018702 | 0 | 0 | (1,4) | 32799 | 10499 | 32 | 11111111111111111111111000100000 | \n 5 | 7728 | 1 | 176 | 926018702 | 0 | 0 | (1,5) | 32799 | 10499 | 32 | 11111111111111111111111000100000 | \n 6 | 7464 | 1 | 259 | 926014884 | 926025112 | 0 | (1,1) | 49183 | 9475 | 32 | 11111111111111111111111000100000 | \n 7 | 2 | 2 | 0 | | | | | | | | | \n 8 | 4 | 2 | 0 | | | | | | | | | \n 9 | 19 | 2 | 0 | | | | | | | | | \n 10 | 0 | 0 | 0 | | | | | | | | | \n 11 | 20 | 2 | 0 | | | | | | | | | \n 12 | 5792 | 1 | 1666 | 926014887 | 0 | 0 | (1,12) | 31 | 10499 | 32 | 11111111111111111111101000100000 | \n 13 | 5 | 2 | 0 | | | | | | | | | \n 14 | 3912 | 1 | 1880 | 925994211 | 0 | 0 | (1,14) | 31 | 10499 | 32 | 11111111111111111111110100110000 | \n 15 | 0 | 3 | 0 | | | | | | | | | \n 16 | 18 | 2 | 0 | | | | | | | | | \n 17 | 0 | 3 | 0 | | | | | | | | | \n 18 | 1936 | 1 | 1976 | 926013259 | 0 | 0 | (1,18) | 32799 | 10499 | 32 | 11111111111111111111110100110000 | \n 19 | 1760 | 1 | 176 | 926014887 | 0 | 0 | (1,19) | 32799 | 10499 | 32 | 11111111111111111111111000100000 | \n 20 | 1584 | 1 | 176 | 926014889 | 0 | 0 | (1,20) | 32799 | 10499 | 32 | 11111111111111111111111000100000 | \n 21 | 6 | 2 | 0 | | | | | 
| | | | \n 22 | 0 | 3 | 0 | | | | | | | | | \n 23 | 0 | 3 | 0 | | | | | | | | | \n 24 | 0 | 3 | 0 | | | | | | | | | \n 25 | 0 | 3 | 0 | | | | | | | | | \n 26 | 0 | 3 | 0 | | | | | | | | | \n 27 | 0 | 3 | 0 | | | | | | | | | \n 28 | 0 | 3 | 0 | | | | | | | | | \n(28 rows)\n\nNo great surprise that it's again in pg_statistic.\n\n#0 GetPrivateRefCountEntry (buffer=buffer@entry=411, do_move=do_move@entry=false) at bufmgr.c:313\n#1 0x00000000007ecb4f in GetPrivateRefCount (buffer=411) at bufmgr.c:398\n#2 BufferGetBlockNumber (buffer=buffer@entry=411) at bufmgr.c:2762\n#3 0x00000000004fa0f3 in heap_prune_chain (prstate=0x7fff7e4a9180, rootoffnum=7, buffer=411) at pruneheap.c:625\n#4 heap_page_prune (relation=relation@entry=0x7fe636faed28, buffer=buffer@entry=411, vistest=vistest@entry=0xe7bcc0 <GlobalVisCatalogRels>, old_snap_xmin=old_snap_xmin@entry=0, old_snap_ts=old_snap_ts@entry=0, \n report_stats=report_stats@entry=false, off_loc=<optimized out>, off_loc@entry=0x12b433c) at pruneheap.c:278\n#5 0x00000000004fd9bf in lazy_scan_prune (vacrel=vacrel@entry=0x12b42d0, buf=buf@entry=411, blkno=blkno@entry=1, page=page@entry=0x2aaaab54be00 \"J\\f\", vistest=vistest@entry=0xe7bcc0 <GlobalVisCatalogRels>, \n prunestate=prunestate@entry=0x7fff7e4aae40) at vacuumlazy.c:1712\n#6 0x0000000000500263 in lazy_scan_heap (aggressive=<optimized out>, params=0x12ce89c, vacrel=<optimized out>) at vacuumlazy.c:1347\n#7 heap_vacuum_rel (rel=0x7fe636faed28, params=0x12ce89c, bstrategy=<optimized out>) at vacuumlazy.c:612\n#8 0x000000000067418a in table_relation_vacuum (bstrategy=<optimized out>, params=0x12ce89c, rel=0x7fe636faed28) at ../../../src/include/access/tableam.h:1678\n#9 vacuum_rel (relid=2619, relation=<optimized out>, params=params@entry=0x12ce89c) at vacuum.c:2001\n\n-- \nJustin", "msg_date": "Tue, 8 Jun 2021 06:03:29 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of 
pg_statistic" }, { "msg_contents": "On Sun, Jun 06, 2021 at 01:59:10PM -0400, Tom Lane wrote:\n> Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> > On Sun, 6 Jun 2021 at 18:35, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >> However, I also found an autovacuum chewing 100% CPU, and it appears the\n> >> problem is actually because autovacuum has locked a page of pg-statistic, and\n> >> every other process then gets stuck waiting in the planner. I checked a few\n> >> and found these:\n> \n> > My suspicion is that for some tuple on that page\n> > HeapTupleSatisfiesVacuum() returns HEAPTUPLE_DEAD for a tuple that it\n> > thinks should have been cleaned up by heap_page_prune, but isn't. This\n> > would result in an infinite loop in lazy_scan_prune where the\n> > condition on vacuumlazy.c:1800 will always be true, but the retry will\n> > not do the job it's expected to do.\n> \n> Since Justin's got a debugger on the process already, it probably\n> wouldn't be too hard to confirm or disprove that theory by stepping\n> through the code.\n\nBreakpoint 2, HeapTupleSatisfiesVacuum (htup=htup@entry=0x7fff7e4a9ca0, OldestXmin=926025113, buffer=buffer@entry=411) at heapam_visibility.c:1163\n1163 heapam_visibility.c: No such file or directory.\n(gdb) fin\nRun till exit from #0 HeapTupleSatisfiesVacuum (htup=htup@entry=0x7fff7e4a9ca0, OldestXmin=926025113, buffer=buffer@entry=411) at heapam_visibility.c:1163\nlazy_scan_prune (vacrel=vacrel@entry=0x12b42d0, buf=buf@entry=411, blkno=blkno@entry=1, page=page@entry=0x2aaaab54be00 \"J\\f\", vistest=vistest@entry=0xe7bcc0 <GlobalVisCatalogRels>, prunestate=prunestate@entry=0x7fff7e4aae40)\n at vacuumlazy.c:1790\n1790 vacuumlazy.c: No such file or directory.\nValue returned is $23 = HEAPTUPLE_DEAD\n\n offnum = 6\n tuple = {t_len = 259, t_self = {ip_blkid = {bi_hi = 0, bi_lo = 1}, ip_posid = 6}, t_tableOid = 2619, t_data = 0x2aaaab54db28}\n res = HEAPTUPLE_DEAD\n\n(gdb) p *itemid\n$51 = {lp_off = 7464, lp_flags = 1, 
lp_len = 259}\n\n(gdb) p *tuple->t_data\n$54 = {t_choice = {t_heap = {t_xmin = 926014884, t_xmax = 926025112, t_field3 = {t_cid = 0, t_xvac = 0}}, t_datum = {datum_len_ = 926014884, datum_typmod = 926025112, datum_typeid = 0}}, t_ctid = {ip_blkid = {bi_hi = 0,\n bi_lo = 1}, ip_posid = 1}, t_infomask2 = 49183, t_infomask = 9475, t_hoff = 32 ' ', t_bits = 0x2aaaab54db3f \"\\377\\377\\177\\004\"}\n\nlp_flags = LP_NORMAL 1 ??\nt_infomask2 = HEAP_ONLY_TUPLE | HEAP_HOT_UPDATED + 31 atts\ninfomask = HEAP_UPDATED | HEAP_XMAX_COMMITTED | HEAP_XMIN_COMMITTED | HEAP_HASVARWIDTH | HEAP_HASNULL\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 8 Jun 2021 06:33:33 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, 8 Jun 2021 at 13:03, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Sun, Jun 06, 2021 at 11:00:38AM -0700, Peter Geoghegan wrote:\n> > On Sun, Jun 6, 2021 at 9:35 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > I'll leave the instance running for a little bit before restarting (or kill-9)\n> > > in case someone requests more info.\n> >\n> > How about dumping the page image out, and sharing it with the list?\n> > This procedure should work fine from gdb:\n> >\n> > https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD#Dumping_a_page_image_from_within_GDB\n>\n> > I suggest that you dump the \"page\" pointer inside lazy_scan_prune(). I\n> > imagine that you have the instance already stuck in an infinite loop,\n> > so what we'll probably see from the page image is the page after the\n> > first prune and another no-progress prune.\n>\n> The cluster was again rejecting with \"too many clients already\".\n>\n> I was able to open a shell this time, but it immediately froze when I tried to\n> tab complete \"pg_stat_acti\"...\n>\n> I was able to dump the page image, though - attached. 
I can send you its\n> \"data\" privately, if desirable. I'll also try to step through this.\n\nCould you attach a dump of lazy_scan_prune's vacrel, all the global\nvisibility states (GlobalVisCatalogRels, and possibly\nGlobalVisSharedRels, GlobalVisDataRels, and GlobalVisTempRels), and\nheap_page_prune's PruneState?\n\nAdditionally, the locals of lazy_scan_prune (more specifically, the\n'offnum' when it enters heap_page_prune) would also be appreciated, as\nit helps indicate the tuple.\n\nI've been looking at whatever might have done this, and I'm currently\nstuck on lacking information in GlobalVisCatalogRels and the\nPruneState.\n\nOne curiosity that I did notice is that the t_xmax of the problematic\ntuples has been exactly one lower than the OldestXmin. Not weird, but\na curiosity.\n\n\nWith regards,\n\nMatthias van de Meent.\n\n\nPS. Attached a few of my current research notes, which are mainly\ncomparisons between heap_prune_satisfies_vacuum and\nHeapTupleSatisfiesVacuum.", "msg_date": "Tue, 8 Jun 2021 13:54:41 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, Jun 08, 2021 at 01:54:41PM +0200, Matthias van de Meent wrote:\n> On Tue, 8 Jun 2021 at 13:03, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Sun, Jun 06, 2021 at 11:00:38AM -0700, Peter Geoghegan wrote:\n> > > On Sun, Jun 6, 2021 at 9:35 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > I'll leave the instance running for a little bit before restarting (or kill-9)\n> > > > in case someone requests more info.\n> > >\n> > > How about dumping the page image out, and sharing it with the list?\n> > > This procedure should work fine from gdb:\n> > >\n> > > https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD#Dumping_a_page_image_from_within_GDB\n> >\n> > > I suggest that you dump the 
\"page\" pointer inside lazy_scan_prune(). I\n> > > imagine that you have the instance already stuck in an infinite loop,\n> > > so what we'll probably see from the page image is the page after the\n> > > first prune and another no-progress prune.\n> >\n> > The cluster was again rejecting with \"too many clients already\".\n> >\n> > I was able to open a shell this time, but it immediately froze when I tried to\n> > tab complete \"pg_stat_acti\"...\n> >\n> > I was able to dump the page image, though - attached. I can send you its\n> > \"data\" privately, if desirable. I'll also try to step through this.\n> \n> Could you attach a dump of lazy_scan_prune's vacrel, all the global\n> visibility states (GlobalVisCatalogRels, and possibly\n> GlobalVisSharedRels, GlobalVisDataRels, and GlobalVisTempRels), and\n> heap_page_prune's PruneState?\n\n(gdb) p *vacrel\n$56 = {rel = 0x7fe636faed28, indrels = 0x12b4440, nindexes = 1, do_index_vacuuming = true, do_index_cleanup = true, do_failsafe = false, bstrategy = 0x1210340, lps = 0x0, old_rel_pages = 81, old_live_tuples = 1100, relfrozenxid = 921613998, relminmxid = 53878631, OldestXmin = 926025113, FreezeLimit = 876025113, MultiXactCutoff = 49085856, relnamespace = 0x12b4460 \"pg_catalog\", relname = 0x12b4488 \"pg_statistic\", indname = 0x0, blkno = 1, offnum = 6, phase = VACUUM_ERRCB_PHASE_SCAN_HEAP, dead_tuples = 0x127a980, rel_pages = 81, scanned_pages = 2, pinskipped_pages = 0, frozenskipped_pages = 0, tupcount_pages = 2, pages_removed = 0, lpdead_item_pages = 1, nonempty_pages = 1, lock_waiter_detected = false, new_rel_tuples = 0, new_live_tuples = 0, indstats = 0x12b4568, num_index_scans = 0, tuples_deleted = 0, lpdead_items = 3, new_dead_tuples = 0, num_tuples = 14, live_tuples = 14}\n\n(gdb) p GlobalVisCatalogRels\n$57 = {definitely_needed = {value = 926025113}, maybe_needed = {value = 926025112}}\n(gdb) p GlobalVisSharedRels\n$58 = {definitely_needed = {value = 926025113}, maybe_needed = {value = 926025112}}\n(gdb) p 
GlobalVisDataRels\n$59 = {definitely_needed = {value = 926025113}, maybe_needed = {value = 926025113}}\n(gdb) p GlobalVisTempRels\n$60 = {definitely_needed = {value = 926025113}, maybe_needed = {value = 926025113}}\n\nI don't know when you want prstate from, but here it is at some point:\n\n(gdb) p *prstate\n$77 = {rel = 0x7fe636faed28, vistest = 0xe7bcc0 <GlobalVisCatalogRels>, old_snap_ts = 0, old_snap_xmin = 0, old_snap_used = false, new_prune_xid = 0, latestRemovedXid = 0, nredirected = 0, ndead = 0, nunused = 0, \n\n> Additionally, the locals of lazy_scan_prune (more specifically, the\n> 'offnum' when it enters heap_page_prune) would also be appreciated, as\n> it helps indicate the tuple.\n\nBreakpoint 1, heap_page_prune (relation=relation@entry=0x7fe636faed28, buffer=buffer@entry=411, vistest=vistest@entry=0xe7bcc0 <GlobalVisCatalogRels>, old_snap_xmin=old_snap_xmin@entry=0, old_snap_ts=old_snap_ts@entry=0,\n report_stats=report_stats@entry=false, off_loc=off_loc@entry=0x12b433c) at pruneheap.c:225\n225 pruneheap.c: No such file or directory.\n(gdb) up\n#1 0x00000000004fd9bf in lazy_scan_prune (vacrel=vacrel@entry=0x12b42d0, buf=buf@entry=411, blkno=blkno@entry=1, page=page@entry=0x2aaaab54be00 \"J\\f\", vistest=vistest@entry=0xe7bcc0 <GlobalVisCatalogRels>,\n prunestate=prunestate@entry=0x7fff7e4aae40) at vacuumlazy.c:1712\n1712 vacuumlazy.c: No such file or directory.\n(gdb) info locals\nrel = 0x7fe636faed28\noffnum = 6\nmaxoff = 28\nitemid = 0x2aaaab54be2c\ntuple = {t_len = 259, t_self = {ip_blkid = {bi_hi = 0, bi_lo = 1}, ip_posid = 6}, t_tableOid = 2619, t_data = 0x2aaaab54db28}\nres = <optimized out>\ntuples_deleted = 0\nlpdead_items = 0\nnew_dead_tuples = 0\nnum_tuples = 0\nlive_tuples = 0\nnfrozen = 0\n\nMaybe you need to know that this is also returning RECENTLY_DEAD.\n\nBreakpoint 4, heap_prune_satisfies_vacuum (prstate=prstate@entry=0x7fff7e4a9180, tup=tup@entry=0x7fff7e4a8f10, buffer=buffer@entry=411) at pruneheap.c:423\n423 in 
pruneheap.c\n(gdb)\nRun till exit from #0 heap_prune_satisfies_vacuum (prstate=prstate@entry=0x7fff7e4a9180, tup=tup@entry=0x7fff7e4a8f10, buffer=buffer@entry=411) at pruneheap.c:423\n0x00000000004fa887 in heap_prune_chain (prstate=0x7fff7e4a9180, rootoffnum=6, buffer=411) at pruneheap.c:560\n560 in pruneheap.c\nValue returned is $72 = HEAPTUPLE_RECENTLY_DEAD\n\ntup = {t_len = 259, t_self = {ip_blkid = {bi_hi = 0, bi_lo = 1}, ip_posid = 6}, t_tableOid = 2619, t_data = 0x2aaaab54db28}\n(gdb) p * htup\n$82 = {t_choice = {t_heap = {t_xmin = 926014884, t_xmax = 926025112, t_field3 = {t_cid = 0, t_xvac = 0}}, t_datum = {datum_len_ = 926014884, datum_typmod = 926025112, datum_typeid = 0}}, t_ctid = {ip_blkid = {bi_hi = 0, bi_lo = 1}, ip_posid = 1}, t_infomask2 = 49183, t_infomask = 9475, t_hoff = 32 ' ', t_bits = 0x2aaaab54db3f \"\\377\\377\\177\\004\"}\n\n-- \nJustin\nSystem Administrator\nTelsasoft\n+1-952-707-8581\n\n\n", "msg_date": "Tue, 8 Jun 2021 07:11:36 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, 8 Jun 2021 at 14:11, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Jun 08, 2021 at 01:54:41PM +0200, Matthias van de Meent wrote:\n> > On Tue, 8 Jun 2021 at 13:03, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > On Sun, Jun 06, 2021 at 11:00:38AM -0700, Peter Geoghegan wrote:\n> > > > On Sun, Jun 6, 2021 at 9:35 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > > I'll leave the instance running for a little bit before restarting (or kill-9)\n> > > > > in case someone requests more info.\n> > > >\n> > > > How about dumping the page image out, and sharing it with the list?\n> > > > This procedure should work fine from gdb:\n> > > >\n> > > > https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD#Dumping_a_page_image_from_within_GDB\n> > >\n> > > > I 
suggest that you dump the \"page\" pointer inside lazy_scan_prune(). I\n> > > > imagine that you have the instance already stuck in an infinite loop,\n> > > > so what we'll probably see from the page image is the page after the\n> > > > first prune and another no-progress prune.\n> > >\n> > > The cluster was again rejecting with \"too many clients already\".\n> > >\n> > > I was able to open a shell this time, but it immediately froze when I tried to\n> > > tab complete \"pg_stat_acti\"...\n> > >\n> > > I was able to dump the page image, though - attached. I can send you its\n> > > \"data\" privately, if desirable. I'll also try to step through this.\n> >\n> > Could you attach a dump of lazy_scan_prune's vacrel, all the global\n> > visibility states (GlobalVisCatalogRels, and possibly\n> > GlobalVisSharedRels, GlobalVisDataRels, and GlobalVisTempRels), and\n> > heap_page_prune's PruneState?\n>\n> (gdb) p *vacrel\n> $56 = {... OldestXmin = 926025113, ...}\n>\n> (gdb) p GlobalVisCatalogRels\n> $57 = {definitely_needed = {value = 926025113}, maybe_needed = {value = 926025112}}\n\nThis maybe_needed is older than the OldestXmin, which indeed gives us\nthis problematic behaviour:\n\nheap_prune_satisfies_vacuum considers 1 more transaction to be\nunvacuumable, and thus indeed won't vacuum that tuple that\nHeapTupleSatisfiesVacuum does want to be vacuumed.\n\nThe new open question is now: Why is\nGlobalVisCatalogRels->maybe_needed < OldestXmin? 
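To make the disagreement concrete before answering that, here is a minimal standalone sketch (plain Python, using the XIDs from the gdb output above; a deliberate simplification -- the real code compares TransactionIds via GlobalVisTestIsRemovableXid() and HeapTupleSatisfiesVacuum(), not plain integers):

```python
# Simplified model (NOT PostgreSQL code) of the two visibility tests that
# disagree here, using the values from the gdb dumps above.
OLDEST_XMIN = 926025113    # vacrel->OldestXmin
MAYBE_NEEDED = 926025112   # GlobalVisCatalogRels.maybe_needed (one XID older)
TUPLE_XMAX = 926025112     # t_xmax of the lp 6 tuple on the stuck page

# HeapTupleSatisfiesVacuum(): tuple is DEAD once its xmax precedes OldestXmin.
htsv_dead = TUPLE_XMAX < OLDEST_XMIN             # True

# heap_prune_satisfies_vacuum(): tuple is removable only once its xmax
# precedes maybe_needed -- here it does not, so pruning keeps the tuple.
prune_removes = TUPLE_XMAX < MAYBE_NEEDED        # False

# lazy_scan_prune() retries whenever HTSV says DEAD but pruning left the
# tuple behind; with a horizon that never advances, that retry never ends.
livelock = htsv_dead and not prune_removes       # True
```

With maybe_needed stuck one XID behind OldestXmin, the retry condition holds on every iteration, which matches the observed 100% CPU spin.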
IIRC\nGLobalVisCatalogRels->maybe_needed is constructed from the same\nComputeXidHorizonsResult->catalog_oldest_nonremovable which later is\nreturned to be used in vacrel->OldestXmin.\n\n\n> Maybe you need to know that this is also returning RECENTLY_DEAD.\n\nI had expected that, but good to have confirmation.\n\nThanks for the information!\n\n\nWith regards,\n\nMatthias van de Meent.\n\n\n", "msg_date": "Tue, 8 Jun 2021 14:27:14 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, Jun 08, 2021 at 02:27:14PM +0200, Matthias van de Meent wrote:\n> Thanks for the information!\n\nI created an apparently-complete core file by first doing this:\n| echo 127 |sudo tee /proc/22591/coredump_filter\n\n*and updated wiki:Developer_FAQ to work with huge pages\n\nI'm planning to kill the process shortly if nobody asks for anything else.\n\n\n", "msg_date": "Tue, 8 Jun 2021 08:15:08 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On 2021-Jun-06, Justin Pryzby wrote:\n\n> However, I also found an autovacuum chewing 100% CPU, and it appears the\n> problem is actually because autovacuum has locked a page of pg-statistic, and\n> every other process then gets stuck waiting in the planner. I checked a few\n> and found these:\n\n> [...]\n\nHmm ... I wonder if this could be related to commits d9d076222f5b,\nc98763bf51bf, etc. I don't have any connecting thoughts other than the\ntuple visibility code being involved. 
Do you see any procs with the\nPROC_IN_SAFE_IC flag set?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Tue, 8 Jun 2021 12:06:02 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "Hi,\n\nOn 2021-06-08 14:27:14 +0200, Matthias van de Meent wrote:\n> heap_prune_satisfies_vacuum considers 1 more transaction to be\n> unvacuumable, and thus indeed won't vacuum that tuple that\n> HeapTupleSatisfiesVacuum does want to be vacuumed.\n> \n> The new open question is now: Why is\n> GlobalVisCatalogRels->maybe_needed < OldestXmin? IIRC\n> GLobalVisCatalogRels->maybe_needed is constructed from the same\n> ComputeXidHorizonsResult->catalog_oldest_nonremovable which later is\n> returned to be used in vacrel->OldestXmin.\n\nThe horizon used by pruning is only updated once per transaction (well,\napproximately). What presumably is happening is that the retry loop is\nretrying, without updating the horizon, therefore the same thing is\nhappening over and over again?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 8 Jun 2021 10:17:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, Jun 08, 2021 at 12:06:02PM -0400, Alvaro Herrera wrote:\n> On 2021-Jun-06, Justin Pryzby wrote:\n> \n> > However, I also found an autovacuum chewing 100% CPU, and it appears the\n> > problem is actually because autovacuum has locked a page of pg-statistic, and\n> > every other process then gets stuck waiting in the planner. I checked a few\n> > and found these:\n> \n> Hmm ... I wonder if this could be related to commits d9d076222f5b,\n> c98763bf51bf, etc. I don't have any connecting thoughts other than the\n> tuple visibility code being involved. 
Do you see any procs with the\n> PROC_IN_SAFE_IC flag set?\n\nCan you give me a hint how to do that from a corefile ?\n\nI got this far:\n(gdb) p MyProc->links\n$13 = {prev = 0x0, next = 0x0}\n\nI do have a reindex script which runs CIC, and it does looks suspicious.\n\n<<Mon Jun 7 22:00:54 CDT 2021: starting db=ts\n...\nMon Jun 7 22:01:16 CDT 2021: ts: pg_subscription_rel: pg_subscription_rel_srrelid_srsubid_index(reindex system)...\nMon Jun 7 22:01:16 CDT 2021: ts: pg_subscription: pg_subscription_oid_index(reindex system)...\nMon Jun 7 22:01:16 CDT 2021: ts: pg_subscription: pg_subscription_subname_index(reindex system)...\nMon Jun 7 22:01:16 CDT 2021: ts: pg_subscription: pg_toast.pg_toast_6100_index(reindex system)...\nMon Jun 7 22:01:17 CDT 2021: ts: pg_statistic_ext_data: pg_statistic_ext_data_stxoid_index(reindex system)...\nMon Jun 7 22:01:17 CDT 2021: ts: pg_statistic_ext_data: pg_toast.pg_toast_3429_index(reindex system)...\nMon Jun 7 22:01:17 CDT 2021: ts: pg_statistic_ext: pg_statistic_ext_name_index(reindex system)...\nMon Jun 7 22:01:17 CDT 2021: ts: pg_statistic_ext: pg_statistic_ext_oid_index(reindex system)...\nMon Jun 7 22:01:17 CDT 2021: ts: pg_statistic_ext: pg_statistic_ext_relid_index(reindex system)...\nMon Jun 7 22:01:17 CDT 2021: ts: pg_statistic_ext: pg_toast.pg_toast_3381_index(reindex system)...\nMon Jun 7 22:01:17 CDT 2021: ts: pg_statistic: pg_statistic_relid_att_inh_index(reindex system)...\nMon Jun 7 22:02:56 CDT 2021: ts: pg_statistic: pg_toast.pg_toast_2619_index(reindex system)...\nMon Jun 7 22:02:57 CDT 2021: ts: pg_statio_all_tables_snap: pg_statio_all_tables_snap_t_idx(reindex non-partitioned)...\nERROR: canceling statement due to statement timeout\nreindex: warning, dropping invalid/unswapped index: pg_statio_all_tables_snap_t_idx_ccnew\nMon Jun 7 23:02:57 CDT 2021: ts: pg_statio_all_tables_snap: pg_toast.pg_toast_33011_index(reindex system)...\nMon Jun 7 23:02:57 CDT 2021: ts: pg_statio_all_indexes_snap: 
pg_statio_all_indexes_snap_t_idx(reindex non-partitioned)...\nERROR: canceling statement due to statement timeout\nreindex: warning, dropping invalid/unswapped index: pg_statio_all_indexes_snap_t_idx_ccnew\nTue Jun 8 00:02:57 CDT 2021: ts: pg_statio_all_indexes_snap: pg_toast.pg_toast_33020_index(reindex system)...\nTue Jun 8 00:02:57 CDT 2021: ts: pg_shseclabel: pg_shseclabel_object_index(reindex system)...\nTue Jun 8 00:02:58 CDT 2021: ts: pg_shseclabel: pg_toast.pg_toast_3592_index(reindex system)...\nTue Jun 8 00:02:58 CDT 2021: ts: pg_shdescription: pg_shdescription_o_c_index(reindex system)...\nTue Jun 8 00:02:58 CDT 2021: ts: pg_shdescription: pg_toast.pg_toast_2396_index(reindex system)...\n...\nTue Jun 8 01:21:20 CDT 2021: ts: pg_aggregate: pg_aggregate_fnoid_index(reindex system)...\nTue Jun 8 01:21:20 CDT 2021: ts: pg_aggregate: pg_toast.pg_toast_2600_index(reindex system)...\nTue Jun 8 01:21:20 CDT 2021: ts: permissions: perm_group_idx(reindex non-partitioned)...\nERROR: canceling statement due to statement timeout\nreindex: warning, dropping invalid/unswapped index: perm_group_idx_ccnew\nTue Jun 8 02:21:20 CDT 2021: ts: permissions: perm_user_idx(reindex non-partitioned)...\nERROR: canceling statement due to statement timeout\nreindex: warning, dropping invalid/unswapped index: perm_user_idx_ccnew\nTue Jun 8 03:21:20 CDT 2021: ts: permissions: pg_toast.pg_toast_33577_index(reindex system)...\nTue Jun 8 03:21:20 CDT 2021: ts: patchfiles: 
patchfiles_filename_idx(reindex non-partitioned)...\nERROR: canceling statement due to statement timeout\nreindex: warning, dropping invalid/unswapped index: patchfiles_filename_idx_ccnew\nTue Jun 8 04:21:21 CDT 2021: ts: patchfiles: patchfiles_pkey(reindex non-partitioned)...\nERROR: canceling statement due to statement timeout\nreindex: warning, dropping invalid/unswapped index: patchfiles_pkey_ccnew\n\n=> It's strange that these timed out, since the statio tables are less than\n15MB, and permissions and patchfiles are both under 100kB.\n\nAnd it seems to say that it time out after less than 1sec.\n\nThey're running this:\n| PGOPTIONS=\"--deadlock_timeout=333ms -cstatement-timeout=3600s\" psql -c \"REINDEX INDEX CONCURRENTLY $i\"\nAnd if it times out, it then runs: $PSQL \"DROP INDEX CONCURRENTLY $bad\"\n\nI've killed it a little bit ago, but I should've saved the start time of the\nautovacuum. I found this:\n\n#5 heap_vacuum_rel (rel=0x7fe636faed28, params=0x12ce89c, bstrategy=<optimized out>) at vacuumlazy.c:612\nstarttime = 676436464463888\nru0 = {tv = {tv_sec = 1623121264, tv_usec = 463887}, ru = {ru_utime = {tv_sec = 0, tv_usec = 77418}, ru_stime = {tv_sec = 0, tv_usec = 13866}, {ru_maxrss = 7440, __ru_maxrss_word = 7440}, {ru_ixrss = 0, __ru_ixrss_word = 0}, {\n ru_idrss = 0, __ru_idrss_word = 0}, {ru_isrss = 0, __ru_isrss_word = 0}, {ru_minflt = 1826, __ru_minflt_word = 1826}, {ru_majflt = 1, __ru_majflt_word = 1}, {ru_nswap = 0, __ru_nswap_word = 0}, {ru_inblock = 2008,\n __ru_inblock_word = 2008}, {ru_oublock = 192, __ru_oublock_word = 192}, {ru_msgsnd = 0, __ru_msgsnd_word = 0}, {ru_msgrcv = 0, __ru_msgrcv_word = 0}, {ru_nsignals = 0, __ru_nsignals_word = 0}, {ru_nvcsw = 29,\n __ru_nvcsw_word = 29}, {ru_nivcsw = 9, __ru_nivcsw_word = 9}}}\n\n$ date -d @1623121264\nMon Jun 7 22:01:04 CDT 2021\n\n$ date -d '2000-01-01 UTC + 676436464seconds'\nMon Jun 7 22:01:04 CDT 2021\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 8 Jun 2021 12:34:04 -0500", "msg_from": 
"Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On 2021-Jun-08, Justin Pryzby wrote:\n\n> On Tue, Jun 08, 2021 at 12:06:02PM -0400, Alvaro Herrera wrote:\n> > On 2021-Jun-06, Justin Pryzby wrote:\n> > \n> > > However, I also found an autovacuum chewing 100% CPU, and it appears the\n> > > problem is actually because autovacuum has locked a page of pg-statistic, and\n> > > every other process then gets stuck waiting in the planner. I checked a few\n> > > and found these:\n> > \n> > Hmm ... I wonder if this could be related to commits d9d076222f5b,\n> > c98763bf51bf, etc. I don't have any connecting thoughts other than the\n> > tuple visibility code being involved. Do you see any procs with the\n> > PROC_IN_SAFE_IC flag set?\n> \n> Can you give me a hint how to do that from a corefile ?\n\n(gdb) set $i=0\n(gdb) set $total = ProcGlobal->allProcCount\n(gdb) while($i<$total)\n >print ProcGlobal->allProcs[$i++]->statusFlags\n >end\n\n-- \nÁlvaro Herrera Valdivia, Chile\n"In fact, the basic problem with Perl 5's subroutines is that they're not\ncrufty enough, so the cruft leaks out into user-defined code instead, by\nthe Conservation of Cruft Principle.\" (Larry Wall, Apocalypse 6)\n\n\n", "msg_date": "Tue, 8 Jun 2021 14:01:51 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "Reminds me of the other bug that you also\nreported about a year ago,\nJustin - which was never fixed. The one with the duplicate tids from a cic\n(see pg 14 open item).\n\nIs this essentially the same environment as the one that led to your other\nbug report?\n\nPeter Geoghegan\n(Sent from my phone)", "msg_date": "Tue, 8 Jun 2021 11:40:31 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, Jun 08, 2021 at 11:40:31AM -0700, Peter Geoghegan wrote:\n> Reminds me of the other bug that you also reported about a year ago,\n> Justin - which was never fixed. The one with the duplicate tids from a cic\n> (see pg 14 open item).\n> \n> Is this essentially the same environment as the one that led to your other\n> bug report?\n\nYes, it's on the same VM, running an internal instance of our software.\n\nI'm not sure, but my reindex script may be more relevant than the software.\nSome of the data pg_statistic data might be similar to the instances our\ncustomers run, but much of it isn't similar.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 8 Jun 2021 14:04:28 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, Jun 08, 2021 at 02:01:51PM -0400, Alvaro Herrera wrote:\n> On 2021-Jun-08, Justin Pryzby wrote:\n> \n> > On Tue, Jun 08, 2021 at 12:06:02PM -0400, Alvaro Herrera wrote:\n> > > On 2021-Jun-06, Justin Pryzby wrote:\n> > > \n> > > > However, I also found an autovacuum chewing 100% CPU, and it appears the\n> > > > problem is actually because autovacuum has locked a page of pg-statistic, and\n> > > > every other process then gets stuck waiting in the planner. I checked a few\n> > > > and found these:\n> > > \n> > > Hmm ... I wonder if this could be related to commits d9d076222f5b,\n> > > c98763bf51bf, etc. I don't have any connecting thoughts other than the\n> > > tuple visibility code being involved. Do you see any procs with the\n> > > PROC_IN_SAFE_IC flag set?\n> > \n> > Can you give me a hint how to do that from a corefile ?\n> \n> (gdb) set $i=0\n> (gdb) set $total = ProcGlobal->allProcCount\n> (gdb) while($i<$total)\n> >print ProcGlobal->allProcs[$i++]->statusFlags\n> >end\n\nThey're all zero except for:\n\n$201 = 1 '\\001'\n$202 = 3 '\\003'\n$203 = 1 '\\001'\n\nsrc/include/storage/proc.h-#define PROC_IS_AUTOVACUUM 0x01 /* is it an autovac worker? 
*/\nsrc/include/storage/proc.h-#define PROC_IN_VACUUM 0x02 /* currently running lazy vacuum */\nsrc/include/storage/proc.h:#define PROC_IN_SAFE_IC 0x04 /* currently running CREATE INDEX\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 8 Jun 2021 13:45:14 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, Jun 08, 2021 at 11:40:31AM -0700, Peter Geoghegan wrote:\n> Reminds me of the other bug that you also reported about a year ago,\n> Justin - which was never fixed. The one with the duplicate tids from a cic\n> (see pg 14 open item).\n> \n> Is this essentially the same environment as the one that led to your other\n> bug report?\n\nYes, it's on the same VM, running an internal instance of our software.\n\nI'm not sure, but my reindex script may be more relevant than the software.\nSome of the data pg_statistic data might be similar to the instances our\ncustomers run, but much of it isn't similar.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 8 Jun 2021 14:04:28 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, Jun 08, 2021 at 12:34:04PM -0500, Justin Pryzby wrote:\n> On Tue, Jun 08, 2021 at 12:06:02PM -0400, Alvaro Herrera wrote:\n> > On 2021-Jun-06, Justin Pryzby wrote:\n> > \n> > > However, I also found an autovacuum chewing 100% CPU, and it appears the\n> > > problem is actually because autovacuum has locked a page of pg-statistic, and\n> > > every other process then gets stuck waiting in the planner. I checked a few\n> > > and found these:\n> > \n> > Hmm ... I wonder if this could be related to commits d9d076222f5b,\n> > c98763bf51bf, etc. I don't have any connecting thoughts other than the\n> > tuple visibility code being involved. 
Do you see any procs with the\n> > PROC_IN_SAFE_IC flag set?\n> \n> I do have a reindex script which runs CIC, and it does looks suspicious.\n> \n> <<Mon Jun 7 22:00:54 CDT 2021: starting db=ts\n> ...\n> Mon Jun 7 22:02:57 CDT 2021: ts: pg_statio_all_tables_snap: pg_statio_all_tables_snap_t_idx(reindex non-partitioned)...\n> ERROR: canceling statement due to statement timeout\n> reindex: warning, dropping invalid/unswapped index: pg_statio_all_tables_snap_t_idx_ccnew\n> Mon Jun 7 23:02:57 CDT 2021: ts: pg_statio_all_tables_snap: pg_toast.pg_toast_33011_index(reindex system)...\n> Mon Jun 7 23:02:57 CDT 2021: ts: pg_statio_all_indexes_snap: pg_statio_all_indexes_snap_t_idx(reindex non-partitioned)...\n> ERROR: canceling statement due to statement timeout\n> reindex: warning, dropping invalid/unswapped index: pg_statio_all_indexes_snap_t_idx_ccnew\n> Tue Jun 8 00:02:57 CDT 2021: ts: pg_statio_all_indexes_snap: pg_toast.pg_toast_33020_index(reindex system)...\n> Tue Jun 8 01:21:20 CDT 2021: ts: permissions: perm_group_idx(reindex non-partitioned)...\n> ERROR: canceling statement due to statement timeout\n> reindex: warning, dropping invalid/unswapped index: perm_group_idx_ccnew\n> Tue Jun 8 02:21:20 CDT 2021: ts: permissions: perm_user_idx(reindex non-partitioned)...\n> ERROR: canceling statement due to statement timeout\n> reindex: warning, dropping invalid/unswapped index: perm_user_idx_ccnew\n> Tue Jun 8 03:21:20 CDT 2021: ts: permissions: pg_toast.pg_toast_33577_index(reindex system)...\n> Tue Jun 8 03:21:20 CDT 2021: ts: patchfiles: patchfiles_filename_idx(reindex non-partitioned)...\n> ERROR: canceling statement due to statement timeout\n> reindex: warning, dropping invalid/unswapped index: patchfiles_filename_idx_ccnew\n> Tue Jun 8 04:21:21 CDT 2021: ts: patchfiles: patchfiles_pkey(reindex non-partitioned)...\n> ERROR: canceling statement due to statement timeout\n> reindex: warning, dropping invalid/unswapped index: patchfiles_pkey_ccnew\n> \n> => It's strange 
that these timed out, since the statio tables are less than\n> 15MB, and permissions and patchfiles are both under 100kB.\n> \n> And it seems to say that it time out after less than 1sec.\n\nOops, no: it timed out after 3600sec, as requested.\n\n> They're running this:\n> | PGOPTIONS=\"--deadlock_timeout=333ms -cstatement-timeout=3600s\" psql -c \"REINDEX INDEX CONCURRENTLY $i\"\n> And if it times out, it then runs: $PSQL \"DROP INDEX CONCURRENTLY $bad\"\n...\n> $ date -d @1623121264\n> Mon Jun 7 22:01:04 CDT 2021\n\nWhich is probably because the reindex was waiting for the vacuum process to\nfinish (or maybe waiting on the page that vacuum had locked?).\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 8 Jun 2021 14:27:11 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On 2021-Jun-08, Justin Pryzby wrote:\n\n> They're all zero except for:\n> \n> $201 = 1 '\\001'\n> $202 = 3 '\\003'\n> $203 = 1 '\\001'\n> \n> src/include/storage/proc.h-#define PROC_IS_AUTOVACUUM 0x01 /* is it an autovac worker? */\n> src/include/storage/proc.h-#define PROC_IN_VACUUM 0x02 /* currently running lazy vacuum */\n> src/include/storage/proc.h:#define PROC_IN_SAFE_IC 0x04 /* currently running CREATE INDEX\n\nAh okay, not related then. 
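For the record, decoding those statusFlags values with a throwaway sketch (bit values copied from the proc.h lines quoted above) shows why: PROC_IN_SAFE_IC (0x04) is not set on any proc.

```python
# Throwaway decoder for the statusFlags bytes from the core file
# (bit values copied from src/include/storage/proc.h, quoted above).
FLAGS = {
    0x01: "PROC_IS_AUTOVACUUM",
    0x02: "PROC_IN_VACUUM",
    0x04: "PROC_IN_SAFE_IC",
}

def decode(status_flags):
    return [name for bit, name in FLAGS.items() if status_flags & bit]

# The three nonzero entries reported by gdb: $201, $202, $203.
decoded = [decode(f) for f in (0x01, 0x03, 0x01)]
# $202 = 3 is an autovacuum worker currently running lazy vacuum; no proc
# carries PROC_IN_SAFE_IC, so no CREATE INDEX CONCURRENTLY was in its
# "safe" phase at the time of the dump.
safe_ic_anywhere = any("PROC_IN_SAFE_IC" in d for d in decoded)
```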
Thanks for checking.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Tue, 8 Jun 2021 16:51:57 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, Jun 8, 2021 at 12:27 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > They're running this:\n> > | PGOPTIONS=\"--deadlock_timeout=333ms -cstatement-timeout=3600s\" psql -c \"REINDEX INDEX CONCURRENTLY $i\"\n> > And if it times out, it then runs: $PSQL \"DROP INDEX CONCURRENTLY $bad\"\n> ...\n> > $ date -d @1623121264\n> > Mon Jun 7 22:01:04 CDT 2021\n\nPerhaps reindex was waiting on the VACUUM process to finish, while\nVACUUM was (in effect) busy waiting on the REINDEX to finish. If the\nbug is hard to reproduce then it might just be that the circumstances\nthat lead to livelock require that things line up exactly at the heap\npage + XID level -- which I'd expect to be tricky to reproduce. As I\nsaid upthread, I'm almost certain that the \"goto retry\" added by\ncommit 8523492d is a factor here -- that is what I mean by busy\nwaiting inside VACUUM. It's possible that busy waiting like this\nhappens much more often than an actual undetected deadlock/livelock.\nWe only expect to \"goto retry\" in the event of a concurrently aborting\ntransaction.\n\nThe other bug that you reported back in July of last year [1] (which\ninvolved a \"REINDEX INDEX pg_class_tblspc_relfilenode_index\") was\npretty easy to recreate, just by running the REINDEX in a tight loop.\nCould you describe how tricky it is to repro this issue now?\n\nIf you instrument the \"goto retry\" code added to lazy_scan_prune() by\ncommit 8523492d, then you might notice that it is hit in contexts that\nit was never intended to work with. 
If you can reduce reproducing the\nproblem to reproducing hitting that goto in the absence of an aborted\ntransaction, then it might be a lot easier to produce a simple repro.\nThe livelock/deadlock is probably nothing more than the worst\nconsequence of the same issue, and so may not need to be reproduced\ndirectly to fix the issue.\n\n[1] https://www.postgresql.org/message-id/CAH2-WzkjjCoq5Y4LeeHJcjYJVxGm3M3SAWZ0%3D6J8K1FPSC9K0w%40mail.gmail.com\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 8 Jun 2021 13:52:40 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, Jun 08, 2021 at 01:52:40PM -0700, Peter Geoghegan wrote:\n> On Tue, Jun 8, 2021 at 12:27 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > They're running this:\n> > > | PGOPTIONS=\"--deadlock_timeout=333ms -cstatement-timeout=3600s\" psql -c \"REINDEX INDEX CONCURRENTLY $i\"\n> > > And if it times out, it then runs: $PSQL \"DROP INDEX CONCURRENTLY $bad\"\n> > ...\n> > > $ date -d @1623121264\n> > > Mon Jun 7 22:01:04 CDT 2021\n> \n> Perhaps reindex was waiting on the VACUUM process to finish, while\n> VACUUM was (in effect) busy waiting on the REINDEX to finish.\n\nBut when the reindex exited, the vacuum kept spinning until I sent SIGABRT 12\nhours later.\n\n> The other bug that you reported back in July of last year [1] (which\n> involved a \"REINDEX INDEX pg_class_tblspc_relfilenode_index\") was\n> pretty easy to recreate, just by running the REINDEX in a tight loop.\n> Could you describe how tricky it is to repro this issue now?\n\nI didn't try to reproduce it, but now hit it twice in 3 days.\n(Actually, I did try to reproduce it, by running tight loops around\nvacuum/analyze pg_statistic, which didn't work. Maybe because reindex is\nwhat's important.)\n\nI mentioned that we've been running pg14b1 since 2021-05-20. 
So it ran fine for\n13 days before breaking in an obvious way.\n\nOH - in the first instance, I recorded the stuck process, but not its\ntimestamp. It looks like that autovacuum process *also* started right after\n10pm, which is when the reindex job starts. So it seems like REINDEX may\ntrigger this pretty consistently:\n\n(gdb) frame 4\n#4 heap_vacuum_rel (rel=0x7f0349466d28, params=0x1c77b7c, bstrategy=<optimized out>) at vacuumlazy.c:612\n612 vacuumlazy.c: No such file or directory.\n(gdb) info locals\nstarttime = 676177375524485\n\n$ date -d '2000-01-01 UTC + 676177375seconds'\nFri Jun 4 22:02:55 CDT 2021\n\n> If you instrument the \"goto retry\" code added to lazy_scan_prune() by\n> commit 8523492d, then you might notice that it is hit in contexts that\n> it was never intended to work with. If you can reduce reproducing the\n> problem to reproducing hitting that goto in the absence of an aborted\n> transaction, then it might be a lot easier to produce a simple repro.\n\nI'm not sure what you're suggesting ? Maybe I should add some NOTICES there.\n\nI'm not sure why/if pg_statistic is special, but I guess when analyze happens,\nit gets updated, and eventually processed by autovacuum.\n\nThe main table here is a partitioned table which receives UPDATEs which moves\ntuples into a different partition (probably more often than what's\nrecommended).\n\n autovacuum_analyze_threshold | 2\n autovacuum_analyze_scale_factor | 0.005\n autovacuum_vacuum_scale_factor | 0.005\n log_autovacuum_min_duration | 9000\n checkpoint_timeout | 60\n wal_level | minimal\n\nIn pg14, the parent table is auto-analyzed.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 8 Jun 2021 16:23:37 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, Jun 8, 2021 at 2:23 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I'm not sure what you're suggesting ? 
Maybe I should add some NOTICES there.\n\nHere is one approach that might work: Can you check if the assertion\nadded by the attached patch fails very quickly with your test case?\n\nThis does nothing more than trigger an assertion failure in the event\nof retrying a second time for any given heap page. Theoretically that\ncould happen without there being any bug -- in principle we might have\nto retry several times for the same page. In practice the chances of\nit happening even once are vanishingly low, though -- so two times\nstrongly signals a bug. It was quite hard to hit the \"goto restart\"\neven once during my testing. There is still no test coverage for the\nline of code because it's so hard to hit.\n\nIf you find that the assertion is hit pretty quickly with the same\nworkload then you've all but reproduced the issue, probably in far\nless time. And, if you know that there were no concurrently aborting\ntransactions then you can be 100% sure that you have reproduced the\nissue -- this goto is only supposed to be executed when a transaction\nthat was in progress during the heap_page_prune() aborts after it\nreturns, but before we call HeapTupleSatisfiesVacuum() for one of the\naborted-xact tuples. It's supposed to be a super narrow thing.\n\n> I'm not sure why/if pg_statistic is special, but I guess when analyze happens,\n> it gets updated, and eventually processed by autovacuum.\n\npg_statistic is probably special, though only in a superficial way: it\nis the system catalog that tends to be the most frequently vacuumed in\npractice.\n\n> In pg14, the parent table is auto-analyzed.\n\nI wouldn't expect that to matter. 
The \"ANALYZE portion\" of the VACUUM\nANALYZE won't have started at the point that we get stuck.\n\n-- \nPeter Geoghegan", "msg_date": "Tue, 8 Jun 2021 14:38:37 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, Jun 8, 2021 at 4:03 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> postgres=# SELECT lp, lp_off, lp_flags, lp_len, t_xmin, t_xmax, t_field3, t_ctid, t_infomask2, t_infomask, t_hoff, t_bits, t_oid FROM heap_page_items(pg_read_binary_file('/tmp/dump_block.page'));\n> lp | lp_off | lp_flags | lp_len | t_xmin | t_xmax | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff | t_bits | t_oid\n> ----+--------+----------+--------+-----------+-----------+----------+--------+-------------+------------+--------+----------------------------------+-------\n> 1 | 1320 | 1 | 259 | 926025112 | 0 | 0 | (1,1) | 32799 | 10499 | 32 | 11111111111111111111111000100000 |\n\n*** SNIP ***\n\n> 6 | 7464 | 1 | 259 | 926014884 | 926025112 | 0 | (1,1) | 49183 | 9475 | 32 | 11111111111111111111111000100000 |\n\nAs I understand it from your remarks + gdb output from earlier [1],\nthe tuple at offset number 6 is the tuple that triggers the suspicious\n\"goto restart\" here. There was a regular UPDATE (not a HOT UPDATE)\nthat produced a successor version on the same heap page -- which is lp\n1. Here are the t_infomask details for both tuples:\n\nlp 6: HEAP_HASNULL|HEAP_HASVARWIDTH|HEAP_XMIN_COMMITTED|HEAP_XMAX_COMMITTED|HEAP_UPDATED\n<-- points to (1,1)\nlp 1: HEAP_HASNULL|HEAP_HASVARWIDTH|HEAP_XMIN_COMMITTED|HEAP_XMAX_INVALID|HEAP_UPDATED\n <-- This is (1,1)\n\nSo if lp 1's xmin and lp 6's xmax XID/Xact committed (i.e., if XID\n926025112 committed), why shouldn't HeapTupleSatisfiesVacuum() think\nthat lp 6 is DEAD (and not RECENTLY_DEAD)? You also say that\nvacuumlazy.c's OldestXmin is 926025113, so it is hard to fault HTSV\nhere. 
The only way it could be wrong is if the hint bits were somehow\nspuriously set, which seems unlikely to me.\n\n[1] https://postgr.es/m/20210608113333.GC16435@telsasoft.com\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 8 Jun 2021 15:52:02 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, Jun 8, 2021 at 5:27 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> > (gdb) p *vacrel\n> > $56 = {... OldestXmin = 926025113, ...}\n> >\n> > (gdb) p GlobalVisCatalogRels\n> > $57 = {definitely_needed = {value = 926025113}, maybe_needed = {value = 926025112}}\n>\n> This maybe_needed is older than the OldestXmin, which indeed gives us\n> this problematic behaviour:\n\nGood catch.\n\n> heap_prune_satisfies_vacuum considers 1 more transaction to be\n> unvacuumable, and thus indeed won't vacuum that tuple that\n> HeapTupleSatisfiesVacuum does want to be vacuumed.\n\nFollowing up from my email from an hour ago here. Since I have no\nreason to suspect HeapTupleSatisfiesVacuum (per the earlier analysis),\nthis is very much starting to look like a\nheap_prune_satisfies_vacuum() problem. And therefore likely a problem\nin the snapshot scalability work.\n\nNote that GlobalVisCatalogRels.maybe_needed is 926025112, which\ndoesn't match OldestXmin in VACUUM (that's 926025113). Though both\nGlobalVisDataRels.definitely_needed and GlobalVisDataRels.maybe_needed\n*are* 926025113, and therefore agree with VACUUM's OldestXmin. But\nthis is pg_statistic we're vacuuming, and so GlobalVisCatalogRels is\nwhat matters.\n\n> The new open question is now: Why is\n> GlobalVisCatalogRels->maybe_needed < OldestXmin? 
IIRC\n> GLobalVisCatalogRels->maybe_needed is constructed from the same\n> ComputeXidHorizonsResult->catalog_oldest_nonremovable which later is\n> returned to be used in vacrel->OldestXmin.\n\nExactly.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 8 Jun 2021 16:46:46 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Tue, Jun 8, 2021 at 5:27 AM Matthias van de Meent\n>>> (gdb) p GlobalVisCatalogRels\n>>> $57 = {definitely_needed = {value = 926025113}, maybe_needed = {value = 926025112}}\n\n>> This maybe_needed is older than the OldestXmin, which indeed gives us\n>> this problematic behaviour:\n\n> Good catch.\n\nI wonder if this is a variant of the problem shown at\n\nhttps://www.postgresql.org/message-id/2591376.1621196582%40sss.pgh.pa.us\n\nwhere maybe_needed was visibly quite insane. This value is\nless visibly insane, but it's still wrong. It might be\ninteresting to try running this test case with the extra\nassertions I proposed there, to try to narrow down where\nit's going off the rails.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 08 Jun 2021 20:11:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, Jun 08, 2021 at 02:38:37PM -0700, Peter Geoghegan wrote:\n> On Tue, Jun 8, 2021 at 2:23 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I'm not sure what you're suggesting ? 
Maybe I should add some NOTICES there.\n> \n> Here is one approach that might work: Can you check if the assertion\n> added by the attached patch fails very quickly with your test case?\n\nI reproduced the issue on a new/fresh cluster like this:\n\n./postgres -D data -c autovacuum_naptime=1 -c autovacuum_analyze_scale_factor=0.005 -c log_autovacuum_min_duration=-1\npsql -h /tmp postgres -c \"CREATE TABLE t(i int); INSERT INTO t SELECT generate_series(1,99999); CREATE INDEX ON t(i);\"\ntime while psql -h /tmp postgres -qc 'REINDEX (CONCURRENTLY) INDEX t_i_idx'; do :; done&\ntime while psql -h /tmp postgres -qc 'ANALYZE pg_attribute'; do :; done&\n\nTRAP: FailedAssertion(\"restarts == 0\", File: \"vacuumlazy.c\", Line: 1803, PID: 10367)\npostgres: autovacuum worker postgres(ExceptionalCondition+0x99)[0x5633f3ad6b09]\npostgres: autovacuum worker postgres(+0x1c0a37)[0x5633f36cca37]\npostgres: autovacuum worker postgres(heap_vacuum_rel+0xfca)[0x5633f36cf75a]\npostgres: autovacuum worker postgres(+0x305fed)[0x5633f3811fed]\npostgres: autovacuum worker postgres(vacuum+0x61a)[0x5633f38137ea]\npostgres: autovacuum worker postgres(+0x409dd3)[0x5633f3915dd3]\npostgres: autovacuum worker postgres(+0x40ae46)[0x5633f3916e46]\npostgres: autovacuum worker postgres(AutoVacuumUpdateDelay+0x0)[0x5633f3916f50]\npostgres: autovacuum worker postgres(+0x41985b)[0x5633f392585b]\n/lib/x86_64-linux-gnu/libpthread.so.0(+0x12890)[0x7f085c591890]\n/lib/x86_64-linux-gnu/libc.so.6(__select+0x17)[0x7f085bafaff7]\npostgres: autovacuum worker postgres(+0x419d06)[0x5633f3925d06]\npostgres: autovacuum worker postgres(PostmasterMain+0xcbb)[0x5633f39277bb]\npostgres: autovacuum worker postgres(main+0x4d4)[0x5633f3660a14]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7)[0x7f085ba05b97]\npostgres: autovacuum worker postgres(_start+0x2a)[0x5633f3660aba]\n2021-06-08 19:10:36.875 CDT postmaster[13483] LOG: server process (PID 10367) was terminated by signal 6: Aborted\n2021-06-08 19:10:36.875 CDT 
postmaster[13483] DETAIL: Failed process was running: autovacuum: VACUUM pg_toast.pg_toast_2619\n2021-06-08 19:10:36.875 CDT postmaster[13483] LOG: terminating any other active server processes\nWARNING: terminating connection because of crash of another server process\nDETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\nHINT: In a moment you should be able to reconnect to the database and repeat your command.\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nconnection to server was lost\n\nreal 0m14.477s\n\n\n", "msg_date": "Tue, 8 Jun 2021 19:18:18 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, Jun 8, 2021 at 5:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wonder if this is a variant of the problem shown at\n>\n> https://www.postgresql.org/message-id/2591376.1621196582%40sss.pgh.pa.us\n>\n> where maybe_needed was visibly quite insane. This value is\n> less visibly insane, but it's still wrong. It might be\n> interesting to try running this test case with the extra\n> assertions I proposed there, to try to narrow down where\n> it's going off the rails.\n\nOh yeah. Justin didn't say anything about upgrading using pg_upgrade\n(just something about upgrading the kernel).\n\nDid you use pg_upgrade here, Justin?\n\nI'm going to see Andres in person in 20 minutes time (for the first\ntime in over a year!). 
I'll discuss this issue with him.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 8 Jun 2021 17:44:15 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, Jun 8, 2021 at 5:18 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I reproduced the issue on a new/fresh cluster like this:\n>\n> ./postgres -D data -c autovacuum_naptime=1 -c autovacuum_analyze_scale_factor=0.005 -c log_autovacuum_min_duration=-1\n> psql -h /tmp postgres -c \"CREATE TABLE t(i int); INSERT INTO t SELECT generate_series(1,99999); CREATE INDEX ON t(i);\"\n> time while psql -h /tmp postgres -qc 'REINDEX (CONCURRENTLY) INDEX t_i_idx'; do :; done&\n> time while psql -h /tmp postgres -qc 'ANALYZE pg_attribute'; do :; done&\n\nI don't have time to try this out myself today, but offhand I'm pretty\nconfident that this is sufficient to reproduce the underlying bug\nitself. And if that's true then I guess it can't have anything to do\nwith the pg_upgrade/pg_resetwal issue Tom just referenced, despite the\napparent similarity.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 8 Jun 2021 17:47:28 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, Jun 08, 2021 at 05:44:15PM -0700, Peter Geoghegan wrote:\n> On Tue, Jun 8, 2021 at 5:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I wonder if this is a variant of the problem shown at\n> >\n> > https://www.postgresql.org/message-id/2591376.1621196582%40sss.pgh.pa.us\n> >\n> > where maybe_needed was visibly quite insane. This value is\n> > less visibly insane, but it's still wrong. It might be\n> > interesting to try running this test case with the extra\n> > assertions I proposed there, to try to narrow down where\n> > it's going off the rails.\n> \n> Oh yeah. 
Justin didn't say anything about upgrading using pg_upgrade\n> (just something about upgrading the kernel).\n> \n> Did you use pg_upgrade here, Justin?\n\nYes.\n\nThe kernel upgrade was going to be my hand-waving dismissal of the issue when I\nsaw something waiting on a futex.  (Since a few years ago Tom had to remind me\nabout an old Linux futex bug which we hit after upgrading to v12 on a\ncustomer's server - they like to avoid maintenance at all costs).\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 8 Jun 2021 19:50:39 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Wed, Jun 9, 2021 at 2:17 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-06-08 14:27:14 +0200, Matthias van de Meent wrote:\n> > heap_prune_satisfies_vacuum considers 1 more transaction to be\n> > unvacuumable, and thus indeed won't vacuum that tuple that\n> > HeapTupleSatisfiesVacuum does want to be vacuumed.\n> >\n> > The new open question is now: Why is\n> > GlobalVisCatalogRels->maybe_needed < OldestXmin? IIRC\n> > GlobalVisCatalogRels->maybe_needed is constructed from the same\n> > ComputeXidHorizonsResult->catalog_oldest_nonremovable which later is\n> > returned to be used in vacrel->OldestXmin.\n>\n> The horizon used by pruning is only updated once per transaction (well,\n> approximately). What presumably is happening is that the retry loop is\n> retrying, without updating the horizon, therefore the same thing is\n> happening over and over again?\n\nWhen we calculated vacrel->OldestXmin in vacuum_set_xid_limits(),\nvacrel->OldestXmin and GlobalVisCatalogRels->maybe_needed must have\nbeen the same value. That is, those were 926025113. After that,\nvacrel->OldestXmin is not changed throughout lazy vacuum whereas\nGlobalVisCatalogRels->maybe_needed could be updated (right?).
Is there\nany chance that GlobalVisCatalogRels->maybe_needed goes backward? For\nexample, a case like where when re-calculating\ncatalog_oldest_nonremovable (i.g. updating\nGlobalVisCatalogRels->maybe_needed) we take a process into account who\nhas an old XID but was ignored last time for some reason (e.g., its\nstatusFlag).\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 9 Jun 2021 11:26:12 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, Jun 08, 2021 at 05:47:28PM -0700, Peter Geoghegan wrote:\n> I don't have time to try this out myself today, but offhand I'm pretty\n> confident that this is sufficient to reproduce the underlying bug\n> itself. And if that's true then I guess it can't have anything to do\n> with the pg_upgrade/pg_resetwal issue Tom just referenced, despite the\n> apparent similarity.\n\nAgreed. It took me a couple of minutes to get autovacuum to run in an\ninfinite loop with a standalone instance. Nice catch, Justin!\n--\nMichael", "msg_date": "Wed, 9 Jun 2021 11:42:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Wed, 9 Jun 2021 at 04:42, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jun 08, 2021 at 05:47:28PM -0700, Peter Geoghegan wrote:\n> > I don't have time to try this out myself today, but offhand I'm pretty\n> > confident that this is sufficient to reproduce the underlying bug\n> > itself. And if that's true then I guess it can't have anything to do\n> > with the pg_upgrade/pg_resetwal issue Tom just referenced, despite the\n> > apparent similarity.\n>\n> Agreed. It took me a couple of minutes to get autovacuum to run in an\n> infinite loop with a standalone instance. 
Nice catch, Justin!\n\nI believe that I've found the culprit:\nGetOldestNonRemovableTransactionId(rel) does not use the exact same\nconditions for returning OldestXmin as GlobalVisTestFor(rel) does.\nThis results in different minimal XIDs, and subsequently this failure.\n\nThe attached patch fixes this inconsistency, and adds a set of asserts\nto ensure that GetOldestNonRemovableTransactionId is equal to the\nmaybe_needed of the GlobalVisTest of that relation, plus some at\nGlobalVisUpdateApply such that it will fail whenever it is called with\narguments that would move the horizons in the wrong direction. Note\nthat there was no problem in GlobalVisUpdateApply, but it helped me\ndetermine that that part was not the source of the problem, and I\nthink that having this safeguard is a net-positive.\n\nAnother approach might be changing GlobalVisTestFor(rel) instead to\nreflect the conditions in GetOldestNonRemovableTransactionId.\n\nWith attached prototype patch, I was unable to reproduce the\nproblematic case in 10 minutes.
Without, I got the problematic\nbehaviour in seconds.\n\nWith regards,\n\nMatthias", "msg_date": "Wed, 9 Jun 2021 17:42:34 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "Hi,\n\nGood find!\n\nOn 2021-06-09 17:42:34 +0200, Matthias van de Meent wrote:\n> I believe that I've found the culprit:\n> GetOldestNonRemovableTransactionId(rel) does not use the exact same\n> conditions for returning OldestXmin as GlobalVisTestFor(rel) does.\n> This results in different minimal XIDs, and subsequently this failure.\n\nSpecifically, the issue is that it uses the innocuous looking\n\n\telse if (RelationIsAccessibleInLogicalDecoding(rel))\n\t\treturn horizons.catalog_oldest_nonremovable;\n\nbut that's not sufficient, because\n\n#define RelationIsAccessibleInLogicalDecoding(relation) \\\n\t(XLogLogicalInfoActive() && \\\n\t RelationNeedsWAL(relation) && \\\n\t (IsCatalogRelation(relation) || RelationIsUsedAsCatalogTable(relation)))\n\nit is never true if wal_level < logical. 
So what it is missing is the\nIsCatalogRelation(rel) || bit.\n\n\n> The attached patch fixes this inconsistency\n\nI think I prefer applying the fix and the larger changes separately.\n\n\n> Another approach might be changing GlobalVisTestFor(rel) instead to\n> reflect the conditions in GetOldestNonRemovableTransactionId.\n\nNo, that'd not be correct, afaict.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 9 Jun 2021 11:45:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Wed, Jun 9, 2021 at 11:45 AM Andres Freund <andres@anarazel.de> wrote:\n> Good find!\n\n+1\n\n> > The attached patch fixes this inconsistency\n>\n> I think I prefer applying the fix and the larger changes separately.\n\nI wonder if it's worth making the goto inside lazy_scan_prune verify\nthat the heap tuple matches what we expect. I'm sure that we would\nhave found this issue far sooner if that had been in place already.\nThough I'm less sure of how much value adding such a check has now.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 9 Jun 2021 13:45:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Wed, 9 Jun 2021 at 20:45, Andres Freund <andres@anarazel.de> wrote:\n>\n> Specifically, the issue is that it uses the innocuous looking\n>\n> else if (RelationIsAccessibleInLogicalDecoding(rel))\n> return horizons.catalog_oldest_nonremovable;\n>\n> but that's not sufficient, because\n>\n> #define RelationIsAccessibleInLogicalDecoding(relation) \\\n> (XLogLogicalInfoActive() && \\\n> RelationNeedsWAL(relation) && \\\n> (IsCatalogRelation(relation) || RelationIsUsedAsCatalogTable(relation)))\n>\n> it is never true if wal_level < logical. 
So what it is missing is the\n> IsCatalogRelation(rel) || bit.\n\nCorrect.\n\n> > The attached patch fixes this inconsistency\n>\n> I think I prefer applying the fix and the larger changes separately.\n\nFeel free to change anything in that patch, it was a prototype, or\ngive me a notice if you want me to split the patch.\n\n> > Another approach might be changing GlobalVisTestFor(rel) instead to\n> > reflect the conditions in GetOldestNonRemovableTransactionId.\n>\n> No, that'd not be correct, afaict.\n\nAlright, I wasn't sure of that myself.\n\nWith regards,\n\nMatthias van de Meent.\n\n\n", "msg_date": "Thu, 10 Jun 2021 17:20:47 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Wed, 9 Jun 2021 at 22:45, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Jun 9, 2021 at 11:45 AM Andres Freund <andres@anarazel.de> wrote:\n> > Good find!\n>\n> +1\n>\n> > > The attached patch fixes this inconsistency\n> >\n> > I think I prefer applying the fix and the larger changes separately.\n>\n> I wonder if it's worth making the goto inside lazy_scan_prune verify\n> that the heap tuple matches what we expect. I'm sure that we would\n> have found this issue far sooner if that had been in place already.\n> Though I'm less sure of how much value adding such a check has now.\n\nCould you elaborate on what this \"matches what we expect\" entails?\n\nApart from this, I'm also quite certain that the goto-branch that\ncreated this infinite loop should have been dead code: In a correctly\nworking system, the GlobalVis*Rels should always be at least as strict\nas the vacrel->OldestXmin, but at the same time only GlobalVis*Rels\ncan be updated (i.e. move their horizon forward) during the vacuum. As\nsuch, heap_prune_satisfies_vacuum should never fail to vacuum a tuple\nthat also satisfies the condition of HeapTupleSatisfiesVacuum.
That\nis, unless we're also going to change code to update / move forward\nvacrel->OldestXmin in lazy_scan_prune between the HPSV call and the\nloop with HTSV.\n\nWith regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Thu, 10 Jun 2021 17:49:05 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Thu, Jun 10, 2021 at 8:49 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> Could you elaborate on what this \"matches what we expect\" entails?\n>\n> Apart from this, I'm also quite certain that the goto-branch that\n> created this infinite loop should have been dead code: In a correctly\n> working system, the GlobalVis*Rels should always be at least as strict\n> as the vacrel->OldestXmin, but at the same time only GlobalVis*Rels\n> can be updated (i.e. move their horizon forward) during the vacuum. As\n> such, heap_prune_satisfies_vacuum should never fail to vacuum a tuple\n> that also satisfies the condition of HeapTupleSatisfiesVacuum.\n\nIt's true that these two similar functions should be in perfect\nagreement in general (given the same OldestXmin). That in itself\ndoesn't mean that they must always agree about a tuple in practice,\nwhen they're called in turn inside lazy_scan_prune(). In particular,\nnothing stops a transaction that was still in progress as of the\nheap_prune_satisfies_vacuum() call (when it saw some tuples it inserted)\nfrom concurrently aborting. That will render the same tuples fully DEAD\ninside HeapTupleSatisfiesVacuum(). So we need to restart using the\ngoto purely to cover that case. See the commit message of commit\n8523492d4e3.\n\nBy \"matches what we expect\", I meant \"involves a just-aborted\ntransaction\". We could defensively verify that the inserting\ntransaction concurrently aborted at the point of retrying/calling\nheap_page_prune() a second time.
If there is no aborted transaction\ninvolved (as was the case with this bug), then we can be confident\nthat something is seriously broken.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 10 Jun 2021 09:03:06 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Thu, 10 Jun 2021 at 18:03, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Jun 10, 2021 at 8:49 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > Could you elaborate on what this \"matches what we expect\" entails?\n> >\n> > Apart from this, I'm also quite certain that the goto-branch that\n> > created this infinite loop should have been dead code: In a correctly\n> > working system, the GlobalVis*Rels should always be at least as strict\n> > as the vacrel->OldestXmin, but at the same time only GlobalVis*Rels\n> > can be updated (i.e. move their horizon forward) during the vacuum. As\n> > such, heap_prune_satisfies_vacuum should never fail to vacuum a tuple\n> > that also satisifies the condition of HeapTupleSatisfiesVacuum.\n>\n> It's true that these two similar functions should be in perfect\n> agreement in general (given the same OldestXmin). That in itself\n> doesn't mean that they must always agree about a tuple in practice,\n> when they're called in turn inside lazy_scan_prune(). In particular,\n> nothing stops a transaction that was in progress to\n> heap_prune_satisfies_vacuum (when it saw some tuples it inserted)\n> concurrently aborting. That will render the same tuples fully DEAD\n> inside HeapTupleSatisfiesVacuum(). So we need to restart using the\n> goto purely to cover that case. See the commit message of commit\n> 8523492d4e3.\n\nI totally overlooked that HeapTupleSatisfiesVacuumHorizon does the\nheavyweight XID validation and does return HEAPTUPLE_DEAD in those\nrecently rolled back cases. 
Thank you for reminding me.\n\n> By \"matches what we expect\", I meant \"involves a just-aborted\n> transaction\". We could defensively verify that the inserting\n> transaction concurrently aborted at the point of retrying/calling\n> heap_page_prune() a second time. If there is no aborted transaction\n> involved (as was the case with this bug), then we can be confident\n> that something is seriously broken.\n\nI believe there are more cases than only the rolled back case, but\nchecking for those cases would potentially help, yes.\n\nWith regards,\n\nMatthias van de Meent.\n\n\n", "msg_date": "Thu, 10 Jun 2021 18:57:08 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Thu, Jun 10, 2021 at 9:57 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> > By \"matches what we expect\", I meant \"involves a just-aborted\n> > transaction\". We could defensively verify that the inserting\n> > transaction concurrently aborted at the point of retrying/calling\n> > heap_page_prune() a second time. If there is no aborted transaction\n> > involved (as was the case with this bug), then we can be confident\n> > that something is seriously broken.\n>\n> I believe there are more cases than only the rolled back case, but\n> checking for those cases would potentially help, yes.\n\nWhy do you believe that there are other cases?\n\nI'm not aware of any case that causes lazy_scan_prune() to retry using\nthe goto, other than the aborted transaction case I described\n(excluding the bug that you diagnosed, which was of course never\nsupposed to happen). 
If it really is possible to observe a retry for\nany other reason then I'd very much like to know all the details - it\nmight well signal a distinct bug of the same general variety.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 10 Jun 2021 10:07:36 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "Hi,\n\nOn 2021-06-10 17:49:05 +0200, Matthias van de Meent wrote:\n> Apart from this, I'm also quite certain that the goto-branch that\n> created this infinite loop should have been dead code: In a correctly\n> working system, the GlobalVis*Rels should always be at least as strict\n> as the vacrel->OldestXmin, but at the same time only GlobalVis*Rels\n> can be updated (i.e. move their horizon forward) during the vacuum. As\n> such, heap_prune_satisfies_vacuum should never fail to vacuum a tuple\n> that also satisifies the condition of HeapTupleSatisfiesVacuum. That\n> is, unless we're also going to change code to update / move forward\n> vacrel->OldestXmin in lazy_scan_prune between the HPSV call and the\n> loop with HTSV.\n\nConsider the case of a transaction that inserted a row aborting. That\ntuple will be \"fully dead\" regardless of any xid horizons.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 10 Jun 2021 10:09:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Thu, 10 Jun 2021 at 19:07, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Jun 10, 2021 at 9:57 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > > By \"matches what we expect\", I meant \"involves a just-aborted\n> > > transaction\". We could defensively verify that the inserting\n> > > transaction concurrently aborted at the point of retrying/calling\n> > > heap_page_prune() a second time. 
If there is no aborted transaction\n> > > involved (as was the case with this bug), then we can be confident\n> > > that something is seriously broken.\n> >\n> > I believe there are more cases than only the rolled back case, but\n> > checking for those cases would potentially help, yes.\n>\n> Why do you believe that there are other cases?\n>\n> I'm not aware of any case that causes lazy_scan_prune() to retry using\n> the goto, other than the aborted transaction case I described\n> (excluding the bug that you diagnosed, which was of course never\n> supposed to happen). If it really is possible to observe a retry for\n> any other reason then I'd very much like to know all the details - it\n> might well signal a distinct bug of the same general variety.\n\nI see one exit for HEAPTUPLE_DEAD on a potentially recently committed\nxvac (?), and we might also check against recently committed\ntransactions if xmin == xmax, although apparently that is not\nimplemented right now.\n\nWith regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Thu, 10 Jun 2021 19:29:29 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Thu, Jun 10, 2021 at 10:29 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> I see one exit for HEAPTUPLE_DEAD on a potentially recently committed\n> xvac (?), and we might also check against recently committed\n> transactions if xmin == xmax, although apparently that is not\n> implemented right now.\n\nI don't follow. 
Perhaps you can produce a test case?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 10 Jun 2021 10:42:47 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On 2021-Jun-10, Peter Geoghegan wrote:\n\n> On Thu, Jun 10, 2021 at 10:29 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > I see one exit for HEAPTUPLE_DEAD on a potentially recently committed\n> > xvac (?), and we might also check against recently committed\n> > transactions if xmin == xmax, although apparently that is not\n> > implemented right now.\n\nxvac was used by the pre-9.0 VACUUM FULL, so I don't think it's possible\nto see a recently committed one.  (I think you'd have to find a table\nthat was pg_upgraded from 8.4 or older, with leftover tuples from an\naborted VACUUM FULL, and never vacuumed after that.)\n\nA scenario with such a tuple on disk is not impossible [in theory],\nbut if it does exist, then the VACUUM FULL would not be in the\npossibly-visible horizon.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"Linux transformó mi computadora, de una `máquina para hacer cosas',\nen un aparato realmente entretenido, sobre el cual cada día aprendo\nalgo nuevo\" (Jaime Salinas)\n\n\n", "msg_date": "Thu, 10 Jun 2021 14:14:55 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "Hi,\n\nOn 2021-06-08 19:18:18 -0500, Justin Pryzby wrote:\n> I reproduced the issue on a new/fresh cluster like this:\n> \n> ./postgres -D data -c autovacuum_naptime=1 -c autovacuum_analyze_scale_factor=0.005 -c log_autovacuum_min_duration=-1\n> psql -h /tmp postgres -c \"CREATE TABLE t(i int); INSERT INTO t SELECT generate_series(1,99999); CREATE INDEX ON t(i);\"\n> time while psql -h /tmp postgres -qc 'REINDEX (CONCURRENTLY) INDEX t_i_idx'; do :;
done&\n> time while psql -h /tmp postgres -qc 'ANALYZE pg_attribute'; do :; done&\n> \n> TRAP: FailedAssertion(\"restarts == 0\", File: \"vacuumlazy.c\", Line: 1803, PID: 10367)\n\nHas anybody looked at getting test coverage for the retry path? Not with\nthe goal of triggering an assertion, just to have at least basic\ncoverage.\n\nThe problem with writing a test is likely to find a way to halfway\nreliably schedule a transaction abort after pruning, but before the\ntuple-removal loop? Does anybody see a trick to do so?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 10 Jun 2021 17:58:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Thu, Jun 10, 2021 at 5:58 PM Andres Freund <andres@anarazel.de> wrote:\n> The problem with writing a test is likely to find a way to halfway\n> reliably schedule a transaction abort after pruning, but before the\n> tuple-removal loop? Does anybody see a trick to do so?\n\nI asked Alexander about using his pending stop events infrastructure\npatch to test this code, back when it did the tupgone stuff rather\nthan loop:\n\nhttps://postgr.es/m/CAH2-Wz=Tb7bAgCFt0VFA0YJ5Vd1RxJqZRc\n\nI can't see any better way.\n\nISTM that it would be much more useful to focus on adding an assertion\n(or maybe even a \"can't happen\" error) that fails when the DEAD/goto\npath is reached with a tuple whose xmin wasn't aborted. If that was in\nplace then we would have caught the bug in\nGetOldestNonRemovableTransactionId() far sooner. 
That might actually\ncatch other bugs in the future.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 10 Jun 2021 18:49:50 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "Hi,\n\nOn 2021-06-10 18:49:50 -0700, Peter Geoghegan wrote:\n> ISTM that it would be much more useful to focus on adding an assertion\n> (or maybe even a \"can't happen\" error) that fails when the DEAD/goto\n> path is reached with a tuple whose xmin wasn't aborted. If that was in\n> place then we would have caught the bug in\n> GetOldestNonRemovableTransactionId() far sooner. That might actually\n> catch other bugs in the future.\n\nI'm not convinced - right now we don't exercise this path in tests at\nall. More assertions won't change that - stuff that can be triggered in\nproduction-ish loads doesn't help during development. I do think that\nthat makes it far too easy to have state management bugs (e.g. a wrong\npincount in retry cases or such).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 10 Jun 2021 19:00:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> ISTM that it would be much more useful to focus on adding an assertion\n> (or maybe even a \"can't happen\" error) that fails when the DEAD/goto\n> path is reached with a tuple whose xmin wasn't aborted. If that was in\n> place then we would have caught the bug in\n> GetOldestNonRemovableTransactionId() far sooner. That might actually\n> catch other bugs in the future.\n\nSounds like a good idea. If we expect that path to be taken only\nrarely, then a test-and-elog would be worth its keep. 
Otherwise\nmaybe it should just be an Assert.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Jun 2021 22:00:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Thu, Jun 10, 2021 at 7:00 PM Andres Freund <andres@anarazel.de> wrote:\n> I'm not convinced - right now we don't exercise this path in tests at\n> all. More assertions won't change that - stuff that can be triggered in\n> production-ish loads doesn't help during development. I do think that\n> that makes it far too easy to have state management bugs (e.g. a wrong\n> pincount in retry cases or such).\n\nThe code in lazy_scan_prune() led to our detecting this bug (albeit in\na fairly nasty way). The problematic VACUUM operations never actually\nexercised the goto as originally designed, for the purpose it was\nintended for. Perhaps we should add test coverage for the intended\nbehavior too, but that doesn't seem particularly relevant right now.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 10 Jun 2021 19:15:59 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "Hi,\n\nOn 2021-06-10 19:15:59 -0700, Peter Geoghegan wrote:\n> On Thu, Jun 10, 2021 at 7:00 PM Andres Freund <andres@anarazel.de> wrote:\n> > I'm not convinced - right now we don't exercise this path in tests at\n> > all. More assertions won't change that - stuff that can be triggered in\n> > production-ish loads doesn't help during development. I do think that\n> > that makes it far too easy to have state management bugs (e.g. a wrong\n> > pincount in retry cases or such).\n> \n> The code in lazy_scan_prune() led to our detecting this bug (albeit in\n> a fairly nasty way). 
The problematic VACUUM operations never actually\n> exercised the goto as originally designed, for the purpose it was\n> intended for. Perhaps we should add test coverage for the intended\n> behavior too, but that doesn't seem particularly relevant right now.\n\nWell, I'd like to add assertions ensuring the retry path is only entered\nwhen correct - but I feel hesitant about doing so when I can't exercise\nthat path reliably in at least some of the situations.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 10 Jun 2021 19:38:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Thu, Jun 10, 2021 at 7:38 PM Andres Freund <andres@anarazel.de> wrote:\n> Well, I'd like to add assertions ensuring the retry path is only entered\n> when correct - but I feel hesitant about doing so when I can't exercise\n> that path reliably in at least some of the situations.\n\nI originally tested the lazy_scan_prune() goto in the obvious way: by\nadding a pg_usleep() just after its heap_page_prune() call. I'm not\ntoo worried about the restart corrupting state or something, because\nthe state is pretty trivial. In any case the infrastructure to\nexercise the goto inside the tests doesn't exist yet -- I don't see\nany way around that on HEAD.\n\nOTOH I *am* concerned about the goto doing the wrong thing due to bugs\nin distant code. I cannot imagine any possible downside to at least\nasserting HeapTupleHeaderXminInvalid() against the \"concurrently\ninserted then abort\" tuple. 
That simple measure would have been enough\nto at least catch this particular bug far sooner.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 10 Jun 2021 20:16:40 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Thu, 10 Jun 2021 at 19:43, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Jun 10, 2021 at 10:29 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > I see one exit for HEAPTUPLE_DEAD on a potentially recently committed\n> > xvac (?), and we might also check against recently committed\n> > transactions if xmin == xmax, although apparently that is not\n> > implemented right now.\n>\n> I don't follow. Perhaps you can produce a test case?\n\nIf you were to delete a tuple in the same transaction that you create\nit (without checkpoints / subtransactions), I would assume that this\nwould allow us to vacuum the tuple, as the only snapshot that could\nsee the tuple must commit or roll back. In any case, inside the\ntransaction the tuple is not visible anymore, and outside the\ntransaction the tuple will never be seen. 
That being the case, any\nsuch tuple that has xmin == xmax should be vacuumable at any time,\nexcept that you might want to wait for the transaction to have\ncommitted/rolled back to prevent any race conditions with (delayed)\nindex insertions.\n\nexample:\n\nBEGIN;\nINSERT INTO tab VALUES (1);\nDELETE FROM tab;\n-- At this point, the tuple inserted cannot be seen in any\n-- current or future snapshot, and could thus be vacuumed.\nCOMMIT;\n\nBecause I am not quite yet well versed with the xid assignment and\nheapam deletion subsystems, it could very well be that either this\ncase is impossible to reach, or that the heapam tuple delete logic\nalready applies this at tuple delete time.\n\n\nWith regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Mon, 14 Jun 2021 11:53:47 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "Hi,\n\nOn 2021-06-14 11:53:47 +0200, Matthias van de Meent wrote:\n> On Thu, 10 Jun 2021 at 19:43, Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Thu, Jun 10, 2021 at 10:29 AM Matthias van de Meent\n> > <boekewurm+postgres@gmail.com> wrote:\n> > > I see one exit for HEAPTUPLE_DEAD on a potentially recently committed\n> > > xvac (?), and we might also check against recently committed\n> > > transactions if xmin == xmax, although apparently that is not\n> > > implemented right now.\n> >\n> > I don't follow. 
Perhaps you can produce a test case?\n> \n> If you were to delete a tuple in the same transaction that you create\n> it (without checkpoints / subtransactions), I would assume that this\n> would allow us to vacuum the tuple, as the only snapshot that could\n> see the tuple must commit or roll back.\n\nRight now we do not do so, but I think we talked about adding such logic\na couple times.\n\nI think a more robust assertion than aborted-ness could be to assert\nthat repeated retries are not allowed to have the same \"oldest xid\" than\na previous retry. With oldest xid be the older of xmin/xmax?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Jun 2021 15:12:26 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "Hi,\n\n> @@ -4032,6 +4039,24 @@ GlobalVisTestShouldUpdate(GlobalVisState *state)\n> static void\n> GlobalVisUpdateApply(ComputeXidHorizonsResult *horizons)\n> {\n> +\t/* assert non-decreasing nature of horizons */\n> +\tAssert(FullTransactionIdFollowsOrEquals(\n> +\t\t\t FullXidRelativeTo(horizons->latest_completed,\n> +\t\t\t\t\t\t\t\t horizons->shared_oldest_nonremovable),\n> +\t\t\t GlobalVisSharedRels.maybe_needed));\n> +\tAssert(FullTransactionIdFollowsOrEquals(\n> +\t\t\t FullXidRelativeTo(horizons->latest_completed,\n> +\t\t\t\t\t\t\t\t horizons->catalog_oldest_nonremovable),\n> +\t\t\t GlobalVisCatalogRels.maybe_needed));\n> +\tAssert(FullTransactionIdFollowsOrEquals(\n> +\t\t\t FullXidRelativeTo(horizons->latest_completed,\n> +\t\t\t\t\t\t\t\t horizons->data_oldest_nonremovable),\n> +\t\t\t GlobalVisDataRels.maybe_needed));\n> +\tAssert(FullTransactionIdFollowsOrEquals(\n> +\t\t\t FullXidRelativeTo(horizons->latest_completed,\n> +\t\t\t\t\t\t\t\t horizons->temp_oldest_nonremovable),\n> +\t\t\t GlobalVisTempRels.maybe_needed));\n> +\n> \tGlobalVisSharedRels.maybe_needed =\n> 
\t\tFullXidRelativeTo(horizons->latest_completed,\n> \t\t\t\t\t\t horizons->shared_oldest_nonremovable);\n\nThinking more about it, I don't think these are correct. See the\nfollowing comment in procarray.c:\n\n * Note: despite the above, it's possible for the calculated values to move\n * backwards on repeated calls. The calculated values are conservative, so\n * that anything older is definitely not considered as running by anyone\n * anymore, but the exact values calculated depend on a number of things. For\n * example, if there are no transactions running in the current database, the\n * horizon for normal tables will be latestCompletedXid. If a transaction\n * begins after that, its xmin will include in-progress transactions in other\n * databases that started earlier, so another call will return a lower value.\n * Nonetheless it is safe to vacuum a table in the current database with the\n * first result. There are also replication-related effects: a walsender\n * process can set its xmin based on transactions that are no longer running\n * on the primary but are still being replayed on the standby, thus possibly\n * making the values go backwards. In this case there is a possibility that\n * we lose data that the standby would like to have, but unless the standby\n * uses a replication slot to make its xmin persistent there is little we can\n * do about that --- data is only protected if the walsender runs continuously\n * while queries are executed on the standby. (The Hot Standby code deals\n * with such cases by failing standby queries that needed to access\n * already-removed data, so there's no integrity bug.) 
The computed values\n * are also adjusted with vacuum_defer_cleanup_age, so increasing that setting\n * on the fly is another easy way to make horizons move backwards, with no\n * consequences for data integrity.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Jun 2021 18:22:42 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Tue, 15 Jun 2021 at 03:22, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> > @@ -4032,6 +4039,24 @@ GlobalVisTestShouldUpdate(GlobalVisState *state)\n> > static void\n> > GlobalVisUpdateApply(ComputeXidHorizonsResult *horizons)\n> > {\n> > + /* assert non-decreasing nature of horizons */\n>\n> Thinking more about it, I don't think these are correct. See the\n> following comment in procarray.c:\n>\n> * Note: despite the above, it's possible for the calculated values to move\n> * backwards on repeated calls.\n\nSo the implicit assumption in heap_page_prune that\nHeapTupleSatisfiesVacuum(OldestXmin) is always consistent with\nheap_prune_satisfies_vacuum(vacrel) has never been true. In that case,\nwe'll need to redo the condition in heap_page_prune as well.\n\nPFA my adapted patch that fixes this new-ish issue, and does not\ninclude the (incorrect) assertions in GlobalVisUpdateApply. I've\ntested this against the reproducing case, both with and without the\nfix in GetOldestNonRemovableTransactionId, and it fails fall into an\ninfinite loop.\n\nI would appreciate it if someone could validate the new logic in the\nHEAPTUPLE_DEAD case. 
Although I believe it correctly handles the case\nwhere the vistest non-removable horizon moved backwards, a second pair\nof eyes would be appreciated.\n\n\nWith regards,\n\nMatthias van de Meent", "msg_date": "Wed, 16 Jun 2021 12:59:33 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Wed, Jun 16, 2021 at 3:59 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> On Tue, 15 Jun 2021 at 03:22, Andres Freund <andres@anarazel.de> wrote:\n> > > @@ -4032,6 +4039,24 @@ GlobalVisTestShouldUpdate(GlobalVisState *state)\n> > > static void\n> > > GlobalVisUpdateApply(ComputeXidHorizonsResult *horizons)\n> > > {\n> > > + /* assert non-decreasing nature of horizons */\n> >\n> > Thinking more about it, I don't think these are correct. See the\n> > following comment in procarray.c:\n> >\n> > * Note: despite the above, it's possible for the calculated values to move\n> > * backwards on repeated calls.\n>\n> So the implicit assumption in heap_page_prune that\n> HeapTupleSatisfiesVacuum(OldestXmin) is always consistent with\n> heap_prune_satisfies_vacuum(vacrel) has never been true. In that case,\n> we'll need to redo the condition in heap_page_prune as well.\n\nI don't think that this shows that the assumption within\nlazy_scan_prune() (the assumption that both \"satisfies vacuum\"\nfunctions agree) is wrong, with the obvious exception of cases\ninvolving the bug that Justin reported. GlobalVis*.maybe_needed is\nsupposed to be conservative.\n\n> PFA my adapted patch that fixes this new-ish issue, and does not\n> include the (incorrect) assertions in GlobalVisUpdateApply. 
I've\n> tested this against the reproducing case, both with and without the\n> fix in GetOldestNonRemovableTransactionId, and it fails fall into an\n> infinite loop.\n>\n> I would appreciate it if someone could validate the new logic in the\n> HEAPTUPLE_DEAD case. Although I believe it correctly handles the case\n> where the vistest non-removable horizon moved backwards, a second pair\n> of eyes would be appreciated.\n\nIf you look at the lazy_scan_prune() logic immediately prior to commit\n8523492d4e3, you'll see that it used to have a HEAPTUPLE_DEAD case\nthat didn't involve a restart -- this was the \"tupgone\" mechanism.\nBack then we actually had to remove any corresponding index tuples\nfrom indexes when in this rare case. Plus there was a huge amount of\ncomplicated mechanism to handle a very rare case, all of which was\nremoved by commit 8523492d4e3. Things like extra recovery conflict\ncode just for this rare case, or needing to acquire a super exclusive\nlock on pages during VACUUM's second heap pass. 
This is all cruft that\nI was happy to get rid of.\n\nThis is a good discussion of the tupgone stuff and the problems it\ncaused, which is good background information:\n\nhttps://www.postgresql.org/message-id/20200724165514.dnu5hr4vvgkssf5p%40alap3.anarazel.de\n\nEven if it was true that heap_prune_satisfies_vacuum() won't agree\nwith HeapTupleSatisfiesVacuum() after repeated retries within\nlazy_scan_prune(), it would probably be best if we then made code outside\nvacuumlazy.c agree with the lazy_scan_prune() assumption, rather than\nthe other way around.\n\nHave you actually been able to demonstrate a problem involving\nlazy_scan_prune()'s goto, except the main\nGetOldestNonRemovableTransactionId() bug reported by Justin?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 16 Jun 2021 09:03:29 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Wed, Jun 16, 2021 at 9:03 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Wed, Jun 16, 2021 at 3:59 AM Matthias van de Meent\n> > So the implicit assumption in heap_page_prune that\n> > HeapTupleSatisfiesVacuum(OldestXmin) is always consistent with\n> > heap_prune_satisfies_vacuum(vacrel) has never been true. In that case,\n> > we'll need to redo the condition in heap_page_prune as well.\n>\n> I don't think that this shows that the assumption within\n> lazy_scan_prune() (the assumption that both \"satisfies vacuum\"\n> functions agree) is wrong, with the obvious exception of cases\n> involving the bug that Justin reported. GlobalVis*.maybe_needed is\n> supposed to be conservative.\n\nI suppose it's true that they can disagree because we call\nvacuum_set_xid_limits() to get an OldestXmin inside vacuumlazy.c\nbefore calling GlobalVisTestFor() inside vacuumlazy.c to get a\nvistest.
But that only implies that a tuple that would have been\nconsidered RECENTLY_DEAD inside lazy_scan_prune() (it just missed\nbeing considered DEAD according to OldestXmin) is seen as an LP_DEAD\nstub line pointer. Which really means it's DEAD to lazy_scan_prune()\nanyway. These days the only way that lazy_scan_prune() can consider a\ntuple fully DEAD is if it's no longer a tuple -- it has to actually be\nan LP_DEAD stub line pointer.\n\nIt's really no different to an opportunistic prune that concurrently\nprunes tuples that VACUUM would have seen as RECENTLY_DEAD if it was\ngoing solely on the OldestXmin cutoff. There are certain kinds of\ntables where non-HOT updates and opportunistic pruning constantly\nleave behind loads of LP_DEAD items. Pruning inside VACUUM won't do\nmuch of the total required pruning at all. That'll mean that some\nDEAD/LP_DEAD items will become dead long after a VACUUM starts, while\nnevertheless being removed by the same VACUUM. Of course there is no\nway for lazy_scan_prune() to distinguish one LP_DEAD item from another\n-- they're all stubs without tuple storage, and without a tuple header\nwith XIDs.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 16 Jun 2021 09:46:07 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "Hi,\n\nOn 2021-06-16 12:59:33 +0200, Matthias van de Meent wrote:\n> PFA my adapted patch that fixes this new-ish issue, and does not\n> include the (incorrect) assertions in GlobalVisUpdateApply. I've\n> tested this against the reproducing case, both with and without the\n> fix in GetOldestNonRemovableTransactionId, and it fails fall into an\n> infinite loop.\n\nCould you share your testcase? 
I've been working on a series of patches\nto address this (I'll share in a bit), and I've run quite a few tests,\nand didn't hit any infinite loops.\n\n\n\n> diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c\n> index 4b600e951a..f4320d5a34 100644\n> --- a/src/backend/access/heap/vacuumlazy.c\n> +++ b/src/backend/access/heap/vacuumlazy.c\n> @@ -1675,6 +1675,12 @@ lazy_scan_heap(LVRelState *vacrel, VacuumParams *params, bool aggressive)\n> * that any items that make it into the dead_tuples array are simple LP_DEAD\n> * line pointers, and that every remaining item with tuple storage is\n> * considered as a candidate for freezing.\n> + * \n> + * Note: It is possible that vistest's window moves back from the\n> + * vacrel->OldestXmin (see ComputeXidHorizons). To prevent an infinite\n> + * loop where we bounce between HeapTupleSatisfiesVacuum and \n> + * heap_prune_satisfies_vacuum who disagree on the [almost]deadness of\n> + * a tuple, we only retry when we know HTSV agrees with HPSV.\n> */\n\nHTSV is quite widely used because HeapTupleSatisfiesVacuum is quite\nwidely used. HPSV isn't, so it's a bit confusing to use this.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Jun 2021 12:12:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "Hi,\n\nOn 2021-06-16 09:46:07 -0700, Peter Geoghegan wrote:\n> On Wed, Jun 16, 2021 at 9:03 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > On Wed, Jun 16, 2021 at 3:59 AM Matthias van de Meent\n> > > So the implicit assumption in heap_page_prune that\n> > > HeapTupleSatisfiesVacuum(OldestXmin) is always consistent with\n> > > heap_prune_satisfies_vacuum(vacrel) has never been true. 
In that case,\n> > > we'll need to redo the condition in heap_page_prune as well.\n> >\n> > I don't think that this shows that the assumption within\n> > lazy_scan_prune() (the assumption that both \"satisfies vacuum\"\n> > functions agree) is wrong, with the obvious exception of cases\n> > involving the bug that Justin reported. GlobalVis*.maybe_needed is\n> > supposed to be conservative.\n> \n> I suppose it's true that they can disagree because we call\n> vacuum_set_xid_limits() to get an OldestXmin inside vacuumlazy.c\n> before calling GlobalVisTestFor() inside vacuumlazy.c to get a\n> vistest. But that only implies that a tuple that would have been\n> considered RECENTLY_DEAD inside lazy_scan_prune() (it just missed\n> being considered DEAD according to OldestXmin) is seen as an LP_DEAD\n> stub line pointer. Which really means it's DEAD to lazy_scan_prune()\n> anyway. These days the only way that lazy_scan_prune() can consider a\n> tuple fully DEAD is if it's no longer a tuple -- it has to actually be\n> an LP_DEAD stub line pointer.\n\nI think it's more complicated than that - \"before\" isn't a guarantee when the\nhorizon can go backwards. 
Consider the case where a hot_standby_feedback=on\nreplica without a slot connects - that can result in the xid horizon going\nbackwards.\n\nI think a good way to address this might be to have GlobalVisUpdateApply()\nensure that maybe_needed does not go backwards within one backend.\n\nThis is *nearly* already guaranteed within vacuum, except for the case where a\ncatalog access between vacuum_set_xid_limits() and GlobalVisTestFor() could\nlead to an attempt at pruning, which could move maybe_needed to go backwards\ntheoretically if in between those two steps a replica connected that causes the\nhorizon to go backwards.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Jun 2021 12:22:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Wed, 16 Jun 2021 at 21:12, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-06-16 12:59:33 +0200, Matthias van de Meent wrote:\n> > PFA my adapted patch that fixes this new-ish issue, and does not\n> > include the (incorrect) assertions in GlobalVisUpdateApply. I've\n> > tested this against the reproducing case, both with and without the\n> > fix in GetOldestNonRemovableTransactionId, and it fails fall into an\n> > infinite loop.\n\n* Fails _to_ fall into an infinite loop. Sorry, failed to add a \"to\".\nIt passes tests\n\n> Could you share your testcase?
I've been working on a series of patches\n> to address this (I'll share in a bit), and I've run quite a few tests,\n> and didn't hit any infinite loops.\n\nBasically, I've tested using the test case shared earlier; 2 sessions\nspamming connections with \"reindex concurrently some_index\" and\n\"analyze pg_attribute\" against the same database.\n\n>\n>\n> > diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c\n> > index 4b600e951a..f4320d5a34 100644\n> > --- a/src/backend/access/heap/vacuumlazy.c\n> > +++ b/src/backend/access/heap/vacuumlazy.c\n> > @@ -1675,6 +1675,12 @@ lazy_scan_heap(LVRelState *vacrel, VacuumParams *params, bool aggressive)\n> > * that any items that make it into the dead_tuples array are simple LP_DEAD\n> > * line pointers, and that every remaining item with tuple storage is\n> > * considered as a candidate for freezing.\n> > + *\n> > + * Note: It is possible that vistest's window moves back from the\n> > + * vacrel->OldestXmin (see ComputeXidHorizons). To prevent an infinite\n> > + * loop where we bounce between HeapTupleSatisfiesVacuum and\n> > + * heap_prune_satisfies_vacuum who disagree on the [almost]deadness of\n> > + * a tuple, we only retry when we know HTSV agrees with HPSV.\n> > */\n>\n> HTSV is quite widely used because HeapTupleSatisfiesVacuum is quite\n> widely used. HPSV isn't, so it's a bit confusing to use this.\n\nSure. 
I thought it was fine to shorten, as the full function name was\njust named the line above and it's a long name, but I'm fine with\neither.\n\nKind regards,\n\nMatthias\n\n\n", "msg_date": "Wed, 16 Jun 2021 21:23:06 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Wed, 16 Jun 2021 at 21:22, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-06-16 09:46:07 -0700, Peter Geoghegan wrote:\n> > On Wed, Jun 16, 2021 at 9:03 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > On Wed, Jun 16, 2021 at 3:59 AM Matthias van de Meent\n> > > > So the implicit assumption in heap_page_prune that\n> > > > HeapTupleSatisfiesVacuum(OldestXmin) is always consistent with\n> > > > heap_prune_satisfies_vacuum(vacrel) has never been true. In that case,\n> > > > we'll need to redo the condition in heap_page_prune as well.\n> > >\n> > > I don't think that this shows that the assumption within\n> > > lazy_scan_prune() (the assumption that both \"satisfies vacuum\"\n> > > functions agree) is wrong, with the obvious exception of cases\n> > > involving the bug that Justin reported. GlobalVis*.maybe_needed is\n> > > supposed to be conservative.\n> >\n> > I suppose it's true that they can disagree because we call\n> > vacuum_set_xid_limits() to get an OldestXmin inside vacuumlazy.c\n> > before calling GlobalVisTestFor() inside vacuumlazy.c to get a\n> > vistest. But that only implies that a tuple that would have been\n> > considered RECENTLY_DEAD inside lazy_scan_prune() (it just missed\n> > being considered DEAD according to OldestXmin) is seen as an LP_DEAD\n> > stub line pointer. Which really means it's DEAD to lazy_scan_prune()\n> > anyway. 
These days the only way that lazy_scan_prune() can consider a\n> > tuple fully DEAD is if it's no longer a tuple -- it has to actually be\n> > an LP_DEAD stub line pointer.\n>\n> I think it's more complicated than that - \"before\" isn't a guarantee when the\n> horizon can go backwards. Consider the case where a hot_standby_feedback=on\n> replica without a slot connects - that can result in the xid horizon going\n> backwards.\n>\n> I think a good way to address this might be to have GlobalVisUpdateApply()\n> ensure that maybe_needed does not go backwards within one backend.\n>\n> This is *nearly* already guaranteed within vacuum, except for the case where a\n> catalog access between vacuum_set_xid_limits() and GlobalVisTestFor() could\n> lead to an attempt at pruning, which could move maybe_needed to go backwards\n> theoretically if inbetween those two steps a replica connected that causes the\n> horizon to go backwards.\n\nI'm tempted to suggest \"update one of GlobalVisUpdateApply /\nComputeXidHorizons to be non-decreasing\". We already have the\ninformation that any previous GlobalVis*->maybe_needed is correct, and\nthat if maybe_needed has been higher that that value is still correct,\nso we might just as well update the code to envelop that case. There's\nsome cases where this might be dangerous: New transactions after a\ntime with no active backends (in this case it should be fine to\nguarantee non-decreasing GlobalVisTestNonRemovableHorizon), and\nwalsender. I'm uncertain whether or not it's dangerous to _not_\nrollback maybe_needed for a new walsender-backend (e.g. 
the backend\nmight want to construct a snapshot of (then) before\nGlobalVisTestNonRemovableHorizon), especially when considering the\ncomment in ProcessStandbyHSFeedbackMessage:\n\n * There is a small window for a race condition here: although we just\n * checked that feedbackXmin precedes nextXid, the nextXid could have\n * gotten advanced between our fetching it and applying the xmin below,\n * perhaps far enough to make feedbackXmin wrap around. In that case the\n * xmin we set here would be \"in the future\" and have no effect. No point\n * in worrying about this since it's too late to save the desired data\n * anyway. Assuming that the standby sends us an increasing sequence of\n * xmins, this could only happen during the first reply cycle, else our\n * own xmin would prevent nextXid from advancing so far.\n\nAt the very least, changing GlobalVisUpdateApply/ComputeXidHorizons\nwould increase the potential amount of data lost in such race\nconditions, if any.\n\nAs a further note, my suggested changes in vacuumlazy (specifically, the\n'continue' path added in lazy_scan_prune in my recent v2 patchset) are\nlikely incorrect because of a potential undocumented requirement of\nheap_page_prune: leave no dead tuples with xmax < vacrel->OldestXmin.\nI realised that in my patch, we would allow some of these tuples to\ncontinue to exist IFF the GlobalVisTestNonRemovableHorizon moved back\nduring the vacuum, which would violate such requirement.\n\n\nKind regards,\n\nMatthias van de Meent.\n\n\n", "msg_date": "Wed, 16 Jun 2021 22:08:39 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Wed, Jun 16, 2021 at 12:22 PM Andres Freund <andres@anarazel.de> wrote:\n> I think it's more complicated than that - \"before\" isn't a guarantee when the\n> horizon can go backwards.
Consider the case where a hot_standby_feedback=on\n> replica without a slot connects - that can result in the xid horizon going\n> backwards.\n\nOh yeah, I think that I get it now. Tell me if this sounds right to you:\n\nIt's not so much that HeapTupleSatisfiesVacuum() \"disagrees\" with\nheap_prune_satisfies_vacuum() in a way that actually matters to\nVACUUM. While there does seem to be a fairly mundane bug in\nGetOldestNonRemovableTransactionId() that really is a matter of\ndisagreement between the two functions, the fundamental issue is\ndeeper than that. The fundamental issue is that it's not okay to\nassume that the XID horizon won't go backwards. This probably matters\nfor lots of reasons. The most obvious reason is that in theory it\ncould cause lazy_scan_prune() to get stuck in about the same way as\nJustin reported, with the GetOldestNonRemovableTransactionId() bug.\n\nThis isn't an issue in the backbranches because we're using the same\nOldestXmin value directly when calling heap_page_prune(). We only ever\nhave one xid horizon cutoff like that per VACUUM (we only have\nOldestXmin, no vistest), so clearly it's not a problem.\n\n> I think a good way to address this might be to have GlobalVisUpdateApply()\n> ensure that maybe_needed does not go backwards within one backend.\n>\n> This is *nearly* already guaranteed within vacuum, except for the case where a\n> catalog access between vacuum_set_xid_limits() and GlobalVisTestFor() could\n> lead to an attempt at pruning, which could move maybe_needed to go backwards\n> theoretically if inbetween those two steps a replica connected that causes the\n> horizon to go backwards.\n\nThis would at least be easy to test. 
I like the idea of adding invariants.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 16 Jun 2021 13:21:58 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "Hi,\n\nOn 2021-06-16 12:12:23 -0700, Andres Freund wrote:\n> Could you share your testcase? I've been working on a series of patches\n> to address this (I'll share in a bit), and I've run quite a few tests,\n> and didn't hit any infinite loops.\n\nSorry for not yet doing that. Unfortunately I have an ongoing family\nhealth issue (& associated travel) claiming time and energy :(.\n\nI've pushed the minimal fix due to beta 2.\n\nBeyond beta 2 I am thinking of the below to unify the horizon\ndetermination:\n\nstatic inline GlobalVisHorizonKind\nGlobalVisHorizonKindForRel(Relation rel)\n{\n if (!rel)\n return VISHORIZON_SHARED;\n\n /*\n * Other relkinds currently don't contain xids, nor always the necessary\n * logical decoding markers.\n */\n Assert(rel->rd_rel->relkind == RELKIND_RELATION ||\n rel->rd_rel->relkind == RELKIND_MATVIEW ||\n rel->rd_rel->relkind == RELKIND_TOASTVALUE);\n\n if (rel->rd_rel->relisshared || RecoveryInProgress())\n return VISHORIZON_SHARED;\n else if (IsCatalogRelation(rel) ||\n RelationIsAccessibleInLogicalDecoding(rel))\n return VISHORIZON_CATALOG;\n else if (!RELATION_IS_LOCAL(rel))\n return VISHORIZON_DATA;\n else\n return VISHORIZON_TEMP;\n}\n\nThat's then used in GetOldestNonRemovableTransactionId(),\nGlobalVisTestFor(). Makes sense?\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Mon, 21 Jun 2021 05:29:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "On Wed, Jun 16, 2021 at 1:21 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Oh yeah, I think that I get it now.
Tell me if this sounds right to you:\n>\n> It's not so much that HeapTupleSatisfiesVacuum() \"disagrees\" with\n> heap_prune_satisfies_vacuum() in a way that actually matters to\n> VACUUM. While there does seem to be a fairly mundane bug in\n> GetOldestNonRemovableTransactionId() that really is a matter of\n> disagreement between the two functions, the fundamental issue is\n> deeper than that. The fundamental issue is that it's not okay to\n> assume that the XID horizon won't go backwards. This probably matters\n> for lots of reasons. The most obvious reason is that in theory it\n> could cause lazy_scan_prune() to get stuck in about the same way as\n> Justin reported, with the GetOldestNonRemovableTransactionId() bug.\n\nAny update on this, Andres?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 16 Jul 2021 16:13:03 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" }, { "msg_contents": "Hi,\n\nOn 2021-06-21 05:29:19 -0700, Andres Freund wrote:\n> On 2021-06-16 12:12:23 -0700, Andres Freund wrote:\n> > Could you share your testcase? I've been working on a series of patches\n> > to address this (I'll share in a bit), and I've run quite a few tests,\n> > and didn't hit any infinite loops.\n> \n> Sorry for not yet doing that. Unfortunately I have an ongoing family\n> health issue (& associated travel) claiming time and energy :(.\n> \n> I've pushed the minimal fix due to beta 2.\n> \n> Beyond beta 2 I am thinking of the below to unify the horizon\n> determination:\n> \n> static inline GlobalVisHorizonKind\n> GlobalVisHorizonKindForRel(Relation rel)\n\nI finally pushed this cleanup.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 24 Jul 2021 20:34:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic" } ]
[ { "msg_contents": "husky just reported $SUBJECT:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=husky&dt=2021-06-05%2013%3A42%3A17\n\nand I find I can reproduce that locally:\n\ndiff -U3 /home/postgres/pgsql/contrib/pg_visibility/expected/pg_visibility.out /home/postgres/pgsql/contrib/pg_visibility/results/pg_visibility.out\n--- /home/postgres/pgsql/contrib/pg_visibility/expected/pg_visibility.out\t2021-01-20 11:12:24.854346717 -0500\n+++ /home/postgres/pgsql/contrib/pg_visibility/results/pg_visibility.out\t2021-06-06 22:12:07.948890104 -0400\n@@ -215,7 +215,8 @@\n 0 | f | f\n 1 | f | f\n 2 | t | t\n-(3 rows)\n+ 3 | t | t\n+(4 rows)\n \n select * from pg_check_frozen('copyfreeze');\n t_ctid \n@@ -235,7 +236,8 @@\n 0 | t | t\n 1 | f | f\n 2 | t | t\n-(3 rows)\n+ 3 | t | t\n+(4 rows)\n \n select * from pg_check_frozen('copyfreeze');\n t_ctid \n\n\nThe test cases that are failing date back to January (7db0cd2145f),\nso I think this is some side-effect of a recent commit, but I have\nno idea which one.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 06 Jun 2021 22:15:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "contrib/pg_visibility fails regression under CLOBBER_CACHE_ALWAYS" }, { "msg_contents": "On Mon, Jun 7, 2021 at 11:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> husky just reported $SUBJECT:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=husky&dt=2021-06-05%2013%3A42%3A17\n>\n> and I find I can reproduce that locally:\n>\n> diff -U3 /home/postgres/pgsql/contrib/pg_visibility/expected/pg_visibility.out /home/postgres/pgsql/contrib/pg_visibility/results/pg_visibility.out\n> --- /home/postgres/pgsql/contrib/pg_visibility/expected/pg_visibility.out 2021-01-20 11:12:24.854346717 -0500\n> +++ /home/postgres/pgsql/contrib/pg_visibility/results/pg_visibility.out 2021-06-06 22:12:07.948890104 -0400\n> @@ -215,7 +215,8 @@\n> 0 | f | f\n> 1 | f | f\n> 2 | t | t\n> -(3 rows)\n> + 3 | t | t\n> 
+(4 rows)\n>\n> select * from pg_check_frozen('copyfreeze');\n> t_ctid\n> @@ -235,7 +236,8 @@\n> 0 | t | t\n> 1 | f | f\n> 2 | t | t\n> -(3 rows)\n> + 3 | t | t\n> +(4 rows)\n>\n> select * from pg_check_frozen('copyfreeze');\n> t_ctid\n>\n>\n> The test cases that are failing date back to January (7db0cd2145f),\n> so I think this is some side-effect of a recent commit, but I have\n> no idea which one.\n\nIt seems like the recent revert (8e03eb92e9a) is relevant.\n\nAfter committing 7db0cd2145f we had the same regression test failure\nin January[1]. Then we fixed that issue by 39b66a91b. But since we\nrecently reverted most of 39b66a91b, the same issue happened again.\n\nRegards,\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-01-19+20%3A27%3A46\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 7 Jun 2021 16:30:57 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: contrib/pg_visibility fails regression under CLOBBER_CACHE_ALWAYS" }, { "msg_contents": "On Mon, Jun 7, 2021 at 4:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jun 7, 2021 at 11:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > husky just reported $SUBJECT:\n> >\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=husky&dt=2021-06-05%2013%3A42%3A17\n> >\n> > and I find I can reproduce that locally:\n> >\n> > diff -U3 /home/postgres/pgsql/contrib/pg_visibility/expected/pg_visibility.out /home/postgres/pgsql/contrib/pg_visibility/results/pg_visibility.out\n> > --- /home/postgres/pgsql/contrib/pg_visibility/expected/pg_visibility.out 2021-01-20 11:12:24.854346717 -0500\n> > +++ /home/postgres/pgsql/contrib/pg_visibility/results/pg_visibility.out 2021-06-06 22:12:07.948890104 -0400\n> > @@ -215,7 +215,8 @@\n> > 0 | f | f\n> > 1 | f | f\n> > 2 | t | t\n> > -(3 rows)\n> > + 3 | t | t\n> > +(4 rows)\n> >\n> > select * from pg_check_frozen('copyfreeze');\n> > 
t_ctid\n> > @@ -235,7 +236,8 @@\n> > 0 | t | t\n> > 1 | f | f\n> > 2 | t | t\n> > -(3 rows)\n> > + 3 | t | t\n> > +(4 rows)\n> >\n> > select * from pg_check_frozen('copyfreeze');\n> > t_ctid\n> >\n> >\n> > The test cases that are failing date back to January (7db0cd2145f),\n> > so I think this is some side-effect of a recent commit, but I have\n> > no idea which one.\n>\n> It seems like the recent revert (8e03eb92e9a) is relevant.\n>\n> After committing 7db0cd2145f we had the same regression test failure\n> in January[1]. Then we fixed that issue by 39b66a91b. But since we\n> recently reverted most of 39b66a91b, the same issue happened again.\n>\n\nSo the cause of this failure seems the same as before. The failed test is,\n\nbegin;\ntruncate copyfreeze;\ncopy copyfreeze from stdin freeze;\n1 '1'\n2 '2'\n3 '3'\n4 '4'\n5 '5'\n\\.\ncopy copyfreeze from stdin;\n6 '6'\n\\.\ncopy copyfreeze from stdin freeze;\n7 '7'\n8 '8'\n9 '9'\n10 '10'\n11 '11'\n12 '12'\n\\.\ncommit;\n\nIf the target block cache is invalidated before the third COPY, we\nwill start to insert the frozen tuple into a new page, resulting in\nadding two blocks in total during the third COPY. I think we still\nneed the following part of the reverted code so that we don't leave\nthe page partially empty after relcache invalidation:\n\n--- a/src/backend/access/heap/hio.c\n+++ b/src/backend/access/heap/hio.c\n@@ -407,19 +407,19 @@ RelationGetBufferForTuple(Relation relation, Size len,\n * target.\n */\n targetBlock = GetPageWithFreeSpace(relation, targetFreeSpace);\n- }\n\n- /*\n- * If the FSM knows nothing of the rel, try the last page before we give\n- * up and extend. This avoids one-tuple-per-page syndrome during\n- * bootstrapping or in a recently-started system.\n- */\n- if (targetBlock == InvalidBlockNumber)\n- {\n- BlockNumber nblocks = RelationGetNumberOfBlocks(relation);\n+ /*\n+ * If the FSM knows nothing of the rel, try the last page before we\n+ * give up and extend. 
This avoids one-tuple-per-page syndrome during\n+ * bootstrapping or in a recently-started system.\n+ */\n+ if (targetBlock == InvalidBlockNumber)\n+ {\n+ BlockNumber nblocks = RelationGetNumberOfBlocks(relation);\n\n- if (nblocks > 0)\n- targetBlock = nblocks - 1;\n+ if (nblocks > 0)\n+ targetBlock = nblocks - 1;\n+ }\n }\n\nAttached the patch that brings back the above change.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Mon, 7 Jun 2021 21:11:35 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: contrib/pg_visibility fails regression under CLOBBER_CACHE_ALWAYS" }, { "msg_contents": "\n\nOn 6/7/21 2:11 PM, Masahiko Sawada wrote:\n> On Mon, Jun 7, 2021 at 4:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> On Mon, Jun 7, 2021 at 11:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>\n>>> husky just reported $SUBJECT:\n>>>\n>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=husky&dt=2021-06-05%2013%3A42%3A17\n>>>\n>>> and I find I can reproduce that locally:\n>>>\n>>> diff -U3 /home/postgres/pgsql/contrib/pg_visibility/expected/pg_visibility.out /home/postgres/pgsql/contrib/pg_visibility/results/pg_visibility.out\n>>> --- /home/postgres/pgsql/contrib/pg_visibility/expected/pg_visibility.out 2021-01-20 11:12:24.854346717 -0500\n>>> +++ /home/postgres/pgsql/contrib/pg_visibility/results/pg_visibility.out 2021-06-06 22:12:07.948890104 -0400\n>>> @@ -215,7 +215,8 @@\n>>> 0 | f | f\n>>> 1 | f | f\n>>> 2 | t | t\n>>> -(3 rows)\n>>> + 3 | t | t\n>>> +(4 rows)\n>>>\n>>> select * from pg_check_frozen('copyfreeze');\n>>> t_ctid\n>>> @@ -235,7 +236,8 @@\n>>> 0 | t | t\n>>> 1 | f | f\n>>> 2 | t | t\n>>> -(3 rows)\n>>> + 3 | t | t\n>>> +(4 rows)\n>>>\n>>> select * from pg_check_frozen('copyfreeze');\n>>> t_ctid\n>>>\n>>>\n>>> The test cases that are failing date back to January (7db0cd2145f),\n>>> so I think this is some side-effect of a recent commit, but I have\n>>> 
no idea which one.\n>>\n>> It seems like the recent revert (8e03eb92e9a) is relevant.\n>>\n>> After committing 7db0cd2145f we had the same regression test failure\n>> in January[1]. Then we fixed that issue by 39b66a91b. But since we\n>> recently reverted most of 39b66a91b, the same issue happened again.\n>>\n> \n> So the cause of this failure seems the same as before. The failed test is,\n> \n> begin;\n> truncate copyfreeze;\n> copy copyfreeze from stdin freeze;\n> 1 '1'\n> 2 '2'\n> 3 '3'\n> 4 '4'\n> 5 '5'\n> \\.\n> copy copyfreeze from stdin;\n> 6 '6'\n> \\.\n> copy copyfreeze from stdin freeze;\n> 7 '7'\n> 8 '8'\n> 9 '9'\n> 10 '10'\n> 11 '11'\n> 12 '12'\n> \\.\n> commit;\n> \n> If the target block cache is invalidated before the third COPY, we\n> will start to insert the frozen tuple into a new page, resulting in\n> adding two blocks in total during the third COPY. I think we still\n> need the following part of the reverted code so that we don't leave\n> the page partially empty after relcache invalidation:\n> \n> --- a/src/backend/access/heap/hio.c\n> +++ b/src/backend/access/heap/hio.c\n> @@ -407,19 +407,19 @@ RelationGetBufferForTuple(Relation relation, Size len,\n> * target.\n> */\n> targetBlock = GetPageWithFreeSpace(relation, targetFreeSpace);\n> - }\n> \n> - /*\n> - * If the FSM knows nothing of the rel, try the last page before we give\n> - * up and extend. This avoids one-tuple-per-page syndrome during\n> - * bootstrapping or in a recently-started system.\n> - */\n> - if (targetBlock == InvalidBlockNumber)\n> - {\n> - BlockNumber nblocks = RelationGetNumberOfBlocks(relation);\n> + /*\n> + * If the FSM knows nothing of the rel, try the last page before we\n> + * give up and extend. 
This avoids one-tuple-per-page syndrome during\n> + * bootstrapping or in a recently-started system.\n> + */\n> + if (targetBlock == InvalidBlockNumber)\n> + {\n> + BlockNumber nblocks = RelationGetNumberOfBlocks(relation);\n> \n> - if (nblocks > 0)\n> - targetBlock = nblocks - 1;\n> + if (nblocks > 0)\n> + targetBlock = nblocks - 1;\n> + }\n> }\n> \n> Attached the patch that brings back the above change.\n> \n\nThanks for the analysis! I think you're right - this bit should have\nbeen kept. Partial reverts are tricky :-(\n\nI'll get this fixed / pushed later today, after a bit more testing. I'd\nswear I ran tests with CCA, but it's possible I skipped contrib.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 7 Jun 2021 15:01:43 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: contrib/pg_visibility fails regression under CLOBBER_CACHE_ALWAYS" }, { "msg_contents": "On Mon, Jun 7, 2021 at 10:01 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 6/7/21 2:11 PM, Masahiko Sawada wrote:\n> > On Mon, Jun 7, 2021 at 4:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>\n> >> On Mon, Jun 7, 2021 at 11:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>>\n> >>> husky just reported $SUBJECT:\n> >>>\n> >>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=husky&dt=2021-06-05%2013%3A42%3A17\n> >>>\n> >>> and I find I can reproduce that locally:\n> >>>\n> >>> diff -U3 /home/postgres/pgsql/contrib/pg_visibility/expected/pg_visibility.out /home/postgres/pgsql/contrib/pg_visibility/results/pg_visibility.out\n> >>> --- /home/postgres/pgsql/contrib/pg_visibility/expected/pg_visibility.out 2021-01-20 11:12:24.854346717 -0500\n> >>> +++ /home/postgres/pgsql/contrib/pg_visibility/results/pg_visibility.out 2021-06-06 22:12:07.948890104 -0400\n> >>> @@ -215,7 +215,8 @@\n> >>> 0 | f | f\n> >>> 1 | f | f\n> >>> 2 | 
t | t\n> >>> -(3 rows)\n> >>> + 3 | t | t\n> >>> +(4 rows)\n> >>>\n> >>> select * from pg_check_frozen('copyfreeze');\n> >>> t_ctid\n> >>> @@ -235,7 +236,8 @@\n> >>> 0 | t | t\n> >>> 1 | f | f\n> >>> 2 | t | t\n> >>> -(3 rows)\n> >>> + 3 | t | t\n> >>> +(4 rows)\n> >>>\n> >>> select * from pg_check_frozen('copyfreeze');\n> >>> t_ctid\n> >>>\n> >>>\n> >>> The test cases that are failing date back to January (7db0cd2145f),\n> >>> so I think this is some side-effect of a recent commit, but I have\n> >>> no idea which one.\n> >>\n> >> It seems like the recent revert (8e03eb92e9a) is relevant.\n> >>\n> >> After committing 7db0cd2145f we had the same regression test failure\n> >> in January[1]. Then we fixed that issue by 39b66a91b. But since we\n> >> recently reverted most of 39b66a91b, the same issue happened again.\n> >>\n> >\n> > So the cause of this failure seems the same as before. The failed test is,\n> >\n> > begin;\n> > truncate copyfreeze;\n> > copy copyfreeze from stdin freeze;\n> > 1 '1'\n> > 2 '2'\n> > 3 '3'\n> > 4 '4'\n> > 5 '5'\n> > \\.\n> > copy copyfreeze from stdin;\n> > 6 '6'\n> > \\.\n> > copy copyfreeze from stdin freeze;\n> > 7 '7'\n> > 8 '8'\n> > 9 '9'\n> > 10 '10'\n> > 11 '11'\n> > 12 '12'\n> > \\.\n> > commit;\n> >\n> > If the target block cache is invalidated before the third COPY, we\n> > will start to insert the frozen tuple into a new page, resulting in\n> > adding two blocks in total during the third COPY. I think we still\n> > need the following part of the reverted code so that we don't leave\n> > the page partially empty after relcache invalidation:\n> >\n> > --- a/src/backend/access/heap/hio.c\n> > +++ b/src/backend/access/heap/hio.c\n> > @@ -407,19 +407,19 @@ RelationGetBufferForTuple(Relation relation, Size len,\n> > * target.\n> > */\n> > targetBlock = GetPageWithFreeSpace(relation, targetFreeSpace);\n> > - }\n> >\n> > - /*\n> > - * If the FSM knows nothing of the rel, try the last page before we give\n> > - * up and extend. 
This avoids one-tuple-per-page syndrome during\n> > - * bootstrapping or in a recently-started system.\n> > - */\n> > - if (targetBlock == InvalidBlockNumber)\n> > - {\n> > - BlockNumber nblocks = RelationGetNumberOfBlocks(relation);\n> > + /*\n> > + * If the FSM knows nothing of the rel, try the last page before we\n> > + * give up and extend. This avoids one-tuple-per-page syndrome during\n> > + * bootstrapping or in a recently-started system.\n> > + */\n> > + if (targetBlock == InvalidBlockNumber)\n> > + {\n> > + BlockNumber nblocks = RelationGetNumberOfBlocks(relation);\n> >\n> > - if (nblocks > 0)\n> > - targetBlock = nblocks - 1;\n> > + if (nblocks > 0)\n> > + targetBlock = nblocks - 1;\n> > + }\n> > }\n> >\n> > Attached the patch that brings back the above change.\n> >\n>\n> Thanks for the analysis! I think you're right - this bit should have\n> been kept. Partial reverts are tricky :-(\n>\n> I'll get this fixed / pushed later today, after a bit more testing. I'd\n> swear I ran tests with CCA, but it's possible I skipped contrib.\n\nI had missed this mail. Thank you for pushing the fix!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 14 Jun 2021 21:56:15 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: contrib/pg_visibility fails regression under CLOBBER_CACHE_ALWAYS" } ]
[ { "msg_contents": "Hello, hackers,\n\nCurrently, REL_10_STABLE can't be compiled with gcc-10 or 11, -Werror \nand \"./configure\" without arguments. E.g. gcc-11 gives an error:\n\nobjectaddress.c:1618:99: error: ‘typeoids’ may be used uninitialized \n[-Werror=maybe-uninitialized]\n 1618 | \n ObjectIdGetDatum(typeoids[1]),\n...\nobjectaddress.c: In function ‘get_object_address’:\nobjectaddress.c:1578:33: note: ‘typeoids’ declared here\n 1578 | Oid typeoids[2];\n | ^~~~~~~~\n\ngcc-10 gives a similar error.\n\nI propose to back-port a small part of Tom Lane's commit 9a725f7b5cb7, \nwhich was somehow never back-ported to REL_10_STABLE. The fix is\nexplicit initialization to InvalidOid for the typeoids[2] variable involved.\n\nEven if, technically, the initialization is probably not required (or\nso I've heard), in PostgreSQL 11+ it was deemed that explicit \ninitialization is acceptable here to avoid compiler warning.\n\nPlease note that above-mentioned commit 9a725f7b5cb7 adds initialization \nfor a variable from the previous line, typenames[2] as well, but since \ngcc 10 and 11 don't warn on that, I guess there is no need to add that \ninitialization as well.\n\nThe proposed one-line patch is attached, but basically it is:\ndiff --git a/src/backend/catalog/objectaddress.c \nb/src/backend/catalog/objectaddress.c\nindex b0ff255a593..8cc9dc003c8 100644\n--- a/src/backend/catalog/objectaddress.c\n+++ b/src/backend/catalog/objectaddress.c\n@@ -1591,6 +1591,7 @@ get_object_address_opf_member(ObjectType objtype,\n \tfamaddr = get_object_address_opcf(OBJECT_OPFAMILY, copy, false);\n\n \t/* find out left/right type names and OIDs */\n+\ttypeoids[0] = typeoids[1] = InvalidOid;\n \ti = 0;\n \tforeach(cell, lsecond(object))\n \t{\n\nI've verified that all other current branches, i.e. 
\nREL9_6_STABLE..REL_13_STABLE (excluding REL_10_STABLE) and master can \ncompile cleanly even with bare ./configure without arguments using gcc-11.\n\n-- \nAnton Voloshin\nPostgres Professional, The Russian Postgres Company\nhttps://postgrespro.ru", "msg_date": "Mon, 7 Jun 2021 14:16:18 +0700", "msg_from": "Anton Voloshin <a.voloshin@postgrespro.ru>", "msg_from_op": true, "msg_subject": "back-port one-line gcc-10+ warning fix to REL_10_STABLE" }, { "msg_contents": "On 2021-Jun-07, Anton Voloshin wrote:\n\n> Hello, hackers,\n> \n> Currently, REL_10_STABLE can't be compiled with gcc-10 or 11, -Werror and\n> \"./configure\" without arguments. E.g. gcc-11 gives an error:\n\nHi, thanks for the report. I noticed that the commit that introduced\nthis (41306a511c01) was introduced in 9.5, so I was surprised that you\nreport it doesn't complain in 9.6. Turns out that Peter E had fixed the\nissue, but only in 9.5 and 9.6; I don't really understand why no fix was\napplied to 10. I forward-ported that commit to 10, which should also\nfix the problem. Branches 11 and up already have Tom Lane's fix.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"[PostgreSQL] is a great group; in my opinion it is THE best open source\ndevelopment communities in existence anywhere.\" (Lamar Owen)\n\n\n", "msg_date": "Mon, 7 Jun 2021 11:14:25 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: back-port one-line gcc-10+ warning fix to REL_10_STABLE" } ]
[ { "msg_contents": "\nHi, hackers\n\nWhen we write a extension using C language, we often add the dynamic library\ninto shared_preload_libraries, however, I found that the bloom, btree_gist and\nbtree_gin do not follow this rule. I'm a bit confused with this, could anybody\nexplain it for me?\n\nThanks in advance.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Mon, 07 Jun 2021 19:18:34 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Confused about extension and shared_preload_libraries" }, { "msg_contents": "Hi Japin,\n\n> When we write a extension using C language, we often add the dynamic\nlibrary\n> into shared_preload_libraries, however, I found that the bloom,\nbtree_gist and\n> btree_gin do not follow this rule. I'm a bit confused with this, could\nanybody\n> explain it for me?\n\nIn the general case, you don't need to modify shared_preload_libraries to\nuse an extension, regardless of the language in which it's implemented.\nThat's it.\n\nSome extensions may however require this. See the description of the GUC\n[1].\n\n[1]:\nhttps://www.postgresql.org/docs/13/runtime-config-client.html#GUC-SHARED-PRELOAD-LIBRARIES\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 7 Jun 2021 14:25:31 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Confused about extension and shared_preload_libraries" }, { "msg_contents": "\nOn Mon, 07 Jun 2021 at 19:25, Aleksander Alekseev <aleksander@timescale.com> wrote:\n> Hi Japin,\n>\n>> When we write a extension using C language, we often add the dynamic\n> library\n>> into shared_preload_libraries, however, I found that the bloom,\n> btree_gist and\n>> btree_gin do not follow this rule. I'm a bit confused with this, could\n> anybody\n>> explain it for me?\n>\n> In the general case, you don't need to modify shared_preload_libraries to\n> use an extension, regardless of the language in which it's implemented.\n> That's it.\n>\n\nThanks for your explanation.\n\n> Some extensions may however require this. See the description of the GUC\n> [1].\n>\n> [1]:\n> https://www.postgresql.org/docs/13/runtime-config-client.html#GUC-SHARED-PRELOAD-LIBRARIES\n\nSorry for my poor reading of the documentation.\n\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Tue, 08 Jun 2021 10:20:21 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Confused about extension and shared_preload_libraries" } ]
[ { "msg_contents": "One of the friction points I have found in migrating from Oracle to PostgreSQL is in the conversion of hierarchical queries from the Oracle START WITH/CONNECT BY/ORDER SIBLINGS by pattern to using the ANSI recursive subquery form.\n\nOnce you wrap your head around it, the ANSI form is not so bad with one major exception. In order to achieve the equivalent of Oracle’s ORDER SIBLINGS BY clause, you need to add an additional column containing an array with the accumulated path back to the root of the hierarchy for each row. The problem with that is that it leaves you with an unfortunate choice: either accept the inefficiency of returning the array with the path back to the client (which the client doesn’t need or want), or requiring the application to explicitly list all of the columns that it wants just to exclude the hierarchy column, which can be hard to maintain, especially if your application needs to support both databases. If you have a ORM model where there could be multiple queries that share the same client code to read the result set, you might have to change multiple queries when new columns are added to a table or view even though you have centralized the processing of the result set.\n\nThe ideal solution for this would be for PostgreSQL to support the Oracle syntax and internally convert it to the ANSI form. Failing that, I have a modest suggestion that I would like to start a discussion around. What if you could use the MINUS keyword in the column list of a select statement to remove a column from the result set returned to the client? 
What I have in mind is something like this:\n\nTo achieve the equivalent of the following Oracle query:\n\n\n SELECT T.*\n FROM T\n START WITH T.ParentID IS NULL\n CONNECT BY T.ParentID = PRIOR T.ID\n ORDER SIBLINGS BY T.OrderVal\n\nYou could use\n\n WITH RECURSIVE TT AS (\n SELECT T0.*, ARRAY[]::INTEGER[] || T.OrderVal AS Sortable\n FROM T T0\n UNION ALL\n SELECT T1.*, TT.Sortable || T1 AS Sortable\n FROM TT\n INNER JOIN T T1 ON (T1.ParentID = TT.ID)\n )\n SELECT TT.* MINUS TT.Sortable\n FROM TT\nORDER BY TT.Sortable\n\nNow the Sortable column can be used to order the result set but is not returned to the client.\n\nNot knowing the internals of the parser, I’m assuming that the use of MINUS in this construct would be distinguishable from the set difference use case because the expression being subtracted is a column (or perhaps even a lst of columns) rather than a SELECT expression.", "msg_date": "Mon, 7 Jun 2021 18:25:15 +0000", "msg_from": "Mark Zellers <mark.zellers@workday.com>", "msg_from_op": true, "msg_subject": "A modest proposal vis hierarchical queries: MINUS in the column list" }, { "msg_contents": "On Mon, Jun 7, 2021 at 1:54 PM Mark Zellers <mark.zellers@workday.com>\nwrote:\n\n> Failing that, I 
have a modest suggestion that I would like to start a\n> discussion around. What if you could use the MINUS keyword in the column\n> list of a select statement to remove a column from the result set returned\n> to the client?\n>\n\nI asked this a decade ago and got no useful responses.\n\nhttps://www.postgresql.org/message-id/flat/02e901cc2bb4%2476bc2090%24643461b0%24%40yahoo.com#3784fab26b0f946b3239266e3b70a6ce\n\nI will say I've still had the itch to want it occasionally in the years\nsince, though not frequently.\n\nDavid J.", "msg_date": "Mon, 7 Jun 2021 14:36:56 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A modest proposal vis hierarchical queries: MINUS in the column\n list" }, { "msg_contents": "\"David G.
Johnston\" <david.g.johnston@gmail.com> writes:\n> On Mon, Jun 7, 2021 at 1:54 PM Mark Zellers <mark.zellers@workday.com>\n> wrote:\n>> What if you could use the MINUS keyword in the column\n>> list of a select statement to remove a column from the result set returned\n>> to the client?\n\n> I asked this a decade ago and got no useful responses.\n> https://www.postgresql.org/message-id/flat/02e901cc2bb4%2476bc2090%24643461b0%24%40yahoo.com#3784fab26b0f946b3239266e3b70a6ce\n\nI can recall more-recent requests for that too, though I'm too lazy\nto go search the archives right now.\n\nI'm fairly disinclined to do anything about it though, because I'm\nafraid of the SQL committee standardizing some other syntax for the\nsame idea in future (or maybe worse, commandeering the same keyword\nfor some other feature). It doesn't seem quite valuable enough to\ntake those risks for.\n\nNote that it's not like SQL hasn't heard of projections before.\nYou can always do \"SELECT a, b, d FROM subquery-yielding-a-b-c-d\".\nSo the proposed syntax would save a small amount of typing, but\nit's not adding any real new functionality.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Jun 2021 18:10:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A modest proposal vis hierarchical queries: MINUS in the column\n list" }, { "msg_contents": "On Mon, Jun 07, 2021 at 06:10:58PM -0400, Tom Lane wrote:\n> \n> I'm fairly disinclined to do anything about it though, because I'm\n> afraid of the SQL committee standardizing some other syntax for the\n> same idea in future (or maybe worse, commandeering the same keyword\n> for some other feature). 
It doesn't seem quite valuable enough to\n> take those risks for.\n\nAlso, isn't the OP problem already solved by the SEARCH / CYCLE grammar\nhandling added in 3696a600e2292?\n\n\n", "msg_date": "Tue, 8 Jun 2021 10:50:08 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A modest proposal vis hierarchical queries: MINUS in the column\n list" }, { "msg_contents": "\nOn 6/7/21 6:10 PM, Tom Lane wrote:\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n>> On Mon, Jun 7, 2021 at 1:54 PM Mark Zellers <mark.zellers@workday.com>\n>> wrote:\n>>> What if you could use the MINUS keyword in the column\n>>> list of a select statement to remove a column from the result set returned\n>>> to the client?\n>> I asked this a decade ago and got no useful responses.\n>> https://www.postgresql.org/message-id/flat/02e901cc2bb4%2476bc2090%24643461b0%24%40yahoo.com#3784fab26b0f946b3239266e3b70a6ce\n> I can recall more-recent requests for that too, though I'm too lazy\n> to go search the archives right now.\n>\n> I'm fairly disinclined to do anything about it though, because I'm\n> afraid of the SQL committee standardizing some other syntax for the\n> same idea in future (or maybe worse, commandeering the same keyword\n> for some other feature). It doesn't seem quite valuable enough to\n> take those risks for.\n>\n> Note that it's not like SQL hasn't heard of projections before.\n> You can always do \"SELECT a, b, d FROM subquery-yielding-a-b-c-d\".\n> So the proposed syntax would save a small amount of typing, but\n> it's not adding any real new functionality.\n>\n> \t\t\t\n\n\n\nTrue, but the problem happens when you have 250 fields and you want to\nskip 4 of them. 
Getting that right can be a pain.\n\n\nI agree that inventing syntax for this has the dangers you identify.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 8 Jun 2021 11:00:56 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: A modest proposal vis hierarchical queries: MINUS in the column\n list" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 6/7/21 6:10 PM, Tom Lane wrote:\n>> Note that it's not like SQL hasn't heard of projections before.\n>> You can always do \"SELECT a, b, d FROM subquery-yielding-a-b-c-d\".\n>> So the proposed syntax would save a small amount of typing, but\n>> it's not adding any real new functionality.\n\n> True, but the problem happens when you have 250 fields and you want to\n> skip 4 of them. Getting that right can be a pain.\n\nI'm slightly skeptical of that argument, because if you have that\nsort of query, you're most likely generating the query programmatically\nanyway. Yeah, it'd be a pain to maintain such code by hand, but\nI don't see it being much of a problem if the code is built by\na machine.\n\nNote that I'm not saying the idea is useless. I'm just opining\nthat I'd rather wait for the SQL committee to do something in\nthis area.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 08 Jun 2021 11:39:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A modest proposal vis hierarchical queries: MINUS in the column\n list" }, { "msg_contents": "On 08.06.21 04:50, Julien Rouhaud wrote:\n> On Mon, Jun 07, 2021 at 06:10:58PM -0400, Tom Lane wrote:\n>>\n>> I'm fairly disinclined to do anything about it though, because I'm\n>> afraid of the SQL committee standardizing some other syntax for the\n>> same idea in future (or maybe worse, commandeering the same keyword\n>> for some other feature). 
It doesn't seem quite valuable enough to\n>> take those risks for.\n> \n> Also, isn't the OP problem already solved by the SEARCH / CYCLE grammar\n> handling added in 3696a600e2292?\n\nYou still get the path column in the output, which is what the OP didn't \nwant. But optionally eliminating the path column from the output might \nbe a more constrained problem to solve. We actually already discussed \nthis; we just need to do it somehow.\n\n\n", "msg_date": "Tue, 8 Jun 2021 18:28:47 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A modest proposal vis hierarchical queries: MINUS in the column\n list" }, { "msg_contents": "Tom Lane writes:\n>Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 6/7/21 6:10 PM, Tom Lane wrote:\n>>> Note that it's not like SQL hasn't heard of projections before.\n>>> You can always do \"SELECT a, b, d FROM subquery-yielding-a-b-c-d\".\n>>> So the proposed syntax would save a small amount of typing, but\n>>> it's not adding any real new functionality.\n>> True, but the problem happens when you have 250 fields and you want to\n>> skip 4 of them. Getting that right can be a pain.\n\n>I'm slightly skeptical of that argument, because if you have that\n>sort of query, you're most likely generating the query programmatically\n>anyway. Yeah, it'd be a pain to maintain such code by hand, but\n>I don't see it being much of a problem if the code is built by\n>a machine.\n\nHere is the pattern I’m concerned with: the application has an entity layer that for each relationship knows all the fields and can read them and convert them into Java objects.\nDevelopers are typically writing queries that just `SELECT *` from a table or view to load the entity. There could be many different queries with different filter criteria, for example, that are all fed through the same Java code. 
If the query omits some fields, the Java code can handle that by examining the meta-data and not reading the missing fields.\n\nWhen new fields are added to a table or view, it is generally only necessary to update the common Java component rather than modifying each individual query. As I said in my original post, that leaves us with the unhappy alternatives of returning the (potentially large) temporary arrays used for sorting or having to explicitly name each column just to omit the unwanted temporary array.\n\nNote that the Oracle START WITH/CONNECT BY syntax avoids this issue entirely because it is not necessary to return the temporary structure used only for sorting and is not needed by the client.\n\nThere is a preference for static queries over dynamically generated ones, as those can be statically analyzed for correctness and security issue, so dynamically generating the query is not always an available option.\n\nI expect that this sort of pattern drives database developers crazy (“surely you aren’t using *all* those fields, why don’t you just explicitly list the ones you want?”) but there are other constraints (static validation, provably avoiding SQL Injection attacks, ease of maintenance) that may take precedence. There is value in not needing to make a knight’s tour through the code base every time someone adds a field to a table to update the column lists in all the queries that refer to that table.\n\n\nRegards,\n\nMark Z.", "msg_date": "Wed, 9 Jun 2021 20:51:54 +0000", "msg_from": "Mark Zellers <mark.zellers@workday.com>", "msg_from_op": true, "msg_subject": "Re: [External Sender] Re: A modest proposal vis hierarchical queries:\n MINUS in the column list" } ]
[ { "msg_contents": "I wrote a script to automatically generate the node support functions \n(copy, equal, out, and read, as well as the node tags enum) from the \nstruct definitions.\n\nThe first eight patches are to clean up various inconsistencies to make \nparsing or generation easier.\n\nThe interesting stuff is in patch 0009.\n\nFor each of the four node support files, it creates two include files, \ne.g., copyfuncs.inc1.c and copyfuncs.inc2.c to include in the main file. \n All the scaffolding of the main file stays in place.\n\nIn this patch, I have only ifdef'ed out the code to could be removed, \nmainly so that it won't constantly have merge conflicts. Eventually, \nthat should all be changed to delete the code. When we do that, some \ncode comments should probably be preserved elsewhere, so that will need \nanother pass of consideration.\n\nI have tried to mostly make the coverage of the output match what is \ncurrently there. For example, one could do out/read coverage of utility \nstatement nodes easily with this, but I have manually excluded those for \nnow. The reason is mainly that it's easier to diff the before and \nafter, and adding a bunch of stuff like this might require a separate \nanalysis and review.\n\nSubtyping (TidScan -> Scan) is supported.\n\nFor the hard cases, you can just write a manual function and exclude \ngenerating one.\n\nFor the not so hard cases, there is a way of annotating struct fields to \nget special behaviors. For example, pg_node_attr(equal_ignore) has the \nfield ignored in equal functions.\n\nThere are a couple of additional minor issues mentioned in the script \nsource. 
But basically, it all seems to work.", "msg_date": "Mon, 7 Jun 2021 22:27:52 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "automatically generating node support functions" }, { "msg_contents": "On Tue, 8 Jun 2021 at 08:28, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> I wrote a script to automatically generate the node support functions\n> (copy, equal, out, and read, as well as the node tags enum) from the\n> struct definitions.\n\nThanks for working on this. I agree that it would be nice to see\nimprovements in this area.\n\nIt's almost 2 years ago now, but I'm wondering if you saw what Andres\nproposed in [1]? The idea was basically to make a metadata array of\nthe node structs so that, instead of having to output large amounts of\n.c code to do read/write/copy/equals, instead just have small\nfunctions that loop over the elements in the array for the given\nstruct and perform the required operation based on the type.\n\nThere were still quite a lot of unsolved problems, for example, how to\ndetermine the length of arrays so that we know how many bytes to\ncompare in equal funcs. I had a quick look at what you've got and\nsee you've got a solution for that by looking at the last \"int\" field\nbefore the array and using that. (I wonder if you'd be better to use\nsomething more along the lines of your pg_node_attr() for that?)\n\nThere's quite a few advantages having the metadata array rather than\nthe current approach:\n\n1. We don't need to compile 4 huge .c files and link them into the\npostgres binary. I imagine this will make the binary a decent amount\nsmaller.\n2. We can easily add more operations on nodes. e.g serialize nodes\nfor sending plans to parallel workers. or generating a hash value so\nwe can store node types in a hash table.\n\nOne disadvantage would be what Andres mentioned in [2]. He found\naround a 5% performance regression. 
However, looking at the\nNodeTypeComponents struct in [1], we might be able to speed it up\nfurther by shrinking that struct down a bit and just storing an uint16\nposition into a giant char array which contains all of the field\nnames. I imagine they wouldn't take more than 64k. fieldtype could see\na similar change. That would take the NodeTypeComponents struct from\n26 bytes down to 14 bytes, which means about double the number of\nfield metadata we could fit on a cache line.\n\nDo you have any thoughts about that approach instead?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/20190828234136.fk2ndqtld3onfrrp@alap3.anarazel.de\n[2] https://www.postgresql.org/message-id/20190920051857.2fhnvhvx4qdddviz@alap3.anarazel.de\n\n\n", "msg_date": "Wed, 9 Jun 2021 01:40:06 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 08.06.21 15:40, David Rowley wrote:\n> It's almost 2 years ago now, but I'm wondering if you saw what Andres\n> proposed in [1]? The idea was basically to make a metadata array of\n> the node structs so that, instead of having to output large amounts of\n> .c code to do read/write/copy/equals, instead just have small\n> functions that loop over the elements in the array for the given\n> struct and perform the required operation based on the type.\n\nThat project was technologically impressive, but it seemed to have \nsignificant hurdles to overcome before it can be useful. My proposal is \nusable and useful today. And it doesn't prevent anyone from working on \na more sophisticated solution.\n\n> There were still quite a lot of unsolved problems, for example, how to\n> determine the length of arrays so that we know how many bytes to\n> compare in equal funcs. I had a quick look at what you've got and\n> see you've got a solution for that by looking at the last \"int\" field\n> before the array and using that. 
(I wonder if you'd be better to use\n> something more along the lines of your pg_node_attr() for that?)\n\nI considered that, but since the convention seemed to work everywhere, I \nleft it. But it wouldn't be hard to change.\n\n\n", "msg_date": "Tue, 8 Jun 2021 19:45:58 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Hi,\n\nOn 2021-06-08 19:45:58 +0200, Peter Eisentraut wrote:\n> On 08.06.21 15:40, David Rowley wrote:\n> > It's almost 2 years ago now, but I'm wondering if you saw what Andres\n> > proposed in [1]? The idea was basically to make a metadata array of\n> > the node structs so that, instead of having to output large amounts of\n> > .c code to do read/write/copy/equals, instead just have small\n> > functions that loop over the elements in the array for the given\n> > struct and perform the required operation based on the type.\n> \n> That project was technologically impressive, but it seemed to have\n> significant hurdles to overcome before it can be useful. My proposal is\n> usable and useful today. And it doesn't prevent anyone from working on a\n> more sophisticated solution.\n\nI think it's short-sighted to further and further go down the path of\nparsing \"kind of C\" without just using a proper C parser. 
But leaving\nthat aside, a big part of the promise of the approach in that thread\nisn't actually tied to the specific way the type information is\ncollected: The perl script could output something like the \"node type\nmetadata\" I generated in that patchset, and then we don't need the large\namount of generated code and can much more economically add additional\noperations handling node types.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 11 Jun 2021 12:23:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-06-08 19:45:58 +0200, Peter Eisentraut wrote:\n>> On 08.06.21 15:40, David Rowley wrote:\n>>> It's almost 2 years ago now, but I'm wondering if you saw what Andres\n>>> proposed in [1]?\n\n>> That project was technologically impressive, but it seemed to have\n>> significant hurdles to overcome before it can be useful. My proposal is\n>> usable and useful today. And it doesn't prevent anyone from working on a\n>> more sophisticated solution.\n\n> I think it's short-sighted to further and further go down the path of\n> parsing \"kind of C\" without just using a proper C parser. But leaving\n> that aside, a big part of the promise of the approach in that thread\n> isn't actually tied to the specific way the type information is\n> collected: The perl script could output something like the \"node type\n> metadata\" I generated in that patchset, and then we don't need the large\n> amount of generated code and can much more economically add additional\n> operations handling node types.\n\nI think the main reason that the previous patch went nowhere was general\nresistance to making developers install something as complicated as\nlibclang --- that could be a big lift on non-mainstream platforms.\nSo IMO it's a feature not a bug that Peter's approach just uses a perl\nscript. 
OTOH, the downstream aspects of your patch did seem appealing.\nSo I'd like to see a merger of the two approaches, using perl for the\ndata extraction and then something like what you'd done. Maybe that's\nthe same thing you're saying.\n\nI also see Peter's point that committing what he has here might be\na reasonable first step on that road. Getting the data extraction\nright is a big chunk of the job, and what we do with it afterward\ncould be improved later.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Jul 2021 17:42:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Hi,\n\nOn 2021-07-14 17:42:10 -0400, Tom Lane wrote:\n> I think the main reason that the previous patch went nowhere was general\n> resistance to making developers install something as complicated as\n> libclang --- that could be a big lift on non-mainstream platforms.\n\nI'm still not particularly convinced it's an issue - I was suggesting\nwe commit the resulting metadata, so libclang would only be needed when\nmodifying node types. And even in case one needs to desperately modify\nnode types on a system without access to libclang, for an occasionally\nsmall change one could just modify the committed metadata structs\nmanually.\n\n\n> So IMO it's a feature not a bug that Peter's approach just uses a perl\n> script. OTOH, the downstream aspects of your patch did seem appealing.\n> So I'd like to see a merger of the two approaches, using perl for the\n> data extraction and then something like what you'd done. Maybe that's\n> the same thing you're saying.\n\nYes, that's what I was trying to say. I'm still doubtful it's a great\nidea to go further down the \"weird subset of C parsed by regexes\" road,\nbut I can live with it. 
If Peter could generate something roughly like\nthe metadata I emitted, I'd rebase my node functions ontop of that.\n\n\n> I also see Peter's point that committing what he has here might be\n> a reasonable first step on that road. Getting the data extraction\n> right is a big chunk of the job, and what we do with it afterward\n> could be improved later.\n\nTo me that seems likely to just cause churn without saving much\neffort. The needed information isn't really the same between generating\nthe node functions as text and collecting the metadata for \"generic node\nfunctions\", and none of the output is the same.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 14 Jul 2021 18:24:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 07.06.21 22:27, Peter Eisentraut wrote:\n> I wrote a script to automatically generate the node support functions \n> (copy, equal, out, and read, as well as the node tags enum) from the \n> struct definitions.\n> \n> The first eight patches are to clean up various inconsistencies to make \n> parsing or generation easier.\n\nAre there any concerns about the patches 0001 through 0008? Otherwise, \nmaybe we could get those out of the way.\n\n\n", "msg_date": "Mon, 19 Jul 2021 08:59:18 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> The first eight patches are to clean up various inconsistencies to make \n>> parsing or generation easier.\n\n> Are there any concerns about the patches 0001 through 0008? Otherwise, \n> maybe we could get those out of the way.\n\nI looked through those and don't have any complaints (though I just\neyeballed them, I didn't see what a compiler would say). 
I see\nyou pushed a couple of them already.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Jul 2021 17:25:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Here is another set of preparatory patches that clean up various special \ncases and similar in the node support.\n\n0001-Remove-T_Expr.patch\n\nRemoves unneeded T_Expr.\n\n0002-Add-COPY_ARRAY_FIELD-and-COMPARE_ARRAY_FIELD.patch\n0003-Add-WRITE_INDEX_ARRAY.patch\n\nThese add macros to handle a few cases that were previously hand-coded.\n\n0004-Make-node-output-prefix-match-node-structure-name.patch\n\nSome nodes' output/read functions use a label that is slightly different \nfrom their node name, e.g., \"NOTIFY\" instead of \"NOTIFYSTMT\". This \ncleans that up so that an automated approach doesn't have to deal with \nthese special cases.\n\n0005-Add-Cardinality-typedef.patch\n\nAdds a typedef Cardinality for double fields that store an estimated row \nor other count. Works alongside Cost and Selectivity.\n\nThis is useful because it appears that the serialization format for \nthese float fields depends on their intent: Cardinality => %.0f, Cost => \n%.2f, Selectivity => %.4f. The only remaining exception is allvisfrac, \nwhich uses %.6f. Maybe that could also be a Selectivity, but I left it \nas is. 
I think this improves the clarity in this area.", "msg_date": "Tue, 17 Aug 2021 16:36:45 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On Tue, 2021-08-17 at 16:36 +0200, Peter Eisentraut wrote:\r\n> Here is another set of preparatory patches that clean up various special \r\n> cases and similar in the node support.\r\n> \r\n> 0001-Remove-T_Expr.patch\r\n> \r\n> Removes unneeded T_Expr.\r\n> \r\n> 0002-Add-COPY_ARRAY_FIELD-and-COMPARE_ARRAY_FIELD.patch\r\n> 0003-Add-WRITE_INDEX_ARRAY.patch\r\n> \r\n> These add macros to handle a few cases that were previously hand-coded.\r\n\r\nThese look sane to me.\r\n\r\n> 0004-Make-node-output-prefix-match-node-structure-name.patch\r\n> \r\n> Some nodes' output/read functions use a label that is slightly different \r\n> from their node name, e.g., \"NOTIFY\" instead of \"NOTIFYSTMT\". This \r\n> cleans that up so that an automated approach doesn't have to deal with \r\n> these special cases.\r\n\r\nIs there any concern about the added serialization length, or is that\r\ntrivial in practice? The one that particularly caught my eye is\r\nRANGETBLENTRY, which was previously RTE. But I'm not very well-versed\r\nin all the places these strings can be generated and stored.\r\n\r\n> 0005-Add-Cardinality-typedef.patch\r\n> \r\n> Adds a typedef Cardinality for double fields that store an estimated row \r\n> or other count. 
Works alongside Cost and Selectivity.\r\n\r\nShould RangeTblEntry.enrtuples also be a Cardinality?\r\n\r\n--Jacob\r\n", "msg_date": "Thu, 2 Sep 2021 18:53:37 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 02.09.21 20:53, Jacob Champion wrote:\n>> 0004-Make-node-output-prefix-match-node-structure-name.patch\n>>\n>> Some nodes' output/read functions use a label that is slightly different\n>> from their node name, e.g., \"NOTIFY\" instead of \"NOTIFYSTMT\". This\n>> cleans that up so that an automated approach doesn't have to deal with\n>> these special cases.\n> \n> Is there any concern about the added serialization length, or is that\n> trivial in practice? The one that particularly caught my eye is\n> RANGETBLENTRY, which was previously RTE. But I'm not very well-versed\n> in all the places these strings can be generated and stored.\n\nThese are just matters of taste. Let's wait a bit more to see if anyone \nis concerned.\n\n>> 0005-Add-Cardinality-typedef.patch\n>>\n>> Adds a typedef Cardinality for double fields that store an estimated row\n>> or other count. Works alongside Cost and Selectivity.\n> \n> Should RangeTblEntry.enrtuples also be a Cardinality?\n\nYes, I'll add that.\n\n\n", "msg_date": "Tue, 7 Sep 2021 10:57:02 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On Tue, Sep 07, 2021 at 10:57:02AM +0200, Peter Eisentraut wrote:\n> On 02.09.21 20:53, Jacob Champion wrote:\n> >>0004-Make-node-output-prefix-match-node-structure-name.patch\n> >>\n> >>Some nodes' output/read functions use a label that is slightly different\n> >>from their node name, e.g., \"NOTIFY\" instead of \"NOTIFYSTMT\". 
This\n> >>cleans that up so that an automated approach doesn't have to deal with\n> >>these special cases.\n> >\n> >Is there any concern about the added serialization length, or is that\n> >trivial in practice? The one that particularly caught my eye is\n> >RANGETBLENTRY, which was previously RTE. But I'm not very well-versed\n> >in all the places these strings can be generated and stored.\n> \n> These are just matters of taste. Let's wait a bit more to see if anyone is\n> concerned.\n\nI am not concerned about changing the serialization length this much. The\nformat is already quite verbose, and this change is small relative to that\nexisting verbosity.\n\n\n", "msg_date": "Tue, 7 Sep 2021 21:30:46 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 17.08.21 16:36, Peter Eisentraut wrote:\n> Here is another set of preparatory patches that clean up various special \n> cases and similar in the node support.\n\nThis set of patches has been committed. I'll close this commit fest \nentry and come back with the main patch series in the future.\n\n\n", "msg_date": "Wed, 15 Sep 2021 21:01:33 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 15.09.21 21:01, Peter Eisentraut wrote:\n> On 17.08.21 16:36, Peter Eisentraut wrote:\n>> Here is another set of preparatory patches that clean up various \n>> special cases and similar in the node support.\n> \n> This set of patches has been committed.  I'll close this commit fest \n> entry and come back with the main patch series in the future.\n\nHere is an updated version of my original patch, so we have something to \ncontinue the discussion around. This takes into account all the \npreparatory patches that have been committed in the meantime. 
I have \nalso changed it so that the array size of a pointer is now explicitly \ndeclared using pg_node_attr(array_size(N)) instead of picking the most \nrecent scalar field, which was admittedly hacky. I have also added MSVC \nbuild support and made the Perl code more portable, so that the cfbot \ndoesn't have to be sad.", "msg_date": "Mon, 11 Oct 2021 16:22:33 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": ">\n> build support and made the Perl code more portable, so that the cfbot\n> doesn't have to be sad.\n>\n\nWas this also the reason for doing the output with print statements rather\nthan using one of the templating libraries? I'm mostly just curious, and\ncertainly don't want it to get in the way of working code.\n\nbuild support and made the Perl code more portable, so that the cfbot \ndoesn't have to be sad.Was this also the reason for doing the output with print statements rather than using one of the templating libraries? I'm mostly just curious, and certainly don't want it to get in the way of working code.", "msg_date": "Mon, 11 Oct 2021 21:06:50 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 12.10.21 03:06, Corey Huinker wrote:\n> build support and made the Perl code more portable, so that the cfbot\n> doesn't have to be sad.\n> \n> \n> Was this also the reason for doing the output with print statements \n> rather than using one of the templating libraries? 
I'm mostly just \n> curious, and certainly don't want it to get in the way of working code.\n\nUnless there is a templating library that ships with Perl (>= 5.8.3, \napparently now), this seems impractical.\n\n\n\n", "msg_date": "Tue, 12 Oct 2021 15:04:04 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "\nOn 10/11/21 10:22 AM, Peter Eisentraut wrote:\n>\n> On 15.09.21 21:01, Peter Eisentraut wrote:\n>> On 17.08.21 16:36, Peter Eisentraut wrote:\n>>> Here is another set of preparatory patches that clean up various\n>>> special cases and similar in the node support.\n>>\n>> This set of patches has been committed.  I'll close this commit fest\n>> entry and come back with the main patch series in the future.\n>\n> Here is an updated version of my original patch, so we have something\n> to continue the discussion around.  This takes into account all the\n> preparatory patches that have been committed in the meantime.  I have\n> also changed it so that the array size of a pointer is now explicitly\n> declared using pg_node_attr(array_size(N)) instead of picking the most\n> recent scalar field, which was admittedly hacky.  I have also added\n> MSVC build support and made the Perl code more portable, so that the\n> cfbot doesn't have to be sad.\n\n\n\nI haven't been through the whole thing, but I did notice this: the\ncomment stripping code looks rather fragile. I think it would blow up if\nthere were a continuation line not starting with  qr/\\s*\\*/. 
It's a lot\nsimpler and more robust to do this if you slurp the file in whole.\nHere's what we do in the buildfarm code:\n\n    my $src = file_contents($_);\n # strip C comments\n    # We used to use the recipe in perlfaq6 but there is actually no point.\n    # We don't need to keep the quoted string values anyway, and\n    # on some platforms the complex regex causes perl to barf and crash.\n    $src =~ s{/\\*.*?\\*/}{}gs;\n\nAfter you've done that splitting it into lines is pretty simple.\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 12 Oct 2021 09:52:15 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 12.10.21 15:52, Andrew Dunstan wrote:\n> I haven't been through the whole thing, but I did notice this: the\n> comment stripping code looks rather fragile. I think it would blow up if\n> there were a continuation line not starting with  qr/\\s*\\*/. It's a lot\n> simpler and more robust to do this if you slurp the file in whole.\n> Here's what we do in the buildfarm code:\n> \n>     my $src = file_contents($_);\n> # strip C comments\n>     # We used to use the recipe in perlfaq6 but there is actually no point.\n>     # We don't need to keep the quoted string values anyway, and\n>     # on some platforms the complex regex causes perl to barf and crash.\n>     $src =~ s{/\\*.*?\\*/}{}gs;\n> \n> After you've done that splitting it into lines is pretty simple.\n\nHere is an updated patch, with some general rebasing, and the above \nimprovement. It now also generates #include lines necessary in \ncopyfuncs etc. to pull in all the node types it operates on.\n\nFurther, I have looked more into the \"metadata\" approach discussed in \n[0]. It's pretty easy to generate that kind of output from the data \nstructures my script produces. 
You just loop over all the node types \nand print stuff and keep a few counters. I don't plan to work on that \nat this time, but I just wanted to point out that if people wanted to \nmove into that direction, my patch wouldn't be in the way.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/20190828234136.fk2ndqtld3onfrrp%40alap3.anarazel.de", "msg_date": "Wed, 29 Dec 2021 12:08:17 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Rebased patch to resolve some merge conflicts\n\nOn 29.12.21 12:08, Peter Eisentraut wrote:\n> On 12.10.21 15:52, Andrew Dunstan wrote:\n>> I haven't been through the whole thing, but I did notice this: the\n>> comment stripping code looks rather fragile. I think it would blow up if\n>> there were a continuation line not starting with  qr/\\s*\\*/. It's a lot\n>> simpler and more robust to do this if you slurp the file in whole.\n>> Here's what we do in the buildfarm code:\n>>\n>>      my $src = file_contents($_);\n>>      # strip C comments\n>>      # We used to use the recipe in perlfaq6 but there is actually no \n>> point.\n>>      # We don't need to keep the quoted string values anyway, and\n>>      # on some platforms the complex regex causes perl to barf and crash.\n>>      $src =~ s{/\\*.*?\\*/}{}gs;\n>>\n>> After you've done that splitting it into lines is pretty simple.\n> \n> Here is an updated patch, with some general rebasing, and the above \n> improvement.  It now also generates #include lines necessary in \n> copyfuncs etc. to pull in all the node types it operates on.\n> \n> Further, I have looked more into the \"metadata\" approach discussed in \n> [0].  It's pretty easy to generate that kind of output from the data \n> structures my script produces.  You just loop over all the node types \n> and print stuff and keep a few counters.  
I don't plan to work on that \n> at this time, but I just wanted to point out that if people wanted to \n> move into that direction, my patch wouldn't be in the way.\n> \n> \n> [0]: \n> https://www.postgresql.org/message-id/flat/20190828234136.fk2ndqtld3onfrrp%40alap3.anarazel.de", "msg_date": "Mon, 24 Jan 2022 16:15:48 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "What do people think about this patch now?\n\nI have received some feedback on several small technical issues, which \nhave all been fixed. This patch has been around for several commit \nfests now and AFAICT, nothing has broken it. This is just to indicate \nthat the parsing isn't as flimsy as one might fear.\n\nOne thing thing that is waiting behind this patch is that you currently \ncannot put utility commands into parse-time SQL functions, because there \nis no full out/read support for those. This patch would fix that \nproblem. (There is a little bit of additional work necessary, but I \nhave that mostly worked out in a separate branch.)\n\n\nOn 24.01.22 16:15, Peter Eisentraut wrote:\n> Rebased patch to resolve some merge conflicts\n> \n> On 29.12.21 12:08, Peter Eisentraut wrote:\n>> On 12.10.21 15:52, Andrew Dunstan wrote:\n>>> I haven't been through the whole thing, but I did notice this: the\n>>> comment stripping code looks rather fragile. I think it would blow up if\n>>> there were a continuation line not starting with  qr/\\s*\\*/. 
It's a lot\n>>> simpler and more robust to do this if you slurp the file in whole.\n>>> Here's what we do in the buildfarm code:\n>>>\n>>>      my $src = file_contents($_);\n>>>      # strip C comments\n>>>      # We used to use the recipe in perlfaq6 but there is actually no \n>>> point.\n>>>      # We don't need to keep the quoted string values anyway, and\n>>>      # on some platforms the complex regex causes perl to barf and \n>>> crash.\n>>>      $src =~ s{/\\*.*?\\*/}{}gs;\n>>>\n>>> After you've done that splitting it into lines is pretty simple.\n>>\n>> Here is an updated patch, with some general rebasing, and the above \n>> improvement.  It now also generates #include lines necessary in \n>> copyfuncs etc. to pull in all the node types it operates on.\n>>\n>> Further, I have looked more into the \"metadata\" approach discussed in \n>> [0].  It's pretty easy to generate that kind of output from the data \n>> structures my script produces.  You just loop over all the node types \n>> and print stuff and keep a few counters.  I don't plan to work on that \n>> at this time, but I just wanted to point out that if people wanted to \n>> move into that direction, my patch wouldn't be in the way.\n>>\n>>\n>> [0]: \n>> https://www.postgresql.org/message-id/flat/20190828234136.fk2ndqtld3onfrrp%40alap3.anarazel.de \n>>\n\n\n\n", "msg_date": "Mon, 14 Feb 2022 11:15:57 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> What do people think about this patch now?\n\nI'm in favor of moving forward with this. 
I do not like the\nlibclang-based approach that Andres was pushing, because of the\njump in developer tooling requirements that it'd cause.\n\nEyeballing the patch a bit, I do have some comments:\n\n* It's time for action on the business about extracting comments\nfrom the to-be-deleted code.\n\n* The Perl script is kind of under-commented for my taste.\nIt lacks a copyright notice, too.\n\n* In the same vein, I should not have to reverse-engineer what\nthe available pg_node_attr() properties are or do. Perhaps they\ncould be documented in the comment for the pg_node_attr macro\nin nodes.h.\n\n* Maybe the generated file names could be chosen less opaquely,\nsay \".funcs\" and \".switch\" instead of \".inc1\" and \".inc2\".\n\n* I don't understand why there are changes in the #include\nlists in copyfuncs.c etc?\n\n* I think that more thought needs to be put into the format\nof the *nodes.h struct declarations, because I fear pgindent\nis going to make a hash of what you've done here. When we\ndid similar stuff in the catalog headers, I think we ended\nup moving a lot of end-of-line comments onto their own lines.\n\n* I assume the pg_config_manual.h changes are not meant for\ncommit?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 14 Feb 2022 12:09:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Hi,\n\nOn 2022-02-14 12:09:47 -0500, Tom Lane wrote:\n> I'm in favor of moving forward with this. 
I do not like the\n> libclang-based approach that Andres was pushing, because of the\n> jump in developer tooling requirements that it'd cause.\n\nFWIW, while I don't love the way the header parsing stuff in the patch (vs\nusing libclang or such), I don't have a real problem with it.\n\nI do however not think it's a good idea to commit something generating\nsomething like the existing node functions vs going for a metadata based\napproach at dealing with node functions. That aspect of my patchset is\nindependent of the libclang vs script debate.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Feb 2022 15:23:48 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I do however not think it's a good idea to commit something generating\n> something like the existing node functions vs going for a metadata based\n> approach at dealing with node functions. That aspect of my patchset is\n> independent of the libclang vs script debate.\n\nI think that finishing out and committing this patch is a fine step\non the way to that. Either that, or you should go ahead and merge\nyour backend work onto what Peter's done ... but that seems like\nit'll be bigger and harder to review.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 14 Feb 2022 18:32:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Hi,\n\nOn 2022-02-14 18:32:21 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I do however not think it's a good idea to commit something generating\n> > something like the existing node functions vs going for a metadata based\n> > approach at dealing with node functions. 
That aspect of my patchset is\n> > independent of the libclang vs script debate.\n> \n> I think that finishing out and committing this patch is a fine step\n> on the way to that.\n\nI think most of gen_node_support.pl would change - a lot of that is generating\nthe node functions, which would not be needed anymore. And most of the\nremainder would change as well.\n\n\n> Either that, or you should go ahead and merge your backend work onto what\n> Peter's done ...\n\nI did offer to do part of that a while ago:\n\nhttps://www.postgresql.org/message-id/20210715012454.bvwg63farhmfwb47%40alap3.anarazel.de\n\nOn 2021-07-14 18:24:54 -0700, Andres Freund wrote:\n> If Peter could generate something roughly like the metadata I emitted, I'd\n> rebase my node functions ontop of that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Feb 2022 17:32:46 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-02-14 18:32:21 -0500, Tom Lane wrote:\n>> I think that finishing out and committing this patch is a fine step\n>> on the way to that.\n\n> I think most of gen_node_support.pl would change - a lot of that is generating\n> the node functions, which would not be needed anymore. And most of the\n> remainder would change as well.\n\nWell, yeah, we'd be throwing away some of that Perl code. So what?\nI think that most of the intellectual content in this patch is getting\nthe data source nailed down, ie putting the annotations into the *nodes.h\nfiles and building the code to parse that. I don't have a problem\nwith throwing away and rewriting the back-end part of the patch later.\n\nAnd, TBH, I am not really convinced that a pure metadata approach is going\nto work out, or that it will have sufficient benefit over just automating\nthe way we do it now. 
I notice that Peter's patch leaves a few\ntoo-much-of-a-special-case functions unconverted, which is no real\nproblem for his approach; but it seems like you won't get to take such\nshortcuts in a metadata-reading implementation.\n\nThe bottom line here is that I believe that Peter's patch could get us out\nof the business of hand-maintaining the backend/nodes/*.c files in the\nv15 timeframe, which would be a very nice thing. I don't see how your\npatch will be ready on anywhere near the same schedule. When it is ready,\nwe can switch, but in the meantime I'd like the maintenance benefit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 14 Feb 2022 20:47:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Hi,\n\nOn 2022-02-14 20:47:33 -0500, Tom Lane wrote:\n> I think that most of the intellectual content in this patch is getting\n> the data source nailed down, ie putting the annotations into the *nodes.h\n> files and building the code to parse that. I don't have a problem\n> with throwing away and rewriting the back-end part of the patch later.\n\nImo that cuts the other way - without going for a metadata based approach we\ndon't know if we made the annotations rich enough...\n\n\n> And, TBH, I am not really convinced that a pure metadata approach is going\n> to work out, or that it will have sufficient benefit over just automating\n> the way we do it now. 
I notice that Peter's patch leaves a few\n> too-much-of-a-special-case functions unconverted, which is no real\n> problem for his approach; but it seems like you won't get to take such\n> shortcuts in a metadata-reading implementation.\n\nIMO my prototype of that approach pretty conclusively shows that it's feasible\nand worthwhile.\n\n\n> The bottom line here is that I believe that Peter's patch could get us out\n> of the business of hand-maintaining the backend/nodes/*.c files in the v15\n> timeframe, which would be a very nice thing. I don't see how your patch\n> will be ready on anywhere near the same schedule. When it is ready, we can\n> switch, but in the meantime I'd like the maintenance benefit.\n\nI'm not going to try to prevent the patch from going in. But I don't think\nit's a great idea to this without even trying to ensure the annotations are\nrich enough...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Feb 2022 18:10:25 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 14.02.22 18:09, Tom Lane wrote:\n> * It's time for action on the business about extracting comments\n> from the to-be-deleted code.\n\ndone\n\n> * The Perl script is kind of under-commented for my taste.\n> It lacks a copyright notice, too.\n\ndone\n\n> * In the same vein, I should not have to reverse-engineer what\n> the available pg_node_attr() properties are or do. Perhaps they\n> could be documented in the comment for the pg_node_attr macro\n> in nodes.h.\n\ndone\n\n> * Maybe the generated file names could be chosen less opaquely,\n> say \".funcs\" and \".switch\" instead of \".inc1\" and \".inc2\".\n\ndone\n\n> * I don't understand why there are changes in the #include\n> lists in copyfuncs.c etc?\n\nThose are #include lines required for the definitions of various \nstructs. 
Since the generation script already knows which header files \nare relevant (they are its input files), it can just generate the \nrequired #include lines as well. That way, the remaining copyfuncs.c \nonly has #include lines for things that the (remaining) file itself \nneeds, not what the files included by it need, and if a new header file \nwere to be added, it doesn't have to be added in 4+ places.\n\n> * I think that more thought needs to be put into the format\n> of the *nodes.h struct declarations, because I fear pgindent\n> is going to make a hash of what you've done here. When we\n> did similar stuff in the catalog headers, I think we ended\n> up moving a lot of end-of-line comments onto their own lines.\n\nI have tested pgindent repeatedly throughout this project, and it \ndoesn't look too bad. You are right that some manual curation of \ncomment formatting would be sensible, but I think that might be better \ndone as a separate patch.\n\n> * I assume the pg_config_manual.h changes are not meant for\n> commit?\n\nright", "msg_date": "Fri, 18 Feb 2022 07:51:56 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On Fri, 18 Feb 2022 at 19:52, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> [ v5-0001-Automatically-generate-node-support-functions.patch ]\n\nI've been looking over the patch and wondering the best way to move\nthis forward.\n\nBut first a couple of things I noted down from reading the patch:\n\n1. You've written:\n\n * Unknown attributes are ignored. Some additional attributes are used for\n * special \"hack\" cases.\n\nI think these really should all be documented. If someone needs to\nuse one of these hacks then they're going to need to trawl through\nPerl code to see if you've implemented something that matches the\nrequirements. 
I'd personally rather not have to look at the Perl code\nto find out which attributes I need to use for my new field. I'd bet\nI'm not the only one.\n\n2. Some of these comment lines have become pretty long after having\nadded the attribute macro.\n\ne.g.\n\nPlannerInfo *subroot pg_node_attr(readwrite_ignore); /* modified\n\"root\" for planning the subquery;\n not printed, too large, not interesting enough */\n\nI wonder if you'd be better to add a blank line above, then put the\ncomment on its own line, i.e:\n\n /* modified \"root\" for planning the subquery; not printed, too large,\nnot interesting enough */\nPlannerInfo *subroot pg_node_attr(readwrite_ignore);\n\n3. My biggest concern with this patch is it introducing some change in\nbehaviour with node copy/equal/read/write. I spent some time in my\ndiff tool comparing the files the Perl script built to the existing\ncode. Unfortunately, that job is pretty hard due to various order\nchanges in the outputted functions. I wonder if it's worth making a\npass in master and changing the function order to match what the\nscript outputs so that a proper comparison can be done just before\ncommitting the patch. The problem I see is that master is currently\na very fast-moving target and a detailed comparison would be much\neasier to do if the functions were in the same order. I'd be a bit\nworried that someone might commit something that requires some special\nbehaviour and that commit goes in sometime between when you've done a\ndetailed comparison and when you commit the full patch.\n\nAlthough, perhaps you've just been copying and pasting code into the\ncorrect order before comparing, which might be good enough if it's\nsimple enough to do.\n\nI've not really done any detailed review of the Perl code. I'm not the\nbest person for that, but I do feel like the important part is making\nsure the outputted files logically match the existing files.\n\nAlso, I'm quite keen to see this work make it into v15. 
Do you think\nyou'll get time to do that? Thanks for working on it.\n\nDavid\n\n\n", "msg_date": "Fri, 25 Mar 2022 10:57:29 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 24.03.22 22:57, David Rowley wrote:\n> * Unknown attributes are ignored. Some additional attributes are used for\n> * special \"hack\" cases.\n> \n> I think these really should all be documented. If someone needs to\n> use one of these hacks then they're going to need to trawl through\n> Perl code to see if you've implemented something that matches the\n> requirements. I'd personally rather not have to look at the Perl code\n> to find out which attributes I need to use for my new field. I'd bet\n> I'm not the only one.\n\nThe only such hacks are the three path_hack[1-3] cases that correspond \nto the current _outPathInfo(). I've been thinking long and hard about \nhow to generalize any of these but couldn't come up with much yet. I \nsuppose we could replace the names \"path_hackN\" with something more \ndescriptive like \"reloptinfo_light\" and document those in nodes.h, which \nmight address your concern on paper. But I think you'd still need to \nunderstand all of that by looking at the definition of Path and its \nuses, so documenting those in nodes.h wouldn't really help, I think. \nOther ideas welcome.\n\n> 2. 
Some of these comment lines have become pretty long after having\n> added the attribute macro.\n> \n> e.g.\n> \n> PlannerInfo *subroot pg_node_attr(readwrite_ignore); /* modified\n> \"root\" for planning the subquery;\n> not printed, too large, not interesting enough */\n> \n> I wonder if you'd be better to add a blank line above, then put the\n> comment on its own line, i.e:\n> \n> /* modified \"root\" for planning the subquery; not printed, too large,\n> not interesting enough */\n> PlannerInfo *subroot pg_node_attr(readwrite_ignore);\n\nYes, my idea was to make a separate patch first that reformats many of \nthe structs and comments in that way.\n\n> 3. My biggest concern with this patch is it introducing some change in\n> behaviour with node copy/equal/read/write. I spent some time in my\n> diff tool comparing the files the Perl script built to the existing\n> code. Unfortunately, that job is pretty hard due to various order\n> changes in the outputted functions. I wonder if it's worth making a\n> pass in master and changing the function order to match what the\n> script outputs so that a proper comparison can be done just before\n> committing the patch.\n\nJust reordering won't really help. The content of the functions will be \ndifferent, for example because nodes that include Path will include its \nfields inline instead of calling out to _outPathInfo().\n\nIMO, the confirmation that it works is in COPY_PARSE_PLAN_TREES etc.\n\n> The problem I see is that master is currently\n> a very fast-moving target and a detailed comparison would be much\n> easier to do if the functions were in the same order. I'd be a bit\n> worried that someone might commit something that requires some special\n> behaviour and that commit goes in sometime between when you've done a\n> detailed and when you commit the full patch.\n\n> Also, I'm quite keen to see this work make it into v15. Do you think\n> you'll get time to do that? 
Thanks for working on it.\n\nMy thinking right now is to wait for the PG16 branch to open and then \nconsider putting it in early. That would avoid creating massive \nconflicts with concurrent patches that change node types, and it would \nalso relax some concerns about undiscovered behavior changes.\n\nIf there is interest in getting it into PG15, I do have capacity to work \non it. But in my estimation, this feature is more useful for future \ndevelopment, so squeezing in just before feature freeze wouldn't provide \nadditional benefit.\n\n\n", "msg_date": "Fri, 25 Mar 2022 14:08:32 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 24.03.22 22:57, David Rowley wrote:\n>> Also, I'm quite keen to see this work make it into v15. Do you think\n>> you'll get time to do that? Thanks for working on it.\n\n> My thinking right now is to wait for the PG16 branch to open and then \n> consider putting it in early.\n\n+1. However, as noted by David (and I think I made similar points awhile\nago), the patch could still use a lot of mop-up work. It'd be prudent to\ncontinue working on it so it will actually be ready to go when the branch\nis made.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Mar 2022 09:32:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 25.03.22 14:32, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 24.03.22 22:57, David Rowley wrote:\n>>> Also, I'm quite keen to see this work make it into v15. Do you think\n>>> you'll get time to do that? Thanks for working on it.\n> \n>> My thinking right now is to wait for the PG16 branch to open and then\n>> consider putting it in early.\n> \n> +1. 
However, as noted by David (and I think I made similar points awhile\n> ago), the patch could still use a lot of mop-up work. It'd be prudent to\n> continue working on it so it will actually be ready to go when the branch\n> is made.\n\nThe v5 patch was intended to address all the comments you made in your \nFeb. 14 mail. I'm not aware of any open issues from that.\n\n\n", "msg_date": "Fri, 25 Mar 2022 16:20:05 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "I rebased this mostly out of curiousity. I fixed some smallish\nconflicts and fixed a typedef problem new in JSON support; however, even\nwith these fixes it doesn't compile, because JsonPathSpec uses a novel\ntypedef pattern that apparently will need bespoke handling in the\ngen_nodes_support.pl script. It seemed better to post this even without\nthat, though.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"El miedo atento y previsor es la madre de la seguridad\" (E. Burke)", "msg_date": "Tue, 19 Apr 2022 13:40:42 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I rebased this mostly out of curiousity. I fixed some smallish\n> conflicts and fixed a typedef problem new in JSON support; however, even\n> with these fixes it doesn't compile, because JsonPathSpec uses a novel\n> typedef pattern that apparently will need bespoke handling in the\n> gen_nodes_support.pl script. It seemed better to post this even without\n> that, though.\n\nMaybe we should fix JsonPathSpec to be less creative while we\nstill can? 
It's not real clear to me why that typedef even exists,\nrather than using a String node, or just a plain char * field.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Apr 2022 10:39:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 19.04.22 16:39, Tom Lane wrote:\n> Maybe we should fix JsonPathSpec to be less creative while we\n> still can? It's not real clear to me why that typedef even exists,\n> rather than using a String node, or just a plain char * field.\n\nYeah, let's get rid of it and use char *.\n\nI see in JsonCommon a pathspec is converted to a String node, so it's \nnot like JsonPathSpec is some kind of universal representation of the \nthing anyway.\n\n\n", "msg_date": "Tue, 19 Apr 2022 16:53:45 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 19.04.22 13:40, Alvaro Herrera wrote:\n> I rebased this mostly out of curiousity. I fixed some smallish\n> conflicts and fixed a typedef problem new in JSON support; however, even\n> with these fixes it doesn't compile, because JsonPathSpec uses a novel\n> typedef pattern that apparently will need bespoke handling in the\n> gen_nodes_support.pl script. It seemed better to post this even without\n> that, though.\n\nI have committed your change to the JsonTableColumnType enum and the \nremoval of JsonPathSpec. Other than that and some whitespace changes, I \ndidn't find anything in your 0002 patch that was different from my last \nsubmitted patch. 
Did I miss anything?\n\n\n", "msg_date": "Wed, 4 May 2022 17:45:55 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 2022-May-04, Peter Eisentraut wrote:\n\n> I have committed your change to the JsonTableColumnType enum and the removal\n> of JsonPathSpec.\n\nThanks!\n\n> Other than that and some whitespace changes, I didn't find anything in\n> your 0002 patch that was different from my last submitted patch. Did\n> I miss anything?\n\nNo, I had just fixed one simple conflict IIRC, but I had made no other\nchanges.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Porque francamente, si para saber manejarse a uno mismo hubiera que\nrendir examen... ¿Quién es el machito que tendría carnet?\" (Mafalda)\n\n\n", "msg_date": "Wed, 4 May 2022 18:03:07 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 25.03.22 14:08, Peter Eisentraut wrote:\n>> 2. Some of these comment lines have become pretty long after having\n>> added the attribute macro.\n>>\n>> e.g.\n>>\n>> PlannerInfo *subroot pg_node_attr(readwrite_ignore); /* modified\n>> \"root\" for planning the subquery;\n>>     not printed, too large, not interesting enough */\n>>\n>> I wonder if you'd be better to add a blank line above, then put the\n>> comment on its own line, i.e:\n>>\n>>   /* modified \"root\" for planning the subquery; not printed, too large,\n>> not interesting enough */\n>> PlannerInfo *subroot pg_node_attr(readwrite_ignore);\n> \n> Yes, my idea was to make a separate patch first that reformats many of \n> the structs and comments in that way.\n\nHere is a patch that reformats the relevant (and a few more) comments \nthat way. 
This has been run through pgindent, so the formatting should \nbe stable.", "msg_date": "Mon, 23 May 2022 07:49:52 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Here is a patch that reformats the relevant (and a few more) comments \n> that way. This has been run through pgindent, so the formatting should \n> be stable.\n\nNow that that's been pushed, the main patch is of course quite broken.\nAre you working on a rebase?\n\nI looked through the last published version of the main patch (Alvaro's\n0002 from 2022-04-19), without trying to actually test it, and found\na couple of things that look wrong in the Makefiles:\n\n* AFAICT, the infrastructure for removing the generated files at\n\"make *clean\" is incomplete. In particular I don't see any code\nfor removing the symlinks or the associated stamp file during\n\"make clean\". It looks like the existing header symlinks are\nall cleaned up in src/include/Makefile's \"clean\" rule, so you\ncould do likewise for these. Also, the \"make maintainer-clean\"\ninfrastructure seems incomplete --- shouldn't src/backend/Makefile's\nmaintainer-clean rule now also do\n\t$(MAKE) -C nodes $@\n?\n\n* There are some useful comments in backend/utils/Makefile that\nI think should be carried over along with the make rules that\n(it looks like) you cribbed from there, notably\n\n# fmgr-stamp records the last time we ran Gen_fmgrtab.pl. 
We don't rely on\n# the timestamps of the individual output files, because the Perl script\n# won't update them if they didn't change (to avoid unnecessary recompiles).\n\n# These generated headers must be symlinked into builddir/src/include/,\n# using absolute links for the reasons explained in src/backend/Makefile.\n# We use header-stamp to record that we've done this because the symlinks\n# themselves may appear older than fmgr-stamp.\n\nand something similar to this for the \"clean\" rule:\n# fmgroids.h, fmgrprotos.h, fmgrtab.c, fmgr-stamp, and errcodes.h are in the\n# distribution tarball, so they are not cleaned here.\n\n\nAlso, I share David's upthread allergy to the option names \"path_hackN\"\nand to documenting those only inside the conversion script. I think\nthe existing text that you moved into the script, such as this bit:\n\n\t\t# We do not print the parent, else we'd be in infinite\n\t\t# recursion. We can print the parent's relids for\n\t\t# identification purposes, though. We print the pathtarget\n\t\t# only if it's not the default one for the rel. 
We also do\n\t\t# not print the whole of param_info, since it's printed via\n\t\t# RelOptInfo; it's sufficient and less cluttering to print\n\t\t# just the required outer relids.\n\nis perfectly adequate as documentation, it just needs to be somewhere else\n(pathnodes.h seems fine, if not nodes.h) and labeled as to exactly which\npg_node_attr option invokes which behavior.\n\nBTW, I think this: \"Unknown attributes are ignored\" is a seriously\nbad idea; it will allow typos to escape detection.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 03 Jul 2022 15:14:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 03.07.22 21:14, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> Here is a patch that reformats the relevant (and a few more) comments\n>> that way. This has been run through pgindent, so the formatting should\n>> be stable.\n> \n> Now that that's been pushed, the main patch is of course quite broken.\n> Are you working on a rebase?\n\nattached\n\n> * AFAICT, the infrastructure for removing the generated files at\n> \"make *clean\" is incomplete.\n\nI have fixed all the makefiles per your suggestions.\n\n> and something similar to this for the \"clean\" rule:\n> # fmgroids.h, fmgrprotos.h, fmgrtab.c, fmgr-stamp, and errcodes.h are in the\n> # distribution tarball, so they are not cleaned here.\n\nExcept this one, since there is no clean rule. 
I think seeing that \nfiles are listed under a maintainer-clean target conveys that same message.\n\n> Also, I share David's upthread allergy to the option names \"path_hackN\"\n> and to documenting those only inside the conversion script.\n\nI'll look into that again.\n\n> BTW, I think this: \"Unknown attributes are ignored\" is a seriously\n> bad idea; it will allow typos to escape detection.\n\ngood point", "msg_date": "Mon, 4 Jul 2022 14:23:36 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> [ v6-0001-Automatically-generate-node-support-functions.patch ]\n\nI've now spent some time looking at this fairly carefully, and I think\nthis is a direction we can pursue, but I'm not yet happy about the\namount of magic knowledge that's embedded in the gen_node_support.pl\nscript rather than being encoded in pg_node_attr markers. Once this\nis in place, people will stop thinking about the nodes/*funcs.c\ninfrastructure altogether when they write patches, at least until\nthey get badly burned by it; so I don't want there to be big gotchas.\nAs an example, heaven help the future hacker who decides to change\nthe contents of A_Const and doesn't realize that that still has a\nmanually-implemented copyfuncs.c routine. 
So rather than embedding\nknowledge in gen_node_support.pl like this:\n\nmy @custom_copy = qw(A_Const Const ExtensibleNode);\n\nI think we ought to put it into the *nodes.h headers as much as\npossible, perhaps like this:\n\ntypedef struct A_Const pg_node_attr(custom_copy)\n{ ...\n\nI will grant that there are some things that are okay to embed\nin gen_node_support.pl, such as the list of @scalar_types,\nbecause if you need to add an entry there you will find it out\nwhen the script complains it doesn't know how to process a field.\nSo there is some judgment involved here, but on the whole I want\nto err on the side of exposing decisions in the headers.\n\nSo I propose that we handle these things via struct-level pg_node_attr\nmarkers, rather than node-type lists embedded in the script:\n\nabstract_types\nno_copy\nno_read_write\nno_read\ncustom_copy\ncustom_readwrite\n\n(The markings that \"we are not publishing right now to stay level with the\nmanual system\" are fine to apply in the script, since that's probably a\ntemporary thing anyway. Also, I don't have a problem with applying\nno_copy etc to the contents of whole files in the script, rather than\ntediously labeling each struct in such files.)\n\nThe hacks for scalar-copying EquivalenceClass*, EquivalenceMember*,\nstruct CustomPathMethods*, and CustomScan.methods should be replaced\nwith \"pg_node_attr(copy_as_scalar)\" labels on affected fields.\n\nI wonder whether this:\n\n # We do not support copying Path trees, mainly\n # because the circular linkages between RelOptInfo\n # and Path nodes can't be handled easily in a\n # simple depth-first traversal.\n\ncouldn't be done better by inventing an inheritable no_copy attr\nto attach to the Path supertype. Or maybe it'd be okay to just\nautomatically inherit the no_xxx properties from the supertype?\n\nI don't terribly like the ad-hoc mechanism for not comparing\nCoercionForm fields. 
OTOH, I am not sure whether replacing it\nwith per-field equal_ignore attrs would be better; there's at least\nan argument that that invites bugs of omission. But implementing\nthis with an uncommented test deep inside a script that most hackers\nshould not need to read is not good. On the whole I'd lean towards\nthe equal_ignore route.\n\nI'm confused by the \"various field types to ignore\" at the end\nof the outfuncs/readfuncs code. Do we really ignore those now?\nHow could that be safe? If it is safe, wouldn't it be better\nto handle that with per-field pg_node_attrs? Silently doing\nwhat might be the wrong thing doesn't seem good.\n\nIn the department of nitpicks:\n\n* copyfuncs.switch.c and equalfuncs.switch.c are missing trailing\nnewlines.\n\n* pgindent is not very happy with a lot of your comments in *nodes.h.\n\n* I think we should add explicit dependencies in backend/nodes/Makefile,\nalong the lines of\n\ncopyfuncs.o: copyfuncs.c copyfuncs.funcs.c copyfuncs.switch.c\n\nOtherwise the whole thing is a big gotcha for anyone not using\n--enable-depend.\n\nI don't know if you have time right now to push forward with these\npoints, but if you don't I can take a stab at it. I would like to\nsee this done and committed PDQ, because 835d476fd already broke\nmany patches that touch *nodes.h and I'd like to get the rest of\nthe fallout in place before rebasing affected patches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 04 Jul 2022 12:59:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "... BTW, I thought of a consideration that we probably need some\nanswer for. As far as I can see, the patch assigns NodeTag values\nsequentially in the order it sees the struct declarations in the\ninput files; an order that doesn't have a lot to do with our past\npractice. 
The problem with that is that it's next door to impossible\nto control the tag value assigned to any one struct. During normal\ndevelopment that's not a big deal, but what if we need to add a\nnode struct in a released branch? As nodes.h observes already,\n\n * Note that inserting or deleting node types changes the numbers of other\n * node types later in the list. This is no problem during development, since\n * the node numbers are never stored on disk. But don't do it in a released\n * branch, because that would represent an ABI break for extensions.\n\nWe used to have the option of sticking new nodetags at the end of\nthe list in this situation, but we won't anymore.\n\nIt might be enough to invent a struct-level attribute allowing\nmanual assignment of node tags, ie\n\ntypedef struct MyNewNode pg_node_attr(nodetag=466)\n\nwhere it'd be the programmer's responsibility to pick a nonconflicting\ntag number. We'd only ever use that in ABI-frozen branches, so\nmanual assignment of the tag value should be workable.\n\nAnyway, this isn't something we have to have before committing,\nbut I think we're going to need it at some point.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Jul 2022 20:54:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "The new patch addresses almost all of these issues.\n\n > Also, I share David's upthread allergy to the option names\n > \"path_hackN\" and to documenting those only inside the conversion\n > script.\n\nI have given these real names now and documented them with the other \nattributes.\n\n > BTW, I think this: \"Unknown attributes are ignored\" is a seriously\n > bad idea; it will allow typos to escape detection.\n\nfixed\n\n(I have also changed the inside of pg_node_attr to be comma-separated, \nrather than space-separated. 
This matches better how attribute-type \nthings look in C.)\n\n> I think we ought to put it into the *nodes.h headers as much as\n> possible, perhaps like this:\n> \n> typedef struct A_Const pg_node_attr(custom_copy)\n> { ...\n\ndone\n\n> So I propose that we handle these things via struct-level pg_node_attr\n> markers, rather than node-type lists embedded in the script:\n> \n> abstract_types\n> no_copy\n> no_read_write\n> no_read\n> custom_copy\n> custom_readwrite\n\ndone (no_copy is actually no_copy_equal, hence renamed)\n\n> The hacks for scalar-copying EquivalenceClass*, EquivalenceMember*,\n> struct CustomPathMethods*, and CustomScan.methods should be replaced\n> with \"pg_node_attr(copy_as_scalar)\" labels on affected fields.\n\nHmm, at least for Equivalence..., this is repeated a bunch of times for \neach field. I don't know if this is really a property of the type or \nsomething you can choose for each field? [not changed in v7 patch]\n\n> I wonder whether this:\n> \n> # We do not support copying Path trees, mainly\n> # because the circular linkages between RelOptInfo\n> # and Path nodes can't be handled easily in a\n> # simple depth-first traversal.\n> \n> couldn't be done better by inventing an inheritable no_copy attr\n> to attach to the Path supertype. Or maybe it'd be okay to just\n> automatically inherit the no_xxx properties from the supertype?\n\nThis is an existing comment in copyfuncs.c. I haven't looked into it \nany further.\n\n> I don't terribly like the ad-hoc mechanism for not comparing\n> CoercionForm fields. OTOH, I am not sure whether replacing it\n> with per-field equal_ignore attrs would be better; there's at least\n> an argument that that invites bugs of omission. But implementing\n> this with an uncommented test deep inside a script that most hackers\n> should not need to read is not good. 
On the whole I'd lean towards\n> the equal_ignore route.\n\nThe definition of CoercionForm in primnodes.h says that the comparison \nbehavior is a property of the type, so it needs to be handled somewhere \ncentrally, not on each field. [not changed in v7 patch]\n\n> I'm confused by the \"various field types to ignore\" at the end\n> of the outfuncs/readfuncs code. Do we really ignore those now?\n> How could that be safe? If it is safe, wouldn't it be better\n> to handle that with per-field pg_node_attrs? Silently doing\n> what might be the wrong thing doesn't seem good.\n\nI have replaced these with explicit ignore markings in pathnodes.h \n(PlannerGlobal, PlannerInfo, RelOptInfo). (This could then use a bit \nmore rearranging some of the per-field comments.)\n\n> * copyfuncs.switch.c and equalfuncs.switch.c are missing trailing\n> newlines.\n\nfixed\n\n> * pgindent is not very happy with a lot of your comments in *nodes.h.\n\nfixed\n\n> * I think we should add explicit dependencies in backend/nodes/Makefile,\n> along the lines of\n> \n> copyfuncs.o: copyfuncs.c copyfuncs.funcs.c copyfuncs.switch.c\n> \n> Otherwise the whole thing is a big gotcha for anyone not using\n> --enable-depend.\n\nfixed -- I think, could use more testing", "msg_date": "Wed, 6 Jul 2022 12:28:31 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 06.07.22 02:54, Tom Lane wrote:\n> It might be enough to invent a struct-level attribute allowing\n> manual assignment of node tags, ie\n> \n> typedef struct MyNewNode pg_node_attr(nodetag=466)\n> \n> where it'd be the programmer's responsibility to pick a nonconflicting\n> tag number. 
We'd only ever use that in ABI-frozen branches, so\n> manual assignment of the tag value should be workable.\n\nYes, I'm aware of this issue, and that was also more or less my idea.\n\n(Well, before the introduction of per-struct attributes, I was thinking \nabout parsing nodes.h to see if the tag is listed explicitly. But this \nis probably better.)\n\n\n", "msg_date": "Wed, 6 Jul 2022 12:30:49 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> [ v7-0001-Automatically-generate-node-support-functions.patch ]\n\nI have gone through this and made some proposed changes (attached),\nand I think it is almost committable. There is one nasty problem\nwe need a solution to, which is that pgindent is not at all on board\nwith this idea of attaching node attrs to typedefs. It pushes them\nto the next line, like this:\n\n@@ -691,7 +709,8 @@\n \t (rel)->reloptkind == RELOPT_OTHER_JOINREL || \\\n \t (rel)->reloptkind == RELOPT_OTHER_UPPER_REL)\n \n-typedef struct RelOptInfo pg_node_attr(no_copy_equal, no_read)\n+typedef struct RelOptInfo\n+pg_node_attr(no_copy_equal, no_read)\n {\n \tNodeTag\t\ttype;\n \nwhich is already enough to break the simplistic parsing in\ngen_node_support.pl. Now, we could fix that parsing logic to deal\nwith this layout, but this also seems to change pgindent's opinion\nof whether the subsequent braced material is part of a typedef or a\nfunction. 
That results in it injecting a lot of vertical space\nthat wasn't there before, which is annoying.\n\nI experimented a bit and found that we could do it this way:\n\n typedef struct RelOptInfo\n {\n+\tpg_node_attr(no_copy_equal, no_read)\n+\n \tNodeTag\t\ttype;\n\nwithout (AFAICT) confusing pgindent, but I've not tried to adapt\nthe perl script or the code to that style.\n\nAnyway, besides that, I have some comments that I've implemented\nin the attached delta patch.\n\n* After further thought I'm okay with your theory that attaching\nspecial copy/equal rules to specific field types is appropriate.\nWe might at some point want the pg_node_attr(copy_as_scalar)\napproach too, but we can always add that later. However, I thought\nsome more comments about it were needed in the *nodes.h files,\nso I added those. (My general feeling about this is that if\nanyone needs to look into gen_node_support.pl to understand how\nthe backend works, we've failed at documentation.)\n\n* As written, the patch created equal() support for all Plan structs,\nwhich is quite a bit of useless code bloat. I solved this by\nseparating no_copy and no_equal properties, so that we could mark\nPlan as no_equal while still having copy support.\n\n* I did not like the semantics of copy_ignore one bit: it was\nrelying on the pre-zeroing behavior of makeNode() to be sane at\nall, and I don't want that to be a requirement. (I foresee\nwanting to flat-copy node contents and turn COPY_SCALAR_FIELD\ninto a no-op.) I replaced it with copy_as(VALUE) to provide\nbetter-defined semantics.\n\n* Likewise, read_write_ignore left the contents of the field after\nreading too squishy for me. I invented read_as(VALUE) parallel\nto copy_as() to fix the semantics, and added a check that you\ncan only use read_write_ignore if the struct is no_read or\nyou provide read_as(). 
(This could be factored differently\nof course.)\n\n* I threw in a bunch more no_read markers to bring the readfuncs.c\ncontents into closer alignment with what we have today. Maybe\nthere is an argument for accepting that code bloat, but it's a\ndiscussion to have later. In any case, most of the pathnodes.h\nstructs HAVE to be marked no_read because there's no sane way\nto reconstruct them from outfuncs output.\n\n* I got rid of the code that stripped underscores from outfuncs\nstruct labels. That seemed like an entirely unnecessary\nbehavioral change.\n\n* FWIW, I'm okay with the question about\n\n \t\t# XXX Previously, for subtyping, only the leaf field name is\n \t\t# used. Ponder whether we want to keep it that way.\n\nI thought that it might make the output too cluttered, but after\nsome study of the results from printing plans and planner data\nstructures, it's not a big addition, and indeed I kind of like it.\n\n* Fixed a bug in write_only_req_outer code.\n\n* Made Plan and Join into abstract nodes.\n\nAnyway, if we can fix the impedance mismatch with pgindent,\nI think this is committable. 
There is a lot of follow-on\nwork that could be considered, but I'd like to get the present\nchanges in place ASAP so that other patches can be rebased\nonto something stable.\n\nI've attached a delta patch, and also repeated v7 so as not\nto confuse the cfbot.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 06 Jul 2022 16:46:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "I wrote:\n> I have gone through this and made some proposed changes (attached),\n> and I think it is almost committable.\n\nI see from the cfbot that it now needs to be taught about RelFileNumber...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Jul 2022 17:46:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 06.07.22 22:46, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> [ v7-0001-Automatically-generate-node-support-functions.patch ]\n> \n> I have gone through this and made some proposed changes (attached),\n\nI have included those.\n\n> and I think it is almost committable. There is one nasty problem\n> we need a solution to, which is that pgindent is not at all on board\n> with this idea of attaching node attrs to typedefs. It pushes them\n> to the next line, like this:\n> \n> @@ -691,7 +709,8 @@\n> \t (rel)->reloptkind == RELOPT_OTHER_JOINREL || \\\n> \t (rel)->reloptkind == RELOPT_OTHER_UPPER_REL)\n> \n> -typedef struct RelOptInfo pg_node_attr(no_copy_equal, no_read)\n> +typedef struct RelOptInfo\n> +pg_node_attr(no_copy_equal, no_read)\n> {\n> \tNodeTag\t\ttype;\n\nI have found that putting the attributes at the end of the struct \ndefinition, right before the semicolon, works, so I have changed it that \nway. 
(This is also where a gcc __attribute__() would go, so it seems \nreasonable.)\n\nThe attached patch is stable under pgindent.\n\nFinally, I have updated src/backend/nodes/README a bit.\n\nI realize I've been confused various times about when a catversion \nchange is required when changing nodes. (I think the bump in 251154bebe \nwas probably not needed.) I have tried to put that in the README. This \ncould perhaps be expanded.\n\nI think for this present patch, I would do a catversion bump, just to be \nsure, in case some of the printed node fields are different now.\n\nIt was also my plan to remove the #ifdef OBSOLETE sections in a separate \ncommit right after, just to be clear.\n\nFinal thoughts?", "msg_date": "Fri, 8 Jul 2022 14:44:00 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 06.07.22 22:46, Tom Lane wrote:\n>> ... There is one nasty problem\n>> we need a solution to, which is that pgindent is not at all on board\n>> with this idea of attaching node attrs to typedefs.\n\n> I have found that putting the attributes at the end of the struct \n> definition, right before the semicolon, works, so I have changed it that \n> way. (This is also where a gcc __attribute__() would go, so it seems \n> reasonable.)\n\nThat was the first solution I thought of as well, but I do not like\nit from a cosmetic standpoint. The node attributes are a pretty\ncritical part of the node definition (especially \"abstract\"),\nso shoving them to the very end is not helpful for readability.\nIMO anyway.\n\n> I think for this present patch, I would do a catversion bump, just to be \n> sure, in case some of the printed node fields are different now.\n\nI know from comparing the code that some printed node tags have\nchanged, and so has the print order of some fields. 
It might be\nthat none of those changes are in node types that can appear in\nstored rules --- but I'm not sure, so I concur that doing a\ncatversion bump for this commit is advisable.\n\n> It was also my plan to remove the #ifdef OBSOLETE sections in a separate \n> commit right after, just to be clear.\n\nYup, my thought as well. There are a few other mop-up things\nI want to do shortly after (e.g. add copyright-notice headers\nto the emitted files), but let's wait for the buildfarm's\nopinion of the main commit first.\n\n> Final thoughts?\n\nI'll re-read the patch today, but how open are you to putting the\nstruct attributes at the top? I'm willing to do the legwork.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 08 Jul 2022 09:52:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 08.07.22 15:52, Tom Lane wrote:\n> I'll re-read the patch today, but how open are you to putting the\n> struct attributes at the top? I'm willing to do the legwork.\n\nI agree near the top would be preferable. I think it would even be \nfeasible to parse the whole thing if pgindent split it across lines. I \nsort of tried to maintain the consistency with C/C++ attributes like \n__attribute__ and [[attribute]], hoping that that would confuse other \ntooling the least. Feel free to experiment further.\n\n\n", "msg_date": "Fri, 8 Jul 2022 17:46:28 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "While going over this patch, I noticed that I forgot to add support for\nXidList in copyfuncs.c. 
OK if I push this soon?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/", "msg_date": "Fri, 8 Jul 2022 18:45:34 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> While going over this patch, I noticed that I forgot to add support for\n> XidList in copyfuncs.c. OK if I push this soon?\n\nYeah, go ahead, that part of copyfuncs is still going to be manually\nmaintained, so we need the fix.\n\nWhat about equalfuncs etc?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 08 Jul 2022 14:15:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 08.07.22 15:52, Tom Lane wrote:\n>> I'll re-read the patch today, but how open are you to putting the\n>> struct attributes at the top? I'm willing to do the legwork.\n\n> I agree near the top would be preferable. I think it would even be \n> feasible to parse the whole thing if pgindent split it across lines. I \n> sort of tried to maintain the consistency with C/C++ attributes like \n> __attribute__ and [[attribute]], hoping that that would confuse other \n> tooling the least. Feel free to experiment further.\n\nI went through and did that, and I do like this way better.\n\nI did a final round of review, and found a few cosmetic things, as\nwell as serious bugs in the code I'd contributed for copy_as/read_as:\nthey did the wrong thing for VALUE of \"0\" because I should have\nwritten \"if (defined $foo)\" not \"if ($foo)\". 
Also, read_as did\nnot generate correct code for the case where we don't have\nread_write_ignore; in that case we have to read the value outfuncs.c\nwrote and then override it.\n\n0001 attached repeats your v8 (to please the cfbot).\n\n0002 includes some suggestions for the README file as well as\ncosmetic and not-so-cosmetic fixes for gen_node_support.pl.\n\n0003 moves the node-level attributes as discussed.\n\nLastly, I think we ought to apply pgperltidy to the Perl code.\nIn case you don't have that installed, 0004 is the diffs I got.\n\nI think this is ready to go (don't forget the catversion bump).\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 08 Jul 2022 16:03:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "I wrote:\n> 0003 moves the node-level attributes as discussed.\n\nMeh. Just realized that I forgot to adjust the commentary in nodes.h\nabout where to put node attributes.\n\nMaybe like\n\n- * Attributes can be attached to a node as a whole (the attribute\n- * specification must be at the end of the struct or typedef, just before the\n- * semicolon) or to a specific field (must be at the end of the line). The\n+ * Attributes can be attached to a node as a whole (place the attribute\n+ * specification on the first line after the struct's opening brace)\n+ * or to a specific field (place it at the end of that field's line). The\n * argument is a comma-separated list of attributes. 
Unrecognized attributes\n * cause an error.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 08 Jul 2022 16:16:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 08.07.22 22:03, Tom Lane wrote:\n> I think this is ready to go (don't forget the catversion bump).\n\nThis is done now, after a brief vpath-shaped scare from the buildfarm \nearlier today.\n\n\n", "msg_date": "Sat, 9 Jul 2022 16:37:22 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 08.07.22 22:03, Tom Lane wrote:\n>> I think this is ready to go (don't forget the catversion bump).\n\n> This is done now, after a brief vpath-shaped scare from the buildfarm \n> earlier today.\n\nDoh ... never occurred to me either to try that :-(\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 09 Jul 2022 11:03:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Here's some follow-on patches, as I threatened yesterday.\n\n0001 adds some material to nodes/README in hopes of compensating for\na couple of removed comments.\n\n0002 fixes gen_node_support.pl's rather badly broken error reporting.\nAs it stands, it always says that an error is on line 1 of the respective\ninput file, because it relies for that on perl's \"$.\" which is only\nworkable when we are reading the file a line at a time. The scheme\nof sucking in the entire file so that we can suppress multi-line C\ncomments easily doesn't play well with that. 
I concluded that the\nbest way to fix that was to adjust the C-comment-deletion code to\npreserve any newlines within a comment, and then we can easily count\nlines manually. The new C-comment-deletion code is a bit brute-force;\nmaybe there is a better way?\n\n0003 adds boilerplate header comments to the output files, using\nwording pretty similar to those written by genbki.pl.\n\n0004 fixes things so that we don't leave a mess of temporary files\nif the script dies partway through. genbki.pl perhaps could use\nthis as well, but my experience is that genbki usually reports any\nerrors before starting to write files. gen_node_support.pl not\nso much --- I had to manually clean up the mess several times while\nreviewing/testing.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 09 Jul 2022 12:58:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Hi,\n\nOn 2022-07-09 16:37:22 +0200, Peter Eisentraut wrote:\n> On 08.07.22 22:03, Tom Lane wrote:\n> > I think this is ready to go (don't forget the catversion bump).\n> \n> This is done now, after a brief vpath-shaped scare from the buildfarm\n> earlier today.\n\nI was just rebasing meson ontop of this and was wondering whether the input\nfilenames were in a particular order:\n\n\nnode_headers = \\\n\tnodes/nodes.h \\\n\tnodes/execnodes.h \\\n\tnodes/plannodes.h \\\n\tnodes/primnodes.h \\\n\tnodes/pathnodes.h \\\n\tnodes/extensible.h \\\n\tnodes/parsenodes.h \\\n\tnodes/replnodes.h \\\n\tnodes/value.h \\\n\tcommands/trigger.h \\\n\tcommands/event_trigger.h \\\n\tforeign/fdwapi.h \\\n\taccess/amapi.h \\\n\taccess/tableam.h \\\n\taccess/tsmapi.h \\\n\tutils/rel.h \\\n\tnodes/supportnodes.h \\\n\texecutor/tuptable.h \\\n\tnodes/lockoptions.h \\\n\taccess/sdir.h\n\nCan we either order them alphabetically or add a comment explaining the order?\n\n- Andres\n\n\n", "msg_date": "Sun, 10 Jul 2022 14:46:22 -0700", 
"msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I was just rebasing meson ontop of this and was wondering whether the input\n> filenames were in a particular order:\n\nThat annoyed me too. I think it's sensible to list the \"main\" input\nfiles first, but I'd put them in our traditional pipeline order:\n\n> \tnodes/nodes.h \\\n> \tnodes/primnodes.h \\\n> \tnodes/parsenodes.h \\\n> \tnodes/pathnodes.h \\\n> \tnodes/plannodes.h \\\n> \tnodes/execnodes.h \\\n\nThe rest could probably be alphabetical. I was also wondering if\nall of them really need to be read at all --- I'm unclear on what\naccess/sdir.h is contributing, for example.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 10 Jul 2022 19:09:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 11.07.22 01:09, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> I was just rebasing meson ontop of this and was wondering whether the input\n>> filenames were in a particular order:\n\nFirst, things used by later files need to be found in earlier files. So \nthat constrains the order a bit.\n\nSecond, the order of the files determines the ordering of the output. \nThe current order of the files reflects approximately the order how the \nmanual code was arranged. That could be changed. We could also just \nsort the node types in the script and dump out everything alphabetically.\n\n> That annoyed me too. 
I think it's sensible to list the \"main\" input\n> files first, but I'd put them in our traditional pipeline order:\n> \n>> \tnodes/nodes.h \\\n>> \tnodes/primnodes.h \\\n>> \tnodes/parsenodes.h \\\n>> \tnodes/pathnodes.h \\\n>> \tnodes/plannodes.h \\\n>> \tnodes/execnodes.h \\\n\nThat seems worth trying out.\n\n> The rest could probably be alphabetical. I was also wondering if\n> all of them really need to be read at all --- I'm unclear on what\n> access/sdir.h is contributing, for example.\n\ncould not handle type \"ScanDirection\" in struct \"IndexScan\" field \n\"indexorderdir\"\n\n\n", "msg_date": "Mon, 11 Jul 2022 16:09:24 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 11.07.22 01:09, Tom Lane wrote:\n>> The rest could probably be alphabetical. I was also wondering if\n>> all of them really need to be read at all --- I'm unclear on what\n>> access/sdir.h is contributing, for example.\n\n> could not handle type \"ScanDirection\" in struct \"IndexScan\" field \n> \"indexorderdir\"\n\nAh, I see. 
Still, we could also handle that with\n\npush @enum_types, qw(ScanDirection);\n\nwhich would be exactly one place that needs to know about this, rather\nthan the three (soon to be four) places that know that access/sdir.h\nneeds to be read and then mostly ignored.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Jul 2022 10:22:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 11.07.22 01:09, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n> I was just rebasing meson ontop of this and was wondering whether the input\n> filenames were in a particular order:\n\n> First, things used by later files need to be found in earlier files. So \n> that constrains the order a bit.\n\nYeah, the script needs to see supertype nodes before subtype nodes,\nelse it will not realize that the subtypes are nodes at all. However,\nthere is not very much cross-header-file subtyping. I experimented with\nrearranging the input-file order, and found that the *only* thing that\nbreaks it is to put primnodes.h after pathnodes.h (which fails because\nPlaceHolderVar is a subtype of Expr). You don't even need nodes.h to be\nfirst, which astonished me initially, but then I realized that both\nNodeTag and struct Node are special-cased in gen_node_support.pl,\nso we know enough to get by even before reading nodes.h.\n\nMore generally, the main *nodes.h files themselves are arranged in\npipeline order, eg parsenodes.h #includes primnodes.h. So that seems\nto be a pretty safe thing to rely on even if we grow more cross-header\nsubtyping cases later. But I'd vote for putting the incidental files\nin alphabetical order.\n\n> Second, the order of the files determines the ordering of the output. \n> The current order of the files reflects approximately the order how the \n> manual code was arranged. 
That could be changed. We could also just \n> sort the node types in the script and dump out everything alphabetically.\n\n+1 for sorting alphabetically. I experimented with that and it's a\nreally trivial change.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Jul 2022 11:37:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "I wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> could not handle type \"ScanDirection\" in struct \"IndexScan\" field \n>> \"indexorderdir\"\n\n> Ah, I see. Still, we could also handle that with\n> push @enum_types, qw(ScanDirection);\n\nI tried that, and it does work. The only other input file we could\nget rid of that way is nodes/lockoptions.h, which likewise contributes\nonly a couple of enum type names. Not sure it's worth messing with\n--- both ways seem crufty, though for different reasons.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Jul 2022 12:07:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Hi,\n\nOn 2022-07-11 12:07:09 -0400, Tom Lane wrote:\n> I wrote:\n> > Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> >> could not handle type \"ScanDirection\" in struct \"IndexScan\" field\n> >> \"indexorderdir\"\n>\n> > Ah, I see. Still, we could also handle that with\n> > push @enum_types, qw(ScanDirection);\n>\n> I tried that, and it does work. The only other input file we could\n> get rid of that way is nodes/lockoptions.h, which likewise contributes\n> only a couple of enum type names.\n\nKinda wonder if those headers are even worth having. 
Plenty other enums in\nprimnodes.h.\n\n\n> Not sure it's worth messing with --- both ways seem crufty, though for\n> different reasons.\n\nNot sure either.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 11 Jul 2022 09:14:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "I wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> I was just rebasing meson ontop of this and was wondering whether the input\n>>> filenames were in a particular order:\n\nPushed a patch to make that a bit less random-looking.\n\n> +1 for sorting alphabetically. I experimented with that and it's a\n> really trivial change.\n\nI had second thoughts about that, after noticing that alphabetizing\nthe NodeTag enum increased the backend's size by 20K or so. Presumably\nthat's telling us that a bunch of switch statements got less dense,\nwhich might possibly cause performance issues thanks to poorer cache\nbehavior or the like. Maybe it's still appropriate to do, but it's\nnot as open-and-shut as I first thought.\n\nMore generally, I'm having second thoughts about the wisdom of\nauto-generating the NodeTag enum at all. With the current setup,\nI am absolutely petrified about the risk of silent ABI breakage\nthanks to the enum order changing. In particular, if the meson\nbuild fails to use the same input-file order as the makefile build,\nthen we will get different enum orders from the two builds, causing\nan ABI discrepancy that nobody would notice until we had catastrophic\nextension-compatibility issues in the field.\n\nOf course, sorting the tags by name is a simple way to fix that.\nBut I'm not sure I want to buy into being forced to do it like that,\nbecause of the switch-density question.\n\nSo at this point I'm rather attracted to the idea of reverting to\na manually-maintained NodeTag enum. 
We know how to avoid ABI\nbreakage with that, and it's not exactly the most painful part\nof adding a new node type. Plus, that'd remove (most of?) the\nneed for gen_node_support.pl to deal with \"node-tag-only\" structs\nat all.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Jul 2022 13:57:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On Mon, Jul 11, 2022 at 1:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> More generally, I'm having second thoughts about the wisdom of\n> auto-generating the NodeTag enum at all. With the current setup,\n> I am absolutely petrified about the risk of silent ABI breakage\n> thanks to the enum order changing. In particular, if the meson\n> build fails to use the same input-file order as the makefile build,\n> then we will get different enum orders from the two builds, causing\n> an ABI discrepancy that nobody would notice until we had catastrophic\n> extension-compatibility issues in the field.\n\nI think this is a valid concern, but having it be automatically\ngenerated is awfully handy, so I think it would be nice to find some\nway of preserving that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 Jul 2022 14:17:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Hi,\n\nOn 2022-07-11 13:57:38 -0400, Tom Lane wrote:\n> More generally, I'm having second thoughts about the wisdom of\n> auto-generating the NodeTag enum at all. With the current setup,\n> I am absolutely petrified about the risk of silent ABI breakage\n> thanks to the enum order changing. 
In particular, if the meson\n> build fails to use the same input-file order as the makefile build,\n> then we will get different enum orders from the two builds, causing\n> an ABI discrepancy that nobody would notice until we had catastrophic\n> extension-compatibility issues in the field.\n\nUgh, yes. And it already exists due to Solution.pm, although that's perhaps\nless likely to be encountered \"in the wild\".\n\nAdditionally, I think we've had to add tags to the enum in minor releases\nbefore and I'm afraid this now would end up looking even more awkward?\n\n\n> Of course, sorting the tags by name is a simple way to fix that.\n> But I'm not sure I want to buy into being forced to do it like that,\n> because of the switch-density question.\n> \n> So at this point I'm rather attracted to the idea of reverting to\n> a manually-maintained NodeTag enum.\n\n+0.5 - there might be a better solution to this, but I'm not immediately\nseeing it.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 11 Jul 2022 11:29:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jul 11, 2022 at 1:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> More generally, I'm having second thoughts about the wisdom of\n>> auto-generating the NodeTag enum at all. With the current setup,\n>> I am absolutely petrified about the risk of silent ABI breakage\n>> thanks to the enum order changing.\n\n> I think this is a valid concern, but having it be automatically\n> generated is awfully handy, so I think it would be nice to find some\n> way of preserving that.\n\nAgreed. The fundamental problem seems to be that each build toolchain\nhas its own source of truth about the file processing order, but we now\nsee that there had better be only one. 
We could make the sole source\nof truth about that be gen_node_support.pl itself, I think.\n\nWe can't simply move the file list into gen_node_support.pl, because\n(a) the build system has to know about the dependencies involved, and\n(b) gen_node_support.pl wouldn't know what to do in VPATH situations.\nHowever, we could have gen_node_support.pl contain a canonical list\nof the files it expects to be handed, and make it bitch if its\narguments don't match that.\n\nThat's ugly I admit, but the set of files of interest doesn't change\nso often that maintaining one additional copy would be a big problem.\n\nAnybody got a better idea?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Jul 2022 15:54:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Additionally, I think we've had to add tags to the enum in minor releases\n> before and I'm afraid this now would end up looking even more awkward?\n\nPeter and I already had a discussion about that upthread --- we figured\nthat if there's a way to manually assign a nodetag's number, you could use\nthat option when you have to add a tag in a stable branch. We didn't\nactually build out that idea, but I can go do that, if we can solve the\nmore fundamental problem of keeping the autogenerated numbers stable.\n\nOne issue with that idea, of course, is that you have to remember to do\nit like that when back-patching a node addition. 
Ideally there'd be\nsomething that'd carp if the last autogenerated tag moves in a stable\nbranch, but I'm not very sure where to put that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Jul 2022 16:04:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On Mon, Jul 11, 2022 at 3:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> We can't simply move the file list into gen_node_support.pl, because\n> (a) the build system has to know about the dependencies involved, and\n> (b) gen_node_support.pl wouldn't know what to do in VPATH situations.\n> However, we could have gen_node_support.pl contain a canonical list\n> of the files it expects to be handed, and make it bitch if its\n> arguments don't match that.\n\nSorry if I'm being dense, but why do we have to duplicate the list of\nfiles instead of having gen_node_support.pl just sort whatever list\nthe build system provides to it?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 Jul 2022 16:17:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Hi,\n\nOn 2022-07-11 15:54:22 -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Mon, Jul 11, 2022 at 1:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> More generally, I'm having second thoughts about the wisdom of\n> >> auto-generating the NodeTag enum at all. With the current setup,\n> >> I am absolutely petrified about the risk of silent ABI breakage\n> >> thanks to the enum order changing.\n> \n> > I think this is a valid concern, but having it be automatically\n> > generated is awfully handy, so I think it would be nice to find some\n> > way of preserving that.\n> \n> Agreed. 
The fundamental problem seems to be that each build toolchain\n> has its own source of truth about the file processing order, but we now\n> see that there had better be only one. We could make the sole source\n> of truth about that be gen_node_support.pl itself, I think.\n> \n> We can't simply move the file list into gen_node_support.pl, because\n\n> (a) the build system has to know about the dependencies involved\n\nMeson has builtin support for tools like gen_node_support.pl reporting which\nfiles they've read and then to use those as dependencies. It'd not be a lot of\neffort to open-code that with make either.\n\nDoesn't look like we have dependency handling in Solution.pm?\n\n\n> (b) gen_node_support.pl wouldn't know what to do in VPATH situations.\n\nWe could easily add a --include-path argument or such. That'd be trivial to\nset for all of the build solutions.\n\nFWIW, for meson I already needed to add an option to specify the location of\noutput files (since scripts are called from the root of the build directory).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 11 Jul 2022 13:17:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Hi,\n\nOn 2022-07-11 16:17:28 -0400, Robert Haas wrote:\n> On Mon, Jul 11, 2022 at 3:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > We can't simply move the file list into gen_node_support.pl, because\n> > (a) the build system has to know about the dependencies involved, and\n> > (b) gen_node_support.pl wouldn't know what to do in VPATH situations.\n> > However, we could have gen_node_support.pl contain a canonical list\n> > of the files it expects to be handed, and make it bitch if its\n> > arguments don't match that.\n> \n> Sorry if I'm being dense, but why do we have to duplicate the list of\n> files instead of having gen_node_support.pl just sort whatever list\n> the build system provides to 
it?\n\nBecause right now there's two buildsystems already (look at\nSolution.pm). Looks like we'll briefly have three, then two again.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 11 Jul 2022 13:26:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-07-11 16:17:28 -0400, Robert Haas wrote:\n>> Sorry if I'm being dense, but why do we have to duplicate the list of\n>> files instead of having gen_node_support.pl just sort whatever list\n>> the build system provides to it?\n\n> Because right now there's two buildsystems already (look at\n> Solution.pm). Looks like we'll briefly have three, then two again.\n\nThere are two things we need: (1) be sure that the build system knows\nabout all the files of interest, and (2) process them in the correct\norder, which is *not* alphabetical. \"Just sort\" won't achieve either.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Jul 2022 16:36:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-07-11 15:54:22 -0400, Tom Lane wrote:\n>> We can't simply move the file list into gen_node_support.pl, because\n>> (a) the build system has to know about the dependencies involved\n\n> Meson has builtin support for tools like gen_node_support.pl reporting which\n> files they've read and then to use those as dependencies. It'd not be a lot of\n> effort to open-code that with make either.\n\nIf you want to provide code for that, sure, but I don't know how to do it.\n\n>> (b) gen_node_support.pl wouldn't know what to do in VPATH situations.\n\n> We could easily add a --include-path argument or such. 
That'd be trivial to\n> set for all of the build solutions.\n\nTrue.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Jul 2022 16:38:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> Additionally, I think we've had to add tags to the enum in minor releases\n>> before and I'm afraid this now would end up looking even more awkward?\n\n> Peter and I already had a discussion about that upthread --- we figured\n> that if there's a way to manually assign a nodetag's number, you could use\n> that option when you have to add a tag in a stable branch. We didn't\n> actually build out that idea, but I can go do that, if we can solve the\n> more fundamental problem of keeping the autogenerated numbers stable.\n\n> One issue with that idea, of course, is that you have to remember to do\n> it like that when back-patching a node addition. Ideally there'd be\n> something that'd carp if the last autogenerated tag moves in a stable\n> branch, but I'm not very sure where to put that.\n\nOne way to do it is to provide logic in gen_node_support.pl to check\nthat, and activate that logic only in back branches. 
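For illustration, the guard might look something like this (a Python sketch of the idea only; the real script is Perl, and the recorded tag name here is hypothetical):

```python
# Sketch of the back-branch guard discussed above (illustrative only, not
# the actual gen_node_support.pl logic).  In a stable branch we record the
# name of the last auto-assigned node tag; if a later run produces a
# different last tag, the auto-assigned numbers have shifted (an ABI
# break) and we refuse to proceed.
LAST_AUTOGENERATED_TAG = "WindowObjectData"   # hypothetical recorded value

def check_nodetag_stability(autogenerated_tags, is_back_branch):
    if not is_back_branch:
        return  # development branch: tag order may still change freely
    if autogenerated_tags[-1] != LAST_AUTOGENERATED_TAG:
        raise SystemExit(
            "ABI break: last auto-assigned tag moved from %s to %s; "
            "assign the new tag a fixed number instead"
            % (LAST_AUTOGENERATED_TAG, autogenerated_tags[-1]))
```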
If we make that\npart of the branch-making procedure, we'd not forget to do it.\n\nProposed patch attached.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 11 Jul 2022 17:18:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Hi,\n\nOn 2022-07-11 16:38:05 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-07-11 15:54:22 -0400, Tom Lane wrote:\n> >> We can't simply move the file list into gen_node_support.pl, because\n> >> (a) the build system has to know about the dependencies involved\n> \n> > Meson has builtin support for tools like gen_node_support.pl reporting which\n> > files they've read and then to use those as dependencies. It'd not be a lot of\n> > effort to open-code that with make either.\n> \n> If you want to provide code for that, sure, but I don't know how to do it.\n\nIt'd basically be something like a --deps option providing a path to a file\n(e.g. .deps/nodetags.Po) where the script would emit something roughly\nequivalent to\n\npath/to/nodetags.h: path/to/nodes/nodes.h\npath/to/nodetags.h: path/to/nodes/primnodes.h\n...\npath/to/readfuncs.c: path/to/nodetags.h\n\nIt might or might not make sense to output this as one rule instead of\nmultiple ones.\n\nI think our existing dependency support would do the rest.\n\n\nWe'd still need a dependency on node-support-stamp (or nodetags.h or ...), to\ntrigger the first invocation of gen_node_support.pl.\n\n\nI don't think it's worth worrying about this not working reliably for non\n--enable-depend builds, there's a lot more broken than this. 
But it might be a\nbit annoying to deal with either a) creating the .deps directory even without\n--enable-depend, or b) specifying --deps only optionally.\n\nI can give it a go if this doesn't sound insane.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 11 Jul 2022 14:37:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I don't think it's worth worrying about this not working reliably for non\n> --enable-depend builds, there's a lot more broken than this.\n\nWell, *I* care about that, and I won't stand for making the\nnon-enable-depend case significantly more broken than it is now.\nIn particular, what you're proposing would mean that \"make clean\"\nfollowed by rebuild wouldn't be sufficient to update everything\nanymore; you'd have to resort to maintainer-clean or \"git clean -dfx\"\nafter touching any node definition file, else gen_node_support.pl\nwould not get re-run. 
Up with that I will not put.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Jul 2022 18:09:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Hi,\n\nOn 2022-07-11 18:09:15 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I don't think it's worth worrying about this not working reliably for non\n> > --enable-depend builds, there's a lot more broken than this.\n>\n> Well, *I* care about that, and I won't stand for making the\n> non-enable-depend case significantly more broken than it is now.\n>\n> In particular, what you're proposing would mean that \"make clean\"\n> followed by rebuild wouldn't be sufficient to update everything\n> anymore; you'd have to resort to maintainer-clean or \"git clean -dfx\"\n> after touching any node definition file, else gen_node_support.pl\n> would not get re-run. Up with that I will not put.\n\nI'm not sure it'd have to mean that, but we could just implement the\ndependency stuff independent of the existing autodepend logic. Something like:\n\n# ensure that dependencies of\n-include gen_node_support.pl.deps\nnode-support-stamp: gen_node_support.pl\n\t$(PERL) --deps $^.deps $^\n\nI guess we'd have to distribute gen_node_support.pl.deps to make this work in\ntarball builds - which is probably fine? 
Not really different than including\nstamp files.\n\nI'm not entirely sure how well either the existing or the sketch above works\nwhen doing a VPATH build using tarball sources, and updating the files.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 11 Jul 2022 15:27:58 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I'm not entirely sure how well either the existing or the sketch above works\n> when doing a VPATH build using tarball sources, and updating the files.\n\nSeems like an awful lot of effort to avoid having multiple copies\nof the file list. I think we should just do what I sketched earlier,\nie put the master list into gen_node_support.pl and have it cross-check\nthat against its command line. If the meson system can avoid having\nits own copy of the list, great; but I don't feel like we have to make\nthat happen for the makefiles or Solution.pm.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Jul 2022 18:39:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 2022-07-11 18:39:44 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I'm not entirely sure how well either the existing or the sketch above works\n> > when doing a VPATH build using tarball sources, and updating the files.\n> \n> Seems like an awful lot of effort to avoid having multiple copies\n> of the file list. I think we should just do what I sketched earlier,\n> ie put the master list into gen_node_support.pl and have it cross-check\n> that against its command line. 
If the meson system can avoid having\n> its own copy of the list, great; but I don't feel like we have to make\n> that happen for the makefiles or Solution.pm.\n\nWFM.\n\n\n", "msg_date": "Mon, 11 Jul 2022 15:41:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On 11.07.22 19:57, Tom Lane wrote:\n> So at this point I'm rather attracted to the idea of reverting to\n> a manually-maintained NodeTag enum. We know how to avoid ABI\n> breakage with that, and it's not exactly the most painful part\n> of adding a new node type.\n\nOne of the nicer features is that you now get to see the numbers \nassigned to the enum tags, like\n\n T_LockingClause = 91,\n T_XmlSerialize = 92,\n T_PartitionElem = 93,\n\nso that when you get an error like \"unsupported node type: %d\", you can \njust look up what it is.\n\n\n\n", "msg_date": "Tue, 12 Jul 2022 21:03:47 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 11.07.22 19:57, Tom Lane wrote:\n>> So at this point I'm rather attracted to the idea of reverting to\n>> a manually-maintained NodeTag enum. We know how to avoid ABI\n>> breakage with that, and it's not exactly the most painful part\n>> of adding a new node type.\n\n> One of the nicer features is that you now get to see the numbers \n> assigned to the enum tags, like\n\n> T_LockingClause = 91,\n> T_XmlSerialize = 92,\n> T_PartitionElem = 93,\n\n> so that when you get an error like \"unsupported node type: %d\", you can \n> just look up what it is.\n\nYeah, I wasn't thrilled about reverting that either. 
I think the\ndefenses I installed in eea9fa9b2 should be sufficient to deal\nwith the risk.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 Jul 2022 15:49:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Just one more thing here ... I really don't like the fact that\ngen_node_support.pl's response to unparseable input is to silently\nignore it. That's maybe tolerable outside a node struct, but\nI think we need a higher standard inside. I experimented with\npromoting the commented-out \"warn\" to \"die\", and soon learned\nthat there are two shortcomings:\n\n* We can't cope with the embedded union inside A_Const.\nSimplest fix is to move it outside.\n\n* We can't cope with function-pointer fields. The only real\nproblem there is that some of them spread across multiple lines,\nbut really that was a shortcoming we'd have to fix sometime\nanyway.\n\nProposed patch attached.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 13 Jul 2022 20:49:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On Wed, Jul 13, 2022 at 12:34 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n\nI have a question related to commit 964d01ae90. Today, after getting\nthe latest code, when I compiled it on my windows machine, it lead to\na compilation error because the outfuncs.funcs.c was not regenerated.\nI did the usual steps which I normally perform after getting the\nlatest code (a) run \"perl mkvcbuild.pl\" and (b) then build the code\nusing MSVC. Now, after that, I manually removed \"node-support-stamp\"\nfrom folder src/backend/nodes/ and re-did the steps and I see that the\noutfuncs.funcs.c got regenerated, and the build is also successful. 
I\nsee that there is handling to clean the file \"node-support-stamp\" in\nnodes/Makefile but not sure how it works for windows. I think I am\nmissing something here. Can you please guide me?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 3 Aug 2022 11:51:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> I have a question related to commit 964d01ae90. Today, after getting\n> the latest code, when I compiled it on my windows machine, it lead to\n> a compilation error because the outfuncs.funcs.c was not regenerated.\n> I did the usual steps which I normally perform after getting the\n> latest code (a) run \"perl mkvcbuild.pl\" and (b) then build the code\n> using MSVC. Now, after that, I manually removed \"node-support-stamp\"\n> from folder src/backend/nodes/ and re-did the steps and I see that the\n> outfuncs.funcs.c got regenerated, and the build is also successful. I\n> see that there is handling to clean the file \"node-support-stamp\" in\n> nodes/Makefile but not sure how it works for windows. I think I am\n> missing something here. Can you please guide me?\n\nMore likely, we need to add something explicit to Mkvcbuild.pm\nfor this. I recall that it has stanzas to deal with updating\nother autogenerated files; I bet we either missed that or\nfat-fingered it for node-support-stamp.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Aug 2022 09:46:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On Wed, Aug 3, 2022 at 7:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > I have a question related to commit 964d01ae90. 
Today, after getting\n> > the latest code, when I compiled it on my windows machine, it lead to\n> > a compilation error because the outfuncs.funcs.c was not regenerated.\n> > I did the usual steps which I normally perform after getting the\n> > latest code (a) run \"perl mkvcbuild.pl\" and (b) then build the code\n> > using MSVC. Now, after that, I manually removed \"node-support-stamp\"\n> > from folder src/backend/nodes/ and re-did the steps and I see that the\n> > outfuncs.funcs.c got regenerated, and the build is also successful. I\n> > see that there is handling to clean the file \"node-support-stamp\" in\n> > nodes/Makefile but not sure how it works for windows. I think I am\n> > missing something here. Can you please guide me?\n>\n> More likely, we need to add something explicit to Mkvcbuild.pm\n> for this. I recall that it has stanzas to deal with updating\n> other autogenerated files; I bet we either missed that or\n> fat-fingered it for node-support-stamp.\n>\n\nI see below logic added by commit which seems to help regenerate the\nrequired files.\n\n+++ b/src/tools/msvc/Solution.pm\n@@ -839,6 +839,54 @@ EOF\n close($chs);\n }\n\n+ if (IsNewer(\n+ 'src/backend/nodes/node-support-stamp',\n+ 'src/backend/nodes/gen_node_support.pl'))\n...\n...\n\nNow, in commit 1349d2790b, we didn't change anything in\ngen_node_support.pl but changed \"typedef struct AggInfo\" due to which\nwe expect the files like outfuncs.funcs.c gets regenerated. 
However,\nas there is no change in gen_node_support.pl, the files didn't get\nregenerated.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 5 Aug 2022 17:52:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Wed, Aug 3, 2022 at 7:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> More likely, we need to add something explicit to Mkvcbuild.pm\n>> for this. I recall that it has stanzas to deal with updating\n>> other autogenerated files; I bet we either missed that or\n>> fat-fingered it for node-support-stamp.\n\n> I see below logic added by commit which seems to help regenerate the\n> required files.\n\nMeh ... it's not checking the data files themselves. Here's\na patch based on the logic for invoking genbki. Completely\nuntested, would somebody try it?\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 07 Aug 2022 10:49:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On Sun, Aug 7, 2022 at 8:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Wed, Aug 3, 2022 at 7:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> More likely, we need to add something explicit to Mkvcbuild.pm\n> >> for this. I recall that it has stanzas to deal with updating\n> >> other autogenerated files; I bet we either missed that or\n> >> fat-fingered it for node-support-stamp.\n>\n> > I see below logic added by commit which seems to help regenerate the\n> > required files.\n>\n> Meh ... it's not checking the data files themselves. Here's\n> a patch based on the logic for invoking genbki. Completely\n> untested, would somebody try it?\n>\n\nI tried it on commit a69959fab2 just before the commit (1349d2790b)\nwhich was causing problems for me. 
On running \"perl mkvcbuild.pl\", I\ngot the below error:\nwrong number of input files, expected nodes/nodes.h nodes/primnodes.h\nnodes/parsenodes.h nodes/pathnodes.h nodes/plannodes.h\nnodes/execnodes.h access/amapi.h access/sdir.h access/tableam.h\naccess/tsmapi.h commands/event_trigger.h commands/trigger.h\nexecutor/tuptable.h foreign/fdwapi.h nodes/extensible.h\nnodes/lockoptions.h nodes/replnodes.h nodes/supportnodes.h\nnodes/value.h utils/rel.h\n\nThis error seems to be originating from gen_node_support.pl. If I\nchanged the @node_headers to what it was instead of getting it from\nMakefile then the patch works and the build is also successful. See\nattached.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 8 Aug 2022 12:23:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Sun, Aug 7, 2022 at 8:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Meh ... it's not checking the data files themselves. Here's\n>> a patch based on the logic for invoking genbki. Completely\n>> untested, would somebody try it?\n\n> I tried it on commit a69959fab2 just before the commit (1349d2790b)\n> which was causing problems for me. On running \"perl mkvcbuild.pl\", I\n> got the below error:\n> wrong number of input files, expected nodes/nodes.h nodes/primnodes.h\n> nodes/parsenodes.h nodes/pathnodes.h nodes/plannodes.h\n> nodes/execnodes.h access/amapi.h access/sdir.h access/tableam.h\n> access/tsmapi.h commands/event_trigger.h commands/trigger.h\n> executor/tuptable.h foreign/fdwapi.h nodes/extensible.h\n> nodes/lockoptions.h nodes/replnodes.h nodes/supportnodes.h\n> nodes/value.h utils/rel.h\n\nAh. It'd help if that complaint said what the command input actually\nis :-(. But on looking closer, I missed stripping the empty strings\nthat \"split\" will produce at the ends of the array. 
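The effect is easy to demonstrate; Python's re.split behaves analogously to Perl's split on a pattern here (the filenames below are just placeholders):

```python
import re

# Splitting a whitespace-padded list of filenames on \s+ leaves empty
# strings at the ends of the array, which must be filtered out before use.
raw = re.split(r"\s+", " nodes/nodes.h nodes/primnodes.h ")
# raw == ['', 'nodes/nodes.h', 'nodes/primnodes.h', '']
files = [f for f in raw if f != ""]
# files == ['nodes/nodes.h', 'nodes/primnodes.h']
```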
I think the\nattached will do the trick, and I really do want to get rid of this\ncopy of the file list if possible.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 08 Aug 2022 14:06:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "I wrote:\n> Ah. It'd help if that complaint said what the command input actually\n> is :-(. But on looking closer, I missed stripping the empty strings\n> that \"split\" will produce at the ends of the array. I think the\n> attached will do the trick, and I really do want to get rid of this\n> copy of the file list if possible.\n\nI tried this version on the cfbot, and it seems happy, so pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Aug 2022 14:44:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" }, { "msg_contents": "On Tue, Aug 9, 2022 at 12:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > Ah. It'd help if that complaint said what the command input actually\n> > is :-(. But on looking closer, I missed stripping the empty strings\n> > that \"split\" will produce at the ends of the array. I think the\n> > attached will do the trick, and I really do want to get rid of this\n> > copy of the file list if possible.\n>\n> I tried this version on the cfbot, and it seems happy, so pushed.\n>\n\nThank you. I have verified the committed patch and it works.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 9 Aug 2022 18:46:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: automatically generating node support functions" } ]
[ { "msg_contents": "Hi,\n\nOne of the existing limitations of logical decoding / replication is\nthat it does not care about sequences. The annoying consequence is that\nafter a failover to a logical replica, all the table data may be\nreplicated but the sequences are still at the initial values, requiring\nsome custom solution that moves the sequences forward enough to prevent\nduplicates.\n\nThere have been attempts to address this in the past, most recently [1],\nbut none of them got in due to various issues.\n\nThis is an attempt, based on [1] (but with many significant parts added\nor reworked), aiming to deal with this. The primary purpose of sharing\nit is getting feedback and opinions on the design decisions. It's still\na WIP - it works fine AFAICS, but some of the bits may be a bit hackish.\n\nThe overall goal is to have the same sequence data on the primary and\nlogical replica, or something sufficiently close to that, so that the\nreplica after a failover does not generate duplicate values.\n\nThis patch does a couple basic things:\n\n1) extends the logical decoding to handle sequences. It adds a new\n callback, similarly to what we have for messages. There's a bit of\n complexity with transactional and non-transactional behavior, more\n about that later\n\n2) extends test_decoding to support this new callback, printing the\n sequence increments (the decoded WAL records)\n\n3) extends built-in replication to support sequences, so publications\n may contain both tables and sequences, etc., sequence data sync\n when creating subscriptions, etc.\n\n\ntransactional vs. non-transactional\n-----------------------------------\n\nThe first part (extending logical decoding) is simple in principle. 
We\nsimply decode the sequence updates, but then comes a challenge - should\nwe just treat it transactionally and stash it in the reorder buffer, or\njust pass it to the output plugin right away?\n\nFor messages, this can be specified as a flag when adding the message,\nso the user can decide depending on the message purpose. For sequences,\nall we do is nextval() and it depends on the context in which it's used,\nso we can't just pick one of those approaches.\n\nConsider this, for example:\n\n CREATE SEQUENCE s;\n BEGIN;\n SELECT nextval('s') FROM generate_series(1,1000) s(i);\n ROLLBACK;\n\nIf we handle this \"transactionally\", we'd stash the \"nextval\" increment\ninto the transaction, and then discard it due to the rollback, so the\noutput plugin (and replica) would never get it. So this is an argument\nfor non-transactional behavior.\n\nOn the other hand, consider this:\n\n CREATE SEQUENCE s;\n BEGIN;\n ALTER SEQUENCE s RESTART WITH 2000;\n SELECT nextval('s') FROM generate_series(1,1000) s(i);\n ROLLBACK;\n\nIn this case the ALTER creates a new relfilenode, and the ROLLBACK does\ndiscard it including the effects of the nextval calls. So here we should\ntreat it transactionally, stash the increment(s) in the transaction and\njust discard it all on rollback.\n\nA somewhat similar example is this\n\n BEGIN;\n CREATE SEQUENCE s;\n SELECT nextval('s') FROM generate_series(1,1000) s(i);\n COMMIT;\n\nAgain - the decoded nextval needs to be handled transactionally, because\notherwise it's going to be very difficult for custom plugins to combine\nthis with DDL replication.\n\nSo the patch does a fairly simple thing:\n\n1) By default, sequences are treated non-transactionally, i.e. sent to\n the output plugin right away.\n\n2) We track sequences created in running (sub)transactions, and those\n are handled transactionally. 
This includes ALTER SEQUENCE cases,\n which create a new relfilenode, which is used as an identifier.\n\nIt's a bit more complex, because of cases like this:\n\n BEGIN;\n CREATE SEQUENCE s;\n SAVEPOINT a;\n SELECT nextval('s') FROM generate_series(1,1000) s(i);\n ROLLBACK TO a;\n COMMIT;\n\nbecause we must not discard the nextval changes - this is handled by\nalways stashing the nextval changes to the subxact where the sequence\nrelfilenode was created.\n\nThe tracking is a bit cumbersome - there's a hash table with relfilenode\nmapped to XID in which it was created. AFAIK that works, but might be\nan issue with many sequences created in running transactions. Not sure.\n\n\ndetecting sequence creation\n---------------------------\n\nDetection that a sequence (or rather the relfilenode) was created is\ndone by adding a \"created\" flag into the xl_seq_rec, and setting it to\n\"true\" in the first WAL record after the creation. There might be some\nother way, but this seemed simple enough.\n\n\napplying the sequence (ResetSequence2)\n--------------------------------------\n\nThe decoding pretty much just extracts last_value, log_cnt and is_called\nfrom the sequence, and passes them to the output plugin. On the replica\nwe extract those from the message, and write them to the local sequence\nusing a new ResetSequence2 function.\n\nIt's possible we don't really need log_cnt and is_called. After all,\nlog_cnt is zero most of the time anyway, and the worst thing that could\nhappen if we ignore it is we skip a couple values (which seems fine).\n\n\nsyncing sequences in a subscription\n-----------------------------------\n\nAfter creating a subscription, the sequences get synchronized just like\ntables. This part is a bit hacked together, and there's definitely room\nfor improvement - e.g. a new bgworker is started for each sequence, as\nwe simply treat both tables and sequences as \"relation\". 
But all we need\nto do for sequences is copying the (last_value, log_cnt, is_called) and\ncalling ResetSequence2, so maybe we could sync all sequences in a single\nworker, or something like that.\n\n\nnew \"sequence\" publication action\n---------------------------------\n\nThe publications now have a new \"sequence\" publication action, which is\nenabled by default. This determines whether the publication decodes\nsequences or not.\n\n\nFOR ALL SEQUENCES\n-----------------\n\nIt should be possible to create FOR ALL SEQUENCES publications, just\nlike we have FOR ALL TABLES. But this produces shift/reduce conflicts\nin the grammar, and I didn't bother dealing with that. So for now it's\nrequired to do ALTER PUBLICATION ... [ADD | DROP] SEQUENCE ...\n\n\nno streaming support yet\n------------------------\n\nThere's no support for streaming of in-progress transactions yet, but it\nshould be trivial to add.\n\n\nGetCurrentTransactionId() in nextval\n------------------------------------\n\nThere's a slightly annoying behavior of nextval() - if you do this:\n\n BEGIN;\n CREATE SEQUENCE s;\n SAVEPOINT a;\n SELECT nextval('s') FROM generate_series(1,100) s(i);\n COMMIT;\n\nthen the WAL record for nextval (right after the savepoint) will have\nXID 0 (easy to see in pg_waldump). That's kinda strange, and it causes\nproblems in DecodeSequence() when calling\n\n SnapBuildProcessChange(builder, xid, buf->origptr)\n\nfor transactional changes, because that expects a valid XID. Fixing\nthis required adding GetCurrentTransactionId() to nextval() and two\nother functions, which were only doing\n\n if (RelationNeedsWAL(seqrel))\n GetTopTransactionId();\n\nso far. 
I'm not sure if this has some particularly bad consequences.\n\n\nregards\n\n[1] \nhttps://www.postgresql.org/message-id/flat/1710ed7e13b.cd7177461430746.3372264562543607781%40highgo.ca\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 8 Jun 2021 00:28:22 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "logical decoding and replication of sequences" }, { "msg_contents": "Hi,\n\nSeems the cfbot was not entirely happy with the patch on some platforms, \nso here's a fixed version. There was a bogus call to ensure_transaction \nfunction (which does not exist at all) and a silly bug in transaction \nmanagement in apply_handle_sequence.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 13 Jun 2021 23:15:07 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "A rebased patch, addressing a minor bitrot due to 4daa140a2f5.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 23 Jun 2021 16:14:03 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 6/23/21 4:14 PM, Tomas Vondra wrote:\n> A rebased patch, addressing a minor bitrot due to 4daa140a2f5.\n> \n\nMeh, forgot to attach the patch as usual, of course ...\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 23 Jun 2021 16:25:10 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 
Wed, Jun 23, 2021 at 7:55 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 6/23/21 4:14 PM, Tomas Vondra wrote:\n> > A rebased patch, addressing a minor bitrot due to 4daa140a2f5.\n> >\n>\n> Meh, forgot to attach the patch as usual, of course ...\n\nThe patch does not apply on HEAD anymore; could you rebase and post a\nnew patch? I'm changing the status to \"Waiting for Author\".\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 15 Jul 2021 17:47:38 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 08.06.21 00:28, Tomas Vondra wrote:\n> \n> new \"sequence\" publication action\n> ---------------------------------\n> \n> The publications now have a new \"sequence\" publication action, which is\n> enabled by default. This determines whether the publication decodes\n> sequences or what.\n> \n> \n> FOR ALL SEQUENCES\n> -----------------\n> \n> It should be possible to create FOR ALL SEQUENCES publications, just\n> like we have FOR ALL TABLES. But this produces shift/reduce conflicts\n> in the grammar, and I didn't bother dealing with that. So for now it's\n> required to do ALTER PUBLICATION ... [ADD | DROP] SEQUENCE ...\n\nI have been thinking about these DDL-level issues a bit. The most \ncommon use case will be to have a bunch of tables with implicit \nsequences, and you just want to replicate them from here to there \nwithout too much effort. So ideally an implicit sequence should be \nreplicated by default if the table is part of a publication (unless \nsequences are turned off by the publication option).\n\nWe already have support for things like that in \nGetPublicationRelations(), where a partitioned table is expanded to \ninclude the actual partitions. I think that logic could be reused. So \nin general I would have GetPublicationRelations() include sequences and \nnot have GetPublicationSequenceRelations() at all. 
Then sequences \ncould also be sent by pg_publication_tables(), maybe add a relkind \ncolumn. And then you also don't need so much duplicate DDL code, if you \njust consider everything as a relation. For example, there doesn't seem \nto be an actual need to have fetch_sequence_list() and subsequent \nprocessing on the subscriber side. It does the same thing as \nfetch_table_list(), so it might as well just all be one thing.\n\nWe do, however, probably need some checking that we don't replicate \ntables to sequences or vice versa.\n\nWe probably also don't need a separate FOR ALL SEQUENCES option. What \nusers really want is a \"for everything\" option. We could think about \nrenaming or alternative syntax, but in principle I think FOR ALL TABLES \nshould include sequences by default.\n\nTests under src/test/subscription/ are needed.\n\nI'm not sure why test_decoding needs a skip-sequences option. The \nsource code says it's for backward compatibility, but I don't see why we \nneed that.\n\n\n", "msg_date": "Tue, 20 Jul 2021 17:30:25 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 7/20/21 5:30 PM, Peter Eisentraut wrote:\n> On 08.06.21 00:28, Tomas Vondra wrote:\n>>\n>> new \"sequence\" publication action\n>> ---------------------------------\n>>\n>> The publications now have a new \"sequence\" publication action, which is\n>> enabled by default. This determines whether the publication decodes\n>> sequences or what.\n>>\n>>\n>> FOR ALL SEQUENCES\n>> -----------------\n>>\n>> It should be possible to create FOR ALL SEQUENCES publications, just\n>> like we have FOR ALL TABLES. But this produces shift/reduce conflicts\n>> in the grammar, and I didn't bother dealing with that. So for now it's\n>> required to do ALTER PUBLICATION ... [ADD | DROP] SEQUENCE ...\n> \n> I have been thinking about these DDL-level issues a bit.  
The most\n> common use case will be to have a bunch of tables with implicit\n> sequences, and you just want to replicate them from here to there\n> without too much effort.  So ideally an implicit sequence should be\n> replicated by default if the table is part of a publication (unless\n> sequences are turned off by the publication option).\n> \n\nAgreed, that seems like a reasonable approach.\n\n> We already have support for things like that in\n> GetPublicationRelations(), where a partitioned table is expanded to\n> include the actual partitions.  I think that logic could be reused.  So\n> in general I would have GetPublicationRelations() include sequences and\n> don't have GetPublicationSequenceRelations() at all.  Then sequences\n> could also be sent by pg_publication_tables(), maybe add a relkind\n> column.  And then you also don't need so much duplicate DDL code, if you\n> just consider everything as a relation.  For example, there doesn't seem\n> to be an actual need to have fetch_sequence_list() and subsequent\n> processing on the subscriber side.  It does the same thing as\n> fetch_table_list(), so it might as well just all be one thing.\n> \n\nNot sure. I agree with replicating implicit sequences by default,\nwithout having to add them to the publication. But I think we should\nallow adding other sequences too, and I think some of this code and\ndifferentiation from tables will be needed.\n\nFWIW I'm not claiming there are no duplicate parts - I've mostly\ncopy-pasted the table-handling code for sequences, and I'll look into\nreusing some of it.\n\n> We do, however, probably need some checking that we don't replicate\n> tables to sequences or vice versa.\n> \n\nTrue. I haven't tried doing such silly things yet.\n\n> We probably also don't need a separate FOR ALL SEQUENCES option.  What\n> users really want is a \"for everything\" option.  
We could think about\n> renaming or alternative syntax, but in principle I think FOR ALL TABLES\n> should include sequences by default.\n> \n> Tests under src/test/subscription/ are needed.\n> \n\nYeah, true. At the moment there are just tests in test_decoding, mostly\nbecause the previous patch versions did not add support for sequences to\nbuilt-in replication. Will fix.\n\n> I'm not sure why test_decoding needs a skip-sequences option.  The\n> source code says it's for backward compatibility, but I don't see why we\n> need that.\n\nHmmm, I'm a bit baffled by skip-sequences too. I think Cary added it to\nlimit changes in test_decoding tests, while the misleading comment about\nbackwards compatibility comes from me.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 20 Jul 2021 22:27:09 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "Here's a rebased version of the patch, no other changes.\n\nI think the crucial aspect of this that needs discussion/feedback the\nmost is the transactional vs. non-transactional behavior. 
All the other\nquestions are less important / cosmetic.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 20 Jul 2021 23:41:20 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "Hi,\n\nHere's an updated version of this patch - rebased to current master, \nand fixing some of the issues raised in Peter's review.\n\nMainly, this adds a TAP test to src/test/subscription, focusing on \ntesting the various situations with transactional and non-transactional \nbehavior (with subtransactions and various ROLLBACK versions).\n\nThis new TAP test however uncovered an issue with wait_for_catchup(), \nbecause that uses pg_current_wal_lsn() to wait for replication of all \nthe changes. But that does not work when the sequence gets advanced in a \ntransaction that is then aborted, e.g. like this:\n\nBEGIN;\nSELECT nextval('s') FROM generate_series(1,100);\nROLLBACK;\n\nThe root cause is that pg_current_wal_lsn() uses the LogwrtResult.Write, \nwhich is updated by XLogFlush() - but only in RecordTransactionCommit. \nWhich makes sense, because only the committed stuff is \"visible\". But \nthe non-transactional behavior changes this, because now some of the \nchanges from aborted transactions may need to be replicated. Which means \nthe wait_for_catchup() ends up not waiting for the sequence change.\n\nOne option would be adding XLogFlush() to RecordTransactionAbort(), but \nmy guess is we don't do that intentionally (even though aborts should be \nfairly rare in most workloads).\n\nI'm not entirely sure changing this (replicating changes from aborted \nxacts) is a good idea. 
Maybe it'd be better to replicate this \"lazily\" - \ninstead of replicating the advances right away, we might remember which \nsequences were advanced in the transaction, and then replicate current \nstate for those sequences at commit time.\n\nThe idea is that if an increment is \"invisible\" we probably don't need \nto replicate it, it's fine to replicate the next \"visible\" increment. So \nfor example given\n\nBEGIN;\nSELECT nextval('s');\nROLLBACK;\n\nBEGIN;\nSELECT nextval('s');\nCOMMIT;\n\nwe don't need to replicate the first change, but we need to replicate \nthe second one.\n\nThe trouble is we don't decode individual sequence advances, just those \nthat update the sequence tuple (so every 32 values or so). So we'd need \nto remember the first increment, in a way that is (a) persistent across \nrestarts and (b) shared by all backends.\n\nThe other challenge seems to be ordering of the changes - at the moment \nwe have no issues with this, because increments on the same sequence are \nreplicated immediately, in the WAL order. But postponing them to commit \ntime would affect this order.\n\n\nI've also briefly looked at the code duplication - there's a couple \nfunctions in the patch that I shamelessly copy-pasted and tweaked to \nhandle sequences instead of tables:\n\npublicationcmds.c\n-----------------\n\n1) OpenTableList/CloseTableList -> OpenSequenceList/CloseSequenceList\n\nTrivial differences, trivial to get rid of - the only difference is \npretty much just table_open vs. relation open.\n\n\n2) AlterPublicationTables -> AlterPublicationSequences\n\nThis is a bit more complicated, because the tables also handle \ninheritance (which I think does not apply to sequences). 
Other than \nthat, it's calling the functions from (1).\n\n\nsubscriptioncmds.c\n------------------\n\n1) fetch_table_list, fetch_sequence_list\n\nMinimal differences, essentially just the catalog name.\n\n2) AlterSubscription_refresh\n\nA lot of duplication, but the code handling tables and sequences is \nalmost exactly the same and can be reused fairly easily (moved to a \nseparate function, called for tables and then sequences).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 30 Jul 2021 20:26:50 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Tue, Jul 20, 2021 at 5:41 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> I think the crucial aspect of this that needs discussion/feedback the\n> most is the transactional vs. non-transactional behavior. All the other\n> questions are less important / cosmetic.\n\nYeah, it seems really tricky to me to get this right. The hard part\nis, I think, mostly figuring out what the right behavior really is.\n\nDDL in PostgreSQL is transactional. Non-DDL operations on sequences\nare non-transactional. If a given transaction does only one of those\nthings, it seems clear enough what to do, but when the same\n(sub)transaction does both, it gets messy. I'd be tempted to think\nabout something like:\n\n1. When a transaction performs only non-transactional operations on\nsequences, they are emitted immediately.\n\n2. If a transaction performs transactional operations on sequences,\nthe decoded operations acquire a dependency on the transaction and\ncannot be emitted until that transaction is fully decoded. When commit\nor abort of that XID is reached, emit the postponed non-transactional\noperations at that point.\n\nI think this is similar to what you've designed, but I'm not sure that\nit's exactly equivalent. 
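A rough sketch of those two rules in Python (hypothetical names, not the actual reorderbuffer API): once a transaction performs a transactional sequence operation, its later sequence changes wait until that XID finishes; everything else is emitted immediately:

```python
class SequenceDecoder:
    """Sketch of rules 1 and 2: sequence changes are emitted right away,
    unless their XID has performed a transactional sequence operation,
    in which case they are postponed until that XID commits or aborts."""

    def __init__(self, emit):
        self.emit = emit        # stands in for the output-plugin callback
        self.postponed = {}     # xid -> postponed sequence changes

    def transactional_op(self, xid):
        # e.g. CREATE/ALTER SEQUENCE assigning a new relfilenode
        self.postponed.setdefault(xid, [])

    def sequence_change(self, xid, change):
        if xid in self.postponed:
            self.postponed[xid].append(change)  # rule 2: depends on the xact
        else:
            self.emit(change)                   # rule 1: emit immediately

    def xid_finished(self, xid):
        # commit *or* abort: the postponed non-transactional
        # operations are emitted at this point
        for change in self.postponed.pop(xid, []):
            self.emit(change)


out = []
dec = SequenceDecoder(out.append)
dec.sequence_change(101, "s: advance to 33")  # rule 1, emitted right away
dec.transactional_op(102)                     # xid 102 created a sequence
dec.sequence_change(102, "t: advance to 32")  # rule 2, postponed
assert out == ["s: advance to 33"]
dec.xid_finished(102)
assert out == ["s: advance to 33", "t: advance to 32"]
```

What exactly should happen to the postponed changes when the XID aborts (emit them anyway, or drop the ones tied to a discarded relfilenode) is deliberately left open here, matching the ambiguity discussed in this thread.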
I think in particular that it may be better\nto insist that all of these operations are non-transactional and that\nthe debate is only about when they can be sent, rather than trying to\nsort of convert them into an equivalent series of transactional\noperations. That approach seems confusing especially in the case where\nsome (sub)transactions abort.\n\nBut this is just my $0.02.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 30 Jul 2021 14:58:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 30.07.21 20:26, Tomas Vondra wrote:\n> Here's a an updated version of this patch - rebased to current master, \n> and fixing some of the issues raised in Peter's review.\n\nThis patch needs an update, as various conflicts have arisen now.\n\nAs was discussed before, it might be better to present a separate patch \nfor just the logical decoding part for now, since the replication and \nDDL stuff has the potential to conflict heavily with other patches being \ndiscussed right now. It looks like cutting this patch in two should be \ndoable easily.\n\nI looked through the test cases in test_decoding again. It all looks \npretty sensible. If anyone can think of any other tricky or dubious \ncases, we can add them there. 
It's easiest to discuss these things with \nconcrete test cases rather than in theory.\n\nOne slightly curious issue is that this can make sequence values go \nbackwards, when seen by the logical decoding consumer, like in the test \ncase:\n\n+ BEGIN\n+ sequence: public.test_sequence transactional: 1 created: 1 last_value: \n1, log_cnt: 0 is_called: 0\n+ COMMIT\n+ sequence: public.test_sequence transactional: 0 created: 0 last_value: \n33, log_cnt: 0 is_called: 1\n+ BEGIN\n+ sequence: public.test_sequence transactional: 1 created: 1 last_value: \n4, log_cnt: 0 is_called: 1\n+ COMMIT\n+ sequence: public.test_sequence transactional: 0 created: 0 last_value: \n334, log_cnt: 0 is_called: 1\n\nI suppose that's okay, since it's not really the intention that someone \nis concurrently consuming sequence values on the subscriber. Maybe \nsomething for the future. Fixing that would require changing the way \ntransactional sequence DDL updates these values, so it's not directly \nthe job of the decoding to address this.\n\nA small thing I found: Maybe the text that test_decoding produces for \nsequences can be made to look more consistent with the one for tables. 
\nFor example, in\n\n+ BEGIN\n+ sequence: public.test_table_a_seq transactional: 1 created: 1 \nlast_value: 1, log_cnt: 0 is_called: 0\n+ sequence: public.test_table_a_seq transactional: 1 created: 0 \nlast_value: 33, log_cnt: 0 is_called: 1\n+ table public.test_table: INSERT: a[integer]:1 b[integer]:100\n+ table public.test_table: INSERT: a[integer]:2 b[integer]:200\n+ COMMIT\n\nnote how the punctuation is different.\n\n\n", "msg_date": "Thu, 23 Sep 2021 12:27:17 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "Hi,\n\nOn 9/23/21 12:27 PM, Peter Eisentraut wrote:\n> On 30.07.21 20:26, Tomas Vondra wrote:\n>> Here's a an updated version of this patch - rebased to current master, \n>> and fixing some of the issues raised in Peter's review.\n> \n> This patch needs an update, as various conflicts have arisen now.\n> \n> As was discussed before, it might be better to present a separate patch \n> for just the logical decoding part for now, since the replication and \n> DDL stuff has the potential to conflict heavily with other patches being \n> discussed right now.  It looks like cutting this patch in two should be \n> doable easily.\n> \n\nAttached is the rebased patch, split into three parts:\n\n1) basic decoding infrastructure (decoding, reorderbuffer etc.)\n2) support for sequences in test_decoding\n3) support for sequences in built-in replication (catalogs, syntax, ...)\n\nThe last part is the largest one - I'm sure we'll have discussions about \nthe grammar, adding sequences automatically, etc. But as you said, let's \nfocus on the first part, which deals with the required decoding stuff.\n\nI've added a couple comments, explaining how we track sequences, why we \nneed the XID in nextval() etc. I've also added streaming support.\n\n\n> I looked through the test cases in test_decoding again.  It all looks \n> pretty sensible.  
If anyone can think of any other tricky or dubious \n> cases, we can add them there.  It's easiest to discuss these things with \n> concrete test cases rather than in theory.\n> \n> One slightly curious issue is that this can make sequence values go \n> backwards, when seen by the logical decoding consumer, like in the test \n> case:\n> \n> + BEGIN\n> + sequence: public.test_sequence transactional: 1 created: 1 last_value: \n> 1, log_cnt: 0 is_called: 0\n> + COMMIT\n> + sequence: public.test_sequence transactional: 0 created: 0 last_value: \n> 33, log_cnt: 0 is_called: 1\n> + BEGIN\n> + sequence: public.test_sequence transactional: 1 created: 1 last_value: \n> 4, log_cnt: 0 is_called: 1\n> + COMMIT\n> + sequence: public.test_sequence transactional: 0 created: 0 last_value: \n> 334, log_cnt: 0 is_called: 1\n> \n> I suppose that's okay, since it's not really the intention that someone \n> is concurrently consuming sequence values on the subscriber.  Maybe \n> something for the future.  Fixing that would require changing the way \n> transactional sequence DDL updates these values, so it's not directly \n> the job of the decoding to address this.\n> \n\nYeah, that's due to how ALTER SEQUENCE does things, and I agree redoing \nthat seems well out of scope for this patch. What seems a bit suspicious \nis that some of the ALTER SEQUENCE changes have \"created: 1\" - it's \nprobably correct, though, because those ALTER SEQUENCE statements can be \nrolled-back, so we see it as if a new sequence is created. The flag name \nmight be a bit confusing, though.\n\n> A small thing I found: Maybe the text that test_decoding produces for \n> sequences can be made to look more consistent with the one for tables. 
\n> For example, in\n> \n> + BEGIN\n> + sequence: public.test_table_a_seq transactional: 1 created: 1 \n> last_value: 1, log_cnt: 0 is_called: 0\n> + sequence: public.test_table_a_seq transactional: 1 created: 0 \n> last_value: 33, log_cnt: 0 is_called: 1\n> + table public.test_table: INSERT: a[integer]:1 b[integer]:100\n> + table public.test_table: INSERT: a[integer]:2 b[integer]:200\n> + COMMIT\n> \n> note how the punctuation is different.\n\nI did tweak this a bit, hopefully it's more consistent.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 24 Sep 2021 21:16:20 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "Just a note for some design decisions\n\n> 1) By default, sequences are treated non-transactionally, i.e. sent to the output plugin right away.\n\nIf our aim is just to make sure that all user-visible data in\n*transactional* tables is consistent with sequence state, then one\nvery much simplified approach to this could be to track the results of\nnextval() calls in a transaction and at COMMIT put the latest sequence\nvalue in WAL (or just track the sequences affected and put the latest\nsequence state in WAL at commit, which needs an extra read of the sequence\nbut protects against race conditions with parallel transactions which get\nrolled back later)\n\nThis avoids sending redundant changes for multiple nextval() calls\n(like loading a million-row table with a sequence-generated id column)\n\nAnd one can argue that we can safely ignore anything in ROLLBACKED\nsequences. 
This is assuming that even if we did advance the sequence\npast the last value sent by the latest COMMITTED transaction it does\nnot matter for database consistency.\n\nIt can matter if customers just call nextval() in rolled-back\ntransactions and somehow expect these values to be replicated based on\nreasoning along \"sequences are not transactional - so rollbacks should\nnot matter\".\n\nOr we may get away with most in-detail sequence tracking on the source\nif we just keep track of the xmin of the sequence and send the\nsequence info over at commit if it == current_transaction_id ?\n\n\n-----\nHannu Krosing\nGoogle Cloud - We have a long list of planned contributions and we are hiring.\nContact me if interested.\n\nOn Fri, Sep 24, 2021 at 9:16 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> On 9/23/21 12:27 PM, Peter Eisentraut wrote:\n> > On 30.07.21 20:26, Tomas Vondra wrote:\n> >> Here's a an updated version of this patch - rebased to current master,\n> >> and fixing some of the issues raised in Peter's review.\n> >\n> > This patch needs an update, as various conflicts have arisen now.\n> >\n> > As was discussed before, it might be better to present a separate patch\n> > for just the logical decoding part for now, since the replication and\n> > DDL stuff has the potential to conflict heavily with other patches being\n> > discussed right now. It looks like cutting this patch in two should be\n> > doable easily.\n> >\n>\n> Attached is the rebased patch, split into three parts:\n>\n> 1) basic decoding infrastructure (decoding, reorderbuffer etc.)\n> 2) support for sequences in test_decoding\n> 3) support for sequences in built-in replication (catalogs, syntax, ...)\n>\n> The last part is the largest one - I'm sure we'll have discussions about\n> the grammar, adding sequences automatically, etc. 
But as you said, let's\n> focus on the first part, which deals with the required decoding stuff.\n>\n> I've added a couple comments, explaining how we track sequences, why we\n> need the XID in nextval() etc. I've also added streaming support.\n>\n>\n> > I looked through the test cases in test_decoding again. It all looks\n> > pretty sensible. If anyone can think of any other tricky or dubious\n> > cases, we can add them there. It's easiest to discuss these things with\n> > concrete test cases rather than in theory.\n> >\n> > One slightly curious issue is that this can make sequence values go\n> > backwards, when seen by the logical decoding consumer, like in the test\n> > case:\n> >\n> > + BEGIN\n> > + sequence: public.test_sequence transactional: 1 created: 1 last_value:\n> > 1, log_cnt: 0 is_called: 0\n> > + COMMIT\n> > + sequence: public.test_sequence transactional: 0 created: 0 last_value:\n> > 33, log_cnt: 0 is_called: 1\n> > + BEGIN\n> > + sequence: public.test_sequence transactional: 1 created: 1 last_value:\n> > 4, log_cnt: 0 is_called: 1\n> > + COMMIT\n> > + sequence: public.test_sequence transactional: 0 created: 0 last_value:\n> > 334, log_cnt: 0 is_called: 1\n> >\n> > I suppose that's okay, since it's not really the intention that someone\n> > is concurrently consuming sequence values on the subscriber. Maybe\n> > something for the future. Fixing that would require changing the way\n> > transactional sequence DDL updates these values, so it's not directly\n> > the job of the decoding to address this.\n> >\n>\n> Yeah, that's due to how ALTER SEQUENCE does things, and I agree redoing\n> that seems well out of scope for this patch. What seems a bit suspicious\n> is that some of the ALTER SEQUENCE changes have \"created: 1\" - it's\n> probably correct, though, because those ALTER SEQUENCE statements can be\n> rolled-back, so we see it as if a new sequence is created. 
The flag name\n> might be a bit confusing, though.\n>\n> > A small thing I found: Maybe the text that test_decoding produces for\n> > sequences can be made to look more consistent with the one for tables.\n> > For example, in\n> >\n> > + BEGIN\n> > + sequence: public.test_table_a_seq transactional: 1 created: 1\n> > last_value: 1, log_cnt: 0 is_called: 0\n> > + sequence: public.test_table_a_seq transactional: 1 created: 0\n> > last_value: 33, log_cnt: 0 is_called: 1\n> > + table public.test_table: INSERT: a[integer]:1 b[integer]:100\n> > + table public.test_table: INSERT: a[integer]:2 b[integer]:200\n> > + COMMIT\n> >\n> > note how the punctuation is different.\n>\n> I did tweak this a bit, hopefully it's more consistent.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 25 Sep 2021 22:05:43 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 9/25/21 22:05, Hannu Krosing wrote:\n> Just a note for some design decisions\n> \n>> 1) By default, sequences are treated non-transactionally, i.e. sent to the output plugin right away.\n> \n> If our aim is just to make sure that all user-visible data in\n> *transactional* tables is consistent with sequence state then one\n> very much simplified approach to this could be to track the results of\n> nextval() calls in a transaction at COMMIT put the latest sequence\n> value in WAL (or just track the sequences affected and put the latest\n> sequence state in WAL at commit which needs extra read of sequence but\n> protects against race conditions with parallel transactions which get\n> rolled back later)\n> \n\nNot sure. 
TBH I feel rather uneasy about adding more stuff in COMMIT.\n\n> This avoids sending redundant changes for multiple nextval() calls\n> (like loading a million-row table with sequence-generated id column)\n> \n\nYeah, it'd be nice to optimize this a bit, somehow. But I'd bet \nit's a negligible amount of data / changes, compared to the table.\n\n> And one can argue that we can safely ignore anything in ROLLBACKED\n> sequences. This is assuming that even if we did advance the sequence\n> past the last value sent by the latest COMMITTED transaction it does\n> not matter for database consistency.\n> \n\nI don't think we can ignore aborted (ROLLBACK) transactions, in the \nsense that you can't just discard the increments. Imagine you have this \nsequence of transactions:\n\nBEGIN;\nSELECT nextval('s'); -- allocates new chunk of values\nROLLBACK;\n\nBEGIN;\nSELECT nextval('s'); -- returns one of the cached values\nCOMMIT;\n\nIf you ignore the aborted transaction, then the sequence increment won't \nbe replicated -- but that's wrong, because the user now has a visible \nsequence value from that chunk.\n\nSo I guess we'd have to maintain a cache of sequences incremented in the \ncurrent session, do nothing in aborted transactions (i.e. keep the \ncontents but don't log anything) and log/reset at commit.\n\nI wonder if multiple sessions make this even more problematic (e.g. due \nto session just disconnecting mid transaction, without writing the \nabort record at all). But AFAICS that's not an issue, because the other \nsession has a separate cache for the sequence.\n\n> It can matter if customers just call nextval() in rolled-back\n> transactions and somehow expect these values to be replicated based on\n> reasoning along \"sequences are not transactional - so rollbacks should\n> not matter\".\n> \n\nI don't think we guarantee anything for data in transactions that did \nnot commit, so this seems like a non-issue. I.e. 
we don't need to go out \nof our way to guarantee something we never promised.\n\n> Or we may get away with most in-detail sequence tracking on the source\n> if we just keep track of the xmin of the sequence and send the\n> sequence info over at commit if it == current_transaction_id ?\n> \n\nNot sure I understand this proposal. Can you explain?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 31 Oct 2021 21:44:53 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "Hi,\n\nI've spent a bit of time exploring the alternative approach outlined by \nHannu, i.e. tracking sequences accessed by the transaction, and logging \nthe final state just once at COMMIT. Attached is an experimental version \nof the patch series doing that - 0001 does the original approach \n(decoding the sequence updates from WAL) and then 0002 reworks it to \nthis alternative solution. The 0003 and 0004 stay mostly the same, \nexcept for minor fixes. Some of the tests in 0003/0004 fail, because \n0002 changes the semantics in various ways (more about that later).\n\nThe original approach (0001) may seem complex at first, but in principle \nit just decodes changes to the sequence relation, and either stashes \nthem into transaction (just like other changes) or applies them right \naway. I'd say that's the most complicated part - deciding whether the \nchange is transactional or not.\n\n0002 reworks that so that it doesn't decode the existing WAL records, \nbut tracks sequences which have been modified (updated on-disk state) \nand then accessed in the current transaction. And then at COMMIT time we \nwrite a new WAL message with info about the sequence.\n\nI realized we already cache sequences for each session - seqhashtab in \nsequence.c. 
It doesn't have any concept \nof a transaction, but it seems \nfairly easy to make that possible. I did this by adding two flags\n\n - needs_log - means the session advanced the sequence (on disk)\n - touched - true if the current xact called nextval() etc.\n\nThe idea is that what matters is updates to on-disk state, so whenever \nwe do that we set needs_log. But it only matters when the changes are \nmade visible in a committed transaction. Consider for example this:\n\nBEGIN;\nSELECT nextval('s') FROM generate_series(1,10000) s(i);\nROLLBACK;\nSELECT nextval('s');\n\nThe first nextval() call certainly sets both flags to true, at least for \ndefault sequences caching 32 values. But the values are not confirmed to \nthe user because of the rollback - this resets 'touched' flag, but \nleaves 'needs_log' set to true.\n\nAnd then the next nextval() - which may easily be just from cache - sets \ntouched=true again, and logs the sequence state at (implicit) commit. \nWhich resets both flags again.\n\nThe logging/cleanup happens in AtEOXact_Sequences() which gets called \nbefore commit/abort. This walks all cached sequences and writes the \nstate for those with both flags true (or resets flag for abort).\n\nThe cache also keeps info about the last \"sequence state\" in the \nsession, which is then used when writing into WAL.\n\n\nTo write the sequence state into WAL, I've added a new WAL record \nxl_logical_sequence to RM_LOGICALMSG_ID, next to the xl_logical_message. \nIt's a bit arbitrary, maybe it should be part of RM_SEQ_ID, but it does \nthe trick. I don't think this is the main issue and it's easy enough to \nmove it elsewhere if needed.\n\nSo, that seems fairly straight-forward and it may reduce the number of \nreplication messages for large transactions. Unfortunately, it's not \nmuch simpler compared to the first approach - the amount of code is \nabout the same, and there's a bunch of other issues.\n\nThe main issue seems to be about ordering. 
Consider multiple sessions \nall advancing the sequence. With the \"old\" approach this was naturally \nordered - the order in which the increments were written to WAL made \nsense. But the sessions may advance the sequences in one order and then \ncommit in a different order, which mixes the updates. Consider for \nexample this scenario with two concurrent transactions:\n\nT1: nextval('s') -> allocates values [1,32]\nT2: nextval('s') -> allocates values [33,64]\nT2: commit -> logs [33,64]\nT1: commit -> logs [1,32]\n\nThe result is the sequence on the replica diverged because it replayed \nthe increments in the opposite order.\n\nI can think of two ways to fix this. Firstly, we could \"merge\" the \nincrements in some smart way, e.g. by discarding values considered \n\"stale\" (like decrements). But that seems pretty fragile, because the \nsequence may be altered in various ways, reset, etc. And it seems more \nlike transferring responsibility to someone else instead of actually \nsolving the issue.\n\nThe other fix is simply reading the current sequence state from disk at \ncommit and logging that (instead of the values cached from the last \nincrement). But I'm rather skeptical about doing such things right \nbefore COMMIT.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 19 Nov 2021 20:54:46 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "Hi,\n\nHere's a slightly improved version of the patch, fixing a couple issues \nI failed to notice. It also addresses a couple of the issues described \nin the last message, although mostly to show what would need to be done.\n\n1) Handle sequences dropped in the xact by calling try_relation_open, \nand just doing nothing if not found. 
Otherwise we'd get a failure in \nreorderbuffer, when decoding the change.\n\n2) Fixed nextval_internal() to log the correct last_value (the one we \nwrite into WAL).\n\n3) Reread the sequence state in AtEOXact_Sequences, to deal with the \nordering issue described before. This makes (2) somewhat pointless, \nbecause we just read whatever is on disk at that point. But having both \nmakes it easier to experiment / see what'd happen.\n\n4) Log the stats in DefineSequence() - Without this we'd not have the \ninitial sequence state in the WAL, because only nextval/setval etc. do \nthe logging. The old approach (decoding the sequence tuple) does not \nhave this issue.\n\n\nThe (3) changes the behavior in a somewhat strange way. Consider this \ncase with two concurrent transactions:\n\nT1: BEGIN;\nT2: BEGIN;\nT1: SELECT nextval('s') FROM generate_series(1,100) s(i);\nT2: SELECT nextval('s') FROM generate_series(1,100) s(i);\nT1: COMMIT;\nT2: COMMIT;\n\nThe result is that both transactions have used the same sequence, and so \nwill re-read the state from disk. But at that point the state is exactly \nthe same, so we'll log the same thing twice.\n\nThere's a much deeper issue, though. The current patch only logs the \nsequence if the session generated WAL when incrementing the sequence \n(which happens every 32 values). But other sessions may already use \nvalues from this range, so consider for example this:\n\nT1: BEGIN;\nT1: SELECT nextval('s') FROM generate_series(1,100) s(i);\nT2: BEGIN;\nT2: SELECT nextval('s');\nT2: COMMIT;\nT1: ROLLBACK;\n\nWhich unfortunately means T2 already used a value, but the increment may \nnot be logged at that time (or ever). This seems like a fatal issue, \nbecause it means we need to log *all* sequences the transaction touches, \nnot just those that wrote the increment to WAL. 
That might still work \nfor large transactions consuming many sequence values, but it's pretty \ninefficient for small OLTP transactions that only need one or two values \nfrom the sequence.\n\nSo I think just decoding the sequence tuples is a better solution - for \nlarge transactions (consuming many values from the sequence) it may be \nmore expensive (i.e. send more records to replica). But I doubt that \nmatters too much - it's likely negligible compared to other data for \nlarge transactions.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 22 Nov 2021 01:47:28 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 22.11.21 01:47, Tomas Vondra wrote:\n> So I think just decoding the sequence tuples is a better solution - for \n> large transactions (consuming many values from the sequence) it may be \n> more expensive (i.e. send more records to replica). But I doubt that \n> matters too much - it's likely negligible compared to other data for \n> large transactions.\n\nI agree that the original approach is better. It was worth trying out \nthis alternative, but it seems quite complicated. I note that a lot of \nadditional code had to be added around several areas of the code, \nwhereas the original patch really just touched the logical decoding \ncode, as it should. This doesn't prevent anyone from attempting to \noptimize things somehow in the future, but for now let's move forward \nwith the simple approach.\n\n\n", "msg_date": "Mon, 22 Nov 2021 16:44:57 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n> On 22. 11. 
2021, at 16:44, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 22.11.21 01:47, Tomas Vondra wrote:\n>> So I think just decoding the sequence tuples is a better solution - for large transactions (consuming many values from the sequence) it may be more expensive (i.e. send more records to replica). But I doubt that matters too much - it's likely negligible compared to other data for large transactions.\n> \n> I agree that the original approach is better. It was worth trying out this alternative, but it seems quite complicated. I note that a lot of additional code had to be added around several areas of the code, whereas the original patch really just touched the logical decoding code, as it should. This doesn't prevent anyone from attempting to optimize things somehow in the future, but for now let's move forward with the simple approach.\n\n+1\n\n--\nPetr Jelinek\n\n\n\n", "msg_date": "Mon, 22 Nov 2021 17:01:46 +0100", "msg_from": "Petr Jelinek <petr.jelinek@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "Hi,\n\nOn 2021-09-25 22:05:43 +0200, Hannu Krosing wrote:\n> If our aim is just to make sure that all user-visible data in\n> *transactional* tables is consistent with sequence state then one\n> very much simplified approach to this could be to track the results of\n> nextval() calls in a transaction at COMMIT put the latest sequence\n> value in WAL (or just track the sequences affected and put the latest\n> sequence state in WAL at commit which needs extra read of sequence but\n> protects against race conditions with parallel transactions which get\n> rolled back later)\n\nI think this is a bad idea. It's architecturally more complicated and prevents\nuse cases because sequence values aren't guaranteed to be as new as on the\noriginal system. You'd need to track all sequence use somehow *even if there\nis no relevant WAL generated* in a transaction. 
There's simply no evidence of\nsequence use in a transaction if that transaction uses a previously logged\nsequence value.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Nov 2021 17:01:14 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 11/23/21 02:01, Andres Freund wrote:\n> Hi,\n> \n> On 2021-09-25 22:05:43 +0200, Hannu Krosing wrote:\n>> If our aim is just to make sure that all user-visible data in\n>> *transactional* tables is consistent with sequence state then one\n>> very much simplified approach to this could be to track the results of\n>> nextval() calls in a transaction at COMMIT put the latest sequence\n>> value in WAL (or just track the sequences affected and put the latest\n>> sequence state in WAL at commit which needs extra read of sequence but\n>> protects against race conditions with parallel transactions which get\n>> rolled back later)\n> \n> I think this is a bad idea. It's architecturally more complicated and prevents\n> use cases because sequence values aren't guaranteed to be as new as on the\n> original system. You'd need to track all sequence use somehow *even if there\n> is no relevant WAL generated* in a transaction. There's simply no evidence of\n> sequence use in a transaction if that transaction uses a previously logged\n> sequence value.\n> \n\nNot quite. We already have a cache of all sequences used by a session \n(see seqhashtab in sequence.c), and it's not that hard to extend it to \nper-transaction tracking. 
That's what the last two versions do, mostly.\n\nBut there are various issues with that approach, described in my last \nmessage(s).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 23 Nov 2021 14:36:17 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "I have checked the 0001 and 0003 patches. (I think we have dismissed\nthe approach in 0002 for now. And let's talk about 0004 later.)\n\nI have attached a few fixup patches, described further below.\n\n# 0001\n\nThe argument \"create\" for fill_seq_with_data() is always true (and\npatch 0002 removes it). So I'm not sure if it's needed. If it is, it\nshould be documented somewhere.\n\nAbout the comment you added in nextval_internal(): It's a good\nexplanation, so I would leave it in. I also agree with the\nconclusion of the comment that the current solution is reasonable. We\nprobably don't need the same comment again in fill_seq_with_data() and\nagain in do_setval(). Note that both of the latter functions already\npoint to nextval_internal() for other comments, so the same can be\nrelied upon here as well.\n\nThe order of the new fields sequence_cb and stream_sequence_cb is a\nbit inconsistent compared to truncate_cb and stream_truncate_cb.\nMaybe take another look to make the order more uniform.\n\nSome documentation needs to be added to the Logical Decoding chapter.\nI have attached a patch that I think catches all the places that need\nto be updated, with some details left for you to fill in. ;-) The\nordering of some of the documentation chunks reflects the order in\nwhich the callbacks appear in the header files, which might not be\noptimal; see above.\n\nIn reorderbuffer.c, you left a comment about how to access a sequence\ntuple. 
There is an easier way, using Form_pg_sequence_data, which is\nhow sequence.c also does it. See attached patch.\n\n# 0003\n\nThe tests added in 0003 don't pass for me. It appears that you might\nhave forgotten to update the expected files after you added some tests\nor something. The cfbot shows the same. See attached patch for a\ncorrection, but do check what your intent was.\n\nAs mentioned before, we probably don't need the skip-sequences option\nin test_decoding.", "msg_date": "Tue, 7 Dec 2021 15:38:50 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 12/7/21 15:38, Peter Eisentraut wrote:\n> I have checked the 0001 and 0003 patches.  (I think we have dismissed\n> the approach in 0002 for now.  And let's talk about 0004 later.)\n> \n\nRight, I think that's correct.\n\n> I have attached a few fixup patches, described further below.\n> \n> # 0001\n> \n> The argument \"create\" for fill_seq_with_data() is always true (and\n> patch 0002 removes it).  So I'm not sure if it's needed.  If it is, it\n> should be documented somewhere.\n> \n\nGood point. I think it could be removed, but IIRC it'll be needed when\nadding sequence decoding to the built-in replication, and that patch is\nmeant to be just an implementation of the API, without changing WAL.\n\nOTOH I don't see it in the last version of that patch (in ResetSequence2\ncall) so maybe it's not really needed. I've kept it for now, but I'll\ninvestigate.\n\n> About the comment you added in nextval_internal(): It's a good\n> explanation, so I would leave it in.  I also agree with the\n> conclusion of the comment that the current solution is reasonable.  We\n> probably don't need the same comment again in fill_seq_with_data() and\n> again in do_setval().  
Note that both of the latter functions already\n> point to nextval_internal() for other comments, so the same can be\n> relied upon here as well.\n> \n\nTrue. I moved it a bit in nextval_internal() and removed the other\ncopies. The existing references should be enough.\n\n> The order of the new fields sequence_cb and stream_sequence_cb is a\n> bit inconsistent compared to truncate_cb and stream_truncate_cb.\n> Maybe take another look to make the order more uniform.\n> \n\nYou mean in OutputPluginCallbacks? I'd actually argue it's the truncate\ncallbacks that are inconsistent - in the regular section truncate_cb is\nright before commit_cb, while in the streaming section it's at the end.\n\nOr what order do you think would be better? I'm fine with changing it.\n\n> Some documentation needs to be added to the Logical Decoding chapter.\n> I have attached a patch that I think catches all the places that need\n> to be updated, with some details left for you to fill in. ;-) The\n> ordering of some of the documentation chunks reflects the order in\n> which the callbacks appear in the header files, which might not be\n> optimal; see above.\n> \n\nThanks. I added a bit about the callbacks being optional and what the\nparameters mean (only for sequence_cb, as the stream_ callbacks\ngenerally don't copy that bit).\n\n> In reorderbuffer.c, you left a comment about how to access a sequence\n> tuple.  There is an easier way, using Form_pg_sequence_data, which is\n> how sequence.c also does it.  See attached patch.\n> \n\nYeah, that looks much nicer.\n\n> # 0003\n> \n> The tests added in 0003 don't pass for me.  It appears that you might\n> have forgotten to update the expected files after you added some tests\n> or something.  The cfbot shows the same.  See attached patch for a\n> correction, but do check what your intent was.\n> \n\nYeah. I suspect I removed the expected results due to the experimental\nrework. 
I haven't noticed that because some of the tests for the\nbuilt-in replication are expected to fail.\n\n\nAttached is an updated version of the first two parts (infrastructure\nand test_decoding), with all your fixes merged.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 8 Dec 2021 01:23:45 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 08.12.21 01:23, Tomas Vondra wrote:\n>> The argument \"create\" for fill_seq_with_data() is always true (and\n>> patch 0002 removes it).  So I'm not sure if it's needed.  If it is, it\n>> should be documented somewhere.\n>>\n> \n> Good point. I think it could be removed, but IIRC it'll be needed when\n> adding sequence decoding to the built-in replication, and that patch is\n> meant to be just an implementation of the API, without changing WAL.\n> \n> OTOH I don't see it in the last version of that patch (in ResetSequence2\n> call) so maybe it's not really needed. I've kept it for now, but I'll\n> investigate.\n\nOk, please check. If it is needed, perhaps then we need a way for \ntest_decoding() to simulate it, for testing. But perhaps it's not needed.\n\n>> The order of the new fields sequence_cb and stream_sequence_cb is a\n>> bit inconsistent compared to truncate_cb and stream_truncate_cb.\n>> Maybe take another look to make the order more uniform.\n>>\n> \n> You mean in OutputPluginCallbacks? I'd actually argue it's the truncate\n> callbacks that are inconsistent - in the regular section truncate_cb is\n> right before commit_cb, while in the streaming section it's at the end.\n\nOk, that makes sense. 
Then leave yours.\n\nWhen the question about fill_seq_with_data() is resolved, I have no more \ncomments on this part.\n\n\n", "msg_date": "Wed, 8 Dec 2021 10:04:37 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "Hi,\n\nhere's an updated version of the patches, dealing with almost all of the\nissues (at least in the 0001 and 0002 parts). The main changes:\n\n1) I've removed the 'created' flag from fill_seq_with_data, as\ndiscussed. I don't think it's needed by any of the parts (not even 0003,\nAFAICS). We still need it in xl_seq_rec, though.\n\n2) GetCurrentTransactionId() added to sequence.c are called only with\nwal_level=logical, to minimize the overhead.\n\n\nThere's still one remaining problem, that I already explained in [1].\nThe problem is that with this:\n\n BEGIN;\n SELECT nextval('s') FROM generate_series(1,100);\n ROLLBACK;\n\n\nThe root cause is that pg_current_wal_lsn() uses the LogwrtResult.Write,\nwhich is updated by XLogFlush() - but only in RecordTransactionCommit.\nWhich makes sense, because only the committed stuff is \"visible\".\n\nBut the non-transactional behavior of sequence decoding disagrees with\nthis, because now some of the changes from aborted transactions may be\nreplicated. Which means the wait_for_catchup() ends up not waiting for\nthe sequence change to be replicated. 
This is an issue for tests in\npatch 0003, at least.\n\nMy concern is this actually affects other places waiting for things\ngetting replicated :-/\n\nI have outlined some ideas how to deal with this in [1], but we got\nsidetracked by exploring the alternative approach.\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/3d6df331-5532-6848-eb45-344b265e0238%40enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 14 Dec 2021 02:31:45 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Tue, Dec 14, 2021 at 7:02 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> here's an updated version of the patches, dealing with almost all of the\n> issues (at least in the 0001 and 0002 parts). The main changes:\n>\n> 1) I've removed the 'created' flag from fill_seq_with_data, as\n> discussed. I don't think it's needed by any of the parts (not even 0003,\n> AFAICS). We still need it in xl_seq_rec, though.\n>\n> 2) GetCurrentTransactionId() added to sequence.c are called only with\n> wal_level=logical, to minimize the overhead.\n>\n>\n> There's still one remaining problem, that I already explained in [1].\n> The problem is that with this:\n>\n> BEGIN;\n> SELECT nextval('s') FROM generate_series(1,100);\n> ROLLBACK;\n>\n>\n> The root cause is that pg_current_wal_lsn() uses the LogwrtResult.Write,\n> which is updated by XLogFlush() - but only in RecordTransactionCommit.\n> Which makes sense, because only the committed stuff is \"visible\".\n>\n> But the non-transactional behavior of sequence decoding disagrees with\n> this, because now some of the changes from aborted transactions may be\n> replicated. Which means the wait_for_catchup() ends up not waiting for\n> the sequence change to be replicated. 
This is an issue for tests in\n> patch 0003, at least.\n>\n> My concern is this actually affects other places waiting for things\n> getting replicated :-/\n>\n\nBy any chance, will this impact synchronous replication as well which\nwaits for commits to be replicated?\n\nHow is this patch dealing with prepared transaction case where at\nprepare we will send transactional changes and then later if rollback\nprepared happens then the publisher will rollback changes related to\nnew relfilenode but subscriber would have already replayed the updated\nseqval change which won't be rolled back?\n\n--\nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 15 Dec 2021 18:50:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 14.12.21 02:31, Tomas Vondra wrote:\n> There's still one remaining problem, that I already explained in [1].\n> The problem is that with this:\n> \n> BEGIN;\n> SELECT nextval('s') FROM generate_series(1,100);\n> ROLLBACK;\n> \n> \n> The root cause is that pg_current_wal_lsn() uses the LogwrtResult.Write,\n> which is updated by XLogFlush() - but only in RecordTransactionCommit.\n> Which makes sense, because only the committed stuff is \"visible\".\n> \n> But the non-transactional behavior of sequence decoding disagrees with\n> this, because now some of the changes from aborted transactions may be\n> replicated. 
Which means the wait_for_catchup() ends up not waiting for\n> the sequence change to be replicated.\n\nI can't think of a reason why this might be a problem.\n\n\n", "msg_date": "Wed, 15 Dec 2021 14:44:45 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 12/15/21 14:20, Amit Kapila wrote:\n> On Tue, Dec 14, 2021 at 7:02 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Hi,\n>>\n>> here's an updated version of the patches, dealing with almost all of the\n>> issues (at least in the 0001 and 0002 parts). The main changes:\n>>\n>> 1) I've removed the 'created' flag from fill_seq_with_data, as\n>> discussed. I don't think it's needed by any of the parts (not even 0003,\n>> AFAICS). We still need it in xl_seq_rec, though.\n>>\n>> 2) GetCurrentTransactionId() added to sequence.c are called only with\n>> wal_level=logical, to minimize the overhead.\n>>\n>>\n>> There's still one remaining problem, that I already explained in [1].\n>> The problem is that with this:\n>>\n>> BEGIN;\n>> SELECT nextval('s') FROM generate_series(1,100);\n>> ROLLBACK;\n>>\n>>\n>> The root cause is that pg_current_wal_lsn() uses the LogwrtResult.Write,\n>> which is updated by XLogFlush() - but only in RecordTransactionCommit.\n>> Which makes sense, because only the committed stuff is \"visible\".\n>>\n>> But the non-transactional behavior of sequence decoding disagrees with\n>> this, because now some of the changes from aborted transactions may be\n>> replicated. Which means the wait_for_catchup() ends up not waiting for\n>> the sequence change to be replicated. 
This is an issue for tests in\n>> patch 0003, at least.\n>>\n>> My concern is this actually affects other places waiting for things\n>> getting replicated :-/\n>>\n> \n> By any chance, will this impact synchronous replication as well which\n> waits for commits to be replicated?\n> \n\nPhysical or logical replication? Physical is certainly not replicated.\n\nFor logical replication, it's more complicated.\n\n> How is this patch dealing with prepared transaction case where at\n> prepare we will send transactional changes and then later if rollback\n> prepared happens then the publisher will rollback changes related to\n> new relfilenode but subscriber would have already replayed the updated\n> seqval change which won't be rolled back?\n> \n\nI'm not sure what exact scenario you are describing, but in general we\ndon't rollback sequence changes anyway, so this should not cause any\ndivergence between the publisher and subscriber.\n\nOr are you talking about something like ALTER SEQUENCE? I think that\nshould trigger the transactional behavior for the new relfilenode, so\nthe subscriber won't see that changes.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 15 Dec 2021 14:51:51 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 12/15/21 14:51, Tomas Vondra wrote:\n> On 12/15/21 14:20, Amit Kapila wrote:\n>> On Tue, Dec 14, 2021 at 7:02 AM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>> Hi,\n>>>\n>>> here's an updated version of the patches, dealing with almost all of the\n>>> issues (at least in the 0001 and 0002 parts). The main changes:\n>>>\n>>> 1) I've removed the 'created' flag from fill_seq_with_data, as\n>>> discussed. I don't think it's needed by any of the parts (not even 0003,\n>>> AFAICS). 
We still need it in xl_seq_rec, though.\n>>>\n>>> 2) GetCurrentTransactionId() added to sequence.c are called only with\n>>> wal_level=logical, to minimize the overhead.\n>>>\n>>>\n>>> There's still one remaining problem, that I already explained in [1].\n>>> The problem is that with this:\n>>>\n>>> BEGIN;\n>>> SELECT nextval('s') FROM generate_series(1,100);\n>>> ROLLBACK;\n>>>\n>>>\n>>> The root cause is that pg_current_wal_lsn() uses the LogwrtResult.Write,\n>>> which is updated by XLogFlush() - but only in RecordTransactionCommit.\n>>> Which makes sense, because only the committed stuff is \"visible\".\n>>>\n>>> But the non-transactional behavior of sequence decoding disagrees with\n>>> this, because now some of the changes from aborted transactions may be\n>>> replicated. Which means the wait_for_catchup() ends up not waiting for\n>>> the sequence change to be replicated. This is an issue for tests in\n>>> patch 0003, at least.\n>>>\n>>> My concern is this actually affects other places waiting for things\n>>> getting replicated :-/\n>>>\n>>\n>> By any chance, will this impact synchronous replication as well which\n>> waits for commits to be replicated?\n>>\n> \n> Physical or logical replication? Physical is certainly not replicated.\n> \n> For logical replication, it's more complicated.\n> \n\nApologies, sent too early ... I think it's more complicated for logical\nsync replication, because of a scenario like this:\n\n BEGIN;\n SELECT nextval('s') FROM generate_series(1,100); <-- writes WAL\n ROLLBACK;\n\n SELECT nextval('s');\n\nThe first transaction advances the sequence enough to generate a WAL,\nwhich we do every 32 values. But it's rolled back, so it does not update\nLogwrtResult.Write, because that happens only at commit.\n\nAnd then the nextval() generates a value from the sequence without\ngenerating WAL, so it doesn't update the LSN either (IIRC). 
That'd mean\na sync replication may not wait for this change to reach the subscriber.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 15 Dec 2021 14:58:39 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "Looking at 0003,\n\nOn 2021-Dec-14, Tomas Vondra wrote:\n\n> diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml\n> index bb4ef5e5e22..4d166ad3f9c 100644\n> --- a/doc/src/sgml/ref/alter_publication.sgml\n> +++ b/doc/src/sgml/ref/alter_publication.sgml\n> @@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class=\"parameter\">name</replaceable> RENAME TO <r\n> <phrase>where <replaceable class=\"parameter\">publication_object</replaceable> is one of:</phrase>\n> \n> TABLE [ ONLY ] <replaceable class=\"parameter\">table_name</replaceable> [ * ] [, ... ]\n> + SEQUENCE <replaceable class=\"parameter\">sequence_name</replaceable> [ * ] [, ... ]\n> ALL TABLES IN SCHEMA { <replaceable class=\"parameter\">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]\n> + ALL SEQUENCE IN SCHEMA { <replaceable class=\"parameter\">schema_name</replaceable> | CURRENT_SCHEMA } [, ... 
]\n\nNote that this says ALL SEQUENCE; I think it should be ALL SEQUENCES.\n\n> diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\n> index 3d4dd43e47b..f037c17985b 100644\n> --- a/src/backend/parser/gram.y\n> +++ b/src/backend/parser/gram.y\n> @@ -9762,6 +9762,26 @@ PublicationObjSpec:\n...\n> +\t\t\t| ALL SEQUENCE IN_P SCHEMA ColId\n> +\t\t\t\t{\n> +\t\t\t\t\t$$ = makeNode(PublicationObjSpec);\n> +\t\t\t\t\t$$->pubobjtype = PUBLICATIONOBJ_SEQUENCE_IN_SCHEMA;\n> +\t\t\t\t\t$$->name = $5;\n> +\t\t\t\t\t$$->location = @5;\n> +\t\t\t\t}\n> +\t\t\t| ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA\n> +\t\t\t\t{\n> +\t\t\t\t\t$$ = makeNode(PublicationObjSpec);\n> +\t\t\t\t\t$$->pubobjtype = PUBLICATIONOBJ_SEQUENCE_IN_CUR_SCHEMA;\n> +\t\t\t\t\t$$->location = @5;\n> +\t\t\t\t}\n\nAnd here you have ALL SEQUENCE in one spot and ALL SEQUENCES in the\nother.\n\nBTW I think these enum values should use the plural too,\nPUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA (not SEQUENCE). I suppose you\ncopied from PUBLICATIONOBJ_TABLE_IN_CUR_SCHEMA, but that too seems to be\na mistake: should be PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA.\n\n\n> @@ -10097,6 +10117,12 @@ UnlistenStmt:\n> \t\t\t\t}\n> \t\t;\n> \n> +/*\n> + * FIXME\n> + *\n> + * opt_publication_for_sequences and publication_for_sequences should be\n> + * copies for sequences\n> + */\n\nNot sure if this FIXME is relevant or should just be removed.\n\n> @@ -10105,6 +10131,12 @@ UnlistenStmt:\n> *\t\tBEGIN / COMMIT / ROLLBACK\n> *\t\t(also older versions END / ABORT)\n> *\n> + * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]\n> + *\n> + * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]\n> + *\n> + * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]\n> + *\n\nThis comment addition seems misplaced?\n\n> diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c\n> index 2f412ca3db3..e30bf7b1b55 100644\n> --- a/src/bin/psql/tab-complete.c\n> +++ b/src/bin/psql/tab-complete.c\n> @@ 
-1647,13 +1647,13 @@ psql_completion(const char *text, int start, int end)\n> \t\tCOMPLETE_WITH(\"ADD\", \"DROP\", \"OWNER TO\", \"RENAME TO\", \"SET\");\n> \t/* ALTER PUBLICATION <name> ADD */\n> \telse if (Matches(\"ALTER\", \"PUBLICATION\", MatchAny, \"ADD\"))\n> -\t\tCOMPLETE_WITH(\"ALL TABLES IN SCHEMA\", \"TABLE\");\n> +\t\tCOMPLETE_WITH(\"ALL TABLES IN SCHEMA\", \"TABLE|SEQUENCE\");\n> \t/* ALTER PUBLICATION <name> DROP */\n> \telse if (Matches(\"ALTER\", \"PUBLICATION\", MatchAny, \"DROP\"))\n> -\t\tCOMPLETE_WITH(\"ALL TABLES IN SCHEMA\", \"TABLE\");\n> +\t\tCOMPLETE_WITH(\"ALL TABLES IN SCHEMA\", \"TABLE|SEQUENCE\");\n> \t/* ALTER PUBLICATION <name> SET */\n> \telse if (Matches(\"ALTER\", \"PUBLICATION\", MatchAny, \"SET\"))\n> -\t\tCOMPLETE_WITH(\"(\", \"ALL TABLES IN SCHEMA\", \"TABLE\");\n> +\t\tCOMPLETE_WITH(\"(\", \"ALL TABLES IN SCHEMA\", \"TABLE|SEQUENCE\");\n\nI think you should also add \"ALL SEQUENCES IN SCHEMA\" to these lists.\n\n\n> \telse if (Matches(\"ALTER\", \"PUBLICATION\", MatchAny, \"ADD|DROP|SET\", \"ALL\", \"TABLES\", \"IN\", \"SCHEMA\"))\n\n... and perhaps make this \"ALL\", \"TABLES|SEQUENCES\", \"IN\", \"SCHEMA\".\n\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Nunca confiaré en un traidor. 
Ni siquiera si el traidor lo he creado yo\"\n(Barón Vladimir Harkonnen)\n\n\n", "msg_date": "Wed, 15 Dec 2021 13:42:55 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 12/15/21 17:42, Alvaro Herrera wrote:\n> Looking at 0003,\n> \n> On 2021-Dec-14, Tomas Vondra wrote:\n> \n>> diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml\n>> index bb4ef5e5e22..4d166ad3f9c 100644\n>> --- a/doc/src/sgml/ref/alter_publication.sgml\n>> +++ b/doc/src/sgml/ref/alter_publication.sgml\n>> @@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class=\"parameter\">name</replaceable> RENAME TO <r\n>> <phrase>where <replaceable class=\"parameter\">publication_object</replaceable> is one of:</phrase>\n>> \n>> TABLE [ ONLY ] <replaceable class=\"parameter\">table_name</replaceable> [ * ] [, ... ]\n>> + SEQUENCE <replaceable class=\"parameter\">sequence_name</replaceable> [ * ] [, ... ]\n>> ALL TABLES IN SCHEMA { <replaceable class=\"parameter\">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]\n>> + ALL SEQUENCE IN SCHEMA { <replaceable class=\"parameter\">schema_name</replaceable> | CURRENT_SCHEMA } [, ... 
]\n> \n> Note that this says ALL SEQUENCE; I think it should be ALL SEQUENCES.\n> \n>> diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\n>> index 3d4dd43e47b..f037c17985b 100644\n>> --- a/src/backend/parser/gram.y\n>> +++ b/src/backend/parser/gram.y\n>> @@ -9762,6 +9762,26 @@ PublicationObjSpec:\n> ...\n>> +\t\t\t| ALL SEQUENCE IN_P SCHEMA ColId\n>> +\t\t\t\t{\n>> +\t\t\t\t\t$$ = makeNode(PublicationObjSpec);\n>> +\t\t\t\t\t$$->pubobjtype = PUBLICATIONOBJ_SEQUENCE_IN_SCHEMA;\n>> +\t\t\t\t\t$$->name = $5;\n>> +\t\t\t\t\t$$->location = @5;\n>> +\t\t\t\t}\n>> +\t\t\t| ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA\n>> +\t\t\t\t{\n>> +\t\t\t\t\t$$ = makeNode(PublicationObjSpec);\n>> +\t\t\t\t\t$$->pubobjtype = PUBLICATIONOBJ_SEQUENCE_IN_CUR_SCHEMA;\n>> +\t\t\t\t\t$$->location = @5;\n>> +\t\t\t\t}\n> \n> And here you have ALL SEQUENCE in one spot and ALL SEQUENCES in the\n> other.\n> \n> BTW I think these enum values should use the plural too,\n> PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA (not SEQUENCE). 
I suppose you\n> copied from PUBLICATIONOBJ_TABLE_IN_CUR_SCHEMA, but that too seems to be\n> a mistake: should be PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA.\n> \n> \n>> @@ -10097,6 +10117,12 @@ UnlistenStmt:\n>> \t\t\t\t}\n>> \t\t;\n>> \n>> +/*\n>> + * FIXME\n>> + *\n>> + * opt_publication_for_sequences and publication_for_sequences should be\n>> + * copies for sequences\n>> + */\n> \n> Not sure if this FIXME is relevant or should just be removed.\n> \n>> @@ -10105,6 +10131,12 @@ UnlistenStmt:\n>> *\t\tBEGIN / COMMIT / ROLLBACK\n>> *\t\t(also older versions END / ABORT)\n>> *\n>> + * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]\n>> + *\n>> + * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]\n>> + *\n>> + * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]\n>> + *\n> \n> This comment addition seems misplaced?\n> \n>> diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c\n>> index 2f412ca3db3..e30bf7b1b55 100644\n>> --- a/src/bin/psql/tab-complete.c\n>> +++ b/src/bin/psql/tab-complete.c\n>> @@ -1647,13 +1647,13 @@ psql_completion(const char *text, int start, int end)\n>> \t\tCOMPLETE_WITH(\"ADD\", \"DROP\", \"OWNER TO\", \"RENAME TO\", \"SET\");\n>> \t/* ALTER PUBLICATION <name> ADD */\n>> \telse if (Matches(\"ALTER\", \"PUBLICATION\", MatchAny, \"ADD\"))\n>> -\t\tCOMPLETE_WITH(\"ALL TABLES IN SCHEMA\", \"TABLE\");\n>> +\t\tCOMPLETE_WITH(\"ALL TABLES IN SCHEMA\", \"TABLE|SEQUENCE\");\n>> \t/* ALTER PUBLICATION <name> DROP */\n>> \telse if (Matches(\"ALTER\", \"PUBLICATION\", MatchAny, \"DROP\"))\n>> -\t\tCOMPLETE_WITH(\"ALL TABLES IN SCHEMA\", \"TABLE\");\n>> +\t\tCOMPLETE_WITH(\"ALL TABLES IN SCHEMA\", \"TABLE|SEQUENCE\");\n>> \t/* ALTER PUBLICATION <name> SET */\n>> \telse if (Matches(\"ALTER\", \"PUBLICATION\", MatchAny, \"SET\"))\n>> -\t\tCOMPLETE_WITH(\"(\", \"ALL TABLES IN SCHEMA\", \"TABLE\");\n>> +\t\tCOMPLETE_WITH(\"(\", \"ALL TABLES IN SCHEMA\", \"TABLE|SEQUENCE\");\n> \n> I think you should also add \"ALL 
SEQUENCES IN SCHEMA\" to these lists.\n> \n> \n>> \telse if (Matches(\"ALTER\", \"PUBLICATION\", MatchAny, \"ADD|DROP|SET\", \"ALL\", \"TABLES\", \"IN\", \"SCHEMA\"))\n> \n> ... and perhaps make this \"ALL\", \"TABLES|SEQUENCES\", \"IN\", \"SCHEMA\".\n> \n\nThanks for the review. I'm aware 0003 is still incomplete and subject to\nchange - it's certainly not meant for commit yet. The current 0003 patch\nis sufficient for testing the infrastructure, but we need to figure out\nhow to make it easier to use, what to do with implicit sequences and\nsimilar things. Peter had some ideas in [1].\n\n[1]\nhttps://www.postgresql.org/message-id/359bf6d0-413d-292a-4305-e99eeafead39%40enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 15 Dec 2021 17:56:04 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Wed, Dec 15, 2021 at 7:21 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 12/15/21 14:20, Amit Kapila wrote:\n> > On Tue, Dec 14, 2021 at 7:02 AM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> Hi,\n> >>\n> >> here's an updated version of the patches, dealing with almost all of the\n> >> issues (at least in the 0001 and 0002 parts). The main changes:\n> >>\n> >> 1) I've removed the 'created' flag from fill_seq_with_data, as\n> >> discussed. I don't think it's needed by any of the parts (not even 0003,\n> >> AFAICS). 
We still need it in xl_seq_rec, though.\n> >>\n> >> 2) GetCurrentTransactionId() added to sequence.c are called only with\n> >> wal_level=logical, to minimize the overhead.\n> >>\n> >>\n> >> There's still one remaining problem, that I already explained in [1].\n> >> The problem is that with this:\n> >>\n> >> BEGIN;\n> >> SELECT nextval('s') FROM generate_series(1,100);\n> >> ROLLBACK;\n> >>\n> >>\n> >> The root cause is that pg_current_wal_lsn() uses the LogwrtResult.Write,\n> >> which is updated by XLogFlush() - but only in RecordTransactionCommit.\n> >> Which makes sense, because only the committed stuff is \"visible\".\n> >>\n> >> But the non-transactional behavior of sequence decoding disagrees with\n> >> this, because now some of the changes from aborted transactions may be\n> >> replicated. Which means the wait_for_catchup() ends up not waiting for\n> >> the sequence change to be replicated. This is an issue for tests in\n> >> patch 0003, at least.\n> >>\n> >> My concern is this actually affects other places waiting for things\n> >> getting replicated :-/\n> >>\n> >\n> > By any chance, will this impact synchronous replication as well which\n> > waits for commits to be replicated?\n> >\n>\n> Physical or logical replication?\n>\n\nlogical replication.\n\n> Physical is certainly not replicated.\n>\n> For logical replication, it's more complicated.\n>\n> > How is this patch dealing with prepared transaction case where at\n> > prepare we will send transactional changes and then later if rollback\n> > prepared happens then the publisher will rollback changes related to\n> > new relfilenode but subscriber would have already replayed the updated\n> > seqval change which won't be rolled back?\n> >\n>\n> I'm not sure what exact scenario you are describing, but in general we\n> don't rollback sequence changes anyway, so this should not cause any\n> divergence between the publisher and subscriber.\n>\n> Or are you talking about something like ALTER SEQUENCE? 
I think that\n> should trigger the transactional behavior for the new relfilenode, so\n> the subscriber won't see that changes.\n>\n\nYeah, something like Alter Sequence and nextval combination. I see\nthat it will be handled because of the transactional behavior for the\nnew relfilenode as for applying each sequence change, a new\nrelfilenode is created. I think on apply side, the patch always\ncreates a new relfilenode irrespective of whether the sequence message\nis transactional or not. Do we need to do that for the\nnon-transactional messages as well?\n\n--\nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 16 Dec 2021 17:29:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\n\nOn 12/16/21 12:59, Amit Kapila wrote:\n> On Wed, Dec 15, 2021 at 7:21 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 12/15/21 14:20, Amit Kapila wrote:\n>>> On Tue, Dec 14, 2021 at 7:02 AM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>>> Hi,\n>>>>\n>>>> here's an updated version of the patches, dealing with almost all of the\n>>>> issues (at least in the 0001 and 0002 parts). The main changes:\n>>>>\n>>>> 1) I've removed the 'created' flag from fill_seq_with_data, as\n>>>> discussed. I don't think it's needed by any of the parts (not even 0003,\n>>>> AFAICS). 
We still need it in xl_seq_rec, though.\n>>>>\n>>>> 2) GetCurrentTransactionId() added to sequence.c are called only with\n>>>> wal_level=logical, to minimize the overhead.\n>>>>\n>>>>\n>>>> There's still one remaining problem, that I already explained in [1].\n>>>> The problem is that with this:\n>>>>\n>>>> BEGIN;\n>>>> SELECT nextval('s') FROM generate_series(1,100);\n>>>> ROLLBACK;\n>>>>\n>>>>\n>>>> The root cause is that pg_current_wal_lsn() uses the LogwrtResult.Write,\n>>>> which is updated by XLogFlush() - but only in RecordTransactionCommit.\n>>>> Which makes sense, because only the committed stuff is \"visible\".\n>>>>\n>>>> But the non-transactional behavior of sequence decoding disagrees with\n>>>> this, because now some of the changes from aborted transactions may be\n>>>> replicated. Which means the wait_for_catchup() ends up not waiting for\n>>>> the sequence change to be replicated. This is an issue for tests in\n>>>> patch 0003, at least.\n>>>>\n>>>> My concern is this actually affects other places waiting for things\n>>>> getting replicated :-/\n>>>>\n>>>\n>>> By any chance, will this impact synchronous replication as well which\n>>> waits for commits to be replicated?\n>>>\n>>\n>> Physical or logical replication?\n>>\n> \n> logical replication.\n> \n>> Physical is certainly not replicated.\n>>\n>> For logical replication, it's more complicated.\n>>\n>>> How is this patch dealing with prepared transaction case where at\n>>> prepare we will send transactional changes and then later if rollback\n>>> prepared happens then the publisher will rollback changes related to\n>>> new relfilenode but subscriber would have already replayed the updated\n>>> seqval change which won't be rolled back?\n>>>\n>>\n>> I'm not sure what exact scenario you are describing, but in general we\n>> don't rollback sequence changes anyway, so this should not cause any\n>> divergence between the publisher and subscriber.\n>>\n>> Or are you talking about something like ALTER 
SEQUENCE? I think that\n>> should trigger the transactional behavior for the new relfilenode, so\n>> the subscriber won't see that changes.\n>>\n> \n> Yeah, something like Alter Sequence and nextval combination. I see\n> that it will be handled because of the transactional behavior for the\n> new relfilenode as for applying each sequence change, a new\n> relfilenode is created.\n\nRight.\n\n> I think on apply side, the patch always creates a new relfilenode\n> irrespective of whether the sequence message is transactional or not.\n> Do we need to do that for the non-transactional messages as well?\n> \n\nGood question. I don't think that's necessary, I'll see if we can simply\nupdate the tuple (mostly just like redo).\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 16 Dec 2021 15:54:30 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 12/15/21 14:51, Tomas Vondra wrote:\n> On 12/15/21 14:20, Amit Kapila wrote:\n>> On Tue, Dec 14, 2021 at 7:02 AM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>> Hi,\n>>>\n>>> here's an updated version of the patches, dealing with almost all of the\n>>> issues (at least in the 0001 and 0002 parts). The main changes:\n>>>\n>>> 1) I've removed the 'created' flag from fill_seq_with_data, as\n>>> discussed. I don't think it's needed by any of the parts (not even 0003,\n>>> AFAICS). 
We still need it in xl_seq_rec, though.\n>>>\n>>> 2) GetCurrentTransactionId() added to sequence.c are called only with\n>>> wal_level=logical, to minimize the overhead.\n>>>\n>>>\n>>> There's still one remaining problem, that I already explained in [1].\n>>> The problem is that with this:\n>>>\n>>> BEGIN;\n>>> SELECT nextval('s') FROM generate_series(1,100);\n>>> ROLLBACK;\n>>>\n>>>\n>>> The root cause is that pg_current_wal_lsn() uses the LogwrtResult.Write,\n>>> which is updated by XLogFlush() - but only in RecordTransactionCommit.\n>>> Which makes sense, because only the committed stuff is \"visible\".\n>>>\n>>> But the non-transactional behavior of sequence decoding disagrees with\n>>> this, because now some of the changes from aborted transactions may be\n>>> replicated. Which means the wait_for_catchup() ends up not waiting for\n>>> the sequence change to be replicated. This is an issue for tests in\n>>> patch 0003, at least.\n>>>\n>>> My concern is this actually affects other places waiting for things\n>>> getting replicated :-/\n>>>\n>>\n>> By any chance, will this impact synchronous replication as well which\n>> waits for commits to be replicated?\n>>\n> \n> Physical or logical replication? Physical is certainly not replicated.\n> \n\nActually, I take that back. It does affect physical (sync) replication \njust as well, and I think it might be considered a data loss issue. I \nstarted a new thread to discuss that, so that it's not buried in this \nthread about logical replication.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 18 Dec 2021 02:56:26 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "Hi,\n\nHere's an updated version of the patch series. The primary change is \ntweaking the WAL-logging of sequences modified per [1]. 
This changes \ntest output in test_decoding and built-in replication patches, and to \nmake it clearer I left the changes in separate patches.\n\nAssuming the WAL logging changes are acceptable, that resolves the data \nloss issue.\n\nI'm wondering what to do about changes with is_called=false, i.e. \nchanges generated by ALTER SEQUENCE etc. The current patch does decode \nthem and passes them to the output plugin, but I'm starting to think \nthat may not be the right behavior - if we haven't generated any data \nfrom the sequence, there's no point in replicating that, I think.\n\n\nregards\n\n\n[1] \nhttps://www.postgresql.org/message-id/712cad46-a9c8-1389-aef8-faf0203c9be9%40enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 22 Dec 2021 16:40:17 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "Hi,\n\nHere's a rebased version of the patch series. I decided to go back to \nthe version from 2021/12/14, which does not include the changes to WAL \nlogging. So this has the same issue with nextval() now waiting for WAL \nflush (and/or sync replica), as demonstrated by occasional failures of \nthe TAP test in 0003, but my reasoning is this:\n\n1) This is a preexisting issue, affecting sequences in general. It's not \nrelated to this patch, really. The fix will be independent of this, and \nthere's little reason to block the decoding until that happens.\n\n2) We've discussed a couple ways to fix this in [1] - logging individual \nsequence increments, flushing everything right away, waiting for page \nLSN, etc. My opinion is we'll use some form of waiting for page LSN, but \nno matter what fix we use it'll have almost no impact on this patch. 
\nThere might be minor changes to the test, but that's about it.\n\n3) There are ways to stabilize the tests even without that - it's enough \nto generate a little bit of WAL / get XID, not just nextval(). But \nthat's only for 0003 which I don't intend to commit yet, and 0001/0002 \nhave no problems at all.\n\n\nregards\n\n[1] \nhttps://www.postgresql.org/message-id/712cad46-a9c8-1389-aef8-faf0203c9be9@enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 26 Jan 2022 03:16:57 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 26.01.22 03:16, Tomas Vondra wrote:\n> Here's a rebased version of the patch series. I decided to go back to \n> the version from 2021/12/14, which does not include the changes to WAL \n> logging.\n\nI notice that test_decoding still has skip-sequences enabled by default, \n\"for backward compatibility\". I think we had concluded in previous \ndiscussions that we don't need that. I would remove the option altogether.\n\n\n\n", "msg_date": "Wed, 26 Jan 2022 11:09:46 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "I would not remove it altogether, there is plenty of consumers of this extension's output in the wild (even if I think it's unfortunate) that might not be interested in sequences, but changing it to off by default certainly makes sense.\n\n--\nPetr Jelinek\n\n> On 26. 1. 2022, at 11:09, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 26.01.22 03:16, Tomas Vondra wrote:\n>> Here's a rebased version of the patch series. 
I decided to go back to the version from 2021/12/14, which does not include the changes to WAL logging.\n> \n> I notice that test_decoding still has skip-sequences enabled by default, \"for backward compatibility\". I think we had concluded in previous discussions that we don't need that. I would remove the option altogether.\n> \n\n\n\n", "msg_date": "Wed, 26 Jan 2022 14:01:25 +0100", "msg_from": "Petr Jelinek <petr.jelinek@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 1/26/22 14:01, Petr Jelinek wrote:\n> I would not remove it altogether, there is plenty of consumers of \n> this extension's output in the wild (even if I think it's\n> unfortunate) that might not be interested in sequences, but changing\n> it to off by default certainly makes sense.\n\nIndeed. Attached is an updated patch series, with 0003 switching it to \nfalse by default (and fixing the fallout). For commit I'll merge that \ninto 0002, of course.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 27 Jan 2022 00:32:11 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 27.01.22 00:32, Tomas Vondra wrote:\n> \n> On 1/26/22 14:01, Petr Jelinek wrote:\n>> I would not remove it altogether, there is plenty of consumers of this \n>> extension's output in the wild (even if I think it's\n>> unfortunate) that might not be interested in sequences, but changing\n>> it to off by default certainly makes sense.\n> \n> Indeed. Attached is an updated patch series, with 0003 switching it to \n> false by default (and fixing the fallout). For commit I'll merge that \n> into 0002, of course.\n\n(could be done in separate patches too IMO)\n\ntest_decoding.c uses %zu several times for int64 values, which is not \ncorrect. 
This should use INT64_FORMAT or %lld with a cast to (long long \nint).\n\nI don't know, maybe it's worth commenting somewhere how the expected \nvalues in contrib/test_decoding/expected/sequence.out come about. \nOtherwise, it's quite a puzzle to determine where the 3830 comes from, \nfor example.\n\nI think the skip-sequences options is better turned around into a \npositive name like include-sequences. There is a mix of \"skip\" and \n\"include\" options in test_decoding, but there are more \"include\" ones \nright now.\n\nIn the 0003, a few files have been missed, apparently, so the tests \ndon't fully pass. See attached patch.\n\nI haven't looked fully through the 0004 patch, but I notice that there \nwas a confusing mix of FOR ALL SEQUENCE and FOR ALL SEQUENCES. I have \ncorrected that in the other attached patch.\n\nOther than the mentioned cosmetic issues, I think 0001-0003 are ready to go.", "msg_date": "Thu, 27 Jan 2022 17:05:59 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 1/27/22 17:05, Peter Eisentraut wrote:\n> On 27.01.22 00:32, Tomas Vondra wrote:\n>>\n>> On 1/26/22 14:01, Petr Jelinek wrote:\n>>> I would not remove it altogether, there is plenty of consumers of \n>>> this extension's output in the wild (even if I think it's\n>>> unfortunate) that might not be interested in sequences, but changing\n>>> it to off by default certainly makes sense.\n>>\n>> Indeed. Attached is an updated patch series, with 0003 switching it to \n>> false by default (and fixing the fallout). For commit I'll merge that \n>> into 0002, of course.\n> \n> (could be done in separate patches too IMO)\n> \n> test_decoding.c uses %zu several times for int64 values, which is not \n> correct.  This should use INT64_FORMAT or %lld with a cast to (long long \n> int).\n> \n\nGood point - INT64_FORMAT seems better. 
Also, the formatting was not \nquite right (missing space after the colon), so I fixed that too.\n\n> I don't know, maybe it's worth commenting somewhere how the expected \n> values in contrib/test_decoding/expected/sequence.out come about. \n> Otherwise, it's quite a puzzle to determine where the 3830 comes from, \n> for example.\n> \n\nYeah, that's probably a good idea - I had to think about the expected \noutput repeatedly, so an explanation would help. I'll do that in the \nnext version of the patch.\n\n> I think the skip-sequences options is better turned around into a \n> positive name like include-sequences.  There is a mix of "skip" and \n> "include" options in test_decoding, but there are more "include" ones \n> right now.\n> \n\nHmmm. I don't see much difference between skip-sequences and \ninclude-sequences, but I don't feel very strongly about it either so I \nswitched that to include-sequences (which defaults to true).\n\n> In the 0003, a few files have been missed, apparently, so the tests \n> don't fully pass.  See attached patch.\n> \n\nD'oh! I'd swear I've fixed those too.\n\n> I haven't looked fully through the 0004 patch, but I notice that there \n> was a confusing mix of FOR ALL SEQUENCE and FOR ALL SEQUENCES.  I have \n> corrected that in the other attached patch.\n> \n> Other than the mentioned cosmetic issues, I think 0001-0003 are ready to go.\n\nThanks. 
I think we'll have time to look at 0004 more closely once the \ninitial parts get committed.\n\n\nAttached is a rebased/squashed version of the patches, with all the \nfixes discussed here.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 28 Jan 2022 01:25:24 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "I've polished & pushed the first part adding sequence decoding \ninfrastructure etc. Attached are the two remaining parts.\n\nI plan to wait a day or two and then push the test_decoding part. The \nlast part (for built-in replication) will need more work and maybe \nrethinking the grammar etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 10 Feb 2022 19:17:20 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 2/10/22 19:17, Tomas Vondra wrote:\n> I've polished & pushed the first part adding sequence decoding\n> infrastructure etc. Attached are the two remaining parts.\n> \n> I plan to wait a day or two and then push the test_decoding part. The\n> last part (for built-in replication) will need more work and maybe\n> rethinking the grammar etc.\n> \n\nI've pushed the second part, adding sequences to test_decoding.\n\nHere's the remaining part, rebased, with a small tweak in the TAP test\nto eliminate the issue with not waiting for sequence increments. 
I've\nkept the tweak in a separate patch, so that we can throw it away easily\nif we happen to resolve the issue.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 12 Feb 2022 01:34:33 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 2/12/22 01:34, Tomas Vondra wrote:\n> On 2/10/22 19:17, Tomas Vondra wrote:\n>> I've polished & pushed the first part adding sequence decoding\n>> infrastructure etc. Attached are the two remaining parts.\n>>\n>> I plan to wait a day or two and then push the test_decoding part. The\n>> last part (for built-in replication) will need more work and maybe\n>> rethinking the grammar etc.\n>>\n> \n> I've pushed the second part, adding sequences to test_decoding.\n> \n> Here's the remaining part, rebased, with a small tweak in the TAP test\n> to eliminate the issue with not waiting for sequence increments. I've\n> kept the tweak in a separate patch, so that we can throw it away easily\n> if we happen to resolve the issue.\n> \n\nHmm, cfbot was not happy about this, so here's a version fixing the\nelog() format issue reported by CirrusCI/mingw by ditching the log\nmessage. It was useful for debugging, but otherwise just noise.\n\nI'm a bit puzzled about the macOS failure, though. It seems as if the\ntest does not wait for the subscriber long enough, but this is with the\ntweaked test variant, so it should not have the rollback issue. And I\nhaven't seen this failure on any other machine.\n\nRegarding adding decoding of sequences to the built-in replication,\nthere is a couple questions that we need to discuss first before\ncleaning up the code etc. Most of them are related to syntax and\nhandling of various sequence variants.\n\n\n1) Firstly, what about implicit sequences. 
That is, if you create a\ntable with SERIAL or BIGSERIAL column, that'll have a sequence attached.\nShould those sequences be added to the publication when the table gets\nadded? Or should we require adding them separately? Or should that be\nspecified in the grammar, somehow? Should we have INCLUDING SEQUENCES\nfor ALTER PUBLICATION ... ADD TABLE ...?\n\nI think we shouldn't require replicating the sequence, because who knows\nwhat the schema is on the subscriber? We want to allow differences, so\nmaybe the sequence is not there. I'd start with just adding them\nseparately, because that just seems simpler, but maybe there are good\nreasons to support adding them in ADD TABLE.\n\n\n2) Should it be possible to add sequences that are also associated with\na serial column, without the table being replicated too? I'd say yes, if\npeople want to do that - I don't think it can cause any issues, and it's\npossible to just use sequence directly for non-serial columns anyway.\nWhich is the same thing, but we can't detect that.\n\n\n3) What about sequences for UNLOGGED tables? At the moment we don't\nallow sequences to be UNLOGGED (Peter revived his patch [1], but that's\nnot committed yet). Again, I'd say it's up to the user to decide which\nsequences are replicated - it's similar to (2).\n\n\n4) I wonder if we actually want FOR ALL SEQUENCES. On the one hand it'd\nbe symmetrical with FOR ALL TABLES, which is the other object type we\ncan replicate. 
So it'd seem reasonable to handle them in a similar way.\nBut it's causing some shift/reduce error in the grammar, so it'll need\nsome changes.\n\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/8da92c1f-9117-41bc-731b-ce1477a77d69@enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 12 Feb 2022 20:58:53 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 2/12/22 20:58, Tomas Vondra wrote:\n> On 2/12/22 01:34, Tomas Vondra wrote:\n>> On 2/10/22 19:17, Tomas Vondra wrote:\n>>> I've polished & pushed the first part adding sequence decoding\n>>> infrastructure etc. Attached are the two remaining parts.\n>>>\n>>> I plan to wait a day or two and then push the test_decoding part. The\n>>> last part (for built-in replication) will need more work and maybe\n>>> rethinking the grammar etc.\n>>>\n>>\n>> I've pushed the second part, adding sequences to test_decoding.\n>>\n>> Here's the remaining part, rebased, with a small tweak in the TAP test\n>> to eliminate the issue with not waiting for sequence increments. I've\n>> kept the tweak in a separate patch, so that we can throw it away easily\n>> if we happen to resolve the issue.\n>>\n> \n> Hmm, cfbot was not happy about this, so here's a version fixing the\n> elog() format issue reported by CirrusCI/mingw by ditching the log\n> message. It was useful for debugging, but otherwise just noise.\n> \n\nThere was another elog() making mingw unhappy, so here's a fix for that.\n\nThis should also fix an issue on the macOS machine. This is a thinko in\nthe tests, because wait_for_catchup() may not wait for all the sequence\nincrements after a rollback. 
The default mode is \"write\" which uses\npg_current_wal_lsn(), and that may be a bit stale after a rollback.\nDoing a simple insert after the rollback fixes this (using other LSN,\nlike pg_current_wal_insert_lsn() would work too, but it'd cause long\nwaits in the test).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 13 Feb 2022 14:10:25 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 13.02.22 14:10, Tomas Vondra wrote:\n> Here's the remaining part, rebased, with a small tweak in the TAP test\n> to eliminate the issue with not waiting for sequence increments. I've\n> kept the tweak in a separate patch, so that we can throw it away easily\n> if we happen to resolve the issue.\n\nThis basically looks fine to me. You have identified a few XXX and \nFIXME spots that should be addressed.\n\nHere are a few more comments:\n\n* general\n\nHandling of owned sequences, as previously discussed. This would \nprobably be a localized change somewhere in get_rel_sync_entry(), so it \ndoesn't affect the overall structure of the patch.\n\npg_dump support is missing.\n\nSome psql \\dxxx support should probably be there. Check where existing \npublication-table relationships are displayed.\n\n* src/backend/catalog/system_views.sql\n\n+ LATERAL pg_get_publication_sequences(P.pubname) GPT,\n\nThe GPT presumably stood for \"get publication tables\", so should be changed.\n\n(There might be a few more copy-and-paste occasions like this around. I \nhave not written down all of them.)\n\n* src/backend/commands/publicationcmds.c\n\nThis adds a new publication option called \"sequence\". 
I would have \nexpected it to be named \"sequences\", but that's just my opinion.\n\nBut in any case, the option is not documented at all.\n\nSome of the new functions added in this file are almost exact duplicates \nof the analogous functions for tables. For example, \nPublicationAddSequences()/PublicationDropSequences() are almost\nexactly the same as PublicationAddTables()/PublicationDropTables(). \nThis could be handled with less duplication by just adding an ObjectType \nargument to the existing functions.\n\n* src/backend/commands/sequence.c\n\nCould use some refactoring of ResetSequence()/ResetSequence2(). Maybe \ncall the latter ResetSequenceData() and have the former call it internally.\n\n* src/backend/commands/subscriptioncmds.c\n\nAlso lots of duplication between tables and sequences in this file.\n\n* src/backend/replication/logical/tablesync.c\n\nThe comment says it needs sequence support, but there appears to be \nsupport for initial sequence syncing. What exactly is missing here?\n\n* src/test/subscription/t/028_sequences.pl\n\nChange to use done_testing() (see 549ec201d6132b7c7ee11ee90a4e02119259ba5b).\n\n\n", "msg_date": "Tue, 15 Feb 2022 10:00:45 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 2/15/22 10:00, Peter Eisentraut wrote:\n> On 13.02.22 14:10, Tomas Vondra wrote:\n>> Here's the remaining part, rebased, with a small tweak in the TAP test\n>> to eliminate the issue with not waiting for sequence increments. I've\n>> kept the tweak in a separate patch, so that we can throw it away easily\n>> if we happen to resolve the issue.\n> \n> This basically looks fine to me.  You have identified a few XXX and\n> FIXME spots that should be addressed.\n> \n> Here are a few more comments:\n> \n> * general\n> \n> Handling of owned sequences, as previously discussed.  
This would\n> probably be a localized change somewhere in get_rel_sync_entry(), so it\n> doesn't affect the overall structure of the patch.\n> \n\nSo you're suggesting not to track owned sequences in pg_publication_rel\nexplicitly, and handle them dynamically in output plugin? So when\ncalling get_rel_sync_entry on the sequence, we'd check if it's owned by\na table that is replicated.\n\nWe'd want a way to enable/disable this for each publication, but that\nmakes the lookups more complicated. Also, we'd probably need the same\nlogic elsewhere (e.g. in psql, when listing sequences in a publication).\n\nI'm not sure we want this complexity, maybe we should simply deal with\nthis in the ALTER PUBLICATION and track all sequences (owned or not) in\npg_publication_rel.\n\n> pg_dump support is missing.\n> \n\nWill fix.\n\n> Some psql \\dxxx support should probably be there.  Check where existing\n> publication-table relationships are displayed.\n> \n\nYeah, I noticed this while adding regression tests. 
Currently, \\dRp+\nshows something like this:\n\n Publication testpub_fortbl\n Owner | All tables | Inserts | Updates ...\n --------------------------+------------+---------+--------- ...\n regress_publication_user | f | t | t ...\n Tables:\n \"pub_test.testpub_nopk\"\n \"public.testpub_tbl1\"\n\nor\n\n Publication testpub5_forschema\n Owner | All tables | Inserts | Updates | ...\n --------------------------+------------+---------+---------+- ...\n regress_publication_user | f | t | t | ...\n Tables from schemas:\n \"CURRENT_SCHEMA\"\n \"public\"\n\nI wonder if we should copy the same block for sequences, so\n\n Publication testpub_fortbl\n Owner | All tables | Inserts | Updates ...\n --------------------------+------------+---------+--------- ...\n regress_publication_user | f | t | t ...\n Tables:\n \"pub_test.testpub_nopk\"\n \"public.testpub_tbl1\"\n Sequences:\n \"pub_test.sequence_s1\"\n \"public.sequence_s2\"\n\nAnd same for \"tables from schemas\" etc.\n\n\n> * src/backend/catalog/system_views.sql\n> \n> +         LATERAL pg_get_publication_sequences(P.pubname) GPT,\n> \n> The GPT presumably stood for \"get publication tables\", so should be\n> changed.\n> \n> (There might be a few more copy-and-paste occasions like this around.  I\n> have not written down all of them.)\n> \n\nWill fix.\n\n> * src/backend/commands/publicationcmds.c\n> \n> This adds a new publication option called \"sequence\".  I would have\n> expected it to be named \"sequences\", but that's just my opinion.\n> \n> But in any case, the option is not documented at all.\n> \n> Some of the new functions added in this file are almost exact duplicates\n> of the analogous functions for tables.  For example,\n> PublicationAddSequences()/PublicationDropSequences() are almost\n> exactly the same as PublicationAddTables()/PublicationDropTables(). 
This\n> could be handled with less duplication by just adding an ObjectType\n> argument to the existing functions.\n> \n\nYes, I noticed that too, and I'll review this later, after making sure\nthe behavior is correct.\n\n> * src/backend/commands/sequence.c\n> \n> Could use some refactoring of ResetSequence()/ResetSequence2().  Maybe\n> call the latter ResetSequenceData() and have the former call it internally.\n> \n\nWill check.\n\n> * src/backend/commands/subscriptioncmds.c\n> \n> Also lots of duplication between tables and sequences in this file.\n> \n\nSame as the case above.\n\n> * src/backend/replication/logical/tablesync.c\n> \n> The comment says it needs sequence support, but there appears to be\n> support for initial sequence syncing.  What exactly is missing here?\n> \n\nI think that's just obsolete comment.\n\n> * src/test/subscription/t/028_sequences.pl\n> \n> Change to use done_testing() (see\n> 549ec201d6132b7c7ee11ee90a4e02119259ba5b).\n\nWill fix.\n\n\nDo we need to handle both FOR ALL TABLES and FOR ALL SEQUENCES at the\nsame time? That seems like a reasonable thing people might want.\n\nThe patch probably also needs to modify pg_publication_namespace to\ntrack whether the schema is FOR TABLES IN SCHEMA or FOR SEQUENCES.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 19 Feb 2022 03:18:48 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Sat, Feb 19, 2022 at 7:48 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Do we need to handle both FOR ALL TABLES and FOR ALL SEQUENCES at the\n> same time? That seems like a reasonable thing people might want.\n>\n\n+1. That seems reasonable to me as well. 
I think the same will apply\nto FOR ALL TABLES IN SCHEMA and FOR ALL SEQUENCES IN SCHEMA.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 19 Feb 2022 11:03:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "Hi,\n\nHere's an updated version of the patch, fixing most of the issues from\nreviews so far. There's still a couple FIXME comments, but I think those\nare minor and/or straightforward to deal with.\n\nThe main improvements:\n\n1) Elimination of a lot of redundant code - one function handling\ntables, and an almost exact copy handling sequences. Now a single\nfunction handles both, possibly with \"sequences\" flag to tweak the behavior.\n\n2) A couple other functions gained \"sequences\" flag, determining which\nobjects are \"interesting\". For example PublicationAddSchemas needs to\nknow whether it's FOR ALL SEQUENCES or FOR ALL TABLES IN SCHEMA. I don't\nthink we can just use relkind here easily, because for tables we need to\nhandle various types of tables (regular, partitioned, ...).\n\n3) I also renamed a couple functions with \"tables\" in the name, which\nare now used for sequences too. So for example OpenTablesList() is\nrenamed to OpenRelationList() and so on.\n\n4) Addition of a number of regression tests to \"publication.sql\", which\nshowed a lot of issues, mostly related to not distinguishing tables and\nsequences when handling \"FOR ALL TABLES [IN SCHEMA]\" and \"FOR ALL\nSEQUENCES [IN SCHEMA]\".\n\n5) Proper tracking of FOR ALL [TABLES|SEQUENCES] IN SCHEMA in a catalog.\nThe pg_publication_namespace gained a pnsequences flag, which determines\nwhich case it is. So for example if you do\n\n ALTER PUBLICATION p ADD ALL TABLES IN SCHEMA s;\n ALTER PUBLICATION p ADD ALL SEQUENCES IN SCHEMA s;\n\nthere will be two rows in the catalog, one with 't' and one with 'f' in\nthe new column. 
I'm not sure this is the best way to track this - maybe\nit'd be better to have two flags, and keep a single row. Or maybe we\nshould have an array of relkinds (but that has the issue with tables\nhaving multiple relkinds mentioned before). Multiple rows make it more\nconvenient to add/remove publication schemas - with a single row it'd be\nnecessary to either insert a new row or update an existing one when\nadding the schema, and similarly for dropping it.\n\nBut maybe there are reasons / precedent to design this differently ...\n\n6) I'm not entirely sure the object_address changes (handling of the\npnsequences flag) are correct.\n\n7) This updates psql to do roughly the same thing as for tables, so \\dRp\nnow list sequences added either directly or through schema, so you might\nget footer like this:\n\n \\dRp+ testpub_mix\n ...\n Tables:\n \"public.testpub_tbl1\"\n Tables from schemas:\n \"pub_test\"\n Sequences:\n \"public.testpub_seq1\"\n Sequences from schemas:\n \"pub_test\"\n\nMaybe it's a bit too verbose, though. It also addes \"All sequences\" and\n\"Sequences\" columns into the publication description, but I don't think\nthat can be done much differently.\n\nFWIW I had to switch the describeOneTableDetails() chunk dealing with\nsequences from printQuery() to printTable() in order to handle dynamic\nfooters.\n\nThere's also a change in \\dn+ because a schema may be included in one\npublication as \"FOR ALL SEQUENCES IN SCHEMA\" and in another publication\nwith \"FOR ALL TABLES IN SCHEMA\". 
So I modified the output to\n\n \\dn+ pub_test1\n ...\n Publications:\n \"testpub_schemas\" (sequences)\n \"testpub_schemas\" (tables)\n\nBut maybe it'd be better to aggregate this into a single line like\n\n \\dn+ pub_test1\n ...\n Publications:\n \"testpub_schemas\" (tables, sequences)\n\nOpinions?\n\n8) There's a shortcoming in the current grammar, because you can specify\neither\n\n CREATE PUBLICATION p FOR ALL TABLES;\n\nor\n\n CREATE PUBLICATION p FOR ALL SEQUENCES;\n\nbut it's not possible to do\n\n CREATE PUBLICATION p FOR ALL TABLES AND FOR ALL SEQUENCES;\n\nwhich seems like a fairly reasonable thing users might want to do.\n\nThe problem is that \"FOR ALL TABLES\" (and same for sequences) is\nhard-coded in the grammar, not defined as PublicationObjSpec. This also\nmeans you can't do\n\n ALTER PUBLICATION p ADD ALL TABLES;\n\nAFAICS there are two ways to fix this - adding the combinations into the\ndefinition of CreatePublicationStmt, or adding FOR ALL TABLES (and\nsequences) to PublicationObjSpec.\n\n9) Another grammar limitation is that we don't cross-check the relkind,\nso for example\n\n ALTER PUBLICATION p ADD TABLE sequence;\n\nmight actually work. Should be easy to fix, though.\n\n10) Added pg_dump support (including tests). 
I'll add more tests, to\ncheck more grammar combinations.\n\n11) I need to test more grammar combinations in the TAP test too, to\nverify the output plugin interprets the stuff correctly.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 23 Feb 2022 00:24:30 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 2/19/22 06:33, Amit Kapila wrote:\n> On Sat, Feb 19, 2022 at 7:48 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Do we need to handle both FOR ALL TABLES and FOR ALL SEQUENCES at the\n>> same time? That seems like a reasonable thing people might want.\n>>\n> \n> +1. That seems reasonable to me as well. I think the same will apply\n> to FOR ALL TABLES IN SCHEMA and FOR ALL SEQUENCES IN SCHEMA.\n> \n\nIt already works for \"IN SCHEMA\" because that's handled as a publication\nobject, but FOR ALL TABLES and FOR ALL SEQUENCES are defined directly in\nCreatePublicationStmt.\n\nWhich also means you can't do ALTER PUBLICATION and change it to FOR ALL\nTABLES. Which is a bit annoying, but OK. 
It's a bit weird FOR ALL TABLES\nis mentioned in docs for ALTER PUBLICATION as if it was supported.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 23 Feb 2022 00:39:00 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Wed, Feb 23, 2022 at 4:54 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> 7) This updates psql to do roughly the same thing as for tables, so \\dRp\n> now list sequences added either directly or through schema, so you might\n> get footer like this:\n>\n> \\dRp+ testpub_mix\n> ...\n> Tables:\n> \"public.testpub_tbl1\"\n> Tables from schemas:\n> \"pub_test\"\n> Sequences:\n> \"public.testpub_seq1\"\n> Sequences from schemas:\n> \"pub_test\"\n>\n> Maybe it's a bit too verbose, though. It also addes \"All sequences\" and\n> \"Sequences\" columns into the publication description, but I don't think\n> that can be done much differently.\n>\n> FWIW I had to switch the describeOneTableDetails() chunk dealing with\n> sequences from printQuery() to printTable() in order to handle dynamic\n> footers.\n>\n> There's also a change in \\dn+ because a schema may be included in one\n> publication as \"FOR ALL SEQUENCES IN SCHEMA\" and in another publication\n> with \"FOR ALL TABLES IN SCHEMA\". 
So I modified the output to\n>\n> \\dn+ pub_test1\n> ...\n> Publications:\n> \"testpub_schemas\" (sequences)\n> \"testpub_schemas\" (tables)\n>\n> But maybe it'd be better to aggregate this into a single line like\n>\n> \\dn+ pub_test1\n> ...\n> Publications:\n> \"testpub_schemas\" (tables, sequences)\n>\n> Opinions?\n>\n\nI think the second one (aggregated) might be slightly better as that\nwill lead to a lesser number of lines when there are multiple such\npublications but it should be okay if you and others prefer first.\n\n> 8) There's a shortcoming in the current grammar, because you can specify\n> either\n>\n> CREATE PUBLICATION p FOR ALL TABLES;\n>\n> or\n>\n> CREATE PUBLICATION p FOR ALL SEQUENCES;\n>\n> but it's not possible to do\n>\n> CREATE PUBLICATION p FOR ALL TABLES AND FOR ALL SEQUENCES;\n>\n> which seems like a fairly reasonable thing users might want to do.\n>\n\nIsn't it better to support this with a syntax as indicated by Tom in\none of his earlier emails on this topic [1]? IIUC, it would be as\nfollows:\n\nCREATE PUBLICATION p FOR ALL TABLES, ALL SEQUENCES;\n\n> The problem is that \"FOR ALL TABLES\" (and same for sequences) is\n> hard-coded in the grammar, not defined as PublicationObjSpec. 
This also\n> means you can't do\n>\n> ALTER PUBLICATION p ADD ALL TABLES;\n>\n> AFAICS there are two ways to fix this - adding the combinations into the\n> definition of CreatePublicationStmt, or adding FOR ALL TABLES (and\n> sequences) to PublicationObjSpec.\n>\n\nI can imagine that adding to PublicationObjSpec will look compatible\nwith existing code but maybe another way will also be okay.\n\n[1] - https://www.postgresql.org/message-id/877603.1629120678%40sss.pgh.pa.us\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 23 Feb 2022 16:40:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 2/23/22 12:10, Amit Kapila wrote:\n> On Wed, Feb 23, 2022 at 4:54 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> 7) This updates psql to do roughly the same thing as for tables, so \\dRp\n>> now list sequences added either directly or through schema, so you might\n>> get footer like this:\n>>\n>> \\dRp+ testpub_mix\n>> ...\n>> Tables:\n>> \"public.testpub_tbl1\"\n>> Tables from schemas:\n>> \"pub_test\"\n>> Sequences:\n>> \"public.testpub_seq1\"\n>> Sequences from schemas:\n>> \"pub_test\"\n>>\n>> Maybe it's a bit too verbose, though. It also addes \"All sequences\" and\n>> \"Sequences\" columns into the publication description, but I don't think\n>> that can be done much differently.\n>>\n>> FWIW I had to switch the describeOneTableDetails() chunk dealing with\n>> sequences from printQuery() to printTable() in order to handle dynamic\n>> footers.\n>>\n>> There's also a change in \\dn+ because a schema may be included in one\n>> publication as \"FOR ALL SEQUENCES IN SCHEMA\" and in another publication\n>> with \"FOR ALL TABLES IN SCHEMA\". 
So I modified the output to\n>>\n>> \\dn+ pub_test1\n>> ...\n>> Publications:\n>> \"testpub_schemas\" (sequences)\n>> \"testpub_schemas\" (tables)\n>>\n>> But maybe it'd be better to aggregate this into a single line like\n>>\n>> \\dn+ pub_test1\n>> ...\n>> Publications:\n>> \"testpub_schemas\" (tables, sequences)\n>>\n>> Opinions?\n>>\n> \n> I think the second one (aggregated) might be slightly better as that\n> will lead to a lesser number of lines when there are multiple such\n> publications but it should be okay if you and others prefer first.\n> \n\nMaybe, but I don't think it's very common to have that many schemas\nadded to the same publication. And it probably does not make much\ndifference whether you have 1000 or 2000 items in the list - either both\nare acceptable or unacceptable, I think.\n\nBut I plan to look at this a bit more.\n\n>> 8) There's a shortcoming in the current grammar, because you can specify\n>> either\n>>\n>> CREATE PUBLICATION p FOR ALL TABLES;\n>>\n>> or\n>>\n>> CREATE PUBLICATION p FOR ALL SEQUENCES;\n>>\n>> but it's not possible to do\n>>\n>> CREATE PUBLICATION p FOR ALL TABLES AND FOR ALL SEQUENCES;\n>>\n>> which seems like a fairly reasonable thing users might want to do.\n>>\n> \n> Isn't it better to support this with a syntax as indicated by Tom in\n> one of his earlier emails on this topic [1]? IIUC, it would be as\n> follows:\n> \n> CREATE PUBLICATION p FOR ALL TABLES, ALL SEQUENCES;\n> \n\nYes. That's mostly what I meant by adding this to PublicationObjSpec.\n\n>> The problem is that \"FOR ALL TABLES\" (and same for sequences) is\n>> hard-coded in the grammar, not defined as PublicationObjSpec. 
This also\n>> means you can't do\n>>\n>> ALTER PUBLICATION p ADD ALL TABLES;\n>>\n>> AFAICS there are two ways to fix this - adding the combinations into the\n>> definition of CreatePublicationStmt, or adding FOR ALL TABLES (and\n>> sequences) to PublicationObjSpec.\n>>\n> \n> I can imagine that adding to PublicationObjSpec will look compatible\n> with existing code but maybe another way will also be okay.\n> \n\nI think just hard-coding this into CreatePublicationStmt would work, but\nit'll be an issue once/if we start adding more options. I'm not sure if\nit makes sense to replicate other relkinds, but maybe DDL?\n\nI'll try tweaking PublicationObjSpec, and we'll see.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 23 Feb 2022 17:07:09 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Wed, Feb 23, 2022, at 1:07 PM, Tomas Vondra wrote:\n> Maybe, but I don't think it's very common to have that many schemas\n> added to the same publication. And it probably does not make much\n> difference whether you have 1000 or 2000 items in the list - either both\n> are acceptable or unacceptable, I think.\nWouldn't it confuse users? Hey, duplicate publication. How? Wait. Doh.\n\n> I think just hard-coding this into CreatePublicationStmt would work, but\n> it'll be an issue once/if we start adding more options. I'm not sure if\n> it makes sense to replicate other relkinds, but maybe DDL?\nMaterialized view? As you mentioned DDL, maybe we can use the CREATE\nPUBLICATION syntax to select which DDL commands we want to replicate.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Wed, 23 Feb 2022 14:33:34 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 2/23/22 18:33, Euler Taveira wrote:\n> On Wed, Feb 23, 2022, at 1:07 PM, Tomas Vondra wrote:\n>> Maybe, but I don't think it's very common to have that many\n>> schemas added to the same publication. And it probably does not\n>> make much difference whether you have 1000 or 2000 items in the\n>> list - either both are acceptable or unacceptable, I think.\n>\n> Wouldn't it confuse users? Hey, duplicate publication. How? Wait.\n> Doh.\n> \n\nI don't follow. Duplicate publications? 
Hard-coding\nit also means we can't ALTER the publication to be FOR ALL TABLES etc.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 23 Feb 2022 20:18:30 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Wed, Feb 23, 2022, at 4:18 PM, Tomas Vondra wrote:\n> On 2/23/22 18:33, Euler Taveira wrote:\n> > On Wed, Feb 23, 2022, at 1:07 PM, Tomas Vondra wrote:\n> >> Maybe, but I don't think it's very common to have that many\n> >> schemas added to the same publication. And it probably does not\n> >> make much difference whether you have 1000 or 2000 items in the\n> >> list - either both are acceptable or unacceptable, I think.\n> >\n> > Wouldn't it confuse users? Hey, duplicate publication. How? Wait.\n> > Doh.\n> > \n> \n> I don't follow. Duplicate publications? This talks about rows in\n> pg_publication_namespace, not pg_publication.\nI was referring to\n\n Publications:\n \"testpub_schemas\" (sequences)\n \"testpub_schemas\" (tables)\n\nversus\n\n Publications:\n \"testpub_schemas\" (tables, sequences)\n\nI prefer the latter.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Feb 23, 2022, at 4:18 PM, Tomas Vondra wrote:On 2/23/22 18:33, Euler Taveira wrote:> On Wed, Feb 23, 2022, at 1:07 PM, Tomas Vondra wrote:>> Maybe, but I don't think it's very common to have that many>> schemas added to the same publication. And it probably does not>> make much difference whether you have 1000 or 2000 items in the>> list - either both are acceptable or unacceptable, I think.>> Wouldn't it confuse users? Hey, duplicate publication. How? Wait.> Doh.> I don't follow. Duplicate publications? 
This talks about rows inpg_publication_namespace, not pg_publication.I was referring to  Publications:      \"testpub_schemas\" (sequences)      \"testpub_schemas\" (tables)versus  Publications:      \"testpub_schemas\" (tables, sequences)I prefer the latter.--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Wed, 23 Feb 2022 19:05:58 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 23.02.22 12:10, Amit Kapila wrote:\n> Isn't it better to support this with a syntax as indicated by Tom in\n> one of his earlier emails on this topic [1]? IIUC, it would be as\n> follows:\n> \n> CREATE PUBLICATION p FOR ALL TABLES, ALL SEQUENCES;\n\nI don't think there is any point in supporting this. What FOR ALL \nTABLES was really supposed to mean was \"everything you can get your \nhands on\". I think we should just redefine FOR ALL TABLES to mean that, \nmaybe replace it with a different syntax. If you want to exclude \nsequences for some reason, there is already a publication option for \nthat. And FOR ALL SEQUENCES by itself doesn't make any sense in practice.\n\nAre there any other object types besides tables and sequences that we \nmight want to logically-replicate in the future and whose possible \nsyntax we should think about? I can't think of anything.\n\n\n\n", "msg_date": "Thu, 24 Feb 2022 13:11:23 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Sat, Feb 12, 2022 at 6:04 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 2/10/22 19:17, Tomas Vondra wrote:\n> > I've polished & pushed the first part adding sequence decoding\n> > infrastructure etc. Attached are the two remaining parts.\n> >\n> > I plan to wait a day or two and then push the test_decoding part. 
The\n> > last part (for built-in replication) will need more work and maybe\n> > rethinking the grammar etc.\n> >\n>\n> I've pushed the second part, adding sequences to test_decoding.\n>\n\nThe test_decoding is failing randomly in the last few days. I am not\ncompletely sure but they might be related to this work. Two of\nthese appear to be due to the same reason:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-25%2018%3A50%3A09\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-17%2015%3A17%3A07\n\nTRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\", File:\n\"reorderbuffer.c\", Line: 1173, PID: 35013)\n0 postgres 0x00593de0 ExceptionalCondition + 160\\\\0\n\nAnother:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2022-02-16%2006%3A21%3A48\n\n--- /home/nm/farm/xlc32/HEAD/pgsql.build/contrib/test_decoding/expected/rewrite.out\n2022-02-14 20:19:14.000000000 +0000\n+++ /home/nm/farm/xlc32/HEAD/pgsql.build/contrib/test_decoding/results/rewrite.out\n2022-02-16 07:42:18.000000000 +0000\n@@ -126,6 +126,7 @@\n table public.replication_example: INSERT: id[integer]:4\nsomedata[integer]:3 text[character varying]:null\ntestcolumn1[integer]:null\n table public.replication_example: INSERT: id[integer]:5\nsomedata[integer]:4 text[character varying]:null\ntestcolumn1[integer]:2 testcolumn2[integer]:1\n COMMIT\n+ sequence public.replication_example_id_seq: transactional:0\nlast_value: 38 log_cnt: 0 is_called:1\n BEGIN\n table public.replication_example: INSERT: id[integer]:6\nsomedata[integer]:5 text[character varying]:null\ntestcolumn1[integer]:3 testcolumn2[integer]:null\n COMMIT\n@@ -133,7 +134,7 @@\n table public.replication_example: INSERT: id[integer]:7\nsomedata[integer]:6 text[character varying]:null\ntestcolumn1[integer]:4 testcolumn2[integer]:null\n table public.replication_example: INSERT: id[integer]:8\nsomedata[integer]:7 text[character varying]:null\ntestcolumn1[integer]:5 
testcolumn2[integer]:null\ntestcolumn3[integer]:1\n COMMIT\n- (15 rows)\n+ (16 rows)\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 28 Feb 2022 17:16:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Mon, Feb 28, 2022 at 5:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Feb 12, 2022 at 6:04 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > On 2/10/22 19:17, Tomas Vondra wrote:\n> > > I've polished & pushed the first part adding sequence decoding\n> > > infrastructure etc. Attached are the two remaining parts.\n> > >\n> > > I plan to wait a day or two and then push the test_decoding part. The\n> > > last part (for built-in replication) will need more work and maybe\n> > > rethinking the grammar etc.\n> > >\n> >\n> > I've pushed the second part, adding sequences to test_decoding.\n> >\n>\n> The test_decoding is failing randomly in the last few days. I am not\n> completely sure but they might be related to this work. The two of\n> these appears to be due to the same reason:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-25%2018%3A50%3A09\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-17%2015%3A17%3A07\n>\n> TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\", File:\n> \"reorderbuffer.c\", Line: 1173, PID: 35013)\n> 0 postgres 0x00593de0 ExceptionalCondition + 160\\\\0\n>\n\nWhile reviewing the code for this, I noticed that in\nsequence_decode(), we don't call ReorderBufferProcessXid to register\nthe first known lsn in WAL for the current xid. The similar functions\nlogicalmsg_decode() or heap_decode() do call ReorderBufferProcessXid\neven if they decide not to queue or send the change. Is there a reason\nfor not doing the same here? 
However, I am not able to deduce any\nscenario where lack of this will lead to such an Assertion failure.\nAny thoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 1 Mar 2022 17:23:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Tue, Mar 1, 2022 at 10:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Feb 28, 2022 at 5:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Feb 12, 2022 at 6:04 AM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> > >\n> > > On 2/10/22 19:17, Tomas Vondra wrote:\n> > > > I've polished & pushed the first part adding sequence decoding\n> > > > infrastructure etc. Attached are the two remaining parts.\n> > > >\n> > > > I plan to wait a day or two and then push the test_decoding part. The\n> > > > last part (for built-in replication) will need more work and maybe\n> > > > rethinking the grammar etc.\n> > > >\n> > >\n> > > I've pushed the second part, adding sequences to test_decoding.\n> > >\n> >\n> > The test_decoding is failing randomly in the last few days. I am not\n> > completely sure but they might be related to this work. The two of\n> > these appears to be due to the same reason:\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-25%2018%3A50%3A09\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-17%2015%3A17%3A07\n> >\n> > TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\", File:\n> > \"reorderbuffer.c\", Line: 1173, PID: 35013)\n> > 0 postgres 0x00593de0 ExceptionalCondition + 160\\\\0\n> >\n\nFYI, it looks like the same assertion has failed again on the same\nbuild-farm machine [1]\n\n>\n> While reviewing the code for this, I noticed that in\n> sequence_decode(), we don't call ReorderBufferProcessXid to register\n> the first known lsn in WAL for the current xid. 
The similar functions\n> logicalmsg_decode() or heap_decode() do call ReorderBufferProcessXid\n> even if they decide not to queue or send the change. Is there a reason\n> for not doing the same here? However, I am not able to deduce any\n> scenario where lack of this will lead to such an Assertion failure.\n> Any thoughts?\n>\n\n------\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-03-03%2023%3A14%3A26\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 7 Mar 2022 13:13:50 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 3/1/22 12:53, Amit Kapila wrote:\n> On Mon, Feb 28, 2022 at 5:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Sat, Feb 12, 2022 at 6:04 AM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>> On 2/10/22 19:17, Tomas Vondra wrote:\n>>>> I've polished & pushed the first part adding sequence decoding\n>>>> infrastructure etc. Attached are the two remaining parts.\n>>>>\n>>>> I plan to wait a day or two and then push the test_decoding part. The\n>>>> last part (for built-in replication) will need more work and maybe\n>>>> rethinking the grammar etc.\n>>>>\n>>>\n>>> I've pushed the second part, adding sequences to test_decoding.\n>>>\n>>\n>> The test_decoding is failing randomly in the last few days. I am not\n>> completely sure but they might be related to this work. 
The two of\n>> these appears to be due to the same reason:\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-25%2018%3A50%3A09\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-17%2015%3A17%3A07\n>>\n>> TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\", File:\n>> \"reorderbuffer.c\", Line: 1173, PID: 35013)\n>> 0 postgres 0x00593de0 ExceptionalCondition + 160\\\\0\n>>\n> \n> While reviewing the code for this, I noticed that in\n> sequence_decode(), we don't call ReorderBufferProcessXid to register\n> the first known lsn in WAL for the current xid. The similar functions\n> logicalmsg_decode() or heap_decode() do call ReorderBufferProcessXid\n> even if they decide not to queue or send the change. Is there a reason\n> for not doing the same here? However, I am not able to deduce any\n> scenario where lack of this will lead to such an Assertion failure.\n> Any thoughts?\n> \n\nThanks, that seems like an omission. Will fix.\n\n\nregards\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 7 Mar 2022 17:39:16 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 2/28/22 12:46, Amit Kapila wrote:\n> On Sat, Feb 12, 2022 at 6:04 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 2/10/22 19:17, Tomas Vondra wrote:\n>>> I've polished & pushed the first part adding sequence decoding\n>>> infrastructure etc. Attached are the two remaining parts.\n>>>\n>>> I plan to wait a day or two and then push the test_decoding part. The\n>>> last part (for built-in replication) will need more work and maybe\n>>> rethinking the grammar etc.\n>>>\n>>\n>> I've pushed the second part, adding sequences to test_decoding.\n>>\n> \n> The test_decoding is failing randomly in the last few days. 
I am not\n> completely sure but they might be related to this work. The two of\n> these appears to be due to the same reason:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-25%2018%3A50%3A09\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-17%2015%3A17%3A07\n> \n> TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\", File:\n> \"reorderbuffer.c\", Line: 1173, PID: 35013)\n> 0 postgres 0x00593de0 ExceptionalCondition + 160\\\\0\n> \n\nThis might be related to the issue reported by Amit, i.e. that\nsequence_decode does not call ReorderBufferProcessXid(). If this keeps\nfailing, we'll have to add some extra debug info (logging LSN etc.), at\nleast temporarily. It'd be valuable to inspect the WAL too.\n\n> Another:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2022-02-16%2006%3A21%3A48\n> \n> --- /home/nm/farm/xlc32/HEAD/pgsql.build/contrib/test_decoding/expected/rewrite.out\n> 2022-02-14 20:19:14.000000000 +0000\n> +++ /home/nm/farm/xlc32/HEAD/pgsql.build/contrib/test_decoding/results/rewrite.out\n> 2022-02-16 07:42:18.000000000 +0000\n> @@ -126,6 +126,7 @@\n> table public.replication_example: INSERT: id[integer]:4\n> somedata[integer]:3 text[character varying]:null\n> testcolumn1[integer]:null\n> table public.replication_example: INSERT: id[integer]:5\n> somedata[integer]:4 text[character varying]:null\n> testcolumn1[integer]:2 testcolumn2[integer]:1\n> COMMIT\n> + sequence public.replication_example_id_seq: transactional:0\n> last_value: 38 log_cnt: 0 is_called:1\n> BEGIN\n> table public.replication_example: INSERT: id[integer]:6\n> somedata[integer]:5 text[character varying]:null\n> testcolumn1[integer]:3 testcolumn2[integer]:null\n> COMMIT\n> @@ -133,7 +134,7 @@\n> table public.replication_example: INSERT: id[integer]:7\n> somedata[integer]:6 text[character varying]:null\n> testcolumn1[integer]:4 testcolumn2[integer]:null\n> table public.replication_example: INSERT: 
id[integer]:8\n> somedata[integer]:7 text[character varying]:null\n> testcolumn1[integer]:5 testcolumn2[integer]:null\n> testcolumn3[integer]:1\n> COMMIT\n> - (15 rows)\n> + (16 rows)\n> \n\nInteresting. I can think of one reason that might cause this - we log\nthe first sequence increment after a checkpoint. So if a checkpoint\nhappens in an unfortunate place, there'll be an extra WAL record. On\nslow / busy machines that's quite possible, I guess.\n\nI wonder if these two issues might be related ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 7 Mar 2022 17:53:27 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 3/7/22 17:39, Tomas Vondra wrote:\n> \n> \n> On 3/1/22 12:53, Amit Kapila wrote:\n>> On Mon, Feb 28, 2022 at 5:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>\n>>> On Sat, Feb 12, 2022 at 6:04 AM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>>> On 2/10/22 19:17, Tomas Vondra wrote:\n>>>>> I've polished & pushed the first part adding sequence decoding\n>>>>> infrastructure etc. Attached are the two remaining parts.\n>>>>>\n>>>>> I plan to wait a day or two and then push the test_decoding part. The\n>>>>> last part (for built-in replication) will need more work and maybe\n>>>>> rethinking the grammar etc.\n>>>>>\n>>>>\n>>>> I've pushed the second part, adding sequences to test_decoding.\n>>>>\n>>>\n>>> The test_decoding is failing randomly in the last few days. I am not\n>>> completely sure but they might be related to this work. 
The two of\n>>> these appears to be due to the same reason:\n>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-25%2018%3A50%3A09\n>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-17%2015%3A17%3A07\n>>>\n>>> TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\", File:\n>>> \"reorderbuffer.c\", Line: 1173, PID: 35013)\n>>> 0 postgres 0x00593de0 ExceptionalCondition + 160\\\\0\n>>>\n>>\n>> While reviewing the code for this, I noticed that in\n>> sequence_decode(), we don't call ReorderBufferProcessXid to register\n>> the first known lsn in WAL for the current xid. The similar functions\n>> logicalmsg_decode() or heap_decode() do call ReorderBufferProcessXid\n>> even if they decide not to queue or send the change. Is there a reason\n>> for not doing the same here? However, I am not able to deduce any\n>> scenario where lack of this will lead to such an Assertion failure.\n>> Any thoughts?\n>>\n> \n> Thanks, that seems like an omission. Will fix.\n> \n\nI've pushed this simple fix. Not sure it'll fix the assert failures on\nskink/locust, though. Given the lack of information it'll be difficult\nto verify. So let's wait a bit.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 7 Mar 2022 22:11:19 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 3/7/22 17:53, Tomas Vondra wrote:\n> On 2/28/22 12:46, Amit Kapila wrote:\n>> On Sat, Feb 12, 2022 at 6:04 AM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>> On 2/10/22 19:17, Tomas Vondra wrote:\n>>>> I've polished & pushed the first part adding sequence decoding\n>>>> infrastructure etc. Attached are the two remaining parts.\n>>>>\n>>>> I plan to wait a day or two and then push the test_decoding part. 
The\n>>>> last part (for built-in replication) will need more work and maybe\n>>>> rethinking the grammar etc.\n>>>>\n>>>\n>>> I've pushed the second part, adding sequences to test_decoding.\n>>>\n>>\n>> The test_decoding is failing randomly in the last few days. I am not\n>> completely sure but they might be related to this work. The two of\n>> these appears to be due to the same reason:\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-25%2018%3A50%3A09\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-17%2015%3A17%3A07\n>>\n>> TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\", File:\n>> \"reorderbuffer.c\", Line: 1173, PID: 35013)\n>> 0 postgres 0x00593de0 ExceptionalCondition + 160\\\\0\n>>\n> \n> This might be related to the issue reported by Amit, i.e. that\n> sequence_decode does not call ReorderBufferProcessXid(). If this keeps\n> failing, we'll have to add some extra debug info (logging LSN etc.), at\n> least temporarily. 
It'd be valuable to inspect the WAL too.\n> \n>> Another:\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2022-02-16%2006%3A21%3A48\n>>\n>> --- /home/nm/farm/xlc32/HEAD/pgsql.build/contrib/test_decoding/expected/rewrite.out\n>> 2022-02-14 20:19:14.000000000 +0000\n>> +++ /home/nm/farm/xlc32/HEAD/pgsql.build/contrib/test_decoding/results/rewrite.out\n>> 2022-02-16 07:42:18.000000000 +0000\n>> @@ -126,6 +126,7 @@\n>> table public.replication_example: INSERT: id[integer]:4\n>> somedata[integer]:3 text[character varying]:null\n>> testcolumn1[integer]:null\n>> table public.replication_example: INSERT: id[integer]:5\n>> somedata[integer]:4 text[character varying]:null\n>> testcolumn1[integer]:2 testcolumn2[integer]:1\n>> COMMIT\n>> + sequence public.replication_example_id_seq: transactional:0\n>> last_value: 38 log_cnt: 0 is_called:1\n>> BEGIN\n>> table public.replication_example: INSERT: id[integer]:6\n>> somedata[integer]:5 text[character varying]:null\n>> testcolumn1[integer]:3 testcolumn2[integer]:null\n>> COMMIT\n>> @@ -133,7 +134,7 @@\n>> table public.replication_example: INSERT: id[integer]:7\n>> somedata[integer]:6 text[character varying]:null\n>> testcolumn1[integer]:4 testcolumn2[integer]:null\n>> table public.replication_example: INSERT: id[integer]:8\n>> somedata[integer]:7 text[character varying]:null\n>> testcolumn1[integer]:5 testcolumn2[integer]:null\n>> testcolumn3[integer]:1\n>> COMMIT\n>> - (15 rows)\n>> + (16 rows)\n>>\n> \n> Interesting. I can think of one reason that might cause this - we log\n> the first sequence increment after a checkpoint. So if a checkpoint\n> happens in an unfortunate place, there'll be an extra WAL record. On\n> slow / busy machines that's quite possible, I guess.\n> \n\nI've tweaked the checkpoint_interval to make checkpoints more aggressive\n(set it to 1s), and it seems my hunch was correct - it produces failures\nexactly like this one. 
The best fix probably is to just disable decoding\nof sequences in those tests that are not aimed at testing sequence decoding.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 7 Mar 2022 22:25:26 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 3/7/22 22:25, Tomas Vondra wrote:\n> \n> \n> On 3/7/22 17:53, Tomas Vondra wrote:\n>> On 2/28/22 12:46, Amit Kapila wrote:\n>>> On Sat, Feb 12, 2022 at 6:04 AM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>>> On 2/10/22 19:17, Tomas Vondra wrote:\n>>>>> I've polished & pushed the first part adding sequence decoding\n>>>>> infrastructure etc. Attached are the two remaining parts.\n>>>>>\n>>>>> I plan to wait a day or two and then push the test_decoding part. The\n>>>>> last part (for built-in replication) will need more work and maybe\n>>>>> rethinking the grammar etc.\n>>>>>\n>>>>\n>>>> I've pushed the second part, adding sequences to test_decoding.\n>>>>\n>>>\n>>> The test_decoding is failing randomly in the last few days. I am not\n>>> completely sure but they might be related to this work. The two of\n>>> these appears to be due to the same reason:\n>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-25%2018%3A50%3A09\n>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-17%2015%3A17%3A07\n>>>\n>>> TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\", File:\n>>> \"reorderbuffer.c\", Line: 1173, PID: 35013)\n>>> 0 postgres 0x00593de0 ExceptionalCondition + 160\\\\0\n>>>\n>>\n>> This might be related to the issue reported by Amit, i.e. that\n>> sequence_decode does not call ReorderBufferProcessXid(). If this keeps\n>> failing, we'll have to add some extra debug info (logging LSN etc.), at\n>> least temporarily. 
It'd be valuable to inspect the WAL too.\n>>\n>>> Another:\n>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2022-02-16%2006%3A21%3A48\n>>>\n>>> --- /home/nm/farm/xlc32/HEAD/pgsql.build/contrib/test_decoding/expected/rewrite.out\n>>> 2022-02-14 20:19:14.000000000 +0000\n>>> +++ /home/nm/farm/xlc32/HEAD/pgsql.build/contrib/test_decoding/results/rewrite.out\n>>> 2022-02-16 07:42:18.000000000 +0000\n>>> @@ -126,6 +126,7 @@\n>>> table public.replication_example: INSERT: id[integer]:4\n>>> somedata[integer]:3 text[character varying]:null\n>>> testcolumn1[integer]:null\n>>> table public.replication_example: INSERT: id[integer]:5\n>>> somedata[integer]:4 text[character varying]:null\n>>> testcolumn1[integer]:2 testcolumn2[integer]:1\n>>> COMMIT\n>>> + sequence public.replication_example_id_seq: transactional:0\n>>> last_value: 38 log_cnt: 0 is_called:1\n>>> BEGIN\n>>> table public.replication_example: INSERT: id[integer]:6\n>>> somedata[integer]:5 text[character varying]:null\n>>> testcolumn1[integer]:3 testcolumn2[integer]:null\n>>> COMMIT\n>>> @@ -133,7 +134,7 @@\n>>> table public.replication_example: INSERT: id[integer]:7\n>>> somedata[integer]:6 text[character varying]:null\n>>> testcolumn1[integer]:4 testcolumn2[integer]:null\n>>> table public.replication_example: INSERT: id[integer]:8\n>>> somedata[integer]:7 text[character varying]:null\n>>> testcolumn1[integer]:5 testcolumn2[integer]:null\n>>> testcolumn3[integer]:1\n>>> COMMIT\n>>> - (15 rows)\n>>> + (16 rows)\n>>>\n>>\n>> Interesting. I can think of one reason that might cause this - we log\n>> the first sequence increment after a checkpoint. So if a checkpoint\n>> happens in an unfortunate place, there'll be an extra WAL record. On\n>> slow / busy machines that's quite possible, I guess.\n>>\n> \n> I've tweaked the checkpoint_interval to make checkpoints more aggressive\n> (set it to 1s), and it seems my hunch was correct - it produces failures\n> exactly like this one. 
The best fix probably is to just disable decoding\n> of sequences in those tests that are not aimed at testing sequence decoding.\n> \n\nI've pushed a fix for this, adding \"include-sequences=0\" to a couple\ntest_decoding tests, which were failing with concurrent checkpoints.\n\nUnfortunately, I realized we have a similar issue in the \"sequences\"\ntests too :-( Imagine you do a series of sequence increments, e.g.\n\n SELECT nextval('s') FROM generate_series(1,100);\n\nIf there's a concurrent checkpoint, this may add an extra WAL record,\naffecting the decoded output (and also the data stored in the sequence\nrelation itself). Not sure what to do about this ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 8 Mar 2022 19:29:06 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 3/7/22 22:11, Tomas Vondra wrote:\n> \n> \n> On 3/7/22 17:39, Tomas Vondra wrote:\n>>\n>>\n>> On 3/1/22 12:53, Amit Kapila wrote:\n>>> On Mon, Feb 28, 2022 at 5:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>>\n>>>> On Sat, Feb 12, 2022 at 6:04 AM Tomas Vondra\n>>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>>\n>>>>> On 2/10/22 19:17, Tomas Vondra wrote:\n>>>>>> I've polished & pushed the first part adding sequence decoding\n>>>>>> infrastructure etc. Attached are the two remaining parts.\n>>>>>>\n>>>>>> I plan to wait a day or two and then push the test_decoding part. The\n>>>>>> last part (for built-in replication) will need more work and maybe\n>>>>>> rethinking the grammar etc.\n>>>>>>\n>>>>>\n>>>>> I've pushed the second part, adding sequences to test_decoding.\n>>>>>\n>>>>\n>>>> The test_decoding is failing randomly in the last few days. I am not\n>>>> completely sure but they might be related to this work. 
The two of\n>>>> these appears to be due to the same reason:\n>>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-25%2018%3A50%3A09\n>>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-17%2015%3A17%3A07\n>>>>\n>>>> TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\", File:\n>>>> \"reorderbuffer.c\", Line: 1173, PID: 35013)\n>>>> 0 postgres 0x00593de0 ExceptionalCondition + 160\\\\0\n>>>>\n>>>\n>>> While reviewing the code for this, I noticed that in\n>>> sequence_decode(), we don't call ReorderBufferProcessXid to register\n>>> the first known lsn in WAL for the current xid. The similar functions\n>>> logicalmsg_decode() or heap_decode() do call ReorderBufferProcessXid\n>>> even if they decide not to queue or send the change. Is there a reason\n>>> for not doing the same here? However, I am not able to deduce any\n>>> scenario where lack of this will lead to such an Assertion failure.\n>>> Any thoughts?\n>>>\n>>\n>> Thanks, that seems like an omission. Will fix.\n>>\n> \n> I've pushed this simple fix. Not sure it'll fix the assert failures on\n> skink/locust, though. Given the lack of information it'll be difficult\n> to verify. So let's wait a bit.\n> \n\nI've done about 5000 runs of 'make check' in test_decoding, on two rpi\nmachines (one armv7, one aarch64). Not a single assert failure :-(\n\nHow come skink/locust hit that in just a couple runs?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 8 Mar 2022 23:44:40 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Wed, Mar 9, 2022 at 4:14 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 3/7/22 22:11, Tomas Vondra wrote:\n> >\n> > I've pushed this simple fix. 
Not sure it'll fix the assert failures on\n> > skink/locust, though. Given the lack of information it'll be difficult\n> > to verify. So let's wait a bit.\n> >\n>\n> I've done about 5000 runs of 'make check' in test_decoding, on two rpi\n> machines (one armv7, one aarch64). Not a single assert failure :-(\n>\n> How come skink/locust hit that in just a couple runs?\n>\n\nIs it failed after you pushed a fix? I don't think so or am I missing\nsomething? I feel even if doesn't occur again it would have been\nbetter if we had some theory on how it occurred in the first place\nbecause that would make us feel more confident that we won't have any\nrelated problem left.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 9 Mar 2022 17:11:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 3/9/22 12:41, Amit Kapila wrote:\n> On Wed, Mar 9, 2022 at 4:14 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 3/7/22 22:11, Tomas Vondra wrote:\n>>>\n>>> I've pushed this simple fix. Not sure it'll fix the assert failures on\n>>> skink/locust, though. Given the lack of information it'll be difficult\n>>> to verify. So let's wait a bit.\n>>>\n>>\n>> I've done about 5000 runs of 'make check' in test_decoding, on two rpi\n>> machines (one armv7, one aarch64). Not a single assert failure :-(\n>>\n>> How come skink/locust hit that in just a couple runs?\n>>\n> \n> Is it failed after you pushed a fix? I don't think so or am I missing\n> something? I feel even if doesn't occur again it would have been\n> better if we had some theory on how it occurred in the first place\n> because that would make us feel more confident that we won't have any\n> related problem left.\n> \n\nI don't think it failed yet - we have to wait a bit longer to make any\nconclusions, though. On skink it failed only twice over 1 month. 
I agree\nit'd be nice to have some theory, but I really don't have one.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 9 Mar 2022 14:18:06 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 23.02.22 00:24, Tomas Vondra wrote:\n> Here's an updated version of the patch, fixing most of the issues from\n> reviews so far. There's still a couple FIXME comments, but I think those\n> are minor and/or straightforward to deal with.\n\nThis patch needs a rebase because of a conflict in \nexpected/publication.out. In addition, see the attached fixup patch to \nget the pg_dump tests passing (and some other stuff).\n\n028_sequences.pl should be renamed to 029, since there is now another 028.\n\nIn psql, the output of \\dRp and \\dRp+ is inconsistent. The former shows\n\nAll tables | All sequences | Inserts | Updates | Deletes | Truncates | \nSequences | Via root\n\nthe latter shows\n\nAll tables | All sequences | Inserts | Updates | Deletes | Sequences | \nTruncates | Via root\n\nI think the first order is the best one.\n\nThat's all for now, I'll come back with more reviewing later.", "msg_date": "Thu, 10 Mar 2022 12:07:05 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 3/10/22 12:07, Peter Eisentraut wrote:\n> On 23.02.22 00:24, Tomas Vondra wrote:\n>> Here's an updated version of the patch, fixing most of the issues from\n>> reviews so far. There's still a couple FIXME comments, but I think those\n>> are minor and/or straightforward to deal with.\n> \n> This patch needs a rebase because of a conflict in\n> expected/publication.out.  
In addition, see the attached fixup patch to\n> get the pg_dump tests passing (and some other stuff).\n> \n\nOK, rebased patch attached.\n\n> 028_sequences.pl should be renamed to 029, since there is now another 028.\n> \n\nRenamed.\n\n> In psql, the output of \\dRp and \\dRp+ is inconsistent.  The former shows\n> \n> All tables | All sequences | Inserts | Updates | Deletes | Truncates |\n> Sequences | Via root\n> \n> the latter shows\n> \n> All tables | All sequences | Inserts | Updates | Deletes | Sequences |\n> Truncates | Via root\n> \n> I think the first order is the best one.\n> \n\nGood idea, I've tweaked the code to use the former order.\n\n> That's all for now, I'll come back with more reviewing later.\n\n\nthanks\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 10 Mar 2022 23:49:35 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Tue, Mar 8, 2022 at 11:59 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 3/7/22 22:25, Tomas Vondra wrote:\n> >>\n> >> Interesting. I can think of one reason that might cause this - we log\n> >> the first sequence increment after a checkpoint. So if a checkpoint\n> >> happens in an unfortunate place, there'll be an extra WAL record. On\n> >> slow / busy machines that's quite possible, I guess.\n> >>\n> >\n> > I've tweaked the checkpoint_interval to make checkpoints more aggressive\n> > (set it to 1s), and it seems my hunch was correct - it produces failures\n> > exactly like this one. 
The best fix probably is to just disable decoding\n> > of sequences in those tests that are not aimed at testing sequence decoding.\n> >\n>\n> I've pushed a fix for this, adding \"include-sequences=0\" to a couple\n> test_decoding tests, which were failing with concurrent checkpoints.\n>\n> Unfortunately, I realized we have a similar issue in the \"sequences\"\n> tests too :-( Imagine you do a series of sequence increments, e.g.\n>\n> SELECT nextval('s') FROM generate_sequences(1,100);\n>\n> If there's a concurrent checkpoint, this may add an extra WAL record,\n> affecting the decoded output (and also the data stored in the sequence\n> relation itself). Not sure what to do about this ...\n>\n\nI am also not sure what to do for it but maybe if in some way we can\nincrease checkpoint timeout or other parameters for these tests then\nit would reduce the chances of such failures. The other idea could be\nto perform checkpoint before the start of tests to reduce the\npossibility of another checkpoint.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 11 Mar 2022 17:04:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Fri, Mar 11, 2022 at 5:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 8, 2022 at 11:59 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > On 3/7/22 22:25, Tomas Vondra wrote:\n> > >>\n> > >> Interesting. I can think of one reason that might cause this - we log\n> > >> the first sequence increment after a checkpoint. So if a checkpoint\n> > >> happens in an unfortunate place, there'll be an extra WAL record. On\n> > >> slow / busy machines that's quite possible, I guess.\n> > >>\n> > >\n> > > I've tweaked the checkpoint_interval to make checkpoints more aggressive\n> > > (set it to 1s), and it seems my hunch was correct - it produces failures\n> > > exactly like this one. 
The best fix probably is to just disable decoding\n> > > of sequences in those tests that are not aimed at testing sequence decoding.\n> > >\n> >\n> > I've pushed a fix for this, adding \"include-sequences=0\" to a couple\n> > test_decoding tests, which were failing with concurrent checkpoints.\n> >\n> > Unfortunately, I realized we have a similar issue in the \"sequences\"\n> > tests too :-( Imagine you do a series of sequence increments, e.g.\n> >\n> > SELECT nextval('s') FROM generate_sequences(1,100);\n> >\n> > If there's a concurrent checkpoint, this may add an extra WAL record,\n> > affecting the decoded output (and also the data stored in the sequence\n> > relation itself). Not sure what to do about this ...\n> >\n>\n> I am also not sure what to do for it but maybe if in some way we can\n> increase checkpoint timeout or other parameters for these tests then\n> it would reduce the chances of such failures. The other idea could be\n> to perform checkpoint before the start of tests to reduce the\n> possibility of another checkpoint.\n>\n\nOne more thing, I notice while checking the commit for this feature is\nthat the below include seems to be out of order:\n--- a/src/backend/replication/logical/decode.c\n+++ b/src/backend/replication/logical/decode.c\n@@ -42,6 +42,7 @@\n #include \"replication/reorderbuffer.h\"\n #include \"replication/snapbuild.h\"\n #include \"storage/standby.h\"\n+#include \"commands/sequence.h\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 11 Mar 2022 17:08:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 3/11/22 12:34, Amit Kapila wrote:\n> On Tue, Mar 8, 2022 at 11:59 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 3/7/22 22:25, Tomas Vondra wrote:\n>>>>\n>>>> Interesting. 
I can think of one reason that might cause this - we log\n>>>> the first sequence increment after a checkpoint. So if a checkpoint\n>>>> happens in an unfortunate place, there'll be an extra WAL record. On\n>>>> slow / busy machines that's quite possible, I guess.\n>>>>\n>>>\n>>> I've tweaked the checkpoint_interval to make checkpoints more aggressive\n>>> (set it to 1s), and it seems my hunch was correct - it produces failures\n>>> exactly like this one. The best fix probably is to just disable decoding\n>>> of sequences in those tests that are not aimed at testing sequence decoding.\n>>>\n>>\n>> I've pushed a fix for this, adding \"include-sequences=0\" to a couple\n>> test_decoding tests, which were failing with concurrent checkpoints.\n>>\n>> Unfortunately, I realized we have a similar issue in the \"sequences\"\n>> tests too :-( Imagine you do a series of sequence increments, e.g.\n>>\n>> SELECT nextval('s') FROM generate_sequences(1,100);\n>>\n>> If there's a concurrent checkpoint, this may add an extra WAL record,\n>> affecting the decoded output (and also the data stored in the sequence\n>> relation itself). Not sure what to do about this ...\n>>\n> \n> I am also not sure what to do for it but maybe if in some way we can\n> increase checkpoint timeout or other parameters for these tests then\n> it would reduce the chances of such failures. The other idea could be\n> to perform checkpoint before the start of tests to reduce the\n> possibility of another checkpoint.\n> \n\nYeah, I had the same ideas, but I'm not sure I like any of them. I doubt\nwe want to make checkpoints extremely rare, and even if we do that it'll\nstill fail on slow machines (e.g. 
with valgrind, clobber cache etc.).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 11 Mar 2022 13:53:15 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "Further review (based on 20220310 patch):\n\n doc/src/sgml/ref/create_publication.sgml | 3 +\n\nFor the clauses added to the synopsis, descriptions should be added\nbelow. See attached patch for a start.\n\n src/backend/commands/sequence.c | 79 ++\n\nThere is quite a bit of overlap between ResetSequence() and\nResetSequence2(), but I couldn't see a good way to combine them that\ngenuinely saves code and complexity. So maybe it's ok.\n\nActually, ResetSequence2() is not really \"reset\", it's just \"set\".\nMaybe pick a different function name.\n\n src/backend/commands/subscriptioncmds.c | 272 +++++++\n\nThe code added in AlterSubscription_refresh() seems to be entirely\ncopy-and-paste from the tables case. I think this could be combined\nby concatenating the lists from fetch_table_list() and\nfetch_sequence_list() and looping over it once. The same also applies\nto CreateSubscription(), although the code duplication is smaller\nthere.\n\nThis in turn means that fetch_table_list() and fetch_sequence_list()\ncan be combined, so that you don't actually need any extensive new\ncode in CreateSubscription() and AlterSubscription_refresh() for\nsequences. This could go on, you can combine more of the underlying\ncode, like pg_publication_tables and pg_publication_sequences and so\non.\n\n src/backend/replication/logical/proto.c | 52 ++\n\nThe documentation of the added protocol message needs to be added to\nthe documentation. See attached patch for a start.\n\nThe sequence message does not contain the sequence Oid, unlike the\nrelation message. 
Would that be good to add?\n\n src/backend/replication/logical/worker.c       |  56 ++\n\nMaybe the Asserts in apply_handle_sequence() should be elogs.  These\nare checking what is sent over the network, so we don't want a\nbad/evil peer able to trigger asserts.  And in non-assert builds these\nconditions would be unchecked.\n\n src/backend/replication/pgoutput/pgoutput.c    |  82 +-\n\nI find the code in get_rel_sync_entry() confusing.  You add a section for\n\nif (!publish && is_sequence)\n\nbut then shouldn't the code below that be something like\n\nif (!publish && !is_sequence)\n\n src/bin/pg_dump/t/002_pg_dump.pl               |  38 +-\n\nThis adds a new publication \"pub4\", but the tests already contain a\n\"pub4\".  I'm not sure why this even works, but perhaps the new one\nshould be \"pub5\", unless there is a deeper meaning.\n\n src/include/catalog/pg_publication_namespace.h |   3 +-\n\nI don't like how the distinction between table and sequence is done\nusing a bool field.  That affects also the APIs in pg_publication.c\nand publicationcmds.c especially.  There is a lot of unadorned \"true\"\nand \"false\" being passed around that isn't very clear, and it all\nappears to originate at this catalog.  I think we could use a char\nfield here that uses the relkind constants.  That would also make the\ncode in pg_publication.c etc. slightly clearer.\n\n\nSee attached patch for more small tweaks.\n\nYour patch still contains a number of XXX and FIXME comments, which in \nmy assessment are all more or less correct, so I didn't comment on those \nseparately.\n\nOther than that, this seems pretty good.\n\nEarlier in the thread I commented on some aspects of the new grammar \n(e.g., do we need FOR ALL SEQUENCES?).  I think this would be useful to \nreview again after all the new logical replication patches are in. 
I \ndon't want to hold up this patch for that at this point.", "msg_date": "Sun, 13 Mar 2022 07:45:44 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 3/13/22 07:45, Peter Eisentraut wrote:\n> Further review (based on 20220310 patch):\n> \n>  doc/src/sgml/ref/create_publication.sgml       |   3 +\n> \n> For the clauses added to the synopsis, descriptions should be added\n> below.  See attached patch for a start.\n\nThanks. I'm not sure what other improvements do you think this .sgml\nfile needs?\n\n> \n>  src/backend/commands/sequence.c                |  79 ++\n> \n> There is quite a bit of overlap between ResetSequence() and\n> ResetSequence2(), but I couldn't see a good way to combine them that\n> genuinely saves code and complexity.  So maybe it's ok.\n> \n> Actually, ResetSequence2() is not really \"reset\", it's just \"set\".\n> Maybe pick a different function name.\n\nYeah, good point. I think the functions are sufficiently different, and\nattempting to remove the publications is unlikely to be an improvement.\nBut you're right \"ResetSequence2\" is not a great name, so I've changed\nit to \"SetSequence\".\n\n> \n>  src/backend/commands/subscriptioncmds.c        | 272 +++++++\n> \n> The code added in AlterSubscription_refresh() seems to be entirely\n> copy-and-paste from the tables case.  I think this could be combined\n> by concatenating the lists from fetch_table_list() and\n> fetch_sequence_list() and looping over it once.  The same also applies\n> to CreateSubscription(), although the code duplication is smaller\n> there.\n> \n> This in turn means that fetch_table_list() and fetch_sequence_list()\n> can be combined, so that you don't actually need any extensive new\n> code in CreateSubscription() and AlterSubscription_refresh() for\n> sequences.  
This could go on, you can combine more of the underlying\n> code, like pg_publication_tables and pg_publication_sequences and so\n> on.\n> \n\nI've removed the duplicated code in both places, so that it processes\nonly a single list, which is a combination of tables and sequences\n(using list_concat). For CreateSubscription it was trivial, because the\ncode was simple and perfect copy. For _refresh there was a minor\ndifference, but I think it was actually entirely unnecessary when\nprocessing the combined list. But this will need more testing.\n\nI'm not sure about the last bit, though. How would you combine code for\npg_publication_tables and pg_publication_sequences, etc?\n\n>  src/backend/replication/logical/proto.c        |  52 ++\n> \n> The documentation of the added protocol message needs to be added to\n> the documentation.  See attached patch for a start.\n> \n\nOK. I've resolved the FIXME for LSN. Not sure what else is needed?\n\n> The sequence message does not contain the sequence Oid, unlike the\n> relation message.  Would that be good to add?\n\nI don't think we need to do that. For relations we do that because it\nserves as an identifier in RelationSyncCache, and it links the various\nmessages to it. For sequences we don't need that - the schema is fixed.\n\nOr do you see a practical reason to add the OID?\n\n> \n>  src/backend/replication/logical/worker.c       |  56 ++\n> \n> Maybe the Asserts in apply_handle_sequence() should be elogs.  These\n> are checking what is sent over the network, so we don't want a\n> bad/evil peer able to trigger asserts.  And in non-assert builds these\n> conditions would be unchecked.\n>\n\nI'll think about it, but AFAIK we don't really assume evil peers.\n\n\n>  src/backend/replication/pgoutput/pgoutput.c    |  82 +-\n> \n> I find the the in get_rel_sync_entry() confusing.  
You add a section for\n> \n> if (!publish && is_sequence)\n> \n> but then shouldn't the code below that be something like\n> \n> if (!publish && !is_sequence)\n> \n\nHmm, maybe. But I think there's actually a bigger issue - this does not\nseem to be dealing with pg_publication_namespace.pnsequences correctly.\nThat is, we we don't differentiate which schemas include tables and\nwhich schemas include sequences. Interestingly, no tests fail. I'll take\na closer look tomorrow.\n\n>  src/bin/pg_dump/t/002_pg_dump.pl               |  38 +-\n> \n> This adds a new publication \"pub4\", but the tests already contain a\n> \"pub4\".  I'm not sure why this even works, but perhaps the new one\n> shold be \"pub5\", unless there is a deeper meaning.\n> \n\nI agree, pub5 it is. But it's interesting it does not fail even with the\nduplicate name. Strange.\n\n>  src/include/catalog/pg_publication_namespace.h |   3 +-\n> \n> I don't like how the distinction between table and sequence is done\n> using a bool field.  That affects also the APIs in pg_publication.c\n> and publicationcmds.c especially.  There is a lot of unadorned \"true\"\n> and \"false\" being passed around that isn't very clear, and it all\n> appears to originate at this catalog.  I think we could use a char\n> field here that uses the relkind constants.  That would also make the\n> code in pg_publication.c etc. slightly clearer.\n> \n\nI thought about using relkind, but it does not work all that nicely\nbecause we have multiple relkinds for a table (because of partitioned\ntables). 
So I found that confusing.\n\nMaybe we should just use 'r' for any table, in this catalog?\n\n> \n> See attached patch for more small tweaks.\n> \n> Your patch still contains a number of XXX and FIXME comments, which in\n> my assessment are all more or less correct, so I didn't comment on those\n> separately.\n> \n\nYeah, I plan to look at those next.\n\n> Other than that, this seems pretty good.\n> \n> Earlier in the thread I commented on some aspects of the new grammar\n> (e.g., do we need FOR ALL SEQUENCES?).  I think this would be useful to\n> review again after all the new logical replication patches are in.  I\n> don't want to hold up this patch for that at this point.\n\nI'm not particularly attached to the grammar, but I don't see any reason\nnot to have mostly the same grammar/options as for tables.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 14 Mar 2022 01:46:03 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "Hi,\n\nAttached is a rebased patch, addressing most of the remaining issues.\nThe main improvements are:\n\n\n1) pg_publication_namespace.pntype and type checks\n\nOriginally, the patch used pnsequences flag to distinguish which entries\nadded by FOR ALL TABLES IN SCHEMA and FOR ALL SEQUENCES IN SCHEMA. I've\ndecided to replace this with a simple char column, called pntype, where\n't' means tables and 's' sequences. As explained before, relkind doesn't\nwork well because of partitioned tables. A char, with a function to\nmatch it to relkind values works fairly well.\n\nI've revisited the question how to represent publications publishing the\nsame schema twice - once for tables, once for sequences. There were\nproposals to represent this with a single row, i.e. turn pntype into an\narray of char values. 
So it'd be either ['t'], ['s'] or ['s', 't']. I\nspent some time working on that, but I've decided to keep the current\napproach with two separate rows - it's easier to manage, lookup etc.\n\n\n2) pg_get_object_address\n\nI've updated the objectaddress code to consider pntype when looking-up\nthe pntype value, so each row in pg_publication_namespace gets the\ncorrect ObjectAddress.\n\n\n3) for all [tables | sequences]\n\nThe original patch did not allow creating publication for all tables and\nall sequences at the same time. I've tweaked the grammar to allow this:\n\n CREATE PUBLICATION p FOR ALL list_of_types;\n\nwhere list_of_types is arbitrary combination of TABLES and SEQUENCES.\nIt's implemented in a slightly awkward way - partially in the grammar,\npartially in the publicationcmds.c. I suspect there's a (cleaner) way to\ndo this entirely in the grammar but I haven't succeeded yet.\n\n\n4) prevent 'ADD TABLE sequence' and 'ADD SEQUENCE table'\n\nIt was possible to do \"ADD TABLE\" and pass it a sequence, which would\nfail to notice if the publication already includes all sequences from\nthe schema. I've added a check preventing that (and a similar one for\nADD SEQUENCE).\n\n\n5) missing block in AlterTableNamespace to cross-check moving published\nsequence to already published schema\n\nA block of code was missing from AlterTableNamespace, checking that\nwe're not moving a sequence into a schema that is already published (all\nthe sequences from it).\n\n\n6) a couple comment fixes\n\nVarious comment improvements and fixes. 
At this point there's a couple\ntrivial FIXME/XXX comments remaining.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 20 Mar 2022 23:55:37 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 20.03.22 23:55, Tomas Vondra wrote:\n> Attached is a rebased patch, addressing most of the remaining issues.\n\nThis looks okay to me, if the two FIXMEs are addressed. Remember to \nalso update protocol.sgml if you change LOGICAL_REP_MSG_SEQUENCE.\n\n\n", "msg_date": "Mon, 21 Mar 2022 14:05:39 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 3/21/22 14:05, Peter Eisentraut wrote:\n> On 20.03.22 23:55, Tomas Vondra wrote:\n>> Attached is a rebased patch, addressing most of the remaining issues.\n> \n> This looks okay to me, if the two FIXMEs are addressed.  Remember to\n> also update protocol.sgml if you change LOGICAL_REP_MSG_SEQUENCE.\n\nThanks. Do we want to use a different constant for the sequence message?\nI've used 'X' for the WIP patch, but maybe there's a better value?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 21 Mar 2022 22:54:38 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 21.03.22 22:54, Tomas Vondra wrote:\n> On 3/21/22 14:05, Peter Eisentraut wrote:\n>> On 20.03.22 23:55, Tomas Vondra wrote:\n>>> Attached is a rebased patch, addressing most of the remaining issues.\n>>\n>> This looks okay to me, if the two FIXMEs are addressed.  
Remember to\n>> also update protocol.sgml if you change LOGICAL_REP_MSG_SEQUENCE.\n> \n> Thanks. Do we want to use a different constant for the sequence message?\n> I've used 'X' for the WIP patch, but maybe there's a better value?\n\nI would do small 's'. Alternatively, 'Q'/'q' is still available, too.\n\n\n", "msg_date": "Tue, 22 Mar 2022 09:09:16 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Mon, Mar 21, 2022 at 4:25 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> Attached is a rebased patch, addressing most of the remaining issues.\n>\n\nIt appears that on the apply side, the patch always creates a new\nrelfilenode irrespective of whether the sequence message is\ntransactional or not. Is it required to create a new relfilenode for\nnon-transactional messages? If not that could be costly?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 22 Mar 2022 17:39:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n> On 22. 3. 2022, at 13:09, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> On Mon, Mar 21, 2022 at 4:25 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> \n>> Hi,\n>> \n>> Attached is a rebased patch, addressing most of the remaining issues.\n>> \n> \n> It appears that on the apply side, the patch always creates a new\n> relfilenode irrespective of whether the sequence message is\n> transactional or not. Is it required to create a new relfilenode for\n> non-transactional messages? 
If not that could be costly?\n> \n\n\nThat's a good catch, I think we should just write the page in the non-transactional case, no need to mess with relnodes.\n\n\nPetr\n\n", "msg_date": "Tue, 22 Mar 2022 13:11:29 +0100", "msg_from": "Petr Jelinek <petr.jelinek@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Tue, Mar 22, 2022 at 5:41 PM Petr Jelinek\n<petr.jelinek@enterprisedb.com> wrote:\n>\n> > On 22. 3. 2022, at 13:09, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Mar 21, 2022 at 4:25 AM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> Hi,\n> >>\n> >> Attached is a rebased patch, addressing most of the remaining issues.\n> >>\n> >\n> > It appears that on the apply side, the patch always creates a new\n> > relfilenode irrespective of whether the sequence message is\n> > transactional or not. Is it required to create a new relfilenode for\n> > non-transactional messages? If not that could be costly?\n> >\n>\n>\n> That's a good catch, I think we should just write the page in the non-transactional case, no need to mess with relnodes.\n>\n\nWhat if the current node has also incremented from the existing\nsequence? Basically, how will we deal with conflicts? It seems we will\noverwrite the actions done on the existing node which means sequence\nvalues can go back.\n\nOn looking a bit more closely, I think I see some more fundamental\nproblems here:\n\n* Don't we need some syncing mechanism between apply worker and\nsequence sync worker so that apply worker skips the sequence changes\ntill the sync worker is finished, otherwise, there is a risk of one\noverriding the values of the other?\n\n* Currently, the patch uses one sync worker per sequence. 
It seems to\nbe a waste of resources considering apart from one additional process,\nwe need origin/slot to sync each sequence.\n\n* Don't we need explicit privilege checking before applying sequence\ndata as we do in commit a2ab9c06ea15fbcb2bfde570986a06b37f52bcca for\ntables?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 23 Mar 2022 17:20:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "> On 23. 3. 2022, at 12:50, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> On Tue, Mar 22, 2022 at 5:41 PM Petr Jelinek\n> <petr.jelinek@enterprisedb.com <mailto:petr.jelinek@enterprisedb.com>> wrote:\n>> \n>>> On 22. 3. 2022, at 13:09, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>> \n>>> On Mon, Mar 21, 2022 at 4:25 AM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>> \n>>>> Hi,\n>>>> \n>>>> Attached is a rebased patch, addressing most of the remaining issues.\n>>>> \n>>> \n>>> It appears that on the apply side, the patch always creates a new\n>>> relfilenode irrespective of whether the sequence message is\n>>> transactional or not. Is it required to create a new relfilenode for\n>>> non-transactional messages? If not that could be costly?\n>>> \n>> \n>> \n>> That's a good catch, I think we should just write the page in the non-transactional case, no need to mess with relnodes.\n>> \n> \n> What if the current node has also incremented from the existing\n> sequence? Basically, how will we deal with conflicts? It seems we will\n> overwrite the actions done on the existing node which means sequence\n> values can go back.\n> \n\n\nI think this is perfectly acceptable behavior, we are replicating state from upstream, not reconciling state on downstream.\n\nYou can't really use the builtin sequences to implement distributed sequence via replication. 
If user wants to write to both nodes they should not replicate the sequence value and instead offset the sequence on each node so they produce different ranges, that's quite common approach. One day we might want revisit adding support for custom sequence AMs.\n\n\n> * Currently, the patch uses one sync worker per sequence. It seems to\nbe a waste of resources considering apart from one additional process,\nwe need origin/slot to sync each sequence.\n> \n\n\nThis is indeed wasteful but not something that I'd consider blocker for the patch personally.\n\n-- \nPetr\n", "msg_date": "Wed, 23 Mar 2022 13:46:29 +0100", "msg_from": "Petr Jelinek <petr.jelinek@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 3/23/22 13:46, Petr Jelinek wrote:\n> \n>> On 23. 3. 2022, at 12:50, Amit Kapila <amit.kapila16@gmail.com\n>> <mailto:amit.kapila16@gmail.com>> wrote:\n>>\n>> On Tue, Mar 22, 2022 at 5:41 PM Petr Jelinek\n>> <petr.jelinek@enterprisedb.com <mailto:petr.jelinek@enterprisedb.com>>\n>> wrote:\n>>>\n>>>> On 22. 3. 2022, at 13:09, Amit Kapila <amit.kapila16@gmail.com\n>>>> <mailto:amit.kapila16@gmail.com>> wrote:\n>>>>\n>>>> On Mon, Mar 21, 2022 at 4:25 AM Tomas Vondra\n>>>> <tomas.vondra@enterprisedb.com\n>>>> <mailto:tomas.vondra@enterprisedb.com>> wrote:\n>>>>>\n>>>>> Hi,\n>>>>>\n>>>>> Attached is a rebased patch, addressing most of the remaining issues.\n>>>>>\n>>>>\n>>>> It appears that on the apply side, the patch always creates a new\n>>>> relfilenode irrespective of whether the sequence message is\n>>>> transactional or not. Is it required to create a new relfilenode for\n>>>> non-transactional messages? If not that could be costly?\n>>>>\n>>>\n>>>\n>>> That's a good catch, I think we should just write the page in the\n>>> non-transactional case, no need to mess with relnodes.\n>>>\n>>\n>> What if the current node has also incremented from the existing\n>> sequence? Basically, how will we deal with conflicts? 
It seems we will\n>> overwrite the actions done on the existing node which means sequence\n>> values can go back.\n>>\n> \n> \n> I think this is perfectly acceptable behavior, we are replicating state\n> from upstream, not reconciling state on downstream.\n> \n> You can't really use the builtin sequences to implement distributed\n> sequence via replication. If user wants to write to both nodes they\n> should not replicate the sequence value and instead offset the sequence\n> on each node so they produce different ranges, that's quite common\n> approach. One day we might want revisit adding support for custom\n> sequence AMs.\n> \n\nExactly. Moreover it's about the same behavior as if you update table\ndata on the subscriber, and then an UPDATE gets replicated and\noverwrites the local change.\n\nAttached is a patch fixing the relfilenode issue - now we only allocate\na new relfilenode for the transactional case, and an in-place update\nsimilar to a setval() otherwise. And thanks for noticing this.\n\n> \n>> * Currently, the patch uses one sync worker per sequence. It seems to\n>> be a waste of resources considering apart from one additional process,\n>> we need origin/slot to sync each sequence.\n>>\n> \n> \n> This is indeed wasteful but not something that I'd consider blocker for\n> the patch personally.\n> \n\nRight, and the same argument can be made for tablesync of tiny tables\n(which a sequence essentially is). 
I'm sure there are ways to improve\nthis, but that can be done later.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 23 Mar 2022 23:30:06 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "Awesome to see this get committed, thanks Tomas.\n\nIs there anything left or shall I update the CF entry to committed?\n\n\n", "msg_date": "Thu, 24 Mar 2022 17:52:45 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "Hi,\n\nPushed, after going through the patch once more, addressed the remaining\nFIXMEs, corrected a couple places in the docs and comments, etc. Minor\ntweaks, nothing important.\n\nI've been thinking about the grammar a bit more after pushing, and I\nrealized that maybe it'd be better to handle the FOR ALL TABLES /\nSEQUENCES clause as PublicationObjSpec, not as a separate/special case.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 24 Mar 2022 22:59:11 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 3/24/22 22:52, Greg Stark wrote:\n> Awesome to see this get committed, thanks Tomas.\n> \n> Is there anything left or shall I update the CF entry to committed?\n\nYeah, let's mark it as committed. 
I was waiting for some feedback from\nthe buildfarm - there are some failures, but it seems unrelated.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 24 Mar 2022 23:01:29 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Fri, Mar 25, 2022 at 3:29 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Pushed.\n>\n\nSome of the comments given by me [1] don't seem to be addressed or\nresponded to. Let me try to say again for the ease of discussion:\n\n* Don't we need some syncing mechanism between apply worker and\nsequence sync worker so that apply worker skips the sequence changes\ntill the sync worker is finished, otherwise, there is a risk of one\noverriding the values of the other? See how we take care of this for a\ntable in should_apply_changes_for_rel() and its callers. If we don't\ndo this for sequences for some reason then probably a comment\nsomewhere is required.\n\n* Don't we need explicit privilege checking before applying sequence\ndata as we do in commit a2ab9c06ea15fbcb2bfde570986a06b37f52bcca for\ntables?\n\nFew new comments:\n=================\n1. A simple test like the below crashes for me:\npostgres=# create sequence s1;\nCREATE SEQUENCE\npostgres=# create sequence s2;\nCREATE SEQUENCE\npostgres=# create publication pub1 for sequence s1, s2;\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\n2. In apply_handle_sequence() do we need AccessExclusiveLock for\nnon-transactional case?\n\n3. In apply_handle_sequence(), I think for transactional case, we need\nto skip the operation, if the skip lsn is set. 
See how we skip in\napply_handle_insert() and similar functions.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1Jn-DttQ%3D4Pdh9YCe1w%2BzGbgC%2B0uR0sfg2RtkjiAPmB9g%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 25 Mar 2022 09:31:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Fri, Mar 25, 2022 at 6:59 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> Pushed, after going through the patch once more, addressed the remaining\n> FIXMEs, corrected a couple places in the docs and comments, etc. Minor\n> tweaks, nothing important.\n>\n\nThe commit updates tab-completion for ALTER PUBLICATION but seems not\nto update for CREATE PUBLICATION. I've attached a patch for that.\n\nAlso, the commit add a new pgoutput option \"sequences\":\n\n+ else if (strcmp(defel->defname, \"sequences\") == 0)\n+ {\n+ if (sequences_option_given)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"conflicting\nor redundant options\")));\n+ sequences_option_given = true;\n+\n+ data->sequences = defGetBoolean(defel);\n+ }\n\nBut as far as I read changes, there is no use of this option, and this\ncode is not tested. Can we remove it or is it for upcoming changes?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Fri, 25 Mar 2022 16:00:13 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 3/25/22 05:01, Amit Kapila wrote:\n> On Fri, Mar 25, 2022 at 3:29 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Pushed.\n>>\n> \n> Some of the comments given by me [1] don't seem to be addressed or\n> responded to. Let me try to say again for the ease of discussion:\n> \n\nD'oh! 
I got distracted by Petr's response to that message, and missed\nthis part ...\n\n> * Don't we need some syncing mechanism between apply worker and\n> sequence sync worker so that apply worker skips the sequence changes\n> till the sync worker is finished, otherwise, there is a risk of one\n> overriding the values of the other? See how we take care of this for a\n> table in should_apply_changes_for_rel() and its callers. If we don't\n> do this for sequences for some reason then probably a comment\n> somewhere is required.\n> \n\nHow would that happen? If we're effectively setting the sequence as a\nside effect of inserting the data, then why should we even replicate the\nsequence? We'll have the problem later too, no?\n\n> * Don't we need explicit privilege checking before applying sequence\n> data as we do in commit a2ab9c06ea15fbcb2bfde570986a06b37f52bcca for\n> tables?\n> \n\nSo essentially something like TargetPrivilegesCheck in the worker? I\nthink you're probably right we need something like that.\n\n> Few new comments:\n> =================\n> 1. A simple test like the below crashes for me:\n> postgres=# create sequence s1;\n> CREATE SEQUENCE\n> postgres=# create sequence s2;\n> CREATE SEQUENCE\n> postgres=# create publication pub1 for sequence s1, s2;\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> \n\nYeah, preprocess_pubobj_list seems to be a few bricks shy. I have a fix,\nwill push shortly.\n\n> 2. In apply_handle_sequence() do we need AccessExclusiveLock for\n> non-transactional case?\n> \n\nGood catch. This lock was inherited from ResetSequence, but now that the\ntransactional case works differently, we probably don't need it.\n\n> 3. In apply_handle_sequence(), I think for transactional case, we need\n> to skip the operation, if the skip lsn is set. 
See how we skip in\n> apply_handle_insert() and similar functions.\n> \n\nRight.\n\n\nThanks for these reports!\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 25 Mar 2022 11:26:39 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 3/25/22 08:00, Masahiko Sawada wrote:\n> On Fri, Mar 25, 2022 at 6:59 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Hi,\n>>\n>> Pushed, after going through the patch once more, addressed the remaining\n>> FIXMEs, corrected a couple places in the docs and comments, etc. Minor\n>> tweaks, nothing important.\n>>\n> \n> The commit updates tab-completion for ALTER PUBLICATION but seems not\n> to update for CREATE PUBLICATION. I've attached a patch for that.\n> \n\nThanks. I'm pretty sure the patch did that, but it likely got lost in\none of the rebases due to a conflict. Too bad we don't have tests for\ntab-complete. Will fix.\n\n> Also, the commit add a new pgoutput option \"sequences\":\n> \n> + else if (strcmp(defel->defname, \"sequences\") == 0)\n> + {\n> + if (sequences_option_given)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"conflicting\n> or redundant options\")));\n> + sequences_option_given = true;\n> +\n> + data->sequences = defGetBoolean(defel);\n> + }\n> \n> But as far as I read changes, there is no use of this option, and this\n> code is not tested. Can we remove it or is it for upcoming changes?\n> \n\npgoutput_sequence uses this\n\n\tif (!data->sequences)\n\t\treturn;\n\nThis was inspired by what we do for logical messages, but maybe there's\nan argument we don't need this, considering we have \"sequence\" action\nand that a sequence has to be added to the publication. 
I don't think\nthere's any future patch relying on this (and it could add it back, if\nneeded).\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 25 Mar 2022 11:34:14 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Fri, Mar 25, 2022 at 3:56 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n> On 3/25/22 05:01, Amit Kapila wrote:\n> > On Fri, Mar 25, 2022 at 3:29 AM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> Pushed.\n> >>\n> >\n> > Some of the comments given by me [1] don't seem to be addressed or\n> > responded to. Let me try to say again for the ease of discussion:\n> >\n>\n> D'oh! I got distracted by Petr's response to that message, and missed\n> this part ...\n>\n> > * Don't we need some syncing mechanism between apply worker and\n> > sequence sync worker so that apply worker skips the sequence changes\n> > till the sync worker is finished, otherwise, there is a risk of one\n> > overriding the values of the other? See how we take care of this for a\n> > table in should_apply_changes_for_rel() and its callers. If we don't\n> > do this for sequences for some reason then probably a comment\n> > somewhere is required.\n> >\n>\n> How would that happen? If we're effectively setting the sequence as a\n> side effect of inserting the data, then why should we even replicate the\n> sequence?\n>\n\nI was talking just about sequence values here, considering that some\nsequence is just replicating based on nextval. I think the problem is\nthat apply worker might override what copy has done if an apply worker\nis behind the sequence sync worker as both can run in parallel. Let me\ntry to take some theoretical example to explain this:\n\nAssume, at LSN 10000, the value of sequence s1 is 10. 
Then by LSN\n12000, the value of s1 becomes 20. Now, say copy decides to copy the\nsequence value till LSN 12000 which means it will make the value as 20\non the subscriber, now, in parallel, apply worker can process LSN\n10000 and make it again 10. Apply worker might end up redoing all\nsequence operations along with some transactional ones where we\nrecreate the file. I am not sure what exact problem it can lead to but\nI think we don't need to redo the work.\n\n We'll have the problem later too, no?\n>\n> > * Don't we need explicit privilege checking before applying sequence\n> > data as we do in commit a2ab9c06ea15fbcb2bfde570986a06b37f52bcca for\n> > tables?\n> >\n>\n> So essentially something like TargetPrivilegesCheck in the worker?\n>\n\nRight.\n\nFew more comments:\n==================\n1.\n@@ -636,7 +704,7 @@ CreatePublication(ParseState *pstate,\nCreatePublicationStmt *stmt)\n get_database_name(MyDatabaseId));\n\n /* FOR ALL TABLES requires superuser */\n- if (stmt->for_all_tables && !superuser())\n+ if (for_all_tables && !superuser())\n ereport(ERROR,\n (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n errmsg(\"must be superuser to create FOR ALL TABLES\npublication\")));\n\nDon't we need a similar check for 'for_all_schema' publications?\n\n2.\n+<varlistentry>\n+<term>\n+ Int8\n+</term>\n+<listitem>\n+<para>\n+ 1 if the sequence update is transactions, 0 otherwise.\n\nShall we say transactional instead of transactions?\n\n3.\n+/*\n+ * Determine object type given the object type set for a schema.\n+ */\n+char\n+pub_get_object_type_for_relkind(char relkind)\n\nShouldn't it be 'relation' instead of 'schema' at the end of the sentence?\n\n4.\n@@ -1739,13 +1804,13 @@ get_rel_sync_entry(PGOutputData *data,\nRelation relation)\n {\n Oid schemaId = get_rel_namespace(relid);\n List *pubids = GetRelationPublications(relid);\n-\n+ char objectType =\npub_get_object_type_for_relkind(get_rel_relkind(relid));\n\nA few lines after this we are again getting relkind which is 
not a big\ndeal but OTOH there doesn't seem to be a need to fetch the same thing\ntwice from the cache.\n\n5.\n+\n+ /* Check that user is allowed to manipulate the publication tables. */\n+ if (sequences && pubform->puballsequences)\n\n/tables/sequences\n\n6.\n+apply_handle_sequence(StringInfo s)\n{\n...\n+\n+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,\n+ seq.seqname, -1),\n+ RowExclusiveLock, false);\n...\n}\n\nAs here, we are using missing_ok, if the sequence doesn't exist, it\nwill give a message like: \"ERROR: relation \"public.s1\" does not\nexist\" whereas for tables we give a slightly more clear message like:\n\"ERROR: logical replication target relation \"public.t1\" does not\nexist\". This is handled via logicalrep_rel_open().\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 25 Mar 2022 16:51:12 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\nOn 3/25/22 12:21, Amit Kapila wrote:\n> On Fri, Mar 25, 2022 at 3:56 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>>\n>> On 3/25/22 05:01, Amit Kapila wrote:\n>>> On Fri, Mar 25, 2022 at 3:29 AM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>>> Pushed.\n>>>>\n>>>\n>>> Some of the comments given by me [1] don't seem to be addressed or\n>>> responded to. Let me try to say again for the ease of discussion:\n>>>\n>>\n>> D'oh! I got distracted by Petr's response to that message, and missed\n>> this part ...\n>>\n>>> * Don't we need some syncing mechanism between apply worker and\n>>> sequence sync worker so that apply worker skips the sequence changes\n>>> till the sync worker is finished, otherwise, there is a risk of one\n>>> overriding the values of the other? See how we take care of this for a\n>>> table in should_apply_changes_for_rel() and its callers. 
If we don't\n>>> do this for sequences for some reason then probably a comment\n>>> somewhere is required.\n>>>\n>>\n>> How would that happen? If we're effectively setting the sequence as a\n>> side effect of inserting the data, then why should we even replicate the\n>> sequence?\n>>\n> \n> I was talking just about sequence values here, considering that some\n> sequence is just replicating based on nextval. I think the problem is\n> that apply worker might override what copy has done if an apply worker\n> is behind the sequence sync worker as both can run in parallel. Let me\n> try to take some theoretical example to explain this:\n> \n> Assume, at LSN 10000, the value of sequence s1 is 10. Then by LSN\n> 12000, the value of s1 becomes 20. Now, say copy decides to copy the\n> sequence value till LSN 12000 which means it will make the value as 20\n> on the subscriber, now, in parallel, apply worker can process LSN\n> 10000 and make it again 10. Apply worker might end up redoing all\n> sequence operations along with some transactional ones where we\n> recreate the file. I am not sure what exact problem it can lead to but\n> I think we don't need to redo the work.\n> \n> We'll have the problem later too, no?\n>\n\nAh, I was confused why this would be an issue for sequences and not for\nplain tables, but now I realize apply_handle_sequence() is not called in\napply_handle_sequence. 
Yes, that's probably a thinko.\n\n\n>>> * Don't we need explicit privilege checking before applying sequence\n>>> data as we do in commit a2ab9c06ea15fbcb2bfde570986a06b37f52bcca for\n>>> tables?\n>>>\n>>\n>> So essentially something like TargetPrivilegesCheck in the worker?\n>>\n> \n> Right.\n> \n\nOK, will do.\n\n> Few more comments:\n> ==================\n> 1.\n> @@ -636,7 +704,7 @@ CreatePublication(ParseState *pstate,\n> CreatePublicationStmt *stmt)\n> get_database_name(MyDatabaseId));\n> \n> /* FOR ALL TABLES requires superuser */\n> - if (stmt->for_all_tables && !superuser())\n> + if (for_all_tables && !superuser())\n> ereport(ERROR,\n> (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> errmsg(\"must be superuser to create FOR ALL TABLES\n> publication\")));\n> \n> Don't we need a similar check for 'for_all_schema' publications?\n> \n\nI think you mean \"for_all_sequences\", right?\n\n> 2.\n> +<varlistentry>\n> +<term>\n> + Int8\n> +</term>\n> +<listitem>\n> +<para>\n> + 1 if the sequence update is transactions, 0 otherwise.\n> \n> Shall we say transactional instead of transactions?\n> \n> 3.\n> +/*\n> + * Determine object type given the object type set for a schema.\n> + */\n> +char\n> +pub_get_object_type_for_relkind(char relkind)\n> \n> Shouldn't it be 'relation' instead of 'schema' at the end of the sentence?\n> \n> 4.\n> @@ -1739,13 +1804,13 @@ get_rel_sync_entry(PGOutputData *data,\n> Relation relation)\n> {\n> Oid schemaId = get_rel_namespace(relid);\n> List *pubids = GetRelationPublications(relid);\n> -\n> + char objectType =\n> pub_get_object_type_for_relkind(get_rel_relkind(relid));\n> \n> A few lines after this we are again getting relkind which is not a big\n> deal but OTOH there doesn't seem to be a need to fetch the same thing\n> twice from the cache.\n> \n> 5.\n> +\n> + /* Check that user is allowed to manipulate the publication tables. 
*/\n> + if (sequences && pubform->puballsequences)\n> \n> /tables/sequences\n> \n> 6.\n> +apply_handle_sequence(StringInfo s)\n> {\n> ...\n> +\n> + relid = RangeVarGetRelid(makeRangeVar(seq.nspname,\n> + seq.seqname, -1),\n> + RowExclusiveLock, false);\n> ...\n> }\n> \n> As here, we are using missing_ok, if the sequence doesn't exist, it\n> will give a message like: \"ERROR: relation \"public.s1\" does not\n> exist\" whereas for tables we give a slightly more clear message like:\n> \"ERROR: logical replication target relation \"public.t1\" does not\n> exist\". This is handled via logicalrep_rel_open().\n> \n\nThanks, I'll look at rewording these comments and messages.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 25 Mar 2022 12:59:42 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Fri, Mar 25, 2022 at 3:29 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> Pushed, after going through the patch once more, addressed the remaining\n> FIXMEs, corrected a couple places in the docs and comments, etc. 
Minor\n> tweaks, nothing important.\n\nWhile rebasing patch [1] I found a couple of comments:\nstatic void\n ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,\n- List **rels, List **schemas)\n+ List **tables, List **sequences,\n+ List **tables_schemas, List **sequences_schemas,\n+ List **schemas)\n {\n ListCell *cell;\n PublicationObjSpec *pubobj;\n@@ -185,12 +194,23 @@ ObjectsInPublicationToOids(List\n*pubobjspec_list, ParseState *pstate,\n switch (pubobj->pubobjtype)\n {\n case PUBLICATIONOBJ_TABLE:\n- *rels = lappend(*rels, pubobj->pubtable);\n+ *tables = lappend(*tables, pubobj->pubtable);\n+ break;\n+ case PUBLICATIONOBJ_SEQUENCE:\n+ *sequences = lappend(*sequences, pubobj->pubtable);\n break;\n case PUBLICATIONOBJ_TABLES_IN_SCHEMA:\n schemaid = get_namespace_oid(pubobj->name, false);\n\n /* Filter out duplicates if user specifies \"sch1, sch1\" */\n+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);\n+ *schemas = list_append_unique_oid(*schemas, schemaid);\n+ break;\n\nNow tables_schemas and sequence_schemas are being updated and used in\nObjectsInPublicationToOids, schema parameter is no longer being used\nafter processing in ObjectsInPublicationToOids, I felt we can remove\nthat parameter.\n\n /* ALTER PUBLICATION <name> ADD */\n else if (Matches(\"ALTER\", \"PUBLICATION\", MatchAny, \"ADD\"))\n- COMPLETE_WITH(\"ALL TABLES IN SCHEMA\", \"TABLE\");\n+ COMPLETE_WITH(\"ALL TABLES IN SCHEMA\", \"ALL SEQUENCES IN SCHEMA\",\n\"TABLE\", \"SEQUENCE\");\n\nTab completion of alter publication for ADD and DROP is the same, we\ncould combine it.\n\nAttached a patch for the same.\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CALDaNm3%3DJrucjhiiwsYQw5-PGtBHFONa6F7hhWCXMsGvh%3DtamA%40mail.gmail.com\n\nRegards,\nVignesh", "msg_date": "Fri, 25 Mar 2022 20:04:57 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": 
"On 3/25/22 12:59, Tomas Vondra wrote:\n> \n> On 3/25/22 12:21, Amit Kapila wrote:\n>> On Fri, Mar 25, 2022 at 3:56 PM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>>\n>>> On 3/25/22 05:01, Amit Kapila wrote:\n>>>> On Fri, Mar 25, 2022 at 3:29 AM Tomas Vondra\n>>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>>\n>>>>> Pushed.\n>>>>>\n>>>>\n>>>> Some of the comments given by me [1] don't seem to be addressed or\n>>>> responded to. Let me try to say again for the ease of discussion:\n>>>>\n>>>\n>>> D'oh! I got distracted by Petr's response to that message, and missed\n>>> this part ...\n>>>\n>>>> * Don't we need some syncing mechanism between apply worker and\n>>>> sequence sync worker so that apply worker skips the sequence changes\n>>>> till the sync worker is finished, otherwise, there is a risk of one\n>>>> overriding the values of the other? See how we take care of this for a\n>>>> table in should_apply_changes_for_rel() and its callers. If we don't\n>>>> do this for sequences for some reason then probably a comment\n>>>> somewhere is required.\n>>>>\n>>>\n>>> How would that happen? If we're effectively setting the sequence as a\n>>> side effect of inserting the data, then why should we even replicate the\n>>> sequence?\n>>>\n>>\n>> I was talking just about sequence values here, considering that some\n>> sequence is just replicating based on nextval. I think the problem is\n>> that apply worker might override what copy has done if an apply worker\n>> is behind the sequence sync worker as both can run in parallel. Let me\n>> try to take some theoretical example to explain this:\n>>\n>> Assume, at LSN 10000, the value of sequence s1 is 10. Then by LSN\n>> 12000, the value of s1 becomes 20. Now, say copy decides to copy the\n>> sequence value till LSN 12000 which means it will make the value as 20\n>> on the subscriber, now, in parallel, apply worker can process LSN\n>> 10000 and make it again 10. 
Apply worker might end up redoing all\n>> sequence operations along with some transactional ones where we\n>> recreate the file. I am not sure what exact problem it can lead to but\n>> I think we don't need to redo the work.\n>>\n>> We'll have the problem later too, no?\n>>\n> \n> Ah, I was confused why this would be an issue for sequences and not for\n> plain tables, but now I realize apply_handle_sequence() is not called in\n> apply_handle_sequence. Yes, that's probably a thinko.\n> \n\nHmm, so fixing this might be a bit trickier than I expected.\n\nFirstly, currently we only send nspname/relname in the sequence message,\nnot the remote OID or schema. The idea was that for sequences we don't\nreally need schema info, so this seemed OK.\n\nBut should_apply_changes_for_rel() needs LogicalRepRelMapEntry, and to\ncreate/maintain that those records we need to send the schema.\n\nAttached is a WIP patch that does that.\n\nTwo places need more work, I think:\n\n1) maybe_send_schema needs ReorderBufferChange, but we don't have that\nfor sequences, we only have TXN. I created a simple wrapper, but maybe\nwe should just tweak maybe_send_schema to use TXN.\n\n2) The transaction handling is a bit confusing. The non-transactional\nincrements won't have any explicit commit later, so we can't just rely\non begin_replication_step/end_replication_step. But I want to try\nspending a bit more time on this.\n\n\nBut there's a more serious issue, I think. So far, we allowed this:\n\n  BEGIN;\n  CREATE SEQUENCE s2;\n  ALTER PUBLICATION p ADD SEQUENCE s2;\n  INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);\n  COMMIT;\n\nand the behavior was that we replicated the changes. But with the patch\napplied, that no longer happens, because should_apply_changes_for_rel\nsays the change should not be applied.\n\nAnd after thinking about this, I think that's correct - we can't apply\nchanges until ALTER SUBSCRIPTION ... 
REFRESH PUBLICATION gets executed,\nand we can't do that until the transaction commits.\n\nSo I guess that's correct, and the current behavior is a bug.\n\nFor a while I was thinking that maybe this means we don't need the\ntransactional behavior at all, but I think we do - we have to handle\nALTER SEQUENCE cases that are transactional.\n\nDoes that make sense, Amit?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 25 Mar 2022 17:50:49 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 3/25/22 15:34, vignesh C wrote:\n> On Fri, Mar 25, 2022 at 3:29 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Hi,\n>>\n>> Pushed, after going through the patch once more, addressed the remaining\n>> FIXMEs, corrected a couple places in the docs and comments, etc. Minor\n>> tweaks, nothing important.\n> \n> While rebasing patch [1] I found a couple of comments:\n> static void\n> ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,\n> - List **rels, List **schemas)\n> + List **tables, List **sequences,\n> + List **tables_schemas, List **sequences_schemas,\n> + List **schemas)\n> {\n> ListCell *cell;\n> PublicationObjSpec *pubobj;\n> @@ -185,12 +194,23 @@ ObjectsInPublicationToOids(List\n> *pubobjspec_list, ParseState *pstate,\n> switch (pubobj->pubobjtype)\n> {\n> case PUBLICATIONOBJ_TABLE:\n> - *rels = lappend(*rels, pubobj->pubtable);\n> + *tables = lappend(*tables, pubobj->pubtable);\n> + break;\n> + case PUBLICATIONOBJ_SEQUENCE:\n> + *sequences = lappend(*sequences, pubobj->pubtable);\n> break;\n> case PUBLICATIONOBJ_TABLES_IN_SCHEMA:\n> schemaid = get_namespace_oid(pubobj->name, false);\n> \n> /* Filter out duplicates if user specifies \"sch1, sch1\" */\n> + *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);\n> + 
*schemas = list_append_unique_oid(*schemas, schemaid);\n> + break;\n\nNow tables_schemas and sequence_schemas are being updated and used in\nObjectsInPublicationToOids, schema parameter is no longer being used\nafter processing in ObjectsInPublicationToOids, I felt we can remove\nthat parameter.\n>\n\nThanks! That's a nice simplification, I'll get that pushed in a couple\nminutes.\n\n> /* ALTER PUBLICATION <name> ADD */\n> else if (Matches(\"ALTER\", \"PUBLICATION\", MatchAny, \"ADD\"))\n> - COMPLETE_WITH(\"ALL TABLES IN SCHEMA\", \"TABLE\");\n> + COMPLETE_WITH(\"ALL TABLES IN SCHEMA\", \"ALL SEQUENCES IN SCHEMA\",\n> \"TABLE\", \"SEQUENCE\");\n> \n> Tab completion of alter publication for ADD and DROP is the same, we\n> could combine it.\n> \n\nWe could, but I find these combined rules harder to read, so I'll keep\nthe current tab-completion.\n\n> Attached a patch for the same.\n> Thoughts?\n\nThanks for taking a look! Appreciated.\n\n> \n> [1] - https://www.postgresql.org/message-id/CALDaNm3%3DJrucjhiiwsYQw5-PGtBHFONa6F7hhWCXMsGvh%3DtamA%40mail.gmail.com\n> \n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 25 Mar 2022 20:58:11 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "Hi,\n\nI've fixed most of the reported issues (or at least I think so), with\nthe exception of those in apply_handle_sequence function, i.e.:\n\n1) properly coordinating with the tablesync worker\n\n2) considering skip_lsn, skipping changes\n\n3) missing privilege check, similar to TargetPrivilegesCheck\n\n4) nicer error message if the sequence does not exist\n\n\nThe apply_handle_sequence stuff seems to be inter-related, so I plan to\ndeal with that in a single separate commit - the main part being the\ntablesync coordination, per the fix I shared earlier today. 
But I need\ntime to think about that, I don't want to rush that.\n\n\nThanks for the feedback!\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 25 Mar 2022 21:10:41 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Fri, Mar 25, 2022 at 10:20 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hmm, so fixing this might be a bit trickier than I expected.\n>\n> Firstly, currently we only send nspname/relname in the sequence message,\n> not the remote OID or schema. The idea was that for sequences we don't\n> really need schema info, so this seemed OK.\n>\n> But should_apply_changes_for_rel() needs LogicalRepRelMapEntry, and to\n> create/maintain that those records we need to send the schema.\n>\n> Attached is a WIP patch does that.\n>\n> Two places need more work, I think:\n>\n> 1) maybe_send_schema needs ReorderBufferChange, but we don't have that\n> for sequences, we only have TXN. I created a simple wrapper, but maybe\n> we should just tweak maybe_send_schema to use TXN.\n>\n> 2) The transaction handling in is a bit confusing. The non-transactional\n> increments won't have any explicit commit later, so we can't just rely\n> on begin_replication_step/end_replication_step. But I want to try\n> spending a bit more time on this.\n>\n\nI didn't understand what you want to say in point (2).\n\n>\n> But there's a more serious issue, I think. So far, we allowed this:\n>\n> BEGIN;\n> CREATE SEQUENCE s2;\n> ALTER PUBLICATION p ADD SEQUENCE s2;\n> INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);\n> COMMIT;\n>\n> and the behavior was that we replicated the changes. 
But with the patch\n> applied, that no longer happens, because should_apply_changes_for_rel\n> says the change should not be applied.\n>\n> And after thinking about this, I think that's correct - we can't apply\n> changes until ALTER SUBSCRIPTION ... REFRESH PUBLICATION gets executed,\n> and we can't do that until the transaction commits.\n>\n> So I guess that's correct, and the current behavior is a bug.\n>\n\nYes, I also think that is a bug.\n\n> For a while I was thinking that maybe this means we don't need the\n> transactional behavior at all, but I think we do - we have to handle\n> ALTER SEQUENCE cases that are transactional.\n>\n\nI need some time to think about this. At all places, it is mentioned\nas creating a sequence for transactional cases which at the very least\nneed some tweak.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 26 Mar 2022 12:58:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 3/26/22 08:28, Amit Kapila wrote:\n> On Fri, Mar 25, 2022 at 10:20 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Hmm, so fixing this might be a bit trickier than I expected.\n>>\n>> Firstly, currently we only send nspname/relname in the sequence message,\n>> not the remote OID or schema. The idea was that for sequences we don't\n>> really need schema info, so this seemed OK.\n>>\n>> But should_apply_changes_for_rel() needs LogicalRepRelMapEntry, and to\n>> create/maintain that those records we need to send the schema.\n>>\n>> Attached is a WIP patch does that.\n>>\n>> Two places need more work, I think:\n>>\n>> 1) maybe_send_schema needs ReorderBufferChange, but we don't have that\n>> for sequences, we only have TXN. I created a simple wrapper, but maybe\n>> we should just tweak maybe_send_schema to use TXN.\n>>\n>> 2) The transaction handling in is a bit confusing. 
The non-transactional\n>> increments won't have any explicit commit later, so we can't just rely\n>> on begin_replication_step/end_replication_step. But I want to try\n>> spending a bit more time on this.\n>>\n>\n> I didn't understand what you want to say in point (2).\n>\n\nMy point is that apply_handle_sequence() either needs to use the same\ntransaction handling as other apply methods, or start (and commit) a\nseparate transaction for the \"transactional\" case.\n\nWhich means we can't use the begin_replication_step/end_replication_step\nand the current code seems a bit complex. And I'm not sure it's quite\ncorrect. So this place needs more work.\n\n>>\n>> But there's a more serious issue, I think. So far, we allowed this:\n>>\n>> BEGIN;\n>> CREATE SEQUENCE s2;\n>> ALTER PUBLICATION p ADD SEQUENCE s2;\n>> INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);\n>> COMMIT;\n>>\n>> and the behavior was that we replicated the changes. But with the patch\n>> applied, that no longer happens, because should_apply_changes_for_rel\n>> says the change should not be applied.\n>>\n>> And after thinking about this, I think that's correct - we can't apply\n>> changes until ALTER SUBSCRIPTION ... 
REFRESH PUBLICATION gets executed,\n>> and we can't do that until the transaction commits.\n>>\n>> So I guess that's correct, and the current behavior is a bug.\n>>\n> \n> Yes, I also think that is a bug.\n> \n\nOK\n\n>> For a while I was thinking that maybe this means we don't need the\n>> transactional behavior at all, but I think we do - we have to handle\n>> ALTER SEQUENCE cases that are transactional.\n>>\n> \n> I need some time to think about this.\n\nUnderstood.\n\n> At all places, it is mentioned\n> as creating a sequence for transactional cases which at the very least\n> need some tweak.\n> \n\nWhich places?\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 26 Mar 2022 10:56:23 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Sat, Mar 26, 2022 at 3:26 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 3/26/22 08:28, Amit Kapila wrote:\n> >>\n> >> 2) The transaction handling in is a bit confusing. The non-transactional\n> >> increments won't have any explicit commit later, so we can't just rely\n> >> on begin_replication_step/end_replication_step. But I want to try\n> >> spending a bit more time on this.\n> >>\n> >\n> > I didn't understand what you want to say in point (2).\n> >\n>\n> My point is that handle_apply_sequence() either needs to use the same\n> transaction handling as other apply methods, or start (and commit) a\n> separate transaction for the \"transactional\" case.\n>\n> Which means we can't use the begin_replication_step/end_replication_step\n>\n\nWe already call CommitTransactionCommand after end_replication_step at\na few places in that file so as there is no explicit commit in\nnon-transactional case, we can probably call CommitTransactionCommand\nfor it.\n\n> and the current code seems a bit complex. 
And I'm not sure it's quite\n> correct. So this place needs more work.\n>\n\nAgreed.\n\n>\n> >> For a while I was thinking that maybe this means we don't need the\n> >> transactional behavior at all, but I think we do - we have to handle\n> >> ALTER SEQUENCE cases that are transactional.\n> >>\n> >\n> > I need some time to think about this.\n>\n\nWhile thinking about this, I think I see a problem with the\nnon-transactional handling of sequences. It seems that we will skip\nsending non-transactional sequence change if it occurs before the\ndecoding has reached a consistent point but the surrounding commit\noccurs after a consistent point is reached. In such cases, the\ncorresponding DMLs like inserts will be sent but sequence changes\nwon't be sent. For example (this scenario is based on\ntwophase_snapshot.spec),\n\nInitial setup:\n==============\nCreate table t1_seq(c1 int);\nCreate Sequence seq1;\n\nTest Execution via multiple sessions (this test allows insert in\nsession-2 to happen before we reach a consistent point and commit\nhappens after a consistent point):\n=======================================================================================================\n\nSession-2:\nBegin;\nSELECT pg_current_xact_id();\n\nSession-1:\nSELECT 'init' FROM pg_create_logical_replication_slot('test_slot',\n'test_decoding', false, true);\n\nSession-3:\nBegin;\nSELECT pg_current_xact_id();\n\nSession-2:\nCommit;\nBegin;\nINSERT INTO t1_seq SELECT nextval('seq1') FROM generate_series(1,100);\n\nSession-3:\nCommit;\n\nSession-2:\nCommit 'foo'\n\nSession-1:\nSELECT data FROM pg_logical_slot_get_changes('test_slot', NULL, NULL,\n'include-xids', 'false', 'skip-empty-xacts', '1');\n\n data\n----------------------------------------------\n BEGIN\n table public.t1_seq: INSERT: c1[integer]:1\n table public.t1_seq: INSERT: c1[integer]:2\n table public.t1_seq: INSERT: c1[integer]:3\n table public.t1_seq: INSERT: c1[integer]:4\n table public.t1_seq: INSERT: c1[integer]:5\n table 
public.t1_seq: INSERT: c1[integer]:6\n\n\nNow, if we normally try to decode such an insert, the result would be\nsomething like:\n data\n------------------------------------------------------------------------------\n sequence public.seq1: transactional:0 last_value: 33 log_cnt: 0 is_called:1\n sequence public.seq1: transactional:0 last_value: 66 log_cnt: 0 is_called:1\n sequence public.seq1: transactional:0 last_value: 99 log_cnt: 0 is_called:1\n sequence public.seq1: transactional:0 last_value: 132 log_cnt: 0 is_called:1\n BEGIN\n table public.t1_seq: INSERT: c1[integer]:1\n table public.t1_seq: INSERT: c1[integer]:2\n table public.t1_seq: INSERT: c1[integer]:3\n table public.t1_seq: INSERT: c1[integer]:4\n table public.t1_seq: INSERT: c1[integer]:5\n table public.t1_seq: INSERT: c1[integer]:6\n\nThis will create an inconsistent replica as sequence changes won't be\nreplicated. I thought about changing snapshot dealing of\nnon-transactional sequence changes similar to transactional ones but\nthat also won't work because it is only at commit we decide whether we\ncan send the changes.\n\nFor the transactional case, as we are considering the create sequence\noperation as transactional, we would unnecessarily queue them even\nthough that is not required. Basically, they don't need to be\nconsidered transactional and we can simply ignore such messages like\nother DDLs. But for that probably we need to distinguish Alter/Create\ncase which may or may not be straightforward. Now, queuing them is\nprobably harmless unless it causes the transaction to spill/stream.\n\nI still couldn't think completely about cases where a mix of\ntransactional and non-transactional changes occur in the same\ntransaction as I think it somewhat depends on what we want to do about\nthe above cases.\n\n> > At all places, it is mentioned\n> > as creating a sequence for transactional cases which at the very least\n> > need some tweak.\n> >\n>\n> Which places?\n>\n\nIn comments like:\na. 
When decoding sequences, we differentiate between sequences created\nin a (running) transaction and sequences created in other (already\ncommitted) transactions.\nb. ... But for new sequences, we need to handle them in a transactional way, ..\nc. ... Change needs to be handled as transactional, because the\nsequence was created in a transaction that is still running ...\n\nIt seems all these places indicate a scenario of creating a sequence\nwhereas we want to do transactional stuff mainly for Alter.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 28 Mar 2022 10:59:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Sat, Mar 26, 2022 at 6:56 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 3/26/22 08:28, Amit Kapila wrote:\n> > On Fri, Mar 25, 2022 at 10:20 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> Hmm, so fixing this might be a bit trickier than I expected.\n> >>\n> >> Firstly, currently we only send nspname/relname in the sequence message,\n> >> not the remote OID or schema. The idea was that for sequences we don't\n> >> really need schema info, so this seemed OK.\n> >>\n> >> But should_apply_changes_for_rel() needs LogicalRepRelMapEntry, and to\n> >> create/maintain that those records we need to send the schema.\n> >>\n> >> Attached is a WIP patch does that.\n> >>\n> >> Two places need more work, I think:\n> >>\n> >> 1) maybe_send_schema needs ReorderBufferChange, but we don't have that\n> >> for sequences, we only have TXN. I created a simple wrapper, but maybe\n> >> we should just tweak maybe_send_schema to use TXN.\n> >>\n> >> 2) The transaction handling in is a bit confusing. The non-transactional\n> >> increments won't have any explicit commit later, so we can't just rely\n> >> on begin_replication_step/end_replication_step. 
But I want to try\n> >> spending a bit more time on this.\n> >>\n> >\n> > I didn't understand what you want to say in point (2).\n> >\n>\n> My point is that handle_apply_sequence() either needs to use the same\n> transaction handling as other apply methods, or start (and commit) a\n> separate transaction for the \"transactional\" case.\n>\n> Which means we can't use the begin_replication_step/end_replication_step\n> and the current code seems a bit complex. And I'm not sure it's quite\n> correct. So this place needs more work.\n>\n> >>\n> >> But there's a more serious issue, I think. So far, we allowed this:\n> >>\n> >> BEGIN;\n> >> CREATE SEQUENCE s2;\n> >> ALTER PUBLICATION p ADD SEQUENCE s2;\n> >> INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);\n> >> COMMIT;\n> >>\n> >> and the behavior was that we replicated the changes. But with the patch\n> >> applied, that no longer happens, because should_apply_changes_for_rel\n> >> says the change should not be applied.\n> >>\n> >> And after thinking about this, I think that's correct - we can't apply\n> >> changes until ALTER SUBSCRIPTION ... REFRESH PUBLICATION gets executed,\n> >> and we can't do that until the transaction commits.\n> >>\n> >> So I guess that's correct, and the current behavior is a bug.\n> >>\n> >\n> > Yes, I also think that is a bug.\n> >\n>\n> OK\n\nI also think that this is a bug. Given this behavior is a bug and\nnewly-added sequence data should be replicated only after ALTER\nSUBSCRIPTION ... 
REFRESH PUBLICATION, is there any case where the\nsequence message applied on the subscriber is transactional?\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 1 Apr 2022 13:51:54 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Fri, Apr 1, 2022 at 10:22 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Mar 26, 2022 at 6:56 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> > >>\n> > >> But there's a more serious issue, I think. So far, we allowed this:\n> > >>\n> > >> BEGIN;\n> > >> CREATE SEQUENCE s2;\n> > >> ALTER PUBLICATION p ADD SEQUENCE s2;\n> > >> INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);\n> > >> COMMIT;\n> > >>\n> > >> and the behavior was that we replicated the changes. But with the patch\n> > >> applied, that no longer happens, because should_apply_changes_for_rel\n> > >> says the change should not be applied.\n> > >>\n> > >> And after thinking about this, I think that's correct - we can't apply\n> > >> changes until ALTER SUBSCRIPTION ... REFRESH PUBLICATION gets executed,\n> > >> and we can't do that until the transaction commits.\n> > >>\n> > >> So I guess that's correct, and the current behavior is a bug.\n> > >>\n> > >\n> > > Yes, I also think that is a bug.\n> > >\n> >\n> > OK\n>\n> I also think that this is a bug. Given this behavior is a bug and\n> newly-added sequence data should be replicated only after ALTER\n> SUBSCRIPTION ... REFRESH PUBLICATION, is there any case where the\n> sequence message applied on the subscriber is transactional?\n>\n\nIt could be required for Alter Sequence as that can also rewrite the\nrelfilenode. However, IIUC, I think there is a bigger problem with\nnon-transactional sequence implementation as that can cause\ninconsistent replica. 
See the problem description and test case in my\nprevious email [1] (While thinking about this, I think I see a problem\nwith the non-transactional handling of sequences....). Can you please\nonce check that and let me know if I am missing something there? If\nnot, then I think we may need to first think of a solution for\nnon-transactional sequence handling.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1KAFdQEULk%2B4C%3DieWA5UPSUtf_gtqKsFj9J9f2c%3D8hm4g%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 1 Apr 2022 11:11:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 3/28/22 07:29, Amit Kapila wrote:\n> ...\n>\n> While thinking about this, I think I see a problem with the\n> non-transactional handling of sequences. It seems that we will skip\n> sending non-transactional sequence change if it occurs before the\n> decoding has reached a consistent point but the surrounding commit\n> occurs after a consistent point is reached. In such cases, the\n> corresponding DMLs like inserts will be sent but sequence changes\n> won't be sent. 
For example (this scenario is based on\n> twophase_snapshot.spec),\n> \n> Initial setup:\n> ==============\n> Create table t1_seq(c1 int);\n> Create Sequence seq1;\n> \n> Test Execution via multiple sessions (this test allows insert in\n> session-2 to happen before we reach a consistent point and commit\n> happens after a consistent point):\n> =======================================================================================================\n> \n> Session-2:\n> Begin;\n> SELECT pg_current_xact_id();\n> \n> Session-1:\n> SELECT 'init' FROM pg_create_logical_replication_slot('test_slot',\n> 'test_decoding', false, true);\n> \n> Session-3:\n> Begin;\n> SELECT pg_current_xact_id();\n> \n> Session-2:\n> Commit;\n> Begin;\n> INSERT INTO t1_seq SELECT nextval('seq1') FROM generate_series(1,100);\n> \n> Session-3:\n> Commit;\n> \n> Session-2:\n> Commit 'foo'\n> \n> Session-1:\n> SELECT data FROM pg_logical_slot_get_changes('test_slot', NULL, NULL,\n> 'include-xids', 'false', 'skip-empty-xacts', '1');\n> \n> data\n> ----------------------------------------------\n> BEGIN\n> table public.t1_seq: INSERT: c1[integer]:1\n> table public.t1_seq: INSERT: c1[integer]:2\n> table public.t1_seq: INSERT: c1[integer]:3\n> table public.t1_seq: INSERT: c1[integer]:4\n> table public.t1_seq: INSERT: c1[integer]:5\n> table public.t1_seq: INSERT: c1[integer]:6\n> \n> \n> Now, if we normally try to decode such an insert, the result would be\n> something like:\n> data\n> ------------------------------------------------------------------------------\n> sequence public.seq1: transactional:0 last_value: 33 log_cnt: 0 is_called:1\n> sequence public.seq1: transactional:0 last_value: 66 log_cnt: 0 is_called:1\n> sequence public.seq1: transactional:0 last_value: 99 log_cnt: 0 is_called:1\n> sequence public.seq1: transactional:0 last_value: 132 log_cnt: 0 is_called:1\n> BEGIN\n> table public.t1_seq: INSERT: c1[integer]:1\n> table public.t1_seq: INSERT: c1[integer]:2\n> table public.t1_seq: INSERT: 
c1[integer]:3\n> table public.t1_seq: INSERT: c1[integer]:4\n> table public.t1_seq: INSERT: c1[integer]:5\n> table public.t1_seq: INSERT: c1[integer]:6\n> \n> This will create an inconsistent replica as sequence changes won't be\n> replicated.\n\nHmm, that's interesting. I wonder if it can actually happen, though.\nHave you been able to reproduce that, somehow?\n\n> I thought about changing snapshot dealing of\n> non-transactional sequence changes similar to transactional ones but\n> that also won't work because it is only at commit we decide whether we\n> can send the changes.\n> \nI wonder if there's some earlier LSN (similar to the consistent point)\nwhich might be useful for this.\n\nOr maybe we should queue even the non-transactional changes, not\nper-transaction but in a global list, and then at each commit either\ndiscard inspect them (at that point we know the lowest LSN for all\ntransactions and the consistent point). Seems complex, though.\n\n> For the transactional case, as we are considering the create sequence\n> operation as transactional, we would unnecessarily queue them even\n> though that is not required. Basically, they don't need to be\n> considered transactional and we can simply ignore such messages like\n> other DDLs. But for that probably we need to distinguish Alter/Create\n> case which may or may not be straightforward. Now, queuing them is\n> probably harmless unless it causes the transaction to spill/stream.\n> \n\nI'm not sure I follow. Why would we queue them unnecessarily?\n\nAlso, there's the bug with decoding changes in transactions that create\nthe sequence and add it to a publication. I think the agreement was that\nthis behavior was incorrect, we should not decode changes until the\nsubscription is refreshed. 
Doesn't that mean there can't be any CREATE case,\njust ALTER?\n\n> I still couldn't think completely about cases where a mix of\n> transactional and non-transactional changes occur in the same\n> transaction as I think it somewhat depends on what we want to do about\n> the above cases.\n> \n\nUnderstood. I need to think about this too.\n\n>>> At all places, it is mentioned\n>>> as creating a sequence for transactional cases which at the very least\n>>> need some tweak.\n>>>\n>>\n>> Which places?\n>>\n> \n> In comments like:\n> a. When decoding sequences, we differentiate between sequences created\n> in a (running) transaction and sequences created in other (already\n> committed) transactions.\n> b. ... But for new sequences, we need to handle them in a transactional way, ..\n> c. ... Change needs to be handled as transactional, because the\n> sequence was created in a transaction that is still running ...\n> \n> It seems all these places indicate a scenario of creating a sequence\n> whereas we want to do transactional stuff mainly for Alter.\n> \n\nRight, I'll think about how to clarify the comments.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 1 Apr 2022 17:02:14 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 4/1/22 17:02, Tomas Vondra wrote:\n> \n> \n> On 3/28/22 07:29, Amit Kapila wrote:\n>> ...\n>>\n>> While thinking about this, I think I see a problem with the\n>> non-transactional handling of sequences. It seems that we will skip\n>> sending non-transactional sequence change if it occurs before the\n>> decoding has reached a consistent point but the surrounding commit\n>> occurs after a consistent point is reached. In such cases, the\n>> corresponding DMLs like inserts will be sent but sequence changes\n>> won't be sent. 
For example (this scenario is based on\n>> twophase_snapshot.spec),\n>>\n>> Initial setup:\n>> ==============\n>> Create table t1_seq(c1 int);\n>> Create Sequence seq1;\n>>\n>> Test Execution via multiple sessions (this test allows insert in\n>> session-2 to happen before we reach a consistent point and commit\n>> happens after a consistent point):\n>> =======================================================================================================\n>>\n>> Session-2:\n>> Begin;\n>> SELECT pg_current_xact_id();\n>>\n>> Session-1:\n>> SELECT 'init' FROM pg_create_logical_replication_slot('test_slot',\n>> 'test_decoding', false, true);\n>>\n>> Session-3:\n>> Begin;\n>> SELECT pg_current_xact_id();\n>>\n>> Session-2:\n>> Commit;\n>> Begin;\n>> INSERT INTO t1_seq SELECT nextval('seq1') FROM generate_series(1,100);\n>>\n>> Session-3:\n>> Commit;\n>>\n>> Session-2:\n>> Commit 'foo'\n>>\n>> Session-1:\n>> SELECT data FROM pg_logical_slot_get_changes('test_slot', NULL, NULL,\n>> 'include-xids', 'false', 'skip-empty-xacts', '1');\n>>\n>> data\n>> ----------------------------------------------\n>> BEGIN\n>> table public.t1_seq: INSERT: c1[integer]:1\n>> table public.t1_seq: INSERT: c1[integer]:2\n>> table public.t1_seq: INSERT: c1[integer]:3\n>> table public.t1_seq: INSERT: c1[integer]:4\n>> table public.t1_seq: INSERT: c1[integer]:5\n>> table public.t1_seq: INSERT: c1[integer]:6\n>>\n>>\n>> Now, if we normally try to decode such an insert, the result would be\n>> something like:\n>> data\n>> ------------------------------------------------------------------------------\n>> sequence public.seq1: transactional:0 last_value: 33 log_cnt: 0 is_called:1\n>> sequence public.seq1: transactional:0 last_value: 66 log_cnt: 0 is_called:1\n>> sequence public.seq1: transactional:0 last_value: 99 log_cnt: 0 is_called:1\n>> sequence public.seq1: transactional:0 last_value: 132 log_cnt: 0 is_called:1\n>> BEGIN\n>> table public.t1_seq: INSERT: c1[integer]:1\n>> table public.t1_seq: 
INSERT: c1[integer]:2\n>> table public.t1_seq: INSERT: c1[integer]:3\n>> table public.t1_seq: INSERT: c1[integer]:4\n>> table public.t1_seq: INSERT: c1[integer]:5\n>> table public.t1_seq: INSERT: c1[integer]:6\n>>\n>> This will create an inconsistent replica as sequence changes won't be\n>> replicated.\n> \n> Hmm, that's interesting. I wonder if it can actually happen, though.\n> Have you been able to reproduce that, somehow?\n> \n>> I thought about changing snapshot dealing of\n>> non-transactional sequence changes similar to transactional ones but\n>> that also won't work because it is only at commit we decide whether we\n>> can send the changes.\n>>\n> I wonder if there's some earlier LSN (similar to the consistent point)\n> which might be useful for this.\n> \n> Or maybe we should queue even the non-transactional changes, not\n> per-transaction but in a global list, and then at each commit either\n> discard inspect them (at that point we know the lowest LSN for all\n> transactions and the consistent point). Seems complex, though.\n> \n>> For the transactional case, as we are considering the create sequence\n>> operation as transactional, we would unnecessarily queue them even\n>> though that is not required. Basically, they don't need to be\n>> considered transactional and we can simply ignore such messages like\n>> other DDLs. But for that probably we need to distinguish Alter/Create\n>> case which may or may not be straightforward. Now, queuing them is\n>> probably harmless unless it causes the transaction to spill/stream.\n>>\n> \n> I'm not sure I follow. Why would we queue them unnecessarily?\n> \n> Also, there's the bug with decoding changes in transactions that create\n> the sequence and add it to a publication. I think the agreement was that\n> this behavior was incorrect, we should not decode changes until the\n> subscription is refreshed. 
Doesn't that mean can't be any CREATE case,\n> just ALTER?\n> \n\nSo, I investigated this a bit more, and I wrote a couple test_decoding\nisolation tests (patch attached) demonstrating the issue. Actually, I\nshould say \"issues\" because it's a bit worse than you described ...\n\nThe whole problem is in this chunk of code in sequence_decode():\n\n\n /* Skip the change if already processed (per the snapshot). */\n if (transactional &&\n !SnapBuildProcessChange(builder, xid, buf->origptr))\n return;\n else if (!transactional &&\n (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||\n SnapBuildXactNeedsSkip(builder, buf->origptr)))\n return;\n\n /* Queue the increment (or send immediately if not transactional). */\n snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);\n ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,\n origin_id, target_node, transactional,\n xlrec->created, tuplebuf);\n\nWith the script you described, the increment is non-transactional, so we\nend up in the second branch, return and thus discard the increment.\n\nBut it's also possible the change is transactional, which can only\ntrigger the first branch. But it does not, so we start building the\nsnapshot. But the first thing SnapBuildGetOrBuildSnapshot does is\n\n Assert(builder->state == SNAPBUILD_CONSISTENT);\n\nand we're still not in a consistent snapshot, so it just crashes and\nburn :-(\n\nThe sequences.spec file has two definitions of s2restart step, one empty\n(resulting in non-transactional change), one with ALTER SEQUENCE (which\nmeans the change will be transactional).\n\n\nThe really \"funny\" thing is this is not new code - this is an exact copy\nfrom logicalmsg_decode(), and logical messages have all those issues\ntoo. We may discard some messages, trigger the same Assert, etc. 
There's\na messages2.spec demonstrating this (s2message step defines whether the\nmessage is transactional or not).\n\nSo I guess we need to fix both places, perhaps in a similar way. And one\nof those will have to be backpatched (which may make it more complex).\n\n\nThe only option I see is reworking the decoding so that it does not need\nthe snapshot at all. We'll need to stash the changes just like any other\nchange, apply them at end of transaction, and the main difference\nbetween transactional and non-transactional case would be what happens\nat abort. Transactional changes would be discarded, non-transactional\nwould be applied anyway.\n\nThe challenge is this reorders the sequence changes, so we'll need to\nreconcile them somehow. One option would be to simply (1) apply the\nchange with the highest LSN in the transaction, and then walk all other\nin-progress transactions and changes for that sequence with a lower LSN.\nNot sure how complex/expensive that would be, though. Another problem is\nnot all increments are WAL-logged, of course, not sure about that.\n\nAnother option might be revisiting the approach proposed by Hannu in\nSeptember [1], i.e. 
tracking sequences touched in a transaction, and\nthen replicating the current state at that particular moment.\n\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAMT0RQQeDR51xs8zTa25YpfKB1B34nS-Q4hhsRPznVsjMB_P1w%40mail.gmail.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 2 Apr 2022 02:17:11 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Fri, Apr 1, 2022 at 8:32 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 3/28/22 07:29, Amit Kapila wrote:\n> > I thought about changing snapshot dealing of\n> > non-transactional sequence changes similar to transactional ones but\n> > that also won't work because it is only at commit we decide whether we\n> > can send the changes.\n> >\n> I wonder if there's some earlier LSN (similar to the consistent point)\n> which might be useful for this.\n>\n> Or maybe we should queue even the non-transactional changes, not\n> per-transaction but in a global list, and then at each commit either\n> discard inspect them (at that point we know the lowest LSN for all\n> transactions and the consistent point). Seems complex, though.\n>\n\nI couldn't follow '..discard inspect them ..'. Do you mean we inspect\nthem and discard whichever are not required? It seems here we are\ntalking about a new global ReorderBufferGlobal instead of\nReorderBufferTXN to collect these changes but we don't need only\nconsistent point LSN because we do send if the commit of containing\ntransaction is after consistent point LSN, so we need some transaction\ninformation as well. I think it could bring new challenges.\n\n> > For the transactional case, as we are considering the create sequence\n> > operation as transactional, we would unnecessarily queue them even\n> > though that is not required. 
Basically, they don't need to be\n> > considered transactional and we can simply ignore such messages like\n> > other DDLs. But for that probably we need to distinguish Alter/Create\n> > case which may or may not be straightforward. Now, queuing them is\n> > probably harmless unless it causes the transaction to spill/stream.\n> >\n>\n> I'm not sure I follow. Why would we queue them unnecessarily?\n>\n> Also, there's the bug with decoding changes in transactions that create\n> the sequence and add it to a publication. I think the agreement was that\n> this behavior was incorrect, we should not decode changes until the\n> subscription is refreshed. Doesn't that mean there can't be any CREATE case,\n> just ALTER?\n>\n\nYeah, but how will we distinguish them? Aren't they using the same\nkind of WAL record?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 2 Apr 2022 16:05:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Sat, Apr 2, 2022 at 5:47 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 4/1/22 17:02, Tomas Vondra wrote:\n>\n> The only option I see is reworking the decoding so that it does not need\n> the snapshot at all. We'll need to stash the changes just like any other\n> change, apply them at end of transaction, and the main difference\n> between transactional and non-transactional case would be what happens\n> at abort. 
One option would be to simply (1) apply the\n> change with the highest LSN in the transaction, and then walk all other\n> in-progress transactions and changes for that sequence with a lower LSN.\n> Not sure how complex/expensive that would be, though. Another problem is\n> not all increments are WAL-logged, of course, not sure about that.\n>\n> Another option might be revisiting the approach proposed by Hannu in\n> September [1], i.e. tracking sequences touched in a transaction, and\n> then replicating the current state at that particular moment.\n>\n\nI'll think about that approach as well.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 2 Apr 2022 16:13:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 4/2/22 12:35, Amit Kapila wrote:\n> On Fri, Apr 1, 2022 at 8:32 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 3/28/22 07:29, Amit Kapila wrote:\n>>> I thought about changing snapshot dealing of\n>>> non-transactional sequence changes similar to transactional ones but\n>>> that also won't work because it is only at commit we decide whether we\n>>> can send the changes.\n>>>\n>> I wonder if there's some earlier LSN (similar to the consistent point)\n>> which might be useful for this.\n>>\n>> Or maybe we should queue even the non-transactional changes, not\n>> per-transaction but in a global list, and then at each commit either\n>> discard inspect them (at that point we know the lowest LSN for all\n>> transactions and the consistent point). Seems complex, though.\n>>\n> \n> I couldn't follow '..discard inspect them ..'. Do you mean we inspect\n> them and discard whichever are not required? 
It seems here we are\n> talking about a new global ReorderBufferGlobal instead of\n> ReorderBufferTXN to collect these changes but we don't need only\n> consistent point LSN because we do send if the commit of containing\n> transaction is after consistent point LSN, so we need some transaction\n> information as well. I think it could bring new challenges.\n> \n\nSorry for the gibberish. Yes, I meant to discard sequence changes that\nare no longer needed, due to being \"obsoleted\" by the applied change. We\nmust not apply \"older\" changes (using LSN) because that would make the\nsequence go backwards.\n\nI'm not entirely sure whether the list of changes should be kept in TXN\nor in the global reorderbuffer object - we need to track which TXN the\nchange belongs to (because of transactional changes) but we also need to\ndiscard the unnecessary changes efficiently (and walking TXN might be\nexpensive).\n\nBut yes, I'm sure there will be challenges. One being that tracking just\nthe decoded WAL stuff is not enough, because nextval() may not generate\nWAL. But we still need to make sure the increment is replicated.\n\nWhat I think we might do is this:\n\n- add a global list of decoded sequence increments to ReorderBuffer\n\n- at each commit/abort walk the list, walk the list and consider all\nincrements up to the commit LSN that \"match\" (non-transactional match\nall TXNs, transactional match only the current TXN)\n\n- replicate the last \"matching\" status for each sequence, discard the\nprocessed ones\n\nWe could probably optimize this by not tracking every single increment,\nbut merge them \"per transaction\", I think.\n\nI'm sure this description is pretty rough and will need refining, handle\nvarious corner-cases etc.\n\n>>> For the transactional case, as we are considering the create sequence\n>>> operation as transactional, we would unnecessarily queue them even\n>>> though that is not required. 
Basically, they don't need to be\n>>> considered transactional and we can simply ignore such messages like\n>>> other DDLs. But for that probably we need to distinguish Alter/Create\n>>> case which may or may not be straightforward. Now, queuing them is\n>>> probably harmless unless it causes the transaction to spill/stream.\n>>>\n>>\n>> I'm not sure I follow. Why would we queue them unnecessarily?\n>>\n>> Also, there's the bug with decoding changes in transactions that create\n>> the sequence and add it to a publication. I think the agreement was that\n>> this behavior was incorrect, we should not decode changes until the\n>> subscription is refreshed. Doesn't that mean there can't be any CREATE case,\n>> just ALTER?\n>>\n> \n> Yeah, but how will we distinguish them? Aren't they using the same\n> kind of WAL record?\n> \n\nSame WAL record, but the \"created\" flag which should distinguish these\ntwo cases, IIRC.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 2 Apr 2022 13:51:52 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 4/2/22 12:43, Amit Kapila wrote:\n> On Sat, Apr 2, 2022 at 5:47 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 4/1/22 17:02, Tomas Vondra wrote:\n>>\n>> The only option I see is reworking the decoding so that it does not need\n>> the snapshot at all. We'll need to stash the changes just like any other\n>> change, apply them at end of transaction, and the main difference\n>> between transactional and non-transactional case would be what happens\n>> at abort. 
Transactional changes would be discarded, non-transactional\n>> would be applied anyway.\n>>\n> \n> I think in the above I am not following how we can make it work\n> without considering *snapshot at all* because based on that we would\n> have done the initial sync (copy_sequence) and if we don't follow that\n> later it can lead to inconsistency. I might be missing something here.\n> \n\nWell, what I meant to say is that we can't consider the snapshot at this\nphase of decoding. We'd still consider it later, at commit/abort time,\nof course. I.e. it'd be fairly similar to what heap_decode/DecodeInsert\ndoes, for example. AFAIK this does not build the snapshot anywhere.\n\n>> The challenge is this reorders the sequence changes, so we'll need to\n>> reconcile them somehow. One option would be to simply (1) apply the\n>> change with the highest LSN in the transaction, and then walk all other\n>> in-progress transactions and changes for that sequence with a lower LSN.\n>> Not sure how complex/expensive that would be, though. Another problem is\n>> not all increments are WAL-logged, of course, not sure about that.\n>>\n>> Another option might be revisiting the approach proposed by Hannu in\n>> September [1], i.e. 
tracking sequences touched in a transaction, and\n>> then replicating the current state at that particular moment.\n>>\n> \n> I'll think about that approach as well.\n> \n\nThanks!\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 2 Apr 2022 13:58:43 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 4/2/22 13:51, Tomas Vondra wrote:\n> \n> \n> On 4/2/22 12:35, Amit Kapila wrote:\n>> On Fri, Apr 1, 2022 at 8:32 PM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>> On 3/28/22 07:29, Amit Kapila wrote:\n>>>> I thought about changing snapshot dealing of\n>>>> non-transactional sequence changes similar to transactional ones but\n>>>> that also won't work because it is only at commit we decide whether we\n>>>> can send the changes.\n>>>>\n>>> I wonder if there's some earlier LSN (similar to the consistent point)\n>>> which might be useful for this.\n>>>\n>>> Or maybe we should queue even the non-transactional changes, not\n>>> per-transaction but in a global list, and then at each commit either\n>>> discard inspect them (at that point we know the lowest LSN for all\n>>> transactions and the consistent point). Seems complex, though.\n>>>\n>>\n>> I couldn't follow '..discard inspect them ..'. Do you mean we inspect\n>> them and discard whichever are not required? It seems here we are\n>> talking about a new global ReorderBufferGlobal instead of\n>> ReorderBufferTXN to collect these changes but we don't need only\n>> consistent point LSN because we do send if the commit of containing\n>> transaction is after consistent point LSN, so we need some transaction\n>> information as well. I think it could bring new challenges.\n>>\n> \n> Sorry for the gibberish. 
Yes, I meant to discard sequence changes that\n> are no longer needed, due to being \"obsoleted\" by the applied change. We\n> must not apply \"older\" changes (using LSN) because that would make the\n> sequence go backwards.\n> \n> I'm not entirely sure whether the list of changes should be kept in TXN\n> or in the global reorderbuffer object - we need to track which TXN the\n> change belongs to (because of transactional changes) but we also need to\n> discard the unnecessary changes efficiently (and walking TXN might be\n> expensive).\n> \n> But yes, I'm sure there will be challenges. One being that tracking just\n> the decoded WAL stuff is not enough, because nextval() may not generate\n> WAL. But we still need to make sure the increment is replicated.\n> \n> What I think we might do is this:\n> \n> - add a global list of decoded sequence increments to ReorderBuffer\n> \n> - at each commit/abort walk the list, walk the list and consider all\n> increments up to the commit LSN that \"match\" (non-transactional match\n> all TXNs, transactional match only the current TXN)\n> \n> - replicate the last \"matching\" status for each sequence, discard the\n> processed ones\n> \n> We could probably optimize this by not tracking every single increment,\n> but merge them \"per transaction\", I think.\n> \n> I'm sure this description is pretty rough and will need refining, handle\n> various corner-cases etc.\n> \nI did some experiments over the weekend, exploring how to rework the\nsequence decoding in various ways. Let me share some WIP patches,\nhopefully that can be useful for trying more stuff and moving this\ndiscussion forward.\n\nI tried two things - (1) accumulating sequence increments in global\narray and then doing something with it, and (2) treating all sequence\nincrements as regular changes (in a TXN) and then doing something\nspecial during the replay. 
Attached are two patchsets, one for each\napproach.\n\nNote: It's important to remember decoding of sequences is not the only\ncode affected by this. The logical messages have the same issue,\ncertainly when it comes to transactional vs. non-transactional stuff and\nhandling of snapshots. Even if the sequence decoding ends up being\nreverted, we still need to fix that, somehow. And my feeling is the\nsolutions ought to be pretty similar in both cases.\n\nNow, regarding the two approaches:\n\n(1) accumulating sequences in global hash table\n\nThe main problem with regular sequence increments is that those need to\nbe non-transactional - a transaction may use a sequence without any\nWAL-logging, if the WAL was written by an earlier transaction. The\nproblem is the earlier transaction might have been rolled back, and thus\nsimply discarded by the logical decoding. But we still need to apply\nthat, in order not to lose the sequence increment.\n\nThe current code just applies those non-transactional increments right\nafter decoding the increment, but that does not work because we may not\nhave a snapshot at that point. And we only have the snapshot when within\na transaction (AFAICS) so this queues all changes and then applies the\nchanges later.\n\nThe changes need to be shared by all transactions, so queueing them in a\nglobal list works fairly well - otherwise we'd have to walk all transactions,\nin order to see if there are relevant sequence increments.\n\nBut some increments may be transactional, e.g. when the sequence is\ncreated or altered in a transaction. To allow tracking this, it uses a\nhash table, with relfilenode as a key.\n\nThere's a couple issues with this, though. Firstly, with the changes stashed\noutside transactions, they're not included in memory accounting, not\nspilled to disk or streamed, etc. 
I guess fixing this is possible, but\nit's certainly not straightforward, because we mix increments from many\ndifferent transactions.\n\nA bigger issue is that I'm not sure this actually handles the snapshots\ncorrectly either.\n\nThe non-transactional increments affect all transactions, so when\nReorderBufferProcessSequences gets executed, it processes all of them,\nno matter the source transaction. Can we be sure the snapshot in the\napplying transaction is the same (or \"compatible\") as the snapshot in\nthe source transaction?\n\nTransactional increments can be simply processed as regular changes, of\ncourse, but one difference is that we always create the transaction\n(while before we just triggered the apply callback). This is necessary\nas now we drive all of this from ReorderBufferCommit(), and without the\ntransaction the increment would not be applied / there would be no snapshot.\n\nIt does seem to work, though, although I haven't tested it much so far.\nOne annoying bit is that we have to always walk all sequences and\nincrements, for each change in the transaction. Which seems quite\nexpensive, although the number of in-progress increments should be\npretty low (roughly equal to the number of sequences). Or at least the\npart we need to consider for a single change (i.e. between two LSNs).\n\nSo maybe this should work. The one part this does not handle at all is\naborted transactions. At the moment we just discard those, which means\n(a) we fail to discard the transactional changes from the hash table,\nand (b) we can't apply the non-transactional changes, because with the\nchanges we also discard the snapshots we need.\n\nI wonder if we could use a different snapshot, though. Non-transactional\nchanges can't change the relfilenode, after all. Not sure. 
If not, the\nonly solution I can think of is processing even aborted transactions,\nbut skipping changes except those that update snapshots.\n\nThere's a serious problem with streaming, though - we don't know which\ntransaction will commit first, hence we can't decide whether to send the\nsequence changes. This seems pretty fatal to me. So we'd have to stream\nthe sequence changes only at commit, and then do some of this work on\nthe worker (i.e. merge the sequence changes to the right place). That\nseems pretty annoying.\n\n\n(2) treating sequence change as regular changes\n\nThis adopts a different approach - instead of accumulating the sequence\nincrements in a global hash table, it treats them as regular changes.\nWhich solves the snapshot issue, and issues with spilling to disk,\nstreaming and so on.\n\nBut it has various other issues with handling concurrent transactions,\nunfortunately, which probably make this approach infeasible:\n\n* The non-transactional stuff has to be applied in the first transaction\nthat commits, not in the transaction that generated the WAL. That does\nnot work too well with this approach, because we have to walk changes in\nall other transactions.\n\n* Another serious issue seems to be streaming - if we already streamed\nsome of the changes, we can't iterate through them anymore.\n\nAlso, having to walk the transactions over and over for each change, to\napply relevant sequence increments, that's mighty expensive. The other\napproach needs to do that too, but walking the global hash table seems\nmuch cheaper.\n\nThe other issue is the handling of aborted transactions - we need to apply\nsequence increments even from those transactions, of course. 
I haven't explored this\nagain yet, but I recall I wrote a PoC patch a couple months back.\n\nIt seems to me most of the problems stem from trying to derive sequence\nstate from decoded WAL changes, which is problematic because of the\nnon-transactional nature of sequences (i.e. WAL for one transaction\naffects other transactions in non-obvious ways). And this approach\nsimply works around that entirely - instead of trying to deduce the\nsequence state from WAL, we'd make sure to write the current sequence\nstate (or maybe just ID of the sequence) at commit time. Which should\neliminate most of the complexity / problems, I think.\n\n\nI'm not really sure what to do about this. All of those reworks seem\nlike an extensive redesign of the patch, and considering the last CF is\nalready over ... not great.\n\nHowever, even if we end up reverting this, we'll still have the same\nproblem with snapshots for logical messages.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 3 Apr 2022 23:40:43 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Sat, Apr 2, 2022 at 8:52 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 4/2/22 12:35, Amit Kapila wrote:\n> > On Fri, Apr 1, 2022 at 8:32 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> On 3/28/22 07:29, Amit Kapila wrote:\n> >>> I thought about changing snapshot dealing of\n> >>> non-transactional sequence changes similar to transactional ones but\n> >>> that also won't work because it is only at commit we decide whether we\n> >>> can send the changes.\n> >>>\n> >> I wonder if there's some earlier LSN (similar to the consistent point)\n> >> which might be useful for this.\n> >>\n> >> Or maybe we should queue even the non-transactional changes, not\n> >> per-transaction but 
in a global list, and then at each commit either\n> >> discard inspect them (at that point we know the lowest LSN for all\n> >> transactions and the consistent point). Seems complex, though.\n> >>\n> >\n> > I couldn't follow '..discard inspect them ..'. Do you mean we inspect\n> > them and discard whichever are not required? It seems here we are\n> > talking about a new global ReorderBufferGlobal instead of\n> > ReorderBufferTXN to collect these changes but we don't need only\n> > consistent point LSN because we do send if the commit of containing\n> > transaction is after consistent point LSN, so we need some transaction\n> > information as well. I think it could bring new challenges.\n> >\n>\n> Sorry for the gibberish. Yes, I meant to discard sequence changes that\n> are no longer needed, due to being \"obsoleted\" by the applied change. We\n> must not apply \"older\" changes (using LSN) because that would make the\n> sequence go backwards.\n\nIt's not related to this issue but I think that non-transactional\nsequence changes could be resent in case the subscriber crashes,\nbecause it doesn’t update the replication origin LSN, is that right? If\nso, while resending the sequence changes, the sequence value on the\nsubscriber can temporarily go backward.\n\n>\n> I'm not entirely sure whether the list of changes should be kept in TXN\n> or in the global reorderbuffer object - we need to track which TXN the\n> change belongs to (because of transactional changes) but we also need to\n> discard the unnecessary changes efficiently (and walking TXN might be\n> expensive).\n>\n> But yes, I'm sure there will be challenges. One being that tracking just\n> the decoded WAL stuff is not enough, because nextval() may not generate\n> WAL. 
But we still need to make sure the increment is replicated.\n>\n> What I think we might do is this:\n>\n> - add a global list of decoded sequence increments to ReorderBuffer\n>\n> - at each commit/abort walk the list, walk the list and consider all\n> increments up to the commit LSN that \"match\" (non-transactional match\n> all TXNs, transactional match only the current TXN)\n>\n> - replicate the last \"matching\" status for each sequence, discard the\n> processed ones\n>\n> We could probably optimize this by not tracking every single increment,\n> but merge them \"per transaction\", I think.\n>\n> I'm sure this description is pretty rough and will need refining, handle\n> various corner-cases etc.\n>\n> >>> For the transactional case, as we are considering the create sequence\n> >>> operation as transactional, we would unnecessarily queue them even\n> >>> though that is not required. Basically, they don't need to be\n> >>> considered transactional and we can simply ignore such messages like\n> >>> other DDLs. But for that probably we need to distinguish Alter/Create\n> >>> case which may or may not be straightforward. Now, queuing them is\n> >>> probably harmless unless it causes the transaction to spill/stream.\n> >>>\n> >>\n> >> I'm not sure I follow. Why would we queue them unnecessarily?\n> >>\n> >> Also, there's the bug with decoding changes in transactions that create\n> >> the sequence and add it to a publication. I think the agreement was that\n> >> this behavior was incorrect, we should not decode changes until the\n> >> subscription is refreshed. Doesn't that mean can't be any CREATE case,\n> >> just ALTER?\n> >>\n> >\n> > Yeah, but how will we distinguish them. 
Aren't they using the same\n> kind of WAL record?\n> \n>\n> Same WAL record, but the \"created\" flag which should distinguish these\n> two cases, IIRC.\n\nSince the \"created\" flag indicates that we created a new relfilenode,\nit's true for both CREATE and ALTER.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 4 Apr 2022 15:12:07 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Mon, Apr 4, 2022 at 11:42 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Apr 2, 2022 at 8:52 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>\n> It's not related to this issue but I think that non-transactional\n> sequence changes could be resent in case of the subscriber crashes\n> because it doesn’t update replication origin LSN, is that right? If\n> so, while resending the sequence changes, the sequence value on the\n> subscriber can temporarily go backward.\n>\n\nYes, this can happen for the non-transactional sequence changes though\nthis is a different problem than what is happening on the decoding\nside.\n\n> > >> Also, there's the bug with decoding changes in transactions that create\n> > >> the sequence and add it to a publication. I think the agreement was that\n> > >> this behavior was incorrect, we should not decode changes until the\n> > >> subscription is refreshed. Doesn't that mean can't be any CREATE case,\n> > >> just ALTER?\n> > >>\n> > >\n> > > Yeah, but how will we distinguish them. Aren't they using the same\n> > > kind of WAL record?\n> > >\n> >\n> > Same WAL record, but the \"created\" flag which should distinguish these\n> > two cases, IIRC.\n>\n> Since the \"created\" flag indicates that we created a new relfilenode\n> so it's true when both CREATE and ALTER.\n>\n\nYes, this is my understanding as well. 
So, we need something else.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 4 Apr 2022 17:15:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Sat, Apr 2, 2022 at 5:47 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 4/1/22 17:02, Tomas Vondra wrote:\n>\n> So, I investigated this a bit more, and I wrote a couple test_decoding\n> isolation tests (patch attached) demonstrating the issue. Actually, I\n> should say \"issues\" because it's a bit worse than you described ...\n>\n> The whole problem is in this chunk of code in sequence_decode():\n>\n>\n> /* Skip the change if already processed (per the snapshot). */\n> if (transactional &&\n> !SnapBuildProcessChange(builder, xid, buf->origptr))\n> return;\n> else if (!transactional &&\n> (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||\n> SnapBuildXactNeedsSkip(builder, buf->origptr)))\n> return;\n>\n> /* Queue the increment (or send immediately if not transactional). */\n> snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);\n> ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,\n> origin_id, target_node, transactional,\n> xlrec->created, tuplebuf);\n>\n> With the script you described, the increment is non-transactional, so we\n> end up in the second branch, return and thus discard the increment.\n>\n> But it's also possible the change is transactional, which can only\n> trigger the first branch. But it does not, so we start building the\n> snapshot. 
But the first thing SnapBuildGetOrBuildSnapshot does is\n>\n> Assert(builder->state == SNAPBUILD_CONSISTENT);\n>\n> and we're still not in a consistent snapshot, so it just crashes and\n> burn :-(\n>\n> The sequences.spec file has two definitions of s2restart step, one empty\n> (resulting in non-transactional change), one with ALTER SEQUENCE (which\n> means the change will be transactional).\n>\n>\n> The really \"funny\" thing is this is not new code - this is an exact copy\n> from logicalmsg_decode(), and logical messages have all those issues\n> too. We may discard some messages, trigger the same Assert, etc. There's\n> a messages2.spec demonstrating this (s2message step defines whether the\n> message is transactional or not).\n>\n\nIt seems to me that the Assert in SnapBuildGetOrBuildSnapshot() is\nwrong. It is required only for non-transactional logical messages. For\ntransactional message(s), we decide at commit time whether the\nsnapshot has reached a consistent state and then decide whether to\nskip the entire transaction or not. So, the possible fix for the Assert\ncould be that we pass an additional parameter 'transactional' to\nSnapBuildGetOrBuildSnapshot() and then assert only when it is false. I\nhave also checked the development thread for this work and it appears\nto be introduced for non-transactional cases only. See email [1]: this\nnew function and Assert were for the non-transactional case, and this\nproblem got introduced later while rearranging the code.\n\nNow, for the non-transactional cases, I am not sure if there is a\none-to-one mapping with the sequence case. The way sequences are dealt\nwith on the subscriber-side (first we copy initial data and then\nreplicate the incremental changes) appears more like the way we deal\nwith a table and its incremental changes. 
There is some commonality with\nnon-transactional messages w.r.t. the case where we want sequence\nchanges to be sent even on rollbacks unless some DDL has happened for\nthem, but looking at the overall solution it doesn't appear that we can\nuse it similarly to messages. I think this is the reason we are facing\nthe other problems w.r.t. syncing sequences to subscribers including\nthe problem reported by Sawada-San yesterday.\n\nNow, the particular case where we won't send a non-transactional\nlogical message unless the snapshot is consistent could be considered\nas its behavior and may be documented better. I am not very sure about\nthis as there is no example of the way sync for these messages happens\nin the core but if someone outside the core wants a different behavior\nand presents the case then we can probably try to enhance it. I feel\nthe same is not true for sequences because it can cause the replica\n(subscriber) to go out of sync with the master (publisher).\n\n> So I guess we need to fix both places, perhaps in a similar way.\n>\n\nIt depends, but I think for logical messages we should do the minimal\nfix required for the Asserts and probably document the current behavior a\nbit better, unless we think there is a case to make it behave similarly\nto sequences.\n\n\n[1] - https://www.postgresql.org/message-id/56D4B3AD.5000207%402ndquadrant.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 5 Apr 2022 10:57:12 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Mon, Apr 4, 2022 at 3:10 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> I did some experiments over the weekend, exploring how to rework the\n> sequence decoding in various ways. 
Let me share some WIP patches,\n> hopefully that can be useful for trying more stuff and moving this\n> discussion forward.\n>\n> I tried two things - (1) accumulating sequence increments in global\n> array and then doing something with it, and (2) treating all sequence\n> increments as regular changes (in a TXN) and then doing something\n> special during the replay. Attached are two patchsets, one for each\n> approach.\n>\n> Note: It's important to remember decoding of sequences is not the only\n> code affected by this. The logical messages have the same issue,\n> certainly when it comes to transactional vs. non-transactional stuff and\n> handling of snapshots. Even if the sequence decoding ends up being\n> reverted, we still need to fix that, somehow. And my feeling is the\n> solutions ought to be pretty similar in both cases.\n>\n> Now, regarding the two approaches:\n>\n> (1) accumulating sequences in global hash table\n>\n> The main problem with regular sequence increments is that those need to\n> be non-transactional - a transaction may use a sequence without any\n> WAL-logging, if the WAL was written by an earlier transaction. The\n> problem is the earlier trasaction might have been rolled back, and thus\n> simply discarded by the logical decoding. But we still need to apply\n> that, in order not to lose the sequence increment.\n>\n> The current code just applies those non-transactional increments right\n> after decoding the increment, but that does not work because we may not\n> have a snapshot at that point. And we only have the snapshot when within\n> a transaction (AFAICS) so this queues all changes and then applies the\n> changes later.\n>\n> The changes need to be shared by all transactions, so queueing them in a\n> global works fairly well - otherwise we'd have to walk all transactions,\n> in order to see if there are relevant sequence increments.\n>\n> But some increments may be transactional, e.g. 
when the sequence is\n> created or altered in a transaction. To allow tracking this, this uses a\n> hash table, with relfilenode as a key.\n>\n> There's a couple issues with this, though. Firstly, stashing the changes\n> outside transactions, it's not included in memory accounting, it's not\n> spilled to disk or streamed, etc. I guess fixing this is possible, but\n> it's certainly not straightforward, because we mix increments from many\n> different transactions.\n>\n> A bigger issue is that I'm not sure this actually handles the snapshots\n> correctly either.\n>\n> The non-transactional increments affect all transactions, so when\n> ReorderBufferProcessSequences gets executed, it processes all of them,\n> no matter the source transaction. Can we be sure the snapshot in the\n> applying transaction is the same (or \"compatible\") as the snapshot in\n> the source transaction?\n>\n\nI don't think we can assume that. I think it is possible that some\nother transaction's WAL can be in-between start/end lsn of txn (which\nwe decide to send) which may not finally reach a consistent state.\nConsider a case similar to shown in one of my previous emails:\nSession-2:\nBegin;\nSELECT pg_current_xact_id();\n\nSession-1:\nSELECT 'init' FROM pg_create_logical_replication_slot('test_slot',\n'test_decoding', false, true);\n\nSession-3:\nBegin;\nSELECT pg_current_xact_id();\n\nSession-2:\nCommit;\nBegin;\nINSERT INTO t1_seq SELECT nextval('seq1') FROM generate_series(1,100);\n\nSession-3:\nCommit;\n\nSession-2:\nCommit;\n\nHere, we send changes (say insert from txn 700) from session-2 because\nsession-3's commit happens before it. Now, consider another\ntransaction parallel to txn 700 which generates some WAL related to\nsequences but it committed before session-3's commit. 
So even though its\nchanges will be in between the start/end LSN of txn 700, those\nshouldn't be sent.\n\nI have not tried this, and it may be solvable in some way, but I\nthink processing changes from other TXNs sounds risky to me in terms\nof snapshot handling.\n\n>\n>\n> (2) treating sequence change as regular changes\n>\n> This adopts a different approach - instead of accumulating the sequence\n> increments in a global hash table, it treats them as regular changes.\n> Which solves the snapshot issue, and issues with spilling to disk,\n> streaming and so on.\n>\n> But it has various other issues with handling concurrent transactions,\n> unfortunately, which probably make this approach infeasible:\n>\n> * The non-transactional stuff has to be applied in the first transaction\n> that commits, not in the transaction that generated the WAL. That does\n> not work too well with this approach, because we have to walk changes in\n> all other transactions.\n>\n\nWhy do you want to traverse other TXNs in this approach? Is it because\nthe current TXN might be using some value of a sequence which has been\nactually WAL-logged in the other transaction but that other\ntransaction has not been sent yet? I think if we don't send that, then\nthe sequence columns in some tables on the replica may have some values\nwhile the sequence itself still won't have that value, which\nsounds problematic. Is that correct?\n\n> * Another serious issue seems to be streaming - if we already streamed\n> some of the changes, we can't iterate through them anymore.\n>\n> Also, having to walk the transactions over and over for each change, to\n> apply relevant sequence increments, that's mighty expensive. The other\n> approach needs to do that too, but walking the global hash table seems\n> much cheaper.\n>\n> The other issue this handling of aborted transactions - we need to apply\n> sequence increments even from those transactions, of course. 
The other\n> approach has this issue too, though.\n>\n>\n> (3) tracking sequences touched by transaction\n>\n> This is the approach proposed by Hannu Krosing. I haven't explored this\n> again yet, but I recall I wrote a PoC patch a couple months back.\n>\n> It seems to me most of the problems stems from trying to derive sequence\n> state from decoded WAL changes, which is problematic because of the\n> non-transactional nature of sequences (i.e. WAL for one transaction\n> affects other transactions in non-obvious ways). And this approach\n> simply works around that entirely - instead of trying to deduce the\n> sequence state from WAL, we'd make sure to write the current sequence\n> state (or maybe just ID of the sequence) at commit time. Which should\n> eliminate most of the complexity / problems, I think.\n>\n\nThat sounds promising but I haven't thought in detail about that approach.\n\n>\n> I'm not really sure what to do about this. All of those reworks seems\n> like an extensive redesign of the patch, and considering the last CF is\n> already over ... not great.\n>\n\nYeah, I share the same feeling that even if we devise solutions to all\nthe known problems it requires quite some time to ensure everything is\ncorrect.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 5 Apr 2022 15:36:10 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 4/5/22 12:06, Amit Kapila wrote:\n> On Mon, Apr 4, 2022 at 3:10 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> I did some experiments over the weekend, exploring how to rework the\n>> sequence decoding in various ways. 
Let me share some WIP patches,\n>> hopefully that can be useful for trying more stuff and moving this\n>> discussion forward.\n>>\n>> I tried two things - (1) accumulating sequence increments in global\n>> array and then doing something with it, and (2) treating all sequence\n>> increments as regular changes (in a TXN) and then doing something\n>> special during the replay. Attached are two patchsets, one for each\n>> approach.\n>>\n>> Note: It's important to remember decoding of sequences is not the only\n>> code affected by this. The logical messages have the same issue,\n>> certainly when it comes to transactional vs. non-transactional stuff and\n>> handling of snapshots. Even if the sequence decoding ends up being\n>> reverted, we still need to fix that, somehow. And my feeling is the\n>> solutions ought to be pretty similar in both cases.\n>>\n>> Now, regarding the two approaches:\n>>\n>> (1) accumulating sequences in global hash table\n>>\n>> The main problem with regular sequence increments is that those need to\n>> be non-transactional - a transaction may use a sequence without any\n>> WAL-logging, if the WAL was written by an earlier transaction. The\n>> problem is the earlier trasaction might have been rolled back, and thus\n>> simply discarded by the logical decoding. But we still need to apply\n>> that, in order not to lose the sequence increment.\n>>\n>> The current code just applies those non-transactional increments right\n>> after decoding the increment, but that does not work because we may not\n>> have a snapshot at that point. And we only have the snapshot when within\n>> a transaction (AFAICS) so this queues all changes and then applies the\n>> changes later.\n>>\n>> The changes need to be shared by all transactions, so queueing them in a\n>> global works fairly well - otherwise we'd have to walk all transactions,\n>> in order to see if there are relevant sequence increments.\n>>\n>> But some increments may be transactional, e.g. 
when the sequence is\n>> created or altered in a transaction. To allow tracking this, this uses a\n>> hash table, with relfilenode as a key.\n>>\n>> There's a couple issues with this, though. Firstly, stashing the changes\n>> outside transactions, it's not included in memory accounting, it's not\n>> spilled to disk or streamed, etc. I guess fixing this is possible, but\n>> it's certainly not straightforward, because we mix increments from many\n>> different transactions.\n>>\n>> A bigger issue is that I'm not sure this actually handles the snapshots\n>> correctly either.\n>>\n>> The non-transactional increments affect all transactions, so when\n>> ReorderBufferProcessSequences gets executed, it processes all of them,\n>> no matter the source transaction. Can we be sure the snapshot in the\n>> applying transaction is the same (or \"compatible\") as the snapshot in\n>> the source transaction?\n>>\n> \n> I don't think we can assume that. I think it is possible that some\n> other transaction's WAL can be in-between start/end lsn of txn (which\n> we decide to send) which may not finally reach a consistent state.\n> Consider a case similar to shown in one of my previous emails:\n> Session-2:\n> Begin;\n> SELECT pg_current_xact_id();\n> \n> Session-1:\n> SELECT 'init' FROM pg_create_logical_replication_slot('test_slot',\n> 'test_decoding', false, true);\n> \n> Session-3:\n> Begin;\n> SELECT pg_current_xact_id();\n> \n> Session-2:\n> Commit;\n> Begin;\n> INSERT INTO t1_seq SELECT nextval('seq1') FROM generate_series(1,100);\n> \n> Session-3:\n> Commit;\n> \n> Session-2:\n> Commit;\n> \n> Here, we send changes (say insert from txn 700) from session-2 because\n> session-3's commit happens before it. Now, consider another\n> transaction parallel to txn 700 which generates some WAL related to\n> sequences but it committed before session-3's commit. 
So even though its\n> changes will be in between the start/end LSN of txn 700, those\n> shouldn't be sent.\n> \n> I have not tried this and also this may be solvable in some way but I\n> think processing changes from other TXNs sounds risky to me in terms\n> of snapshot handling.\n> \n\nYes, I know this can happen. I was only really thinking about what might\nhappen to the relfilenode of the sequence itself - and I don't think any\nconcurrent transaction could swoop in and change the relfilenode in any\nmeaningful way, due to locking.\n\nBut of course, if we expect/require to have a perfect snapshot for that\nexact position in the transaction, this won't work. IMO the whole idea\nthat we can have non-transactional bits in naturally transactional\ndecoding seems a bit suspicious (at least in hindsight).\n\nNo matter what we do for sequences, though, this still affects logical\nmessages too. Not sure what to do there :-(\n\n>>\n>>\n>> (2) treating sequence change as regular changes\n>>\n>> This adopts a different approach - instead of accumulating the sequence\n>> increments in a global hash table, it treats them as regular changes.\n>> Which solves the snapshot issue, and issues with spilling to disk,\n>> streaming and so on.\n>>\n>> But it has various other issues with handling concurrent transactions,\n>> unfortunately, which probably make this approach infeasible:\n>>\n>> * The non-transactional stuff has to be applied in the first transaction\n>> that commits, not in the transaction that generated the WAL. That does\n>> not work too well with this approach, because we have to walk changes in\n>> all other transactions.\n>>\n>\n> Why do you want to traverse other TXNs in this approach? Is it because\n> the current TXN might be using some value of sequence which has been\n> actually WAL logged in the other transaction but that other\n> transaction has not been sent yet? 
I think if we don't send that then\n> probably replica sequences columns (in some tables) have some values\n> but actually the sequence itself won't have still that value which\n> sounds problematic. Is that correct?\n> \n\nWell, how else would you get to sequence changes in the other TXNs?\n\nConsider this:\n\nT1: begin\nT2: begin\n\nT2: nextval('s') -> writes WAL for 32 values\nT1: nextval('s') -> gets value without WAL\n\nT1: commit\nT2: commit\n\nNow, if we commit T1 without \"applying\" the sequence change from T2, we\nlose the sequence state. But we still write/replicate the value\ngenerated from the sequence.\n\n>> * Another serious issue seems to be streaming - if we already streamed\n>> some of the changes, we can't iterate through them anymore.\n>>\n>> Also, having to walk the transactions over and over for each change, to\n>> apply relevant sequence increments, that's mighty expensive. The other\n>> approach needs to do that too, but walking the global hash table seems\n>> much cheaper.\n>>\n>> The other issue is the handling of aborted transactions - we need to apply\n>> sequence increments even from those transactions, of course. The other\n>> approach has this issue too, though.\n>>\n>>\n>> (3) tracking sequences touched by transaction\n>>\n>> This is the approach proposed by Hannu Krosing. I haven't explored this\n>> again yet, but I recall I wrote a PoC patch a couple months back.\n>>\n>> It seems to me most of the problems stem from trying to derive sequence\n>> state from decoded WAL changes, which is problematic because of the\n>> non-transactional nature of sequences (i.e. WAL for one transaction\n>> affects other transactions in non-obvious ways). And this approach\n>> simply works around that entirely - instead of trying to deduce the\n>> sequence state from WAL, we'd make sure to write the current sequence\n>> state (or maybe just ID of the sequence) at commit time. 
Which should\n>> eliminate most of the complexity / problems, I think.\n>>\n> \n> That sounds promising but I haven't thought in detail about that approach.\n> \n\nSo, here's a patch doing that. It's a reworked/improved version of the\npatch [1] shared in November.\n\nIt seems to be working pretty nicely. The behavior is a little bit\ndifferent, of course, because we only replicate \"committed\" changes, so\nif you do nextval() in an aborted transaction, that is not replicated. Which\nI think is fine, because we make no durability guarantees for\naborted transactions in general.\n\nBut there are a couple issues too:\n\n1) locking\n\nWe have to read sequence change before the commit, but we must not allow\nreordering (because then the state might go backwards again). I'm not\nsure how serious an impact this could have on performance.\n\n2) dropped sequences\n\nI'm not sure what to do about sequences dropped in the transaction. The\npatch simply attempts to read the current sequence state before the\ncommit, but if the sequence was dropped (in that transaction), that\ncan't happen. I'm not sure if that's OK or not.\n\n3) WAL record\n\nTo replicate the stuff the patch uses a LogicalMessage, but I guess a\nseparate WAL record would be better. But that's a technical detail.\n\n\nregards\n\n[1]\nhttps://www.postgresql.org/message-id/2cd38bab-c874-8e0b-98e7-d9abaaf9806a@enterprisedb.com\n\n>>\n>> I'm not really sure what to do about this. All of those reworks seem\n>> like an extensive redesign of the patch, and considering the last CF is\n>> already over ... not great.\n>>\n>\n> Yeah, I share the same feeling that even if we devise solutions to all\n> the known problems it requires quite some time to ensure everything is\n> correct.\n>\n\nTrue. 
Let's keep working on this for a bit more time and then we can\ndecide what to do.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 6 Apr 2022 16:13:52 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On 4/6/22 16:13, Tomas Vondra wrote:\n> \n> \n> On 4/5/22 12:06, Amit Kapila wrote:\n>> On Mon, Apr 4, 2022 at 3:10 AM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>> I did some experiments over the weekend, exploring how to rework the\n>>> sequence decoding in various ways. Let me share some WIP patches,\n>>> hopefully that can be useful for trying more stuff and moving this\n>>> discussion forward.\n>>>\n>>> I tried two things - (1) accumulating sequence increments in global\n>>> array and then doing something with it, and (2) treating all sequence\n>>> increments as regular changes (in a TXN) and then doing something\n>>> special during the replay. Attached are two patchsets, one for each\n>>> approach.\n>>>\n>>> Note: It's important to remember decoding of sequences is not the only\n>>> code affected by this. The logical messages have the same issue,\n>>> certainly when it comes to transactional vs. non-transactional stuff and\n>>> handling of snapshots. Even if the sequence decoding ends up being\n>>> reverted, we still need to fix that, somehow. And my feeling is the\n>>> solutions ought to be pretty similar in both cases.\n>>>\n>>> Now, regarding the two approaches:\n>>>\n>>> (1) accumulating sequences in global hash table\n>>>\n>>> The main problem with regular sequence increments is that those need to\n>>> be non-transactional - a transaction may use a sequence without any\n>>> WAL-logging, if the WAL was written by an earlier transaction. 
The\n>>> problem is the earlier transaction might have been rolled back, and thus\n>>> simply discarded by the logical decoding. But we still need to apply\n>>> that, in order not to lose the sequence increment.\n>>>\n>>> The current code just applies those non-transactional increments right\n>>> after decoding the increment, but that does not work because we may not\n>>> have a snapshot at that point. And we only have the snapshot when within\n>>> a transaction (AFAICS) so this queues all changes and then applies the\n>>> changes later.\n>>>\n>>> The changes need to be shared by all transactions, so queueing them in a\n>>> global structure works fairly well - otherwise we'd have to walk all\n>>> transactions, in order to see if there are relevant sequence increments.\n>>>\n>>> But some increments may be transactional, e.g. when the sequence is\n>>> created or altered in a transaction. To allow tracking this, this uses a\n>>> hash table, with relfilenode as a key.\n>>>\n>>> There's a couple issues with this, though. Firstly, stashing the changes\n>>> outside transactions, it's not included in memory accounting, it's not\n>>> spilled to disk or streamed, etc. I guess fixing this is possible, but\n>>> it's certainly not straightforward, because we mix increments from many\n>>> different transactions.\n>>>\n>>> A bigger issue is that I'm not sure this actually handles the snapshots\n>>> correctly either.\n>>>\n>>> The non-transactional increments affect all transactions, so when\n>>> ReorderBufferProcessSequences gets executed, it processes all of them,\n>>> no matter the source transaction. Can we be sure the snapshot in the\n>>> applying transaction is the same (or \"compatible\") as the snapshot in\n>>> the source transaction?\n>>>\n>>\n>> I don't think we can assume that. 
I think it is possible that some\n>> other transaction's WAL can be in-between start/end lsn of txn (which\n>> we decide to send) which may not finally reach a consistent state.\n>> Consider a case similar to shown in one of my previous emails:\n>> Session-2:\n>> Begin;\n>> SELECT pg_current_xact_id();\n>>\n>> Session-1:\n>> SELECT 'init' FROM pg_create_logical_replication_slot('test_slot',\n>> 'test_decoding', false, true);\n>>\n>> Session-3:\n>> Begin;\n>> SELECT pg_current_xact_id();\n>>\n>> Session-2:\n>> Commit;\n>> Begin;\n>> INSERT INTO t1_seq SELECT nextval('seq1') FROM generate_series(1,100);\n>>\n>> Session-3:\n>> Commit;\n>>\n>> Session-2:\n>> Commit;\n>>\n>> Here, we send changes (say insert from txn 700) from session-2 because\n>> session-3's commit happens before it. Now, consider another\n>> transaction parallel to txn 700 which generates some WAL related to\n>> sequences but it committed before session-3's commit. So though, its\n>> changes will be the in-between start/end LSN of txn 700 but those\n>> shouldn't be sent.\n>>\n>> I have not tried this and also this may be solvable in some way but I\n>> think processing changes from other TXNs sounds risky to me in terms\n>> of snapshot handling.\n>>\n> \n> Yes, I know this can happen. I was only really thinking about what might\n> happen to the relfilenode of the sequence itself - and I don't think any\n> concurrent transaction could swoop in and change the relfilenode in any\n> meaningful way, due to locking.\n> \n> But of course, if we expect/require to have a perfect snapshot for that\n> exact position in the transaction, this won't work. IMO the whole idea\n> that we can have non-transactional bits in naturally transactional\n> decoding seems a bit suspicious (at least in hindsight).\n> \n> No matter what we do for sequences, though, this still affects logical\n> messages too. 
Not sure what to do there :-(\n> \n>>>\n>>>\n>>> (2) treating sequence change as regular changes\n>>>\n>>> This adopts a different approach - instead of accumulating the sequence\n>>> increments in a global hash table, it treats them as regular changes.\n>>> Which solves the snapshot issue, and issues with spilling to disk,\n>>> streaming and so on.\n>>>\n>>> But it has various other issues with handling concurrent transactions,\n>>> unfortunately, which probably make this approach infeasible:\n>>>\n>>> * The non-transactional stuff has to be applied in the first transaction\n>>> that commits, not in the transaction that generated the WAL. That does\n>>> not work too well with this approach, because we have to walk changes in\n>>> all other transactions.\n>>>\n>>\n>> Why do you want to traverse other TXNs in this approach? Is it because\n>> the current TXN might be using some value of sequence which has been\n>> actually WAL logged in the other transaction but that other\n>> transaction has not been sent yet? I think if we don't send that then\n>> probably replica sequences columns (in some tables) have some values\n>> but actually the sequence itself won't have still that value which\n>> sounds problematic. Is that correct?\n>>\n> \n> Well, how else would you get to sequence changes in the other TXNs?\n> \n> Consider this:\n> \n> T1: begin\n> T2: begin\n> \n> T2: nextval('s') -> writes WAL for 32 values\n> T1: nextval('s') -> gets value without WAL\n> \n> T1: commit\n> T2: commit\n> \n> Now, if we commit T1 without \"applying\" the sequence change from T2, we\n> lose the sequence state. But we still write/replicate the value\n> generated from the sequence.\n> \n>>> * Another serious issue seems to be streaming - if we already streamed\n>>> some of the changes, we can't iterate through them anymore.\n>>>\n>>> Also, having to walk the transactions over and over for each change, to\n>>> apply relevant sequence increments, that's mighty expensive. 
The other\n>>> approach needs to do that too, but walking the global hash table seems\n>>> much cheaper.\n>>>\n>>> The other issue is the handling of aborted transactions - we need to apply\n>>> sequence increments even from those transactions, of course. The other\n>>> approach has this issue too, though.\n>>>\n>>>\n>>> (3) tracking sequences touched by transaction\n>>>\n>>> This is the approach proposed by Hannu Krosing. I haven't explored this\n>>> again yet, but I recall I wrote a PoC patch a couple months back.\n>>>\n>>> It seems to me most of the problems stem from trying to derive sequence\n>>> state from decoded WAL changes, which is problematic because of the\n>>> non-transactional nature of sequences (i.e. WAL for one transaction\n>>> affects other transactions in non-obvious ways). And this approach\n>>> simply works around that entirely - instead of trying to deduce the\n>>> sequence state from WAL, we'd make sure to write the current sequence\n>>> state (or maybe just ID of the sequence) at commit time. Which should\n>>> eliminate most of the complexity / problems, I think.\n>>>\n>>\n>> That sounds promising but I haven't thought in detail about that approach.\n>>\n> \n> So, here's a patch doing that. It's a reworked/improved version of the\n> patch [1] shared in November.\n> \n> It seems to be working pretty nicely. The behavior is a little bit\n> different, of course, because we only replicate \"committed\" changes, so\n> if you do nextval() in an aborted transaction, that is not replicated. Which\n> I think is fine, because we make no durability guarantees for\n> aborted transactions in general.\n> \n> But there are a couple issues too:\n> \n> 1) locking\n> \n> We have to read sequence change before the commit, but we must not allow\n> reordering (because then the state might go backwards again). 
I'm not\n> sure how serious an impact this could have on performance.\n> \n> 2) dropped sequences\n> \n> I'm not sure what to do about sequences dropped in the transaction. The\n> patch simply attempts to read the current sequence state before the\n> commit, but if the sequence was dropped (in that transaction), that\n> can't happen. I'm not sure if that's OK or not.\n> \n> 3) WAL record\n> \n> To replicate the stuff the patch uses a LogicalMessage, but I guess a\n> separate WAL record would be better. But that's a technical detail.\n> \n> \n> regards\n> \n> [1]\n> https://www.postgresql.org/message-id/2cd38bab-c874-8e0b-98e7-d9abaaf9806a@enterprisedb.com\n> \n>>>\n>>> I'm not really sure what to do about this. All of those reworks seem\n>>> like an extensive redesign of the patch, and considering the last CF is\n>>> already over ... not great.\n>>>\n>>\n>> Yeah, I share the same feeling that even if we devise solutions to all\n>> the known problems it requires quite some time to ensure everything is\n>> correct.\n>>\n> \n> True. Let's keep working on this for a bit more time and then we can\n> decide what to do.\n> \n\nI've pushed a revert of all the commits related to this - decoding of\nsequences and test_decoding / built-in replication changes. The approach\ncombining transactional and non-transactional behavior implemented by\nthe patch clearly has issues, and it seems foolish to hope we'll find a\nsimple fix. 
So the changes would have to be much more extensive, and\ndoing that after the last CF seems like an obviously bad idea.\n\nAttached is a rebased patch, implementing the approach based on\nWAL-logging sequences at commit time.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 7 Apr 2022 20:34:50 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "Some typos I found before the patch was reverted.\n\ndiff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml\nindex a6ea6ff3fcf..d4bd8d41c4b 100644\n--- a/doc/src/sgml/logicaldecoding.sgml\n+++ b/doc/src/sgml/logicaldecoding.sgml\n@@ -834,9 +834,8 @@ typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,\n non-transactional increments, the transaction may be either NULL or not\n NULL, depending on if the transaction already has an XID assigned.\n The <parameter>sequence_lsn</parameter> has the WAL location of the\n- sequence update. <parameter>transactional</parameter> says if the\n- sequence has to be replayed as part of the transaction or directly.\n-\n+ sequence update. 
<parameter>transactional</parameter> indicates whether\n+ the sequence has to be replayed as part of the transaction or directly.\n The <parameter>last_value</parameter>, <parameter>log_cnt</parameter> and\n <parameter>is_called</parameter> parameters describe the sequence change.\n </para>\ndiff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c\nindex 60866431db3..b71122cce5d 100644\n--- a/src/backend/replication/logical/reorderbuffer.c\n+++ b/src/backend/replication/logical/reorderbuffer.c\n@@ -927,7 +927,7 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,\n * Treat the sequence increment as transactional?\n *\n * The hash table tracks all sequences created in in-progress transactions,\n- * so we simply do a lookup (the sequence is identified by relfilende). If\n+ * so we simply do a lookup (the sequence is identified by relfilenode). If\n * we find a match, the increment should be handled as transactional.\n */\n bool\n@@ -2255,7 +2255,7 @@ ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,\n \ttuple = &change->data.sequence.tuple->tuple;\n \tseq = (Form_pg_sequence_data) GETSTRUCT(tuple);\n \n-\t/* Only ever called from ReorderBufferApplySequence, so transational. */\n+\t/* Only ever called from ReorderBufferApplySequence, so transactional. 
*/\n \tif (streaming)\n \t\trb->stream_sequence(rb, txn, change->lsn, relation, true,\n \t\t\t\t\t\t\tseq->last_value, seq->log_cnt, seq->is_called);\ndiff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c\nindex a15ce9edb13..8193bfe6515 100644\n--- a/src/backend/utils/cache/relcache.c\n+++ b/src/backend/utils/cache/relcache.c\n@@ -5601,7 +5601,7 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)\n \t\t\t\t\t\t\t\t\t GetSchemaPublications(schemaid, objType));\n \n \t/*\n-\t * If this is a partion (and thus a table), lookup all ancestors and track\n+\t * If this is a partition (and thus a table), lookup all ancestors and track\n \t * all publications them too.\n \t */\n \tif (relation->rd_rel->relispartition)\n\n\n", "msg_date": "Thu, 7 Apr 2022 19:07:56 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "> But of course, if we expect/require to have a perfect snapshot for that\n> exact position in the transaction, this won't work. IMO the whole idea\n> that we can have non-transactional bits in naturally transactional\n> decoding seems a bit suspicious (at least in hindsight).\n>\n> No matter what we do for sequences, though, this still affects logical\n> messages too. Not sure what to do there :-(\n\nHi, I spent some time trying to understand this problem while I was\nevaluating its impact on the DDL replication in [1]. I think for DDL\nwe could always remove the\nnon-transactional bits since DDL will probably always be processed\ntransactionally.\n\nI attempted to solve the problem for messages. 
Here is a potential\nsolution by keeping track of\nthe last decoded/acked non-transactional message/operation LSN and using\nit to check if a non-transactional message record should be skipped\nduring decoding.\nTo do that I added new fields\nReplicationSlotPersistentData.non_xact_op_at,\nXLogReaderState.NonXactOpRecPtr and\nSnapBuild.start_decoding_nonxactop_at.\nThis is the end LSN of the last non-transactional message/operation\ndecoded/acked. I verified this approach solves the issue of\nmissing decoding of non-transactional messages under\nconcurrency/before the builder state reaches SNAPBUILD_CONSISTENT.\nOnce\nthe builder state reaches SNAPBUILD_CONSISTENT, the new field\nReplicationSlotPersistentData.non_xact_op_at can be set\nto ReplicationSlotPersistentData.confirmed_flush.\n\nSimilar to the sequence issue, here is the test case for logical messages:\n\nTest concurrent execution in 3 sessions that allows pg_logical_emit_message in\nsession-2 to happen before we reach a consistent point and commit\nhappens after a consistent point:\n\nSession-2:\n\nBegin;\nSELECT pg_current_xact_id();\n\nSession-1:\nSELECT 'init' FROM pg_create_logical_replication_slot('test_slot',\n'test_decoding', false, true);\n\nSession-3:\n\nBegin;\nSELECT pg_current_xact_id();\n\nSession-2:\n\nCommit;\nBegin;\nSELECT pg_logical_emit_message(true, 'test_decoding', 'msg1');\nSELECT pg_logical_emit_message(false, 'test_decoding', 'msg2');\n\nSession-3:\n\nCommit;\n\nSession-1: (at this point, the session will crash without the fix)\n\nSELECT data FROM pg_logical_slot_get_changes('test_slot', NULL, NULL,\n'force-binary', '0', 'skip-empty-xacts', '1');\ndata\n---------------------------------------------------------------------\nmessage: transactional: 0 prefix: test_decoding, sz: 4 content:msg1\n\nSession-2:\n\nCommit;\n\nSession-1:\n\nSELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\nNULL, 'force-binary', '0', 'skip-empty-xacts', 
'1');\ndata\n---------------------------------------------------------------------\nmessage: transactional: 1 prefix: test_decoding, sz: 4 content:msg2\n\nI also tried the same approach on sequences (on a commit before the\nrevert of sequence replication) and it seems to be working but\nI think it needs further testing.\n\nPatch 0001-Intorduce-new-field-ReplicationSlotPersistentData.no.patch\napplies on master which contains the fix for logical messages.\n\n[1] https://www.postgresql.org/message-id/flat/CAAD30U+pVmfKwUKy8cbZOnUXyguJ-uBNejwD75Kyo=OjdQGJ9g@mail.gmail.com\n\nThoughts?\n\nWith Regards,\nZheng", "msg_date": "Wed, 25 May 2022 16:42:29 -0400", "msg_from": "Zheng Li <zhengli10@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Thu, Apr 07, 2022 at 08:34:50PM +0200, Tomas Vondra wrote:\n> I've pushed a revert af all the commits related to this - decoding of\n> sequences and test_decoding / built-in replication changes.\n\nTwo July buildfarm runs failed with PANIC during standby promotion:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2022-07-19%2004%3A13%3A18\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2022-07-31%2011%3A33%3A13\n\nThe attached patch hacks things so an ordinary x86_64 GNU/Linux machine\nreproduces this consistently. \"git bisect\" then traced the regression to the\nabove revert commit (2c7ea57e56ca5f668c32d4266e0a3e45b455bef5). The pg_ctl\ntest suite passes under this hack in all supported branches, and it passed on\nv15 until that revert. Would you investigate?\n\nThe buildfarm animal uses keep_error_builds. From kept data directories, I\ndeduced these events:\n\n- After the base backup, auto-analyze ran on the primary and wrote WAL.\n- Standby streamed and wrote up to 0/301FFF.\n- Standby received the promote signal. Terminated streaming. 
WAL page at 0/302000 remained all-zeros.\n- Somehow, end-of-recovery became a PANIC.\n\nKey portions from buildfarm logs:\n\n=== good run standby2 log\n2022-07-21 22:55:16.860 UTC [25034912:5] LOG: received promote request\n2022-07-21 22:55:16.878 UTC [26804682:2] FATAL: terminating walreceiver process due to administrator command\n2022-07-21 22:55:16.878 UTC [25034912:6] LOG: invalid record length at 0/3000060: wanted 24, got 0\n2022-07-21 22:55:16.878 UTC [25034912:7] LOG: redo done at 0/3000028 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.42 s\n2022-07-21 22:55:16.878 UTC [25034912:8] LOG: selected new timeline ID: 2\n2022-07-21 22:55:17.004 UTC [25034912:9] LOG: archive recovery complete\n2022-07-21 22:55:17.005 UTC [23724044:1] LOG: checkpoint starting: force\n2022-07-21 22:55:17.008 UTC [14549364:4] LOG: database system is ready to accept connections\n2022-07-21 22:55:17.093 UTC [23724044:2] LOG: checkpoint complete: wrote 3 buffers (2.3%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.019 s, sync=0.001 s, total=0.089 s; sync files=0, longest=0.000 s, average=0.000 s; distance=16384 kB, estimate=16384 kB\n2022-07-21 22:55:17.143 UTC [27394418:1] [unknown] LOG: connection received: host=[local]\n2022-07-21 22:55:17.144 UTC [27394418:2] [unknown] LOG: connection authorized: user=nm database=postgres application_name=003_promote.pl\n2022-07-21 22:55:17.147 UTC [27394418:3] 003_promote.pl LOG: statement: SELECT pg_is_in_recovery()\n2022-07-21 22:55:17.148 UTC [27394418:4] 003_promote.pl LOG: disconnection: session time: 0:00:00.005 user=nm database=postgres host=[local]\n2022-07-21 22:55:58.301 UTC [14549364:5] LOG: received immediate shutdown request\n2022-07-21 22:55:58.337 UTC [14549364:6] LOG: database system is shut down\n\n=== failed run standby2 log, with my annotations\n2022-07-19 05:28:22.136 UTC [7340406:5] LOG: received promote request\n2022-07-19 05:28:22.163 UTC [8519860:2] FATAL: terminating walreceiver process due to 
administrator command\n2022-07-19 05:28:22.166 UTC [7340406:6] LOG: invalid magic number 0000 in log segment 000000010000000000000003, offset 131072\n New compared to the good run. XLOG_PAGE_MAGIC didn't match. This implies the WAL ended at a WAL page boundary.\n2022-07-19 05:28:22.166 UTC [7340406:7] LOG: redo done at 0/301F168 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.18 s\n2022-07-19 05:28:22.166 UTC [7340406:8] LOG: last completed transaction was at log time 2022-07-19 05:28:13.956716+00\n New compared to the good run. The good run had no transactions to replay. The bad run replayed records from an auto-analyze.\n2022-07-19 05:28:22.166 UTC [7340406:9] PANIC: invalid record length at 0/301F168: wanted 24, got 0\n More WAL overall in bad run, due to auto-analyze. End of recovery wrongly considered a PANIC.\n2022-07-19 05:28:22.583 UTC [8388800:4] LOG: startup process (PID 7340406) was terminated by signal 6: IOT/Abort trap\n2022-07-19 05:28:22.584 UTC [8388800:5] LOG: terminating any other active server processes\n2022-07-19 05:28:22.587 UTC [8388800:6] LOG: shutting down due to startup process failure\n2022-07-19 05:28:22.627 UTC [8388800:7] LOG: database system is shut down\n\nLet me know if I've left out details you want; I may be able to dig more out\nof the buildfarm artifacts.", "msg_date": "Sat, 6 Aug 2022 17:36:27 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 8/7/22 02:36, Noah Misch wrote:\n> On Thu, Apr 07, 2022 at 08:34:50PM +0200, Tomas Vondra wrote:\n>> I've pushed a revert af all the commits related to this - decoding of\n>> sequences and test_decoding / built-in replication changes.\n> \n> Two July buildfarm runs failed with PANIC during standby promotion:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2022-07-19%2004%3A13%3A18\n> 
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2022-07-31%2011%3A33%3A13\n> \n> The attached patch hacks things so an ordinary x86_64 GNU/Linux machine\n> reproduces this consistently. \"git bisect\" then traced the regression to the\n> above revert commit (2c7ea57e56ca5f668c32d4266e0a3e45b455bef5). The pg_ctl\n> test suite passes under this hack in all supported branches, and it passed on\n> v15 until that revert. Would you investigate?\n> \n> The buildfarm animal uses keep_error_builds. From kept data directories, I\n> deduced these events:\n> \n> - After the base backup, auto-analyze ran on the primary and wrote WAL.\n> - Standby streamed and wrote up to 0/301FFF.\n> - Standby received the promote signal. Terminated streaming. WAL page at 0/302000 remained all-zeros.\n> - Somehow, end-of-recovery became a PANIC.\n> \n\nI think it'd be really bizarre if this was due to the revert, as that\nsimply undoes minor WAL changes (and none of this should affect what\nhappens at WAL page boundary etc.). It just restores WAL as it was\nbefore 0da92dc, nothing particularly complicated. I did go through all\nof the changes again and I haven't spotted anything particularly\nsuspicious, but I'll give it another try tomorrow.\n\nHowever, I did try bisecting this using the attached patch, and that\ndoes not suggest the issue is in the revert commit. It actually fails\nall the way back to 5dc0418fab2, and it starts working on 9553b4115f1.\n\n ...\n 6392f2a0968 Try to silence \"-Wmissing-braces\" complaints in ...\n => 5dc0418fab2 Prefetch data referenced by the WAL, take II.\n 9553b4115f1 Fix warning introduced in 5c279a6d350.\n ...\n\nThis is merely 10 commits before the revert, and it seems way more\nrelated to WAL. 
Also, adding this to the two nodes in 003_standby.pl\nmakes the issue go away, it seems:\n\n $node_standby->append_conf('postgresql.conf',\n\t qq(recovery_prefetch = off));\n\nI'd bet it's about WAL prefetching, not the revert, and the bisect was a\nbit incorrect, because the commits are close and the failures happen to\nbe rare. (Presumably you first did the bisect and then wrote the patch\nthat reproduces this, right?)\n\nAdding Thomas Munro to the thread, he's the WAL prefetching expert ;-)\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 7 Aug 2022 15:18:52 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Sun, Aug 07, 2022 at 03:18:52PM +0200, Tomas Vondra wrote:\n> On 8/7/22 02:36, Noah Misch wrote:\n> > On Thu, Apr 07, 2022 at 08:34:50PM +0200, Tomas Vondra wrote:\n> >> I've pushed a revert af all the commits related to this - decoding of\n> >> sequences and test_decoding / built-in replication changes.\n> > \n> > Two July buildfarm runs failed with PANIC during standby promotion:\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2022-07-19%2004%3A13%3A18\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2022-07-31%2011%3A33%3A13\n> > \n> > The attached patch hacks things so an ordinary x86_64 GNU/Linux machine\n> > reproduces this consistently. \"git bisect\" then traced the regression to the\n> > above revert commit (2c7ea57e56ca5f668c32d4266e0a3e45b455bef5). The pg_ctl\n> > test suite passes under this hack in all supported branches, and it passed on\n> > v15 until that revert. Would you investigate?\n> > \n> > The buildfarm animal uses keep_error_builds. 
From kept data directories, I\n> > deduced these events:\n> > \n> > - After the base backup, auto-analyze ran on the primary and wrote WAL.\n> > - Standby streamed and wrote up to 0/301FFF.\n> > - Standby received the promote signal. Terminated streaming. WAL page at 0/302000 remained all-zeros.\n> > - Somehow, end-of-recovery became a PANIC.\n> \n> I think it'd be really bizarre if this was due to the revert, as that\n> simply undoes minor WAL changes (and none of this should affect what\n> happens at WAL page boundary etc.). It just restores WAL as it was\n> before 0da92dc, nothing particularly complicated. I did go through all\n> of the changes again and I haven't spotted anything particularly\n> suspicious, but I'll give it another try tomorrow.\n> \n> However, I did try bisecting this using the attached patch, and that\n> does not suggest the issue is in the revert commit. It actually fails\n> all the way back to 5dc0418fab2, and it starts working on 9553b4115f1.\n> \n> ...\n> 6392f2a0968 Try to silence \"-Wmissing-braces\" complaints in ...\n> => 5dc0418fab2 Prefetch data referenced by the WAL, take II.\n> 9553b4115f1 Fix warning introduced in 5c279a6d350.\n> ...\n> \n> This is merely 10 commits before the revert, and it seems way more\n> related to WAL. Also, adding this to the two nodes in 003_standby.pl\n> makes the issue go away, it seems:\n> \n> $node_standby->append_conf('postgresql.conf',\n> \t qq(recovery_prefetch = off));\n> \n> I'd bet it's about WAL prefetching, not the revert, and the bisect was a\n> bit incorrect, because the commits are close and the failures happen to\n> be rare. (Presumably you first did the bisect and then wrote the patch\n> that reproduces this, right?)\n\nNo. I wrote the patch, then used the patch to drive the bisect. With ten\niterations, commit 2c7ea57 passes 0/10, while 2c7ea57^ passes 10/10. I've now\ntried recovery_prefetch=off. 
With that, the test passes 10/10 at 2c7ea57.\nGiven your observation of a failure at 5dc0418fab2, I agree with your\nconclusion. Whatever the role of 2c7ea57 in exposing the failure on my\nmachine, a root cause in WAL prefetching looks more likely.\n\n> Adding Thomas Munro to the thread, he's the WAL prefetching expert ;-)\n\n\n", "msg_date": "Sun, 7 Aug 2022 12:12:05 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Mon, Aug 8, 2022 at 7:12 AM Noah Misch <noah@leadboat.com> wrote:\n> On Sun, Aug 07, 2022 at 03:18:52PM +0200, Tomas Vondra wrote:\n> > I'd bet it's about WAL prefetching, not the revert, and the bisect was a\n> > bit incorrect, because the commits are close and the failures happen to\n> > be rare. (Presumably you first did the bisect and then wrote the patch\n> > that reproduces this, right?)\n>\n> No. I wrote the patch, then used the patch to drive the bisect. With ten\n> iterations, commit 2c7ea57 passes 0/10, while 2c7ea57^ passes 10/10. I've now\n> tried recovery_prefetch=off. With that, the test passes 10/10 at 2c7ea57.\n> Given your observation of a failure at 5dc0418fab2, I agree with your\n> conclusion. Whatever the role of 2c7ea57 in exposing the failure on my\n> machine, a root cause in WAL prefetching looks more likely.\n>\n> > Adding Thomas Munro to the thread, he's the WAL prefetching expert ;-)\n\nThanks for the repro patch and bisection work. Looking...\n\n\n", "msg_date": "Mon, 8 Aug 2022 09:09:53 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Mon, Aug 8, 2022 at 9:09 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Thanks for the repro patch and bisection work. Looking...\n\nI don't have the complete explanation yet, but it's something like\nthis. 
We hit the following branch in xlogrecovery.c...\n\n if (StandbyMode &&\n !XLogReaderValidatePageHeader(xlogreader,\ntargetPagePtr, readBuf))\n {\n /*\n * Emit this error right now then retry this page\nimmediately. Use\n * errmsg_internal() because the message was already translated.\n */\n if (xlogreader->errormsg_buf[0])\n ereport(emode_for_corrupt_record(emode,\nxlogreader->EndRecPtr),\n (errmsg_internal(\"%s\",\nxlogreader->errormsg_buf)));\n\n /* reset any error XLogReaderValidatePageHeader()\nmight have set */\n xlogreader->errormsg_buf[0] = '\\0';\n goto next_record_is_invalid;\n }\n\n... but, even though there was a (suppressed) error, nothing\ninvalidates the reader's page buffer. Normally,\nXLogReadValidatePageHeader() failure or any other kind of error\nencountered by xlogreader.c'd decoding logic would do that, but here\nthe read_page callback is directly calling the header validation.\nWithout prefetching, that doesn't seem to matter, but reading ahead\ncan cause us to have the problem page in our buffer at the wrong time,\nand then not re-read it when we should. Or something like that.\n\nThe attached patch that simply moves the cache invalidation into\nreport_invalid_record(), so that it's reached by the above code and\neverything else that reports an error, seems to fix the problem in\nsrc/bin/pg_ctl/t/003_promote.pl with Noah's spanner-in-the-works patch\napplied, and passes check-world without it. I need to look at this\nsome more, though, and figure out if it's the right fix.", "msg_date": "Mon, 8 Aug 2022 18:15:46 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "At Mon, 8 Aug 2022 18:15:46 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Mon, Aug 8, 2022 at 9:09 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Thanks for the repro patch and bisection work. 
Looking...\n> \n> I don't have the complete explanation yet, but it's something like\n> this. We hit the following branch in xlogrecovery.c...\n> \n> if (StandbyMode &&\n> !XLogReaderValidatePageHeader(xlogreader,\n> targetPagePtr, readBuf))\n> {\n> /*\n> * Emit this error right now then retry this page\n> immediately. Use\n> * errmsg_internal() because the message was already translated.\n> */\n> if (xlogreader->errormsg_buf[0])\n> ereport(emode_for_corrupt_record(emode,\n> xlogreader->EndRecPtr),\n> (errmsg_internal(\"%s\",\n> xlogreader->errormsg_buf)));\n> \n> /* reset any error XLogReaderValidatePageHeader()\n> might have set */\n> xlogreader->errormsg_buf[0] = '\\0';\n> goto next_record_is_invalid;\n> }\n> \n> ... but, even though there was a (suppressed) error, nothing\n> invalidates the reader's page buffer. Normally,\n> XLogReadValidatePageHeader() failure or any other kind of error\n> encountered by xlogreader.c'd decoding logic would do that, but here\n> the read_page callback is directly calling the header validation.\n> Without prefetching, that doesn't seem to matter, but reading ahead\n> can cause us to have the problem page in our buffer at the wrong time,\n> and then not re-read it when we should. Or something like that.\n> \n> The attached patch that simply moves the cache invalidation into\n> report_invalid_record(), so that it's reached by the above code and\n> everything else that reports an error, seems to fix the problem in\n> src/bin/pg_ctl/t/003_promote.pl with Noah's spanner-in-the-works patch\n> applied, and passes check-world without it. I need to look at this\n> some more, though, and figure out if it's the right fix.\n\nIf WaitForWALToBecomeAvailable returned by promotion, ReadPageInteral\nmisses the chance to inavlidate reader-state. That state is not an\nerror while in StandbyMode.\n\nIn the repro case, XLogPageRead returns XLREAD_WOULDBLOCK after the\nfirst failure. 
This situation (of course) was not considered when\nthat code was introduced. If that function is going to return with\nXLREAD_WOULDBLOCK while lastSourceFailed, it should be turned into\nXLREAD_FAIL. So, the following also works.\n\ndiff --git a/src/backend/access/transam/xlogrecovery.c b/src/backend/access/transam/xlogrecovery.c\nindex 21088e78f6..9f242fe656 100644\n--- a/src/backend/access/transam/xlogrecovery.c\n+++ b/src/backend/access/transam/xlogrecovery.c\n@@ -3220,7 +3220,9 @@ retry:\n \t\t\t\t\t\t\t\t\t\t\txlogreader->nonblocking))\n \t\t{\n \t\t\tcase XLREAD_WOULDBLOCK:\n-\t\t\t\treturn XLREAD_WOULDBLOCK;\n+\t\t\t\tif (!lastSourceFailed)\n+\t\t\t\t\treturn XLREAD_WOULDBLOCK;\n+\t\t\t\t/* Fall through. */\n \t\t\tcase XLREAD_FAIL:\n \t\t\t\tif (readFile >= 0)\n \t\t\t\t\tclose(readFile);\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 08 Aug 2022 17:33:22 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "At Mon, 08 Aug 2022 17:33:22 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> If WaitForWALToBecomeAvailable returned by promotion, ReadPageInteral\n> misses the chance to inavlidate reader-state. That state is not an\n> error while in StandbyMode.\n\nMmm... Maybe I wanted to say: (Still I'm not sure the rewrite works..)\n\nIf WaitForWALToBecomeAvailable returned by promotion, ReadPageInteral\nwould miss the chance to invalidate reader-state. When XLogPageRead\nis called in blocking mode while in StandbyMode (that is, the\ntraditional condition) , the function continues retrying until it\nsucceeds, or returns XLRAD_FAIL if promote is triggered. In other\nwords, it was not supposed to return non-failure while the header\nvalidation is failing while in standby mode. 
But while in nonblocking\nmode, the function can return non-failure with lastSourceFailed =\ntrue, which seems wrong.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 08 Aug 2022 17:56:44 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Mon, Aug 08, 2022 at 06:15:46PM +1200, Thomas Munro wrote:\n> The attached patch that simply moves the cache invalidation into\n> report_invalid_record(), so that it's reached by the above code and\n> everything else that reports an error, seems to fix the problem in\n> src/bin/pg_ctl/t/003_promote.pl with Noah's spanner-in-the-works patch\n> applied, and passes check-world without it. I need to look at this\n> some more, though, and figure out if it's the right fix.\n\nThomas, where are you on this open item? A potential PANIC at\npromotion is bad. One possible exit path would be to switch the\ndefault of recovery_prefetch, though that's a kind of last-resort\noption seen from here.\n--\nMichael", "msg_date": "Tue, 23 Aug 2022 13:21:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Tue, Aug 23, 2022 at 4:21 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Aug 08, 2022 at 06:15:46PM +1200, Thomas Munro wrote:\n> > The attached patch that simply moves the cache invalidation into\n> > report_invalid_record(), so that it's reached by the above code and\n> > everything else that reports an error, seems to fix the problem in\n> > src/bin/pg_ctl/t/003_promote.pl with Noah's spanner-in-the-works patch\n> > applied, and passes check-world without it. I need to look at this\n> > some more, though, and figure out if it's the right fix.\n>\n> Thomas, where are you on this open item? A potential PANIC at\n> promotion is bad. 
One possible exit path would be to switch the\n> default of recovery_prefetch, though that's a kind of last-resort\n> option seen from here.\n\nI will get a fix committed this week -- I need to study\nHoriguchi-san's analysis...\n\n\n", "msg_date": "Tue, 23 Aug 2022 16:32:46 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Tue, Aug 23, 2022 at 12:33 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Aug 23, 2022 at 4:21 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Mon, Aug 08, 2022 at 06:15:46PM +1200, Thomas Munro wrote:\n> > > The attached patch that simply moves the cache invalidation into\n> > > report_invalid_record(), so that it's reached by the above code and\n> > > everything else that reports an error, seems to fix the problem in\n> > > src/bin/pg_ctl/t/003_promote.pl with Noah's spanner-in-the-works patch\n> > > applied, and passes check-world without it. I need to look at this\n> > > some more, though, and figure out if it's the right fix.\n> >\n> > Thomas, where are you on this open item? A potential PANIC at\n> > promotion is bad. 
One possible exit path would be to switch the\n> > default of recovery_prefetch, though that's a kind of last-resort\n> > option seen from here.\n>\n> I will get a fix committed this week -- I need to study\n> Horiguchi-san's analysis...\n\nHi!\n\nI haven't been paying attention to this thread, but my attention was\njust drawn to it, and I'm wondering if the issue you're trying to\ntrack down here is actually the same as what I reported yesterday\nhere:\n\nhttps://www.postgresql.org/message-id/CA+TgmoY0Lri=fCueg7m_2R_bSspUb1F8OFycEGaHNJw_EUW-=Q@mail.gmail.com\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 23 Aug 2022 11:04:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Wed, Aug 24, 2022 at 3:04 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I haven't been paying attention to this thread, but my attention was\n> just drawn to it, and I'm wondering if the issue you're trying to\n> track down here is actually the same as what I reported yesterday\n> here:\n>\n> https://www.postgresql.org/message-id/CA+TgmoY0Lri=fCueg7m_2R_bSspUb1F8OFycEGaHNJw_EUW-=Q@mail.gmail.com\n\nSummarising a chat we had about this: Different bug, similar\ningredients. Robert describes a screw-up in what is written, but here\nwe're talking about a cache invalidation bug while reading.\n\n\n", "msg_date": "Wed, 24 Aug 2022 12:51:02 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Mon, Aug 8, 2022 at 8:56 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Mon, 08 Aug 2022 17:33:22 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > If WaitForWALToBecomeAvailable returned by promotion, ReadPageInteral\n> > misses the chance to inavlidate reader-state. 
That state is not an\n> > error while in StandbyMode.\n>\n> Mmm... Maybe I wanted to say: (Still I'm not sure the rewrite works..)\n>\n> If WaitForWALToBecomeAvailable returned by promotion, ReadPageInteral\n> would miss the chance to invalidate reader-state. When XLogPageRead\n> is called in blocking mode while in StandbyMode (that is, the\n> traditional condition) , the function continues retrying until it\n> succeeds, or returns XLRAD_FAIL if promote is triggered. In other\n> words, it was not supposed to return non-failure while the header\n> validation is failing while in standby mode. But while in nonblocking\n> mode, the function can return non-failure with lastSourceFailed =\n> true, which seems wrong.\n\nNew ideas:\n\n0001: Instead of figuring out when to invalidate the cache, let's\njust invalidate it before every read attempt. It is only marked valid\nafter success (ie state->readLen > 0). No need to worry about error\ncases.\n\n0002: While here, I don't like xlogrecovery.c clobbering\nxlogreader.c's internal error state, so I think we should have a\nfunction for that with a documented purpose. It was also a little\ninconsistent that it didn't clear a flag (but not buggy AFAICS; kinda\nwondering if I should just get rid of that flag, but that's for\nanother day).\n\n0003: Thinking about your comments above made me realise that I don't\nreally want XLogReadPage() to be internally retrying for obscure\nfailures while reading ahead. I think I prefer to give up on\nprefetching as soon as anything tricky happens, and deal with\ncomplexities once recovery catches up to that point. 
I am still\nthinking about this point.\n\nHere's the patch set I'm testing.", "msg_date": "Mon, 29 Aug 2022 22:21:30 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "\n\nOn 8/29/22 12:21, Thomas Munro wrote:\n> On Mon, Aug 8, 2022 at 8:56 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> At Mon, 08 Aug 2022 17:33:22 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>>> If WaitForWALToBecomeAvailable returned by promotion, ReadPageInteral\n>>> misses the chance to inavlidate reader-state. That state is not an\n>>> error while in StandbyMode.\n>>\n>> Mmm... Maybe I wanted to say: (Still I'm not sure the rewrite works..)\n>>\n>> If WaitForWALToBecomeAvailable returned by promotion, ReadPageInteral\n>> would miss the chance to invalidate reader-state. When XLogPageRead\n>> is called in blocking mode while in StandbyMode (that is, the\n>> traditional condition) , the function continues retrying until it\n>> succeeds, or returns XLRAD_FAIL if promote is triggered. In other\n>> words, it was not supposed to return non-failure while the header\n>> validation is failing while in standby mode. But while in nonblocking\n>> mode, the function can return non-failure with lastSourceFailed =\n>> true, which seems wrong.\n> \n> New ideas:\n> \n> 0001: Instead of figuring out when to invalidate the cache, let's\n> just invalidate it before every read attempt. It is only marked valid\n> after success (ie state->readLen > 0). No need to worry about error\n> cases.\n> \n\nMaybe I misunderstand how all this works, but won't this have a really\nbad performance impact. 
If not, why do we need the cache at all?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 29 Aug 2022 20:04:13 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical decoding and replication of sequences" }, { "msg_contents": "On Tue, Aug 30, 2022 at 6:04 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 8/29/22 12:21, Thomas Munro wrote:\n> > 0001: Instead of figuring out when to invalidate the cache, let's\n> > just invalidate it before every read attempt. It is only marked valid\n> > after success (ie state->readLen > 0). No need to worry about error\n> > cases.\n>\n> Maybe I misunderstand how all this works, but won't this have a really\n> bad performance impact. If not, why do we need the cache at all?\n\nIt's a bit confusing because there are several levels of \"read\". The\ncache remains valid as long as the caller of ReadPageInternal() keeps\nasking for data that is in range (see early return after comment \"/*\ncheck whether we have all the requested data already */\"). As soon as\nthe caller asks for something not in range, this patch marks the cache\ninvalid before calling the page_read() callback (= XLogPageRead()).\nIt is only marked valid again after that succeeds. Here's a new\nversion with no code change, just a better commit message to try to\nexplain that more clearly.", "msg_date": "Tue, 30 Aug 2022 09:42:02 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding and replication of sequences" } ]
[ { "msg_contents": "In release-14.sgml:\n\n<!--\nAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n2021-03-04 [3174d69fb] Remove server and libpq support for old FE/BE protocol v\n-->\n\n <para>\n Remove server and <link linkend=\"libpq\">libpq</link> support\n for the version 2 <link linkend=\"protocol\">wire protocol</link>\n (Heikki Linnakangas)\n </para>\n\n <para>\n This was last used as the default in Postgres 7.2 (year 2002).\n </para>\n </listitem>\n\nI thought the last version which used the protocol as the default was\n7.3, not 7.2? Because v3 was introduced in 7.4.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Tue, 08 Jun 2021 09:13:29 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Remove server and libpq support for the version 2 wire protocol" }, { "msg_contents": "On Tue, Jun 8, 2021 at 09:13:29AM +0900, Tatsuo Ishii wrote:\n> In release-14.sgml:\n> \n> <!--\n> Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> 2021-03-04 [3174d69fb] Remove server and libpq support for old FE/BE protocol v\n> -->\n> \n> <para>\n> Remove server and <link linkend=\"libpq\">libpq</link> support\n> for the version 2 <link linkend=\"protocol\">wire protocol</link>\n> (Heikki Linnakangas)\n> </para>\n> \n> <para>\n> This was last used as the default in Postgres 7.2 (year 2002).\n> </para>\n> </listitem>\n> \n> I thought the last version which used the protocol as the default was\n> 7.3, not 7.2? Because v3 was introduced in 7.4.\n\nAh, yes, correct. 
New text is:\n\n This was last used as the default in Postgres 7.3 (year 2002).\n\nThanks for finding that mistake.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 8 Jun 2021 16:48:12 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Remove server and libpq support for the version 2 wire protocol" } ]
[ { "msg_contents": "Hello!\n\nWhile I was using pgbench from the master branch, I discovered an error on\npgbench logs.\nWhen I run pgbench, the log file contains a lot of redundant 0s.\n\nI ran git bisect and found out that this error occured since the commit\n547f04e7348b6ed992bd4a197d39661fe7c25097 (Mar 10, 2021).\n\nI ran the tests below on the problematic commit and the commit before it.\n(I used Debian 10.9 and Ubuntu 20.04)\n\n=====\n./pg_ctl -D /tmp/data init\n./pg_ctl -D /tmp/data start\n\n./pgbench -i -s 1 postgres\n\n./pgbench -r -c 1 -j 1 -T 1 --aggregate-interval 1 -l --log-prefix\npgbench-log postgres\n./pgbench -r -c 2 -j 4 -T 60 --aggregate-interval 1 -l --log-prefix\npgbench-log postgres\n./pgbench -r -c 2 -j 4 -T 60 --aggregate-interval 10 -l --log-prefix\npgbench-log postgres\n=====\n\nThe result screenshots are in the attachments.\n(I didn't attach the problematic 60 second log file which was bigger than\n1GB.)\n\nPlease take a look at this issue.\n\nThank you!\n\nRegards,\nYoungHwan", "msg_date": "Tue, 8 Jun 2021 12:09:47 +0900", "msg_from": "YoungHwan Joo <rulyox@gmail.com>", "msg_from_op": true, "msg_subject": "Error on pgbench logs" }, { "msg_contents": "At Tue, 8 Jun 2021 12:09:47 +0900, YoungHwan Joo <rulyox@gmail.com> wrote in \n> Hello!\n> \n> While I was using pgbench from the master branch, I discovered an error on\n> pgbench logs.\n> When I run pgbench, the log file contains a lot of redundant 0s.\n> \n> I ran git bisect and found out that this error occured since the commit\n> 547f04e7348b6ed992bd4a197d39661fe7c25097 (Mar 10, 2021).\n\nUgh! Thanks for the hunting!\n\nThe cause is that the time unit is changed to usec but the patch\nforgot to convert agg_interval into the same unit in doLog. I tempted\nto change it into pg_time_usec_t but it seems that it is better that\nthe unit is same with other similar variables like duration.\n\nSo I think that the attached fix works for you. 
(However, I'm not sure\nthe emitted log is correct or not, though..)\n\nI didn't check for the similar bugs for other variables yet.\n\n> I ran the tests below on the problematic commit and the commit before it.\n> (I used Debian 10.9 and Ubuntu 20.04)\n> \n> =====\n> ./pg_ctl -D /tmp/data init\n> ./pg_ctl -D /tmp/data start\n> \n> ./pgbench -i -s 1 postgres\n> \n> ./pgbench -r -c 1 -j 1 -T 1 --aggregate-interval 1 -l --log-prefix\n> pgbench-log postgres\n> ./pgbench -r -c 2 -j 4 -T 60 --aggregate-interval 1 -l --log-prefix\n> pgbench-log postgres\n> ./pgbench -r -c 2 -j 4 -T 60 --aggregate-interval 10 -l --log-prefix\n> pgbench-log postgres\n> =====\n> \n> The result screenshots are in the attachments.\n> (I didn't attach the problematic 60 second log file which was bigger than\n> 1GB.)\n> \n> Please take a look at this issue.\n> \n> Thank you!\n> \n> Regards,\n> YoungHwan\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 08 Jun 2021 18:59:04 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "On Tue, Jun 08, 2021 at 06:59:04PM +0900, Kyotaro Horiguchi wrote:\n> The cause is that the time unit is changed to usec but the patch\n> forgot to convert agg_interval into the same unit in doLog. I tempted\n> to change it into pg_time_usec_t but it seems that it is better that\n> the unit is same with other similar variables like duration.\n\nAs the option remains in seconds, I think that it is simpler to keep\nit as an int, and do the conversion where need be. It would be good\nto document that agg_interval is in seconds where the variable is\ndefined.\n\n- while (agg->start_time + agg_interval <= now)\n+ while (agg->start_time + agg_interval * 1000000 <= now)\nIn need of a cast with (int64), no?\n\nThe other things are \"progress\" and \"duration\". 
These look correctly\nhandled to me.\n--\nMichael", "msg_date": "Wed, 9 Jun 2021 12:46:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "Hello Michael,\n\n>> The cause is that the time unit is changed to usec but the patch\n>> forgot to convert agg_interval into the same unit in doLog. I tempted\n>> to change it into pg_time_usec_t but it seems that it is better that\n>> the unit is same with other similar variables like duration.\n>\n> As the option remains in seconds, I think that it is simpler to keep\n> it as an int, and do the conversion where need be. It would be good\n> to document that agg_interval is in seconds where the variable is\n> defined.\n>\n> - while (agg->start_time + agg_interval <= now)\n> + while (agg->start_time + agg_interval * 1000000 <= now)\n>\n> In need of a cast with (int64), no?\n\nYes, it would be better. In practice I would not expect the interval to be \nlarge enough to trigger an overflow (maxint µs is about 36 minutes).\n\n> The other things are \"progress\" and \"duration\". These look correctly\n> handled to me.\n\nHmmm… What about tests?\n\nI'm pretty sure that I wrote a test about time sensitive features with a 2 \nseconds run (-T, -P, maybe these aggregates as well), but the test needed \nto be quite loose so as to pass on slow/heavy loaded hosts, and was \nremoved at some point on the ground that it was somehow imprecise.\nI'm not sure whether it is worth to try again.\n\n-- \nFabien.", "msg_date": "Wed, 9 Jun 2021 09:46:10 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "\nBonjour Michaël,\n\nHere is an updated patch. While having a look at Kyotaro-san patch, I \nnoticed that the aggregate stuff did not print the last aggregate. I think \nthat it is a side effect of switching the precision from per-second to \nper-µs. 
I've done an attempt at also fixing that which seems to work.\n\n-- \nFabien.", "msg_date": "Thu, 10 Jun 2021 23:29:30 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "On Thu, Jun 10, 2021 at 11:29:30PM +0200, Fabien COELHO wrote:\n> +\t\t/* flush remaining stats */\n> +\t\tif (!logged && latency == 0.0)\n> +\t\t\tlogAgg(logfile, agg);\n\nYou are right, this is missing the final stats. Why the choice of\nlatency here for the check? That's just to make the difference\nbetween the case where doLog() is called while processing the\nbenchmark for the end of the transaction and the case where doLog() is\ncalled once a thread ends, no? Wouldn't it be better to do a final\npush of the states once a thread reaches CSTATE_FINISHED or\nCSTATE_ABORTED instead?\n--\nMichael", "msg_date": "Fri, 11 Jun 2021 15:23:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "At Fri, 11 Jun 2021 15:23:41 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Jun 10, 2021 at 11:29:30PM +0200, Fabien COELHO wrote:\n> > +\t\t/* flush remaining stats */\n> > +\t\tif (!logged && latency == 0.0)\n> > +\t\t\tlogAgg(logfile, agg);\n> \n> You are right, this is missing the final stats. Why the choice of\n> latency here for the check? That's just to make the difference\n> between the case where doLog() is called while processing the\n> benchmark for the end of the transaction and the case where doLog() is\n> called once a thread ends, no? 
Wouldn't it be better to do a final\n> push of the states once a thread reaches CSTATE_FINISHED or\n> CSTATE_ABORTED instead?\n\nDoesn't threadRun already doing that?\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 11 Jun 2021 15:56:55 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "At Fri, 11 Jun 2021 15:56:55 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Doesn't threadRun already doing that?\n\n(s/Does/Is)\n\nThat's once per thread, sorry for the noise.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 11 Jun 2021 16:02:04 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "Bonjour Michaël,\n\n>> +\t\t/* flush remaining stats */\n>> +\t\tif (!logged && latency == 0.0)\n>> +\t\t\tlogAgg(logfile, agg);\n>\n> You are right, this is missing the final stats. Why the choice of\n> latency here for the check?\n\nFor me zero latency really says that there is no actual transaction to \ncount, so it is a good trigger for the closing call. I did not wish to add \na new \"flush\" parameter, or a specific function. I agree that it looks \nstrange, though.\n\n> That's just to make the difference between the case where doLog() is \n> called while processing the benchmark for the end of the transaction and \n> the case where doLog() is called once a thread ends, no?\n\nYes.\n\n> Wouldn't it be better to do a final push of the states once a thread \n> reaches CSTATE_FINISHED or CSTATE_ABORTED instead?\n\nThe call was already in place at the end of threadRun and had just become \nineffective. I did not wish to revisit its place and change the overall \nstructure, it is just a bug fix. 
I agree that it could be done differently \nwith the added logAgg function which could be called directly. Attached \nanother version which does that.\n\n-- \nFabien.", "msg_date": "Fri, 11 Jun 2021 16:09:10 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "On Fri, 11 Jun 2021 16:09:10 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> \n> Bonjour Michaël,\n> \n> >> +\t\t/* flush remaining stats */\n> >> +\t\tif (!logged && latency == 0.0)\n> >> +\t\t\tlogAgg(logfile, agg);\n> >\n> > You are right, this is missing the final stats. Why the choice of\n> > latency here for the check?\n> \n> For me zero latency really says that there is no actual transaction to \n> count, so it is a good trigger for the closing call. I did not wish to add \n> a new \"flush\" parameter, or a specific function. I agree that it looks \n> strange, though.\n\nIt will not work if the transaction is skipped, in which case latency is 0.0.\nIt would work if we check also \"skipped\" as bellow.\n\n+\t\tif (!logged && !skipped && latency == 0.0)\n\nHowever, it still might not work if the latency is so small so that we could\nobserve latency == 0.0. I observed this when I used a script that contained\nonly a meta command. This is not usual and would be a corner case, though.\n \n> > Wouldn't it be better to do a final push of the states once a thread \n> > reaches CSTATE_FINISHED or CSTATE_ABORTED instead?\n> \n> The call was already in place at the end of threadRun and had just become \n> ineffective. I did not wish to revisit its place and change the overall \n> structure, it is just a bug fix. I agree that it could be done differently \n> with the added logAgg function which could be called directly. 
Attached \n> another version which does that.\n\n \t\t\t/* log aggregated but not yet reported transactions */\n \t\t\tdoLog(thread, state, &aggs, false, 0, 0);\n+\t\t\tlogAgg(thread->logfile, &aggs);\n\n\nI think we don't have to call doLog() before logAgg(). If we call doLog(),\nwe will count an extra transaction that is not actually processed because\naccumStats() is called in this.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Sun, 13 Jun 2021 03:07:51 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "On Thu, 10 Jun 2021 23:29:30 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> \n> Bonjour Michaël,\n> \n> Here is an updated patch. While having a look at Kyotaro-san patch, I \n> noticed that the aggregate stuff did not print the last aggregate. I think \n> that it is a side effect of switching the precision from per-second to \n> per-µs. I've done an attempt at also fixing that which seems to work.\n\nThis is just out of curiosity.\n\n+\t\twhile ((next = agg->start_time + agg_interval * INT64CONST(1000000)) <= now)\n\nI can find the similar code to convert \"seconds\" to \"us\" using casting like\n\n end_time = threads[0].create_time + (int64) 1000000 * duration;\n\nor\n \n next_report = last_report + (int64) 1000000 * progress;\n\nIs there a reason use INT64CONST instead of (int64)? 
Do these imply the same effect?\n\nSorry, if this is a dumb question...\n\nRegards,\nYugo Nagata\n \n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Sun, 13 Jun 2021 03:27:42 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "\n> +\t\twhile ((next = agg->start_time + agg_interval * INT64CONST(1000000)) <= now)\n>\n> I can find the similar code to convert \"seconds\" to \"us\" using casting like\n>\n> end_time = threads[0].create_time + (int64) 1000000 * duration;\n>\n> or\n>\n> next_report = last_report + (int64) 1000000 * progress;\n>\n> Is there a reason use INT64CONST instead of (int64)? Do these imply the same effect?\n\nI guess that the macros does 1000000LL or something similar to directly \ncreate an int64 constant. It is necessary if the constant would overflow a \nusual 32 bits C integer, whereas the cast is sufficient if there is no \noverflow. Maybe I c/should have used the previous approach.\n\n> Sorry, if this is a dumb question...\n\nNope.\n\n-- \nFabien.\n\n\n", "msg_date": "Sat, 12 Jun 2021 23:32:54 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": ">> + while ((next = agg->start_time + agg_interval * INT64CONST(1000000))\n>> <= now)\n>>\n>> I can find the similar code to convert \"seconds\" to \"us\" using casting\n>> like\n>>\n>> end_time = threads[0].create_time + (int64) 1000000 * duration;\n>>\n>> or\n>>\n>> next_report = last_report + (int64) 1000000 * progress;\n>>\n>> Is there a reason use INT64CONST instead of (int64)? Do these imply\n>> the same effect?\n> \n> I guess that the macros does 1000000LL or something similar to\n> directly create an int64 constant. It is necessary if the constant\n> would overflow a usual 32 bits C integer, whereas the cast is\n> sufficient if there is no overflow. 
Maybe I c/should have used the\n> previous approach.\n\nI think using INT64CONST to create a 64-bit constant is the standard\npractice in PostgreSQL.\n\ncommit 9d6b160d7db76809f0c696d9073f6955dd5a973a\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Fri Sep 1 15:14:18 2017 -0400\n\n Make [U]INT64CONST safe for use in #if conditions.\n \n Instead of using a cast to force the constant to be the right width,\n assume we can plaster on an L, UL, LL, or ULL suffix as appropriate.\n The old approach to this is very hoary, dating from before we were\n willing to require compilers to have working int64 types.\n \n This fix makes the PG_INT64_MIN, PG_INT64_MAX, and PG_UINT64_MAX\n constants safe to use in preprocessor conditions, where a cast\n doesn't work. Other symbolic constants that might be defined using\n [U]INT64CONST are likewise safer than before.\n \n Also fix the SIZE_MAX macro to be similarly safe, if we are forced\n to provide a definition for that. The test added in commit 2e70d6b5e\n happens to do what we want even with the hack \"(size_t) -1\" definition,\n but we could easily get burnt on other tests in future.\n \n Back-patch to all supported branches, like the previous commits.\n \n Discussion: https://postgr.es/m/15883.1504278595@sss.pgh.pa.us\n\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Mon, 14 Jun 2021 09:42:56 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "On Sun, Jun 13, 2021 at 03:07:51AM +0900, Yugo NAGATA wrote:\n> It will not work if the transaction is skipped, in which case latency is 0.0.\n> It would work if we check also \"skipped\" as bellow.\n> \n> +\t\tif (!logged && !skipped && latency == 0.0)\n> \n> However, it still might not work if the latency is so small so that we could\n> observe latency == 0.0. 
I observed this when I used a script that contained\n> only a meta command. This is not usual and would be a corner case, though.\n\nHmm.  I am not sure to completely follow the idea here.  It would be\ngood to make this code less confusing than it is now.\n\n> \t\t\t/* log aggregated but not yet reported transactions */\n> \t\t\tdoLog(thread, state, &aggs, false, 0, 0);\n> +\t\t\tlogAgg(thread->logfile, &aggs);\n> \n> I think we don't have to call doLog() before logAgg(). If we call doLog(),\n> we will count an extra transaction that is not actually processed because\n> accumStats() is called in this.\n\nYes, calling both is weird.  Is using logAgg() directly in the context\nactually right when it comes to sample_rate?  We may not log anything\non HEAD if sample_rate is enabled, but we would finish by logging\nsomething all the time with this patch.  If I am following this code\ncorrectly, we don't care about accumStats() in the code path of a\nthread we are done with, right?\n--\nMichael", "msg_date": "Tue, 15 Jun 2021 14:17:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "Hello Michaël,\n\n>> I think we don't have to call doLog() before logAgg(). If we call doLog(),\n>> we will count an extra transaction that is not actually processed because\n>> accumStats() is called in this.\n>\n> Yes, calling both is weird.\n\nThe motivation to call doLog is to catch up zeros on slow rates, so as to \navoid holes in the log, including at the end of the run. This \"trick\" was \nalready used by the code. I agree that it would record a non existant \ntransaction, which is not desirable. 
I wanted to avoid a special \nparameter, but this seems unrealistic.\n\n> Is using logAgg() directly in the context actually right when it comes \n> to sample_rate?\n\nThe point is just to trigger the last display, which is not triggered by \nthe previous I think because of the precision: the start of the run is\nnot exactly the start of the thread.\n\n> We may not log anything on HEAD if sample_rate is enabled, but we would \n> finish by logging something all the time with this patch.\n\nI do not get it.\n\n> If I am following this code correctly, we don't care about accumStats() \n> in the code path of a thread we are done with, right?\n\nYes.\n\nAttached a v3 which adds a boolean to distinguish recording vs flushing.\n\n-- \nFabien.", "msg_date": "Tue, 15 Jun 2021 10:05:29 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "On Tue, 15 Jun 2021 10:05:29 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> \n> Hello Michaël,\n> \n> >> I think we don't have to call doLog() before logAgg(). If we call doLog(),\n> >> we will count an extra transaction that is not actually processed because\n> >> accumStats() is called in this.\n> >\n> > Yes, calling both is weird.\n> \n> The motivation to call doLog is to catch up zeros on slow rates, so as to \n> avoid holes in the log, including at the end of the run. This \"trick\" was \n> already used by the code. I agree that it would record a non existant \n> transaction, which is not desirable. 
I wanted to avoid a special \n> parameter, but this seems unrealistic.\n> \n> > Is using logAgg() directly in the context actually right when it comes \n> > to sample_rate?\n> \n> The point is just to trigger the last display, which is not triggered by \n> the previous I think because of the precision: the start of the run is\n> not exactly the start of the thread.\n> \n> > We may not log anything on HEAD if sample_rate is enabled, but we would \n> > finish by logging something all the time with this patch.\n> \n> I do not get it.\n\nIt was not a problem because --sampling-rate --aggregate-interval cannot be\nused at the same time.\n \n> > If I am following this code correctly, we don't care about accumStats() \n> > in the code path of a thread we are done with, right?\n> \n> Yes.\n> \n> Attached a v3 which adds a boolean to distinguish recording vs flushing.\n\nSorry, but I can't find any patach attached...\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Tue, 15 Jun 2021 17:15:14 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "On Tue, Jun 15, 2021 at 05:15:14PM +0900, Yugo NAGATA wrote:\n> On Tue, 15 Jun 2021 10:05:29 +0200 (CEST) Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> It was not a problem because --sampling-rate --aggregate-interval cannot be\n> used at the same time.\n\nYep, you are right, thanks. 
I have missed that both options cannot be\nspecified at the same time.\n--\nMichael", "msg_date": "Tue, 15 Jun 2021 18:01:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "> Attached a v3 which adds a boolean to distinguish recording vs flushing.\n\nBetter with the attachement… sorry for the noise.\n\n-- \nFabien.", "msg_date": "Tue, 15 Jun 2021 11:38:00 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "On Tue, 15 Jun 2021 18:01:18 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Jun 15, 2021 at 05:15:14PM +0900, Yugo NAGATA wrote:\n> > On Tue, 15 Jun 2021 10:05:29 +0200 (CEST) Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > It was not a problem because --sampling-rate --aggregate-interval cannot be\n> > used at the same time.\n> \n> Yep, you are right, thanks. I have missed that both options cannot be\n> specified at the same time.\n\nMaybe, adding Assert(sample_rate == 0.0 || agg_interval == 0) or moving\nthe check of sample_rate into the else block could improve code readability?\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Tue, 15 Jun 2021 21:31:40 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "On Tue, 15 Jun 2021 11:38:00 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> \n> > Attached a v3 which adds a boolean to distinguish recording vs flushing.\n\nI am fine with this version, but I think it would be better if we have a comment\nexplaining what \"tx\" is for.\n\nAlso, how about adding Assert(tx) instead of using \"else if (tx)\" because\nwe are assuming that tx is always true when agg_interval is not used, right?\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Tue, 15 Jun 2021 21:53:06 +0900", 
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "On Tue, Jun 15, 2021 at 09:53:06PM +0900, Yugo NAGATA wrote:\n> I am fine with this version, but I think it would be better if we have a comment\n> explaining what \"tx\" is for.\n> \n> Also, how about adding Assert(tx) instead of using \"else if (tx)\" because\n> we are assuming that tx is always true when agg_interval is not used, right?\n\nAgreed on both points.  From what I get, this code could be clarified\nmuch more, and perhaps partially refactored to have less spaghetti\ncode between the point where we call it at the end of a thread or when\ngathering stats of a transaction mid-run, but that's not something to\ndo post-beta1.  I am not completely sure that the result would be\nworth it either.\n\nLet's document things and let's the readers know better the\nassumptions this area of the code relies on, for clarity.  The \ndependency between agg_interval and sample_rate is one of those\nthings, somebody needs now to look at the option parsing why only one\nis possible at the time.  Using an extra tx flag to track what to do\nafter the loop for the aggregate print to the log file is an\nimprovement in this direction.\n--\nMichael", "msg_date": "Wed, 16 Jun 2021 07:53:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "Michaël-san, Yugo-san,\n\n>> I am fine with this version, but I think it would be better if we have \n>> a comment explaining what \"tx\" is for.\n\nYes. Done.\n\n>> Also, how about adding Assert(tx) instead of using \"else if (tx)\" because\n>> we are assuming that tx is always true when agg_interval is not used, right?\n\nOk. Done.\n\n> Agreed on both points. 
From what I get, this code could be clarified\n> much more,\n\nI agree that the code is a little bit awkward.\n\n> and perhaps partially refactored to have less spaghetti\n> code between the point where we call it at the end of a thread or when\n> gathering stats of a transaction mid-run, but that's not something to\n> do post-beta1.\n\nYep.\n\n> I am not completely sure that the result would be worth it either.\n\nI'm not either.\n\n> Let's document things and let's the readers know better the\n> assumptions this area of the code relies on, for clarity.\n\nSure.\n\n> The dependency between agg_interval and sample_rate is one of those \n> things, somebody needs now to look at the option parsing why only one is \n> possible at the time.\n\nActually it would work if both are mixed: the code would aggregate a \nsample. However it does not look very useful to do that, so it is \narbitrary forbidden. Not sure whether this is that useful to prevent this \nuse case.\n\n> Using an extra tx flag to track what to do after the loop for the \n> aggregate print to the log file is an improvement in this direction.\n\nYep.\n\nAttached v4 improves comments and moves tx as an assert.\n\n-- \nFabien.", "msg_date": "Wed, 16 Jun 2021 08:58:17 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": "On Wed, Jun 16, 2021 at 08:58:17AM +0200, Fabien COELHO wrote:\n> Actually it would work if both are mixed: the code would aggregate a sample.\n> However it does not look very useful to do that, so it is arbitrary\n> forbidden. Not sure whether this is that useful to prevent this use case.\n\nOkay, noted.\n\n> Attached v4 improves comments and moves tx as an assert.\n\nThanks. I have not tested in details but that looks rather sane to me\nat quick glance. 
I'll look at that more tomorrow.\n\n> + * The function behaviors changes depending on sample_rate (a fraction of\n> + * transaction is reported) and agg_interval (transactions are aggregated\n> + * over the interval and reported once).\n\nThe first part of this sentence has an incorrect grammar.\n--\nMichael", "msg_date": "Wed, 16 Jun 2021 16:49:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" }, { "msg_contents": ">> + * The function behaviors changes depending on sample_rate (a fraction of\n>> + * transaction is reported) and agg_interval (transactions are aggregated\n>> + * over the interval and reported once).\n>\n> The first part of this sentence has an incorrect grammar.\n\nIndeed. v5 attempts to improve comments.\n\n-- \nFabien.", "msg_date": "Wed, 16 Jun 2021 09:59:39 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Error on pgbench logs" } ]
[ { "msg_contents": "Hi,\n\nIn the KnownAssignedTransactionIdes sub-module, two lines of unused code \nwere found in a previous change.\n\n--\nQuan Zongliang\nCPUG", "msg_date": "Tue, 8 Jun 2021 17:32:37 +0800", "msg_from": "Quan Zongliang <quanzongliang@yeah.net>", "msg_from_op": true, "msg_subject": "Remove unused code from the KnownAssignedTransactionIdes submodule" }, { "msg_contents": "On Tue, 2021-06-08 at 17:32 +0800, Quan Zongliang wrote:\r\n> Hi,\r\n> \r\n> In the KnownAssignedTransactionIdes sub-module, two lines of unused code \r\n> were found in a previous change.\r\n\r\nHuh. Looks like this code died as part of 2fc7af5e966?\r\n\r\nCC'ing Thomas just in case we're missing something, but I'll mark this\r\nReady for Committer. Thanks!\r\n\r\n--Jacob\r\n", "msg_date": "Thu, 1 Jul 2021 17:24:22 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Remove unused code from the KnownAssignedTransactionIdes\n submodule" }, { "msg_contents": "On Fri, Jul 2, 2021 at 5:24 AM Jacob Champion <pchampion@vmware.com> wrote:\n> On Tue, 2021-06-08 at 17:32 +0800, Quan Zongliang wrote:\n> > In the KnownAssignedTransactionIdes sub-module, two lines of unused code\n> > were found in a previous change.\n>\n> Huh. Looks like this code died as part of 2fc7af5e966?\n>\n> CC'ing Thomas just in case we're missing something, but I'll mark this\n> Ready for Committer. Thanks!\n\nThanks! Agreed. No change to generated code on my machine. Pushed.\n\n\n", "msg_date": "Fri, 2 Jul 2021 13:38:57 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove unused code from the KnownAssignedTransactionIdes\n submodule" } ]
[ { "msg_contents": "Hi,\n\nI noticed that the first function parameter in get_qual_from_partbound(**Relation rel**, Relation parent,\nis not used in the function.\n\nIs it better to remove it like the attached patch ?\n\nBest regards,\nhouzj", "msg_date": "Tue, 8 Jun 2021 09:50:28 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Unused function parameter in get_qual_from_partbound()" }, { "msg_contents": "On Tue, 8 Jun 2021 at 21:50, houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> I noticed that the first function parameter in get_qual_from_partbound(**Relation rel**, Relation parent,\n> is not used in the function.\n>\n> Is it better to remove it like the attached patch ?\n\nGoing by [1] it was used when it went in. It looks like it was for\nmapping attribute numbers between parent and partition rels.\n\nGoing by [2], it looks like it became unused due to the attribute\nmapping code being moved down into map_partition_varattnos().\n\nAs for whether we should remove it or not, because it's an external\nfunction that an extension might want to use, it would need to wait\nuntil at least we branch for PG15.\n\nLikely it's best to add the patch to the July commitfest so that we\ncan make a decision then.\n\nDavid\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/catalog/partition.c;h=6dab45f0edf8b1617d7239652fe36f113d30fd7a;hb=f0e44751d71\n[2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blobdiff;f=src/backend/catalog/partition.c;h=874e69d8d62e8e93164093e7352756ebfd0f69bc;hp=f54e1bdf3fb52cefed9b0d2fe7ab2a169231579d;hb=0563a3a8b5;hpb=0c2070cefa0e5d097b715c9a3b9b5499470019aa\n\n\n", "msg_date": "Tue, 8 Jun 2021 23:30:05 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unused function parameter in get_qual_from_partbound()" }, { "msg_contents": "On Tuesday, June 8, 2021 7:30 PM David Rowley 
<dgrowleyml@gmail.com>\r\n> On Tue, 8 Jun 2021 at 21:50, houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com>\r\n> wrote:\r\n> > I noticed that the first function parameter in\r\n> > get_qual_from_partbound(**Relation rel**, Relation parent, is not used in the\r\n> function.\r\n> >\r\n> > Is it better to remove it like the attached patch ?\r\n> \r\n> Going by [1] it was used when it went in. It looks like it was for mapping attribute\r\n> numbers between parent and partition rels.\r\n> \r\n> Going by [2], it looks like it became unused due to the attribute mapping code\r\n> being moved down into map_partition_varattnos().\r\n> \r\n> As for whether we should remove it or not, because it's an external function\r\n> that an extension might want to use, it would need to wait until at least we\r\n> branch for PG15.\r\n> \r\n> Likely it's best to add the patch to the July commitfest so that we can make a\r\n> decision then.\r\n\r\nOK, Thanks for the explanation.\r\nAdded to CF: https://commitfest.postgresql.org/33/3159/\r\n\r\nBest regards,\r\nhouzj\r\n", "msg_date": "Wed, 9 Jun 2021 00:28:48 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Unused function parameter in get_qual_from_partbound()" }, { "msg_contents": "On Wed, Jun 09, 2021 at 12:28:48AM +0000, houzj.fnst@fujitsu.com wrote:\n> OK, Thanks for the explanation.\n> Added to CF: https://commitfest.postgresql.org/33/3159/\n\nAt first glance, this looked to me like breaking something just for\nsake of breaking it, but removing the rel argument could be helpful\nto simplify any external code calling it as there would be no need for\nthis extra Relation. 
So that looks like a good idea, no need to rush\nthat into 14 though.\n--\nMichael", "msg_date": "Wed, 9 Jun 2021 11:50:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Unused function parameter in get_qual_from_partbound()" }, { "msg_contents": "Hello,\n\nGoogling around, I didn't find any extensions that would break from this\nchange. Even if there are any, this change will simplify the relevant\ncallsites. It also aligns the interface nicely with get_qual_for_hash,\nget_qual_for_list and get_qual_for_range.\n\nMarking this as ready for committer. It can be committed when the branch\nis cut for 15.\n\nRegards,\nSoumyadeep (VMware)\n\n\n", "msg_date": "Sat, 10 Jul 2021 15:44:23 -0700", "msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unused function parameter in get_qual_from_partbound()" }, { "msg_contents": "> Marking this as ready for committer. It can be committed when the branch\n> is cut for 15.\n\nI see that REL_14_STABLE is already cut. So this can go in now.\n\n\n", "msg_date": "Sat, 10 Jul 2021 15:57:43 -0700", "msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unused function parameter in get_qual_from_partbound()" }, { "msg_contents": "On Tue, Jun 8, 2021 at 10:50 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> At first glance, this looked to me like breaking something just for\n> sake of breaking it, but removing the rel argument could be helpful\n> to simplify any external code calling it as there would be no need for\n> this extra Relation. So that looks like a good idea, no need to rush\n> that into 14 though.\n\nI found no external references in codesearch.debian.net. 
I plan to commit\nthis in the next couple of days unless there are objections.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 12 Jul 2021 08:46:05 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Unused function parameter in get_qual_from_partbound()" }, { "msg_contents": "On Mon, Jul 12, 2021 at 8:46 AM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n>\n> On Tue, Jun 8, 2021 at 10:50 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n> >\n> > At first glance, this looked to me like breaking something just for\n> > sake of breaking it, but removing the rel argument could be helpful\n> > to simplify any external code calling it as there would be no need for\n> > this extra Relation.  So that looks like a good idea, no need to rush\n> > that into 14 though.\n>\n> I found no external references in codesearch.debian.net. 
I plan to commit\nthis in the next couple of days unless there are objections.\n\nThis has been committed.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 14 Jul 2021 10:06:02 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Unused function parameter in get_qual_from_partbound()" } ]
[ { "msg_contents": "It could be useful to use bool in exclusion constraints, but it's\ncurrently not nicely supported.  The attached patch adds support for\nbool to the btree_gist extension, so we can do this.\n\nI am adding this to the commitfest 2021-07.", "msg_date": "Tue, 8 Jun 2021 13:48:10 +0300", "msg_from": "Emre Hasegeli <emre@hasegeli.com>", "msg_from_op": true, "msg_subject": "GiST operator class for bool" }, { "msg_contents": "Hi!\n\n> On 8 Jun 2021, at 13:48, Emre Hasegeli <emre@hasegeli.com> wrote:\n> \n> It could be useful to use bool in exclusion constraints, but it's\n> currently not nicely supported. The attached patch adds support for\n> bool to the btree_gist extension, so we can do this.\n> \n> I am adding this to the commitfest 2021-07.\n> <0001-btree_gist-Support-bool.patch>\n\nIt definitely makes sense to include bool into list of supported types.\nBut patch that you propose does not support sorting build added in PG14.\nOr we can add this functionality later in https://commitfest.postgresql.org/31/2824/ ...\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 8 Jun 2021 14:07:32 +0300", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: GiST operator class for bool" }, { "msg_contents": "> But patch that you propose does not support sorting build added in PG14.\n\nIt looks like the change to btree_gist is not committed yet.  I'll\nrebase my patch once it's committed.\n\nIt was a long thread. I couldn't read all of it. 
Though, the last\npatches felt to me like a part of what's already been committed.\nShouldn't they also be committed to version 14?\n\n\n", "msg_date": "Tue, 8 Jun 2021 17:53:02 +0300", "msg_from": "Emre Hasegeli <emre@hasegeli.com>", "msg_from_op": true, "msg_subject": "Re: GiST operator class for bool" }, { "msg_contents": "\n\n> On 8 Jun 2021, at 19:53, Emre Hasegeli <emre@hasegeli.com> wrote:\n> \n>> But patch that you propose does not support sorting build added in PG14.\n> \n> It looks like the change to btree_gist is not committed yet.  I'll\n> rebase my patch once it's committed.\nChanges to GiST are committed. There will be no need to rebase anyway :)\n\n> \n> It was a long thread. I couldn't read all of it.  Though, the last\n> patches felt to me like a part of what's already been committed.\n> Shouldn't they also be committed to version 14?\n\nWell, yeah, it would be cool to have gist build and gist_btree support it together, but there were many parts and we could not finish it before feature freeze.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 14 Jun 2021 14:33:19 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: GiST operator class for bool" }, { "msg_contents": "Hi,\n\nI looked at this patch today - it's pretty simple and in pretty good\nshape, I can't think of anything that'd need fixing. Perhaps the test\nmight also do EXPLAIN like for other types, to verify the new index is\nactually used. 
But that's minor enough to handle during commit.\n\n\nI've marked this as RFC and will get it committed in a day or two.\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 3 Nov 2021 16:18:42 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: GiST operator class for bool" }, { "msg_contents": "On 11/3/21 16:18, Tomas Vondra wrote:\n> Hi,\n> \n> I looked at this patch today - it's pretty simple and in pretty good\n> shape, I can't think of anything that'd need fixing. Perhaps the test\n> might also do EXPLAIN like for other types, to verify the new index is\n> actually used. But that's minor enough to handle during commit.\n> \n> \n> I've marked this as RFC and will get it committed in a day or two.\n> \n\nPushed, after adding some simple EXPLAIN to the regression test.\n\nThanks for the patch!\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 6 Nov 2021 17:09:05 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: GiST operator class for bool" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> Pushed, after adding some simple EXPLAIN to the regression test.\n\nskink is reporting that this has some valgrind issues [1].\nI suspect sloppy conversion between bool and Datum, but\ndidn't go looking.\n\n==1805451== VALGRINDERROR-BEGIN\n==1805451== Uninitialised byte(s) found during client check request\n==1805451== at 0x59EFEA: PageAddItemExtended (bufpage.c:346)\n==1805451== by 0x2100B9: gistfillbuffer (gistutil.c:46)\n==1805451== by 0x2050F9: gistplacetopage (gist.c:562)\n==1805451== by 0x20546B: gistinserttuples (gist.c:1277)\n==1805451== by 0x205BB5: gistinserttuple (gist.c:1230)\n==1805451== by 0x206067: gistdoinsert (gist.c:885)\n==1805451== by 0x2084FB: 
gistBuildCallback (gistbuild.c:829)\n==1805451== by 0x23B572: heapam_index_build_range_scan (heapam_handler.c:1694)\n==1805451== by 0x208E7D: table_index_build_scan (tableam.h:1756)\n==1805451== by 0x208E7D: gistbuild (gistbuild.c:309)\n==1805451== by 0x2D10C8: index_build (index.c:2983)\n==1805451== by 0x2D2A7D: index_create (index.c:1232)\n==1805451== by 0x383E67: DefineIndex (indexcmds.c:1128)\n==1805451== Address 0x10cab1e4 is 12 bytes inside a block of size 16 client-defined\n==1805451== at 0x712AC5: palloc0 (mcxt.c:1118)\n==1805451== by 0x1E0A07: index_form_tuple (indextuple.c:146)\n==1805451== by 0x210BA8: gistFormTuple (gistutil.c:582)\n==1805451== by 0x2084C2: gistBuildCallback (gistbuild.c:813)\n==1805451== by 0x23B572: heapam_index_build_range_scan (heapam_handler.c:1694)\n==1805451== by 0x208E7D: table_index_build_scan (tableam.h:1756)\n==1805451== by 0x208E7D: gistbuild (gistbuild.c:309)\n==1805451== by 0x2D10C8: index_build (index.c:2983)\n==1805451== by 0x2D2A7D: index_create (index.c:1232)\n==1805451== by 0x383E67: DefineIndex (indexcmds.c:1128)\n==1805451== by 0x5AED2E: ProcessUtilitySlow (utility.c:1535)\n==1805451== by 0x5AE262: standard_ProcessUtility (utility.c:1066)\n==1805451== by 0x5AE33A: ProcessUtility (utility.c:527)\n==1805451== \n==1805451== VALGRINDERROR-END\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2021-11-06%2023%3A56%3A57\n\n\n", "msg_date": "Sun, 07 Nov 2021 11:44:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GiST operator class for bool" }, { "msg_contents": "Hi,\n\nOn 11/7/21 17:44, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> Pushed, after adding some simple EXPLAIN to the regression test.\n> \n> skink is reporting that this has some valgrind issues [1].\n> I suspect sloppy conversion between bool and Datum, but\n> didn't go looking.\n> \n\nIt's actually a bit worse than that :-( The opclass 
is somewhat confused \nabout the type it should use for storage. The gbtree_ninfo struct says \nit's using gbtreekey4, the SQL script claims the params are gbtreekey8, \nand it should actually use gbtreekey2. Sorry for not noticing that.\n\nThe attached patch fixes the valgrind error for me.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 7 Nov 2021 20:53:21 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: GiST operator class for bool" }, { "msg_contents": "On 11/7/21 20:53, Tomas Vondra wrote:\n> Hi,\n> \n> On 11/7/21 17:44, Tom Lane wrote:\n>> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>>> Pushed, after adding some simple EXPLAIN to the regression test.\n>>\n>> skink is reporting that this has some valgrind issues [1].\n>> I suspect sloppy conversion between bool and Datum, but\n>> didn't go looking.\n>>\n> \n> It's actually a bit worse than that :-( The opclass is somewhat confused \n> about the type it should use for storage. The gbtree_ninfo struct says \n> it's using gbtreekey4, the SQL script claims the params are gbtreekey8, \n> and it should actually use gbtreekey2. Sorry for not noticing that.\n> \n> The attached patch fixes the valgrind error for me.\n> \n\nI've pushed the fix, hopefully that'll make skink happy.\n\nWhat surprised me a bit is that the opclass used gbtreekey4 storage, the \nequality support proc was defined as using gbtreekey8\n\n FUNCTION 7 gbt_bool_same (gbtreekey8, gbtreekey8, internal),\n\nyet the gistvalidate() did not report this. Turns out this is because\n\n ok = check_amproc_signature(procform->amproc, INTERNALOID, false,\n 3, 3, opckeytype, opckeytype,\n INTERNALOID);\n\ni.e. with exact=false, so these type differences are ignored. 
Changing \nit to true reports the issue (and no other issues in check-world).\n\nBut maybe there are reasons to keep using false?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 8 Nov 2021 02:24:22 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: GiST operator class for bool" }, { "msg_contents": "Hello,\n\nI don't see any changes in the documentation.[1]\n\nShould bool appear in the looong list of supported operator classes?\n\n[1] https://www.postgresql.org/docs/devel/btree-gist.html\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Tue, 7 Dec 2021 00:35:58 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: GiST operator class for bool" }, { "msg_contents": "On 12/6/21 22:35, Pavel Luzanov wrote:\n> Hello,\n> \n> I don't see any changes in the documentation.[1]\n> \n> Should bool appear in the looong list of supported operator classes?\n> \n\nYou're right, I forgot to update the list of data types in the docs. \nFixed, thanks for the report.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 11 Dec 2021 05:04:00 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: GiST operator class for bool" } ]
[ { "msg_contents": "Hi all,\r\n\r\nWe've been working on ways to expand the list of third-party auth\r\nmethods that Postgres provides. Some example use cases might be \"I want\r\nto let anyone with a Google account read this table\" or \"let anyone who\r\nbelongs to this GitHub organization connect as a superuser\".\r\n\r\nAttached is a proof of concept that implements pieces of OAuth 2.0\r\nfederated authorization, via the OAUTHBEARER SASL mechanism from RFC\r\n7628 [1]. Currently, only Linux is supported due to some ugly hacks in\r\nthe backend.\r\n\r\nThe architecture can support the following use cases, as long as your\r\nOAuth issuer of choice implements the necessary specs, and you know how\r\nto write a validator for your issuer's bearer tokens:\r\n\r\n- Authentication only, where an external validator uses the bearer\r\ntoken to determine the end user's identity, and Postgres decides\r\nwhether that user ID is authorized to connect via the standard pg_ident\r\nuser mapping.\r\n\r\n- Authorization only, where the validator uses the bearer token to\r\ndetermine the allowed roles for the end user, and then checks to make\r\nsure that the connection's role is one of those. 
This bypasses pg_ident\r\nand allows pseudonymous connections, where Postgres doesn't care who\r\nyou are as long as the token proves you're allowed to assume the role\r\nyou want.\r\n\r\n- A combination, where the validator provides both an authn_id (for\r\nlater audits of database access) and an authorization decision based on\r\nthe bearer token and role provided.\r\n\r\nIt looks kinda like this during use:\r\n\r\n $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'\r\n Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG\r\n\r\n= Quickstart =\r\n\r\nFor anyone who likes building and seeing green tests ASAP.\r\n\r\nPrerequisite software:\r\n- iddawc v0.9.9 [2], library and dev headers, for client support\r\n- Python 3, for the test suite only\r\n\r\n(Some newer distributions have dev packages for iddawc, but mine did\r\nnot.)\r\n\r\nConfigure using --with-oauth (and, if you've installed iddawc into a\r\nnon-standard location, be sure to use --with-includes and --with-\r\nlibraries. Make sure either rpath or LD_LIBRARY_PATH will get you what\r\nyou need). Install as usual.\r\n\r\nTo run the test suite, make sure the contrib/authn_id extension is\r\ninstalled, then init and start your dev cluster. No other configuration\r\nis required; the test will do it for you. Switch to the src/test/python\r\ndirectory, point your PG* envvars to a superuser connection on the\r\ncluster (so that a \"bare\" psql will connect automatically), and run\r\n`make installcheck`.\r\n\r\n= Production Setup =\r\n\r\n(but don't use this in production, please)\r\n\r\nActually setting up a \"real\" system requires knowing the specifics of\r\nyour third-party issuer of choice. Your issuer MUST implement OpenID\r\nDiscovery and the OAuth Device Authorization flow! Seriously, check\r\nthis before spending a lot of time writing a validator against an\r\nissuer that can't actually talk to libpq.\r\n\r\nThe broad strokes are as follows:\r\n\r\n1. 
Register a new public client with your issuer to get an OAuth client\r\nID for libpq. You'll use this as the oauth_client_id in the connection\r\nstring. (If your issuer doesn't support public clients and gives you a\r\nclient secret, you can use the oauth_client_secret connection parameter\r\nto provide that too.)\r\n\r\nThe client you register must be able to use a device authorization\r\nflow; some issuers require additional setup for that.\r\n\r\n2. Set up your HBA with the 'oauth' auth method, and set the 'issuer'\r\nand 'scope' options. 'issuer' is the base URL identifying your third-\r\nparty issuer (for example, https://accounts.google.com), and 'scope' is\r\nthe set of OAuth scopes that the client and server will need to\r\nauthenticate and/or authorize the user (e.g. \"openid email\").\r\n\r\nSo a sample HBA line might look like\r\n\r\n host all all samehost oauth issuer=\"https://accounts.google.com\" scope=\"openid email\"\r\n\r\n3. In postgresql.conf, set up an oauth_validator_command that's capable\r\nof verifying bearer tokens and implements the validator protocol. This\r\nis the hardest part. See below.\r\n\r\n= Design =\r\n\r\nOn the client side, I've implemented the Device Authorization flow (RFC\r\n8628, [3]). What this means in practice is that libpq reaches out to a\r\nthird-party issuer (e.g. Google, Azure, etc.), identifies itself with a\r\nclient ID, and requests permission to act on behalf of the end user.\r\nThe issuer responds with a login URL and a one-time code, which libpq\r\npresents to the user using the notice hook. The end user then navigates\r\nto that URL, presents their code, authenticates to the issuer, and\r\ngrants permission for libpq to retrieve a bearer token. libpq grabs a\r\ntoken and sends it to the server for verification.\r\n\r\n(The bearer token, in this setup, is essentially a plaintext password,\r\nand you must secure it like you would a plaintext password. 
The token\r\nhas an expiration date and can be explicitly revoked, which makes it\r\nslightly better than a password, but this is still a step backwards\r\nfrom something like SCRAM with channel binding. There are ways to bind\r\na bearer token to a client certificate [4], which would mitigate the\r\nrisk of token theft -- but your issuer has to support that, and I\r\nhaven't found much support in the wild.)\r\n\r\nThe server side is where things get more difficult for the DBA. The\r\nOAUTHBEARER spec has this to say about the server side implementation:\r\n\r\n The server validates the response according to the specification for\r\n the OAuth Access Token Types used.\r\n\r\nAnd here's what the Bearer Token specification [5] says:\r\n\r\n This document does not specify the encoding or the contents of the\r\n token; hence, detailed recommendations about the means of\r\n guaranteeing token integrity protection are outside the scope of\r\n this document.\r\n\r\nIt's the Wild West. Every issuer does their own thing in their own\r\nspecial way. Some don't really give you a way to introspect information\r\nabout a bearer token at all, because they assume that the issuer of the\r\ntoken and the consumer of the token are essentially the same service.\r\nSome major players provide their own custom libraries, implemented in\r\nyour-language-of-choice, to deal with their particular brand of magic.\r\n\r\nSo I punted and added the oauth_validator_command GUC. A token\r\nvalidator command reads the bearer token from a file descriptor that's\r\npassed to it, then does whatever magic is necessary to validate that\r\ntoken and find out who owns it. Optionally, it can look at the role\r\nthat's being connected and make sure that the token authorizes the user\r\nto actually use that role. 
Then it says yea or nay to Postgres, and\r\noptionally tells the server who the user is so that their ID can be\r\nlogged and mapped through pg_ident.\r\n\r\n(See the commit message in 0005 for a full description of the protocol.\r\nThe test suite also has two toy implementations that illustrate the\r\nprotocol, but they provide zero security.)\r\n\r\nThis is easily the worst part of the patch, not only because my\r\nimplementation is a bad hack on OpenPipeStream(), but because it\r\nbalances the security of the entire system on the shoulders of a DBA\r\nwho does not have time to read umpteen OAuth specifications cover to\r\ncover. More thought and coding effort is needed here, but I didn't want\r\nto gold-plate a bad design. I'm not sure what alternatives there are\r\nwithin the rules laid out by OAUTHBEARER. And the system is _extremely_\r\nflexible, in the way that only code that's maintained by somebody else\r\ncan be.\r\n\r\n= Patchset Roadmap =\r\n\r\nThe seven patches can be grouped into three:\r\n\r\n1. Prep\r\n\r\n 0001 decouples the SASL code from the SCRAM implementation.\r\n 0002 makes it possible to use common/jsonapi from the frontend.\r\n 0003 lets the json_errdetail() result be freed, to avoid leaks.\r\n\r\n2. OAUTHBEARER Implementation\r\n\r\n 0004 implements the client with libiddawc.\r\n 0005 implements server HBA support and oauth_validator_command.\r\n\r\n3. Testing\r\n\r\n 0006 adds a simple test extension to retrieve the authn_id.\r\n 0007 adds the Python test suite I've been developing against.\r\n\r\nThe first three patches are, hopefully, generally useful outside of\r\nthis implementation, and I'll plan to register them in the next\r\ncommitfest. 
The middle two patches are the \"interesting\" pieces, and\r\nI've split them into client and server for ease of understanding,\r\nthough neither is particularly useful without the other.\r\n\r\nThe last two patches grew out of a test suite that I originally built\r\nto be able to exercise NSS corner cases at the protocol/byte level. It\r\nwas incredibly helpful during implementation of this new SASL\r\nmechanism, since I could write the client and server independently of\r\neach other and get high coverage of broken/malicious implementations.\r\nIt's based on pytest and Construct, and the Python 3 requirement might\r\nturn some away, but I wanted to include it in case anyone else wanted\r\nto hack on the code. src/test/python/README explains more.\r\n\r\n= Thoughts/Reflections =\r\n\r\n...in no particular order.\r\n\r\nI picked OAuth 2.0 as my first experiment in federated auth mostly\r\nbecause I was already familiar with pieces of it. I think SAML (via the\r\nSAML20 mechanism, RFC 6595) would be a good companion to this proof of\r\nconcept, if there is general interest in federated deployments.\r\n\r\nI don't really like the OAUTHBEARER spec, but I'm not sure there's a\r\nbetter alternative. Everything is left as an exercise for the reader.\r\nIt's not particularly extensible. Standard OAuth is built for\r\nauthorization, not authentication, and from reading the RFC's history,\r\nit feels like it was a hack to just get something working. New\r\nstandards like OpenID Connect have begun to fill in the gaps, but the\r\nSASL mechanisms have not kept up. (The OPENID20 mechanism is, to my\r\nunderstanding, unrelated/obsolete.) And support for helpful OIDC\r\nfeatures seems to be spotty in the real world.\r\n\r\nThe iddawc dependency for client-side OAuth was extremely helpful to\r\ndevelop this proof of concept quickly, but I don't think it would be an\r\nappropriate component to build a real feature on. 
It's extremely\r\nheavyweight -- it incorporates a huge stack of dependencies, including\r\na logging framework and a web server, to implement features we would\r\nprobably never use -- and it's fairly difficult to debug in practice.\r\nIf a device authorization flow were the only thing that libpq needed to\r\nsupport natively, I think we should just depend on a widely used HTTP\r\nclient, like libcurl or neon, and implement the minimum spec directly\r\nagainst the existing test suite.\r\n\r\nThere are a huge number of other authorization flows besides Device\r\nAuthorization; most would involve libpq automatically opening a web\r\nbrowser for you. I felt like that wasn't an appropriate thing for a\r\nlibrary to do by default, especially when one of the most important\r\nclients is a command-line application. Perhaps there could be a hook\r\nfor applications to be able to override the builtin flow and substitute\r\ntheir own.\r\n\r\nSince bearer tokens are essentially plaintext passwords, the relevant\r\nspecs require the use of transport-level protection, and I think it'd\r\nbe wise for the client to require TLS to be in place before performing\r\nthe initial handshake or sending a token.\r\n\r\nNot every OAuth issuer is also an OpenID Discovery provider, so it's\r\nfrustrating that OAUTHBEARER (which is purportedly an OAuth 2.0\r\nfeature) requires OIDD for real-world implementations. Perhaps we could\r\nhack around this with a data: URI or something.\r\n\r\nThe client currently performs the OAuth login dance every single time a\r\nconnection is made, but a proper OAuth client would cache its tokens to\r\nreuse later, and keep an eye on their expiration times. 
This would make\r\ndaily use a little more like that of Kerberos, but we would have to\r\ndesign a way to create and secure a token cache on disk.\r\n\r\nIf you've read this far, thank you for your interest, and I hope you\r\nenjoy playing with it!\r\n\r\n--Jacob\r\n\r\n[1] https://datatracker.ietf.org/doc/html/rfc7628\r\n[2] https://github.com/babelouest/iddawc\r\n[3] https://datatracker.ietf.org/doc/html/rfc8628\r\n[4] https://datatracker.ietf.org/doc/html/rfc8705\r\n[5] https://datatracker.ietf.org/doc/html/rfc6750#section-5.2", "msg_date": "Tue, 8 Jun 2021 16:37:46 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "[PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Tue, Jun 08, 2021 at 04:37:46PM +0000, Jacob Champion wrote:\n> 1. Prep\n> \n> 0001 decouples the SASL code from the SCRAM implementation.\n> 0002 makes it possible to use common/jsonapi from the frontend.\n> 0003 lets the json_errdetail() result be freed, to avoid leaks.\n>\n> The first three patches are, hopefully, generally useful outside of\n> this implementation, and I'll plan to register them in the next\n> commitfest. The middle two patches are the \"interesting\" pieces, and\n> I've split them into client and server for ease of understanding,\n> though neither is particularly useful without the other.\n\nBeginning with the beginning, could you spawn two threads for the\njsonapi rework and the SASL/SCRAM business? I agree that these look\nindependently useful. Glad to see someone improving the code with\nSASL and SCRAM which are too inter-dependent now. I saw in the RFCs\ndedicated to OAUTH the need for the JSON part as well.\n\n+# define check_stack_depth()\n+# ifdef JSONAPI_NO_LOG\n+# define json_log_and_abort(...) \\\n+ do { fprintf(stderr, __VA_ARGS__); exit(1); } while(0)\n+# else\nIn patch 0002, this is the wrong approach. 
libpq will not be able to\nfeed on such reports, and you cannot use any of the APIs from the\npalloc() family either as these just fail on OOM. libpq should be\nable to know about the error, and would fill in the error back to the\napplication. This abstraction is not necessary on HEAD as\npg_verifybackup is fine with this level of reporting. My rough guess\nis that we will need to split the existing jsonapi.c into two files,\none that can be used in shared libraries and a second that handles the \nerrors.\n\n+ /* TODO: SASL_EXCHANGE_FAILURE with output is forbidden in SASL */\n if (result == SASL_EXCHANGE_SUCCESS)\n sendAuthRequest(port,\n AUTH_REQ_SASL_FIN,\n output,\n outputlen);\nPerhaps that's an issue we need to worry about on its own? I didn't recall\nthis part...\n--\nMichael", "msg_date": "Fri, 18 Jun 2021 13:07:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On 08/06/2021 19:37, Jacob Champion wrote:\n> We've been working on ways to expand the list of third-party auth\n> methods that Postgres provides. Some example use cases might be \"I want\n> to let anyone with a Google account read this table\" or \"let anyone who\n> belongs to this GitHub organization connect as a superuser\".\n\nCool!\n\n> The iddawc dependency for client-side OAuth was extremely helpful to\n> develop this proof of concept quickly, but I don't think it would be an\n> appropriate component to build a real feature on. 
It's extremely\n> heavyweight -- it incorporates a huge stack of dependencies, including\n> a logging framework and a web server, to implement features we would\n> probably never use -- and it's fairly difficult to debug in practice.\n> If a device authorization flow were the only thing that libpq needed to\n> support natively, I think we should just depend on a widely used HTTP\n> client, like libcurl or neon, and implement the minimum spec directly\n> against the existing test suite.\n\nYou could punt and let the application implement that stuff. I'm \nimagining that the application code would look something like this:\n\nconn = PQconnectStartParams(...);\nfor (;;)\n{\n status = PQconnectPoll(conn)\n switch (status)\n {\n case CONNECTION_SASL_TOKEN_REQUIRED:\n /* open a browser for the user, get token */\n token = open_browser()\n PQauthResponse(token);\n break;\n ...\n }\n}\n\nIt would be nice to have a simple default implementation, though, for \npsql and all the other client applications that come with PostgreSQL itself.\n\n> If you've read this far, thank you for your interest, and I hope you\n> enjoy playing with it!\n\nA few small things caught my eye in the backend oauth_exchange function:\n\n> + /* Handle the client's initial message. */\n> + p = strdup(input);\n\nthis strdup() should be pstrdup().\n\nIn the same function, there are a bunch of reports like this:\n\n> ereport(ERROR,\n> + (errcode(ERRCODE_PROTOCOL_VIOLATION),\n> + errmsg(\"malformed OAUTHBEARER message\"),\n> + errdetail(\"Comma expected, but found character \\\"%s\\\".\",\n> + sanitize_char(*p))));\n\nI don't think the double quotes are needed here, because sanitize_char \nwill return quotes if it's a single character. So it would end up \nlooking like this: ... 
found character \"'x'\".\n\n- Heikki\n\n\n", "msg_date": "Fri, 18 Jun 2021 11:31:00 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Fri, 2021-06-18 at 11:31 +0300, Heikki Linnakangas wrote:\r\n> On 08/06/2021 19:37, Jacob Champion wrote:\r\n> > We've been working on ways to expand the list of third-party auth\r\n> > methods that Postgres provides. Some example use cases might be \"I want\r\n> > to let anyone with a Google account read this table\" or \"let anyone who\r\n> > belongs to this GitHub organization connect as a superuser\".\r\n> \r\n> Cool!\r\n\r\nGlad you think so! :D\r\n\r\n> > The iddawc dependency for client-side OAuth was extremely helpful to\r\n> > develop this proof of concept quickly, but I don't think it would be an\r\n> > appropriate component to build a real feature on. It's extremely\r\n> > heavyweight -- it incorporates a huge stack of dependencies, including\r\n> > a logging framework and a web server, to implement features we would\r\n> > probably never use -- and it's fairly difficult to debug in practice.\r\n> > If a device authorization flow were the only thing that libpq needed to\r\n> > support natively, I think we should just depend on a widely used HTTP\r\n> > client, like libcurl or neon, and implement the minimum spec directly\r\n> > against the existing test suite.\r\n> \r\n> You could punt and let the application implement that stuff. 
I'm \r\n> imagining that the application code would look something like this:\r\n> \r\n> conn = PQconnectStartParams(...);\r\n> for (;;)\r\n> {\r\n> status = PQconnectPoll(conn)\r\n> switch (status)\r\n> {\r\n> case CONNECTION_SASL_TOKEN_REQUIRED:\r\n> /* open a browser for the user, get token */\r\n> token = open_browser()\r\n> PQauthResponse(token);\r\n> break;\r\n> ...\r\n> }\r\n> }\r\n\r\nI was toying with the idea of having a callback for libpq clients,\r\nwhere they could take full control of the OAuth flow if they wanted to.\r\nDoing it inline with PQconnectPoll seems like it would work too. It has\r\na couple of drawbacks that I can see:\r\n\r\n- If a client isn't currently using a poll loop, they'd have to switch\r\nto one to be able to use OAuth connections. Not a difficult change, but\r\nconsidering all the other hurdles to making this work, I'm hoping to\r\nminimize the hoop-jumping.\r\n\r\n- A client would still have to receive a bunch of OAuth parameters from\r\nsome new libpq API in order to construct the correct URL to visit, so\r\nthe overall complexity for implementers might be higher than if we just\r\npassed those params directly in a callback.\r\n\r\n> It would be nice to have a simple default implementation, though, for \r\n> psql and all the other client applications that come with PostgreSQL itself.\r\n\r\nI agree. I think having a bare-bones implementation in libpq itself\r\nwould make initial adoption *much* easier, and then if specific\r\napplications wanted to have richer control over an authorization flow,\r\nthen they could implement that themselves with the aforementioned\r\ncallback.\r\n\r\nThe Device Authorization flow was the most minimal working\r\nimplementation I could find, since it doesn't require a web browser on\r\nthe system, just the ability to print a prompt to the console. 
But if\r\nanyone knows of a better flow for this use case, I'm all ears.\r\n\r\n> > If you've read this far, thank you for your interest, and I hope you\r\n> > enjoy playing with it!\r\n> \r\n> A few small things caught my eye in the backend oauth_exchange function:\r\n> \r\n> > + /* Handle the client's initial message. */\r\n> > + p = strdup(input);\r\n> \r\n> this strdup() should be pstrdup().\r\n\r\nThanks, I'll fix that in the next re-roll.\r\n\r\n> In the same function, there are a bunch of reports like this:\r\n> \r\n> > ereport(ERROR,\r\n> > + (errcode(ERRCODE_PROTOCOL_VIOLATION),\r\n> > + errmsg(\"malformed OAUTHBEARER message\"),\r\n> > + errdetail(\"Comma expected, but found character \\\"%s\\\".\",\r\n> > + sanitize_char(*p))));\r\n> \r\n> I don't think the double quotes are needed here, because sanitize_char \r\n> will return quotes if it's a single character. So it would end up \r\n> looking like this: ... found character \"'x'\".\r\n\r\nI'll fix this too. Thanks!\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 22 Jun 2021 23:22:31 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Fri, 2021-06-18 at 13:07 +0900, Michael Paquier wrote:\r\n> On Tue, Jun 08, 2021 at 04:37:46PM +0000, Jacob Champion wrote:\r\n> > 1. Prep\r\n> > \r\n> > 0001 decouples the SASL code from the SCRAM implementation.\r\n> > 0002 makes it possible to use common/jsonapi from the frontend.\r\n> > 0003 lets the json_errdetail() result be freed, to avoid leaks.\r\n> > \r\n> > The first three patches are, hopefully, generally useful outside of\r\n> > this implementation, and I'll plan to register them in the next\r\n> > commitfest. 
The middle two patches are the \"interesting\" pieces, and\r\n> > I've split them into client and server for ease of understanding,\r\n> > though neither is particularly useful without the other.\r\n> \r\n> Beginning with the beginning, could you spawn two threads for the\r\n> jsonapi rework and the SASL/SCRAM business?\r\n\r\nDone [1, 2]. I've copied your comments into those threads with my\r\nresponses, and I'll have them registered in commitfest shortly.\r\n\r\nThanks!\r\n--Jacob\r\n\r\n[1] https://www.postgresql.org/message-id/3d2a6f5d50e741117d6baf83eb67ebf1a8a35a11.camel%40vmware.com\r\n[2] https://www.postgresql.org/message-id/a250d475ba1c0cc0efb7dfec8e538fcc77cdcb8e.camel%40vmware.com\r\n", "msg_date": "Tue, 22 Jun 2021 23:26:03 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Tue, Jun 22, 2021 at 11:26:03PM +0000, Jacob Champion wrote:\n> Done [1, 2]. I've copied your comments into those threads with my\n> responses, and I'll have them registered in commitfest shortly.\n\nThanks!\n--\nMichael", "msg_date": "Wed, 23 Jun 2021 15:10:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Tue, 2021-06-22 at 23:22 +0000, Jacob Champion wrote:\r\n> On Fri, 2021-06-18 at 11:31 +0300, Heikki Linnakangas wrote:\r\n> > \r\n> > A few small things caught my eye in the backend oauth_exchange function:\r\n> > \r\n> > > + /* Handle the client's initial message. 
*/\r\n> > > + p = strdup(input);\r\n> > \r\n> > this strdup() should be pstrdup().\r\n> \r\n> Thanks, I'll fix that in the next re-roll.\r\n> \r\n> > In the same function, there are a bunch of reports like this:\r\n> > \r\n> > > ereport(ERROR,\r\n> > > + (errcode(ERRCODE_PROTOCOL_VIOLATION),\r\n> > > + errmsg(\"malformed OAUTHBEARER message\"),\r\n> > > + errdetail(\"Comma expected, but found character \\\"%s\\\".\",\r\n> > > + sanitize_char(*p))));\r\n> > \r\n> > I don't think the double quotes are needed here, because sanitize_char \r\n> > will return quotes if it's a single character. So it would end up \r\n> > looking like this: ... found character \"'x'\".\r\n> \r\n> I'll fix this too. Thanks!\r\n\r\nv2, attached, incorporates Heikki's suggested fixes and also rebases on\r\ntop of latest HEAD, which had the SASL refactoring changes committed\r\nlast month.\r\n\r\nThe biggest change from the last patchset is 0001, an attempt at\r\nenabling jsonapi in the frontend without the use of palloc(), based on\r\nsuggestions by Michael and Tom from last commitfest. I've also made\r\nsome improvements to the pytest suite. No major changes to the OAuth\r\nimplementation yet.\r\n\r\n--Jacob", "msg_date": "Wed, 25 Aug 2021 18:41:39 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Wed, Aug 25, 2021 at 11:42 AM Jacob Champion <pchampion@vmware.com>\nwrote:\n\n> On Tue, 2021-06-22 at 23:22 +0000, Jacob Champion wrote:\n> > On Fri, 2021-06-18 at 11:31 +0300, Heikki Linnakangas wrote:\n> > >\n> > > A few small things caught my eye in the backend oauth_exchange\n> function:\n> > >\n> > > > + /* Handle the client's initial message. 
*/\n> > > > + p = strdup(input);\n> > >\n> > > this strdup() should be pstrdup().\n> >\n> > Thanks, I'll fix that in the next re-roll.\n> >\n> > > In the same function, there are a bunch of reports like this:\n> > >\n> > > > ereport(ERROR,\n> > > > + (errcode(ERRCODE_PROTOCOL_VIOLATION),\n> > > > + errmsg(\"malformed OAUTHBEARER message\"),\n> > > > + errdetail(\"Comma expected, but found\n> character \\\"%s\\\".\",\n> > > > + sanitize_char(*p))));\n> > >\n> > > I don't think the double quotes are needed here, because sanitize_char\n> > > will return quotes if it's a single character. So it would end up\n> > > looking like this: ... found character \"'x'\".\n> >\n> > I'll fix this too. Thanks!\n>\n> v2, attached, incorporates Heikki's suggested fixes and also rebases on\n> top of latest HEAD, which had the SASL refactoring changes committed\n> last month.\n>\n> The biggest change from the last patchset is 0001, an attempt at\n> enabling jsonapi in the frontend without the use of palloc(), based on\n> suggestions by Michael and Tom from last commitfest. I've also made\n> some improvements to the pytest suite. No major changes to the OAuth\n> implementation yet.\n>\n> --Jacob\n>\nHi,\nFor v2-0001-common-jsonapi-support-FRONTEND-clients.patch :\n\n+ /* Clean up. */\n+ termJsonLexContext(&lex);\n\nAt the end of termJsonLexContext(), empty is copied to lex. 
For stack\nbased JsonLexContext, the copy seems unnecessary.\nMaybe introduce a boolean parameter for termJsonLexContext() to signal that\nthe copy can be omitted ?\n\n+#ifdef FRONTEND\n+ /* make sure initialization succeeded */\n+ if (lex->strval == NULL)\n+ return JSON_OUT_OF_MEMORY;\n\nShould PQExpBufferBroken(lex->strval) be used for the check ?\n\nThanks", "msg_date": "Wed, 25 Aug 2021 15:25:03 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Wed, Aug 25, 2021 at 3:25 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Wed, Aug 25, 2021 at 11:42 AM Jacob Champion <pchampion@vmware.com>\n> wrote:\n>\n>> On Tue, 2021-06-22 at 23:22 +0000, Jacob Champion wrote:\n>> > On Fri, 2021-06-18 at 11:31 +0300, Heikki Linnakangas wrote:\n>> > >\n>> > > A few small things caught my eye in the backend oauth_exchange\n>> function:\n>> > >\n>> > > > + /* Handle the client's initial message. */\n>> > > > + p = strdup(input);\n>> > >\n>> > > this strdup() should be pstrdup().\n>> >\n>> > Thanks, I'll fix that in the next re-roll.\n>> >\n>> > > In the same function, there are a bunch of reports like this:\n>> > >\n>> > > > ereport(ERROR,\n>> > > > + (errcode(ERRCODE_PROTOCOL_VIOLATION),\n>> > > > + errmsg(\"malformed OAUTHBEARER message\"),\n>> > > > + errdetail(\"Comma expected, but found\n>> character \\\"%s\\\".\",\n>> > > > + sanitize_char(*p))));\n>> > >\n>> > > I don't think the double quotes are needed here, because\n>> sanitize_char\n>> > > will return quotes if it's a single character. So it would end up\n>> > > looking like this: ... found character \"'x'\".\n>> >\n>> > I'll fix this too.
Thanks!\n>>\n>> v2, attached, incorporates Heikki's suggested fixes and also rebases on\n>> top of latest HEAD, which had the SASL refactoring changes committed\n>> last month.\n>>\n>> The biggest change from the last patchset is 0001, an attempt at\n>> enabling jsonapi in the frontend without the use of palloc(), based on\n>> suggestions by Michael and Tom from last commitfest. I've also made\n>> some improvements to the pytest suite. No major changes to the OAuth\n>> implementation yet.\n>>\n>> --Jacob\n>>\n> Hi,\n> For v2-0001-common-jsonapi-support-FRONTEND-clients.patch :\n>\n> + /* Clean up. */\n> + termJsonLexContext(&lex);\n>\n> At the end of termJsonLexContext(), empty is copied to lex. For stack\n> based JsonLexContext, the copy seems unnecessary.\n> Maybe introduce a boolean parameter for termJsonLexContext() to signal\n> that the copy can be omitted ?\n>\n> +#ifdef FRONTEND\n> + /* make sure initialization succeeded */\n> + if (lex->strval == NULL)\n> + return JSON_OUT_OF_MEMORY;\n>\n> Should PQExpBufferBroken(lex->strval) be used for the check ?\n>\n> Thanks\n>\nHi,\nFor v2-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch :\n\n+ i_init_session(&session);\n+\n+ if (!conn->oauth_client_id)\n+ {\n+ /* We can't talk to a server without a client identifier. */\n+ appendPQExpBufferStr(&conn->errorMessage,\n+ libpq_gettext(\"no oauth_client_id is set for\nthe connection\"));\n+ goto cleanup;\n\nCan conn->oauth_client_id check be performed ahead of i_init_session() ?\nThat way, ```goto cleanup``` can be replaced with return.\n\n+ if (!error_code || (strcmp(error_code, \"authorization_pending\")\n+ && strcmp(error_code, \"slow_down\")))\n\nWhat if, in the future, there is error code different from the above two\nwhich doesn't represent \"OAuth token retrieval failed\" scenario ?\n\nFor client_initial_response(),\n\n+ token_buf = createPQExpBuffer();\n+ if (!token_buf)\n+ goto cleanup;\n\nIf token_buf is NULL, there doesn't seem to be anything to free. 
We can\nreturn directly.\n\nCheers", "msg_date": "Wed, 25 Aug 2021 16:24:19 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Wed, 2021-08-25 at 15:25 -0700, Zhihong Yu wrote:\r\n> \r\n> Hi,\r\n> For v2-0001-common-jsonapi-support-FRONTEND-clients.patch :\r\n> \r\n> + /* Clean up. */\r\n> + termJsonLexContext(&lex); \r\n> \r\n> At the end of termJsonLexContext(), empty is copied to lex.
For stack\r\n> based JsonLexContext, the copy seems unnecessary.\r\n> Maybe introduce a boolean parameter for termJsonLexContext() to\r\n> signal that the copy can be omitted ?\r\n\r\nDo you mean heap-based? i.e. destroyJsonLexContext() does an\r\nunnecessary copy before free? Yeah, in that case it's not super useful,\r\nbut I think I'd want some evidence that the performance hit matters\r\nbefore optimizing it.\r\n\r\nAre there any other internal APIs that take a boolean parameter like\r\nthat? If not, I think we'd probably just want to remove the copy\r\nentirely if it's a problem.\r\n\r\n> +#ifdef FRONTEND\r\n> + /* make sure initialization succeeded */\r\n> + if (lex->strval == NULL)\r\n> + return JSON_OUT_OF_MEMORY;\r\n> \r\n> Should PQExpBufferBroken(lex->strval) be used for the check ?\r\n\r\nIt should be okay to continue if the strval is broken but non-NULL,\r\nsince it's about to be reset. That has the fringe benefit of allowing\r\nthe function to go as far as possible without failing, though that's\r\nprobably a pretty weak justification.\r\n\r\nIn practice, do you think that the probability of success is low enough\r\nthat we should just short-circuit and be done with it?\r\n\r\nOn Wed, 2021-08-25 at 16:24 -0700, Zhihong Yu wrote:\r\n> \r\n> For v2-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch :\r\n> \r\n> + i_init_session(&session);\r\n> +\r\n> + if (!conn->oauth_client_id)\r\n> + {\r\n> + /* We can't talk to a server without a client identifier. */\r\n> + appendPQExpBufferStr(&conn->errorMessage,\r\n> + libpq_gettext(\"no oauth_client_id is set for the connection\"));\r\n> + goto cleanup;\r\n> \r\n> Can conn->oauth_client_id check be performed ahead\r\n> of i_init_session() ? That way, ```goto cleanup``` can be replaced\r\n> with return.\r\n\r\nYeah, I think that makes sense. 
FYI, this is probably one of the\r\nfunctions that will be rewritten completely once iddawc is removed.\r\n\r\n> + if (!error_code || (strcmp(error_code, \"authorization_pending\")\r\n> + && strcmp(error_code, \"slow_down\")))\r\n> \r\n> What if, in the future, there is error code different from the above\r\n> two which doesn't represent \"OAuth token retrieval failed\" scenario ?\r\n\r\nWe'd have to update our code; that would be a breaking change to the\r\nDevice Authorization spec. Here's what it says today [1]:\r\n\r\n The \"authorization_pending\" and \"slow_down\" error codes define\r\n particularly unique behavior, as they indicate that the OAuth client\r\n should continue to poll the token endpoint by repeating the token\r\n request (implementing the precise behavior defined above). If the\r\n client receives an error response with any other error code, it MUST\r\n stop polling and SHOULD react accordingly, for example, by displaying\r\n an error to the user.\r\n\r\n> For client_initial_response(),\r\n> \r\n> + token_buf = createPQExpBuffer();\r\n> + if (!token_buf)\r\n> + goto cleanup;\r\n> \r\n> If token_buf is NULL, there doesn't seem to be anything to free. We\r\n> can return directly.\r\n\r\nThat's true today, but implementations have a habit of changing. I\r\npersonally prefer not to introduce too many exit points from a function\r\nthat's already using goto. In my experience, that makes future\r\nmaintenance harder.\r\n\r\nThanks for the reviews! 
Have you been able to give the patchset a try\r\nwith an OAuth deployment?\r\n\r\n--Jacob\r\n\r\n[1] https://datatracker.ietf.org/doc/html/rfc8628#section-3.5\r\n", "msg_date": "Thu, 26 Aug 2021 16:13:08 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Thu, Aug 26, 2021 at 9:13 AM Jacob Champion <pchampion@vmware.com> wrote:\n\n> On Wed, 2021-08-25 at 15:25 -0700, Zhihong Yu wrote:\n> >\n> > Hi,\n> > For v2-0001-common-jsonapi-support-FRONTEND-clients.patch :\n> >\n> > + /* Clean up. */\n> > + termJsonLexContext(&lex);\n> >\n> > At the end of termJsonLexContext(), empty is copied to lex. For stack\n> > based JsonLexContext, the copy seems unnecessary.\n> > Maybe introduce a boolean parameter for termJsonLexContext() to\n> > signal that the copy can be omitted ?\n>\n> Do you mean heap-based? i.e. destroyJsonLexContext() does an\n> unnecessary copy before free? Yeah, in that case it's not super useful,\n> but I think I'd want some evidence that the performance hit matters\n> before optimizing it.\n>\n> Are there any other internal APIs that take a boolean parameter like\n> that? If not, I think we'd probably just want to remove the copy\n> entirely if it's a problem.\n>\n> > +#ifdef FRONTEND\n> > + /* make sure initialization succeeded */\n> > + if (lex->strval == NULL)\n> > + return JSON_OUT_OF_MEMORY;\n> >\n> > Should PQExpBufferBroken(lex->strval) be used for the check ?\n>\n> It should be okay to continue if the strval is broken but non-NULL,\n> since it's about to be reset. 
That has the fringe benefit of allowing\n> the function to go as far as possible without failing, though that's\n> probably a pretty weak justification.\n>\n> In practice, do you think that the probability of success is low enough\n> that we should just short-circuit and be done with it?\n>\n> On Wed, 2021-08-25 at 16:24 -0700, Zhihong Yu wrote:\n> >\n> > For v2-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch :\n> >\n> > + i_init_session(&session);\n> > +\n> > + if (!conn->oauth_client_id)\n> > + {\n> > + /* We can't talk to a server without a client identifier. */\n> > + appendPQExpBufferStr(&conn->errorMessage,\n> > + libpq_gettext(\"no oauth_client_id is set\n> for the connection\"));\n> > + goto cleanup;\n> >\n> > Can conn->oauth_client_id check be performed ahead\n> > of i_init_session() ? That way, ```goto cleanup``` can be replaced\n> > with return.\n>\n> Yeah, I think that makes sense. FYI, this is probably one of the\n> functions that will be rewritten completely once iddawc is removed.\n>\n> > + if (!error_code || (strcmp(error_code, \"authorization_pending\")\n> > + && strcmp(error_code, \"slow_down\")))\n> >\n> > What if, in the future, there is error code different from the above\n> > two which doesn't represent \"OAuth token retrieval failed\" scenario ?\n>\n> We'd have to update our code; that would be a breaking change to the\n> Device Authorization spec. Here's what it says today [1]:\n>\n> The \"authorization_pending\" and \"slow_down\" error codes define\n> particularly unique behavior, as they indicate that the OAuth client\n> should continue to poll the token endpoint by repeating the token\n> request (implementing the precise behavior defined above). 
If the\n> client receives an error response with any other error code, it MUST\n> stop polling and SHOULD react accordingly, for example, by displaying\n> an error to the user.\n>\n> > For client_initial_response(),\n> >\n> > + token_buf = createPQExpBuffer();\n> > + if (!token_buf)\n> > + goto cleanup;\n> >\n> > If token_buf is NULL, there doesn't seem to be anything to free. We\n> > can return directly.\n>\n> That's true today, but implementations have a habit of changing. I\n> personally prefer not to introduce too many exit points from a function\n> that's already using goto. In my experience, that makes future\n> maintenance harder.\n>\n> Thanks for the reviews! Have you been able to give the patchset a try\n> with an OAuth deployment?\n>\n> --Jacob\n>\n> [1] https://datatracker.ietf.org/doc/html/rfc8628#section-3.5\n\nHi,\nbq. destroyJsonLexContext() does an unnecessary copy before free? Yeah, in\nthat case it's not super useful,\nbut I think I'd want some evidence that the performance hit matters before\noptimizing it.\n\nYes I agree.\n\nbq. In practice, do you think that the probability of success is low enough\nthat we should just short-circuit and be done with it?\n\nHaven't had a chance to try your patches out yet.\nI will leave this to people who are more familiar with OAuth\nimplementation(s).\n\nbq. I personally prefer not to introduce too many exit points from a\nfunction that's already using goto.\n\nFair enough.\n\nCheers", "msg_date": "Thu, 26 Aug 2021 09:20:17 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Thu, Aug 26, 2021 at 04:13:08PM +0000, Jacob Champion wrote:\n> Do you mean heap-based? i.e. destroyJsonLexContext() does an\n> unnecessary copy before free? Yeah, in that case it's not super useful,\n> but I think I'd want some evidence that the performance hit matters\n> before optimizing it.\n\nAs an authentication code path, the impact is minimal and my take on\nthat would be to keep the code simple. Now if you'd really wish to\nstress that without relying on the backend, one simple way is to use\npgbench -C -n with a mostly-empty script (one meta-command) coupled\nwith some profiling.\n--\nMichael", "msg_date": "Fri, 27 Aug 2021 11:32:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Fri, 2021-08-27 at 11:32 +0900, Michael Paquier wrote:\r\n> Now if you'd really wish to\r\n> stress that without relying on the backend, one simple way is to use\r\n> pgbench -C -n with a mostly-empty script (one meta-command) coupled\r\n> with some profiling.\r\n\r\nAh, thanks!
I'll add that to the toolbox.\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 31 Aug 2021 20:48:43 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Hi all,\r\n\r\nv3 rebases this patchset over the top of Samay's pluggable auth\r\nprovider API [1], included here as patches 0001-3. The final patch in\r\nthe set ports the server implementation from a core feature to a\r\ncontrib module; to switch between the two approaches, simply leave out\r\nthat final patch.\r\n\r\nThere are still some backend changes that must be made to get this\r\nworking, as pointed out in 0009, and obviously libpq support still\r\nrequires code changes.\r\n\r\n--Jacob\r\n\r\n[1] https://www.postgresql.org/message-id/flat/CAJxrbyxTRn5P8J-p%2BwHLwFahK5y56PhK28VOb55jqMO05Y-DJw%40mail.gmail.com", "msg_date": "Fri, 4 Mar 2022 19:13:42 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Hi Jacob,\n\nThank you for porting this on top of the pluggable auth methods API. I've\naddressed the feedback around other backend changes in my latest patch, but\nthe client side changes still remain. I had a few questions to understand\nthem better.\n\n(a) What specifically do the client side changes in the patch implement?\n(b) Are the changes you made on the client side specific to OAUTH or are\nthey about making SASL more generic? As an additional question, if someone\nwanted to implement something similar on top of your patch, would they\nstill have to make client side changes?\n\nRegards,\nSamay\n\nOn Fri, Mar 4, 2022 at 11:13 AM Jacob Champion <pchampion@vmware.com> wrote:\n\n> Hi all,\n>\n> v3 rebases this patchset over the top of Samay's pluggable auth\n> provider API [1], included here as patches 0001-3. 
The final patch in\n> the set ports the server implementation from a core feature to a\n> contrib module; to switch between the two approaches, simply leave out\n> that final patch.\n>\n> There are still some backend changes that must be made to get this\n> working, as pointed out in 0009, and obviously libpq support still\n> requires code changes.\n>\n> --Jacob\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/CAJxrbyxTRn5P8J-p%2BwHLwFahK5y56PhK28VOb55jqMO05Y-DJw%40mail.gmail.com\n>", "msg_date": "Tue, 22 Mar 2022 14:48:08 -0700", "msg_from": "samay sharma <smilingsamay@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Tue, 2022-03-22 at 14:48 -0700, samay sharma wrote:\r\n> Thank you for porting this on top of the pluggable auth methods API.\r\n> I've addressed the feedback around other backend changes in my latest\r\n> patch, but the client side changes still remain. I had a few\r\n> questions to understand them better.\r\n> \r\n> (a) What specifically do the client side changes in the patch implement?\r\n\r\nHi Samay,\r\n\r\nThe client-side changes are an implementation of the OAuth 2.0 Device\r\nAuthorization Grant [1] in libpq. The majority of the OAuth logic is\r\nhandled by the third-party iddawc library.\r\n\r\nThe server tells the client what OIDC provider to contact, and then\r\nlibpq prompts you to log into that provider on your\r\nsmartphone/browser/etc. using a one-time code. After you give libpq\r\npermission to act on your behalf, the Bearer token gets sent to libpq\r\nvia a direct connection, and libpq forwards it to the server so that\r\nthe server can determine whether you're allowed in.\r\n\r\n> (b) Are the changes you made on the client side specific to OAUTH or\r\n> are they about making SASL more generic?\r\n\r\nThe original patchset included changes to make SASL more generic.
Many\r\nof those changes have since been merged, and the remaining code is\r\nmostly OAuth-specific, but there are still improvements to be made.\r\n(And there's some JSON crud to sift through in the first couple of\r\npatches. I'm still mad that the OAUTHBEARER spec requires clients to\r\nparse JSON in the first place.)\r\n\r\n> As an additional question,\r\n> if someone wanted to implement something similar on top of your\r\n> patch, would they still have to make client side changes?\r\n\r\nAny new SASL mechanisms require changes to libpq at this point. You\r\nneed to implement a new pg_sasl_mech, modify pg_SASL_init() to select\r\nthe mechanism correctly, and add whatever connection string options you\r\nneed, along with the associated state in pg_conn. Patch 0004 has all\r\nthe client-side magic for OAUTHBEARER.\r\n\r\n--Jacob\r\n\r\n[1] https://datatracker.ietf.org/doc/html/rfc8628\r\n", "msg_date": "Tue, 22 Mar 2022 22:44:12 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Fri, 2022-03-04 at 19:13 +0000, Jacob Champion wrote:\r\n> v3 rebases this patchset over the top of Samay's pluggable auth\r\n> provider API [1], included here as patches 0001-3.\r\n\r\nv4 rebases over the latest version of the pluggable auth patchset\r\n(included as 0001-4). Note that there's a recent conflict as\r\nof d4781d887; use an older commit as the base (or wait for the other\r\nthread to be updated).\r\n\r\n--Jacob", "msg_date": "Sat, 26 Mar 2022 00:00:22 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Hi Hackers,\n\nWe are trying to implement AAD(Azure AD) support in PostgreSQL and it\ncan be achieved with support of the OAuth method. 
To support AAD on\ntop of OAuth in a generic fashion (i.e for all other OAuth providers),\nwe are proposing this patch. It basically exposes two new hooks (one\nfor error reporting and one for OAuth provider specific token\nvalidation) and passing OAuth bearer token to backend. It also adds\nsupport for client credentials flow of OAuth additional to device code\nflow which Jacob has proposed.\n\nThe changes for each component are summarized below.\n\n1. Provider-specific extension:\n Each OAuth provider implements their own token validator as an\nextension. Extension registers an OAuth provider hook which is matched\nto a line in the HBA file.\n\n2. Add support to pass on the OAuth bearer token. In this\nobtaining the bearer token is left to 3rd party application or user.\n\n ./psql -U <username> -d 'dbname=postgres\noauth_client_id=<client_id> oauth_bearer_token=<token>\n\n3. HBA: An additional param ‘provider’ is added for the oauth method.\n Defining \"oauth\" as method + passing provider, issuer endpoint\nand expected audience\n\n * * * * oauth provider=<token validation extension>\nissuer=.... scope=....\n\n4. Engine Backend:\n Support for generic OAUTHBEARER type, requesting client to\nprovide token and passing to token for provider-specific extension.\n\n5. Engine Frontend: Two-tiered approach.\n a) libpq transparently passes on the token received\nfrom 3rd party client as is to the backend.\n b) libpq optionally compiled for the clients which\nexplicitly need libpq to orchestrate OAuth communication with the\nissuer (it depends heavily on 3rd party library iddawc as Jacob\nalready pointed out. The library seems to be supporting all the OAuth\nflows.)\n\nPlease let us know your thoughts as the proposed method supports\ndifferent OAuth flows with the use of provider specific hooks. 
We\nthink that the proposal would be useful for various OAuth providers.\n\nThanks,\nMahendrakar.\n\n\nOn Tue, 20 Sept 2022 at 10:18, Jacob Champion <pchampion@vmware.com> wrote:\n>\n> On Tue, 2021-06-22 at 23:22 +0000, Jacob Champion wrote:\n> > On Fri, 2021-06-18 at 11:31 +0300, Heikki Linnakangas wrote:\n> > >\n> > > A few small things caught my eye in the backend oauth_exchange function:\n> > >\n> > > > + /* Handle the client's initial message. */\n> > > > + p = strdup(input);\n> > >\n> > > this strdup() should be pstrdup().\n> >\n> > Thanks, I'll fix that in the next re-roll.\n> >\n> > > In the same function, there are a bunch of reports like this:\n> > >\n> > > > ereport(ERROR,\n> > > > + (errcode(ERRCODE_PROTOCOL_VIOLATION),\n> > > > + errmsg(\"malformed OAUTHBEARER message\"),\n> > > > + errdetail(\"Comma expected, but found character \\\"%s\\\".\",\n> > > > + sanitize_char(*p))));\n> > >\n> > > I don't think the double quotes are needed here, because sanitize_char\n> > > will return quotes if it's a single character. So it would end up\n> > > looking like this: ... found character \"'x'\".\n> >\n> > I'll fix this too. Thanks!\n>\n> v2, attached, incorporates Heikki's suggested fixes and also rebases on\n> top of latest HEAD, which had the SASL refactoring changes committed\n> last month.\n>\n> The biggest change from the last patchset is 0001, an attempt at\n> enabling jsonapi in the frontend without the use of palloc(), based on\n> suggestions by Michael and Tom from last commitfest. I've also made\n> some improvements to the pytest suite. 
No major changes to the OAuth\n> implementation yet.\n>\n> --Jacob", "msg_date": "Tue, 20 Sep 2022 10:33:10 +0530", "msg_from": "mahendrakar s <mahendrakarforpg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Hi Mahendrakar, thanks for your interest and for the patch!\n\nOn Mon, Sep 19, 2022 at 10:03 PM mahendrakar s\n<mahendrakarforpg@gmail.com> wrote:\n> The changes for each component are summarized below.\n>\n> 1. Provider-specific extension:\n> Each OAuth provider implements their own token validator as an\n> extension. Extension registers an OAuth provider hook which is matched\n> to a line in the HBA file.\n\nHow easy is it to write a Bearer validator using C? My limited\nunderstanding was that most providers were publishing libraries in\nhigher-level languages.\n\nAlong those lines, sample validators will need to be provided, both to\nhelp in review and to get the pytest suite green again. (And coverage\nfor the new code is important, too.)\n\n> 2. Add support to pass on the OAuth bearer token. In this\n> obtaining the bearer token is left to 3rd party application or user.\n>\n> ./psql -U <username> -d 'dbname=postgres\n> oauth_client_id=<client_id> oauth_bearer_token=<token>\n\nThis hurts, but I think people are definitely going to ask for it, given\nthe frightening practice of copy-pasting these (incredibly sensitive\nsecret) tokens all over the place... Ideally I'd like to implement\nsender constraints for the Bearer token, to *prevent* copy-pasting (or,\nyou know, outright theft). But I'm not sure that sender constraints are\nwell-implemented yet for the major providers.\n\n> 3. HBA: An additional param ‘provider’ is added for the oauth method.\n> Defining \"oauth\" as method + passing provider, issuer endpoint\n> and expected audience\n>\n> * * * * oauth provider=<token validation extension>\n> issuer=.... 
scope=....\n\nNaming aside (this conflicts with Samay's previous proposal, I think), I\nhave concerns about the implementation. There's this code:\n\n> +\t\tif (oauth_provider && oauth_provider->name)\n> +\t\t{\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t(errmsg(\"OAuth provider \\\"%s\\\" is already loaded.\",\n> +\t\t\t\t\toauth_provider->name)));\n> +\t\t}\n\nwhich appears to prevent loading more than one global provider. But\nthere's also code that deals with a provider list? (Again, it'd help to\nhave test code covering the new stuff.)\n\n> b) libpq optionally compiled for the clients which\n> explicitly need libpq to orchestrate OAuth communication with the\n> issuer (it depends heavily on 3rd party library iddawc as Jacob\n> already pointed out. The library seems to be supporting all the OAuth\n> flows.)\n\nSpeaking of iddawc, I don't think it's a dependency we should choose to\nrely on. For all the code that it has, it doesn't seem to provide\ncompatibility with several real-world providers.\n\nGoogle, for one, chose not to follow the IETF spec it helped author, and\niddawc doesn't support its flavor of Device Authorization. At another\npoint, I think iddawc tried to decode Azure's Bearer tokens, which is\nincorrect...\n\nI haven't been able to check if those problems have been fixed in a\nrecent version, but if we're going to tie ourselves to a huge\ndependency, I'd at least like to believe that said dependency is\nbattle-tested and solid, and personally I don't feel like iddawc is.\n\n> -\tauth_method = I_TOKEN_AUTH_METHOD_NONE;\n> -\tif (conn->oauth_client_secret && *conn->oauth_client_secret)\n> -\t\tauth_method = I_TOKEN_AUTH_METHOD_SECRET_BASIC;\n\nThis code got moved, but I'm not sure why? It doesn't appear to have\nmade a change to the logic.\n\n> +\tif (conn->oauth_client_secret && *conn->oauth_client_secret)\n> +\t{\n> +\t\tsession_response_type = I_RESPONSE_TYPE_CLIENT_CREDENTIALS;\n> +\t}\n\nIs this an Azure-specific requirement? 
Ideally a public client (which\npsql is) shouldn't have to provide a secret to begin with, if I\nunderstand that bit of the protocol correctly. I think Google also\nrequired provider-specific changes in this part of the code, and\nunfortunately I don't think they looked the same as yours.\n\nWe'll have to figure all that out... Standards are great; everyone has\none of their own. :)\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 20 Sep 2022 16:19:31 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Tue, Sep 20, 2022 at 4:19 PM Jacob Champion <jchampion@timescale.com> wrote:\n> > 2. Add support to pass on the OAuth bearer token. In this\n> > obtaining the bearer token is left to 3rd party application or user.\n> >\n> > ./psql -U <username> -d 'dbname=postgres\n> > oauth_client_id=<client_id> oauth_bearer_token=<token>\n>\n> This hurts, but I think people are definitely going to ask for it, given\n> the frightening practice of copy-pasting these (incredibly sensitive\n> secret) tokens all over the place...\n\nAfter some further thought -- in this case, you already have an opaque\nBearer token (and therefore you already know, out of band, which\nprovider needs to be used), you're willing to copy-paste it from\nwhatever service you got it from, and you have an extension plugged\ninto Postgres on the backend that verifies this Bearer blob using some\nprocedure that Postgres knows nothing about.\n\nWhy do you need the OAUTHBEARER mechanism logic at that point? Isn't\nthat identical to a custom password scheme? 
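Mechanically, the only thing the mechanism adds around a pre-acquired token is the RFC 7628 framing. A rough Python sketch of the initial client response (field layout per the RFC, not per this patch):

```python
# Sketch of the OAUTHBEARER initial client response (RFC 7628):
# a GS2 header followed by \x01-separated key/value pairs, with a
# trailing extra \x01 terminating the message.
KVSEP = "\x01"


def oauthbearer_initial_response(token, authzid=""):
    gs2_header = f"n,a={authzid}," if authzid else "n,,"
    return f"{gs2_header}{KVSEP}auth=Bearer {token}{KVSEP}{KVSEP}"
```

For example, `oauthbearer_initial_response("tok", authzid="user@example.com")` produces `n,a=user@example.com,\x01auth=Bearer tok\x01\x01`. Everything else OAUTHBEARER offers -- discovery, the structured error status -- goes unused in that case.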
It seems like that could\nbe handled completely by Samay's pluggable auth proposal.\n\n--Jacob\n\n\n", "msg_date": "Wed, 21 Sep 2022 09:03:22 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "We can support both passing the token from an upstream client and libpq implementing OAUTH2 protocol to obtaining one.\n\nLibpq implementing OAUTHBEARER is needed for community/3rd party tools to have user-friendly authentication experience:\n1. For community client tools, like pg_admin, psql etc. \n    Example experience: pg_admin would be able to open a popup dialog to authenticate customer and keep refresh token to avoid asking the user frequently.\n2. For 3rd party connectors supporting generic OAUTH with any provider. Useful for datawiz clients, like Tableau or ETL tools. Those can support both user and client OAUTH flows.\n\nLibpq passing toked directly from an upstream client is useful in other scenarios:\n1. Enterprise clients, built with .Net / Java and using provider-specific authentication libraries, like MSAL for AAD. Those can also support more advance provider-specific token acquisition flows.\n2. Resource-tight (like IoT) clients. Those can be compiled without optional libpq flag not including the iddawc or other dependency.\n\nThanks!\nAndrey.\n\n-----Original Message-----\nFrom: Jacob Champion <jchampion@timescale.com> \nSent: Wednesday, September 21, 2022 9:03 AM\nTo: mahendrakar s <mahendrakarforpg@gmail.com>\nCc: pgsql-hackers@postgresql.org; smilingsamay@gmail.com; andres@anarazel.de; Andrey Chudnovskiy <Andrey.Chudnovskiy@microsoft.com>; Mahendrakar Srinivasarao <mahendrakars@microsoft.com>\nSubject: [EXTERNAL] Re: [PoC] Federated Authn/z with OAUTHBEARER\n\n
On Tue, Sep 20, 2022 at 4:19 PM Jacob Champion <jchampion@timescale.com> wrote:\n> > 2. Add support to pass on the OAuth bearer token. In this\n> > obtaining the bearer token is left to 3rd party application or user.\n> >\n> > ./psql -U <username> -d 'dbname=postgres \n> > oauth_client_id=<client_id> oauth_bearer_token=<token>\n>\n> This hurts, but I think people are definitely going to ask for it, \n> given the frightening practice of copy-pasting these (incredibly \n> sensitive\n> secret) tokens all over the place...\n\nAfter some further thought -- in this case, you already have an opaque Bearer token (and therefore you already know, out of band, which provider needs to be used), you're willing to copy-paste it from whatever service you got it from, and you have an extension plugged into Postgres on the backend that verifies this Bearer blob using some procedure that Postgres knows nothing about.\n\nWhy do you need the OAUTHBEARER mechanism logic at that point? Isn't that identical to a custom password scheme? It seems like that could be handled completely by Samay's pluggable auth proposal.\n\n--Jacob\n\n\n", "msg_date": "Wed, 21 Sep 2022 22:10:25 +0000", "msg_from": "Andrey Chudnovskiy <Andrey.Chudnovskiy@microsoft.com>", "msg_from_op": false, "msg_subject": "RE: [EXTERNAL] Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Wed, Sep 21, 2022 at 3:10 PM Andrey Chudnovskiy\n<Andrey.Chudnovskiy@microsoft.com> wrote:\n> We can support both passing the token from an upstream client and libpq implementing OAUTH2 protocol to obtaining one.\n\nRight, I agree that we could potentially do both.\n\n> Libpq passing toked directly from an upstream client is useful in other scenarios:\n> 1. Enterprise clients, built with .Net / Java and using provider-specific authentication libraries, like MSAL for AAD. 
Those can also support more advance provider-specific token acquisition flows.\n> 2. Resource-tight (like IoT) clients. Those can be compiled without optional libpq flag not including the iddawc or other dependency.\n\nWhat I don't understand is how the OAUTHBEARER mechanism helps you in\nthis case. You're short-circuiting the negotiation where the server\ntells the client what provider to use and what scopes to request, and\ninstead you're saying \"here's a secret string, just take it and\nvalidate it with magic.\"\n\nI realize the ability to pass an opaque token may be useful, but from\nthe server's perspective, I don't see what differentiates it from the\npassword auth method plus a custom authenticator plugin. Why pay for\nthe additional complexity of OAUTHBEARER if you're not going to use\nit?\n\n--Jacob\n\n\n", "msg_date": "Wed, 21 Sep 2022 15:31:29 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "First, My message from corp email wasn't displayed in the thread,\nThat is what Jacob replied to, let me post it here for context:\n\n> We can support both passing the token from an upstream client and libpq implementing OAUTH2 protocol to obtain one.\n>\n> Libpq implementing OAUTHBEARER is needed for community/3rd party tools to have user-friendly authentication experience:\n>\n> 1. For community client tools, like pg_admin, psql etc.\n> Example experience: pg_admin would be able to open a popup dialog to authenticate customers and keep refresh tokens to avoid asking the user frequently.\n> 2. For 3rd party connectors supporting generic OAUTH with any provider. Useful for datawiz clients, like Tableau or ETL tools. Those can support both user and client OAUTH flows.\n>\n> Libpq passing toked directly from an upstream client is useful in other scenarios:\n> 1. 
Enterprise clients, built with .Net / Java and using provider-specific authentication libraries, like MSAL for AAD. Those can also support more advanced provider-specific token acquisition flows.\n> 2. Resource-tight (like IoT) clients. Those can be compiled without the optional libpq flag not including the iddawc or other dependency.\n\n-----------------------------------------------------------------------------------------------------\nOn this:\n\n> What I don't understand is how the OAUTHBEARER mechanism helps you in\n> this case. You're short-circuiting the negotiation where the server\n> tells the client what provider to use and what scopes to request, and\n> instead you're saying \"here's a secret string, just take it and\n> validate it with magic.\"\n>\n> I realize the ability to pass an opaque token may be useful, but from\n> the server's perspective, I don't see what differentiates it from the\n> password auth method plus a custom authenticator plugin. Why pay for\n> the additional complexity of OAUTHBEARER if you're not going to use\n> it?\n\nYes, passing a token as a new auth method won't make much sense in\nisolation. However:\n1. Since OAUTHBEARER is supported in the ecosystem, passing a token as\na way to authenticate with OAUTHBEARER is more consistent (IMO), then\npassing it as a password.\n2. Validation on the backend side doesn't depend on whether the token\nis obtained by libpq or transparently passed by the upstream client.\n3. 
Single OAUTH auth method on the server side for both scenarios,\nwould allow both enterprise clients with their own Token acquisition\nand community clients using libpq flows to connect as the same PG\nusers/roles.\n\nOn Wed, Sep 21, 2022 at 8:36 PM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Wed, Sep 21, 2022 at 3:10 PM Andrey Chudnovskiy\n> <Andrey.Chudnovskiy@microsoft.com> wrote:\n> > We can support both passing the token from an upstream client and libpq implementing OAUTH2 protocol to obtaining one.\n>\n> Right, I agree that we could potentially do both.\n>\n> > Libpq passing toked directly from an upstream client is useful in other scenarios:\n> > 1. Enterprise clients, built with .Net / Java and using provider-specific authentication libraries, like MSAL for AAD. Those can also support more advance provider-specific token acquisition flows.\n> > 2. Resource-tight (like IoT) clients. Those can be compiled without optional libpq flag not including the iddawc or other dependency.\n>\n> What I don't understand is how the OAUTHBEARER mechanism helps you in\n> this case. You're short-circuiting the negotiation where the server\n> tells the client what provider to use and what scopes to request, and\n> instead you're saying \"here's a secret string, just take it and\n> validate it with magic.\"\n>\n> I realize the ability to pass an opaque token may be useful, but from\n> the server's perspective, I don't see what differentiates it from the\n> password auth method plus a custom authenticator plugin. 
Why pay for\n> the additional complexity of OAUTHBEARER if you're not going to use\n> it?\n>\n> --Jacob\n>\n>\n>\n>\n\n\n", "msg_date": "Wed, 21 Sep 2022 21:55:08 -0700", "msg_from": "Andrey Chudnovsky <achudnovskij@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On 9/21/22 21:55, Andrey Chudnovsky wrote:\n> First, My message from corp email wasn't displayed in the thread,\n\nI see it on the public archives [1]. Your client is choosing some pretty\nconfusing quoting tactics, though, which you may want to adjust. :D\n\nI have what I'll call some \"skeptical curiosity\" here -- you don't need\nto defend your use cases to me by any means, but I'd love to understand\nmore about them.\n\n> Yes, passing a token as a new auth method won't make much sense in\n> isolation. However:\n> 1. Since OAUTHBEARER is supported in the ecosystem, passing a token as\n> a way to authenticate with OAUTHBEARER is more consistent (IMO), then\n> passing it as a password.\n\nAgreed. It's probably not a very strong argument for the new mechanism,\nthough, especially if you're not using the most expensive code inside it.\n\n> 2. Validation on the backend side doesn't depend on whether the token\n> is obtained by libpq or transparently passed by the upstream client.\n\nSure.\n\n> 3. Single OAUTH auth method on the server side for both scenarios,\n> would allow both enterprise clients with their own Token acquisition\n> and community clients using libpq flows to connect as the same PG\n> users/roles.\n\nOkay, this is a stronger argument. With that in mind, I want to revisit\nyour examples and maybe provide some counterproposals:\n\n>> Libpq passing toked directly from an upstream client is useful in other scenarios:\n>> 1. Enterprise clients, built with .Net / Java and using provider-specific authentication libraries, like MSAL for AAD. 
Those can also support more advanced provider-specific token acquisition flows.\n\nI can see that providing a token directly would help you work around\nlimitations in libpq's \"standard\" OAuth flows, whether we use iddawc or\nnot. And it's cheap in terms of implementation. But I have a feeling it\nwould fall apart rapidly with error cases, where the server is giving\nlibpq information via the OAUTHBEARER mechanism, but libpq can only\ncommunicate to your wrapper through human-readable error messages on stderr.\n\nThis seems like clear motivation for client-side SASL plugins (which\nwere also discussed on Samay's proposal thread). That's a lot more\nexpensive to implement in libpq, but if it were hypothetically\navailable, wouldn't you rather your provider-specific code be able to\nspeak OAUTHBEARER directly with the server?\n\n>> 2. Resource-tight (like IoT) clients. Those can be compiled without the optional libpq flag not including the iddawc or other dependency.\n\nI want to dig into this much more; resource-constrained systems are near\nand dear to me. I can see two cases here:\n\nCase 1: The device is an IoT client that wants to connect on its own\nbehalf. Why would you want to use OAuth in that case? And how would the\nIoT device get its Bearer token to begin with? I'm much more used to\narchitectures that provision high-entropy secrets for this, whether\nthey're incredibly long passwords per device (in which case,\nchannel-bound SCRAM should be a fairly strong choice?) or client certs\n(which can be better decentralized, but make for a lot of bookkeeping).\n\nIf the answer to that is, \"we want an IoT client to be able to connect\nusing the same role as a person\", then I think that illustrates a clear\nneed for SASL negotiation. That would let the IoT client choose\nSCRAM-*-PLUS or EXTERNAL, and the person at the keyboard can choose\nOAUTHBEARER. 
Then we have incredible flexibility, because you don't have\nto engineer one mechanism to handle them all.\n\nCase 2: The constrained device is being used as a jump point. So there's\nan actual person at a keyboard, trying to get into a backend server\n(maybe behind a firewall layer, etc.), and the middlebox is either not\nweb-connected or is incredibly tiny for some reason. That might be a\ngood use case for a copy-pasted Bearer token, but is there actual demand\nfor that use case? What motivation would you (or your end user) have for\nchoosing a fairly heavy, web-centric authentication method in such a\nconstrained environment?\n\nAre there other resource-constrained use cases I've missed?\n\nThanks,\n--Jacob\n\n[1]\nhttps://www.postgresql.org/message-id/MN0PR21MB31694BAC193ECE1807FD45358F4F9%40MN0PR21MB3169.namprd21.prod.outlook.com\n\n\n\n", "msg_date": "Thu, 22 Sep 2022 14:53:55 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Fri, Mar 25, 2022 at 5:00 PM Jacob Champion <pchampion@vmware.com> wrote:\n> v4 rebases over the latest version of the pluggable auth patchset\n> (included as 0001-4). Note that there's a recent conflict as\n> of d4781d887; use an older commit as the base (or wait for the other\n> thread to be updated).\n\nHere's a newly rebased v5. (They're all zipped now, which I probably\nshould have done a while back, sorry.)\n\n- As before, 0001-4 are the pluggable auth set; they've now diverged\nfrom the official version over on the other thread [1].\n- I'm not sure that 0005 is still completely coherent after the\nrebase, given the recent changes to jsonapi.c. 
But for now, the tests\nare green, and that should be enough to keep the conversation going.\n- 0008 will hopefully be obsoleted when the SYSTEM_USER proposal [2] lands.\n\nThanks,\n--Jacob\n\n[1] https://www.postgresql.org/message-id/CAJxrbyxgFzfqby%2BVRCkeAhJnwVZE50%2BZLPx0JT2TDg9LbZtkCg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/7e692b8c-0b11-45db-1cad-3afc5b57409f@amazon.com", "msg_date": "Fri, 23 Sep 2022 15:39:19 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": ">>> Libpq passing toked directly from an upstream client is useful in other scenarios:\n>>> 1. Enterprise clients, built with .Net / Java and using provider-specific authentication libraries, like MSAL for AAD. Those can also support more advanced provider-specific token acquisition flows.\n\n> I can see that providing a token directly would help you work around\n> limitations in libpq's \"standard\" OAuth flows, whether we use iddawc or\n> not. And it's cheap in terms of implementation. But I have a feeling it\n> would fall apart rapidly with error cases, where the server is giving\n> libpq information via the OAUTHBEARER mechanism, but libpq can only\n> communicate to your wrapper through human-readable error messages on stderr.\n\nFor the providing token directly, that would be primarily used for\nscenarios where the same party controls both the server and the client\nside wrapper.\nI.e. The client knows how to get a token for a particular principal\nand doesn't need any additional information other than human readable\nmessages.\nPlease clarify the scenarios where you see this falling apart.\n\nI can provide an example in the cloud world. 
We (Azure) as well as\nother providers offer ways to obtain OAUTH tokens for\nService-to-Service communication at IAAS / PAAS level.\non Azure \"Managed Identity\" feature integrated in Compute VM allows a\nclient to make a local http call to get a token. VM itself manages the\ncertificate livecycle, as well as implements the corresponding OAUTH\nflow.\nThis capability is used by both our 1st party PAAS offerings, as well\nas 3rd party services deploying on VMs or managed K8S clusters.\nHere, the client doesn't need libpq assistance in obtaining the token.\n\n> This seems like clear motivation for client-side SASL plugins (which\n> were also discussed on Samay's proposal thread). That's a lot more\n> expensive to implement in libpq, but if it were hypothetically\n> available, wouldn't you rather your provider-specific code be able to\n> speak OAUTHBEARER directly with the server?\n\nI generally agree that pluggable auth layers in libpq could be\nbeneficial. However, as you pointed out in Samay's thread, that would\nrequire a new distribution model for libpq / clients to optionally\ninclude provider-specific logic.\n\nMy optimistic plan here would be to implement several core OAUTH flows\nin libpq core which would be generic enough to support major\nenterprise OAUTH providers:\n1. Client Credentials flow (Client_id + Client_secret) for backend applications.\n2. Authorization Code Flow with PKCE and/or Device code flow for GUI\napplications.\n\n(2.) 
above would require a protocol between libpq and upstream clients\nto exchange several messages.\nYour patch includes a way for libpq to deliver to the client a message\nabout the next authentication steps, so planned to build on top of\nthat.\n\nA little about scenarios, we look at.\nWhat we're trying to achieve here is an easy integration path for\nmultiple players in the ecosystem:\n- Managed PaaS Postgres providers (both us and multi-cloud solutions)\n- SaaS providers deploying postgres on IaaS/PaaS providers' clouds\n- Tools - pg_admin, psql and other ones.\n- BI, ETL, Federation and other scenarios where postgres is used as\nthe data source.\n\nIf we can offer a provider agnostic solution for Backend <=> libpq <=>\nUpstreal client path, we can have all players above build support for\nOAUTH credentials, managed by the cloud provider of their choice.\n\nFor us, that would mean:\n- Better administrator experience with pg_admin / psql handling of the\nAAD (Azure Active Directory) authentication flows.\n- Path for integration solutions using Postgres to build AAD\nauthentication in their management experience.\n- Ability to use AAD identity provider for any Postgres deployments\nother than our 1st party PaaS offering.\n- Ability to offer github as the identity provider for PaaS Postgres offering.\n\nOther players in the ecosystem above would be able to get the same benefits.\n\nDoes that make sense and possible without provider specific libpq plugin?\n\n-------------------------\nOn resource constrained scenarios.\n> I want to dig into this much more; resource-constrained systems are near\n> and dear to me. I can see two cases here:\n\nI just referred to the ability to compile libpq without extra\ndependencies to save some kilobytes.\nNot sure if OAUTH is widely used in those cases. 
It involves overhead\nanyway, and requires the device to talk to an additional party (OAUTH\nprovider).\nLikely Cert authentication is easier.\nIf needed, it can get libpq with full OAUTH support and use a client\ncode. But I didn't think about this scenario.\n\nOn Fri, Sep 23, 2022 at 3:39 PM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Fri, Mar 25, 2022 at 5:00 PM Jacob Champion <pchampion@vmware.com> wrote:\n> > v4 rebases over the latest version of the pluggable auth patchset\n> > (included as 0001-4). Note that there's a recent conflict as\n> > of d4781d887; use an older commit as the base (or wait for the other\n> > thread to be updated).\n>\n> Here's a newly rebased v5. (They're all zipped now, which I probably\n> should have done a while back, sorry.)\n>\n> - As before, 0001-4 are the pluggable auth set; they've now diverged\n> from the official version over on the other thread [1].\n> - I'm not sure that 0005 is still completely coherent after the\n> rebase, given the recent changes to jsonapi.c. But for now, the tests\n> are green, and that should be enough to keep the conversation going.\n> - 0008 will hopefully be obsoleted when the SYSTEM_USER proposal [2] lands.\n>\n> Thanks,\n> --Jacob\n>\n> [1] https://www.postgresql.org/message-id/CAJxrbyxgFzfqby%2BVRCkeAhJnwVZE50%2BZLPx0JT2TDg9LbZtkCg%40mail.gmail.com\n> [2] https://www.postgresql.org/message-id/flat/7e692b8c-0b11-45db-1cad-3afc5b57409f@amazon.com\n\n\n", "msg_date": "Mon, 26 Sep 2022 18:39:28 -0700", "msg_from": "Andrey Chudnovsky <achudnovskij@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Mon, Sep 26, 2022 at 6:39 PM Andrey Chudnovsky\n<achudnovskij@gmail.com> wrote:\n> For the providing token directly, that would be primarily used for\n> scenarios where the same party controls both the server and the client\n> side wrapper.\n> I.e. 
The client knows how to get a token for a particular principal\n> and doesn't need any additional information other than human readable\n> messages.\n> Please clarify the scenarios where you see this falling apart.\n\nThe most concrete example I can see is with the OAUTHBEARER error\nresponse. If you want to eventually handle differing scopes per role,\nor different error statuses (which the proof-of-concept currently\nhardcodes as `invalid_token`), then the client can't assume it knows\nwhat the server is going to say there. I think that's true even if you\ncontrol both sides and are hardcoding the provider.\n\nHow should we communicate those pieces to a custom client when it's\npassing a token directly? The easiest way I can see is for the custom\nclient to speak the OAUTHBEARER protocol directly (e.g. SASL plugin).\nIf you had to parse the libpq error message, I don't think that'd be\nparticularly maintainable.\n\n> I can provide an example in the cloud world. We (Azure) as well as\n> other providers offer ways to obtain OAUTH tokens for\n> Service-to-Service communication at IAAS / PAAS level.\n> on Azure \"Managed Identity\" feature integrated in Compute VM allows a\n> client to make a local http call to get a token. VM itself manages the\n> certificate livecycle, as well as implements the corresponding OAUTH\n> flow.\n> This capability is used by both our 1st party PAAS offerings, as well\n> as 3rd party services deploying on VMs or managed K8S clusters.\n> Here, the client doesn't need libpq assistance in obtaining the token.\n\nCool. To me that's the strongest argument yet for directly providing\ntokens to libpq.\n\n> My optimistic plan here would be to implement several core OAUTH flows\n> in libpq core which would be generic enough to support major\n> enterprise OAUTH providers:\n> 1. Client Credentials flow (Client_id + Client_secret) for backend applications.\n> 2. 
Authorization Code Flow with PKCE and/or Device code flow for GUI\n> applications.\n\nAs long as it's clear to DBAs when to use which flow (because existing\ndocumentation for that is hit-and-miss), I think it's reasonable to\neventually support multiple flows. Personally my preference would be\nto start with one or two core flows, and expand outward once we're\nsure that we do those perfectly. Otherwise the explosion of knobs and\nbuttons might be overwhelming, both to users and devs.\n\nRelated to the question of flows is the client implementation library.\nI've mentioned that I don't think iddawc is production-ready. As far\nas I'm aware, there is only one certified OpenID relying party written\nin C, and that's... an Apache server plugin. That leaves us either\nchoosing an untested library, scouring the web for a \"tested\" library\n(and hoping we're right in our assessment), or implementing our own\n(which is going to tamp down enthusiasm for supporting many flows,\nthough that has its own set of benefits). If you know of any reliable\nimplementations with a C API, please let me know.\n\n> (2.) above would require a protocol between libpq and upstream clients\n> to exchange several messages.\n> Your patch includes a way for libpq to deliver to the client a message\n> about the next authentication steps, so planned to build on top of\n> that.\n\nSpecifically it delivers that message to an end user. 
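In the device-flow case, for instance, that prompt is rendered from a few fields of the RFC 8628 device authorization response, and a machine client would want the raw fields rather than text on stderr. A Python sketch (the JSON below is a made-up sample, not any particular provider's output):

```python
# Sketch: the RFC 8628 device authorization response fields a generic
# machine client needs, instead of a human-readable prompt.
import json


def parse_device_authorization(body):
    resp = json.loads(body)
    return {
        "verification_uri": resp["verification_uri"],  # user opens this URL
        "user_code": resp["user_code"],                # and enters this code
        "device_code": resp["device_code"],            # client polls with this
        "interval": resp.get("interval", 5),           # poll period in seconds
    }


sample = ('{"device_code": "dc-xyz", "user_code": "ABCD-EFGH",'
          ' "verification_uri": "https://idp.example/device",'
          ' "expires_in": 900}')
```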
If you want a\ngeneric machine client to be able to use that, then we'll need to talk\nabout how.\n\n> A little about scenarios, we look at.\n> What we're trying to achieve here is an easy integration path for\n> multiple players in the ecosystem:\n> - Managed PaaS Postgres providers (both us and multi-cloud solutions)\n> - SaaS providers deploying postgres on IaaS/PaaS providers' clouds\n> - Tools - pg_admin, psql and other ones.\n> - BI, ETL, Federation and other scenarios where postgres is used as\n> the data source.\n>\n> If we can offer a provider agnostic solution for Backend <=> libpq <=>\n> Upstreal client path, we can have all players above build support for\n> OAUTH credentials, managed by the cloud provider of their choice.\n\nWell... I don't quite understand why we'd go to the trouble of\nproviding a provider-agnostic communication solution only to have\neveryone write their own provider-specific client support. Unless\nyou're saying Microsoft would provide an officially blessed plugin for\nthe *server* side only, and Google would provide one of their own, and\nso on.\n\nThe server side authorization is the only place where I think it makes\nsense to specialize by default. 
libpq should remain agnostic, with the\nunderstanding that we'll need to make hard decisions when a major\nprovider decides not to follow a spec.\n\n> For us, that would mean:\n> - Better administrator experience with pg_admin / psql handling of the\n> AAD (Azure Active Directory) authentication flows.\n> - Path for integration solutions using Postgres to build AAD\n> authentication in their management experience.\n> - Ability to use AAD identity provider for any Postgres deployments\n> other than our 1st party PaaS offering.\n> - Ability to offer github as the identity provider for PaaS Postgres offering.\n\nGitHub is unfortunately a bit tricky, unless they've started\nsupporting OpenID recently?\n\n> Other players in the ecosystem above would be able to get the same benefits.\n>\n> Does that make sense and possible without provider specific libpq plugin?\n\nIf the players involved implement the flows and follow the specs, yes.\nThat's a big \"if\", unfortunately. I think GitHub and Google are two\nmajor players who are currently doing things their own way.\n\n> I just referred to the ability to compile libpq without extra\n> dependencies to save some kilobytes.\n> Not sure if OAUTH is widely used in those cases. It involves overhead\n> anyway, and requires the device to talk to an additional party (OAUTH\n> provider).\n> Likely Cert authentication is easier.\n> If needed, it can get libpq with full OAUTH support and use a client\n> code. But I didn't think about this scenario.\n\nMakes sense. Thanks!\n\n--Jacob\n\n\n", "msg_date": "Tue, 27 Sep 2022 14:45:55 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "> The most concrete example I can see is with the OAUTHBEARER error\n> response. 
If you want to eventually handle differing scopes per role,\n> or different error statuses (which the proof-of-concept currently\n> hardcodes as `invalid_token`), then the client can't assume it knows\n> what the server is going to say there. I think that's true even if you\n> control both sides and are hardcoding the provider.\n\nOk, I see the point. It's related to the topic of communication\nbetween libpq and the upstream client.\n\n\n> How should we communicate those pieces to a custom client when it's\n> passing a token directly? The easiest way I can see is for the custom\n> client to speak the OAUTHBEARER protocol directly (e.g. SASL plugin).\n> If you had to parse the libpq error message, I don't think that'd be\n> particularly maintainable.\n\nI agree that parsing the message is not a sustainable way.\nCould you provide more details on the SASL plugin approach you propose?\n\nSpecifically, is this basically a set of extension hooks for the client\nside?\nWith the need for the client to be compiled with the plugins based on\nthe set of providers it needs.\n\n\n> Well... I don't quite understand why we'd go to the trouble of\n> providing a provider-agnostic communication solution only to have\n> everyone write their own provider-specific client support. Unless\n> you're saying Microsoft would provide an officially blessed plugin for\n> the *server* side only, and Google would provide one of their own, and\n> so on.\n\nYes, via extensions. Identity providers can open source extensions to\nuse their auth services outside of first party PaaS offerings.\nFor 3rd party Postgres PaaS or on premise deployments.\n\n\n> The server side authorization is the only place where I think it makes\n> sense to specialize by default. libpq should remain agnostic, with the\n> understanding that we'll need to make hard decisions when a major\n> provider decides not to follow a spec.\n\nCompletely agree with agnostic libpq. 
Though needs validation with\nseveral major providers to know if this is possible.\n\n\n> Specifically it delivers that message to an end user. If you want a\n> generic machine client to be able to use that, then we'll need to talk\n> about how.\n\nYes, that's what needs to be decided.\nIn both Device code and Authorization code scenarios, libpq and the\nclient would need to exchange a couple of pieces of metadata.\nPlus, after success, the client should be able to access a refresh token\nfor further use.\n\nCan we implement a generic protocol like for this between libpq and the\nclients?", "msg_date": "Fri, 30 Sep 2022 07:47:34 -0700", "msg_from": "Andrey Chudnovsky <achudnovskij@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Fri, Sep 30, 2022 at 7:47 AM Andrey Chudnovsky\n<achudnovskij@gmail.com> wrote:\n> > How should we communicate those pieces to a custom client when it's\n> > passing a token directly? The easiest way I can see is for the custom\n> > client to speak the OAUTHBEARER protocol directly (e.g. 
SASL plugin).\n> > If you had to parse the libpq error message, I don't think that'd be\n> > particularly maintainable.\n>\n> I agree that parsing the message is not a sustainable way.\n> Could you provide more details on the SASL plugin approach you propose?\n>\n> Specifically, is this basically a set of extension hooks for the client side?\n> With the need for the client to be compiled with the plugins based on\n> the set of providers it needs.\n\nThat's a good question. I can see two broad approaches, with maybe\nsome ability to combine them into a hybrid:\n\n1. If there turns out to be serious interest in having libpq itself\nhandle OAuth natively (with all of the web-facing code that implies,\nand all of the questions still left to answer), then we might be able\nto provide a \"token hook\" in the same way that we currently provide a\npassphrase hook for OpenSSL keys. By default, libpq would use its\ninternal machinery to take the provider details, navigate its builtin\nflow, and return the Bearer token. If you wanted to override that\nbehavior as a client, you could replace the builtin flow with your\nown, by registering a set of callbacks.\n\n2. Alternatively, OAuth support could be provided via a mechanism\nplugin for some third-party SASL library (GNU libgsasl, Cyrus\nlibsasl2). We could provide an OAuth plugin in contrib that handles\nthe default flow. Other providers could publish their alternative\nplugins to completely replace the OAUTHBEARER mechanism handling.\n\nApproach (2) would make for some duplicated effort since every\nprovider has to write code to speak the OAUTHBEARER protocol. It might\nsimplify provider-specific distribution, since (at least for Cyrus) I\nthink you could build a single plugin that supports both the client\nand server side. But it would be a lot easier to unknowingly (or\nknowingly) break the spec, since you'd control both the client and\nserver sides. 
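For concreteness, the wire protocol each such plugin would have to speak is small. Here is a hedged sketch (not taken from the PoC patch; the host, port, and token values are placeholders) of the initial client response that RFC 7628 defines for OAUTHBEARER:

```python
# Sketch only -- not code from the PoC patch. This is the OAUTHBEARER
# initial client response defined by RFC 7628, which any SASL mechanism
# plugin (or libpq itself) would have to produce. All values below are
# placeholders.

KVSEP = "\x01"  # RFC 7628 key/value pair separator (Ctrl-A)

def oauthbearer_initial_response(token, authzid="", host="", port=0):
    # GS2 header: "n" = no channel binding; authzid is optional.
    gs2_header = "n," + ("a=" + authzid if authzid else "") + ","
    pairs = ""
    if host:
        pairs += "host=" + host + KVSEP
    if port:
        pairs += "port=" + str(port) + KVSEP
    pairs += "auth=Bearer " + token + KVSEP
    # The message is terminated by an extra kvsep.
    return (gs2_header + KVSEP + pairs + KVSEP).encode("ascii")
```

The failure path, where the server answers with a JSON status such as the `invalid_token` mentioned earlier in the thread, rides on the same mechanism.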
There would be less incentive to interoperate.\n\nFinally, we could potentially take pieces from both, by having an\nofficial OAuth mechanism plugin that provides a client-side hook to\noverride the flow. I have no idea if the benefits would offset the\ncosts of a plugin-for-a-plugin style architecture. And providers would\nstill be free to ignore it and just provide a full mechanism plugin\nanyway.\n\n> > Well... I don't quite understand why we'd go to the trouble of\n> > providing a provider-agnostic communication solution only to have\n> > everyone write their own provider-specific client support. Unless\n> > you're saying Microsoft would provide an officially blessed plugin for\n> > the *server* side only, and Google would provide one of their own, and\n> > so on.\n>\n> Yes, via extensions. Identity providers can open source extensions to\n> use their auth services outside of first party PaaS offerings.\n> For 3rd party Postgres PaaS or on premise deployments.\n\nSounds reasonable.\n\n> > The server side authorization is the only place where I think it makes\n> > sense to specialize by default. libpq should remain agnostic, with the\n> > understanding that we'll need to make hard decisions when a major\n> > provider decides not to follow a spec.\n>\n> Completely agree with agnostic libpq. Though needs validation with\n> several major providers to know if this is possible.\n\nAgreed.\n\n> > Specifically it delivers that message to an end user. 
If you want a\n> > generic machine client to be able to use that, then we'll need to talk\n> > about how.\n>\n> Yes, that's what needs to be decided.\n> In both Device code and Authorization code scenarios, libpq and the\n> client would need to exchange a couple of pieces of metadata.\n> Plus, after success, the client should be able to access a refresh token for further use.\n>\n> Can we implement a generic protocol like for this between libpq and the clients?\n\nI think we can probably prototype a callback hook for approach (1)\npretty quickly. (2) is a lot more work and investigation, but it's\nwork that I'm interested in doing (when I get the time). I think there\nare other very good reasons to consider a third-party SASL library,\nand some good lessons to be learned, even if the community decides not\nto go down that road.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Fri, 30 Sep 2022 13:45:29 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "> I think we can probably prototype a callback hook for approach (1)\n> pretty quickly. (2) is a lot more work and investigation, but it's\n> work that I'm interested in doing (when I get the time). I think there\n> are other very good reasons to consider a third-party SASL library,\n> and some good lessons to be learned, even if the community decides not\n> to go down that road.\n\nMakes sense. We will work on (1.) and do some check if there are any\nblockers for a shared solution to support github and google.\n\nOn Fri, Sep 30, 2022 at 1:45 PM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Fri, Sep 30, 2022 at 7:47 AM Andrey Chudnovsky\n> <achudnovskij@gmail.com> wrote:\n> > > How should we communicate those pieces to a custom client when it's\n> > > passing a token directly? The easiest way I can see is for the custom\n> > > client to speak the OAUTHBEARER protocol directly (e.g. 
SASL plugin).\n> > > If you had to parse the libpq error message, I don't think that'd be\n> > > particularly maintainable.\n> >\n> > I agree that parsing the message is not a sustainable way.\n> > Could you provide more details on the SASL plugin approach you propose?\n> >\n> > Specifically, is this basically a set of extension hooks for the client side?\n> > With the need for the client to be compiled with the plugins based on\n> > the set of providers it needs.\n>\n> That's a good question. I can see two broad approaches, with maybe\n> some ability to combine them into a hybrid:\n>\n> 1. If there turns out to be serious interest in having libpq itself\n> handle OAuth natively (with all of the web-facing code that implies,\n> and all of the questions still left to answer), then we might be able\n> to provide a \"token hook\" in the same way that we currently provide a\n> passphrase hook for OpenSSL keys. By default, libpq would use its\n> internal machinery to take the provider details, navigate its builtin\n> flow, and return the Bearer token. If you wanted to override that\n> behavior as a client, you could replace the builtin flow with your\n> own, by registering a set of callbacks.\n>\n> 2. Alternatively, OAuth support could be provided via a mechanism\n> plugin for some third-party SASL library (GNU libgsasl, Cyrus\n> libsasl2). We could provide an OAuth plugin in contrib that handles\n> the default flow. Other providers could publish their alternative\n> plugins to completely replace the OAUTHBEARER mechanism handling.\n>\n> Approach (2) would make for some duplicated effort since every\n> provider has to write code to speak the OAUTHBEARER protocol. It might\n> simplify provider-specific distribution, since (at least for Cyrus) I\n> think you could build a single plugin that supports both the client\n> and server side. But it would be a lot easier to unknowingly (or\n> knowingly) break the spec, since you'd control both the client and\n> server sides. 
There would be less incentive to interoperate.\n>\n> Finally, we could potentially take pieces from both, by having an\n> official OAuth mechanism plugin that provides a client-side hook to\n> override the flow. I have no idea if the benefits would offset the\n> costs of a plugin-for-a-plugin style architecture. And providers would\n> still be free to ignore it and just provide a full mechanism plugin\n> anyway.\n>\n> > > Well... I don't quite understand why we'd go to the trouble of\n> > > providing a provider-agnostic communication solution only to have\n> > > everyone write their own provider-specific client support. Unless\n> > > you're saying Microsoft would provide an officially blessed plugin for\n> > > the *server* side only, and Google would provide one of their own, and\n> > > so on.\n> >\n> > Yes, via extensions. Identity providers can open source extensions to\n> > use their auth services outside of first party PaaS offerings.\n> > For 3rd party Postgres PaaS or on premise deployments.\n>\n> Sounds reasonable.\n>\n> > > The server side authorization is the only place where I think it makes\n> > > sense to specialize by default. libpq should remain agnostic, with the\n> > > understanding that we'll need to make hard decisions when a major\n> > > provider decides not to follow a spec.\n> >\n> > Completely agree with agnostic libpq. Though needs validation with\n> > several major providers to know if this is possible.\n>\n> Agreed.\n>\n> > > Specifically it delivers that message to an end user. 
If you want a\n> > > generic machine client to be able to use that, then we'll need to talk\n> > > about how.\n> >\n> > Yes, that's what needs to be decided.\n> > In both Device code and Authorization code scenarios, libpq and the\n> > client would need to exchange a couple of pieces of metadata.\n> > Plus, after success, the client should be able to access a refresh token for further use.\n> >\n> > Can we implement a generic protocol like for this between libpq and the clients?\n>\n> I think we can probably prototype a callback hook for approach (1)\n> pretty quickly. (2) is a lot more work and investigation, but it's\n> work that I'm interested in doing (when I get the time). I think there\n> are other very good reasons to consider a third-party SASL library,\n> and some good lessons to be learned, even if the community decides not\n> to go down that road.\n>\n> Thanks,\n> --Jacob\n\n\n", "msg_date": "Mon, 3 Oct 2022 11:04:27 -0700", "msg_from": "Andrey Chudnovsky <achudnovskij@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Hi,\n\n\nWe validated on  libpq handling OAuth natively with different flows\nwith different OIDC certified providers.\n\nFlows: Device Code, Client Credentials and Refresh Token.\nProviders: Microsoft, Google and Okta.\nAlso validated with OAuth provider Github.\n\nWe propose using OpenID Connect (OIDC) as the protocol, instead of\nOAuth, as it is:\n- Discovery mechanism to bridge the differences and provide metadata.\n- Stricter protocol and certification process to reliably identify\nwhich providers can be supported.\n- OIDC is designed for authentication, while the main purpose of OAUTH is to\nauthorize applications on behalf of the user.\n\nGithub is not OIDC certified, so won’t be supported with this proposal.\nHowever, it may be supported in the future through the ability for the\nextension to provide custom discovery document content.\n\nOpenID configuration has a 
well-known discovery mechanism\nfor the provider configuration URI which is\ndefined in OpenID Connect. It allows libpq to fetch\nmetadata about the provider (i.e. endpoints, supported grants, response types, etc.).\n\nIn the attached patch (based on the V2 patch in the thread; does not\ncontain Samay's changes):\n- Provider can configure issuer url and scope through the options hook.\n- Server passes on an open discovery url and scope to libpq.\n- Libpq handles OAuth flow based on the flow_type sent in the\nconnection string [1].\n- Added callbacks to notify a structure to client tools if OAuth flow\nrequires user interaction.\n- Pg backend uses hooks to validate bearer token.\n\nNote that the authorization code flow with PKCE for GUI clients is not\nimplemented yet.\n\nProposed next steps:\n- Broaden discussion to reach agreement on the approach.\n- Implement libpq changes without iddawc\n- Prototype GUI flow with pgAdmin\n\nThanks,\nMahendrakar.\n\n[1]:\nconnection string for refresh token flow:\n./psql -U <user> -d 'dbname=postgres oauth_client_id=<client_id>\noauth_flow_type=<flowtype> oauth_refresh_token=<refresh token>'\n\nOn Mon, 3 Oct 2022 at 23:34, Andrey Chudnovsky <achudnovskij@gmail.com> wrote:\n>\n> > I think we can probably prototype a callback hook for approach (1)\n> > pretty quickly. (2) is a lot more work and investigation, but it's\n> > work that I'm interested in doing (when I get the time). I think there\n> > are other very good reasons to consider a third-party SASL library,\n> > and some good lessons to be learned, even if the community decides not\n> > to go down that road.\n>\n> Makes sense. We will work on (1.) 
and do some check if there are any\n> blockers for a shared solution to support github and google.\n>\n> On Fri, Sep 30, 2022 at 1:45 PM Jacob Champion <jchampion@timescale.com> wrote:\n> >\n> > On Fri, Sep 30, 2022 at 7:47 AM Andrey Chudnovsky\n> > <achudnovskij@gmail.com> wrote:\n> > > > How should we communicate those pieces to a custom client when it's\n> > > > passing a token directly? The easiest way I can see is for the custom\n> > > > client to speak the OAUTHBEARER protocol directly (e.g. SASL plugin).\n> > > > If you had to parse the libpq error message, I don't think that'd be\n> > > > particularly maintainable.\n> > >\n> > > I agree that parsing the message is not a sustainable way.\n> > > Could you provide more details on the SASL plugin approach you propose?\n> > >\n> > > Specifically, is this basically a set of extension hooks for the client side?\n> > > With the need for the client to be compiled with the plugins based on\n> > > the set of providers it needs.\n> >\n> > That's a good question. I can see two broad approaches, with maybe\n> > some ability to combine them into a hybrid:\n> >\n> > 1. If there turns out to be serious interest in having libpq itself\n> > handle OAuth natively (with all of the web-facing code that implies,\n> > and all of the questions still left to answer), then we might be able\n> > to provide a \"token hook\" in the same way that we currently provide a\n> > passphrase hook for OpenSSL keys. By default, libpq would use its\n> > internal machinery to take the provider details, navigate its builtin\n> > flow, and return the Bearer token. If you wanted to override that\n> > behavior as a client, you could replace the builtin flow with your\n> > own, by registering a set of callbacks.\n> >\n> > 2. Alternatively, OAuth support could be provided via a mechanism\n> > plugin for some third-party SASL library (GNU libgsasl, Cyrus\n> > libsasl2). We could provide an OAuth plugin in contrib that handles\n> > the default flow. 
Other providers could publish their alternative\n> > plugins to completely replace the OAUTHBEARER mechanism handling.\n> >\n> > Approach (2) would make for some duplicated effort since every\n> > provider has to write code to speak the OAUTHBEARER protocol. It might\n> > simplify provider-specific distribution, since (at least for Cyrus) I\n> > think you could build a single plugin that supports both the client\n> > and server side. But it would be a lot easier to unknowingly (or\n> > knowingly) break the spec, since you'd control both the client and\n> > server sides. There would be less incentive to interoperate.\n> >\n> > Finally, we could potentially take pieces from both, by having an\n> > official OAuth mechanism plugin that provides a client-side hook to\n> > override the flow. I have no idea if the benefits would offset the\n> > costs of a plugin-for-a-plugin style architecture. And providers would\n> > still be free to ignore it and just provide a full mechanism plugin\n> > anyway.\n> >\n> > > > Well... I don't quite understand why we'd go to the trouble of\n> > > > providing a provider-agnostic communication solution only to have\n> > > > everyone write their own provider-specific client support. Unless\n> > > > you're saying Microsoft would provide an officially blessed plugin for\n> > > > the *server* side only, and Google would provide one of their own, and\n> > > > so on.\n> > >\n> > > Yes, via extensions. Identity providers can open source extensions to\n> > > use their auth services outside of first party PaaS offerings.\n> > > For 3rd party Postgres PaaS or on premise deployments.\n> >\n> > Sounds reasonable.\n> >\n> > > > The server side authorization is the only place where I think it makes\n> > > > sense to specialize by default. libpq should remain agnostic, with the\n> > > > understanding that we'll need to make hard decisions when a major\n> > > > provider decides not to follow a spec.\n> > >\n> > > Completely agree with agnostic libpq. 
Though needs validation with\n> > > several major providers to know if this is possible.\n> >\n> > Agreed.\n> >\n> > > > Specifically it delivers that message to an end user. If you want a\n> > > > generic machine client to be able to use that, then we'll need to talk\n> > > > about how.\n> > >\n> > > Yes, that's what needs to be decided.\n> > > In both Device code and Authorization code scenarios, libpq and the\n> > > client would need to exchange a couple of pieces of metadata.\n> > > Plus, after success, the client should be able to access a refresh token for further use.\n> > >\n> > > Can we implement a generic protocol like for this between libpq and the clients?\n> >\n> > I think we can probably prototype a callback hook for approach (1)\n> > pretty quickly. (2) is a lot more work and investigation, but it's\n> > work that I'm interested in doing (when I get the time). I think there\n> > are other very good reasons to consider a third-party SASL library,\n> > and some good lessons to be learned, even if the community decides not\n> > to go down that road.\n> >\n> > Thanks,\n> > --Jacob", "msg_date": "Wed, 23 Nov 2022 15:28:32 +0530", "msg_from": "mahendrakar s <mahendrakarforpg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On 11/23/22 01:58, mahendrakar s wrote:\n> We validated on  libpq handling OAuth natively with different flows\n> with different OIDC certified providers.\n> \n> Flows: Device Code, Client Credentials and Refresh Token.\n> Providers: Microsoft, Google and Okta.\n\nGreat, thank you!\n\n> Also validated with OAuth provider Github.\n\n(How did you get discovery working? 
I tried this and had to give up\neventually.)\n\n> We propose using OpenID Connect (OIDC) as the protocol, instead of\n> OAuth, as it is:\n> - Discovery mechanism to bridge the differences and provide metadata.\n> - Stricter protocol and certification process to reliably identify\n> which providers can be supported.\n> - OIDC is designed for authentication, while the main purpose of OAUTH is to\n> authorize applications on behalf of the user.\n\nHow does this differ from the previous proposal? The OAUTHBEARER SASL\nmechanism already relies on OIDC for discovery. (I think that decision\nis confusing from an architectural and naming standpoint, but I don't\nthink they really had an alternative...)\n\n> Github is not OIDC certified, so won’t be supported with this proposal.\n> However, it may be supported in the future through the ability for the\n> extension to provide custom discovery document content.\n\nRight.\n\n> OpenID configuration has a well-known discovery mechanism\n> for the provider configuration URI which is\n> defined in OpenID Connect. It allows libpq to fetch\n> metadata about provider (i.e endpoints, supported grants, response types, etc).\n\nSure, but this is already how the original PoC works. The test suite\nimplements an OIDC provider, for instance. 
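To make the discovery mechanism concrete, here is a hedged sketch (illustrative issuer URL and document contents, not code from either patch) of deriving the well-known URL and pulling out the endpoints a flow implementation needs:

```python
# Sketch only -- not code from the patch. The OIDC discovery step both
# proposals rely on: derive the provider's well-known configuration URL
# and pick out the endpoints a client-side flow would need. The issuer
# and document below are illustrative; a real client fetches the
# document over HTTPS instead of using a canned string.
import json

def discovery_url(issuer):
    # OIDC Discovery: provider metadata lives under a fixed well-known path.
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

# Trimmed-down discovery document, as a provider would serve it.
sample_document = json.dumps({
    "issuer": "https://issuer.example.com",
    "token_endpoint": "https://issuer.example.com/oauth2/token",
    "device_authorization_endpoint":
        "https://issuer.example.com/oauth2/devicecode",
    "grant_types_supported": [
        "authorization_code", "refresh_token",
        "urn:ietf:params:oauth:grant-type:device_code",
    ],
})

def parse_endpoints(doc):
    # Extract the endpoints needed for token and device-code requests.
    metadata = json.loads(doc)
    return {
        "token": metadata["token_endpoint"],
        "device": metadata.get("device_authorization_endpoint"),
    }
```

A real implementation would fetch that document over HTTPS and consult `grant_types_supported` before choosing a flow.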
Is there something different\nto this that I'm missing?\n\n> In the attached patch (based on V2 patch in the thread and does not\n> contain Samay's changes):\n> - Provider can configure issuer url and scope through the options hook.)\n> - Server passes on an open discovery url and scope to libpq.\n> - Libpq handles OAuth flow based on the flow_type sent in the\n> connection string [1].\n> - Added callbacks to notify a structure to client tools if OAuth flow\n> requires user interaction.\n> - Pg backend uses hooks to validate bearer token.\n\nThank you for the sample!\n\n> Note that authentication code flow with PKCE for GUI clients is not\n> implemented yet.\n> \n> Proposed next steps:\n> - Broaden discussion to reach agreement on the approach.\n\nHigh-level thoughts on this particular patch (I assume you're not\nlooking for low-level implementation comments yet):\n\n0) The original hook proposal upthread, I thought, was about allowing\nlibpq's flow implementation to be switched out by the application. I\ndon't see that approach taken here. It's fine if that turned out to be a\nbad idea, of course, but this patch doesn't seem to match what we were\ntalking about.\n\n1) I'm really concerned about the sudden explosion of flows. We went\nfrom one flow (Device Authorization) to six. It's going to be hard\nenough to validate that *one* flow is useful and can be securely\ndeployed by end users; I don't think we're going to be able to maintain\nsix, especially in combination with my statement that iddawc is not an\nappropriate dependency for us.\n\nI'd much rather give applications the ability to use their own OAuth\ncode, and then maintain within libpq only the flows that are broadly\nuseful. This ties back to (0) above.\n\n2) Breaking the refresh token into its own pseudoflow is, I think,\npassing the buck onto the user for something that's incredibly security\nsensitive. 
The refresh token is powerful; I don't really want it to be\nprinted anywhere, let alone copy-pasted by the user. Imagine the\nphishing opportunities.\n\nIf we want to support refresh tokens, I believe we should be developing\na plan to cache and secure them within the client. They should be used\nas an accelerator for other flows, not as their own flow.\n\n3) I don't like the departure from the OAUTHBEARER mechanism that's\npresented here. For one, since I can't see a sample plugin that makes\nuse of the \"flow type\" magic numbers that have been added, I don't\nreally understand why the extension to the mechanism is necessary.\n\nFor two, if we think OAUTHBEARER is insufficient, the people who wrote\nit would probably like to hear about it. Claiming support for a spec,\nand then implementing an extension without review from the people who\nwrote the spec, is not something I'm personally interested in doing.\n\n4) The test suite is still broken, so it's difficult to see these things\nin practice for review purposes.\n\n> - Implement libpq changes without iddawc\n\nThis in particular will be much easier with a functioning test suite,\nand with a smaller number of flows.\n\n> - Prototype GUI flow with pgAdmin\n\nCool!\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Wed, 23 Nov 2022 12:05:37 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "> How does this differ from the previous proposal? The OAUTHBEARER SASL\n> mechanism already relies on OIDC for discovery. (I think that decision\n> is confusing from an architectural and naming standpoint, but I don't\n> think they really had an alternative...)\nMostly terminology questions here. 
OAUTHBEARER SASL appears to be the\nspec about using OAUTH2 tokens for authentication.\nWhile any OAUTH2 can generally work, we propose to specifically\nhighlight that only OIDC providers can be supported, as we need the\ndiscovery document.\nAnd we won't be able to support Github under that requirement.\nSince the original patch used that too - no change on that, just\nconfirmation that we need OIDC compliance.\n\n> 0) The original hook proposal upthread, I thought, was about allowing\n> libpq's flow implementation to be switched out by the application. I\n> don't see that approach taken here. It's fine if that turned out to be a\n> bad idea, of course, but this patch doesn't seem to match what we were\n> talking about.\nWe still plan to allow the client to pass the token, which is a\ngeneric way to implement its own OAUTH flows.\n\n> 1) I'm really concerned about the sudden explosion of flows. We went\n> from one flow (Device Authorization) to six. It's going to be hard\n> enough to validate that *one* flow is useful and can be securely\n> deployed by end users; I don't think we're going to be able to maintain\n> six, especially in combination with my statement that iddawc is not an\n> appropriate dependency for us.\n\n> I'd much rather give applications the ability to use their own OAuth\n> code, and then maintain within libpq only the flows that are broadly\n> useful. 
This ties back to (0) above.
We consider the following set of flows to be the minimum required:
- Client Credentials - for service-to-service scenarios.
- Authorization Code with PKCE - for rich clients, including pgAdmin.
- Device Code - for psql (and possibly other non-GUI clients).
- Refresh Token (separate discussion below).
This is pretty much the list described at
https://oauth.net/2/grant-types/ and in the OAUTH2 specs.
Client Credentials is very simple, as is the Refresh Token grant.
If you prefer to pick just one of the richer flows, Authorization Code
is probably much more widely used for GUI scenarios.
It's easier to implement too, as the interaction goes through a
series of callbacks; no polling is required.

> 2) Breaking the refresh token into its own pseudoflow is, I think,
> passing the buck onto the user for something that's incredibly security
> sensitive. The refresh token is powerful; I don't really want it to be
> printed anywhere, let alone copy-pasted by the user. Imagine the
> phishing opportunities.

> If we want to support refresh tokens, I believe we should be developing
> a plan to cache and secure them within the client. They should be used
> as an accelerator for other flows, not as their own flow.
It's considered a separate "grant_type" in the specs / APIs:
https://openid.net/specs/openid-connect-core-1_0.html#RefreshTokens

For the clients, it would mean storing the token and using it to authenticate.
On the question of sensitivity: secure credential stores are
different for each platform, and there are many cloud offerings for this.
pgAdmin, for example, has its own way to secure credentials to avoid
asking users for passwords every time the app is opened.
I believe we should delegate refresh token management to the clients.

> 3) I don't like the departure from the OAUTHBEARER mechanism that's
> presented here. 
For one, since I can't see a sample plugin that makes\n> use of the \"flow type\" magic numbers that have been added, I don't\n> really understand why the extension to the mechanism is necessary.\nI don't think it's much of a departure, but rather a separation of\nresponsibilities between libpq and upstream clients.\nAs libpq can be used in different apps, the client would need\ndifferent types of flows/grants.\nI.e. those need to be provided to libpq at connection initialization\nor some other point.\nWe will change to \"grant_type\" though and use string to be closer to the spec.\nWhat do you think is the best way for the client to signal which OAUTH\nflow should be used?\n\nOn Wed, Nov 23, 2022 at 12:05 PM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On 11/23/22 01:58, mahendrakar s wrote:\n> > We validated on  libpq handling OAuth natively with different flows\n> > with different OIDC certified providers.\n> >\n> > Flows: Device Code, Client Credentials and Refresh Token.\n> > Providers: Microsoft, Google and Okta.\n>\n> Great, thank you!\n>\n> > Also validated with OAuth provider Github.\n>\n> (How did you get discovery working? I tried this and had to give up\n> eventually.)\n>\n> > We propose using OpenID Connect (OIDC) as the protocol, instead of\n> > OAuth, as it is:\n> > - Discovery mechanism to bridge the differences and provide metadata.\n> > - Stricter protocol and certification process to reliably identify\n> > which providers can be supported.\n> > - OIDC is designed for authentication, while the main purpose of OAUTH is to\n> > authorize applications on behalf of the user.\n>\n> How does this differ from the previous proposal? The OAUTHBEARER SASL\n> mechanism already relies on OIDC for discovery. 
(I think that decision\n> is confusing from an architectural and naming standpoint, but I don't\n> think they really had an alternative...)\n>\n> > Github is not OIDC certified, so won’t be supported with this proposal.\n> > However, it may be supported in the future through the ability for the\n> > extension to provide custom discovery document content.\n>\n> Right.\n>\n> > OpenID configuration has a well-known discovery mechanism\n> > for the provider configuration URI which is\n> > defined in OpenID Connect. It allows libpq to fetch\n> > metadata about provider (i.e endpoints, supported grants, response types, etc).\n>\n> Sure, but this is already how the original PoC works. The test suite\n> implements an OIDC provider, for instance. Is there something different\n> to this that I'm missing?\n>\n> > In the attached patch (based on V2 patch in the thread and does not\n> > contain Samay's changes):\n> > - Provider can configure issuer url and scope through the options hook.)\n> > - Server passes on an open discovery url and scope to libpq.\n> > - Libpq handles OAuth flow based on the flow_type sent in the\n> > connection string [1].\n> > - Added callbacks to notify a structure to client tools if OAuth flow\n> > requires user interaction.\n> > - Pg backend uses hooks to validate bearer token.\n>\n> Thank you for the sample!\n>\n> > Note that authentication code flow with PKCE for GUI clients is not\n> > implemented yet.\n> >\n> > Proposed next steps:\n> > - Broaden discussion to reach agreement on the approach.\n>\n> High-level thoughts on this particular patch (I assume you're not\n> looking for low-level implementation comments yet):\n>\n> 0) The original hook proposal upthread, I thought, was about allowing\n> libpq's flow implementation to be switched out by the application. I\n> don't see that approach taken here. 
It's fine if that turned out to be a\n> bad idea, of course, but this patch doesn't seem to match what we were\n> talking about.\n>\n> 1) I'm really concerned about the sudden explosion of flows. We went\n> from one flow (Device Authorization) to six. It's going to be hard\n> enough to validate that *one* flow is useful and can be securely\n> deployed by end users; I don't think we're going to be able to maintain\n> six, especially in combination with my statement that iddawc is not an\n> appropriate dependency for us.\n>\n> I'd much rather give applications the ability to use their own OAuth\n> code, and then maintain within libpq only the flows that are broadly\n> useful. This ties back to (0) above.\n>\n> 2) Breaking the refresh token into its own pseudoflow is, I think,\n> passing the buck onto the user for something that's incredibly security\n> sensitive. The refresh token is powerful; I don't really want it to be\n> printed anywhere, let alone copy-pasted by the user. Imagine the\n> phishing opportunities.\n>\n> If we want to support refresh tokens, I believe we should be developing\n> a plan to cache and secure them within the client. They should be used\n> as an accelerator for other flows, not as their own flow.\n>\n> 3) I don't like the departure from the OAUTHBEARER mechanism that's\n> presented here. For one, since I can't see a sample plugin that makes\n> use of the \"flow type\" magic numbers that have been added, I don't\n> really understand why the extension to the mechanism is necessary.\n>\n> For two, if we think OAUTHBEARER is insufficient, the people who wrote\n> it would probably like to hear about it. 
Claiming support for a spec,\n> and then implementing an extension without review from the people who\n> wrote the spec, is not something I'm personally interested in doing.\n>\n> 4) The test suite is still broken, so it's difficult to see these things\n> in practice for review purposes.\n>\n> > - Implement libpq changes without iddawc\n>\n> This in particular will be much easier with a functioning test suite,\n> and with a smaller number of flows.\n>\n> > - Prototype GUI flow with pgAdmin\n>\n> Cool!\n>\n> Thanks,\n> --Jacob\n\n\n", "msg_date": "Wed, 23 Nov 2022 19:45:48 -0800", "msg_from": "Andrey Chudnovsky <achudnovskij@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Hi Jacob,\n\nI had validated Github by skipping the discovery mechanism and letting\nthe provider extension pass on the endpoints. This is just for\nvalidation purposes.\nIf it needs to be supported, then need a way to send the discovery\ndocument from extension.\n\n\nThanks,\nMahendrakar.\n\nOn Thu, 24 Nov 2022 at 09:16, Andrey Chudnovsky <achudnovskij@gmail.com> wrote:\n>\n> > How does this differ from the previous proposal? The OAUTHBEARER SASL\n> > mechanism already relies on OIDC for discovery. (I think that decision\n> > is confusing from an architectural and naming standpoint, but I don't\n> > think they really had an alternative...)\n> Mostly terminology questions here. 
OAUTHBEARER SASL appears to be the\n> spec about using OAUTH2 tokens for Authentication.\n> While any OAUTH2 can generally work, we propose to specifically\n> highlight that only OIDC providers can be supported, as we need the\n> discovery document.\n> And we won't be able to support Github under that requirement.\n> Since the original patch used that too - no change on that, just\n> confirmation that we need OIDC compliance.\n>\n> > 0) The original hook proposal upthread, I thought, was about allowing\n> > libpq's flow implementation to be switched out by the application. I\n> > don't see that approach taken here. It's fine if that turned out to be a\n> > bad idea, of course, but this patch doesn't seem to match what we were\n> > talking about.\n> We still plan to allow the client to pass the token. Which is a\n> generic way to implement its own OAUTH flows.\n>\n> > 1) I'm really concerned about the sudden explosion of flows. We went\n> > from one flow (Device Authorization) to six. It's going to be hard\n> > enough to validate that *one* flow is useful and can be securely\n> > deployed by end users; I don't think we're going to be able to maintain\n> > six, especially in combination with my statement that iddawc is not an\n> > appropriate dependency for us.\n>\n> > I'd much rather give applications the ability to use their own OAuth\n> > code, and then maintain within libpq only the flows that are broadly\n> > useful. 
This ties back to (0) above.\n> We consider the following set of flows to be minimum required:\n> - Client Credentials - For Service to Service scenarios.\n> - Authorization Code with PKCE - For rich clients,including pgAdmin.\n> - Device code - for psql (and possibly other non-GUI clients).\n> - Refresh code (separate discussion)\n> Which is pretty much the list described here:\n> https://oauth.net/2/grant-types/ and in OAUTH2 specs.\n> Client Credentials is very simple, so does Refresh Code.\n> If you prefer to pick one of the richer flows, Authorization code for\n> GUI scenarios is probably much more widely used.\n> Plus it's easier to implement too, as interaction goes through a\n> series of callbacks. No polling required.\n>\n> > 2) Breaking the refresh token into its own pseudoflow is, I think,\n> > passing the buck onto the user for something that's incredibly security\n> > sensitive. The refresh token is powerful; I don't really want it to be\n> > printed anywhere, let alone copy-pasted by the user. Imagine the\n> > phishing opportunities.\n>\n> > If we want to support refresh tokens, I believe we should be developing\n> > a plan to cache and secure them within the client. They should be used\n> > as an accelerator for other flows, not as their own flow.\n> It's considered a separate \"grant_type\" in the specs / APIs.\n> https://openid.net/specs/openid-connect-core-1_0.html#RefreshTokens\n>\n> For the clients, it would be storing the token and using it to authenticate.\n> On the question of sensitivity, secure credentials stores are\n> different for each platform, with a lot of cloud offerings for this.\n> pgAdmin, for example, has its own way to secure credentials to avoid\n> asking users for passwords every time the app is opened.\n> I believe we should delegate the refresh token management to the clients.\n>\n> >3) I don't like the departure from the OAUTHBEARER mechanism that's\n> > presented here. 
For one, since I can't see a sample plugin that makes\n> > use of the \"flow type\" magic numbers that have been added, I don't\n> > really understand why the extension to the mechanism is necessary.\n> I don't think it's much of a departure, but rather a separation of\n> responsibilities between libpq and upstream clients.\n> As libpq can be used in different apps, the client would need\n> different types of flows/grants.\n> I.e. those need to be provided to libpq at connection initialization\n> or some other point.\n> We will change to \"grant_type\" though and use string to be closer to the spec.\n> What do you think is the best way for the client to signal which OAUTH\n> flow should be used?\n>\n> On Wed, Nov 23, 2022 at 12:05 PM Jacob Champion <jchampion@timescale.com> wrote:\n> >\n> > On 11/23/22 01:58, mahendrakar s wrote:\n> > > We validated on  libpq handling OAuth natively with different flows\n> > > with different OIDC certified providers.\n> > >\n> > > Flows: Device Code, Client Credentials and Refresh Token.\n> > > Providers: Microsoft, Google and Okta.\n> >\n> > Great, thank you!\n> >\n> > > Also validated with OAuth provider Github.\n> >\n> > (How did you get discovery working? I tried this and had to give up\n> > eventually.)\n> >\n> > > We propose using OpenID Connect (OIDC) as the protocol, instead of\n> > > OAuth, as it is:\n> > > - Discovery mechanism to bridge the differences and provide metadata.\n> > > - Stricter protocol and certification process to reliably identify\n> > > which providers can be supported.\n> > > - OIDC is designed for authentication, while the main purpose of OAUTH is to\n> > > authorize applications on behalf of the user.\n> >\n> > How does this differ from the previous proposal? The OAUTHBEARER SASL\n> > mechanism already relies on OIDC for discovery. 
(I think that decision\n> > is confusing from an architectural and naming standpoint, but I don't\n> > think they really had an alternative...)\n> >\n> > > Github is not OIDC certified, so won’t be supported with this proposal.\n> > > However, it may be supported in the future through the ability for the\n> > > extension to provide custom discovery document content.\n> >\n> > Right.\n> >\n> > > OpenID configuration has a well-known discovery mechanism\n> > > for the provider configuration URI which is\n> > > defined in OpenID Connect. It allows libpq to fetch\n> > > metadata about provider (i.e endpoints, supported grants, response types, etc).\n> >\n> > Sure, but this is already how the original PoC works. The test suite\n> > implements an OIDC provider, for instance. Is there something different\n> > to this that I'm missing?\n> >\n> > > In the attached patch (based on V2 patch in the thread and does not\n> > > contain Samay's changes):\n> > > - Provider can configure issuer url and scope through the options hook.)\n> > > - Server passes on an open discovery url and scope to libpq.\n> > > - Libpq handles OAuth flow based on the flow_type sent in the\n> > > connection string [1].\n> > > - Added callbacks to notify a structure to client tools if OAuth flow\n> > > requires user interaction.\n> > > - Pg backend uses hooks to validate bearer token.\n> >\n> > Thank you for the sample!\n> >\n> > > Note that authentication code flow with PKCE for GUI clients is not\n> > > implemented yet.\n> > >\n> > > Proposed next steps:\n> > > - Broaden discussion to reach agreement on the approach.\n> >\n> > High-level thoughts on this particular patch (I assume you're not\n> > looking for low-level implementation comments yet):\n> >\n> > 0) The original hook proposal upthread, I thought, was about allowing\n> > libpq's flow implementation to be switched out by the application. I\n> > don't see that approach taken here. 
It's fine if that turned out to be a\n> > bad idea, of course, but this patch doesn't seem to match what we were\n> > talking about.\n> >\n> > 1) I'm really concerned about the sudden explosion of flows. We went\n> > from one flow (Device Authorization) to six. It's going to be hard\n> > enough to validate that *one* flow is useful and can be securely\n> > deployed by end users; I don't think we're going to be able to maintain\n> > six, especially in combination with my statement that iddawc is not an\n> > appropriate dependency for us.\n> >\n> > I'd much rather give applications the ability to use their own OAuth\n> > code, and then maintain within libpq only the flows that are broadly\n> > useful. This ties back to (0) above.\n> >\n> > 2) Breaking the refresh token into its own pseudoflow is, I think,\n> > passing the buck onto the user for something that's incredibly security\n> > sensitive. The refresh token is powerful; I don't really want it to be\n> > printed anywhere, let alone copy-pasted by the user. Imagine the\n> > phishing opportunities.\n> >\n> > If we want to support refresh tokens, I believe we should be developing\n> > a plan to cache and secure them within the client. They should be used\n> > as an accelerator for other flows, not as their own flow.\n> >\n> > 3) I don't like the departure from the OAUTHBEARER mechanism that's\n> > presented here. For one, since I can't see a sample plugin that makes\n> > use of the \"flow type\" magic numbers that have been added, I don't\n> > really understand why the extension to the mechanism is necessary.\n> >\n> > For two, if we think OAUTHBEARER is insufficient, the people who wrote\n> > it would probably like to hear about it. 
Claiming support for a spec,\n> > and then implementing an extension without review from the people who\n> > wrote the spec, is not something I'm personally interested in doing.\n> >\n> > 4) The test suite is still broken, so it's difficult to see these things\n> > in practice for review purposes.\n> >\n> > > - Implement libpq changes without iddawc\n> >\n> > This in particular will be much easier with a functioning test suite,\n> > and with a smaller number of flows.\n> >\n> > > - Prototype GUI flow with pgAdmin\n> >\n> > Cool!\n> >\n> > Thanks,\n> > --Jacob\n\n\n", "msg_date": "Thu, 24 Nov 2022 13:50:49 +0530", "msg_from": "mahendrakar s <mahendrakarforpg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On 11/23/22 19:45, Andrey Chudnovsky wrote:\n> Mostly terminology questions here. OAUTHBEARER SASL appears to be the\n> spec about using OAUTH2 tokens for Authentication.\n> While any OAUTH2 can generally work, we propose to specifically\n> highlight that only OIDC providers can be supported, as we need the\n> discovery document.\n\n*If* you're using in-band discovery, yes. But I thought your use case\nwas explicitly tailored to out-of-band token retrieval:\n\n> The client knows how to get a token for a particular principal\n> and doesn't need any additional information other than human readable\n> messages.\n\nIn that case, isn't OAuth sufficient? There's definitely a need to\ndocument the distinction, but I don't think we have to require OIDC as\nlong as the client application makes up for the missing information.\n(OAUTHBEARER makes the openid-configuration error member optional,\npresumably for this reason.)\n\n>> 0) The original hook proposal upthread, I thought, was about allowing\n>> libpq's flow implementation to be switched out by the application. I\n>> don't see that approach taken here. 
It's fine if that turned out to be a\n>> bad idea, of course, but this patch doesn't seem to match what we were\n>> talking about.\n> We still plan to allow the client to pass the token. Which is a\n> generic way to implement its own OAUTH flows.\n\nOkay. But why push down the implementation into the server?\n\nTo illustrate what I mean, here's the architecture of my proposed patchset:\n\n +-------+ +----------+\n | | -------------- Empty Token ------------> | |\n | libpq | <----- Error Result (w/ Discovery ) ---- | |\n | | | |\n | +--------+ +--------------+ | |\n | | iddawc | <--- [ Flow ] ----> | Issuer/ | | Postgres |\n | | | <-- Access Token -- | Authz Server | | |\n | +--------+ +--------------+ | +-----------+\n | | | | |\n | | -------------- Access Token -----------> | > | Validator |\n | | <---- Authorization Success/Failure ---- | < | |\n | | | +-----------+\n +-------+ +----------+\n\nIn this implementation, there's only one black box: the validator, which\nis responsible for taking an access token from an untrusted client,\nverifying that it was issued correctly for the Postgres service, and\neither 1) determining whether the bearer is authorized to access the\ndatabase, or 2) determining the authenticated ID of the bearer so that\nthe HBA can decide whether they're authorized. (Or both.)\n\nThis approach is limited by the flows that we explicitly enable within\nlibpq and its OAuth implementation library. 
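As a concrete illustration of that first round trip, here is a rough Python
sketch of the OAUTHBEARER message shapes from RFC 7628. The user, host, and
discovery URL are all invented; this is not the patch's code, just the shape
of the exchange:

```python
import json

KVSEP = "\x01"  # RFC 7628 separates key/value pairs with ^A (0x01)

def initial_client_response(authzid, host, port, token=""):
    # Initial client response: gs2-header, then kvpairs, then a final kvsep.
    # An empty Bearer token is the discovery probe from the diagram: the
    # server is expected to fail the exchange and hand back discovery info.
    gs2_header = "n,a=%s," % authzid
    kvpairs = "host=%s%sport=%d%sauth=Bearer %s%s" % (
        host, KVSEP, port, KVSEP, token, KVSEP)
    return gs2_header + KVSEP + kvpairs + KVSEP

msg = initial_client_response("user@example.com", "db.example.com", 5432)

# The server's failure response is a JSON body that may carry the OIDC
# discovery URL and the scope the resource server requires:
error_result = json.loads(
    '{"status": "invalid_token",'
    ' "scope": "postgres",'
    ' "openid-configuration":'
    ' "https://issuer.example.com/.well-known/openid-configuration"}'
)
```

The empty auth field in the probe is what triggers the server's error result;
the client then aborts that exchange and retries with a real token.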
You mentioned that you\nwanted to support other flows, including clients with out-of-band\nknowledge, and I suggested:\n\n> If you wanted to override [iddawc's]\n> behavior as a client, you could replace the builtin flow with your\n> own, by registering a set of callbacks.\n\nIn other words, the hooks would replace iddawc in the above diagram.\nIn my mind, something like this:\n\n +-------+ +----------+\n +------+ | ----------- Empty Token ------------> | Postgres |\n | | < | <---------- Error Result ------------ | |\n | Hook | | | +-----------+\n | | | | | |\n +------+ > | ------------ Access Token ----------> | > | Validator |\n | | <--- Authorization Success/Failure -- | < | |\n | libpq | | +-----------+\n +-------+ +----------+\n\nNow there's a second black box -- the client hook -- which takes an\nOAUTHBEARER error result (which may or may not have OIDC discovery\ninformation) and returns the access token. How it does this is\nunspecified -- it'll probably use some OAuth 2.0 flow, but maybe not.\nMaybe it sends the user to a web browser; maybe it uses some of the\nmagic provider-specific libraries you mentioned upthread. It might have\na refresh token cached so it doesn't have to involve the user at all.\n\nCrucially, though, the two black boxes remain independent of each other.\nThey have well-defined inputs and outputs (the client hook could be\nroughly described as \"implement get_auth_token()\"). Their correctness\ncan be independently verified against published OAuth specs and/or\nprovider documentation. And the client application still makes a single\ncall to PQconnect*().\n\nCompare this to the architecture proposed by your patch:\n\n Client App\n +----------------------+\n | +-------+ +----------+\n | | libpq | | Postgres |\n | PQconnect > | | | +-------+\n | +------+ | ------- Flow Type (!) 
-------> | > | |\n | +- < | Hook | < | <------- Error Result -------- | < | |\n | [ get +------+ | | | |\n | token ] | | | | |\n | | | | | | Hooks |\n | v | | | | |\n | PQconnect > | ----> | ------ Access Token ---------> | > | |\n | | | <--- Authz Success/Failure --- | < | |\n | +-------+ | +-------+\n +----------------------+ +----------+\n\nRather than decouple things, I think this proposal drives a spike\nthrough the client app, libpq, and the server. Please correct me if I've\nmisunderstood pieces of the patch, but the following is my view of it:\n\nWhat used to be a validator hook on the server side now actively\nparticipates in the client-side flow for some reason. (I still don't\nunderstand what the server is supposed to do with that knowledge.\nChanging your authz requirements based on the flow the client wants to\nuse seems like a good way to introduce bugs.)\n\nThe client-side hook is now coupled to the application logic: you have\nto know to expect an error from the first PQconnect*() call, then check\nwhatever magic your hook has done for you to be able to set up the\nsecond call to PQconnect*() with the correctly scoped bearer token. So\nif you want to switch between the internal libpq OAuth implementation\nand your own hook, you have to rewrite your app logic.\n\nOn top of all that, the \"flow type code\" being sent is a custom\nextension to OAUTHBEARER that appears to be incompatible with the RFC's\ndiscovery exchange (which is done by sending an empty auth token during\nthe first round trip).\n\n> We consider the following set of flows to be minimum required:\n> - Client Credentials - For Service to Service scenarios.\n\nOkay, that's simple enough that I think it could probably be maintained\ninside libpq with minimal cost. 
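For reference, the whole grant is a single POST to the issuer's token
endpoint (RFC 6749, section 4.4). A sketch of just the request body, with
invented client credentials; a real implementation would also need the HTTP
transport and JSON response handling:

```python
from urllib.parse import urlencode

def client_credentials_body(client_id, client_secret, scope):
    # Request body for the Client Credentials grant (RFC 6749, sec. 4.4),
    # POSTed as application/x-www-form-urlencoded to the token endpoint;
    # the JSON response carries access_token, token_type, and expires_in.
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    })

body = client_credentials_body("my-service", "s3cr3t", "postgres")
```

There is no user interaction anywhere in the flow, which is what makes it
fit the service-to-service case.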
At the same time, is it complicated\nenough that you need libpq to do it for you?\n\nMaybe once we get the hooks ironed out, it'll be more obvious what the\ntradeoff is...\n\n> If you prefer to pick one of the richer flows, Authorization code for\n> GUI scenarios is probably much more widely used.\n> Plus it's easier to implement too, as interaction goes through a\n> series of callbacks. No polling required.\n\nI don't think flows requiring the invocation of web browsers and custom\nURL handlers are a clear fit for libpq. For a first draft, at least, I\nthink that use case should be pushed upward into the client application\nvia a custom hook.\n\n>> If we want to support refresh tokens, I believe we should be developing\n>> a plan to cache and secure them within the client. They should be used\n>> as an accelerator for other flows, not as their own flow.\n> It's considered a separate \"grant_type\" in the specs / APIs.\n> https://openid.net/specs/openid-connect-core-1_0.html#RefreshTokens\n\nYes, but that doesn't mean we have to expose it to users via a\nconnection option. You don't get a refresh token out of the blue; you\nget it by going through some other flow, and then you use it in\npreference to going through that flow again later.\n\n> For the clients, it would be storing the token and using it to authenticate.\n> On the question of sensitivity, secure credentials stores are\n> different for each platform, with a lot of cloud offerings for this.\n> pgAdmin, for example, has its own way to secure credentials to avoid\n> asking users for passwords every time the app is opened.\n> I believe we should delegate the refresh token management to the clients.\n\nDelegating to client apps would be fine (and implicitly handled by a\ntoken hook, because the client app would receive the refresh token\ndirectly rather than going through libpq). Delegating to end users, not\nso much. 
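To sketch the "accelerator, not flow" idea in client pseudo-code: run_flow
and redeem_refresh_token below are stand-ins for whatever the hook or libpq
actually implements, and cache stands in for a platform secret store. All
names are invented for illustration:

```python
def get_access_token(cache, run_flow, redeem_refresh_token):
    # Use a cached refresh token (RFC 6749, sec. 6) to skip user
    # interaction when possible, falling back to the primary flow when
    # the refresh token is missing, expired, or revoked.
    if "refresh_token" in cache:
        try:
            access, refresh = redeem_refresh_token(cache["refresh_token"])
        except Exception:
            # Refresh token rejected: fall back to the full flow.
            access, refresh = run_flow()
    else:
        access, refresh = run_flow()
    cache["refresh_token"] = refresh  # belongs in a secure store
    return access
```

The refresh token never needs to be shown to the user; it only moves between
the flow, the cache, and the token endpoint.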
Printing a refresh token to stderr as proposed here is, I\nthink, making things unnecessarily difficult (and/or dangerous) for users.\n\n>> 3) I don't like the departure from the OAUTHBEARER mechanism that's\n>> presented here. For one, since I can't see a sample plugin that makes\n>> use of the \"flow type\" magic numbers that have been added, I don't\n>> really understand why the extension to the mechanism is necessary.\n> I don't think it's much of a departure, but rather a separation of\n> responsibilities between libpq and upstream clients.\n\nGiven the proposed architectures above, 1) I think this is further\ncoupling the components, not separating them; and 2) I can't agree that\nan incompatible discovery mechanism is \"not much of a departure\". If\nOAUTHBEARER's functionality isn't good enough for some reason, let's\ntalk about why.\n\n> As libpq can be used in different apps, the client would need\n> different types of flows/grants.\n> I.e. those need to be provided to libpq at connection initialization\n> or some other point.\n\nWhy do libpq (or the server!) need to know those things at all, if\nthey're not going to implement the flow?\n\n> We will change to \"grant_type\" though and use string to be closer to the spec.\n> What do you think is the best way for the client to signal which OAUTH\n> flow should be used?\n\nlibpq should not need to know the grant type in use if the client is\nbypassing its internal implementation entirely.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 29 Nov 2022 13:12:21 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On 11/24/22 00:20, mahendrakar s wrote:\n> I had validated Github by skipping the discovery mechanism and letting\n> the provider extension pass on the endpoints. 
This is just for\n> validation purposes.\n> If it needs to be supported, then need a way to send the discovery\n> document from extension.\n\nYeah. I had originally bounced around the idea that we could send a\ndata:// URL, but I think that opens up problems.\n\nYou're supposed to be able to link the issuer URI with the URI you got\nthe configuration from, and if they're different, you bail out. If a\nserver makes up its own OpenID configuration, we'd have to bypass that\nsafety check, and decide what the risks and mitigations are... Not sure\nit's worth it.\n\nEspecially if you could just lobby GitHub to, say, provide an OpenID\nconfig. (Maybe there's a security-related reason they don't.)\n\n--Jacob\n\n\n", "msg_date": "Tue, 29 Nov 2022 13:19:59 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Jacob,\nThanks for your feedback.\nI think we can focus on the roles and responsibilities of the components first.\nDetails of the patch can be elaborated. Like \"flow type code\" is a\nmistake on our side, and we will use the term \"grant_type\" which is\ndefined by OIDC spec. As well as details of usage of refresh_token.\n\n> Rather than decouple things, I think this proposal drives a spike\n> through the client app, libpq, and the server. Please correct me if I've\n> misunderstood pieces of the patch, but the following is my view of it:\n\n> What used to be a validator hook on the server side now actively\n> participates in the client-side flow for some reason. 
(I still don't
> understand what the server is supposed to do with that knowledge.
> Changing your authz requirements based on the flow the client wants to
> use seems like a good way to introduce bugs.)

> The client-side hook is now coupled to the application logic: you have
> to know to expect an error from the first PQconnect*() call, then check
> whatever magic your hook has done for you to be able to set up the
> second call to PQconnect*() with the correctly scoped bearer token. So
> if you want to switch between the internal libpq OAuth implementation
> and your own hook, you have to rewrite your app logic.

Basically, yes. We propose increasing the server-side hook's responsibility:
from just validating the token, to also returning the provider root URL
and required audience, and possibly providing more metadata in the
future.
This is, in our opinion, aligned with the SASL protocol, where the server
side is responsible for telling the client the auth requirements based on
the requested role in the startup packet.

Our understanding is that in the original patch that information came
purely from the HBA; we propose that the extension be able to control that
metadata as well.
We see the extension as being owned by the identity provider, whereas the
HBA is owned by the server administrator or cloud provider.

This change of roles is based on a vision of 4 independent actor
types in the ecosystem:
1. Identity Providers (Okta, Google, Microsoft, other OIDC providers).
 - Publish open source extensions for PostgreSQL.
 - Don't have to own the server deployments, and must ensure their
extensions can work in any environment. This is where we think the
additional hook responsibility helps.
2. Server Owners / PAAS providers (on-premise admins, cloud providers,
multi-cloud PAAS providers).
 - Install extensions and configure the HBA to allow clients to
authenticate with the identity providers of their choice.
3. 
Client Application Developers (data viz tools, integration tools,
pgAdmin, monitoring tools, etc.)
 - Independent from specific identity providers or server providers:
they write the same code for all identity providers.
 - Rely on application deployment owners to configure which OIDC
provider to use across client and server setups.
4. Application Deployment Owners (end customers setting up applications)
 - The only actor actually aware of which identity provider to use.
Configures the stack based on the identity and PostgreSQL deployments
they have.

The critical piece of the vision is (3.) above: applications are
agnostic of the identity providers. Those applications rely on
properly configured servers and rich driver logic (libpq,
com.postgresql, npgsql) to pop up auth
windows or do service-to-service authentication with any provider. In
our view that would significantly democratize the deployment of OAUTH
authentication in the community.

In order to allow this separation, we propose:
1. The HBA + extension is the single source of truth for the provider root
URL and required audience for each role. If some backfill for missing OIDC
discovery is needed, the provider-specific extension would provide it.
2. The client application knows which grant_type to use in which scenario,
but can be coded without knowledge of a specific provider, so it can't
provide discovery details.
3. The driver (libpq, others) coordinates the authentication flow based
on the client grant_type and identity provider metadata, to allow client
applications to use any flow with any provider in a unified way.

Yes, this would require a somewhat more complicated flow between
components than in your original patch. And yes, more complexity comes
with more opportunity for bugs.
However, I see the PG server and libpq as the places which can have more
complexity. 
For the purpose of making work for the community\nparticipants easier and simplify adoption.\n\nDoes this make sense to you?\n\n\nOn Tue, Nov 29, 2022 at 1:20 PM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On 11/24/22 00:20, mahendrakar s wrote:\n> > I had validated Github by skipping the discovery mechanism and letting\n> > the provider extension pass on the endpoints. This is just for\n> > validation purposes.\n> > If it needs to be supported, then need a way to send the discovery\n> > document from extension.\n>\n> Yeah. I had originally bounced around the idea that we could send a\n> data:// URL, but I think that opens up problems.\n>\n> You're supposed to be able to link the issuer URI with the URI you got\n> the configuration from, and if they're different, you bail out. If a\n> server makes up its own OpenID configuration, we'd have to bypass that\n> safety check, and decide what the risks and mitigations are... Not sure\n> it's worth it.\n>\n> Especially if you could just lobby GitHub to, say, provide an OpenID\n> config. (Maybe there's a security-related reason they don't.)\n>\n> --Jacob\n\n\n", "msg_date": "Mon, 5 Dec 2022 16:15:06 -0800", "msg_from": "Andrey Chudnovsky <achudnovskij@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Mon, Dec 5, 2022 at 4:15 PM Andrey Chudnovsky <achudnovskij@gmail.com> wrote:\n> I think we can focus on the roles and responsibilities of the components first.\n> Details of the patch can be elaborated. Like \"flow type code\" is a\n> mistake on our side, and we will use the term \"grant_type\" which is\n> defined by OIDC spec. As well as details of usage of refresh_token.\n\n(For the record, whether we call it \"flow type\" or \"grant type\"\ndoesn't address my concern.)\n\n> Basically Yes. 
We propose an increase of the server side hook responsibility.\n> From just validating the token, to also return the provider root URL\n> and required audience. And possibly provide more metadata in the\n> future.\n\nI think it's okay to have the extension and HBA collaborate to provide\ndiscovery information. Your proposal goes further than that, though,\nand makes the server aware of the chosen client flow. That appears to\nbe an architectural violation: why does an OAuth resource server need\nto know the client flow at all?\n\n> Which is in our opinion aligned with SASL protocol, where the server\n> side is responsible for telling the client auth requirements based on\n> the requested role in the startup packet.\n\nYou've proposed an alternative SASL mechanism. There's nothing wrong\nwith that, per se, but I think it should be clear why we've chosen\nsomething nonstandard.\n\n> Our understanding is that in the original patch that information came\n> purely from hba, and we propose extension being able to control that\n> metadata.\n> As we see extension as being owned by the identity provider, compared\n> to HBA which is owned by the server administrator or cloud provider.\n\nThat seems reasonable, considering how tightly coupled the Issuer and\nthe token validation process are.\n\n> 2. Server Owners / PAAS providers (On premise admins, Cloud providers,\n> multi-cloud PAAS providers).\n> - Install extensions and configure HBA to allow clients to\n> authenticate with the identity providers of their choice.\n\n(For a future conversation: they need to set up authorization, too,\nwith custom scopes or some other magic. It's not enough to check who\nthe token belongs to; even if Postgres is just using the verified\nemail from OpenID as an authenticator, you have to also know that the\nuser authorized the token -- and therefore the client -- to access\nPostgres on their behalf.)\n\n> 3. 
Client Application Developers (Data Wis, integration tools,\n> PgAdmin, monitoring tools, e.t.c.)\n> - Independent from specific Identity providers or server providers.\n> Write one code for all identity providers.\n\nIdeally, yes, but that only works if all identity providers implement\nthe same flows in compatible ways. We're already seeing instances\nwhere that's not the case and we'll necessarily have to deal with that\nup front.\n\n> - Rely on application deployment owners to configure which OIDC\n> provider to use across client and server setups.\n> 4. Application Deployment Owners (End customers setting up applications)\n> - The only actor actually aware of which identity provider to use.\n> Configures the stack based on the Identity and PostgreSQL deployments\n> they have.\n\n(I have doubts that the roles will be as decoupled in practice as you\nhave described them, but I'd rather defer that for now.)\n\n> The critical piece of the vision is (3.) above is applications\n> agnostic of the identity providers. Those applications rely on\n> properly configured servers and rich driver logic (libpq,\n> com.postgresql, npgsql) to allow their application to popup auth\n> windows or do service-to-service authentication with any provider. In\n> our view that would significantly democratize the deployment of OAUTH\n> authentication in the community.\n\nThat seems to be restating the goal of OAuth and OIDC. Can you explain\nhow the incompatible change allows you to accomplish this better than\nstandard implementations?\n\n> In order to allow this separation, we propose:\n> 1. HBA + Extension is the single source of truth of Provider root URL\n> + Required Audience for each role. If some backfill for missing OIDC\n> discovery is needed, the provider-specific extension would be\n> providing it.\n> 2. Client Application knows which grant_type to use in which scenario.\n> But can be coded without knowledge of a specific provider. So can't\n> provide discovery details.\n> 3. 
Driver (libpq, others) - coordinate the authentication flow based\n> on client grant_type and identity provider metadata to allow client\n> applications to use any flow with any provider in a unified way.\n>\n> Yes, this would require a little more complicated flow between\n> components than in your original patch.\n\nWhy? I claim that standard OAUTHBEARER can handle all of that. What\ndoes your proposed architecture (the third diagram) enable that my\nproposed hook (the second diagram) doesn't?\n\n> And yes, more complexity comes\n> with more opportunity to make bugs.\n> However, I see PG Server and Libpq as the places which can have more\n> complexity. For the purpose of making work for the community\n> participants easier and simplify adoption.\n>\n> Does this make sense to you?\n\nSome of it, but it hasn't really addressed the questions from my last mail.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Wed, 7 Dec 2022 11:06:07 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "> I think it's okay to have the extension and HBA collaborate to provide\n> discovery information. Your proposal goes further than that, though,\n> and makes the server aware of the chosen client flow. That appears to\n> be an architectural violation: why does an OAuth resource server need\n> to know the client flow at all?\n\nOk. It may have left there from intermediate iterations. We did\nconsider making extension drive the flow for specific grant_type, but\ndecided against that idea. For the same reason you point to.\nIs it correct that your main concern about use of grant_type was that\nit's propagated to the server? Then yes, we will remove sending it to\nthe server.\n\n> Ideally, yes, but that only works if all identity providers implement\n> the same flows in compatible ways. 
We're already seeing instances\n> where that's not the case and we'll necessarily have to deal with that\n> up front.\n\nYes, based on our analysis OIDC spec is detailed enough, that\nproviders implementing that one, can be supported with generic code in\nlibpq / client.\nGithub specifically won't fit there though. Microsoft Azure AD,\nGoogle, Okta (including Auth0) will.\nTheoretically discovery documents can be returned from the extension\n(server-side) which is provider specific. Though we didn't plan to\nprioritize that.\n\n> That seems to be restating the goal of OAuth and OIDC. Can you explain\n> how the incompatible change allows you to accomplish this better than\n> standard implementations?\n\nDo you refer to passing grant_type to the server? Which we will get\nrid of in the next iteration. Or other incompatible changes as well?\n\n> Why? I claim that standard OAUTHBEARER can handle all of that. What\n> does your proposed architecture (the third diagram) enable that my\n> proposed hook (the second diagram) doesn't?\n\nThe hook proposed on the 2nd diagram effectively delegates all Oauth\nflows implementations to the client.\nWe propose libpq takes care of pulling OpenId discovery and coordination.\nWhich is effectively Diagram 1 + more flows + server hook providing\nroot url/audience.\n\nCreated the diagrams with all components for 3 flows:\n1. 
Authorization code grant (Clients with Browser access):\n\n   client (application + libpq + hooks)               Postgres\n   PQconnect [auth_code]\n     libpq --- Empty Token --------------------------> Pre-Auth Hook\n     libpq <-- Error (w/ Root URL + Audience) -------- Pre-Auth Hook\n     libpq ---[GET]---------------> OIDC Discovery\n     libpq <--Provider Metadata---- OIDC Discovery\n     client hook: [get auth code] <user action>\n   PQconnect\n     iddawc --- [Auth code] ------> Issuer/Authz Server\n     iddawc <-- Access Token ------ Issuer/Authz Server\n     libpq --- Access Token -------------------------> Validator Hook\n     libpq <-- Authorization Success/Failure --------- Validator Hook\n     client hook: [store refresh_token]\n\n2. 
Device code grant\n\n   client (application + libpq + hooks)               Postgres\n   PQconnect [auth_code]\n     libpq --- Empty Token --------------------------> Pre-Auth Hook\n     libpq <-- Error (w/ Root URL + Audience) -------- Pre-Auth Hook\n     libpq ---[GET]---------------> OIDC Discovery\n     libpq <--Provider Metadata---- OIDC Discovery\n     client hook: [device code] <user action>\n     iddawc --- [Device code] ----> Issuer/Authz Server   <polling>\n     iddawc <-- Access Token ------ Issuer/Authz Server\n     libpq --- Access Token -------------------------> Validator Hook\n     libpq <-- Authorization Success/Failure --------- Validator Hook\n     client hook: [store refresh_token]\n\n3. 
Non-interactive flows (Client Secret / Refresh_Token)\n\n   client (application + libpq)                       Postgres\n   PQconnect [grant_type]\n     libpq --- Empty Token --------------------------> Pre-Auth Hook\n     libpq <-- Error (w/ Root URL + Audience) -------- Pre-Auth Hook\n     libpq ---[GET]---------------> OIDC Discovery\n     libpq <--Provider Metadata---- OIDC Discovery\n     iddawc --- [Secret] ---------> Issuer/Authz Server\n     iddawc <-- Access Token ------ Issuer/Authz Server\n     libpq --- Access Token -------------------------> Validator Hook\n     libpq <-- Authorization Success/Failure --------- Validator Hook\n\nI think what was the most confusing in our latest patch is that\nflow_type was passed to the server.\nWe are not proposing this going forward.\n\n> (For a future conversation: they need to set up authorization, too,\n> with custom scopes or some other magic. 
It's not enough to check who\n> the token belongs to; even if Postgres is just using the verified\n> email from OpenID as an authenticator, you have to also know that the\n> user authorized the token -- and therefore the client -- to access\n> Postgres on their behalf.)\n\nMy understanding is that metadata in the tokens is provider specific,\nso server side hook would be the right place to handle that.\nPlus I can envision for some providers it can make sense to make a\nremote call to pull some information.\n\nThe way we implement Azure AD auth today in PAAS PostgreSQL offering:\n- Server administrator uses special extension functions to create\nAzure AD enabled PostgreSQL roles.\n- PostgreSQL extension maps Roles to unique identity Ids (UID) in the Directory.\n- Connection flow: If the token is valid and Role => UID mapping\nmatches, we authenticate as the Role.\n- Then its native PostgreSQL role based access control takes care of privileges.\n\nThis is the same for both User- and System-to-system authorization.\nThough I assume different providers may treat user- and system-\nidentities differently. So their extension would handle that.\n\nThanks!\nAndrey.\n\nOn Wed, Dec 7, 2022 at 11:06 AM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Mon, Dec 5, 2022 at 4:15 PM Andrey Chudnovsky <achudnovskij@gmail.com> wrote:\n> > I think we can focus on the roles and responsibilities of the components first.\n> > Details of the patch can be elaborated. Like \"flow type code\" is a\n> > mistake on our side, and we will use the term \"grant_type\" which is\n> > defined by OIDC spec. As well as details of usage of refresh_token.\n>\n> (For the record, whether we call it \"flow type\" or \"grant type\"\n> doesn't address my concern.)\n>\n> > Basically Yes. We propose an increase of the server side hook responsibility.\n> > From just validating the token, to also return the provider root URL\n> > and required audience. 
And possibly provide more metadata in the\n> > future.\n>\n> I think it's okay to have the extension and HBA collaborate to provide\n> discovery information. Your proposal goes further than that, though,\n> and makes the server aware of the chosen client flow. That appears to\n> be an architectural violation: why does an OAuth resource server need\n> to know the client flow at all?\n>\n> > Which is in our opinion aligned with SASL protocol, where the server\n> > side is responsible for telling the client auth requirements based on\n> > the requested role in the startup packet.\n>\n> You've proposed an alternative SASL mechanism. There's nothing wrong\n> with that, per se, but I think it should be clear why we've chosen\n> something nonstandard.\n>\n> > Our understanding is that in the original patch that information came\n> > purely from hba, and we propose extension being able to control that\n> > metadata.\n> > As we see extension as being owned by the identity provider, compared\n> > to HBA which is owned by the server administrator or cloud provider.\n>\n> That seems reasonable, considering how tightly coupled the Issuer and\n> the token validation process are.\n>\n> > 2. Server Owners / PAAS providers (On premise admins, Cloud providers,\n> > multi-cloud PAAS providers).\n> > - Install extensions and configure HBA to allow clients to\n> > authenticate with the identity providers of their choice.\n>\n> (For a future conversation: they need to set up authorization, too,\n> with custom scopes or some other magic. It's not enough to check who\n> the token belongs to; even if Postgres is just using the verified\n> email from OpenID as an authenticator, you have to also know that the\n> user authorized the token -- and therefore the client -- to access\n> Postgres on their behalf.)\n>\n> > 3. 
Client Application Developers (Data Wis, integration tools,\n> > PgAdmin, monitoring tools, e.t.c.)\n> > - Independent from specific Identity providers or server providers.\n> > Write one code for all identity providers.\n>\n> Ideally, yes, but that only works if all identity providers implement\n> the same flows in compatible ways. We're already seeing instances\n> where that's not the case and we'll necessarily have to deal with that\n> up front.\n>\n> > - Rely on application deployment owners to configure which OIDC\n> > provider to use across client and server setups.\n> > 4. Application Deployment Owners (End customers setting up applications)\n> > - The only actor actually aware of which identity provider to use.\n> > Configures the stack based on the Identity and PostgreSQL deployments\n> > they have.\n>\n> (I have doubts that the roles will be as decoupled in practice as you\n> have described them, but I'd rather defer that for now.)\n>\n> > The critical piece of the vision is (3.) above is applications\n> > agnostic of the identity providers. Those applications rely on\n> > properly configured servers and rich driver logic (libpq,\n> > com.postgresql, npgsql) to allow their application to popup auth\n> > windows or do service-to-service authentication with any provider. In\n> > our view that would significantly democratize the deployment of OAUTH\n> > authentication in the community.\n>\n> That seems to be restating the goal of OAuth and OIDC. Can you explain\n> how the incompatible change allows you to accomplish this better than\n> standard implementations?\n>\n> > In order to allow this separation, we propose:\n> > 1. HBA + Extension is the single source of truth of Provider root URL\n> > + Required Audience for each role. If some backfill for missing OIDC\n> > discovery is needed, the provider-specific extension would be\n> > providing it.\n> > 2. 
Client Application knows which grant_type to use in which scenario.\n> > But can be coded without knowledge of a specific provider. So can't\n> > provide discovery details.\n> > 3. Driver (libpq, others) - coordinate the authentication flow based\n> > on client grant_type and identity provider metadata to allow client\n> > applications to use any flow with any provider in a unified way.\n> >\n> > Yes, this would require a little more complicated flow between\n> > components than in your original patch.\n>\n> Why? I claim that standard OAUTHBEARER can handle all of that. What\n> does your proposed architecture (the third diagram) enable that my\n> proposed hook (the second diagram) doesn't?\n>\n> > And yes, more complexity comes\n> > with more opportunity to make bugs.\n> > However, I see PG Server and Libpq as the places which can have more\n> > complexity. For the purpose of making work for the community\n> > participants easier and simplify adoption.\n> >\n> > Does this make sense to you?\n>\n> Some of it, but it hasn't really addressed the questions from my last mail.\n>\n> Thanks,\n> --Jacob\n\n\n", "msg_date": "Wed, 7 Dec 2022 15:22:51 -0800", "msg_from": "Andrey Chudnovsky <achudnovskij@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "That being said, the Diagram 2 would look like this with our proposal:\n\n   client (application + libpq + hooks)               Postgres\n   PQconnect\n     libpq --- Empty Token --------------------------> Pre-Auth Hook\n     libpq <-- Error (w/ Root URL + Audience) -------- Pre-Auth Hook\n     client hook: [get token]\n   PQconnect\n     libpq --- Access Token -------------------------> Validator Hook\n     libpq <-- Authorization Success/Failure --------- Validator Hook\n\nWith the application taking care of all Token acquisition logic. While\nthe server-side hook is participating in the pre-authentication reply.\n\nThat is definitely a required scenario for the long term and the\neasiest to implement in the client core.\nAnd if we can do at least that flow in PG16 it will be a strong\nfoundation to provide more support for specific grants in libpq going\nforward.\n\nDoes the diagram above look good to you? We can then start cleaning up\nthe patch to get that in first.\n\nThanks!\nAndrey.\n\n\nOn Wed, Dec 7, 2022 at 3:22 PM Andrey Chudnovsky <achudnovskij@gmail.com> wrote:\n>\n> > I think it's okay to have the extension and HBA collaborate to provide\n> > discovery information. Your proposal goes further than that, though,\n> > and makes the server aware of the chosen client flow. That appears to\n> > be an architectural violation: why does an OAuth resource server need\n> > to know the client flow at all?\n>\n> Ok. It may have left there from intermediate iterations. We did\n> consider making extension drive the flow for specific grant_type, but\n> decided against that idea. For the same reason you point to.\n> Is it correct that your main concern about use of grant_type was that\n> it's propagated to the server? Then yes, we will remove sending it to\n> the server.\n>\n> > Ideally, yes, but that only works if all identity providers implement\n> > the same flows in compatible ways. We're already seeing instances\n> > where that's not the case and we'll necessarily have to deal with that\n> > up front.\n>\n> Yes, based on our analysis OIDC spec is detailed enough, that\n> providers implementing that one, can be supported with generic code in\n> libpq / client.\n> Github specifically won't fit there though. 
Microsoft Azure AD,\n> Google, Okta (including Auth0) will.\n> Theoretically discovery documents can be returned from the extension\n> (server-side) which is provider specific. Though we didn't plan to\n> prioritize that.\n>\n> > That seems to be restating the goal of OAuth and OIDC. Can you explain\n> > how the incompatible change allows you to accomplish this better than\n> > standard implementations?\n>\n> Do you refer to passing grant_type to the server? Which we will get\n> rid of in the next iteration. Or other incompatible changes as well?\n>\n> > Why? I claim that standard OAUTHBEARER can handle all of that. What\n> > does your proposed architecture (the third diagram) enable that my\n> > proposed hook (the second diagram) doesn't?\n>\n> The hook proposed on the 2nd diagram effectively delegates all Oauth\n> flows implementations to the client.\n> We propose libpq takes care of pulling OpenId discovery and coordination.\n> Which is effectively Diagram 1 + more flows + server hook providing\n> root url/audience.\n>\n> Created the diagrams with all components for 3 flows:\n> 1. 
Authorization code grant (Clients with Browser access):\n> [ASCII diagram trimmed; duplicated from the previous message]\n>\n> 2. 
Device code grant\n> [ASCII diagram trimmed; duplicated from the previous message]\n>\n> 3. 
Non-interactive flows (Client Secret / Refresh_Token)\n> [ASCII diagram trimmed; duplicated from the previous message]\n>\n> I think what was the most confusing in our latest patch is that\n> flow_type was passed to the server.\n> We are not proposing this going forward.\n>\n> > (For a future conversation: they need to set up authorization, too,\n> > with custom scopes or some other magic. 
It's not enough to check who\n> > the token belongs to; even if Postgres is just using the verified\n> > email from OpenID as an authenticator, you have to also know that the\n> > user authorized the token -- and therefore the client -- to access\n> > Postgres on their behalf.)\n>\n> My understanding is that metadata in the tokens is provider specific,\n> so server side hook would be the right place to handle that.\n> Plus I can envision for some providers it can make sense to make a\n> remote call to pull some information.\n>\n> The way we implement Azure AD auth today in PAAS PostgreSQL offering:\n> - Server administrator uses special extension functions to create\n> Azure AD enabled PostgreSQL roles.\n> - PostgreSQL extension maps Roles to unique identity Ids (UID) in the Directory.\n> - Connection flow: If the token is valid and Role => UID mapping\n> matches, we authenticate as the Role.\n> - Then its native PostgreSQL role based access control takes care of privileges.\n>\n> This is the same for both User- and System-to-system authorization.\n> Though I assume different providers may treat user- and system-\n> identities differently. So their extension would handle that.\n>\n> Thanks!\n> Andrey.\n>\n> On Wed, Dec 7, 2022 at 11:06 AM Jacob Champion <jchampion@timescale.com> wrote:\n> >\n> > On Mon, Dec 5, 2022 at 4:15 PM Andrey Chudnovsky <achudnovskij@gmail.com> wrote:\n> > > I think we can focus on the roles and responsibilities of the components first.\n> > > Details of the patch can be elaborated. Like \"flow type code\" is a\n> > > mistake on our side, and we will use the term \"grant_type\" which is\n> > > defined by OIDC spec. As well as details of usage of refresh_token.\n> >\n> > (For the record, whether we call it \"flow type\" or \"grant type\"\n> > doesn't address my concern.)\n> >\n> > > Basically Yes. 
We propose an increase of the server side hook responsibility.\n> > > From just validating the token, to also return the provider root URL\n> > > and required audience. And possibly provide more metadata in the\n> > > future.\n> >\n> > I think it's okay to have the extension and HBA collaborate to provide\n> > discovery information. Your proposal goes further than that, though,\n> > and makes the server aware of the chosen client flow. That appears to\n> > be an architectural violation: why does an OAuth resource server need\n> > to know the client flow at all?\n> >\n> > > Which is in our opinion aligned with SASL protocol, where the server\n> > > side is responsible for telling the client auth requirements based on\n> > > the requested role in the startup packet.\n> >\n> > You've proposed an alternative SASL mechanism. There's nothing wrong\n> > with that, per se, but I think it should be clear why we've chosen\n> > something nonstandard.\n> >\n> > > Our understanding is that in the original patch that information came\n> > > purely from hba, and we propose extension being able to control that\n> > > metadata.\n> > > As we see extension as being owned by the identity provider, compared\n> > > to HBA which is owned by the server administrator or cloud provider.\n> >\n> > That seems reasonable, considering how tightly coupled the Issuer and\n> > the token validation process are.\n> >\n> > > 2. Server Owners / PAAS providers (On premise admins, Cloud providers,\n> > > multi-cloud PAAS providers).\n> > > - Install extensions and configure HBA to allow clients to\n> > > authenticate with the identity providers of their choice.\n> >\n> > (For a future conversation: they need to set up authorization, too,\n> > with custom scopes or some other magic. 
It's not enough to check who\n> > the token belongs to; even if Postgres is just using the verified\n> > email from OpenID as an authenticator, you have to also know that the\n> > user authorized the token -- and therefore the client -- to access\n> > Postgres on their behalf.)\n> >\n> > > 3. Client Application Developers (Data Wis, integration tools,\n> > > PgAdmin, monitoring tools, e.t.c.)\n> > > - Independent from specific Identity providers or server providers.\n> > > Write one code for all identity providers.\n> >\n> > Ideally, yes, but that only works if all identity providers implement\n> > the same flows in compatible ways. We're already seeing instances\n> > where that's not the case and we'll necessarily have to deal with that\n> > up front.\n> >\n> > > - Rely on application deployment owners to configure which OIDC\n> > > provider to use across client and server setups.\n> > > 4. Application Deployment Owners (End customers setting up applications)\n> > > - The only actor actually aware of which identity provider to use.\n> > > Configures the stack based on the Identity and PostgreSQL deployments\n> > > they have.\n> >\n> > (I have doubts that the roles will be as decoupled in practice as you\n> > have described them, but I'd rather defer that for now.)\n> >\n> > > The critical piece of the vision is (3.) above is applications\n> > > agnostic of the identity providers. Those applications rely on\n> > > properly configured servers and rich driver logic (libpq,\n> > > com.postgresql, npgsql) to allow their application to popup auth\n> > > windows or do service-to-service authentication with any provider. In\n> > > our view that would significantly democratize the deployment of OAUTH\n> > > authentication in the community.\n> >\n> > That seems to be restating the goal of OAuth and OIDC. 
Can you explain\n> > how the incompatible change allows you to accomplish this better than\n> > standard implementations?\n> >\n> > > In order to allow this separation, we propose:\n> > > 1. HBA + Extension is the single source of truth of Provider root URL\n> > > + Required Audience for each role. If some backfill for missing OIDC\n> > > discovery is needed, the provider-specific extension would be\n> > > providing it.\n> > > 2. Client Application knows which grant_type to use in which scenario.\n> > > But can be coded without knowledge of a specific provider. So can't\n> > > provide discovery details.\n> > > 3. Driver (libpq, others) - coordinate the authentication flow based\n> > > on client grant_type and identity provider metadata to allow client\n> > > applications to use any flow with any provider in a unified way.\n> > >\n> > > Yes, this would require a little more complicated flow between\n> > > components than in your original patch.\n> >\n> > Why? I claim that standard OAUTHBEARER can handle all of that. What\n> > does your proposed architecture (the third diagram) enable that my\n> > proposed hook (the second diagram) doesn't?\n> >\n> > > And yes, more complexity comes\n> > > with more opportunity to make bugs.\n> > > However, I see PG Server and Libpq as the places which can have more\n> > > complexity. 
For the purpose of making work for the community\n> > > participants easier and simplify adoption.\n> > >\n> > > Does this make sense to you?\n> >\n> > Some of it, but it hasn't really addressed the questions from my last mail.\n> >\n> > Thanks,\n> > --Jacob\n\n\n", "msg_date": "Wed, 7 Dec 2022 20:25:09 -0800", "msg_from": "Andrey Chudnovsky <achudnovskij@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Wed, Dec 7, 2022 at 3:22 PM Andrey Chudnovsky\n<achudnovskij@gmail.com> wrote:\n> \n>> I think it's okay to have the extension and HBA collaborate to\n>> provide discovery information. Your proposal goes further than\n>> that, though, and makes the server aware of the chosen client flow.\n>> That appears to be an architectural violation: why does an OAuth\n>> resource server need to know the client flow at all?\n> \n> Ok. It may have left there from intermediate iterations. We did \n> consider making extension drive the flow for specific grant_type,\n> but decided against that idea. For the same reason you point to. Is\n> it correct that your main concern about use of grant_type was that \n> it's propagated to the server? Then yes, we will remove sending it\n> to the server.\n\nOkay. Yes, that was my primary concern.\n\n>> Ideally, yes, but that only works if all identity providers\n>> implement the same flows in compatible ways. We're already seeing\n>> instances where that's not the case and we'll necessarily have to\n>> deal with that up front.\n> \n> Yes, based on our analysis OIDC spec is detailed enough, that \n> providers implementing that one, can be supported with generic code\n> in libpq / client. Github specifically won't fit there though.\n> Microsoft Azure AD, Google, Okta (including Auth0) will. \n> Theoretically discovery documents can be returned from the extension \n> (server-side) which is provider specific. 
Though we didn't plan to \n> prioritize that.\n\nAs another example, Google's device authorization grant is incompatible\nwith the spec (which they co-authored). I want to say I had problems\nwith Azure AD not following that spec either, but I don't remember\nexactly what they were. I wouldn't be surprised to find more tiny\ndepartures once we get deeper into implementation.\n\n>> That seems to be restating the goal of OAuth and OIDC. Can you\n>> explain how the incompatible change allows you to accomplish this\n>> better than standard implementations?\n> \n> Do you refer to passing grant_type to the server? Which we will get \n> rid of in the next iteration. Or other incompatible changes as well?\n\nJust the grant type, yeah.\n\n>> Why? I claim that standard OAUTHBEARER can handle all of that.\n>> What does your proposed architecture (the third diagram) enable\n>> that my proposed hook (the second diagram) doesn't?\n> \n> The hook proposed on the 2nd diagram effectively delegates all Oauth \n> flows implementations to the client. We propose libpq takes care of\n> pulling OpenId discovery and coordination. Which is effectively\n> Diagram 1 + more flows + server hook providing root url/audience.\n> \n> Created the diagrams with all components for 3 flows: [snip]\n\n(I'll skip ahead to your later mail on this.)\n\n>> (For a future conversation: they need to set up authorization,\n>> too, with custom scopes or some other magic. It's not enough to\n>> check who the token belongs to; even if Postgres is just using the\n>> verified email from OpenID as an authenticator, you have to also\n>> know that the user authorized the token -- and therefore the client\n>> -- to access Postgres on their behalf.)\n> \n> My understanding is that metadata in the tokens is provider\n> specific, so server side hook would be the right place to handle\n> that. 
Plus I can envision for some providers it can make sense to\n> make a remote call to pull some information.\n\nThe server hook is the right place to check the scopes, yes, but I think\nthe DBA should be able to specify what those scopes are to begin with.\nThe provider of the extension shouldn't be expected by the architecture\nto hardcode those decisions, even if Azure AD chooses to short-circuit\nthat choice and provide magic instead.\n\nOn 12/7/22 20:25, Andrey Chudnovsky wrote:\n> That being said, the Diagram 2 would look like this with our proposal:\n> [snip]\n> \n> With the application taking care of all Token acquisition logic. While\n> the server-side hook is participating in the pre-authentication reply.\n> \n> That is definitely a required scenario for the long term and the\n> easiest to implement in the client core.> And if we can do at least that flow in PG16 it will be a strong\n> foundation to provide more support for specific grants in libpq going\n> forward.\n\nAgreed.\n> Does the diagram above look good to you? We can then start cleaning up\n> the patch to get that in first.\n\nI maintain that the hook doesn't need to hand back artifacts to the\nclient for a second PQconnect call. It can just use those artifacts to\nobtain the access token and hand that right back to libpq. (I think any\nrequirement that clients be rewritten to call PQconnect twice will\nprobably be a sticking point for adoption of an OAuth patch.)\n\nThat said, now that your proposal is also compatible with OAUTHBEARER, I\ncan pony up some code to hopefully prove my point. 
(I don't know if I'll\nbe able to do that by the holidays though.)\n\nThanks!\n--Jacob\n\n\n", "msg_date": "Thu, 8 Dec 2022 16:41:21 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "> The server hook is the right place to check the scopes, yes, but I think\n> the DBA should be able to specify what those scopes are to begin with.\n> The provider of the extension shouldn't be expected by the architecture\n> to hardcode those decisions, even if Azure AD chooses to short-circuit\n> that choice and provide magic instead.\n\nHardcode is definitely not expected, but customization for identity\nprovider specific, I think, should be allowed.\nI can provide a couple of advanced use cases which happen in the cloud\ndeployments world, and require per-role management:\n- Multi-tenant deployments, when root provider URL would be different\nfor different roles, based on which tenant they come from.\n- Federation to multiple providers. 
Solutions like Amazon Cognito\nwhich offer a layer of abstraction with several providers\ntransparently supported.\n\nIf your concern is extension not honoring the DBA configured values:\nWould a server-side logic to prefer HBA value over extension-provided\nresolve this concern?\nWe are definitely biased towards the cloud deployment scenarios, where\ndirect access to .hba files is usually not offered at all.\nLet's find the middle ground here.\n\nA separate reason for creating this pre-authentication hook is further\nextensibility to support more metadata.\nSpecifically when we add support for OAUTH flows to libpq, server-side\nextensions can help bridge the gap between the identity provider\nimplementation and OAUTH/OIDC specs.\nFor example, that could allow the Github extension to provide an OIDC\ndiscovery document.\n\nI definitely see identity providers as institutional actors here which\ncan be given some power through the extension hooks to customize the\nbehavior within the framework.\n\n> I maintain that the hook doesn't need to hand back artifacts to the\n> client for a second PQconnect call. It can just use those artifacts to\n> obtain the access token and hand that right back to libpq. (I think any\n> requirement that clients be rewritten to call PQconnect twice will\n> probably be a sticking point for adoption of an OAuth patch.)\n\nObtaining a token is an asynchronous process with a human in the loop.\nNot sure if expecting a hook function to return a token synchronously\nis the best option here.\nCan that be an optional return value of the hook in cases when a token\ncan be obtained synchronously?\n\nOn Thu, Dec 8, 2022 at 4:41 PM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Wed, Dec 7, 2022 at 3:22 PM Andrey Chudnovsky\n> <achudnovskij@gmail.com> wrote:\n> >\n> >> I think it's okay to have the extension and HBA collaborate to\n> >> provide discovery information. 
Your proposal goes further than\n> >> that, though, and makes the server aware of the chosen client flow.\n> >> That appears to be an architectural violation: why does an OAuth\n> >> resource server need to know the client flow at all?\n> >\n> > Ok. It may have left there from intermediate iterations. We did\n> > consider making extension drive the flow for specific grant_type,\n> > but decided against that idea. For the same reason you point to. Is\n> > it correct that your main concern about use of grant_type was that\n> > it's propagated to the server? Then yes, we will remove sending it\n> > to the server.\n>\n> Okay. Yes, that was my primary concern.\n>\n> >> Ideally, yes, but that only works if all identity providers\n> >> implement the same flows in compatible ways. We're already seeing\n> >> instances where that's not the case and we'll necessarily have to\n> >> deal with that up front.\n> >\n> > Yes, based on our analysis OIDC spec is detailed enough, that\n> > providers implementing that one, can be supported with generic code\n> > in libpq / client. Github specifically won't fit there though.\n> > Microsoft Azure AD, Google, Okta (including Auth0) will.\n> > Theoretically discovery documents can be returned from the extension\n> > (server-side) which is provider specific. Though we didn't plan to\n> > prioritize that.\n>\n> As another example, Google's device authorization grant is incompatible\n> with the spec (which they co-authored). I want to say I had problems\n> with Azure AD not following that spec either, but I don't remember\n> exactly what they were. I wouldn't be surprised to find more tiny\n> departures once we get deeper into implementation.\n>\n> >> That seems to be restating the goal of OAuth and OIDC. Can you\n> >> explain how the incompatible change allows you to accomplish this\n> >> better than standard implementations?\n> >\n> > Do you refer to passing grant_type to the server? Which we will get\n> > rid of in the next iteration. 
Or other incompatible changes as well?\n>\n> Just the grant type, yeah.\n>\n> >> Why? I claim that standard OAUTHBEARER can handle all of that.\n> >> What does your proposed architecture (the third diagram) enable\n> >> that my proposed hook (the second diagram) doesn't?\n> >\n> > The hook proposed on the 2nd diagram effectively delegates all Oauth\n> > flows implementations to the client. We propose libpq takes care of\n> > pulling OpenId discovery and coordination. Which is effectively\n> > Diagram 1 + more flows + server hook providing root url/audience.\n> >\n> > Created the diagrams with all components for 3 flows: [snip]\n>\n> (I'll skip ahead to your later mail on this.)\n>\n> >> (For a future conversation: they need to set up authorization,\n> >> too, with custom scopes or some other magic. It's not enough to\n> >> check who the token belongs to; even if Postgres is just using the\n> >> verified email from OpenID as an authenticator, you have to also\n> >> know that the user authorized the token -- and therefore the client\n> >> -- to access Postgres on their behalf.)\n> >\n> > My understanding is that metadata in the tokens is provider\n> > specific, so server side hook would be the right place to handle\n> > that. Plus I can envision for some providers it can make sense to\n> > make a remote call to pull some information.\n>\n> The server hook is the right place to check the scopes, yes, but I think\n> the DBA should be able to specify what those scopes are to begin with.\n> The provider of the extension shouldn't be expected by the architecture\n> to hardcode those decisions, even if Azure AD chooses to short-circuit\n> that choice and provide magic instead.\n>\n> On 12/7/22 20:25, Andrey Chudnovsky wrote:\n> > That being said, the Diagram 2 would look like this with our proposal:\n> > [snip]\n> >\n> > With the application taking care of all Token acquisition logic. 
While\n> > the server-side hook is participating in the pre-authentication reply.\n> >\n> > That is definitely a required scenario for the long term and the\n> > easiest to implement in the client core.> And if we can do at least that flow in PG16 it will be a strong\n> > foundation to provide more support for specific grants in libpq going\n> > forward.\n>\n> Agreed.\n> > Does the diagram above look good to you? We can then start cleaning up\n> > the patch to get that in first.\n>\n> I maintain that the hook doesn't need to hand back artifacts to the\n> client for a second PQconnect call. It can just use those artifacts to\n> obtain the access token and hand that right back to libpq. (I think any\n> requirement that clients be rewritten to call PQconnect twice will\n> probably be a sticking point for adoption of an OAuth patch.)\n>\n> That said, now that your proposal is also compatible with OAUTHBEARER, I\n> can pony up some code to hopefully prove my point. (I don't know if I'll\n> be able to do that by the holidays though.)\n>\n> Thanks!\n> --Jacob\n\n\n", "msg_date": "Mon, 12 Dec 2022 21:06:20 -0800", "msg_from": "Andrey Chudnovsky <achudnovskij@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Mon, Dec 12, 2022 at 9:06 PM Andrey Chudnovsky\n<achudnovskij@gmail.com> wrote:\n> If your concern is extension not honoring the DBA configured values:\n> Would a server-side logic to prefer HBA value over extension-provided\n> resolve this concern?\n\nYeah. It also seals the role of the extension here as \"optional\".\n\n> We are definitely biased towards the cloud deployment scenarios, where\n> direct access to .hba files is usually not offered at all.\n> Let's find the middle ground here.\n\nSure. I don't want to make this difficult in cloud scenarios --\nobviously I'd like for Timescale Cloud to be able to make use of this\ntoo. 
But if we make this easy for a lone DBA (who doesn't have any\ninstitutional power with the providers) to use correctly and securely,\nthen it should follow that the providers who _do_ have power and\nresources will have an easy time of it as well. The reverse isn't\nnecessarily true. So I'm definitely planning to focus on the DBA case\nfirst.\n\n> A separate reason for creating this pre-authentication hook is further\n> extensibility to support more metadata.\n> Specifically when we add support for OAUTH flows to libpq, server-side\n> extensions can help bridge the gap between the identity provider\n> implementation and OAUTH/OIDC specs.\n> For example, that could allow the Github extension to provide an OIDC\n> discovery document.\n>\n> I definitely see identity providers as institutional actors here which\n> can be given some power through the extension hooks to customize the\n> behavior within the framework.\n\nWe'll probably have to make some compromises in this area, but I think\nthey should be carefully considered exceptions and not a core feature\nof the mechanism. The gaps you point out are just fragmentation, and\nadding custom extensions to deal with it leads to further\nfragmentation instead of providing pressure on providers to just\nimplement the specs. Worst case, we open up new exciting security\nflaws, and then no one can analyze them independently because no one\nother than the provider knows how the two sides work together anymore.\n\nDon't get me wrong; it would be naive to proceed as if the OAUTHBEARER\nspec were perfect, because it's clearly not. 
But if we need to make\nextensions to it, we can participate in IETF discussions and make our\ncase publicly for review, rather than enshrining MS/GitHub/Google/etc.\nversions of the RFC and enabling that proliferation as a Postgres core\nfeature.\n\n> Obtaining a token is an asynchronous process with a human in the loop.\n> Not sure if expecting a hook function to return a token synchronously\n> is the best option here.\n> Can that be an optional return value of the hook in cases when a token\n> can be obtained synchronously?\n\nI don't think the hook is generally going to be able to return a token\nsynchronously, and I expect the final design to be async-first. As far\nas I know, this will need to be solved for the builtin flows as well\n(you don't want a synchronous HTTP call to block your PQconnectPoll\narchitecture), so the hook should be able to make use of whatever\nsolution we land on for that.\n\nThis is hand-wavy, and I don't expect it to be easy to solve. I just\ndon't think we have to solve it twice.\n\nHave a good end to the year!\n--Jacob\n\n\n", "msg_date": "Fri, 16 Dec 2022 15:18:38 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Hi All,\n\nChanges added to Jacob's patch(v2) as per the discussion in the thread.\n\nThe changes allow the customer to send the OAUTH BEARER token through psql\nconnection string.\n\nExample:\npsql -U user@example.com -d 'dbname=postgres oauth_bearer_token=abc'\n\nTo configure OAUTH, the pg_hba.conf line look like:\nlocal all all oauth\n provider=oauth_provider issuer=\"https://example.com\" scope=\"openid email\"\n\nWe also added hook to libpq to pass on the metadata about the issuer.\n\nThanks,\nMahendrakar.\n\n\nOn Sat, 17 Dec 2022 at 04:48, Jacob Champion <jchampion@timescale.com>\nwrote:\n>\n> On Mon, Dec 12, 2022 at 9:06 PM Andrey Chudnovsky\n> <achudnovskij@gmail.com> wrote:\n> > If your concern 
is extension not honoring the DBA configured values:\n> > Would a server-side logic to prefer HBA value over extension-provided\n> > resolve this concern?\n>\n> Yeah. It also seals the role of the extension here as \"optional\".\n>\n> > We are definitely biased towards the cloud deployment scenarios, where\n> > direct access to .hba files is usually not offered at all.\n> > Let's find the middle ground here.\n>\n> Sure. I don't want to make this difficult in cloud scenarios --\n> obviously I'd like for Timescale Cloud to be able to make use of this\n> too. But if we make this easy for a lone DBA (who doesn't have any\n> institutional power with the providers) to use correctly and securely,\n> then it should follow that the providers who _do_ have power and\n> resources will have an easy time of it as well. The reverse isn't\n> necessarily true. So I'm definitely planning to focus on the DBA case\n> first.\n>\n> > A separate reason for creating this pre-authentication hook is further\n> > extensibility to support more metadata.\n> > Specifically when we add support for OAUTH flows to libpq, server-side\n> > extensions can help bridge the gap between the identity provider\n> > implementation and OAUTH/OIDC specs.\n> > For example, that could allow the Github extension to provide an OIDC\n> > discovery document.\n> >\n> > I definitely see identity providers as institutional actors here which\n> > can be given some power through the extension hooks to customize the\n> > behavior within the framework.\n>\n> We'll probably have to make some compromises in this area, but I think\n> they should be carefully considered exceptions and not a core feature\n> of the mechanism. The gaps you point out are just fragmentation, and\n> adding custom extensions to deal with it leads to further\n> fragmentation instead of providing pressure on providers to just\n> implement the specs. 
Worst case, we open up new exciting security\n> flaws, and then no one can analyze them independently because no one\n> other than the provider knows how the two sides work together anymore.\n>\n> Don't get me wrong; it would be naive to proceed as if the OAUTHBEARER\n> spec were perfect, because it's clearly not. But if we need to make\n> extensions to it, we can participate in IETF discussions and make our\n> case publicly for review, rather than enshrining MS/GitHub/Google/etc.\n> versions of the RFC and enabling that proliferation as a Postgres core\n> feature.\n>\n> > Obtaining a token is an asynchronous process with a human in the loop.\n> > Not sure if expecting a hook function to return a token synchronously\n> > is the best option here.\n> > Can that be an optional return value of the hook in cases when a token\n> > can be obtained synchronously?\n>\n> I don't think the hook is generally going to be able to return a token\n> synchronously, and I expect the final design to be async-first. As far\n> as I know, this will need to be solved for the builtin flows as well\n> (you don't want a synchronous HTTP call to block your PQconnectPoll\n> architecture), so the hook should be able to make use of whatever\n> solution we land on for that.\n>\n> This is hand-wavy, and I don't expect it to be easy to solve. I just\n> don't think we have to solve it twice.\n>\n> Have a good end to the year!\n> --Jacob", "msg_date": "Fri, 13 Jan 2023 00:38:35 +0530", "msg_from": "mahendrakar s <mahendrakarforpg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "More information on the latest patch.\n\n1. 
We aligned the implementation with the barebone SASL for OAUTH\ndescribed here - https://www.rfc-editor.org/rfc/rfc7628\nThe flow can be explained in the diagram below:\n\n +----------------------+ +----------+\n | +-------+ | Postgres |\n | PQconnect ->| | | |\n | | | | +-----------+\n | | | ---------- Empty Token---------> | > | |\n | | libpq | <-- Error(Discovery + Scope ) -- | < | Pre-Auth |\n | +------+ | | | Hook |\n | +- < | Hook | | | +-----------+\n | | +------+ | | |\n | v | | | |\n | [get token]| | | |\n | | | | | |\n | + | | | +-----------+\n | PQconnect > | | --------- Access Token --------> | > | Validator |\n | | | <---------- Auth Result -------- | < | Hook |\n | | | | +-----------+\n | +-------+ | |\n +----------------------+ +----------+\n\n2. Removed Device Code implementation in libpq. Several reasons:\n - Reduce scope and focus on the protocol first.\n - Device code implementation uses iddawc dependency. Taking this\ndependency is a controversial step which requires broader discussion.\n - Device code implementation without iddawc would significantly\nincrease the scope of the patch, as libpq needs to poll the token\nendpoint, setup different API calls, e.t.c.\n - That flow should canonically only be used for clients which can't\ninvoke browsers. If it is the only flow to be implemented, it can be\nused in the context when it's not expected by the OAUTH protocol.\n\n3. Temporarily removed test suite. We are actively working on aligning\nthe tests with the latest changes. 
Will add a patch with tests soon.\n\nWe will change the \"V3\" prefix to make it the next after the previous\niterations.\n\nThanks!\nAndrey.\n\nOn Thu, Jan 12, 2023 at 11:08 AM mahendrakar s\n<mahendrakarforpg@gmail.com> wrote:\n>\n> Hi All,\n>\n> Changes added to Jacob's patch(v2) as per the discussion in the thread.\n>\n> The changes allow the customer to send the OAUTH BEARER token through psql connection string.\n>\n> Example:\n> psql -U user@example.com -d 'dbname=postgres oauth_bearer_token=abc'\n>\n> To configure OAUTH, the pg_hba.conf line look like:\n> local all all oauth provider=oauth_provider issuer=\"https://example.com\" scope=\"openid email\"\n>\n> We also added hook to libpq to pass on the metadata about the issuer.\n>\n> Thanks,\n> Mahendrakar.\n>\n>\n> On Sat, 17 Dec 2022 at 04:48, Jacob Champion <jchampion@timescale.com> wrote:\n> >\n> > On Mon, Dec 12, 2022 at 9:06 PM Andrey Chudnovsky\n> > <achudnovskij@gmail.com> wrote:\n> > > If your concern is extension not honoring the DBA configured values:\n> > > Would a server-side logic to prefer HBA value over extension-provided\n> > > resolve this concern?\n> >\n> > Yeah. It also seals the role of the extension here as \"optional\".\n> >\n> > > We are definitely biased towards the cloud deployment scenarios, where\n> > > direct access to .hba files is usually not offered at all.\n> > > Let's find the middle ground here.\n> >\n> > Sure. I don't want to make this difficult in cloud scenarios --\n> > obviously I'd like for Timescale Cloud to be able to make use of this\n> > too. But if we make this easy for a lone DBA (who doesn't have any\n> > institutional power with the providers) to use correctly and securely,\n> > then it should follow that the providers who _do_ have power and\n> > resources will have an easy time of it as well. The reverse isn't\n> > necessarily true. 
So I'm definitely planning to focus on the DBA case\n> > first.\n> >\n> > > A separate reason for creating this pre-authentication hook is further\n> > > extensibility to support more metadata.\n> > > Specifically when we add support for OAUTH flows to libpq, server-side\n> > > extensions can help bridge the gap between the identity provider\n> > > implementation and OAUTH/OIDC specs.\n> > > For example, that could allow the Github extension to provide an OIDC\n> > > discovery document.\n> > >\n> > > I definitely see identity providers as institutional actors here which\n> > > can be given some power through the extension hooks to customize the\n> > > behavior within the framework.\n> >\n> > We'll probably have to make some compromises in this area, but I think\n> > they should be carefully considered exceptions and not a core feature\n> > of the mechanism. The gaps you point out are just fragmentation, and\n> > adding custom extensions to deal with it leads to further\n> > fragmentation instead of providing pressure on providers to just\n> > implement the specs. Worst case, we open up new exciting security\n> > flaws, and then no one can analyze them independently because no one\n> > other than the provider knows how the two sides work together anymore.\n> >\n> > Don't get me wrong; it would be naive to proceed as if the OAUTHBEARER\n> > spec were perfect, because it's clearly not. 
But if we need to make\n> > extensions to it, we can participate in IETF discussions and make our\n> > case publicly for review, rather than enshrining MS/GitHub/Google/etc.\n> > versions of the RFC and enabling that proliferation as a Postgres core\n> > feature.\n> >\n> > > Obtaining a token is an asynchronous process with a human in the loop.\n> > > Not sure if expecting a hook function to return a token synchronously\n> > > is the best option here.\n> > > Can that be an optional return value of the hook in cases when a token\n> > > can be obtained synchronously?\n> >\n> > I don't think the hook is generally going to be able to return a token\n> > synchronously, and I expect the final design to be async-first. As far\n> > as I know, this will need to be solved for the builtin flows as well\n> > (you don't want a synchronous HTTP call to block your PQconnectPoll\n> > architecture), so the hook should be able to make use of whatever\n> > solution we land on for that.\n> >\n> > This is hand-wavy, and I don't expect it to be easy to solve. I just\n> > don't think we have to solve it twice.\n> >\n> > Have a good end to the year!\n> > --Jacob\n\n\n", "msg_date": "Sun, 15 Jan 2023 12:03:32 -0800", "msg_from": "Andrey Chudnovsky <achudnovskij@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Sun, Jan 15, 2023 at 12:03 PM Andrey Chudnovsky\n<achudnovskij@gmail.com> wrote:\n> 2. Removed Device Code implementation in libpq. Several reasons:\n> - Reduce scope and focus on the protocol first.\n> - Device code implementation uses iddawc dependency. Taking this\n> dependency is a controversial step which requires broader discussion.\n> - Device code implementation without iddaws would significantly\n> increase the scope of the patch, as libpq needs to poll the token\n> endpoint, setup different API calls, e.t.c.\n> - That flow should canonically only be used for clients which can't\n> invoke browsers. 
If it is the only flow to be implemented, it can be\n> used in the context when it's not expected by the OAUTH protocol.\n\nI'm not understanding the concern in the final point -- providers\ngenerally require you to opt into device authorization, at least as far\nas I can tell. So if you decide that it's not appropriate for your use\ncase... don't enable it. (And I haven't seen any claims that opting into\ndevice authorization weakens the other flows in any way. So if we're\ngoing to implement a flow in libpq, I still think device authorization\nis the best choice, since it works on headless machines as well as those\nwith browsers.)\n\nAll of this points at a bigger question to the community: if we choose\nnot to provide a flow implementation in libpq, is adding OAUTHBEARER\nworth the additional maintenance cost?\n\nMy personal vote would be \"no\". I think the hook-only approach proposed\nhere would ensure that only larger providers would implement it in\npractice, and in that case I'd rather spend cycles on generic SASL.\n\n> 3. Temporarily removed test suite. We are actively working on aligning\n> the tests with the latest changes. Will add a patch with tests soon.\n\nOkay. Case in point, the following change to the patch appears to be\ninvalid JSON:\n\n> + appendStringInfo(&buf,\n> + \"{ \"\n> + \"\\\"status\\\": \\\"invalid_token\\\", \"\n> + \"\\\"openid-configuration\\\": \\\"%s\\\",\"\n> + \"\\\"scope\\\": \\\"%s\\\" \",\n> + \"\\\"issuer\\\": \\\"%s\\\" \",\n> + \"}\",\n\nAdditionally, the \"issuer\" field added here is not part of the RFC. 
I've\nwritten my thoughts about unofficial extensions upthread but haven't\nreceived a response, so I'm going to start being more strident: Please,\nfor the sake of reviewers, call out changes you've made to the spec, and\nwhy they're justified.\n\nThe patches seem to be out of order now (and the documentation in the\ncommit messages has been removed).\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 17 Jan 2023 14:43:59 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "> All of this points at a bigger question to the community: if we choose\n> not to provide a flow implementation in libpq, is adding OAUTHBEARER\n> worth the additional maintenance cost?\n\n> My personal vote would be \"no\". I think the hook-only approach proposed\n> here would ensure that only larger providers would implement it in\n> practice\n\nFlow implementations in libpq are definitely a long term plan, and I\nagree that it would democratise the adoption.\nIn the previous posts in this conversation I outlined the ones I think\nwe should support.\n\nHowever, I don't see why it's strictly necessary to couple those.\nAs long as the SASL exchange for OAUTHBEARER mechanism is supported by\nthe protocol, the Client side can evolve at its own pace.\n\nAt the same time, the current implementation allows clients to start\nbuilding provider-agnostic OAUTH support. By using iddawc or OAUTH\nclient implementations in the respective platforms.\nSo I wouldn't refer to \"larger providers\", but rather \"more motivated\nclients\" here. Which definitely overlaps, but keeps the system open.\n\n> I'm not understanding the concern in the final point -- providers\n> generally require you to opt into device authorization, at least as far\n> as I can tell. So if you decide that it's not appropriate for your use\n> case... don't enable it. 
(And I haven't seen any claims that opting into\n> device authorization weakens the other flows in any way. So if we're\n> going to implement a flow in libpq, I still think device authorization\n> is the best choice, since it works on headless machines as well as those\n> with browsers.)\nI agree with the statement that Device code is the best first choice\nif we absolutely have to pick one.\nThough I don't think we have to.\n\nWhile device flow can be used for all kinds of user-facing\napplications, it's specifically designed for input-constrained\nscenarios. As clearly stated in the Abstract here -\nhttps://www.rfc-editor.org/rfc/rfc8628\nThe authorization code with PKCE flow is recommended by the RFCs and\nmajor providers for cases when it's feasible.\nThe long term goal is to provide both, though I don't see why the\nbackbone protocol implementation first wouldn't add value.\n\nAnother point is that user authentication is one side of the whole\nstory and the other critical one is system-to-system authentication,\nwhere we have Client Credentials and Certificates.\nWith the latter it is much harder to get generically implemented, as\nprovider-specific tokens need to be signed.\n\nAdding the other reasoning, I think libpq support for specific flows\ncan come in further iterations, after the protocol support.\n\n> in that case I'd rather spend cycles on generic SASL.\nI see 2 approaches to generic SASL:\n(a). Generic SASL is a framework used in the protocol, with the\nmechanisms implemented on top and exposed to the DBAs as auth types to\nconfigure in hba.\nThis is the direction we're going here, which is well aligned with the\nexisting hba-based auth configuration.\n(b). Generic SASL exposed to developers on the server- and client-\nside to extend on. 

It seems to be a much longer shot.\nThe specific points of large ambiguity are the libpq distribution model\n(which you pointed to) and the potential pluggability of insecure\nmechanisms.\n\nI do see (a) as a sweet spot with a lot of value for various\nparticipants with much less ambiguity.\n\n> Additionally, the \"issuer\" field added here is not part of the RFC. I've\n> written my thoughts about unofficial extensions upthread but haven't\n> received a response, so I'm going to start being more strident: Please,\n> for the sake of reviewers, call out changes you've made to the spec, and\n> why they're justified.\nThanks for your feedback on this. We had this discussion as well, and\nadded that as a convenience for the client to identify the provider.\nI don't see a reason why an issuer would be absolutely necessary, so\nwe take your point that sticking to RFCs is the safer choice.\n\n> The patches seem to be out of order now (and the documentation in the\n> commit messages has been removed).\nFeedback taken. Work in progress.\n\nOn Tue, Jan 17, 2023 at 2:44 PM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Sun, Jan 15, 2023 at 12:03 PM Andrey Chudnovsky\n> <achudnovskij@gmail.com> wrote:\n> > 2. Removed Device Code implementation in libpq. Several reasons:\n> > - Reduce scope and focus on the protocol first.\n> > - Device code implementation uses iddawc dependency. Taking this\n> > dependency is a controversial step which requires broader discussion.\n> > - Device code implementation without iddaws would significantly\n> > increase the scope of the patch, as libpq needs to poll the token\n> > endpoint, setup different API calls, e.t.c.\n> > - That flow should canonically only be used for clients which can't\n> > invoke browsers. 

If it is the only flow to be implemented, it can be\n> > used in the context when it's not expected by the OAUTH protocol.\n>\n> I'm not understanding the concern in the final point -- providers\n> generally require you to opt into device authorization, at least as far\n> as I can tell. So if you decide that it's not appropriate for your use\n> case... don't enable it. (And I haven't seen any claims that opting into\n> device authorization weakens the other flows in any way. So if we're\n> going to implement a flow in libpq, I still think device authorization\n> is the best choice, since it works on headless machines as well as those\n> with browsers.)\n>\n> All of this points at a bigger question to the community: if we choose\n> not to provide a flow implementation in libpq, is adding OAUTHBEARER\n> worth the additional maintenance cost?\n>\n> My personal vote would be \"no\". I think the hook-only approach proposed\n> here would ensure that only larger providers would implement it in\n> practice, and in that case I'd rather spend cycles on generic SASL.\n>\n> > 3. Temporarily removed test suite. We are actively working on aligning\n> > the tests with the latest changes. Will add a patch with tests soon.\n>\n> Okay. Case in point, the following change to the patch appears to be\n> invalid JSON:\n>\n> > + appendStringInfo(&buf,\n> > + \"{ \"\n> > + \"\\\"status\\\": \\\"invalid_token\\\", \"\n> > + \"\\\"openid-configuration\\\": \\\"%s\\\",\"\n> > + \"\\\"scope\\\": \\\"%s\\\" \",\n> > + \"\\\"issuer\\\": \\\"%s\\\" \",\n> > + \"}\",\n>\n> Additionally, the \"issuer\" field added here is not part of the RFC. 
I've\n> written my thoughts about unofficial extensions upthread but haven't\n> received a response, so I'm going to start being more strident: Please,\n> for the sake of reviewers, call out changes you've made to the spec, and\n> why they're justified.\n>\n> The patches seem to be out of order now (and the documentation in the\n> commit messages has been removed).\n>\n> Thanks,\n> --Jacob\n\n\n", "msg_date": "Tue, 17 Jan 2023 17:53:56 -0800", "msg_from": "Andrey Chudnovsky <achudnovskij@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Hi All,\n\nThe \"issuer\" field has been removed to align with the RFC\nimplementation - https://www.rfc-editor.org/rfc/rfc7628.\nThis patch \"v6\" is a single patch to support the OAUTH BEARER token\nthrough psql connection string.\nBelow flow is supported. Added the documentation in the commit messages.\n\n +----------------------+ +----------+\n | +-------+ | Postgres |\n | PQconnect ->| | | |\n | | | | +-----------+\n | | | ---------- Empty Token---------> | > | |\n | | libpq | <-- Error(Discovery + Scope ) -- | < | Pre-Auth |\n | +------+ | | | Hook |\n | +- < | Hook | | | +-----------+\n | | +------+ | | |\n | v | | | |\n | [get token]| | | |\n | | | | | |\n | + | | | +-----------+\n | PQconnect > | | --------- Access Token --------> | > | Validator |\n | | | <---------- Auth Result -------- | < | Hook |\n | | | | +-----------+\n | +-------+ | |\n +----------------------+ +----------+\n\nPlease note that we are working on modifying/adding new tests (from\nJacob's Patch) with the latest changes. 
Will add a patch with tests\nsoon.\n\nThanks,\nMahendrakar.\n\nOn Wed, 18 Jan 2023 at 07:24, Andrey Chudnovsky <achudnovskij@gmail.com> wrote:\n>\n> > All of this points at a bigger question to the community: if we choose\n> > not to provide a flow implementation in libpq, is adding OAUTHBEARER\n> > worth the additional maintenance cost?\n>\n> > My personal vote would be \"no\". I think the hook-only approach proposed\n> > here would ensure that only larger providers would implement it in\n> > practice\n>\n> Flow implementations in libpq are definitely a long term plan, and I\n> agree that it would democratise the adoption.\n> In the previous posts in this conversation I outlined the ones I think\n> we should support.\n>\n> However, I don't see why it's strictly necessary to couple those.\n> As long as the SASL exchange for OAUTHBEARER mechanism is supported by\n> the protocol, the Client side can evolve at its own pace.\n>\n> At the same time, the current implementation allows clients to start\n> building provider-agnostic OAUTH support. By using iddawc or OAUTH\n> client implementations in the respective platforms.\n> So I wouldn't refer to \"larger providers\", but rather \"more motivated\n> clients\" here. Which definitely overlaps, but keeps the system open.\n>\n> > I'm not understanding the concern in the final point -- providers\n> > generally require you to opt into device authorization, at least as far\n> > as I can tell. So if you decide that it's not appropriate for your use\n> > case... don't enable it. (And I haven't seen any claims that opting into\n> > device authorization weakens the other flows in any way. 
So if we're\n> > going to implement a flow in libpq, I still think device authorization\n> > is the best choice, since it works on headless machines as well as those\n> > with browsers.)\n> I agree with the statement that Device code is the best first choice\n> if we absolutely have to pick one.\n> Though I don't think we have to.\n>\n> While device flow can be used for all kinds of user-facing\n> applications, it's specifically designed for input-constrained\n> scenarios. As clearly stated in the Abstract here -\n> https://www.rfc-editor.org/rfc/rfc8628\n> The authorization code with pkce flow is recommended by the RFSc and\n> major providers for cases when it's feasible.\n> The long term goal is to provide both, though I don't see why the\n> backbone protocol implementation first wouldn't add value.\n>\n> Another point is user authentication is one side of the whole story\n> and the other critical one is system-to-system authentication. Where\n> we have Client Credentials and Certificates.\n> With the latter it is much harder to get generically implemented, as\n> provider-specific tokens need to be signed.\n>\n> Adding the other reasoning, I think libpq support for specific flows\n> can get in the further iterations, after the protocol support.\n>\n> > in that case I'd rather spend cycles on generic SASL.\n> I see 2 approaches to generic SASL:\n> (a). Generic SASL is a framework used in the protocol, with the\n> mechanisms implemented on top and exposed to the DBAs as auth types to\n> configure in hba.\n> This is the direction we're going here, which is well aligned with the\n> existing hba-based auth configuration.\n> (b). Generic SASL exposed to developers on the server- and client-\n> side to extend on. 
It seems to be a much longer shot.\n> The specific points of large ambiguity are libpq distribution model\n> (which you pointed to) and potential pluggability of insecure\n> mechanisms.\n>\n> I do see (a) as a sweet spot with a lot of value for various\n> participants with much less ambiguity.\n>\n> > Additionally, the \"issuer\" field added here is not part of the RFC. I've\n> > written my thoughts about unofficial extensions upthread but haven't\n> > received a response, so I'm going to start being more strident: Please,\n> > for the sake of reviewers, call out changes you've made to the spec, and\n> > why they're justified.\n> Thanks for your feedback on this. We had this discussion as well, and\n> added that as a convenience for the client to identify the provider.\n> I don't see a reason why an issuer would be absolutely necessary, so\n> we will get your point that sticking to RFCs is a safer choice.\n>\n> > The patches seem to be out of order now (and the documentation in the\n> > commit messages has been removed).\n> Feedback taken. Work in progress.\n>\n> On Tue, Jan 17, 2023 at 2:44 PM Jacob Champion <jchampion@timescale.com> wrote:\n> >\n> > On Sun, Jan 15, 2023 at 12:03 PM Andrey Chudnovsky\n> > <achudnovskij@gmail.com> wrote:\n> > > 2. Removed Device Code implementation in libpq. Several reasons:\n> > > - Reduce scope and focus on the protocol first.\n> > > - Device code implementation uses iddawc dependency. Taking this\n> > > dependency is a controversial step which requires broader discussion.\n> > > - Device code implementation without iddaws would significantly\n> > > increase the scope of the patch, as libpq needs to poll the token\n> > > endpoint, setup different API calls, e.t.c.\n> > > - That flow should canonically only be used for clients which can't\n> > > invoke browsers. 
If it is the only flow to be implemented, it can be\n> > > used in the context when it's not expected by the OAUTH protocol.\n> >\n> > I'm not understanding the concern in the final point -- providers\n> > generally require you to opt into device authorization, at least as far\n> > as I can tell. So if you decide that it's not appropriate for your use\n> > case... don't enable it. (And I haven't seen any claims that opting into\n> > device authorization weakens the other flows in any way. So if we're\n> > going to implement a flow in libpq, I still think device authorization\n> > is the best choice, since it works on headless machines as well as those\n> > with browsers.)\n> >\n> > All of this points at a bigger question to the community: if we choose\n> > not to provide a flow implementation in libpq, is adding OAUTHBEARER\n> > worth the additional maintenance cost?\n> >\n> > My personal vote would be \"no\". I think the hook-only approach proposed\n> > here would ensure that only larger providers would implement it in\n> > practice, and in that case I'd rather spend cycles on generic SASL.\n> >\n> > > 3. Temporarily removed test suite. We are actively working on aligning\n> > > the tests with the latest changes. Will add a patch with tests soon.\n> >\n> > Okay. Case in point, the following change to the patch appears to be\n> > invalid JSON:\n> >\n> > > + appendStringInfo(&buf,\n> > > + \"{ \"\n> > > + \"\\\"status\\\": \\\"invalid_token\\\", \"\n> > > + \"\\\"openid-configuration\\\": \\\"%s\\\",\"\n> > > + \"\\\"scope\\\": \\\"%s\\\" \",\n> > > + \"\\\"issuer\\\": \\\"%s\\\" \",\n> > > + \"}\",\n> >\n> > Additionally, the \"issuer\" field added here is not part of the RFC. 
I've\n> > written my thoughts about unofficial extensions upthread but haven't\n> > received a response, so I'm going to start being more strident: Please,\n> > for the sake of reviewers, call out changes you've made to the spec, and\n> > why they're justified.\n> >\n> > The patches seem to be out of order now (and the documentation in the\n> > commit messages has been removed).\n> >\n> > Thanks,\n> > --Jacob", "msg_date": "Wed, 25 Jan 2023 10:16:15 +0530", "msg_from": "mahendrakar s <mahendrakarforpg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Greetings,\n\n* mahendrakar s (mahendrakarforpg@gmail.com) wrote:\n> The \"issuer\" field has been removed to align with the RFC\n> implementation - https://www.rfc-editor.org/rfc/rfc7628.\n> This patch \"v6\" is a single patch to support the OAUTH BEARER token\n> through psql connection string.\n> Below flow is supported. Added the documentation in the commit messages.\n> \n> +----------------------+ +----------+\n> | +-------+ | Postgres |\n> | PQconnect ->| | | |\n> | | | | +-----------+\n> | | | ---------- Empty Token---------> | > | |\n> | | libpq | <-- Error(Discovery + Scope ) -- | < | Pre-Auth |\n> | +------+ | | | Hook |\n> | +- < | Hook | | | +-----------+\n> | | +------+ | | |\n> | v | | | |\n> | [get token]| | | |\n> | | | | | |\n> | + | | | +-----------+\n> | PQconnect > | | --------- Access Token --------> | > | Validator |\n> | | | <---------- Auth Result -------- | < | Hook |\n> | | | | +-----------+\n> | +-------+ | |\n> +----------------------+ +----------+\n> \n> Please note that we are working on modifying/adding new tests (from\n> Jacob's Patch) with the latest changes. 
Will add a patch with tests\n> soon.\n\nHaving skimmed back through this thread again, I still feel that the\ndirection that was originally being taken (actually support something in\nlibpq and the backend, be it with libiddawc or something else or even\nour own code, and not just throw hooks in various places) makes a lot\nmore sense and is a lot closer to how Kerberos and client-side certs and\neven LDAP auth work today. That also seems like a much better answer\nfor our users when it comes to new authentication methods than having\nextensions and making libpq developers have to write their own custom\ncode, not to mention that we'd still need to implement something in psql\nto provide such a hook if we are to have psql actually usefully exercise\nthis, no?\n\nIn the Kerberos test suite we have today, we actually bring up a proper\nKerberos server, set things up, and then test end-to-end installing a\nkeytab for the server, getting a TGT, getting a service ticket, testing\nauthentication and encryption, etc. 
Looking around, it seems like the\nequivalent would perhaps be to use Glewlwyd and libiddawc or libcurl and\nour own code to really be able to test this and show that it works and\nthat we're doing it correctly, and to let us know if we break something.\n\nThanks,\n\nStephen", "msg_date": "Mon, 20 Feb 2023 17:35:36 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Mon, Feb 20, 2023 at 2:35 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Having skimmed back through this thread again, I still feel that the\n> direction that was originally being taken (actually support something in\n> libpq and the backend, be it with libiddawc or something else or even\n> our own code, and not just throw hooks in various places) makes a lot\n> more sense and is a lot closer to how Kerberos and client-side certs and\n> even LDAP auth work today.\n\nCool, that helps focus the effort. Thanks!\n\n> That also seems like a much better answer\n> for our users when it comes to new authentication methods than having\n> extensions and making libpq developers have to write their own custom\n> code, not to mention that we'd still need to implement something in psql\n> to provide such a hook if we are to have psql actually usefully exercise\n> this, no?\n\nI don't mind letting clients implement their own flows... as long as\nit's optional. So even if we did use a hook in the end, I agree that\nwe've got to exercise it ourselves.\n\n> In the Kerberos test suite we have today, we actually bring up a proper\n> Kerberos server, set things up, and then test end-to-end installing a\n> keytab for the server, getting a TGT, getting a service ticket, testing\n> authentication and encryption, etc. 

Looking around, it seems like the\n> equivilant would perhaps be to use Glewlwyd and libiddawc or libcurl and\n> our own code to really be able to test this and show that it works and\n> that we're doing it correctly, and to let us know if we break something.\n\nThe original patchset includes a test server in Python -- a major\nadvantage being that you can test the client and server independently\nof each other, since the implementation is so asymmetric. Additionally\ntesting against something like Glewlwyd would be a great way to stack\ncoverage. (If we *only* test against a packaged server, though, it'll\nbe harder to test our stuff in the presence of malfunctions and other\ncorner cases.)\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 21 Feb 2023 14:24:12 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Thanks for the feedback,\n\n> Having skimmed back through this thread again, I still feel that the\n> direction that was originally being taken (actually support something in\n> libpq and the backend, be it with libiddawc or something else or even\n> our own code, and not just throw hooks in various places) makes a lot\n> more sense and is a lot closer to how Kerberos and client-side certs and\n> even LDAP auth work today. That also seems like a much better answer\n> for our users when it comes to new authentication methods than having\n> extensions and making libpq developers have to write their own custom\n> code, not to mention that we'd still need to implement something in psql\n> to provide such a hook if we are to have psql actually usefully exercise\n> this, no?\n\nlibpq implementation is the long term plan. 
However, our intention is\nto start with the protocol implementation, which we can then build on\ntop of.\n\nWhile device code is the right solution for psql, having that as the\nonly one can create an incentive to use it in cases it's not\nintended for.\nA reasonably good implementation should support all of the following:\n(1.) authorization code with PKCE (for GUI applications)\n(2.) device code (for console user logins)\n(3.) client secret\n(4.) some support for client certificate flow\n\n(1.) and (4.) require more work to get implemented, though they are\nnecessary for encouraging the most secure grant types.\nAs we didn't have those pieces, we're proposing starting with the\nprotocol, which can be used by the ecosystem to build token flow\nimplementations.\nThen add the libpq support for individual grant types.\n\nWe originally looked at starting with the bare bones protocol for PG16\nand adding libpq support in PG17.\nThat plan won't happen, though still splitting the work into separate\nstages would make more sense in my opinion.\n\nSeveral questions to follow up:\n(a.) Would you support committing the protocol first? Or do you see\nlibpq implementation for grants as the prerequisite to consider the\nauth type?\n(b.) As of today, the server side core does not validate that the\ntoken is actually a valid JWT token. 

Instead, it relies on the extensions\nto do the validation.\nDo you think the server core should do basic validation before passing\nto extensions, to prevent the auth type from being used for anything\nother than OAUTH flows?\n\nTests are the plan for the commit-ready implementation.\n\nThanks!\nAndrey.\n\nOn Tue, Feb 21, 2023 at 2:24 PM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Mon, Feb 20, 2023 at 2:35 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Having skimmed back through this thread again, I still feel that the\n> > direction that was originally being taken (actually support something in\n> > libpq and the backend, be it with libiddawc or something else or even\n> > our own code, and not just throw hooks in various places) makes a lot\n> > more sense and is a lot closer to how Kerberos and client-side certs and\n> > even LDAP auth work today.\n>\n> Cool, that helps focus the effort. Thanks!\n>\n> > That also seems like a much better answer\n> > for our users when it comes to new authentication methods than having\n> > extensions and making libpq developers have to write their own custom\n> > code, not to mention that we'd still need to implement something in psql\n> > to provide such a hook if we are to have psql actually usefully exercise\n> > this, no?\n>\n> I don't mind letting clients implement their own flows... as long as\n> it's optional. So even if we did use a hook in the end, I agree that\n> we've got to exercise it ourselves.\n>\n> > In the Kerberos test suite we have today, we actually bring up a proper\n> > Kerberos server, set things up, and then test end-to-end installing a\n> > keytab for the server, getting a TGT, getting a service ticket, testing\n> > authentication and encryption, etc. 

Looking around, it seems like the\n> > equivilant would perhaps be to use Glewlwyd and libiddawc or libcurl and\n> > our own code to really be able to test this and show that it works and\n> > that we're doing it correctly, and to let us know if we break something.\n>\n> The original patchset includes a test server in Python -- a major\n> advantage being that you can test the client and server independently\n> of each other, since the implementation is so asymmetric. Additionally\n> testing against something like Glewlwyd would be a great way to stack\n> coverage. (If we *only* test against a packaged server, though, it'll\n> be harder to test our stuff in the presence of malfunctions and other\n> corner cases.)\n>\n> Thanks,\n> --Jacob\n\n\n", "msg_date": "Tue, 21 Feb 2023 23:00:46 -0800", "msg_from": "Andrey Chudnovsky <achudnovskij@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Greetings,\n\n* Jacob Champion (jchampion@timescale.com) wrote:\n> On Mon, Feb 20, 2023 at 2:35 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Having skimmed back through this thread again, I still feel that the\n> > direction that was originally being taken (actually support something in\n> > libpq and the backend, be it with libiddawc or something else or even\n> > our own code, and not just throw hooks in various places) makes a lot\n> > more sense and is a lot closer to how Kerberos and client-side certs and\n> > even LDAP auth work today.\n> \n> Cool, that helps focus the effort. 
Thanks!\n\nGreat, glad to hear that.\n\n> > That also seems like a much better answer\n> > for our users when it comes to new authentication methods than having\n> > extensions and making libpq developers have to write their own custom\n> > code, not to mention that we'd still need to implement something in psql\n> > to provide such a hook if we are to have psql actually usefully exercise\n> > this, no?\n> \n> I don't mind letting clients implement their own flows... as long as\n> it's optional. So even if we did use a hook in the end, I agree that\n> we've got to exercise it ourselves.\n\nThis really doesn't feel like a great area to try and do hooks or\nsimilar in, not the least because that approach has been tried and tried\nagain (PAM, GSSAPI, SASL would all be examples..) and frankly none of\nthem has turned out great (which is why we can't just tell people \"well,\ninstall the pam_oauth2 and watch everything work!\") and this strikes me\nas trying to do that yet again but worse as it's not even a dedicated\nproject trying to solve the problem but more like a side project. SCRAM\nwas good, we've come a long way thanks to that, this feels like it\nshould be more in line with that rather than trying to invent yet\nanother new \"generic\" set of hooks/APIs that will just cause DBAs and\nour users headaches trying to make work.\n\n> > In the Kerberos test suite we have today, we actually bring up a proper\n> > Kerberos server, set things up, and then test end-to-end installing a\n> > keytab for the server, getting a TGT, getting a service ticket, testing\n> > authentication and encryption, etc. 
Looking around, it seems like the\n> > equivilant would perhaps be to use Glewlwyd and libiddawc or libcurl and\n> > our own code to really be able to test this and show that it works and\n> > that we're doing it correctly, and to let us know if we break something.\n> \n> The original patchset includes a test server in Python -- a major\n> advantage being that you can test the client and server independently\n> of each other, since the implementation is so asymmetric. Additionally\n> testing against something like Glewlwyd would be a great way to stack\n> coverage. (If we *only* test against a packaged server, though, it'll\n> be harder to test our stuff in the presence of malfunctions and other\n> corner cases.)\n\nOh, that's even better- I agree entirely that having test code that can\nbe instructed to return specific errors so that we can test that our\ncode responds properly is great (and is why pgbackrest has things like\na stub'd out libpq, fake s3, GCS, and Azure servers, and more) and would\ncertainly want to keep that, even if we also build out a test that uses\na real server to provide integration testing with not-our-code too.\n\nThanks!\n\nStephen", "msg_date": "Thu, 23 Feb 2023 13:47:55 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "> This really doesn't feel like a great area to try and do hooks or\n> similar in, not the least because that approach has been tried and tried\n> again (PAM, GSSAPI, SASL would all be examples..) 
and frankly none of\n> them has turned out great (which is why we can't just tell people \"well,\n> install the pam_oauth2 and watch everything work!\") and this strikes me\n> as trying to do that yet again but worse as it's not even a dedicated\n> project trying to solve the problem but more like a side project.\n\nIn this case it's not intended to be an open-ended hook, but rather an\nimplementation of a specific RFC (RFC 7628) which defines the\nclient-server communication for the authentication flow.\nThe RFC itself does leave a lot of flexibility on specific parts of\nthe implementation, which do require hooks:\n(1.) Server side hook to validate the token, which is specific to the\nOAUTH provider.\n(2.) Client side hook to request the client to obtain the token.\n\nOn (1.), we would need a hook for the OAUTH provider extension to do\nvalidation. We can, though, do a basic check that the credential is\nindeed a JWT token signed by the requested issuer.\n\nSpecifically (2.) is where we can provide a layer in libpq to simplify\nthe integration, i.e. 

implement some OAUTH flows.\nThough we would need some flexibility for the clients to bring their own token:\nfor example, there are cases where the credential to obtain the token\nis stored in a separate secure location and the token is returned from\na separate service or pushed from a more secure environment.\n\n> another new \"generic\" set of hooks/APIs that will just cause DBAs and\n> our users headaches trying to make work.\nAs I mentioned above, it's an RFC implementation, rather than our invention.\nWhen it comes to DBAs and users, builtin libpq implementations which\nallow psql and pgAdmin to seamlessly connect should suffice for those\nneeds.\nWhile extensibility would allow the ecosystem to be open for OAUTH\nproviders, SAAS developers, PAAS providers and other institutional\nplayers.\n\nThanks!\nAndrey.\n\nOn Thu, Feb 23, 2023 at 10:47 AM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Jacob Champion (jchampion@timescale.com) wrote:\n> > On Mon, Feb 20, 2023 at 2:35 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > Having skimmed back through this thread again, I still feel that the\n> > > direction that was originally being taken (actually support something in\n> > > libpq and the backend, be it with libiddawc or something else or even\n> > > our own code, and not just throw hooks in various places) makes a lot\n> > > more sense and is a lot closer to how Kerberos and client-side certs and\n> > > even LDAP auth work today.\n> >\n> > Cool, that helps focus the effort. 

Thanks!\n>\n> Great, glad to hear that.\n>\n> > > That also seems like a much better answer\n> > > for our users when it comes to new authentication methods than having\n> > > extensions and making libpq developers have to write their own custom\n> > > code, not to mention that we'd still need to implement something in psql\n> > > to provide such a hook if we are to have psql actually usefully exercise\n> > > this, no?\n> >\n> > I don't mind letting clients implement their own flows... as long as\n> > it's optional. So even if we did use a hook in the end, I agree that\n> > we've got to exercise it ourselves.\n>\n> This really doesn't feel like a great area to try and do hooks or\n> similar in, not the least because that approach has been tried and tried\n> again (PAM, GSSAPI, SASL would all be examples..) and frankly none of\n> them has turned out great (which is why we can't just tell people \"well,\n> install the pam_oauth2 and watch everything work!\") and this strikes me\n> as trying to do that yet again but worse as it's not even a dedicated\n> project trying to solve the problem but more like a side project. SCRAM\n> was good, we've come a long way thanks to that, this feels like it\n> should be more in line with that rather than trying to invent yet\n> another new \"generic\" set of hooks/APIs that will just cause DBAs and\n> our users headaches trying to make work.\n>\n> > > In the Kerberos test suite we have today, we actually bring up a proper\n> > > Kerberos server, set things up, and then test end-to-end installing a\n> > > keytab for the server, getting a TGT, getting a service ticket, testing\n> > > authentication and encryption, etc. 
Looking around, it seems like the\n> > > equivalent would perhaps be to use Glewlwyd and libiddawc or libcurl and\n> > > our own code to really be able to test this and show that it works and\n> > > that we're doing it correctly, and to let us know if we break something.\n> >\n> > The original patchset includes a test server in Python -- a major\n> > advantage being that you can test the client and server independently\n> > of each other, since the implementation is so asymmetric. Additionally\n> > testing against something like Glewlwyd would be a great way to stack\n> > coverage. (If we *only* test against a packaged server, though, it'll\n> > be harder to test our stuff in the presence of malfunctions and other\n> > corner cases.)\n>\n> Oh, that's even better- I agree entirely that having test code that can\n> be instructed to return specific errors so that we can test that our\n> code responds properly is great (and is why pgbackrest has things like\n> a stub'd out libpq, fake s3, GCS, and Azure servers, and more) and would\n> certainly want to keep that, even if we also build out a test that uses\n> a real server to provide integration testing with not-our-code too.\n>\n> Thanks!\n>\n> Stephen\n\n\n", "msg_date": "Thu, 23 Feb 2023 15:04:22 -0800", "msg_from": "Andrey Chudnovsky <achudnovskij@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Greetings,\n\n* Andrey Chudnovsky (achudnovskij@gmail.com) wrote:\n> > This really doesn't feel like a great area to try and do hooks or\n> > similar in, not the least because that approach has been tried and tried\n> > again (PAM, GSSAPI, SASL would all be examples..) 
and frankly none of\n> > them has turned out great (which is why we can't just tell people \"well,\n> > install the pam_oauth2 and watch everything work!\") and this strikes me\n> > as trying to do that yet again but worse as it's not even a dedicated\n> > project trying to solve the problem but more like a side project.\n> \n> In this case it's not intended to be an open-ended hook, but rather an\n> implementation of a specific rfc (rfc-7628) which defines a\n> client-server communication for the authentication flow.\n> The rfc itself does leave a lot of flexibility on specific parts of\n> the implementation. Which do require hooks:\n\nColor me skeptical on an RFC that requires hooks.\n\n> (1.) Server side hook to validate the token, which is specific to the\n> OAUTH provider.\n> (2.) Client side hook to request the client to obtain the token.\n\nPerhaps I'm missing it... but weren't these handled with what the\noriginal patch that Jacob had was doing?\n\n> On (1.), we would need a hook for the OAUTH provider extension to do\n> validation. We can though do some basic check that the credential is\n> indeed a JWT token signed by the requested issuer.\n> \n> Specifically (2.) is where we can provide a layer in libpq to simplify\n> the integration. i.e. implement some OAUTH flows.\n> Though we would need some flexibility for the clients to bring their own token:\n> For example there are cases where the credential to obtain the token\n> is stored in a separate secure location and the token is returned from\n> a separate service or pushed from a more secure environment.\n\nIn those cases... we could, if we wanted, simply implement the code to\nactually pull the token, no? 
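(For context: the only thing RFC 7628 actually pins down on the wire is the framing of the SASL messages themselves; how the bearer token is obtained or validated is explicitly out of scope. A minimal sketch of the client's initial response, in Python and purely illustrative, not tied to any implementation:)

```python
KVSEP = "\x01"  # key/value separator defined by RFC 7628

def oauthbearer_client_first(token, authzid="", host=None, port=None):
    # gs2 header: "n" = no channel binding, optional authorization identity
    gs2 = "n," + ("a=" + authzid if authzid else "") + ","
    kvpairs = []
    if host:
        kvpairs.append("host=" + host)
    if port:
        kvpairs.append("port=" + str(port))
    kvpairs.append("auth=Bearer " + token)  # the token itself is opaque here
    # each kvpair is terminated by kvsep; a final kvsep ends the message
    return gs2 + KVSEP + KVSEP.join(kvpairs) + KVSEP + KVSEP

msg = oauthbearer_client_first("t0ken", host="db.example.com", port=5432)
assert msg == "n,,\x01host=db.example.com\x01port=5432\x01auth=Bearer t0ken\x01\x01"
```

Everything provider-specific happens before this message is built, which is why the framing by itself doesn't dictate any particular hook design.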
We don't *have* to have a hook here for\nthis, we could just make it work.\n\n> > another new \"generic\" set of hooks/APIs that will just cause DBAs and\n> > our users headaches trying to make work.\n> As I mentioned above, it's an rfc implementation, rather than our invention.\n\nWhile I only took a quick look, I didn't see anything in that RFC that\nexplicitly says that hooks or a plugin or a library or such is required\nto meet the RFC. Sure, there are places which say that the\nimplementation is specific to a particular server or client but that's\nnot the same thing.\n\n> When it comes to DBAs and the users.\n> Builtin libpq implementations which allows psql and pgadmin to\n> seamlessly connect should suffice those needs.\n> While extensibility would allow the ecosystem to be open for OAUTH\n> providers, SAAS developers, PAAS providers and other institutional\n> players.\n\nEach to end up writing their own code to do largely the same thing\nwithout the benefit of the larger community to be able to review and\nensure that it's done properly?\n\nThat doesn't sound like a great approach to me.\n\nThanks,\n\nStephen", "msg_date": "Mon, 27 Feb 2023 15:31:04 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Fri, Sep 23, 2022 at 3:39 PM Jacob Champion <jchampion@timescale.com> wrote:\n> Here's a newly rebased v5. (They're all zipped now, which I probably\n> should have done a while back, sorry.)\n\nTo keep this current, v7 is rebased over latest, without the pluggable\nauthentication patches. 
This doesn't yet address the architectural\nfeedback that was discussed previously, so if you're primarily\ninterested in that, you can safely ignore this version of the\npatchset.\n\nThe key changes here include\n- Meson support, for both the build and the pytest suite\n- Cirrus support (and unsurprisingly, Mac and Windows builds fail due\nto the Linux-oriented draft code)\n- A small tweak to support iddawc down to 0.9.8 (shipped with e.g.\nDebian Bullseye)\n- Removal of the authn_id test extension in favor of SYSTEM_USER\n\nThe meson+pytest support was big enough that I split it into its own\npatch. It's not very polished yet, but it mostly works, and when\nrunning tests via Meson it'll now spin up a test server for you. My\nvirtualenv approach apparently interacts poorly with the multiarch\nCirrus setup (64-bit tests pass, 32-bit tests fail).\n\nMoving forward, the first thing I plan to tackle is asynchronous\noperation, so that polling clients can still operate sanely. If I can\nfind a good solution there, the conversations about possible extension\npoints should get a lot easier.\n\nThanks,\n--Jacob", "msg_date": "Thu, 27 Apr 2023 10:35:20 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On 4/27/23 10:35, Jacob Champion wrote:\n> Moving forward, the first thing I plan to tackle is asynchronous\n> operation, so that polling clients can still operate sanely. If I can\n> find a good solution there, the conversations about possible extension\n> points should get a lot easier.\n\nAttached is patchset v8, now with concurrency and 300% more cURL! And\nmany more questions to answer.\n\nThis is a full reimplementation of the client-side OAuth flow. It's an\nasync-first engine built on top of cURL's multi handles. 
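(The fd-combining trick leans on a Linux-specific fact: an epoll instance is itself a pollable descriptor, so any number of pending sockets and timers can hide behind the single fd handed to the application. A self-contained sketch of just that trick, in Python, Linux-only, and not the patch's actual code:)

```python
import select
import socket

def make_altsock(socks):
    """Register several sockets in one epoll set; its fd stands in for all."""
    ep = select.epoll()
    for s in socks:
        ep.register(s.fileno(), select.EPOLLIN)
    return ep

a1, a2 = socket.socketpair()
b1, b2 = socket.socketpair()
ep = make_altsock([a2, b2])
altsock = ep.fileno()

# nothing pending yet: the combined fd does not report readable
assert select.select([altsock], [], [], 0)[0] == []

a1.send(b"x")  # activity on either inner socket...
# ...wakes up anyone select()ing on the one combined fd
assert select.select([altsock], [], [], 1)[0] == [altsock]
```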
All pending\noperations are multiplexed into a single epoll set (the \"altsock\"),\nwhich is exposed through PQsocket() for the duration of the OAuth flow.\nClients return to the flow on their next call to PQconnectPoll().\n\nAndrey and Mahendrakar: you'll probably be interested in the\nconn->async_auth() callback, conn->altsock, and the pg_fe_run_oauth_flow\nentry point. This is intended to be the foundation for alternative flows.\n\nI've kept the blocking iddawc implementation for comparison, but if\nyou're running the tests against it, be aware that the asynchronous\ntests will, predictably, hang. Skip them with `py.test -k 'not\nasynchronous'`.\n\n= The Good =\n\n- PQconnectPoll() is no longer indefinitely blocked on a single\nconnection's OAuth handshake. (iddawc doesn't appear to have any\nasynchronous primitives in its API, unless I've missed something crucial.)\n\n- We now have a swappable entry point. Alternative flows could be\nimplemented by applications without forcing clients to redesign their\npolling loops (PQconnect* should just work as expected).\n\n- We have full control over corner cases in our default flow. Debugging\nfailures is much nicer, with explanations of exactly what has gone wrong\nand where, compared to iddawc's \"I_ERROR\" messages.\n\n- cURL is not a lightweight library by any means, but we're no longer\nbundling things like web servers that we're not going to use.\n\n= The Bad =\n\n- Unsurprisingly, there's a lot more code now that we're implementing\nthe flow ourselves. The client patch has tripled in size, and we'd be on\nthe hook for implementing and staying current with the RFCs.\n\n- The client implementation is currently epoll-/Linux-specific. I think\nkqueue shouldn't be too much trouble for the BSDs, but it's even more\ncode to maintain.\n\n- Some clients in the wild (psycopg2/psycopg) suppress all notifications\nduring PQconnectPoll(). 
To accommodate them, I no longer use the\nnoticeHooks for communicating the user code, but that means we have to\ncome up with some other way to let applications override the printing to\nstderr. Something like the OpenSSL decryption callback, maybe?\n\n= The Ugly =\n\n- Unless someone is aware of some amazing Winsock magic, I'm pretty sure\nthe multiplexed-socket approach is dead in the water on Windows. I think\nthe strategy there probably has to be a background thread plus a fake\n\"self-pipe\" (loopback socket) for polling... which may be controversial?\n\n- We have to figure out how to initialize cURL in a thread-safe manner.\nNewer versions of libcurl and OpenSSL improve upon this situation, but I\ndon't think there's a way to check at compile time whether the\ninitialization strategy is safe or not (and even at runtime, I think\nthere may be a chicken-and-egg problem with the API, where it's not safe\nto check for thread-safe initialization until after you've safely\ninitialized).\n\n= Next Steps =\n\nThere are so many TODOs in the cURL implementation: it's been a while\nsince I've done any libcurl programming, it all needs to be hardened,\nand I need to comb through the relevant specs again. But I don't want to\ngold-plate it if this overall approach is unacceptable. So, questions\nfor the gallery:\n\n1) Would starting up a background thread (pooled or not) be acceptable\non Windows? Alternatively, does anyone know enough Winsock deep magic to\ncombine multiple pending events into one (selectable!) socket?\n\n2) If a background thread is acceptable on one platform, does it make\nmore sense to use one on every platform and just have synchronous code\neverywhere? 
Or should we use a threadless async implementation when we can?\n\n3) Is the current conn->async_auth() entry point sufficient for an\napplication to implement the Microsoft flows discussed upthread?\n\n4) Would we want to try to require a new enough cURL/OpenSSL to avoid\nthread safety problems during initialization, or do we need to introduce\nsome API equivalent to PQinitOpenSSL?\n\n5) Does this maintenance tradeoff (full control over the client vs. a\nlarge amount of RFC-governed code) seem like it could be okay?\n\nThanks,\n--Jacob", "msg_date": "Fri, 19 May 2023 15:01:11 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Sat, 20 May 2023 at 00:01, Jacob Champion <jchampion@timescale.com> wrote:\n\n> - Some clients in the wild (psycopg2/psycopg) suppress all notifications\n> during PQconnectPoll().\n\nIf there is anything we can improve in psycopg please reach out.\n\n-- Daniele\n\n\n", "msg_date": "Tue, 23 May 2023 13:22:20 +0200", "msg_from": "Daniele Varrazzo <daniele.varrazzo@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Tue, May 23, 2023 at 4:22 AM Daniele Varrazzo\n<daniele.varrazzo@gmail.com> wrote:\n> On Sat, 20 May 2023 at 00:01, Jacob Champion <jchampion@timescale.com> wrote:\n> > - Some clients in the wild (psycopg2/psycopg) suppress all notifications\n> > during PQconnectPoll().\n>\n> If there is anything we can improve in psycopg please reach out.\n\nWill do, thank you! 
But in this case, I think there's nothing to\nimprove in psycopg -- in fact, it highlighted the problem with my\ninitial design, and now I think the notice processor will never be an\nappropriate avenue for communication of the user code.\n\nThe biggest issue is that there's a chicken-and-egg situation: if\nyou're using the synchronous PQconnect* API, you can't override the\nnotice hooks while the handshake is in progress, because you don't\nhave a connection handle yet. The second problem is that there are a\nbunch of parameters coming back from the server (user code,\nverification URI, expiration time) that the application may choose to\ndisplay or use, and communicating those pieces in a (probably already\ntranslated) flat text string is a pretty hostile API.\n\nSo I think we'll probably need to provide a global handler API,\nsimilar to the passphrase hook we currently provide, that can receive\nthese pieces separately and assemble them however the application\ndesires. The hard part will be to avoid painting ourselves into a\ncorner, because this particular information is specific to the device\nauthorization flow, and if we ever want to add other flows into libpq,\nwe'll probably not want to add even more hooks.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 23 May 2023 08:56:47 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "\nOn 5/19/23 15:01, Jacob Champion wrote:\n> But I don't want to\n> gold-plate it if this overall approach is unacceptable. So, questions\n> for the gallery:\n> \n> 1) Would starting up a background thread (pooled or not) be acceptable\n> on Windows? Alternatively, does anyone know enough Winsock deep magic to\n> combine multiple pending events into one (selectable!) 
socket?\n> \n> 2) If a background thread is acceptable on one platform, does it make\n> more sense to use one on every platform and just have synchronous code\n> everywhere? Or should we use a threadless async implementation when we can?\n> \n> 3) Is the current conn->async_auth() entry point sufficient for an\n> application to implement the Microsoft flows discussed upthread?\n> \n> 4) Would we want to try to require a new enough cURL/OpenSSL to avoid\n> thread safety problems during initialization, or do we need to introduce\n> some API equivalent to PQinitOpenSSL?\n> \n> 5) Does this maintenance tradeoff (full control over the client vs. a\n> large amount of RFC-governed code) seem like it could be okay?\n\nThere was additional interest at PGCon, so I've registered this in the\ncommitfest.\n\nPotential reviewers should be aware that the current implementation\nrequires Linux (or, more specifically, epoll), as the cfbot shows. But\nif you have any opinions on the above questions, those will help me\ntackle the other platforms. :D\n\nThanks!\n--Jacob\n\n\n", "msg_date": "Thu, 29 Jun 2023 09:28:24 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Sat, May 20, 2023 at 10:01 AM Jacob Champion <jchampion@timescale.com> wrote:\n> - The client implementation is currently epoll-/Linux-specific. I think\n> kqueue shouldn't be too much trouble for the BSDs, but it's even more\n> code to maintain.\n\nI guess you also need a fallback that uses plain old POSIX poll()? I\nsee you're not just using epoll but also timerfd. Could that be\nconverted to plain old timeout bookkeeping? That should be enough to\nget every other Unix and *possibly* also Windows to work with the same\ncode path.\n\n> - Unless someone is aware of some amazing Winsock magic, I'm pretty sure\n> the multiplexed-socket approach is dead in the water on Windows. 
I think\n> the strategy there probably has to be a background thread plus a fake\n> \"self-pipe\" (loopback socket) for polling... which may be controversial?\n\nI am not a Windows user or hacker, but there are certainly several\nways to multiplex sockets. First there is the WSAEventSelect() +\nWaitForMultipleObjects() approach that latch.c uses. It has the\nadvantage that it allows socket readiness to be multiplexed with\nvarious other things that use Windows \"events\". But if you don't need\nthat, ie you *only* need readiness-based wakeup for a bunch of sockets\nand no other kinds of fd or object, you can use winsock's plain old\nselect() or its fairly faithful poll() clone called WSAPoll(). It\nlooks a bit like that'd be true here if you could kill the timerfd?\n\nIt's a shame to write modern code using select(), but you can find\nlots of shouting all over the internet about WSAPoll()'s defects, most\nfamously the cURL guys[1] whose blog is widely cited, so people still\ndo it. Possibly some good news on that front: by my reading of the\ndocs, it looks like that problem was fixed in Windows 10 2004[2] which\nitself is by now EOL, so all systems should have the fix? I suspect\nthat means that, finally, you could probably just use the same poll()\ncode path for Unix (when epoll is not available) *and* Windows these\ndays, making porting a lot easier. But I've never tried it, so I\ndon't know what other problems there might be. 
Another thing people\ncomplain about is the lack of socketpair() or similar in winsock which\nmeans you unfortunately can't easily make anonymous\nselect/poll-compatible local sockets, but that doesn't seem to be\nneeded here.\n\n[1] https://daniel.haxx.se/blog/2012/10/10/wsapoll-is-broken/\n[2] https://learn.microsoft.com/en-us/windows/win32/api/winsock2/nf-winsock2-wsapoll\n\n\n", "msg_date": "Sat, 1 Jul 2023 16:28:52 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Fri, Jun 30, 2023 at 9:29 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Sat, May 20, 2023 at 10:01 AM Jacob Champion <jchampion@timescale.com> wrote:\n> > - The client implementation is currently epoll-/Linux-specific. I think\n> > kqueue shouldn't be too much trouble for the BSDs, but it's even more\n> > code to maintain.\n>\n> I guess you also need a fallback that uses plain old POSIX poll()?\n\nThe use of the epoll API here is to combine several sockets into one,\nnot to actually call epoll_wait() itself. kqueue descriptors should\nlet us do the same, IIUC.\n\n> I see you're not just using epoll but also timerfd. Could that be\n> converted to plain old timeout bookkeeping? That should be enough to\n> get every other Unix and *possibly* also Windows to work with the same\n> code path.\n\nI might be misunderstanding your suggestion, but I think our internal\nbookkeeping is orthogonal to that. The use of timerfd here allows us\nto forward libcurl's timeout requirements up to the top-level\nPQsocket(). As an example, libcurl is free to tell us to call it again\nin ten milliseconds, and we have to make sure a nonblocking client\ncalls us again after that elapses; otherwise they might hang waiting\nfor data that's not coming.\n\n> > - Unless someone is aware of some amazing Winsock magic, I'm pretty sure\n> > the multiplexed-socket approach is dead in the water on Windows. 
I think\n> > the strategy there probably has to be a background thread plus a fake\n> > \"self-pipe\" (loopback socket) for polling... which may be controversial?\n>\n> I am not a Windows user or hacker, but there are certainly several\n> ways to multiplex sockets. First there is the WSAEventSelect() +\n> WaitForMultipleObjects() approach that latch.c uses.\n\nI don't think that strategy plays well with select() clients, though\n-- it requires a handle array, and we've just got the one socket.\n\nMy goal is to maintain compatibility with existing PQconnectPoll()\napplications, where the only way we get to communicate with the client\nis through the PQsocket() for the connection. Ideally, you shouldn't\nhave to completely rewrite your application loop just to make use of\nOAuth. (I assume a requirement like that would be a major roadblock to\ncommitting this -- and if that's not a correct assumption, then I\nguess my job gets a lot easier?)\n\n> It's a shame to write modern code using select(), but you can find\n> lots of shouting all over the internet about WSAPoll()'s defects, most\n> famously the cURL guys[1] whose blog is widely cited, so people still\n> do it.\n\nRight -- that's basically the root of my concern. I can't guarantee\nthat existing Windows clients out there are all using\nWaitForMultipleObjects(). From what I can tell, whatever we hand up\nthrough PQsocket() has to be fully Winsock-/select-compatible.\n\n> Another thing people\n> complain about is the lack of socketpair() or similar in winsock which\n> means you unfortunately can't easily make anonymous\n> select/poll-compatible local sockets, but that doesn't seem to be\n> needed here.\n\nFor the background-thread implementation, it probably would be. 
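(The usual emulation, for what it's worth, is to fabricate a connected pair out of a loopback listener, which does work on Winsock. A rough Python sketch; a real version would also have to verify the peer to guard against local port hijacking:)

```python
import socket

def loopback_socketpair():
    """Emulate socketpair() with a loopback listen/connect/accept dance."""
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lsock.bind(("127.0.0.1", 0))  # kernel assigns a free port
    lsock.listen(1)
    c1 = socket.create_connection(lsock.getsockname())
    c2, _ = lsock.accept()
    lsock.close()
    return c1, c2

a, b = loopback_socketpair()
a.sendall(b"wake up")  # usable as a self-pipe to kick a poll loop
assert b.recv(16) == b"wake up"
```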
I've\nbeen looking at libevent (BSD-licensed) and its socketpair hack for\nWindows...\n\nThanks!\n--Jacob\n\n\n", "msg_date": "Wed, 5 Jul 2023 14:00:27 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Thu, Jul 6, 2023 at 9:00 AM Jacob Champion <jchampion@timescale.com> wrote:\n> My goal is to maintain compatibility with existing PQconnectPoll()\n> applications, where the only way we get to communicate with the client\n> is through the PQsocket() for the connection. Ideally, you shouldn't\n> have to completely rewrite your application loop just to make use of\n> OAuth. (I assume a requirement like that would be a major roadblock to\n> committing this -- and if that's not a correct assumption, then I\n> guess my job gets a lot easier?)\n\nAh, right, I get it.\n\nI guess there are a couple of ways to do it if we give up the goal of\nno-code-change-for-the-client:\n\n1. Generalised PQsocket(), that so that a client can call something like:\n\nint PQpollset(const PGConn *conn, struct pollfd fds[], int fds_size,\nint *nfds, int *timeout_ms);\n\nThat way, libpq could tell you about which events it would like to\nwait for on which fds, and when it would like you to call it back due\nto timeout, and you can either pass that information directly to\npoll() or WSAPoll() or some equivalent interface (we don't care, we\njust gave you the info you need), or combine it in obvious ways with\nwhatever else you want to multiplex with in your client program.\n\n2. Convert those events into new libpq events like 'I want you to\ncall me back in 100ms', and 'call me back when socket #42 has data',\nand let clients handle that by managing their own poll set etc. 
(This\nis something I've speculated about to support more efficient\npostgres_fdw shard query multiplexing; gotta figure out how to get\nmultiple connections' events into one WaitEventSet...)\n\nI guess there is a practical middle ground where client code on\nsystems that have epoll/kqueue can use OAUTHBEARER without any code\nchange, and the feature is available on other systems too but you'll\nhave to change your client code to use one of those interfaces or else\nyou get an error 'coz we just can't do it. Or, more likely in the\nfirst version, you just can't do it at all... Doesn't seem that bad\nto me.\n\nBTW I will happily do the epoll->kqueue port work if necessary.\n\n\n", "msg_date": "Thu, 6 Jul 2023 10:07:17 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Wed, Jul 5, 2023 at 3:07 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I guess there are a couple of ways to do it if we give up the goal of\n> no-code-change-for-the-client:\n>\n> 1. Generalised PQsocket(), that so that a client can call something like:\n>\n> int PQpollset(const PGConn *conn, struct pollfd fds[], int fds_size,\n> int *nfds, int *timeout_ms);\n>\n> That way, libpq could tell you about which events it would like to\n> wait for on which fds, and when it would like you to call it back due\n> to timeout, and you can either pass that information directly to\n> poll() or WSAPoll() or some equivalent interface (we don't care, we\n> just gave you the info you need), or combine it in obvious ways with\n> whatever else you want to multiplex with in your client program.\n\nI absolutely wanted something like this while I was writing the code\n(it would have made things much easier), but I'd feel bad adding that\nmuch complexity to the API if the vast majority of connections use\nexactly one socket. 
Are there other use cases in libpq where you think\nthis expanded API could be useful? Maybe to lift some of the existing\nrestrictions for PQconnectPoll(), add async DNS resolution, or\nsomething?\n\nCouple complications I can think of at the moment:\n1. Clients using persistent pollsets will have to remove old\ndescriptors, presumably by tracking the delta since the last call,\nwhich might make for a rough transition. Bookkeeping bugs probably\nwouldn't show up unless they used OAuth in their test suites. With the\ncurrent model, that's more hidden and libpq takes responsibility for\ngetting it right.\n2. In the future, we might need to think carefully around situations\nwhere we want multiple PGConn handles to share descriptors (e.g.\nmultiplexed backend connections). I avoid tricky questions at the\nmoment by assigning only one connection per multi pool.\n\n> 2. Convert those events into new libpq events like 'I want you to\n> call me back in 100ms', and 'call me back when socket #42 has data',\n> and let clients handle that by managing their own poll set etc. (This\n> is something I've speculated about to support more efficient\n> postgres_fdw shard query multiplexing; gotta figure out how to get\n> multiple connections' events into one WaitEventSet...)\n\nSomething analogous to libcurl's socket and timeout callbacks [1],\nthen? Or is there an existing libpq API you were thinking about using?\n\n> I guess there is a practical middle ground where client code on\n> systems that have epoll/kqueue can use OAUTHBEARER without any code\n> change, and the feature is available on other systems too but you'll\n> have to change your client code to use one of those interfaces or else\n> you get an error 'coz we just can't do it.\n\nThat's a possibility -- if your platform is able to do it nicely,\nmight as well use it. 
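(For what it's worth, the client-side bookkeeping that a PQpollset()-style interface would push onto applications is fairly small: fold every connection's watched fds and earliest deadline into one poll call. Hypothetical shape only, Python for brevity:)

```python
import time

def build_poll_args(conns, now=None):
    """conns: list of (watched_fds, deadline) pairs, one per connection.
    Returns the combined fd list and a single poll() timeout in ms."""
    if now is None:
        now = time.monotonic()
    fds = [fd for watched, _ in conns for fd in watched]
    deadline = min(d for _, d in conns)  # earliest wakeup wins
    timeout_ms = max(0, int((deadline - now) * 1000))
    return fds, timeout_ms

fds, timeout = build_poll_args([([3, 4], 10.0), ([7], 10.5)], now=9.5)
assert fds == [3, 4, 7]
assert timeout == 500  # earliest deadline is half a second away
```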
(In a similar vein, I'd personally vote against\nhaving every platform use a background thread, even if we decided to\nimplement it for Windows.)\n\n> Or, more likely in the\n> first version, you just can't do it at all... Doesn't seem that bad\n> to me.\n\nAny initial opinions on whether it's worse or better than a worker thread?\n\n> BTW I will happily do the epoll->kqueue port work if necessary.\n\nAnd I will happily take you up on that; thanks!\n\n--Jacob\n\n[1] https://curl.se/libcurl/c/CURLMOPT_SOCKETFUNCTION.html\n\n\n", "msg_date": "Thu, 6 Jul 2023 09:56:49 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Fri, Jul 7, 2023 at 4:57 AM Jacob Champion <jchampion@timescale.com> wrote:\n> On Wed, Jul 5, 2023 at 3:07 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > 2. Convert those events into new libpq events like 'I want you to\n> > call me back in 100ms', and 'call me back when socket #42 has data',\n> > and let clients handle that by managing their own poll set etc. (This\n> > is something I've speculated about to support more efficient\n> > postgres_fdw shard query multiplexing; gotta figure out how to get\n> > multiple connections' events into one WaitEventSet...)\n>\n> Something analogous to libcurl's socket and timeout callbacks [1],\n> then? Or is there an existing libpq API you were thinking about using?\n\nYeah. Libpq already has an event concept. I did some work on getting\nlong-lived WaitEventSet objects to be used in various places, some of\nwhich got committed[1], but not yet the parts related to postgres_fdw\n(which uses libpq connections to talk to other PostgreSQL servers, and\nruns into the limitations of PQsocket()). Horiguchi-san had the good\nidea of extending the event system to cover socket changes, but I\nhaven't actually tried it yet. 
One day.\n\n> > Or, more likely in the\n> > first version, you just can't do it at all... Doesn't seem that bad\n> > to me.\n>\n> Any initial opinions on whether it's worse or better than a worker thread?\n\nMy vote is that it's perfectly fine to make a new feature that only\nworks on some OSes. If/when someone wants to work on getting it going\non Windows/AIX/Solaris (that's the complete set of no-epoll, no-kqueue\nOSes we target), they can write the patch.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGJAC4Oqao%3DqforhNey20J8CiG2R%3DoBPqvfR0vOJrFysGw%40mail.gmail.com\n\n\n", "msg_date": "Fri, 7 Jul 2023 08:47:47 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Thu, Jul 6, 2023 at 1:48 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Jul 7, 2023 at 4:57 AM Jacob Champion <jchampion@timescale.com> wrote:\n> > Something analogous to libcurl's socket and timeout callbacks [1],\n> > then? Or is there an existing libpq API you were thinking about using?\n>\n> Yeah. Libpq already has an event concept.\n\nThanks -- I don't know how I never noticed libpq-events.h before.\n\nPer-connection events (or callbacks) might bring up the same\nchicken-and-egg situation discussed above, with the notice hook. We'll\nbe fine as long as PQconnectStart is guaranteed to return before the\nPQconnectPoll engine gets to authentication, and it looks like that's\ntrue with today's implementation, which returns pessimistically at\nseveral points instead of just trying to continue the exchange. But I\ndon't know if that's intended as a guarantee for the future. At the\nvery least we would have to pin that implementation detail.\n\n> > > Or, more likely in the\n> > > first version, you just can't do it at all... 
Doesn't seem that bad\n> > > to me.\n> >\n> > Any initial opinions on whether it's worse or better than a worker thread?\n>\n> My vote is that it's perfectly fine to make a new feature that only\n> works on some OSes. If/when someone wants to work on getting it going\n> on Windows/AIX/Solaris (that's the complete set of no-epoll, no-kqueue\n> OSes we target), they can write the patch.\n\nOkay. I'm curious to hear others' thoughts on that, too, if anyone's lurking.\n\nThanks!\n--Jacob\n\n\n", "msg_date": "Fri, 7 Jul 2023 11:48:26 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Thanks Jacob for making progress on this.\n\n> 3) Is the current conn->async_auth() entry point sufficient for an\n> application to implement the Microsoft flows discussed upthread?\n\nPlease confirm my understanding of the flow is correct:\n1. Client calls PQconnectStart.\n - The client doesn't know yet what is the issuer and the scope.\n - Parameters are strings, so callback is not provided yet.\n2. Client gets PgConn from PQconnectStart return value and updates\nconn->async_auth to its own callback.\n3. Client polls PQconnectPoll and checks conn->sasl_state until the\nvalue is SASL_ASYNC\n4. Client accesses conn->oauth_issuer and conn->oauth_scope and uses\nthose info to trigger the token flow.\n5. Expectations on async_auth:\n a. It returns PGRES_POLLING_READING while token acquisition is going on\n b. It returns PGRES_POLLING_OK and sets conn->sasl_state->token\nwhen token acquisition succeeds.\n6. Is the client supposed to do anything with the altsock parameter?\n\nIs the above accurate understanding?\n\nIf yes, it looks workable with a couple of improvements I think would be nice:\n1. 
Currently, oauth_exchange function sets conn->async_auth =\npg_fe_run_oauth_flow and starts Device Code flow automatically when\nreceiving challenge and metadata from the server.\n There probably should be a way for the client to prevent default\nDevice Code flow from triggering.\n2. The current signature and expectations from async_auth function\nseem to be tightly coupled with the internal implementation:\n - Pieces of information need to be picked and updated in different\nplaces in the PgConn structure.\n - Function is expected to return PostgresPollingStatusType which\nis used to communicate internal state to the client.\n Would it make sense to separate the internal callback used to\ncommunicate with Device Code flow from client facing API?\n I.e. introduce a new client facing structure and enum to facilitate\ncallback and its return value.\n\n-----------\nOn a separate note:\nThe backend code currently spawns an external command for token validation.\nAs we discussed before, an extension hook would be a more efficient\nextensibility option.\nWe see clients make 10k+ connections using OAuth tokens per minute to\nour service, and starting external processes would be too much overhead\nhere.\n\n-----------\n\n> 5) Does this maintenance tradeoff (full control over the client vs. a\n> large amount of RFC-governed code) seem like it could be okay?\n\nIt's nice for psql to have Device Code flow. 
Can be made even more\nconvenient with refresh tokens support.\nAnd for clients on resource constrained devices to be able to\nauthenticate with Client Credentials (app secret) without bringing\nmore dependencies.\n\nIn most other cases, upstream PostgreSQL drivers written in higher\nlevel languages have libraries / abstractions to implement OAUTH flows\nfor the platforms they support.\n\nOn Fri, Jul 7, 2023 at 11:48 AM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Thu, Jul 6, 2023 at 1:48 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Fri, Jul 7, 2023 at 4:57 AM Jacob Champion <jchampion@timescale.com> wrote:\n> > > Something analogous to libcurl's socket and timeout callbacks [1],\n> > > then? Or is there an existing libpq API you were thinking about using?\n> >\n> > Yeah. Libpq already has an event concept.\n>\n> Thanks -- I don't know how I never noticed libpq-events.h before.\n>\n> Per-connection events (or callbacks) might bring up the same\n> chicken-and-egg situation discussed above, with the notice hook. We'll\n> be fine as long as PQconnectStart is guaranteed to return before the\n> PQconnectPoll engine gets to authentication, and it looks like that's\n> true with today's implementation, which returns pessimistically at\n> several points instead of just trying to continue the exchange. But I\n> don't know if that's intended as a guarantee for the future. At the\n> very least we would have to pin that implementation detail.\n>\n> > > > Or, more likely in the\n> > > > first version, you just can't do it at all... Doesn't seem that bad\n> > > > to me.\n> > >\n> > > Any initial opinions on whether it's worse or better than a worker thread?\n> >\n> > My vote is that it's perfectly fine to make a new feature that only\n> > works on some OSes. If/when someone wants to work on getting it going\n> > on Windows/AIX/Solaris (that's the complete set of no-epoll, no-kqueue\n> > OSes we target), they can write the patch.\n>\n> Okay. 
I'm curious to hear others' thoughts on that, too, if anyone's lurking.\n>\n> Thanks!\n> --Jacob\n\n\n", "msg_date": "Fri, 7 Jul 2023 14:16:05 -0700", "msg_from": "Andrey Chudnovsky <achudnovskij@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Fri, Jul 7, 2023 at 4:57 AM Jacob Champion <jchampion@timescale.com> wrote:\n> On Wed, Jul 5, 2023 at 3:07 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > BTW I will happily do the epoll->kqueue port work if necessary.\n>\n> And I will happily take you up on that; thanks!\n\nSome initial hacking, about 2 coffees' worth:\nhttps://github.com/macdice/postgres/commits/oauth-kqueue\n\nThis compiles on FreeBSD and macOS, but I didn't have time to figure\nout all your Python testing magic so I don't know if it works yet and\nit's still red on CI... one thing I wondered about is the *altsock =\ntimerfd part which I couldn't do.\n\nThe situation on macOS is a little odd: the man page says EVFILT_TIMER\nis not implemented. But clearly it is, we can read the source code as\nI had to do to find out which unit of time it defaults to[1] (huh,\nApple's github repo for Darwin appears to have been archived recently\n-- no more source code updates? that'd be a shame!), and it works\nexactly as expected in simple programs. So I would just assume it\nworks until we see evidence otherwise. (We already use a couple of\nother things on macOS more or less by accident because configure finds\nthem, where they are undocumented or undeclared.)\n\n[1] https://github.com/apple/darwin-xnu/blob/main/bsd/kern/kern_event.c#L1345\n\n\n", "msg_date": "Sat, 8 Jul 2023 13:00:51 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Fri, Jul 7, 2023 at 2:16 PM Andrey Chudnovsky <achudnovskij@gmail.com> wrote:\n> Please confirm my understanding of the flow is correct:\n> 1. 
Client calls PQconnectStart.\n> - The client doesn't know yet what is the issuer and the scope.\n\nRight. (Strictly speaking it doesn't even know that OAuth will be used\nfor the connection, yet, though at some point we'll be able to force\nthe issue with e.g. `require_auth=oauth`. That's not currently\nimplemented.)\n\n> - Parameters are strings, so callback is not provided yet.\n> 2. Client gets PgConn from PQconnectStart return value and updates\n> conn->async_auth to its own callback.\n\nThis is where some sort of official authn callback registration (see\nabove reply to Daniele) would probably come in handy.\n\n> 3. Client polls PQconnectPoll and checks conn->sasl_state until the\n> value is SASL_ASYNC\n\nIn my head, the client's custom callback would always be invoked\nduring the call to PQconnectPoll, rather than making the client do\nwork in between calls. That way, a client can use custom flows even\nwith a synchronous PQconnectdb().\n\n> 4. Client accesses conn->oauth_issuer and conn->oauth_scope and uses\n> those info to trigger the token flow.\n\nRight.\n\n> 5. Expectations on async_auth:\n> a. It returns PGRES_POLLING_READING while token acquisition is going on\n> b. It returns PGRES_POLLING_OK and sets conn->sasl_state->token\n> when token acquisition succeeds.\n\nYes. Though the token should probably be returned through some\nexplicit part of the callback, now that you mention it...\n\n> 6. Is the client supposed to do anything with the altsock parameter?\n\nThe callback needs to set the altsock up with a select()able\ndescriptor, which wakes up the client when more work is ready to be\ndone. Without that, you can't handle multiple connections on a single\nthread.\n\n> If yes, it looks workable with a couple of improvements I think would be nice:\n> 1. 
Currently, oauth_exchange function sets conn->async_auth =\n> pg_fe_run_oauth_flow and starts Device Code flow automatically when\n> receiving challenge and metadata from the server.\n> There probably should be a way for the client to prevent default\n> Device Code flow from triggering.\n\nAgreed. I'd like the client to be able to override this directly.\n\n> 2. The current signature and expectations from async_auth function\n> seem to be tightly coupled with the internal implementation:\n> - Pieces of information need to be picked and updated in different\n> places in the PgConn structure.\n> - Function is expected to return PostgresPollingStatusType which\n> is used to communicate internal state to the client.\n> Would it make sense to separate the internal callback used to\n> communicate with Device Code flow from client facing API?\n> I.e. introduce a new client facing structure and enum to facilitate\n> callback and its return value.\n\nYep, exactly right! I just wanted to check that the architecture\n*looked* sufficient before pulling it up into an API.\n\n> On a separate note:\n> The backend code currently spawns an external command for token validation.\n> As we discussed before, an extension hook would be a more efficient\n> extensibility option.\n> We see clients make 10k+ connections using OAuth tokens per minute to\n> our service, and starting external processes would be too much overhead\n> here.\n\n+1. I'm curious, though -- what language do you expect to use to write\na production validator hook? Surely not low-level C...?\n\n> > 5) Does this maintenance tradeoff (full control over the client vs. a\n> > large amount of RFC-governed code) seem like it could be okay?\n>\n> It's nice for psql to have Device Code flow. 
Can be made even more\n> convenient with refresh tokens support.\n> And for clients on resource constrained devices to be able to\n> authenticate with Client Credentials (app secret) without bringing\n> more dependencies.\n>\n> In most other cases, upstream PostgreSQL drivers written in higher\n> level languages have libraries / abstractions to implement OAUTH flows\n> for the platforms they support.\n\nYeah, I'm really interested in seeing which existing high-level flows\ncan be mixed in through a driver. Trying not to get too far ahead of\nmyself :D\n\nThanks for the review!\n\n--Jacob\n\n\n", "msg_date": "Mon, 10 Jul 2023 16:21:58 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Fri, Jul 7, 2023 at 6:01 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, Jul 7, 2023 at 4:57 AM Jacob Champion <jchampion@timescale.com> wrote:\n> > On Wed, Jul 5, 2023 at 3:07 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > BTW I will happily do the epoll->kqueue port work if necessary.\n> >\n> > And I will happily take you up on that; thanks!\n>\n> Some initial hacking, about 2 coffees' worth:\n> https://github.com/macdice/postgres/commits/oauth-kqueue\n>\n> This compiles on FreeBSD and macOS, but I didn't have time to figure\n> out all your Python testing magic so I don't know if it works yet and\n> it's still red on CI...\n\nThis is awesome, thank you!\n\nI need to look into the CI more, but it looks like the client tests\nare passing, which is a good sign. (I don't understand why the\nserver-side tests are failing on FreeBSD, but they shouldn't be using\nthe libpq code at all, so I think your kqueue implementation is in the\nclear. 
Cirrus doesn't have the logs from the server-side test failures\nanywhere -- probably a bug in my Meson patch.)\n\n> one thing I wondered about is the *altsock =\n> timerfd part which I couldn't do.\n\nI did that because I'm not entirely sure that libcurl is guaranteed to\nhave cleared out all its sockets from the mux, and I didn't want to\ninvite spurious wakeups. I should probably verify whether or not\nthat's possible. If so, we could just make that code resilient to\nearly wakeup, so that it matters less, or set up a second kqueue that\nonly holds the timer if that turns out to be unacceptable?\n\n> The situation on macOS is a little odd: the man page says EVFILT_TIMER\n> is not implemented. But clearly it is, we can read the source code as\n> I had to do to find out which unit of time it defaults to[1] (huh,\n> Apple's github repo for Darwin appears to have been archived recently\n> -- no more source code updates? that'd be a shame!), and it works\n> exactly as expected in simple programs. So I would just assume it\n> works until we see evidence otherwise. (We already use a couple of\n> other things on macOS more or less by accident because configure finds\n> them, where they are undocumented or undeclared.)\n\nHuh. Something to keep an eye on... might be a problem with older versions?\n\nThanks!\n--Jacob\n\n\n", "msg_date": "Mon, 10 Jul 2023 16:50:22 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Mon, Jul 10, 2023 at 4:50 PM Jacob Champion <jchampion@timescale.com> wrote:\n> I don't understand why the\n> server-side tests are failing on FreeBSD, but they shouldn't be using\n> the libpq code at all, so I think your kqueue implementation is in the\n> clear.\n\nOh, whoops, it's just the missed CLOEXEC flag in the final patch. 
(If\nthe write side of the pipe gets copied around, it hangs open and the\nvalidator never sees the \"end\" of the token.) I'll switch the logic\naround to set the flag on the write side instead of unsetting it on\nthe read side.\n\nI have a WIP patch that passes tests on FreeBSD, which I'll clean up\nand post Sometime Soon. macOS builds now but still fails before it\nruns the test; looks like it's having trouble finding OpenSSL during\n`pip install` of the test modules...\n\nThanks!\n--Jacob\n\n\n", "msg_date": "Tue, 11 Jul 2023 10:50:28 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Wed, Jul 12, 2023 at 5:50 AM Jacob Champion <jchampion@timescale.com> wrote:\n> Oh, whoops, it's just the missed CLOEXEC flag in the final patch. (If\n> the write side of the pipe gets copied around, it hangs open and the\n> validator never sees the \"end\" of the token.) I'll switch the logic\n> around to set the flag on the write side instead of unsetting it on\n> the read side.\n\nOops, sorry about that. Glad to hear it's all working!\n\n(FTR my parenthetical note about macOS/XNU sources on Github was a\nfalse alarm: the \"apple\" account has stopped publishing a redundant\ncopy of that, but \"apple-oss-distributions\" is the account I should\nhave been looking at and it is live. I guess it migrated at some\npoint, or something. Phew.)\n\n\n", "msg_date": "Wed, 12 Jul 2023 13:37:22 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "> > - Parameters are strings, so callback is not provided yet.\n> > 2. 
Client gets PgConn from PQconnectStart return value and updates\n> > conn->async_auth to its own callback.\n>\n> This is where some sort of official authn callback registration (see\n> above reply to Daniele) would probably come in handy.\n+1\n\n> > 3. Client polls PQconnectPoll and checks conn->sasl_state until the\n> > value is SASL_ASYNC\n>\n> In my head, the client's custom callback would always be invoked\n> during the call to PQconnectPoll, rather than making the client do\n> work in between calls. That way, a client can use custom flows even\n> with a synchronous PQconnectdb().\nThe way I see this API working is the asynchronous client needs at least 2\nPQconnectPoll calls:\n1. To be notified of what the authentication requirements are and get\nparameters.\n2. When it acquires the token, the callback is used to inform libpq of the\ntoken and return PGRES_POLLING_OK.\n\nFor the synchronous client, the callback implementation would need to be\naware of the fact that synchronous implementation invokes callback\nfrequently and be implemented accordingly.\n\nBottom line, I don't see much problem with the current proposal. Just the\nway of callback to know that OAUTH token is requested and get parameters\nrelies on PQconnectPoll being invoked after corresponding parameters of\nconn object are populated.\n\n> > > 5. Expectations on async_auth:\n> > > a. It returns PGRES_POLLING_READING while token acquisition is\ngoing on\n> > > b. It returns PGRES_POLLING_OK and sets conn->sasl_state->token\n> > > when token acquisition succeeds.\n> >\n> > Yes. Though the token should probably be returned through some\n> > explicit part of the callback, now that you mention it...\n>\n> > 6. Is the client supposed to do anything with the altsock parameter?\n>\n> The callback needs to set the altsock up with a select()able\n> descriptor, which wakes up the client when more work is ready to be\n> done. 
Without that, you can't handle multiple connections on a single\n> thread.\n\nOk, thanks for clarification.\n\n> > On a separate note:\n> > The backend code currently spawns an external command for token\nvalidation.\n> > As we discussed before, an extension hook would be a more efficient\n> > extensibility option.\n> > We see clients make 10k+ connections using OAuth tokens per minute to\n> > our service, and stating external processes would be too much overhead\n> > here.\n>\n> +1. I'm curious, though -- what language do you expect to use to write\n> a production validator hook? Surely not low-level C...?\n\nFor the server side code, it would likely be identity providers publishing\nextensions to validate their tokens.\nThose can do that in C too. Or extensions now can be implemented in Rust\nusing pgrx. Which is developer friendly enough in my opinion.\n\n> Yeah, I'm really interested in seeing which existing high-level flows\n> can be mixed in through a driver. Trying not to get too far ahead of\n> myself :D\n\nI can think of the following as the most common:\n1. Authorization code with PKCE. This is by far the most common for the\nuser login flows. Requires to spin up a browser and listen to redirect\nURL/Port. Most high level platforms have libraries to do both.\n2. Client Certificates. This requires an identity provider specific library\nto construct and sign the token. The providers publish SDKs to do that for\nmost common app development platforms.\n", "msg_date": "Wed, 12 Jul 2023 21:51:48 -0700", "msg_from": "Andrey Chudnovsky <achudnovskij@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Tue, Jul 11, 2023 at 10:50 AM Jacob Champion\n<jchampion@timescale.com> wrote:\n> I have a WIP patch that passes tests on FreeBSD, which I'll clean up\n> and post Sometime Soon. 
macOS builds now but still fails before it\n> runs the test; looks like it's having trouble finding OpenSSL during\n> `pip install` of the test modules...\n\nHi Thomas,\n\nv9 folds in your kqueue implementation (thanks again!) and I have a\nquick question to check my understanding:\n\n> + case CURL_POLL_REMOVE:\n> + /*\n> + * We don't know which of these is currently registered, perhaps\n> + * both, so we try to remove both. This means we need to tolerate\n> + * ENOENT below.\n> + */\n> + EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE, 0, 0, 0);\n> + nev++;\n> + EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE, 0, 0, 0);\n> + nev++;\n> + break;\n\nWe're not setting EV_RECEIPT for these -- is that because none of the\nfilters we're using are EV_CLEAR, and so it doesn't matter if we\naccidentally pull pending events off the queue during the kevent() call?\n\nv9 also improves the Cirrus debugging experience and fixes more issues\non macOS, so the tests should be green there now. The final patch in the\nseries works around what I think is a build bug in psycopg2 2.9 [1] for\nthe BSDs+meson.\n\nThanks,\n--Jacob\n\n[1] https://github.com/psycopg/psycopg2/issues/1599", "msg_date": "Mon, 17 Jul 2023 16:55:06 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Tue, Jul 18, 2023 at 11:55 AM Jacob Champion <jchampion@timescale.com> wrote:\n> We're not setting EV_RECEIPT for these -- is that because none of the\n> filters we're using are EV_CLEAR, and so it doesn't matter if we\n> accidentally pull pending events off the queue during the kevent() call?\n\n+1 for EV_RECEIPT (\"just tell me about errors, don't drain any\nevents\"). I had a vague memory that it caused portability problems.\nJust checked... it was OpenBSD I was thinking of, but they finally\nadded that flag in 6.2 (2017). 
Our older-than-that BF OpenBSD animal\nrecently retired so that should be fine. (Yes, without EV_CLEAR it's\n\"level triggered\" not \"edge triggered\" in epoll terminology, so the\nway I had it was not broken, but the way you're suggesting would be\nnicer.) Note that you'll have to skip data == 0 (no error) too.\n\n+ #ifdef HAVE_SYS_EVENT_H\n+ /* macOS doesn't define the time unit macros, but uses milliseconds\nby default. */\n+ #ifndef NOTE_MSECONDS\n+ #define NOTE_MSECONDS 0\n+ #endif\n+ #endif\n\nWhile comparing the cousin OSs' man pages just now, I noticed that\nit's not only macOS that lacks NOTE_MSECONDS, it's also OpenBSD and\nNetBSD < 10. Maybe just delete that cruft ^^^ and use literal 0 in\nfflags directly. FreeBSD, and recently also NetBSD, decided to get\nfancy with high resolution timers, but 0 gets the traditional unit of\nmilliseconds on all platforms (I just wrote it like that because I\nstarted from FreeBSD and didn't know the history/portability story).\n\n\n", "msg_date": "Wed, 19 Jul 2023 11:03:53 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Tue, Jul 18, 2023 at 4:04 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Jul 18, 2023 at 11:55 AM Jacob Champion <jchampion@timescale.com> wrote:\n> +1 for EV_RECEIPT (\"just tell me about errors, don't drain any\n> events\").\n\nSounds good.\n\n> While comparing the cousin OSs' man pages just now, I noticed that\n> it's not only macOS that lacks NOTE_MSECONDS, it's also OpenBSD and\n> NetBSD < 10. 
Maybe just delete that cruft ^^^ and use literal 0 in\n> fflags directly.\n\nSo I don't lose track of it, here's a v10 with those two changes.\n\nThanks!\n--Jacob", "msg_date": "Wed, 26 Jul 2023 09:43:14 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "v11 is a quick rebase over the recent Cirrus changes, and I've dropped\n0006 now that psycopg2 can build against BSD/Meson setups (thanks Daniele!).\n\n--Jacob", "msg_date": "Wed, 30 Aug 2023 15:57:39 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "v12 implements a first draft of a client hook, so applications can\nreplace either the device prompt or the entire OAuth flow. (Andrey and\nMahendrakar: hopefully this is close to what you need.) It also cleans\nup some of the JSON tech debt.\n\nSince (IMO) we don't want to introduce new hooks every time we make\nimprovements to the internal flows, the new hook is designed to\nretrieve multiple pieces of data from the application. Clients either\ndeclare their ability to get that data, or delegate the job to the\nnext link in the chain, which by default is a no-op. That lets us add\nnew data types to the end, and older clients will ignore them until\nthey're taught otherwise. (I'm trying hard not to over-engineer this,\nbut it seems like the concept of \"give me some piece of data to\ncontinue authenticating\" could pretty easily subsume things like the\nPQsslKeyPassHook if we wanted.)\n\nThe PQAUTHDATA_OAUTH_BEARER_TOKEN case is the one that replaces the\nflow entirely, as discussed upthread. Your application gets the\ndiscovery URI and the requested scope for the connection. It can then\neither delegate back to libpq (e.g. if the issuer isn't one it can\nhelp with), immediately return a token (e.g. 
if one is already cached\nfor the current user), or install a nonblocking callback to implement\na custom async flow. When the connection is closed (or fails), the\nhook provides a cleanup function to free any resources it may have\nallocated.\n\nThanks,\n--Jacob", "msg_date": "Wed, 6 Sep 2023 15:11:23 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Hi,\n\nOn Fri, 3 Nov 2023 at 17:14, Jacob Champion <jchampion@timescale.com> wrote:\n>\n> v12 implements a first draft of a client hook, so applications can\n> replace either the device prompt or the entire OAuth flow. (Andrey and\n> Mahendrakar: hopefully this is close to what you need.) It also cleans\n> up some of the JSON tech debt.\n\nI went through CFbot and found that build is failing, links:\n\nhttps://cirrus-ci.com/task/6061898244816896\nhttps://cirrus-ci.com/task/6624848198238208\nhttps://cirrus-ci.com/task/5217473314684928\nhttps://cirrus-ci.com/task/6343373221527552\n\nJust want to make sure you are aware of these failures.\n\nThanks,\nShlok Kumar Kyal\n\n\n", "msg_date": "Fri, 3 Nov 2023 17:58:28 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Fri, Nov 3, 2023 at 5:28 AM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:\n> Just want to make sure you are aware of these failures.\n\nThanks for the nudge! Looks like I need to reconcile with the changes\nto JsonLexContext in 1c99cde2. 
I should be able to get to that next\nweek; in the meantime I'll mark it Waiting on Author.\n\n--Jacob", "msg_date": "Fri, 3 Nov 2023 16:48:29 -0700", "msg_from": "Jacob Champion <champion.p@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Fri, Nov 3, 2023 at 4:48 PM Jacob Champion <champion.p@gmail.com> wrote:\n> Thanks for the nudge! Looks like I need to reconcile with the changes\n> to JsonLexContext in 1c99cde2. I should be able to get to that next\n> week; in the meantime I'll mark it Waiting on Author.\n\nv13 rebases over latest. The JsonLexContext changes have simplified\n0001 quite a bit, and there's probably a bit more minimization that\ncould be done.\n\nUnfortunately the configure/Makefile build of libpq now seems to be\npulling in an `exit()` dependency in a way that Meson does not. (Or\nmaybe Meson isn't checking?) I still need to investigate that\ndifference and fix it, so I recommend Meson if you're looking to\ntest-drive a build.\n\nThanks,\n--Jacob", "msg_date": "Wed, 8 Nov 2023 11:00:18 -0800", "msg_from": "Jacob Champion <champion.p@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Hi Jacob,\n\nWanted to follow up on one of the topics discussed here in the past:\nDo you plan to support adding an extension hook to validate the token?\n\nIt would allow a more efficient integration, than spinning a separate\nprocess.\n\nThanks!\nAndrey.\n\nOn Wed, Nov 8, 2023 at 11:00 AM Jacob Champion <champion.p@gmail.com> wrote:\n\n> On Fri, Nov 3, 2023 at 4:48 PM Jacob Champion <champion.p@gmail.com>\n> wrote:\n> > Thanks for the nudge! Looks like I need to reconcile with the changes\n> > to JsonLexContext in 1c99cde2. I should be able to get to that next\n> > week; in the meantime I'll mark it Waiting on Author.\n>\n> v13 rebases over latest. 
The JsonLexContext changes have simplified\n> 0001 quite a bit, and there's probably a bit more minimization that\n> could be done.\n>\n> Unfortunately the configure/Makefile build of libpq now seems to be\n> pulling in an `exit()` dependency in a way that Meson does not. (Or\n> maybe Meson isn't checking?) I still need to investigate that\n> difference and fix it, so I recommend Meson if you're looking to\n> test-drive a build.\n>\n> Thanks,\n> --Jacob\n>\n\nHi Jacob,Wanted to follow up on one of the topics discussed here in the past:Do you plan to support adding an extension hook to validate the token?It would allow a more efficient integration, then spinning a separate process.Thanks!Andrey.On Wed, Nov 8, 2023 at 11:00 AM Jacob Champion <champion.p@gmail.com> wrote:On Fri, Nov 3, 2023 at 4:48 PM Jacob Champion <champion.p@gmail.com> wrote:\n> Thanks for the nudge! Looks like I need to reconcile with the changes\n> to JsonLexContext in 1c99cde2. I should be able to get to that next\n> week; in the meantime I'll mark it Waiting on Author.\n\nv13 rebases over latest. The JsonLexContext changes have simplified\n0001 quite a bit, and there's probably a bit more minimization that\ncould be done.\n\nUnfortunately the configure/Makefile build of libpq now seems to be\npulling in an `exit()` dependency in a way that Meson does not. (Or\nmaybe Meson isn't checking?) 
I still need to investigate that\n> difference and fix it, so I recommend Meson if you're looking to\n> test-drive a build.\n>\n> Thanks,\n> --Jacob\n>\n", "msg_date": "Thu, 9 Nov 2023 17:42:52 -0800", "msg_from": "Andrey Chudnovsky <achudnovskij@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Thu, Nov 9, 2023 at 5:43 PM Andrey Chudnovsky <achudnovskij@gmail.com> wrote:\n> Do you plan to support adding an extension hook to validate the token?\n>\n> It would allow a more efficient integration, then spinning a separate process.\n\nI think an API in the style of archive modules might probably be a\ngood way to go, yeah.\n\nIt's probably not very high on the list of priorities, though, since\nthe inputs and outputs are going to \"look\" the same whether you're\ninside or outside of the server process. The client side is going to\nneed the bulk of the work/testing/validation. Speaking of which -- how\nis the current PQauthDataHook design doing when paired with MS AAD\n(er, Entra now I guess)? 
I still need to investigate that\n> difference and fix it, so I recommend Meson if you're looking to\n> test-drive a build.\n\nThere is no corresponding check in the Meson build, which seems like a TODO.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 5 Dec 2023 10:43:48 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Tue, Dec 5, 2023 at 1:44 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 8 Nov 2023, at 20:00, Jacob Champion <champion.p@gmail.com> wrote:\n>\n> > Unfortunately the configure/Makefile build of libpq now seems to be\n> > pulling in an `exit()` dependency in a way that Meson does not.\n>\n> I believe this comes from the libcurl and specifically the ntlm_wb support\n> which is often enabled in system and package manager provided installations.\n> There isn't really a fix here apart from requiring a libcurl not compiled with\n> ntlm_wb support, or adding an exception to the exit() check in the Makefile.\n>\n> Bringing this up with other curl developers to see if it could be fixed we\n> instead decided to deprecate the whole module as it's quirky and not used much.\n> This won't help with existing installations but at least it will be deprecated\n> and removed by the time v17 ships, so gating on a version shipped after its\n> removal will avoid it.\n>\n> https://github.com/curl/curl/commit/04540f69cfd4bf16e80e7c190b645f1baf505a84\n\nOoh, thank you for looking into that and fixing it!\n\n> > (Or maybe Meson isn't checking?) 
I still need to investigate that\n> > difference and fix it, so I recommend Meson if you're looking to\n> > test-drive a build.\n>\n> There is no corresponding check in the Meson build, which seems like a TODO.\n\nOkay, I'll look into that too when I get time.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 9 Jan 2024 10:48:55 -0800", "msg_from": "Jacob Champion <champion.p@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Hi all,\n\nv14 rebases over latest and fixes a warning when assertions are\ndisabled. 0006 is temporary and hacks past a couple of issues I have\nnot yet root caused -- one of which makes me wonder if 0001 needs to\nbe considered alongside the recent pg_combinebackup and incremental\nJSON work...?\n\n--Jacob", "msg_date": "Tue, 20 Feb 2024 17:00:28 -0800", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Tue, Feb 20, 2024 at 5:00 PM Jacob Champion\n<jacob.champion@enterprisedb.com> wrote:\n> v14 rebases over latest and fixes a warning when assertions are\n> disabled.\n\nv15 is a housekeeping update that adds typedefs.list entries and runs\npgindent. It also includes a temporary patch from Daniel to get the\ncfbot a bit farther (see above discussion on libcurl/exit).\n\n--Jacob", "msg_date": "Thu, 22 Feb 2024 06:08:41 -0800", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Thu, Feb 22, 2024 at 6:08 AM Jacob Champion\n<jacob.champion@enterprisedb.com> wrote:\n> v15 is a housekeeping update that adds typedefs.list entries and runs\n> pgindent.\n\nv16 is more transformational!\n\nDaniel contributed 0004, which completely replaces the\nvalidator_command architecture with a C module API. 
This solves a\nbunch of problems as discussed upthread and vastly simplifies the test\nframework for the server side. 0004 also adds a set of Perl tests,\nwhich will begin to subsume some of the Python server-side tests as I\nget around to porting them. (@Daniel: 0005 is my diff against your\noriginal patch, for review.)\n\n0008 has been modified to quickfix the pgcommon linkage on the\nMakefile side; my previous attempt at this only fixed Meson. The\npatchset is now carrying a lot of squash-cruft, and I plan to flatten\nit in the next version.\n\nThanks,\n--Jacob", "msg_date": "Fri, 23 Feb 2024 17:01:28 -0800", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Tue, Feb 27, 2024 at 11:20 AM Jacob Champion\n<jacob.champion@enterprisedb.com> wrote:\n> This is done in v17, which is also now based on the two patches pulled\n> out by Daniel in [1].\n\nIt looks like my patchset has been eaten by a malware scanner:\n\n 550 Message content failed content scanning\n(Sanesecurity.Foxhole.Mail_gz.UNOFFICIAL)\n\nWas there a recent change to the lists? Is anyone able to see what the\nactual error was so I don't do it again?\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 27 Feb 2024 11:33:55 -0800", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "[Trying again, with all patches unzipped and the CC list temporarily\nremoved to avoid flooding people's inboxes. Original message follows.]\n\nOn Fri, Feb 23, 2024 at 5:01 PM Jacob Champion\n<jacob.champion@enterprisedb.com> wrote:\n> The\n> patchset is now carrying a lot of squash-cruft, and I plan to flatten\n> it in the next version.\n\nThis is done in v17, which is also now based on the two patches pulled\nout by Daniel in [1]. 
Besides the squashes, which make up most of the\nrange-diff, I've fixed a call to strncasecmp() which is not available\non Windows.\n\nDaniel and I discussed trying a Python version of the test server,\nsince the standard library there should give us more goodies to work\nwith. A proof of concept is in 0009. I think the big question I have\nfor it is, how would we communicate what we want the server to do for\nthe test? (We could perhaps switch on magic values of the client ID?)\nIn the end I'd like to be testing close to 100% of the failure modes,\nand that's likely to mean a lot of back-and-forth if the server\nimplementation isn't in the Perl process.\n\n--Jacob\n\n[1] https://postgr.es/m/flat/F51F8777-FAF5-49F2-BC5E-8F9EB423ECE0%40yesql.se", "msg_date": "Wed, 28 Feb 2024 06:05:52 -0800", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "> On 28 Feb 2024, at 15:05, Jacob Champion <jacob.champion@enterprisedb.com> wrote:\n> \n> [Trying again, with all patches unzipped and the CC list temporarily\n> removed to avoid flooding people's inboxes. Original message follows.]\n> \n> On Fri, Feb 23, 2024 at 5:01 PM Jacob Champion\n> <jacob.champion@enterprisedb.com> wrote:\n>> The\n>> patchset is now carrying a lot of squash-cruft, and I plan to flatten\n>> it in the next version.\n> \n> This is done in v17, which is also now based on the two patches pulled\n> out by Daniel in [1]. Besides the squashes, which make up most of the\n> range-diff, I've fixed a call to strncasecmp() which is not available\n> on Windows.\n> \n> Daniel and I discussed trying a Python version of the test server,\n> since the standard library there should give us more goodies to work\n> with. A proof of concept is in 0009. I think the big question I have\n> for it is, how would we communicate what we want the server to do for\n> the test? 
(We could perhaps switch on magic values of the client ID?)\n> In the end I'd like to be testing close to 100% of the failure modes,\n> and that's likely to mean a lot of back-and-forth if the server\n> implementation isn't in the Perl process.\n\nThanks for the new version, I'm digesting the test patches but for now I have a\nfew smaller comments:\n\n\n+#define ALLOC(size) malloc(size)\nI wonder if we should use pg_malloc_extended(size, MCXT_ALLOC_NO_OOM) instead\nto self-document the code. We clearly don't want feature-parity with server-\nside palloc here. I know we use malloc in similar ALLOC macros so it's not\nunique in that regard, but maybe?\n\n\n+#ifdef FRONTEND\n+ destroyPQExpBuffer(lex->errormsg);\n+#else\n+ pfree(lex->errormsg->data);\n+ pfree(lex->errormsg);\n+#endif\nWouldn't it be nicer if we abstracted this into a destroyStrVal function to a)\navoid the ifdefs and b) make it more like the rest of the new API? While it's\nonly used in two places (close to each other) it's a shame to let the\nunderlying API bleed through the abstraction.\n\n\n+ CURLM *curlm; /* top-level multi handle for cURL operations */\nNitpick, but curl is not capitalized cURL anymore (for some value of \"anymore\"\nsince it changed in 2016 [0]). I do wonder if we should consistently write\n\"libcurl\" as well since we don't use curl but libcurl.\n\n\n+ PQExpBufferData work_data; /* scratch buffer for general use (remember\n+ to clear out prior contents first!) */\nThis seems like asking for subtle bugs due to uncleared buffers bleeding into\nanother operation (especially since we are writing this data across the wire).\nHow about having an array the size of OAuthStep of unallocated buffers where\neach step uses its own? Storing the content of each step could also be useful\nfor debugging. 
Looking at the statemachine here it's not an obvious change but\nalso not impossible.\n\n\n+ * TODO: This disables DNS resolution timeouts unless libcurl has been\n+ * compiled against alternative resolution support. We should check that.\ncurl_version_info() can be used to check for c-ares support.\n\n\n+ * so you don't have to write out the error handling every time. They assume\n+ * that they're embedded in a function returning bool, however.\nIt feels a bit iffy to encode the returntype in the macro, we can use the same\ntrick that DISABLE_SIGPIPE employs where a failaction is passed in.\n\n\n+ if (!strcmp(name, field->name))\nProject style is to test for (strcmp(x,y) == 0) rather than (!strcmp()) to\nimprove readability.\n\n\n+ libpq_append_conn_error(conn, \"out of memory\");\nWhile not introduced in this patch, it's not an ideal pattern to report \"out of\nmemory\" errors via a function which may allocate memory.\n\n\n+ appendPQExpBufferStr(&conn->errorMessage,\n+ libpq_gettext(\"server's error message contained an embedded NULL\"));\nWe should maybe add \", discarding\" or something similar after this string to\nindicate that there was an actual error which has been thrown away, the error\nwasn't that the server passed an embedded NULL.\n\n\n+#ifdef USE_OAUTH\n+ else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&\n+ !selected_mechanism)\nI wonder if we instead should move the guards inside the statement and error\nout with \"not built with OAuth support\" or something similar like how we do\nwith TLS and other optional components?\n\n\n+ errdetail(\"Comma expected, but found character %s.\",\n+ sanitize_char(*p))));\nThe %s formatter should be wrapped like '%s' to indicate that the message part\nis the character in question (and we can then reuse the translation since the\nerror message already exist for SCRAM).\n\n\n+ temp = curl_slist_append(temp, \"authorization_code\");\n+ if (!temp)\n+ oom = true;\n+\n+ temp = curl_slist_append(temp, 
\"implicit\");\nWhile not a bug per se, it reads a bit odd to call another operation that can\nallocate memory when the oom flag has been set. I think we can move some\nthings around a little to make it clearer.\n\nThe attached diff contains some (most?) of the above as a patch on top of your\nv17, but as a .txt to keep the CFBot from munging on it.\n\n--\nDaniel Gustafsson", "msg_date": "Wed, 28 Feb 2024 18:40:23 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "\nOn 2024-02-28 We 09:05, Jacob Champion wrote:\n>\n> Daniel and I discussed trying a Python version of the test server,\n> since the standard library there should give us more goodies to work\n> with. A proof of concept is in 0009. I think the big question I have\n> for it is, how would we communicate what we want the server to do for\n> the test? (We could perhaps switch on magic values of the client ID?)\n> In the end I'd like to be testing close to 100% of the failure modes,\n> and that's likely to mean a lot of back-and-forth if the server\n> implementation isn't in the Perl process.\n\n\n\nCan you give some more details about what this python gadget would buy \nus? I note that there are a couple of CPAN modules that provide OAuth2 \nservers, not sure if they would be of any use.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 28 Feb 2024 16:50:23 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "> On 28 Feb 2024, at 22:50, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> On 2024-02-28 We 09:05, Jacob Champion wrote:\n>> \n>> Daniel and I discussed trying a Python version of the test server,\n>> since the standard library there should give us more goodies to work\n>> with. A proof of concept is in 0009. 
I think the big question I have\n>> for it is, how would we communicate what we want the server to do for\n>> the test? (We could perhaps switch on magic values of the client ID?)\n>> In the end I'd like to be testing close to 100% of the failure modes,\n>> and that's likely to mean a lot of back-and-forth if the server\n>> implementation isn't in the Perl process.\n> \n> Can you give some more details about what this python gadget would buy us? I note that there are a couple of CPAN modules that provide OAuth2 servers, not sure if they would be of any use.\n\nThe main benefit would be to be able to provide a full testharness without\nadding any additional dependencies over what we already have (Python being\nrequired by meson). That should ideally make it easy to get good coverage from\nBF animals as no installation is needed.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 28 Feb 2024 22:52:40 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "[re-adding the CC list I dropped earlier]\n\nOn Wed, Feb 28, 2024 at 1:52 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 28 Feb 2024, at 22:50, Andrew Dunstan <andrew@dunslane.net> wrote:\n> > Can you give some more details about what this python gadget would buy us? I note that there are a couple of CPAN modules that provide OAuth2 servers, not sure if they would be of any use.\n>\n> The main benefit would be to be able to provide a full testharness without\n> adding any additional dependencies over what we already have (Python being\n> required by meson). That should ideally make it easy to get good coverage from\n> BF animals as no installation is needed.\n\nAs an additional note, the test suite ideally needs to be able to\nexercise failure modes where the provider itself is malfunctioning. 
So\nwe hand-roll responses rather than deferring to an external\nOAuth/OpenID implementation, which adds HTTP and JSON dependencies at\nminimum, and Python includes both. See also the discussion with\nStephen upthread [1].\n\n(I do think it'd be nice to eventually include a prepackaged OAuth\nserver in the test suite, to stack coverage for the happy path and\nfurther test interoperability.)\n\nThanks,\n--Jacob\n\n[1] https://postgr.es/m/CAAWbhmh%2B6q4t3P%2BwDmS%3DJuHBpcgF-VM2cXNft8XV02yk-cHCpQ%40mail.gmail.com\n\n\n", "msg_date": "Thu, 29 Feb 2024 06:49:02 -0800", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "> On 27 Feb 2024, at 20:20, Jacob Champion <jacob.champion@enterprisedb.com> wrote:\n> \n> On Fri, Feb 23, 2024 at 5:01 PM Jacob Champion\n> <jacob.champion@enterprisedb.com> wrote:\n>> The\n>> patchset is now carrying a lot of squash-cruft, and I plan to flatten\n>> it in the next version.\n> \n> This is done in v17, which is also now based on the two patches pulled\n> out by Daniel in [1]. Besides the squashes, which make up most of the\n> range-diff, I've fixed a call to strncasecmp() which is not available\n> on Windows.\n\nTwo quick questions:\n\n+ /* TODO */\n+ CHECK_SETOPT(actx, CURLOPT_WRITEDATA, stderr);\nI might be missing something, but what this is intended for in\nsetup_curl_handles()?\n\n\n--- /dev/null\n+++ b/src/interfaces/libpq/fe-auth-oauth-iddawc.c\nAs discussed off-list I think we should leave iddawc support for later and\nfocus on getting one library properly supported to start with. If you agree,\nlet's drop this from the patchset to make it easier to digest. 
We should make\nsure we keep pluggability such that another library can be supported though,\nmuch like the libpq TLS support.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 29 Feb 2024 22:08:44 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Wed, Feb 28, 2024 at 9:40 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> +#define ALLOC(size) malloc(size)\n> I wonder if we should use pg_malloc_extended(size, MCXT_ALLOC_NO_OOM) instead\n> to self document the code. We clearly don't want feature-parity with server-\n> side palloc here. I know we use malloc in similar ALLOC macros so it's not\n> unique in that regard, but maybe?\n\nI have a vague recollection that linking fe_memutils into libpq\ntripped the exit() checks, but I can try again and see.\n\n> +#ifdef FRONTEND\n> + destroyPQExpBuffer(lex->errormsg);\n> +#else\n> + pfree(lex->errormsg->data);\n> + pfree(lex->errormsg);\n> +#endif\n> Wouldn't it be nicer if we abstracted this into a destroyStrVal function to a)\n> avoid the ifdefs and b) make it more like the rest of the new API? While it's\n> only used in two places (close to each other) it's a shame to let the\n> underlying API bleed through the abstraction.\n\nGood idea. I'll fold this from your patch into the next set (and do\nthe same for the ones I've marked +1 below).\n\n> + CURLM *curlm; /* top-level multi handle for cURL operations */\n> Nitpick, but curl is not capitalized cURL anymore (for some value of \"anymore\"\n> since it changed in 2016 [0]). I do wonder if we should consistently write\n> \"libcurl\" as well since we don't use curl but libcurl.\n\nHuh, I missed that memo. Thanks -- that makes it much easier to type!\n\n> + PQExpBufferData work_data; /* scratch buffer for general use (remember\n> + to clear out prior contents first!) 
*/\n> This seems like asking for subtle bugs due to uncleared buffers bleeding into\n> another operation (especially since we are writing this data across the wire).\n> How about having an array the size of OAuthStep of unallocated buffers where\n> each step use it's own? Storing the content of each step could also be useful\n> for debugging. Looking at the statemachine here it's not an obvious change but\n> also not impossible.\n\nI like that idea; I'll give it a look.\n\n> + * TODO: This disables DNS resolution timeouts unless libcurl has been\n> + * compiled against alternative resolution support. We should check that.\n> curl_version_info() can be used to check for c-ares support.\n\n+1\n\n> + * so you don't have to write out the error handling every time. They assume\n> + * that they're embedded in a function returning bool, however.\n> It feels a bit iffy to encode the returntype in the macro, we can use the same\n> trick that DISABLE_SIGPIPE employs where a failaction is passed in.\n\n+1\n\n> + if (!strcmp(name, field->name))\n> Project style is to test for (strcmp(x,y) == 0) rather than (!strcmp()) to\n> improve readability.\n\n+1\n\n> + libpq_append_conn_error(conn, \"out of memory\");\n> While not introduced in this patch, it's not an ideal pattern to report \"out of\n> memory\" errors via a function which may allocate memory.\n\nDoes trying (and failing) to allocate more memory cause any harm? 
Best\ncase, we still have enough room in the errorMessage to fit the whole\nerror; worst case, we mark the errorMessage broken and then\nPQerrorMessage() can handle it correctly.\n\n> + appendPQExpBufferStr(&conn->errorMessage,\n> + libpq_gettext(\"server's error message contained an embedded NULL\"));\n> We should maybe add \", discarding\" or something similar after this string to\n> indicate that there was an actual error which has been thrown away, the error\n> wasn't that the server passed an embedded NULL.\n\n+1\n\n> +#ifdef USE_OAUTH\n> + else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&\n> + !selected_mechanism)\n> I wonder if we instead should move the guards inside the statement and error\n> out with \"not built with OAuth support\" or something similar like how we do\n> with TLS and other optional components?\n\nThis one seems like a step backwards. IIUC, the client can currently\nhandle a situation where the server returns multiple mechanisms\n(though the server doesn't support that yet), and I'd really like to\nmake use of that property without making users upgrade libpq.\n\nThat said, it'd be good to have a more specific error message in the\ncase where we don't have a match...\n\n> + errdetail(\"Comma expected, but found character %s.\",\n> + sanitize_char(*p))));\n> The %s formatter should be wrapped like '%s' to indicate that the message part\n> is the character in question (and we can then reuse the translation since the\n> error message already exist for SCRAM).\n\n+1\n\n> + temp = curl_slist_append(temp, \"authorization_code\");\n> + if (!temp)\n> + oom = true;\n> +\n> + temp = curl_slist_append(temp, \"implicit\");\n> While not a bug per se, it reads a bit odd to call another operation that can\n> allocate memory when the oom flag has been set. I think we can move some\n> things around a little to make it clearer.\n\nI'm not a huge fan of nested happy paths/pyramids of doom, but in this\ncase it's small enough that I'm not opposed. 
:D\n\n> The attached diff contains some (most?) of the above as a patch on top of your\n> v17, but as a .txt to keep the CFBot from munging on it.\n\nThanks very much! I plan to apply all but the USE_OAUTH guard change\n(but let me know if you feel strongly about it).\n\n--Jacob\n\n\n", "msg_date": "Thu, 29 Feb 2024 16:04:05 -0800", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Thu, Feb 29, 2024 at 1:08 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> + /* TODO */\n> + CHECK_SETOPT(actx, CURLOPT_WRITEDATA, stderr);\n> I might be missing something, but what this is intended for in\n> setup_curl_handles()?\n\nAh, that's cruft left over from early debugging, just so that I could\nsee what was going on. I'll remove it.\n\n> --- /dev/null\n> +++ b/src/interfaces/libpq/fe-auth-oauth-iddawc.c\n> As discussed off-list I think we should leave iddawc support for later and\n> focus on getting one library properly supported to start with. If you agree,\n> let's drop this from the patchset to make it easier to digest. We should make\n> sure we keep pluggability such that another library can be supported though,\n> much like the libpq TLS support.\n\nAgreed. 
The number of changes being folded into the next set is\nalready pretty big so I think this will wait until next+1.\n\n--Jacob\n\n\n", "msg_date": "Thu, 29 Feb 2024 16:11:49 -0800", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Thu, Feb 29, 2024 at 4:04 PM Jacob Champion\n<jacob.champion@enterprisedb.com> wrote:\n> On Wed, Feb 28, 2024 at 9:40 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > + temp = curl_slist_append(temp, \"authorization_code\");\n> > + if (!temp)\n> > + oom = true;\n> > +\n> > + temp = curl_slist_append(temp, \"implicit\");\n> > While not a bug per se, it reads a bit odd to call another operation that can\n> > allocate memory when the oom flag has been set. I think we can move some\n> > things around a little to make it clearer.\n>\n> I'm not a huge fan of nested happy paths/pyramids of doom, but in this\n> case it's small enough that I'm not opposed. :D\n\nI ended up rewriting this patch hunk a bit to handle earlier OOM\nfailures; let me know what you think.\n\n--\n\nv18 is the result of plenty of yak shaving now that the Windows build\nis working. In addition to Daniel's changes as discussed upthread,\n- I have rebased over v2 of the SASL-refactoring patches\n- the last CompilerWarnings failure has been fixed\n- the py.test suite now runs on Windows (but does not yet completely pass)\n- py.test has been completely disabled for the 32-bit Debian test in\nCirrus; I don't know if there's a way to install 32-bit Python\nside-by-side with 64-bit\n\nWe are now very, very close to green.\n\nThe new oauth_validator tests can't work on Windows, since the client\ndoesn't support OAuth there. 
The python/server tests can handle this\ncase, since they emulate the client behavior; do we want to try\nsomething similar in Perl?\n\n--Jacob", "msg_date": "Thu, 29 Feb 2024 17:08:01 -0800", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Thu, Feb 29, 2024 at 5:08 PM Jacob Champion\n<jacob.champion@enterprisedb.com> wrote:\n> We are now very, very close to green.\n\nv19 gets us a bit closer by adding a missed import for Windows. I've\nalso removed iddawc support, so the client patch is lighter.\n\n> The new oauth_validator tests can't work on Windows, since the client\n> doesn't support OAuth there. The python/server tests can handle this\n> case, since they emulate the client behavior; do we want to try\n> something similar in Perl?\n\nIn addition to this question, I'm starting to notice intermittent\nfailures of the form\n\n error: ... failed to fetch OpenID discovery document: failed to\nqueue HTTP request\n\nThis corresponds to a TODO in the libcurl implementation -- if the\ninitial call to curl_multi_socket_action() reports that no handles are\nrunning, I treated that as an error. But it looks like it's possible\nfor libcurl to finish a request synchronously if the remote responds\nquickly enough, so that needs to change.\n\n--Jacob", "msg_date": "Fri, 1 Mar 2024 09:46:27 -0800", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Fri, Mar 1, 2024 at 9:46 AM Jacob Champion\n<jacob.champion@enterprisedb.com> wrote:\n> v19 gets us a bit closer by adding a missed import for Windows. 
I've\n> also removed iddawc support, so the client patch is lighter.\n\nv20 fixes a bunch more TODOs:\n1) the client initial response is validated more closely\n2) the server's invalid_token parameters are properly escaped into the\ncontaining JSON (though, eventually, we probably want to just reject\ninvalid HBA settings instead of passing them through to the client)\n3) Windows-specific responses have been recorded in the test suite\n\nWhile poking at item 2, I was reminded that there's an alternative way\nto get OAuth parameters from the server, and it's subtly incompatible\nwith the OpenID spec because OpenID didn't follow the rules for\n.well-known URI construction [1]. :( Some sort of knob will be\nrequired to switch the behaviors.\n\nI renamed the API for the validator module from res->authenticated to\nres->authorized. Authentication is optional, but a validator *must*\ncheck that the client it's talking to was authorized by the user to\naccess the server, whether or not the user is authenticated. (It may\nadditionally verify that the user is authorized to access the\ndatabase, or it may simply authenticate the user and defer to the\nusermap.) Documenting that particular subtlety is going to be\ninteresting...\n\nThe tests now exercise different issuers for different users, which\nwill also be a good way to signal the server to respond in different\nways during the validator tests. It does raise the question: if a\nthird party provides an issuer-specific module, how do we switch\nbetween that and some other module for a different user?\n\nAndrew asked over at [2] if we could perhaps get 0001 in as well. I\nthink the main thing to figure out there is, is requiring linkage\nagainst libpq (see 0008) going to be okay for the frontend binaries\nthat need JSON support? 
Or do we need to do something like moving\nPQExpBuffer into src/common to simplify the dependency tree?\n\n--Jacob\n\n[1] https://www.rfc-editor.org/rfc/rfc8414.html#section-5\n[2] https://www.postgresql.org/message-id/682c8fff-355c-a04f-57ac-81055c4ccda8%40dunslane.net", "msg_date": "Mon, 11 Mar 2024 15:51:24 -0700", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "v21 is a quick rebase over HEAD, which has adopted a few pieces of\nv20. I've also fixed a race condition in the tests.\n\nOn Mon, Mar 11, 2024 at 3:51 PM Jacob Champion\n<jacob.champion@enterprisedb.com> wrote:\n> Andrew asked over at [2] if we could perhaps get 0001 in as well. I\n> think the main thing to figure out there is, is requiring linkage\n> against libpq (see 0008) going to be okay for the frontend binaries\n> that need JSON support? Or do we need to do something like moving\n> PQExpBuffer into src/common to simplify the dependency tree?\n\n0001 has been pared down to the part that teaches jsonapi.c to use\nPQExpBuffer and track out-of-memory conditions; the linkage questions\nremain.\n\nThanks,\n--Jacob", "msg_date": "Fri, 22 Mar 2024 11:21:19 -0700", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "> On 22 Mar 2024, at 19:21, Jacob Champion <jacob.champion@enterprisedb.com> wrote:\n> \n> v21 is a quick rebase over HEAD, which has adopted a few pieces of\n> v20. I've also fixed a race condition in the tests.\n\nThanks for the rebase, I have a few comments from working with it a bit:\n\nIn jsonapi.c, makeJsonLexContextCstringLen initializes a JsonLexContext with\npalloc0 which would need to be ported over to use ALLOC for frontend code. 
On\nthat note, the error handling in parse_oauth_json() for content-type checks\nattempts to free the JsonLexContext even before it has been created. Here we\ncan just return false.\n\n\n- echo 'libpq must not be calling any function which invokes exit'; exit 1; \\\n+ echo 'libpq must not be calling any function which invokes exit'; \\\nThe offending codepath in libcurl was in the NTLM_WB module, a very old and\nobscure form of NTLM support which was replaced (yet remained in the tree) a\nlong time ago by a full NTLM implementation. Based on the findings in this\nthread it was deprecated with a removal date set to April 2024 [0]. A bug in\nthe 8.4.0 release however disconnected NTLM_WB from the build and given the\nlack of complaints it was decided to leave as is, so we can base our libcurl\nrequirements on 8.4.0 while keeping the exit() check intact.\n\n\n
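To make that 8.4.0 floor concrete, the runtime side of such a gate boils down to one comparison against what curl_version_info() reports. A minimal untested sketch (the helper name and macro are made up for illustration; only curl_version_info() and its version_num field are actual libcurl API):

```c
#include <stdbool.h>

/*
 * libcurl encodes its runtime version as 24 bits of 0xXXYYZZ for
 * major.minor.patch, so the first NTLM_WB-free release, 8.4.0,
 * is 0x080400.
 */
#define MIN_LIBCURL_VERSION_NUM 0x080400

/*
 * Hypothetical helper: returns true when the runtime libcurl is new
 * enough. A caller would pass in
 * curl_version_info(CURLVERSION_NOW)->version_num at startup.
 */
static bool
libcurl_version_ok(unsigned int version_num)
{
	return version_num >= MIN_LIBCURL_VERSION_NUM;
}
```

Checking at runtime rather than only at configure time also covers the case where libpq is later run against an older shared library than it was built with.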
The easiest\nway is probably to just verify the mediatype and skip the parameters since we\nknow it can only be charset?\n\n\n+ /* TODO investigate using conn->Pfdebug and CURLOPT_DEBUGFUNCTION here */\n+ CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);\n+ CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);\nCURLOPT_ERRORBUFFER is the old and finicky way of extracting error messages, we\nshould absolutely move to using CURLOPT_DEBUGFUNCTION instead.\n\n\n+ /* && response_code != 401 TODO */ )\nWhy is this marked with a TODO, do you remember?\n\n\n+ print(\"# OAuth provider (PID $pid) is listening on port $port\\n\");\nCode running under Test::More needs to use diag() for printing non-test output\nlike this.\n\n\nAnother issue I have is the sheer size and the fact that so much code is\nreplaced by subsequent commits, so I took the liberty to squash some of this\ndown into something less daunting. The attached v22 retains the 0001 and then\ncondenses the rest into two commits for frontend and backend parts. I did drop\nthe Python pytest patch since I feel that it's unlikely to go in from this\nthread (adding pytest seems worthy of its own thread and discussion), and the\nweight of it makes this seem scarier than it is. For using it, it can be\neasily applied from the v21 patchset independently. I did tweak the commit\nmessage to match reality a bit better, but there is a lot of work left there.\n\nThe final patch contains fixes for all of the above review comments as well as\nsome refactoring, smaller clean-ups and TODO fixing. 
If these fixes are\naccepted I'll incorporate them into the two commits.\n\nNext I intend to work on writing documentation for this.\n\n--\nDaniel Gustafsson\n\n[0] https://curl.se/dev/deprecate.html\n[1] https://github.com/curl/curl/pull/12479", "msg_date": "Thu, 28 Mar 2024 23:34:02 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Thu, Mar 28, 2024 at 3:34 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> In jsonapi.c, makeJsonLexContextCstringLen initialize a JsonLexContext with\n> palloc0 which would need to be ported over to use ALLOC for frontend code.\n\nSeems reasonable (but see below, too).\n\n> On\n> that note, the errorhandling in parse_oauth_json() for content-type checks\n> attempts to free the JsonLexContext even before it has been created. Here we\n> can just return false.\n\nAgreed. They're zero-initialized, so freeJsonLexContext() is safe\nIIUC, but it's clearer not to call the free function at all.\n\nBut for these additions:\n\n> - makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);\n> + if (!makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true))\n> + {\n> + actx_error(actx, \"out of memory\");\n> + return false;\n> + }\n\n...since we're using the stack-based API as opposed to the heap-based\nAPI, they shouldn't be possible to hit. Any failures in createStrVal()\nare deferred to parse time on purpose.\n\n> - echo 'libpq must not be calling any function which invokes exit'; exit 1; \\\n> + echo 'libpq must not be calling any function which invokes exit'; \\\n> The offending codepath in libcurl was in the NTLM_WB module, a very old and\n> obscure form of NTLM support which was replaced (yet remained in the tree) a\n> long time ago by a full NTLM implementatin. Based on the findings in this\n> thread it was deprecated with a removal date set to April 2024 [0]. 
A bug in\n> the 8.4.0 release however disconnected NTLM_WB from the build and given the\n> lack of complaints it was decided to leave as is, so we can base our libcurl\n> requirements on 8.4.0 while keeping the exit() check intact.\n\nOf the Cirrus machines, it looks like only FreeBSD has a new enough\nlibcurl for that. Ubuntu won't until 24.04, Debian Bookworm doesn't\nhave it unless you're running backports, RHEL 9 is still on 7.x... I\nthink requiring libcurl 8 is effectively saying no one will be able to\nuse this for a long time. Is there an alternative?\n\n> + else if (strcasecmp(content_type, \"application/json\") != 0)\n> This needs to handle parameters as well since it will now fail if the charset\n> parameter is appended (which undoubtedly will be pretty common). The easiest\n> way is probably to just verify the mediatype and skip the parameters since we\n> know it can only be charset?\n\nGood catch. application/json no longer defines charsets officially\n[1], so I think we should be able to just ignore them. The new\nstrncasecmp needs to handle a spurious prefix, too; I have that on my\nTODO list.\n\n> + /* TODO investigate using conn->Pfdebug and CURLOPT_DEBUGFUNCTION here */\n> + CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);\n> + CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);\n> CURLOPT_ERRORBUFFER is the old and finicky way of extracting error messages, we\n> should absolutely move to using CURLOPT_DEBUGFUNCTION instead.\n\nThis new way doesn't do the same thing. Here's a sample error:\n\n connection to server at \"127.0.0.1\", port 56619 failed: failed to\nfetch OpenID discovery document: Weird server reply ( Trying\n127.0.0.1:36647...\n Connected to localhost (127.0.0.1) port 36647 (#0)\n Mark bundle as not supporting multiuse\n HTTP 1.0, assume close after body\n Invalid Content-Length: value\n Closing connection 0\n )\n\nIMO that's too much noise. 
Prior to the change, the same error would have been\n\n connection to server at \"127.0.0.1\", port 56619 failed: failed to\nfetch OpenID discovery document: Weird server reply (Invalid\nContent-Length: value)\n\nThe error buffer is finicky for sure, but it's also a generic one-line\nexplanation of what went wrong... Is there an alternative API for that\nI'm missing?\n\n> + /* && response_code != 401 TODO */ )\n> Why is this marked with a TODO, do you remember?\n\nYeah -- I have a feeling that 401s coming back are going to need more\nhelpful hints to the user, since it implies that libpq itself hasn't\nauthenticated correctly as opposed to some user-related auth failure.\nI was hoping to find some sample behaviors in the wild and record\nthose into the suite.\n\n> + print(\"# OAuth provider (PID $pid) is listening on port $port\\n\");\n> Code running under Test::More need to use diag() for printing non-test output\n> like this.\n\nAh, thanks.\n\n> +#if LIBCURL_VERSION_MAJOR <= 8 && LIBCURL_VERSION_MINOR < 4\n\nI don't think this catches versions like 7.76, does it? Maybe\n`LIBCURL_VERSION_MAJOR < 8 || (LIBCURL_VERSION_MAJOR == 8 &&\nLIBCURL_VERSION_MINOR < 4)`, or else `LIBCURL_VERSION_NUM < 0x080400`?\n\n> my $pid = open(my $read_fh, \"-|\", $ENV{PYTHON}, \"t/oauth_server.py\")\n> - // die \"failed to start OAuth server: $!\";\n> + or die \"failed to start OAuth server: $!\";\n>\n> - read($read_fh, $port, 7) // die \"failed to read port number: $!\";\n> + read($read_fh, $port, 7) or die \"failed to read port number: $!\";\n\nThe first hunk here looks good (thanks for the catch!) but I think the\nsecond is not correct behavior. $! doesn't get set unless undef is\nreturned, if I'm reading the docs correctly. Yay Perl.\n\n> + /* Sanity check the previous operation */\n> + if (actx->running != 1)\n> + {\n> + actx_error(actx, \"failed to queue HTTP request\");\n> + return false;\n> + }\n\n`running` can be set to zero on success, too. 
I'm having trouble\nforcing that code path in a test so far, but we're going to have to do\nsomething special in that case.\n\n> Another issue I have is the sheer size and the fact that so much code is\n> replaced by subsequent commits, so I took the liberty to squash some of this\n> down into something less daunting. The attached v22 retains the 0001 and then\n> condenses the rest into two commits for frontent and backend parts.\n\nLooks good.\n\n> I did drop\n> the Python pytest patch since I feel that it's unlikely to go in from this\n> thread (adding pytest seems worthy of its own thread and discussion), and the\n> weight of it makes this seem scarier than it is.\n\nUntil its coverage gets ported over, can we keep it as a `DO NOT\nMERGE` patch? Otherwise there's not much to run in Cirrus.\n\n> The final patch contains fixes for all of the above review comments as well as\n> a some refactoring, smaller clean-ups and TODO fixing. If these fixes are\n> accepted I'll incorporate them into the two commits.\n>\n> Next I intend to work on writing documentation for this.\n\nAwesome, thank you! I will start adding coverage to the new code paths.\n\n--Jacob\n\n[1] https://datatracker.ietf.org/doc/html/rfc7159#section-11\n\n\n", "msg_date": "Mon, 1 Apr 2024 15:07:45 -0700", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Mon, Apr 1, 2024 at 3:07 PM Jacob Champion\n<jacob.champion@enterprisedb.com> wrote:\n>\n> Awesome, thank you! I will start adding coverage to the new code paths.\n\nThis patchset rotted more than I thought it would with the new\nincremental JSON, and I got stuck in rebase hell. Rather than chip\naway at that while the cfbot is red, here's a rebase of v22 to get the\nCI up again, and I will port what I've been working on over that. 
(So,\nfor prior reviewers: recent upthread and offline feedback is not yet\nincorporated, sorry, come back later.)\n\nThe big change in v23 is that I've removed fe_memutils.c from\nlibpgcommon_shlib completely, to try to reduce my own hair-pulling\nwhen it comes to keeping exit() out of libpq. (It snuck in several\nways with incremental JSON.)\n\nAs far as I can tell, removing fe_memutils causes only one problem,\nwhich is that Informix ECPG is relying on pnstrdup(). And I think that\nmay be a bug in itself? There's code in deccvasc() right after the\npnstrdup() call that takes care of a failed allocation, but the\nfrontend pnstrdup() is going to call exit() on failure. So my 0001\npatch reverts that change, which was made in 0b9466fce. If that can go\nin, and I'm not missing something that makes that call okay, maybe\n0002 can be peeled off as well.\n\nThanks,\n--Jacob", "msg_date": "Wed, 3 Jul 2024 10:02:01 -0700", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Hi Daniel,\n\nOn Mon, Apr 1, 2024 at 3:07 PM Jacob Champion\n<jacob.champion@enterprisedb.com> wrote:\n> Of the Cirrus machines, it looks like only FreeBSD has a new enough\n> libcurl for that. Ubuntu won't until 24.04, Debian Bookworm doesn't\n> have it unless you're running backports, RHEL 9 is still on 7.x... I\n> think requiring libcurl 8 is effectively saying no one will be able to\n> use this for a long time. Is there an alternative?\n\nSince the exit() checks appear to be happy now that fe_memutils is\nout, I've lowered the requirement to the version of libcurl that seems\nto be shipped in RHEL 8 (7.61.0). This also happens to be when TLS 1.3\nciphersuite control was added to Curl, which seems like something we\nmay want in the very near future, so I'm taking that as a good sign\nfor what is otherwise a very arbitrary cutoff point. Counterproposals\nwelcome :D\n\n> Good catch. 
application/json no longer defines charsets officially\n> [1], so I think we should be able to just ignore them. The new\n> strncasecmp needs to handle a spurious prefix, too; I have that on my\n> TODO list.\n\nI've expanded this handling in v24, attached.\n\n> This new way doesn't do the same thing. Here's a sample error:\n>\n> connection to server at \"127.0.0.1\", port 56619 failed: failed to\n> fetch OpenID discovery document: Weird server reply ( Trying\n> 127.0.0.1:36647...\n> Connected to localhost (127.0.0.1) port 36647 (#0)\n> Mark bundle as not supporting multiuse\n> HTTP 1.0, assume close after body\n> Invalid Content-Length: value\n> Closing connection 0\n> )\n>\n> IMO that's too much noise. Prior to the change, the same error would have been\n>\n> connection to server at \"127.0.0.1\", port 56619 failed: failed to\n> fetch OpenID discovery document: Weird server reply (Invalid\n> Content-Length: value)\n\nI have reverted this change for now, but I'm still hoping there's an\nalternative that can help us clean up?\n\n> `running` can be set to zero on success, too. I'm having trouble\n> forcing that code path in a test so far, but we're going to have to do\n> something special in that case.\n\nFor whatever reason, the magic timing for this is popping up more and\nmore often on Cirrus, leading to really annoying test failures. So I\nmay have to abandon the search for a perfect test case and just fix\nit.\n\n> > I did drop\n> > the Python pytest patch since I feel that it's unlikely to go in from this\n> > thread (adding pytest seems worthy of its own thread and discussion), and the\n> > weight of it makes this seem scarier than it is.\n>\n> Until its coverage gets ported over, can we keep it as a `DO NOT\n> MERGE` patch? Otherwise there's not much to run in Cirrus.\n\nI have added this back (marked loudly as don't-merge) so that we keep\nthe test coverage for now. 
The Perl suite (plus Python server) has\nbeen tricked out a lot more in v24, so it shouldn't be too bad to get\nthings ported.\n\n> > Next I intend to work on writing documentation for this.\n>\n> Awesome, thank you! I will start adding coverage to the new code paths.\n\nPeter E asked for some documentation stubs to ease review, which I've\nadded. Hopefully that doesn't step on your toes any.\n\nA large portion of your \"Review comments\" patch has been pulled\nbackwards into the previous commits; the remaining pieces are things\nI'm still peering at and/or writing tests for. I also owe this thread\nan updated roadmap and summary, to make it a little less daunting for\nnew reviewers. Soon (tm).\n\nThanks!\n--Jacob", "msg_date": "Tue, 9 Jul 2024 17:05:18 -0700", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "I have some comments about the first three patches, that deal with \nmemory management.\n\nv24-0001-Revert-ECPG-s-use-of-pnstrdup.patch\n\nThis looks right.\n\nI suppose another approach would be to put a full replacement for \nstrndup() into src/port/. But since there is currently only one user, \nand most other users should be using pnstrdup(), the presented approach \nseems ok.\n\nWe should take the check for exit() calls from libpq and expand it to \ncover the other libraries as well. Maybe there are other problems like \nthis?\n\n\nv24-0002-Remove-fe_memutils-from-libpgcommon_shlib.patch\n\nI don't quite understand how this problem can arise. The description says\n\n\"\"\"\nlibpq appears to have no need for this, and the exit() references cause\nour libpq-refs-stamp test to fail if the linker doesn't strip out the\nunused code.\n\"\"\"\n\nBut under what circumstances does \"the linker doesn't strip out\" happen? 
\n If this happens accidentally, then we should have seen some buildfarm \nfailures or something?\n\nAlso, one could look further and notice that restricted_token.c and \nsprompt.c both a) are not needed by libpq and b) can trigger exit() \ncalls. Then it's not clear why those are not affected.\n\n\nv24-0003-common-jsonapi-support-libpq-as-a-client.patch\n\nI'm reminded of thread [0]. I think there is quite a bit of confusion \nabout the pqexpbuffer vs. stringinfo APIs, and they are probably used \nincorrectly quite a bit. There are now also programs that use both of \nthem! This patch now introduces another layer on top of them. I fear, \nat the end, nobody is going to understand any of this anymore. Also, \nchanging all the programs to link in libpq for pqexpbuffer seems like \nthe opposite direction from what was suggested in [0].\n\nI think we need to do some deeper thinking here about how we want the \nmemory management on the client side to work. Maybe we could just use \none API but have some flags or callbacks to control the out-of-memory \nbehavior.\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/16d0beac-a141-e5d3-60e9-323da75f49bf%40eisentraut.org\n\n\n\n", "msg_date": "Mon, 29 Jul 2024 14:01:58 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Thanks for working on this patchset, I'm looking over 0004 and 0005 but came\nacross one thing I wanted to bring up sooner than waiting for the\nreview. 
In parse_device_authz we have this:\n\n {\"user_code\", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},\n {\"verification_uri\", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},\n\n /*\n * The following fields are technically REQUIRED, but we don't use\n * them anywhere yet:\n *\n * - expires_in\n */\n\n {\"interval\", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},\n\nTogether with a colleague we found that the Azure provider uses \"verification_url\"\nrather than xxx_uri. Another discrepancy is that it uses a string for the\ninterval (ie: \"interval\":\"5\"). One can of course argue that Azure is wrong and\nshould feel bad, but I fear that virtually all (major) providers will have\ndifferences like this, so we will have to deal with it in an extensible fashion\n(compile time, not runtime configurable).\n\nI was toying with making the json_field name member an array, to allow\nvariations. That won't help with the fieldtype differences though, so another\ntrain of thought was to have some form of REQUIRED_XOR where fields can be tied\ntogether. What do you think about something along these lines?\n\nAnother thing, shouldn't we really parse and interpret *all* REQUIRED fields\neven if we don't use them to ensure that the JSON is well-formed? If the JSON\nwe get is malformed in any way it seems like the safe/conservative option to\nerror out.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 29 Jul 2024 22:51:20 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Mon, Jul 29, 2024 at 5:02 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> We should take the check for exit() calls from libpq and expand it to\n> cover the other libraries as well. 
Maybe there are other problems like\n> this?\n\nSeems reasonable, yeah.\n\n> But under what circumstances does \"the linker doesn't strip out\" happen?\n> If this happens accidentally, then we should have seen some buildfarm\n> failures or something?\n\nOn my machine, for example, I see differences with optimization\nlevels. Say you inadvertently call pfree() in a _shlib build, as I did\nmultiple times upthread. By itself, that shouldn't actually be a\nproblem (it eventually redirects to free()), so it should be legal to\ncall pfree(), and with -O2 the build succeeds. But with -Og, the\nexit() check trips, and when I disassemble I see that pg_malloc() et\nal. have infected the shared object. After all, we did tell the linker\nto put that object file in, and we don't ask it to garbage-collect\nsections.\n\n> Also, one could look further and notice that restricted_token.c and\n> sprompt.c both a) are not needed by libpq and b) can trigger exit()\n> calls. Then it's not clear why those are not affected.\n\nI think it's easier for the linker to omit whole object files rather\nthan partial ones. If libpq doesn't use any of those APIs there's not\nreally a reason to trip over it.\n\n(Maybe the _shlib variants should just contain the minimum objects\nrequired to compile.)\n\n> I'm reminded of thread [0]. I think there is quite a bit of confusion\n> about the pqexpbuffer vs. stringinfo APIs, and they are probably used\n> incorrectly quite a bit. There are now also programs that use both of\n> them! This patch now introduces another layer on top of them. I fear,\n> at the end, nobody is going to understand any of this anymore.\n\n\"anymore\"? :)\n\nIn all seriousness -- I agree that this isn't sustainable. At the\nmoment the worst pain (the new layer) is isolated to jsonapi.c, which\nseems like an okay place to try something new, since there aren't that\nmany clients. 
But to be honest I'm not excited about deciding the Best\nWay Forward based on a sample size of JSON.\n\n> Also,\n> changing all the programs to link in libpq for pqexpbuffer seems like\n> the opposite direction from what was suggested in [0].\n\n(I don't really want to keep that new libpq dependency. We'd just have\nto decide where PQExpBuffer is going to go if we're not okay with it.)\n\n> I think we need to do some deeper thinking here about how we want the\n> memory management on the client side to work. Maybe we could just use\n> one API but have some flags or callbacks to control the out-of-memory\n> behavior.\n\nAny src/common code that needs to handle both in-band and out-of-band\nfailure modes will still have to decide whether it's going to 1)\nduplicate code paths or 2) just act as if in-band failures can always\nhappen. I think that's probably essential complexity; an ideal API\nmight make it nicer to deal with but it can't abstract it away.\n\nThanks!\n--Jacob\n\n\n", "msg_date": "Mon, 29 Jul 2024 15:30:21 -0700", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Mon, Jul 29, 2024 at 1:51 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> Together with a colleage we found the Azure provider use \"verification_url\"\n> rather than xxx_uri.\n\nYeah, I think that's originally a Google-ism. (As far as I can tell\nthey helped author the spec for this and then didn't follow it. :/ ) I\ndidn't recall Azure having used it back when I was testing against it,\nthough, so that's good to know.\n\n> Another discrepancy is that it uses a string for the\n> interval (ie: \"interval\":\"5\").\n\nOh, that's a new one. 
I don't remember needing to hack around that\neither; maybe iddawc handled it silently?\n\n> One can of course argue that Azure is wrong and\n> should feel bad, but I fear that virtually all (major) providers will have\n> differences like this, so we will have to deal with it in an extensible fashion\n> (compile time, not runtime configurable).\n\nSuch is life... verification_url we will just have to deal with by\ndefault, I think, since Google does/did it too. Not sure about\ninterval -- but do we want to make our distribution maintainers deal\nwith a compile-time setting for libpq, just to support various OAuth\nflavors? To me it seems like we should just hold our noses and support\nknown (large) departures in the core.\n\n> I was toying with making the name json_field name member an array, to allow\n> variations. That won't help with the fieldtype differences though, so another\n> train of thought was to have some form of REQUIRED_XOR where fields can tied\n> together. What do you think about something along these lines?\n\nIf I designed it right, just adding alternative spellings directly to\nthe fields list should work. (The \"required\" check is by struct\nmember, not name, so both spellings can point to the same\ndestination.) The alternative typing on the other hand might require\nsomething like a new sentinel \"type\" that will accept both... I hadn't\nexpected that.\n\n> Another thing, shouldn't we really parse and interpret *all* REQUIRED fields\n> even if we don't use them to ensure that the JSON is wellformed? If the JSON\n> we get is malformed in any way it seems like the safe/conservative option to\n> error out.\n\nGood, I was hoping to have a conversation about that. I am fine with\neither option in principle. In practice I expect to add code to use\n`expires_in` (so that we can pass it to custom OAuth hook\nimplementations) and `scope` (to check if the server has changed it on\nus).\n\nThat leaves the provider... 
Forcing the provider itself to implement\nunused stuff in order to interoperate seems like it could backfire on\nus, especially since IETF standardized an alternate .well-known URI\n[1] that changes some of these REQUIRED things into OPTIONAL. (One way\nfor us to interpret this: those fields may be required for OpenID, but\nyour OAuth provider might not be an OpenID provider, and our code\ndoesn't require OpenID.) I think we should probably tread lightly in\nthat particular case. Thoughts on that?\n\nThanks!\n--Jacob\n\n[1] https://www.rfc-editor.org/rfc/rfc8414.html\n\n\n", "msg_date": "Mon, 29 Jul 2024 16:15:33 -0700", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On 30.07.24 00:30, Jacob Champion wrote:\n>> But under what circumstances does \"the linker doesn't strip out\" happen?\n>> If this happens accidentally, then we should have seen some buildfarm\n>> failures or something?\n> On my machine, for example, I see differences with optimization\n> levels. Say you inadvertently call pfree() in a _shlib build, as I did\n> multiple times upthread. By itself, that shouldn't actually be a\n> problem (it eventually redirects to free()), so it should be legal to\n> call pfree(), and with -O2 the build succeeds. But with -Og, the\n> exit() check trips, and when I disassemble I see that pg_malloc() et\n> all have infected the shared object. After all, we did tell the linker\n> to put that object file in, and we don't ask it to garbage-collect\n> sections.\n\nI'm tempted to say, this is working as intended.\n\nlibpgcommon is built as a static library. So we can put all the object \nfiles in the library, and its users only use the object files they \nreally need. 
So this garbage collection you allude to actually does \nhappen, on an object-file level.\n\nYou shouldn't use pfree() interchangeably with free(), even if that is \nnot enforced because it's the same thing underneath. First, it just \nmakes sense to keep the alloc and free pairs matched up. And second, on \nWindows there is some additional restriction (vague knowledge) that the \nallocate and free functions must be in the same library, so mixing them \nfreely might not even work.\n\n\n\n", "msg_date": "Fri, 2 Aug 2024 19:13:40 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Fri, Aug 2, 2024 at 10:13 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> You shouldn't use pfree() interchangeably with free(), even if that is\n> not enforced because it's the same thing underneath. First, it just\n> makes sense to keep the alloc and free pairs matched up. And second, on\n> Windows there is some additional restriction (vague knowledge) that the\n> allocate and free functions must be in the same library, so mixing them\n> freely might not even work.\n\nAh, I forgot about the CRT problems on Windows. So my statement of\n\"the linker might not garbage collect\" is pretty much irrelevant.\n\nBut it sounds like we agree that we shouldn't be using fe_memutils at\nall in shlib builds. (If you can't use palloc -- it calls exit -- then\nyou can't use pfree either.) Is 0002 still worth pursuing, once I've\ncorrectly wordsmithed the commit? Or did I misunderstand your point?\n\nThanks!\n--Jacob\n\n\n", "msg_date": "Fri, 2 Aug 2024 10:51:59 -0700", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On 02.08.24 19:51, Jacob Champion wrote:\n> But it sounds like we agree that we shouldn't be using fe_memutils at\n> all in shlib builds. 
(If you can't use palloc -- it calls exit -- then\n> you can't use pfree either.) Is 0002 still worth pursuing, once I've\n> correctly wordsmithed the commit? Or did I misunderstand your point?\n\nYes, I think with an adjusted comment and commit message, the actual \nchange makes sense.\n\n\n\n", "msg_date": "Fri, 2 Aug 2024 20:48:40 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Fri, Aug 2, 2024 at 11:48 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> Yes, I think with an adjusted comment and commit message, the actual\n> change makes sense.\n\nDone in v25.\n\n...along with a bunch of other stuff:\n\n1. All the debug-mode things that we want for testing but not in\nproduction have now been hidden behind a PGOAUTHDEBUG environment\nvariable, instead of being enabled by default. At the moment, that\nmeans 1) sensitive HTTP traffic gets printed on stderr, 2) plaintext\nHTTP is allowed, and 3) servers may DoS the client by sending a\nzero-second retry interval (which speeds up testing a lot). I've\nresurrected some of Daniel's CURLOPT_DEBUGFUNCTION implementation for\nthis.\n\nI think this feature needs more thought, but I'm not sure how much. In\nparticular I don't think a connection string option would be\nappropriate (imagine the \"fun\" a proxy solution would have with a\nspray-my-password-to-stderr switch). But maybe it makes sense to\nfurther divide the dangerous behavior up, so that for example you can\ndebug the HTTP stream without also allowing plaintext connections, or\nsomething. And maybe stricter maintainers would like to compile the\nfeature out entirely?\n\n2. The verification_url variant from Azure and Google is now directly supported.\n\n@Daniel: I figured out why I wasn't seeing the string-based-interval\nissue in my testing. 
I've been using Azure's v2.0 OpenID endpoint,\nwhich seems to be much more compliant than the original. Since this is\na new feature, would it be okay to just push new users to that\nendpoint rather than supporting the previous weirdness in our code?\n(Either way, I think we should support verification_url.)\n\nAlong those lines, with Azure I'm now seeing that device_code is not\nadvertised in grant_types_supported... is that new behavior? Or did\niddawc just not care?\n\n3. I've restructured the libcurl calls to allow\ncurl_multi_socket_action() to synchronously succeed on its first call,\nwhich we've been seeing a lot in the CI as mentioned upthread. This\nled to a bunch of refactoring of the top-level state machine, which\nhad gotten too complex. I'm much happier with the code organization\nnow, but it's a big diff.\n\n4. I've changed things around to get rid of two modern libcurl\ndeprecation warnings. I need to ask curl-library about my use of\ncurl_multi_socket_all(), which seems like it's exactly what our use\ncase needs.\n\nThanks,\n--Jacob", "msg_date": "Mon, 5 Aug 2024 10:53:24 -0700", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On 05.08.24 19:53, Jacob Champion wrote:\n> On Fri, Aug 2, 2024 at 11:48 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n>> Yes, I think with an adjusted comment and commit message, the actual\n>> change makes sense.\n> \n> Done in v25.\n> \n> ...along with a bunch of other stuff:\n\nI have committed 0001, and I plan to backpatch it once the release \nfreeze lifts.\n\nI'll work on 0002 next.\n\n\n\n", "msg_date": "Wed, 7 Aug 2024 09:34:14 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On 07.08.24 09:34, Peter Eisentraut wrote:\n> On 05.08.24 19:53, Jacob Champion wrote:\n>> On Fri, 
Aug 2, 2024 at 11:48 AM Peter Eisentraut \n>> <peter@eisentraut.org> wrote:\n>>> Yes, I think with an adjusted comment and commit message, the actual\n>>> change makes sense.\n>>\n>> Done in v25.\n>>\n>> ...along with a bunch of other stuff:\n> \n> I have committed 0001, and I plan to backpatch it once the release \n> freeze lifts.\n> \n> I'll work on 0002 next.\n\nI have committed 0002 now.\n\n\n\n", "msg_date": "Mon, 12 Aug 2024 08:37:01 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Sun, Aug 11, 2024 at 11:37 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n> I have committed 0002 now.\n\nThanks Peter! Rebased over both in v26.\n\n--Jacob", "msg_date": "Tue, 13 Aug 2024 14:11:56 -0700", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On 13.08.24 23:11, Jacob Champion wrote:\n> On Sun, Aug 11, 2024 at 11:37 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>> I have committed 0002 now.\n> \n> Thanks Peter! Rebased over both in v26.\n\nI have looked again at the jsonapi memory management patch (v26-0001).\nAs previously mentioned, I think adding a third or fourth (depending\non how you count) memory management API is maybe something we should\navoid. Also, the weird layering where src/common/ now (sometimes)\ndepends on libpq seems not great.\n\nI'm thinking, maybe we leave the use of StringInfo at the source code\nlevel, but #define the symbols to use PQExpBuffer. 
Something like\n\n#ifdef JSONAPI_USE_PQEXPBUFFER\n\n#define StringInfo PQExpBuffer\n#define appendStringInfo appendPQExpBuffer\n#define appendBinaryStringInfo appendBinaryPQExpBuffer\n#define palloc malloc\n//etc.\n\n#endif\n\n(simplified, the argument lists might differ)\n\nOr, if people find that too scary, something like\n\n#ifdef JSONAPI_USE_PQEXPBUFFER\n\n#define jsonapi_StringInfo PQExpBuffer\n#define jsonapi_appendStringInfo appendPQExpBuffer\n#define jsonapi_appendBinaryStringInfo appendBinaryPQExpBuffer\n#define jsonapi_palloc malloc\n//etc.\n\n#else\n\n#define jsonapi_StringInfo StringInfo\n#define jsonapi_appendStringInfo appendStringInfo\n#define jsonapi_appendBinaryStringInfo appendBinaryStringInfo\n#define jsonapi_palloc palloc\n//etc.\n\n#endif\n\nThat way, it's at least more easy to follow the source code because\nyou see a mostly-familiar API.\n\nAlso, we should make this PQExpBuffer-using mode only used by libpq,\nnot by frontend programs. So libpq takes its own copy of jsonapi.c\nand compiles it using JSONAPI_USE_PQEXPBUFFER. That will make the\nlibpq build descriptions a bit more complicated, but everyone who is\nnot libpq doesn't need to change.\n\nOnce you get past all the function renaming, the logic changes in\njsonapi.c all look pretty reasonable. Refactoring like\nallocate_incremental_state() makes sense.\n\nYou could add pg_nodiscard attributes to\nmakeJsonLexContextCstringLen() and makeJsonLexContextIncremental() so\nthat callers who are using the libpq mode are forced to check for\nerrors. Or maybe there is a clever way to avoid even that: Create a\nfixed JsonLexContext like\n\n static const JsonLexContext failed_oom;\n\nand on OOM you return that one from makeJsonLexContext*(). 
And then\nin pg_parse_json(), when you get handed that context, you return\nJSON_OUT_OF_MEMORY immediately.\n\nOther than that detail and the need to use freeJsonLexContext(), it\nlooks like this new mode doesn't impose any additional burden on\ncallers, since during parsing they need to check for errors anyway,\nand this just adds one more error type for out of memory. That's a good \noutcome.\n\n\n\n", "msg_date": "Mon, 26 Aug 2024 10:18:10 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Mon, Aug 26, 2024 at 1:18 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> Or, if people find that too scary, something like\n>\n> #ifdef JSONAPI_USE_PQEXPBUFFER\n>\n> #define jsonapi_StringInfo PQExpBuffer\n> [...]\n>\n> That way, it's at least more easy to follow the source code because\n> you see a mostly-familiar API.\n\nI was having trouble reasoning about the palloc-that-isn't-palloc code\nduring the first few drafts, so I will try a round with the jsonapi_\nprefix.\n\n> Also, we should make this PQExpBuffer-using mode only used by libpq,\n> not by frontend programs. So libpq takes its own copy of jsonapi.c\n> and compiles it using JSONAPI_USE_PQEXPBUFFER. That will make the\n> libpq build descriptions a bit more complicated, but everyone who is\n> not libpq doesn't need to change.\n\nSounds reasonable. It complicates the test coverage situation a little\nbit, but I think my current patch was maybe insufficient there anyway,\nsince the coverage for the backend flavor silently dropped...\n\n> Or maybe there is a clever way to avoid even that: Create a\n> fixed JsonLexContext like\n>\n> static const JsonLexContext failed_oom;\n>\n> and on OOM you return that one from makeJsonLexContext*(). 
And then\n> in pg_parse_json(), when you get handed that context, you return\n> JSON_OUT_OF_MEMORY immediately.\n\nI like this idea.\n\nThanks!\n--Jacob\n\n\n", "msg_date": "Mon, 26 Aug 2024 16:23:06 -0700", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Mon, Aug 26, 2024 at 4:23 PM Jacob Champion\n<jacob.champion@enterprisedb.com> wrote:\n> I was having trouble reasoning about the palloc-that-isn't-palloc code\n> during the first few drafts, so I will try a round with the jsonapi_\n> prefix.\n\nv27 takes a stab at that. I have kept the ALLOC/FREE naming to match\nthe strategy in other src/common source files.\n\nThe name of the variable JSONAPI_USE_PQEXPBUFFER leads to sections of\ncode that look like this:\n\n+#ifdef JSONAPI_USE_PQEXPBUFFER\n+ if (!new_prediction || !new_fnames || !new_fnull)\n+ return false;\n+#endif\n\nTo me it wouldn't be immediately obvious why \"using PQExpBuffer\" has\nanything to do with this code; the key idea is that we expect any\nallocations to be able to fail. Maybe a name like JSONAPI_ALLOW_OOM or\nJSONAPI_SHLIB_ALLOCATIONS or...?\n\n> It complicates the test coverage situation a little\n> bit, but I think my current patch was maybe insufficient there anyway,\n> since the coverage for the backend flavor silently dropped...\n\nTo do this without too much pain, I split the \"forbidden\" objects into\ntheir own shared library, used only by the JSON tests which needed\nthem. I tried not to wrap too much ceremony around them, since they're\nonly needed in one place, so they don't have an associated Meson\ndependency object.\n\n> > Or maybe there is a clever way to avoid even that: Create a\n> > fixed JsonLexContext like\n> >\n> > static const JsonLexContext failed_oom;\n\nI think this turned out nicely. 
Two slight deviations from this are\nthat we can't return a pointer-to-const, and we also need an OOM\nsentinel for the JsonIncrementalState, since it's possible to\ninitialize incremental parsing into a JsonLexContext that's on the\nstack.\n\n--Jacob", "msg_date": "Wed, 28 Aug 2024 09:31:07 -0700", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On 28.08.24 18:31, Jacob Champion wrote:\n> On Mon, Aug 26, 2024 at 4:23 PM Jacob Champion\n> <jacob.champion@enterprisedb.com> wrote:\n>> I was having trouble reasoning about the palloc-that-isn't-palloc code\n>> during the first few drafts, so I will try a round with the jsonapi_\n>> prefix.\n> \n> v27 takes a stab at that. I have kept the ALLOC/FREE naming to match\n> the strategy in other src/common source files.\n\nThis looks pretty good to me. Maybe on the naming side, this seems like \na gratuitous divergence:\n\n+#define jsonapi_createStringInfo makeStringInfo\n\n> The name of the variable JSONAPI_USE_PQEXPBUFFER leads to sections of\n> code that look like this:\n> \n> +#ifdef JSONAPI_USE_PQEXPBUFFER\n> + if (!new_prediction || !new_fnames || !new_fnull)\n> + return false;\n> +#endif\n> \n> To me it wouldn't be immediately obvious why \"using PQExpBuffer\" has\n> anything to do with this code; the key idea is that we expect any\n> allocations to be able to fail. Maybe a name like JSONAPI_ALLOW_OOM or\n> JSONAPI_SHLIB_ALLOCATIONS or...?\n\nSeems ok to me as is. I think the purpose of JSONAPI_USE_PQEXPBUFFER is \nadequately explained by this comment\n\n+/*\n+ * By default, we will use palloc/pfree along with StringInfo. 
In libpq,\n+ * use malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on \nout-of-memory.\n+ */\n+#ifdef JSONAPI_USE_PQEXPBUFFER\n\nFor some of the other proposed names, I'd be afraid that someone might \nthink you are free to mix and match APIs, OOM behavior, and compilation \noptions.\n\n\nSome comments on src/include/common/jsonapi.h:\n\n-#include \"lib/stringinfo.h\"\n\nI suspect this will fail headerscheck? Probably needs an exception \nadded there.\n\n+#ifdef JSONAPI_USE_PQEXPBUFFER\n+#define StrValType PQExpBufferData\n+#else\n+#define StrValType StringInfoData\n+#endif\n\nMaybe use jsonapi_StrValType here.\n\n+typedef struct StrValType StrValType;\n\nI don't think that is needed. It would just duplicate typedefs that \nalready exist elsewhere, depending on what StrValType is set to.\n\n+ bool parse_strval;\n+ StrValType *strval; /* only used if \nparse_strval == true */\n\nThe parse_strval field could use a better explanation.\n\nI actually don't understand the need for this field. AFAICT, this is\njust used to record whether strval is valid. But in the cases where\nit's not valid, why do we need to record that? Couldn't you just return\nfailed_oom in those cases?\n\n\n\n", "msg_date": "Fri, 30 Aug 2024 11:49:43 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Fri, Aug 30, 2024 at 2:49 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> This looks pretty good to me. Maybe on the naming side, this seems like\n> a gratuitous divergence:\n>\n> +#define jsonapi_createStringInfo makeStringInfo\n\nWhoops, fixed.\n\n> Seems ok to me as is. I think the purpose of JSONAPI_USE_PQEXPBUFFER is\n> adequately explained by this comment\n>\n> +/*\n> + * By default, we will use palloc/pfree along with StringInfo. 
In libpq,\n> + * use malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on\n> out-of-memory.\n> + */\n> +#ifdef JSONAPI_USE_PQEXPBUFFER\n>\n> For some of the other proposed names, I'd be afraid that someone might\n> think you are free to mix and match APIs, OOM behavior, and compilation\n> options.\n\nYeah, that's fair.\n\n> Some comments on src/include/common/jsonapi.h:\n>\n> -#include \"lib/stringinfo.h\"\n>\n> I suspect this will fail headerscheck? Probably needs an exception\n> added there.\n\nCurrently it passes on my machine and the cfbot. The\nforward-declaration of the struct should be enough to make clients\nhappy. Or was there a different way to break it?\n\n> +#ifdef JSONAPI_USE_PQEXPBUFFER\n> +#define StrValType PQExpBufferData\n> +#else\n> +#define StrValType StringInfoData\n> +#endif\n>\n> Maybe use jsonapi_StrValType here.\n\nDone.\n\n> +typedef struct StrValType StrValType;\n>\n> I don't think that is needed. It would just duplicate typedefs that\n> already exist elsewhere, depending on what StrValType is set to.\n\nOkay, removed.\n\n> The parse_strval field could use a better explanation.\n>\n> I actually don't understand the need for this field. AFAICT, this is\n> just used to record whether strval is valid.\n\nNo, it's meant to track the value of the need_escapes argument to the\nconstructor. I've renamed it and moved the assignment to hopefully\nmake that a little more obvious. WDYT?\n\n> But in the cases where\n> it's not valid, why do we need to record that? Couldn't you just return\n> failed_oom in those cases?\n\nWe can do that if you'd like. I was just worried about using a valid\n(broken) value of PQExpBuffer as a sentinel instead of a separate\nflag. 
It would work as long as reviewers stay vigilant, but if we go\nthat direction and someone adds an unchecked\n\n lex->strval = jsonapi_makeStringInfo();\n // should check for NULL now, but we forgot\n\ninto a future patch, an allocation failure in _shlib builds would\nsilently disable string escaping instead of resulting in a\nJSON_OUT_OF_MEMORY later.\n\nThanks,\n--Jacob", "msg_date": "Tue, 3 Sep 2024 13:56:07 -0700", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On 03.09.24 22:56, Jacob Champion wrote:\n>> The parse_strval field could use a better explanation.\n>>\n>> I actually don't understand the need for this field. AFAICT, this is\n>> just used to record whether strval is valid.\n> No, it's meant to track the value of the need_escapes argument to the\n> constructor. I've renamed it and moved the assignment to hopefully\n> make that a little more obvious. WDYT?\n\nYes, this is clearer.\n\nThis patch (v28-0001) looks good to me now.\n\n\n\n", "msg_date": "Wed, 4 Sep 2024 11:28:24 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On 04.09.24 11:28, Peter Eisentraut wrote:\n> On 03.09.24 22:56, Jacob Champion wrote:\n>>> The parse_strval field could use a better explanation.\n>>>\n>>> I actually don't understand the need for this field.  AFAICT, this is\n>>> just used to record whether strval is valid.\n>> No, it's meant to track the value of the need_escapes argument to the\n>> constructor. I've renamed it and moved the assignment to hopefully\n>> make that a little more obvious. 
WDYT?\n> \n> Yes, this is clearer.\n> \n> This patch (v28-0001) looks good to me now.\n\nThis has been committed.\n\nAbout the subsequent patches:\n\nIs there any sense in dealing with the libpq and backend patches \nseparately in sequence, or is this split just for ease of handling?\n\n(I suppose the 0004 \"review comments\" patch should be folded into the \nrespective other patches?)\n\nWhat could be the next steps to keep this moving along, other than stare \nat the remaining patches until we're content with them? ;-)\n\n\n\n", "msg_date": "Wed, 11 Sep 2024 09:37:34 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "> On 11 Sep 2024, at 09:37, Peter Eisentraut <peter@eisentraut.org> wrote:\n\n> Is there any sense in dealing with the libpq and backend patches separately in sequence, or is this split just for ease of handling?\n\nI think it's just to make reviewing a bit easier. At this point I think they can\nbe merged together, it's mostly out of historic reasons IIUC since the patchset\nearlier on supported more than one library.\n\n> (I suppose the 0004 \"review comments\" patch should be folded into the respective other patches?)\n\nYes (0003 now), along with the 0004 in the attached version (I bumped to v29 as\none commit is now committed, but the attached doesn't change Jacob's commits but\nrather adds to them) which contains more review comments. More on that below:\n\nI added a warning to autoconf in case --with-oauth is used without --with-python\nsince this combination will error out in running the tests. Might be\nsuperfluous but I had an embarrassingly long headscratcher myself as to why the\ntests kept failing =)\n\nCURL_IGNORE_DEPRECATION(x;) broke pgindent, it needs to keep the semicolon on\nthe outside like CURL_IGNORE_DEPRECATION(x);. 
This doesn't really work well\nwith how the macro is defined, not sure how we should handle that best (the\nattached makes the style as per how pgindent wants it with the semicolon\nreturned).\n\nThe oauth_validator test module needs to load Makefile.global before exporting\nthe symbols from there. I also removed the placeholder regress test which did\nnothing and turned diag() calls into note() calls to keep the output from\ncluttering.\n\nThere is a first stab at documenting the validator module API, more to come (it\ndoesn't compile right now).\n\nIt contains a pgindent and pgperltidy run to keep things as close to in final\nsync as we can to catch things like the curl deprecation macro mentioned above\nearly.\n\n> What could be the next steps to keep this moving along, other than stare at the remaining patches until we're content with them? ;-)\n\nI'm in the \"stare at things\" stage now to try and get this into the tree =)\n\nTo further pick away at this huge patch I propose to merge the SASL message\nlength hunk which can be extracted separately. The attached .txt (to keep the\nCFBot from poking at it) contains a diff which can be committed ahead of the\nrest of this patch to make it a tad smaller and to keep the history of that\nchange a bit clearer.\n\n--\nDaniel Gustafsson", "msg_date": "Wed, 11 Sep 2024 15:44:37 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "(Thanks for the commit, Peter!)\n\nOn Wed, Sep 11, 2024 at 6:44 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 11 Sep 2024, at 09:37, Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> > Is there any sense in dealing with the libpq and backend patches separately in sequence, or is this split just for ease of handling?\n>\n> I think it's just to make reviewing a bit easier. 
At this point I think they can\n> be merged together, it's mostly out of historic reasons IIUC since the patchset\n> earlier on supported more than one library.\n\nI can definitely do that (and yeah, it was to make the review slightly\nless daunting). The server side could potentially be committed\nindependently, if you want to parallelize a bit, but it'd have to be\ntorn back out if the libpq stuff didn't land in time.\n\n> > (I suppose the 0004 \"review comments\" patch should be folded into the respective other patches?)\n\nYes. I'm using that patch as a holding area while I write tests for\nthe hunks, and then moving them backwards.\n\n> I added a warning to autoconf in case --with-oauth is used without --with-python\n> since this combination will error out in running the tests. Might be\n> superfluous but I had an embarrassingly long headscratcher myself as to why the\n> tests kept failing =)\n\nWhoops, sorry. I guess we should just skip them if Python isn't there?\n\n> CURL_IGNORE_DEPRECATION(x;) broke pgindent, it needs to keep the semicolon on\n> the outside like CURL_IGNORE_DEPRECATION(x);. This doesn't really work well\n> with how the macro is defined, not sure how we should handle that best (the\n> attached makes the style as per how pgindent wants it with the semicolon\n> returned).\n\nUgh... maybe a case for a pre_indent rule in pgindent?\n\n> The oauth_validator test module needs to load Makefile.global before exporting\n> the symbols from there.\n\nHm. Why was that passing the CI, though...?\n\n> There is a first stab at documenting the validator module API, more to come (it\n> doesn't compile right now).\n>\n> It contains a pgindent and pgperltidy run to keep things as close to in final\n> sync as we can to catch things like the curl deprecation macro mentioned above\n> early.\n\nThanks!\n\n> > What could be the next steps to keep this moving along, other than stare at the remaining patches until we're content with them? 
;-)\n>\n> I'm in the \"stare at things\" stage now to try and get this into the tree =)\n\nYeah, and I still owe you all an updated roadmap.\n\nWhile I fix up the tests, I've also been picking away at the JSON\nencoding problem that was mentioned in [1]; the recent SASLprep fix\nwas fallout from that, since I'm planning to pull in pieces of its\nUTF-8 validation. I will eventually want to fuzz the heck out of this.\n\n> To further pick away at this huge patch I propose to merge the SASL message\n> length hunk which can be extracted separately. The attached .txt (to keep the\n> CFBot from poking at it) contains a diff which can be committed ahead of the\n> rest of this patch to make it a tad smaller and to keep the history of that\n> change a bit clearer.\n\nLGTM!\n\n--\n\nPeter asked me if there were plans to provide a \"standard\" validator\nmodule, say as part of contrib. The tricky thing is that Bearer\nvalidation is issuer-specific, and many providers give you an opaque\ntoken that you're not supposed to introspect at all.\n\nWe could use token introspection (RFC 7662) for online verification,\nbut last I looked at it, no one had actually implemented those\nendpoints. For offline verification, I think the best we could do\nwould be to provide a generic JWT Profile (RFC 9068) validator, but\nagain I don't know if anyone is actually providing those token formats\nin practice. I'm inclined to push that out into the future.\n\nThanks,\n--Jacob\n\n[1] https://www.postgresql.org/message-id/ZjxQnOD1OoCkEeMN%40paquier.xyz\n\n\n", "msg_date": "Wed, 11 Sep 2024 15:54:18 -0700", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Wed, Sep 11, 2024 at 3:54 PM Jacob Champion\n<jacob.champion@enterprisedb.com> wrote:\n> Yeah, and I still owe you all an updated roadmap.\n\nOkay, here goes. New reviewers: start here!\n\n== What is This? 
==\n\nOAuth 2.0 is a way for a trusted third party (a \"provider\") to tell a\nserver whether a client on the other end of the line is allowed to do\nsomething. This patchset adds OAuth support to libpq with libcurl,\nprovides a server-side API so that extension modules can add support\nfor specific OAuth providers, and extends our SASL support to carry\nthe OAuth access tokens over the OAUTHBEARER mechanism.\n\nMost OAuth clients use a web browser to perform the third-party\nhandshake. (These are your \"Okta logins\", \"sign in with XXX\", etc.)\nBut there are plenty of people who use psql without a local browser,\nand invoking a browser safely across all supported platforms is\nactually surprisingly fraught. So this patchset implements something\ncalled device authorization, where the client will display a link and\na code, and then you can log in on whatever device is convenient for\nyou. Once you've told your provider that you trust libpq to connect to\nPostgres on your behalf, it'll give libpq an access token, and libpq\nwill forward that on to the server.\n\n== How This Fits, or: The Sales Pitch ==\n\nThe most popular third-party auth methods we have today are probably\nthe Kerberos family (AD/GSS/SSPI) and LDAP. If you're not already in\nan MS ecosystem, it's unlikely that you're using the former. And users\nof the latter are, in my experience, more-or-less resigned to its use,\nin spite of LDAP's architectural security problems and the fact that\nyou have to run weird synchronization scripts to tell Postgres what\ncertain users are allowed to do.\n\nOAuth provides a decently mature and widely-deployed third option. You\ndon't have to be running the infrastructure yourself, as long as you\nhave a provider you trust. 
If you are running your own infrastructure\n(or if your provider is configurable), the tokens being passed around\ncan carry org-specific user privileges, so that Postgres can figure\nout who's allowed to do what without out-of-band synchronization\nscripts. And those access tokens are a straight upgrade over\npasswords: even if they're somehow stolen, they are time-limited, they\nare optionally revocable, and they can be scoped to specific actions.\n\n== Extension Points ==\n\nThis patchset provides several points of customization:\n\nServer-side validation is farmed out entirely to an extension, which\nwe do not provide. (Each OAuth provider is free to come up with its\nown proprietary method of verifying its access tokens, and so far the\nbig players have absolutely not standardized.) Depending on the\nprovider, the extension may need to contact an external server to see\nwhat the token has been authorized to do, or it may be able to do that\noffline using signing keys and an agreed-upon token format.\n\nThe client driver using libpq may replace the device authorization\nprompt (which by default is done on standard error), for example to\nmove it into an existing GUI, display a scannable QR code instead of a\nlink, and so on.\n\nThe driver may also replace the entire OAuth flow. For example, a\nclient that already interacts with browsers may be able to use one of\nthe more standard web-based methods to get an access token. And\nclients attached to a service rather than an end user could use a more\nstraightforward server-to-server flow, with pre-established\ncredentials.\n\n== Architecture ==\n\nThe client needs to speak HTTP, which is implemented entirely with\nlibcurl. Originally, I used another OAuth library for rapid\nprototyping, but the quality just wasn't there and I ported the\nimplementation. 
An internal abstraction layer remains in the libpq\ncode, so if a better client library comes along, switching to it\nshouldn't be too painful.\n\nThe client-side hooks all go through a single extension point, so that\nwe don't continually add entry points in the API for each new piece of\nauthentication data that a driver may be able to provide. If we wanted\nto, we could potentially move the existing SSL passphrase hook into\nthat, or even handle password retries within libpq itself, but I don't\nsee any burning reason to do that now.\n\nI wanted to make sure that OAuth could be dropped into existing\ndeployments without driver changes. (Drivers will probably *want* to\nlook at the extension hooks for better UX, but they shouldn't\nnecessarily *have* to.) That has driven several parts of the design.\n\nDrivers using the async APIs should continue to work without blocking,\neven during the long HTTP handshakes. So the new client code is\nstructured as a typical event-driven state machine (similar to\nPQconnectPoll). The protocol machine hands off control to the OAuth\nmachine during authentication, without really needing to know how it\nworks, because the OAuth machine replaces the PQsocket with a\ngeneral-purpose multiplexer that handles all of the HTTP sockets and\nevents. Once that's completed, the OAuth machine hands control right\nback and we return to the Postgres protocol on the wire.\n\nThis decision led to a major compromise: Windows client support is\nnonexistent. Multiplexer handles exist in Windows (for example with\nWSAEventSelect, IIUC), but last I checked they were completely\nincompatible with Winsock select(), which means existing async-aware\ndrivers would fail. We could compromise by providing synchronous-only\nsupport, or by cobbling together a socketpair plus thread pool (or\nIOCP?), or simply by saying that existing Windows clients need a new\nAPI other than PQsocket() to be able to work properly. 
None of those\napproaches have been attempted yet, though.\n\n== Areas of Concern ==\n\nHere are the iffy things that a committer is signing up for:\n\nThe client implementation is roughly 3k lines, requiring domain\nknowledge of Curl, HTTP, JSON, and OAuth, the specifications of which\nare spread across several separate standards bodies. (And some big\nproviders ignore those anyway.)\n\nThe OAUTHBEARER mechanism is extensible, but not in the same way as\nHTTP. So sometimes, it looks like people design new OAuth features\nthat rely heavily on HTTP and forget to \"port\" them over to SASL. That\nmay be a point of future frustration.\n\nC is not really anyone's preferred language for implementing an\nextensible authn/z protocol running on top of HTTP, and constant\nvigilance is going to be required to maintain safety. What's more, we\ndon't really \"trust\" the endpoints we're talking to in the same way\nthat we normally trust our servers. It's a fairly hostile environment\nfor maintainers.\n\nAlong the same lines, our JSON implementation assumes some level of\ntrust in the JSON data -- which is true for the backend, and can be\nassumed for a DBA running our utilities, but is absolutely not the\ncase for a libpq client downloading data from Some Server on the\nInternet. I've been working to fuzz the implementation and there are a\nfew known problems registered in the CF already.\n\nCurl is not a lightweight dependency by any means. Typically, libcurl\nis configured with a wide variety of nice options, a tiny subset of\nwhich we're actually going to use, but all that code (and its\ntransitive dependencies!) is going to arrive in our process anyway.\nThat might not be a lot of fun if you're not using OAuth.\n\nIt's possible that the application embedding libpq is also a direct\nclient of libcurl. 
We need to make sure we're not stomping on their\ntoes at any point.\n\n== TODOs/Known Issues ==\n\nThe client does not deal with verification failure well at the moment;\nit just keeps retrying with a new OAuth handshake.\n\nSome people are not going to be okay with just contacting any web\nserver that Postgres tells them to. There's a more paranoid mode\nsketched out that lets the connection string specify the trusted\nissuer, but it's not complete.\n\nThe new code still needs to play well with orthogonal connection\noptions, like connect_timeout and require_auth.\n\nThe server does not deal well with multi-issuer setups yet. And you\nonly get one oauth_validator_library...\n\nHarden, harden, harden. There are still a handful of inline TODOs\naround double-checking certain pieces of the response before\ncontinuing with the handshake. Servers should not be able to run our\nrecursive descent parser out of stack. And my JSON code is using\nassertions too liberally, which will turn bugs into DoS vectors. I've\nbeen working to fit a fuzzer into more and more places, and I'm hoping\nto eventually drive it directly from the socket.\n\nDocumentation still needs to be filled in. (Thanks Daniel for your work here!)\n\n== Future Features ==\n\nThere is no support for token caching (refresh or otherwise). Each new\nconnection needs a new approval, and the only way to change that for\nv1 is to replace the entire flow. I think that's eventually going to\nannoy someone. The question is, where do you persist it? Does that\nneed to be another extensibility point?\n\nWe already have pretty good support for client certificates, and it'd\nbe great if we could bind our tokens to those. That way, even if you\nsomehow steal the tokens, you can't do anything with them without the\nprivate key! But the state of proof-of-possession in OAuth is an\nabsolute mess, involving at least three competing standards (Token\nBinding, mTLS, DPoP). 
I don't know what's going to win.\n\n--\n\nHope this helps! Next I'll be working to fold the patches together, as\ndiscussed upthread.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Mon, 16 Sep 2024 12:13:28 -0700", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Jacob Champion <jacob.champion@enterprisedb.com> wrote:\n\n> Peter asked me if there were plans to provide a \"standard\" validator\n> module, say as part of contrib. The tricky thing is that Bearer\n> validation is issuer-specific, and many providers give you an opaque\n> token that you're not supposed to introspect at all.\n> \n> We could use token introspection (RFC 7662) for online verification,\n> but last I looked at it, no one had actually implemented those\n> endpoints. For offline verification, I think the best we could do\n> would be to provide a generic JWT Profile (RFC 9068) validator, but\n> again I don't know if anyone is actually providing those token formats\n> in practice. I'm inclined to push that out into the future.\n\nHave you considered sending the token for validation to the server, like this\n\ncurl -X GET \"https://www.googleapis.com/oauth2/v3/userinfo\" -H \"Authorization: Bearer $TOKEN\"\n\nand getting the userid (e.g. email address) from the response, as described in\n[1]? 
ISTM that this is what pgadmin4 does - in particular, see the\nget_user_profile() function in web/pgadmin/authenticate/oauth2.py.\n\n[1] https://www.oauth.com/oauth2-servers/signing-in-with-google/verifying-the-user-info/\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Fri, 27 Sep 2024 19:58:19 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "On Fri, Sep 27, 2024 at 10:58 AM Antonin Houska <ah@cybertec.at> wrote:\n> Have you considered sending the token for validation to the server, like this\n>\n> curl -X GET \"https://www.googleapis.com/oauth2/v3/userinfo\" -H \"Authorization: Bearer $TOKEN\"\n\nIn short, no, but I'm glad you asked. I think it's going to be a\ncommon request, and I need to get better at explaining why it's not\nsafe, so we can document it clearly. Or else someone can point out\nthat I'm misunderstanding, which honestly would make all this much\neasier and less complicated. I would love to be able to do it that\nway.\n\nWe cannot, for the same reason libpq must send the server an access\ntoken instead of an ID token. The /userinfo endpoint tells you who the\nend user is, but it doesn't tell you whether the Bearer is actually\nallowed to access the database. That difference is critical: it's\nentirely possible for an end user to be authorized to access the\ndatabase, *and yet* the Bearer token may not actually carry that\nauthorization on their behalf. (In fact, the user may have actively\nrefused to give the Bearer that permission.) That's why people are so\npedantic about saying that OAuth is an authorization framework and not\nan authentication framework.\n\nTo illustrate, think about all the third-party web services out there\nthat ask you to Sign In with Google. 
They ask Google for permission to\naccess your personal ID, and Google asks you if you're okay with that,\nand you either allow or deny it. Now imagine that I ran one of those\nservices, and I decided to become evil. I could take my legitimately\nacquired Bearer token -- which should only give me permission to query\nyour Google ID -- and send it to a Postgres database you're authorized\nto access.\n\nThe server is supposed to introspect it, say, \"hey, this token doesn't\ngive the bearer access to the database at all,\" and shut everything\ndown. For extra credit, the server could notice that the client ID\ntied to the access token isn't even one that it recognizes! But if all\nthe server does is ask Google, \"what's the email address associated\nwith this token's end user?\", then it's about to make some very bad\ndecisions. The email address it gets back doesn't belong to Jacob the\nEvil Bearer; it belongs to you.\n\nNow, the token introspection endpoint I mentioned upthread should give\nus the required information (scopes, etc.). But Google doesn't\nimplement that one. In fact they don't seem to have implemented custom\nscopes at all in the years since I started work on this feature, which\nmakes me think that people are probably not going to be able to safely\nlog into Postgres using Google tokens. Hopefully there's some feature\nburied somewhere that I haven't seen.\n\nLet me know if that makes sense. (And again: I'd love to be proven\nwrong. 
It would improve the reach of the feature considerably if I\nam.)\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Fri, 27 Sep 2024 13:45:45 -0700", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Jacob Champion <jacob.champion@enterprisedb.com> wrote:\n\n> On Fri, Sep 27, 2024 at 10:58 AM Antonin Houska <ah@cybertec.at> wrote:\n> > Have you considered sending the token for validation to the server, like this\n> >\n> > curl -X GET \"https://www.googleapis.com/oauth2/v3/userinfo\" -H \"Authorization: Bearer $TOKEN\"\n> \n> In short, no, but I'm glad you asked. I think it's going to be a\n> common request, and I need to get better at explaining why it's not\n> safe, so we can document it clearly. Or else someone can point out\n> that I'm misunderstanding, which honestly would make all this much\n> easier and less complicated. I would love to be able to do it that\n> way.\n> \n> We cannot, for the same reason libpq must send the server an access\n> token instead of an ID token. The /userinfo endpoint tells you who the\n> end user is, but it doesn't tell you whether the Bearer is actually\n> allowed to access the database. That difference is critical: it's\n> entirely possible for an end user to be authorized to access the\n> database, *and yet* the Bearer token may not actually carry that\n> authorization on their behalf. (In fact, the user may have actively\n> refused to give the Bearer that permission.)\n\n> That's why people are so pedantic about saying that OAuth is an\n> authorization framework and not an authentication framework.\n\nThis statement alone sounds as if you missed *authentication*, but you seem to\nadmit above that the /userinfo endpoint provides it (\"tells you who the end\nuser is\"). I agree that it does. 
My understanding is that this endpoint, as\nwell as the concept of \"claims\" and \"scopes\", is introduced by OpenID, which\nis an *authentication* framework, although it's built on top of OAuth.\n\nRegarding *authorization*, I agree that the bearer token may not contain\nenough information to determine whether the owner of the token is allowed to\naccess the database. However, I consider database a special kind of\n\"application\", which can handle authorization on its own. In this case, the\nauthorization can be controlled by (not) assigning the user the LOGIN\nattribute, as well as by (not) granting it privileges on particular database\nobjects. In short, I think that *authentication* is all we need.\n\n> To illustrate, think about all the third-party web services out there\n> that ask you to Sign In with Google. They ask Google for permission to\n> access your personal ID, and Google asks you if you're okay with that,\n> and you either allow or deny it. Now imagine that I ran one of those\n> services, and I decided to become evil. I could take my legitimately\n> acquired Bearer token -- which should only give me permission to query\n> your Google ID -- and send it to a Postgres database you're authorized\n> to access.\n> \n> The server is supposed to introspect it, say, \"hey, this token doesn't\n> give the bearer access to the database at all,\" and shut everything\n> down. For extra credit, the server could notice that the client ID\n> tied to the access token isn't even one that it recognizes! But if all\n> the server does is ask Google, \"what's the email address associated\n> with this token's end user?\", then it's about to make some very bad\n> decisions. The email address it gets back doesn't belong to Jacob the\n> Evil Bearer; it belongs to you.\n\nAre you sure you can legitimately acquire the bearer token containing my email\naddress? I think the email address returned by the /userinfo endpoint is one\nof the standard claims [1]. 
Thus by returning the particular value of \"email\"\nfrom the endpoint the identity provider asserts that the token owner does have\nthis address. (And that, if \"email_verified\" claim is \"true\", it spent some\neffort to verify that the email address is controlled by that user.)\n\n> Now, the token introspection endpoint I mentioned upthread\n\nCan you please point me to the particular message?\n\n> should give us the required information (scopes, etc.). But Google doesn't\n> implement that one. In fact they don't seem to have implemented custom\n> scopes at all in the years since I started work on this feature, which makes\n> me think that people are probably not going to be able to safely log into\n> Postgres using Google tokens. Hopefully there's some feature buried\n> somewhere that I haven't seen.\n> \n> Let me know if that makes sense. (And again: I'd love to be proven\n> wrong. It would improve the reach of the feature considerably if I\n> am.)\n\nAnother question, assuming the token verification is resolved somehow:\nwouldn't it be sufficient for the initial implementation if the client could\npass the bearer token to libpq in the connection string?\n\nObviously, one use case is than an application / web server which needs the\ntoken to authenticate the user could eventually pass the token to the database\nserver. Thus, if users could authenticate to the database using their\nindividual ids, it would no longer be necessary to store a separate userid /\npassword for the application in a configuration file.\n\nAlso, if libpq accepted the bearer token via the connection string, it would\nbe possible to implement the authorization as a separate front-end application\n(e.g. 
pg_oauth_login) rather than adding more complexity to libpq itself.\n\n(I'm learning this stuff on-the-fly, so there might be something naive in my\ncomments.)\n\n[1] https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 30 Sep 2024 15:38:41 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" }, { "msg_contents": "Antonin Houska <ah@cybertec.at> wrote:\n\n> Jacob Champion <jacob.champion@enterprisedb.com> wrote:\n> > Now, the token introspection endpoint I mentioned upthread\n> \n> Can you please point me to the particular message?\n\nPlease ignore this dumb question. You probably referred to the email I was\nresponding to.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 30 Sep 2024 18:54:28 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PoC] Federated Authn/z with OAUTHBEARER" } ]
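[Editorial aside: the authorization check debated in the thread above can be made concrete. The sketch below models a server-side decision over a parsed RFC 7662 token introspection response — the point being that checking only the end user's identity (as /userinfo does) is not enough; the token must also be active, unexpired, issued to a recognized client, and must actually carry a scope granting database access. The scope name "postgres:login" and the client IDs are hypothetical, purely for illustration; this is not the proposed patch's code.]

```python
import time

def authorizes_db_access(introspection: dict, required_scope: str,
                         trusted_client_ids: set, now=None) -> bool:
    """Decide whether a bearer token authorizes database access, given a
    parsed RFC 7662 introspection response (fields: active, exp, client_id,
    scope). Identity alone is deliberately NOT sufficient."""
    now = time.time() if now is None else now
    if not introspection.get("active", False):
        return False                      # revoked or otherwise invalid
    if introspection.get("exp", 0) <= now:
        return False                      # expired
    if introspection.get("client_id") not in trusted_client_ids:
        return False                      # issued to a client we don't recognize
    granted = introspection.get("scope", "").split()
    return required_scope in granted      # bearer actually holds DB access

# The "evil bearer" case from the thread: the end user IS authorized, but the
# token only carries identity scopes obtained by a third-party web service.
evil = {"active": True, "exp": 9e9, "client_id": "webapp",
        "scope": "openid email", "email": "victim@example.com"}
good = {"active": True, "exp": 9e9, "client_id": "psql-client",
        "scope": "openid email postgres:login", "email": "victim@example.com"}

print(authorizes_db_access(evil, "postgres:login", {"psql-client"}))   # False
print(authorizes_db_access(good, "postgres:login", {"psql-client"}))   # True
```

Both tokens resolve to the same email via /userinfo, yet only the second should be allowed to log in — which is exactly why an identity-only check is unsafe.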
[ { "msg_contents": "Hackers,\n\nOn master, when a statement level trigger is fired for a replicated truncate command, the following stack trace is generated:\n\nTRAP: FailedAssertion(\"portal != NULL\", File: \"pquery.c\", Line: 1760, PID: 93854)\n0 postgres 0x0000000108e269f2 ExceptionalCondition + 130\n1 postgres 0x0000000108bef2f4 EnsurePortalSnapshotExists + 100\n2 postgres 0x0000000108a93231 _SPI_execute_plan + 529\n3 postgres 0x0000000108a93c0f SPI_execute_plan_with_paramlist + 127\n4 plpgsql.so 0x00000001098bf9e5 exec_stmt_execsql + 277\n5 plpgsql.so 0x00000001098bbaf6 exec_stmts + 294\n6 plpgsql.so 0x00000001098bb367 exec_stmt_block + 1127\n7 plpgsql.so 0x00000001098ba57a plpgsql_exec_trigger + 442\n8 plpgsql.so 0x00000001098cb5b1 plpgsql_call_handler + 305\n9 postgres 0x0000000108a3137c ExecCallTriggerFunc + 348\n10 postgres 0x0000000108a3447d afterTriggerInvokeEvents + 1517\n11 postgres 0x0000000108a33bb0 AfterTriggerEndQuery + 128\n12 postgres 0x0000000108a1a9e2 ExecuteTruncateGuts + 2210\n13 postgres 0x0000000108b83369 apply_dispatch + 3913\n14 postgres 0x0000000108b82185 LogicalRepApplyLoop + 485\n15 postgres 0x0000000108b81f87 ApplyWorkerMain + 1047\n16 postgres 0x0000000108b474a2 StartBackgroundWorker + 386\n17 postgres 0x0000000108b55cf6 maybe_start_bgworkers + 1254\n18 postgres 0x0000000108b54510 sigusr1_handler + 464\n19 libsystem_platform.dylib 0x00007fff69f3d5fd _sigtramp + 29\n20 ??? 0x0000000000000000 0x0 + 0\n21 postgres 0x0000000108b537ae PostmasterMain + 3726\n22 postgres 0x0000000108aaa140 help + 0\n23 libdyld.dylib 0x00007fff69d44cc9 start + 1\n24 ??? 0x0000000000000004 0x0 + 4\n\nI believe the issue was introduced in commit 84f5c2908da which added EnsurePortalSnapshotExists. 
That's not going to work in the case of logical replication, because there isn't an ActivePortal nor a snapshot.\n\nAttached patch v1-0001 reliably reproduces the problem, though you have to Ctrl-C out of it, because the logical replication gets stuck in a loop after the Assert is triggered. You can see the stack trace by opening tmp_check/log/021_truncate_subscriber.log\n\n \n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 8 Jun 2021 14:52:14 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "logical replication of truncate command with trigger causes Assert" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> On master, when a statement level trigger is fired for a replicated truncate command, the following stack trace is generated:\n\nHmm.\n\n> I believe the issue was introduced in commit 84f5c2908da which added EnsurePortalSnapshotExists. That's not going to work in the case of logical replication, because there isn't an ActivePortal nor a snapshot.\n\nThe right way to say that is \"commit 84f5c2908da exposed the pre-existing\nunsafe behavior of this code\". It's not okay to run arbitrary user code\nwithout holding a snapshot to protect TOAST dereference operations.\n\nI suppose that either apply_dispatch or LogicalRepApplyLoop needs to\ngrow some more snapshot management logic, but I've not looked at that\ncode much, so I don't have an opinion on just where to add it.\n\nThere's a reasonable case to be made that running user code outside\na Portal is a just-plain-bad idea, so we should fix the logical\napply worker to make it create a suitable Portal. 
I speculated about\nthat in the commit message for 84f5c2908da, but I don't feel like\ntaking point on such work.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 08 Jun 2021 18:55:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: logical replication of truncate command with trigger causes\n Assert" }, { "msg_contents": "\n\n> On Jun 8, 2021, at 3:55 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> The right way to say that is \"commit 84f5c2908da exposed the pre-existing\n> unsafe behavior of this code\". It's not okay to run arbitrary user code\n> without holding a snapshot to protect TOAST dereference operations.\n\nSure, I didn't expect that you'd broken things so much as we now have an Assert where, at least for simple commands, things were working back in April. Those things may not have been working correctly -- I'll have to do some more test development to see if I can get the pre-84f5c2908da to misbehave -- but this may appear to be a regression in version 14 if we don't do something.\n\nCalling ExecuteTruncateGuts from within the logical replication worker was introduced in commit 039eb6e92f2, \"Logical replication support for TRUNCATE\", back in April 2018. So whatever we do will likely need to be backpatched.\n\n> I suppose that either apply_dispatch or LogicalRepApplyLoop needs to\n> grow some more snapshot management logic, but I've not looked at that\n> code much, so I don't have an opinion on just where to add it.\n\nI was looking at those for other reasons prior to hitting this bug. My purpose was to figure out how to get the code to respect privileges. Perhaps the solution to these two issues is related. I don't know yet.\n\nAs it stands, a subscription can only be created by a superuser, and the replication happens under that user's current_user and session_user. I naively thought that adding a built-in role pg_logical_replication which could create subscriptions would be of some use. 
I implemented that, but now if I create a user named \"replication_manager\" with membership in pg_logical_replication but not superuser, it turns out that even though the apply worker runs as replication_manager, the insert/update/delete commands work without checking privileges. (They can insert/update/delete tables and execute functions owned by a database superuser for which \"replication_manager\" has no privileges.) So I need to go a bit further to get acl checks called from this code path.\n\n> There's a reasonable case to be made that running user code outside\n> a Portal is a just-plain-bad idea, so we should fix the logical\n> apply worker to make it create a suitable Portal. I speculated about\n> that in the commit message for 84f5c2908da, but I don't feel like\n> taking point on such work.\n\nI'll dig into this a bit more.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 8 Jun 2021 16:23:47 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical replication of truncate command with trigger causes\n Assert" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> On Jun 8, 2021, at 3:55 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I suppose that either apply_dispatch or LogicalRepApplyLoop needs to\n>> grow some more snapshot management logic, but I've not looked at that\n>> code much, so I don't have an opinion on just where to add it.\n\n> I was looking at those for other reasons prior to hitting this bug.\n\nAfter looking at it a bit, I see a couple of options:\n\n1. Just wrap the call of ExecuteTruncateGuts with\nPushActiveSnapshot(GetTransactionSnapshot()) and PopActiveSnapshot().\n\n2. Decide that we ought to ensure that a snapshot exists throughout\nmost of this code. 
It's not entirely obvious to me that there is no\ncode path reachable from, say, apply_handle_truncate's collection of\nrelation OIDs that needs a snapshot. If we went for that, I'd think\nthe right solution is to do PushActiveSnapshot right after each\nensure_transaction call, and then PopActiveSnapshot on the way out of\nthe respective subroutine. We could then drop the snapshot management\ncalls that are currently associated with the executor state.\n\n> My purpose was to figure out how to get the code to respect\n> privileges. Perhaps the solution to these two issues is related.\n> I don't know yet.\n\nDoesn't seem tremendously related. But yeah, there is Stuff That\nIs Missing in these code paths.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 08 Jun 2021 19:58:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: logical replication of truncate command with trigger causes\n Assert" }, { "msg_contents": "On Wed, Jun 9, 2021 at 5:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> > On Jun 8, 2021, at 3:55 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I suppose that either apply_dispatch or LogicalRepApplyLoop needs to\n> >> grow some more snapshot management logic, but I've not looked at that\n> >> code much, so I don't have an opinion on just where to add it.\n>\n> > I was looking at those for other reasons prior to hitting this bug.\n>\n> After looking at it a bit, I see a couple of options:\n>\n> 1. Just wrap the call of ExecuteTruncateGuts with\n> PushActiveSnapshot(GetTransactionSnapshot()) and PopActiveSnapshot().\n>\n> 2. Decide that we ought to ensure that a snapshot exists throughout\n> most of this code. It's not entirely obvious to me that there is no\n> code path reachable from, say, apply_handle_truncate's collection of\n> relation OIDs that needs a snapshot. 
If we went for that, I'd think\n> the right solution is to do PushActiveSnapshot right after each\n> ensure_transaction call, and then PopActiveSnapshot on the way out of\n> the respective subroutine. We could then drop the snapshot management\n> calls that are currently associated with the executor state.\n>\n\n+1 for the second option as with that, apart from what you said it\nwill take off some load from future developers to decide which part of\nchanges should be after acquiring snapshot.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 9 Jun 2021 14:40:24 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication of truncate command with trigger causes\n Assert" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Wed, Jun 9, 2021 at 5:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> 2. Decide that we ought to ensure that a snapshot exists throughout\n>> most of this code. It's not entirely obvious to me that there is no\n>> code path reachable from, say, apply_handle_truncate's collection of\n>> relation OIDs that needs a snapshot. If we went for that, I'd think\n>> the right solution is to do PushActiveSnapshot right after each\n>> ensure_transaction call, and then PopActiveSnapshot on the way out of\n>> the respective subroutine. We could then drop the snapshot management\n>> calls that are currently associated with the executor state.\n\n> +1 for the second option as with that, apart from what you said it\n> will take off some load from future developers to decide which part of\n> changes should be after acquiring snapshot.\n\nHere's a draft patch for that. I decided the most sensible way to\norganize this is to pair the existing ensure_transaction() subroutine\nwith a cleanup subroutine. 
Rather unimaginatively, perhaps, I renamed\nit to begin_transaction_step and named the cleanup end_transaction_step.\n(Better ideas welcome.)\n\nAs written, this'll result in creating and deleting a snapshot for some\nstream-control messages that maybe don't need one; but the point here is\nnot to have to think too hard about whether they do, so that's OK with\nme. There are more CommandCounterIncrement calls than before, too,\nbut (a) those are cheap if there's nothing to do and (b) it's not real\nclear to me that the extra calls are not necessary.\n\nSomewhat unrelated, but ... am I reading the code correctly that\napply_handle_stream_start and related routines are using Asserts\nto check that the remote sent stream-control messages in the correct\norder? That seems many degrees short of acceptable.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 09 Jun 2021 10:52:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: logical replication of truncate command with trigger causes\n Assert" }, { "msg_contents": "\n\n> On Jun 9, 2021, at 7:52 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Here's a draft patch for that. I decided the most sensible way to\n> organize this is to pair the existing ensure_transaction() subroutine\n> with a cleanup subroutine. Rather unimaginatively, perhaps, I renamed\n> it to begin_transaction_step and named the cleanup end_transaction_step.\n> (Better ideas welcome.)\n\nThanks! The regression test I posted earlier passes with this patch applied.\n\n> Somewhat unrelated, but ... am I reading the code correctly that\n> apply_handle_stream_start and related routines are using Asserts\n> to check that the remote sent stream-control messages in the correct\n> order? 
That seems many degrees short of acceptable.\n\nEven if you weren't reading that correctly, this bit:\n\n xid = pq_getmsgint(s, 4);\n\n Assert(TransactionIdIsValid(xid));\n\nsimply asserts that the sending server didn't send an invalid subtransaction id.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 9 Jun 2021 08:14:25 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: logical replication of truncate command with trigger causes\n Assert" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Jun 9, 2021, at 7:52 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Somewhat unrelated, but ... am I reading the code correctly that\n>> apply_handle_stream_start and related routines are using Asserts\n>> to check that the remote sent stream-control messages in the correct\n>> order? That seems many degrees short of acceptable.\n\n> Even if you weren't reading that correctly, this bit:\n\n> xid = pq_getmsgint(s, 4);\n\n> Assert(TransactionIdIsValid(xid));\n\n> simply asserts that the sending server didn't send an invalid subtransaction id.\n\nUgh, yeah. We should never be using Asserts to validate incoming\nmessages -- a test-and-elog is more appropriate.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Jun 2021 11:23:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: logical replication of truncate command with trigger causes\n Assert" }, { "msg_contents": "On Wed, Jun 9, 2021 at 8:44 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>\n> > On Jun 9, 2021, at 7:52 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Here's a draft patch for that. I decided the most sensible way to\n> > organize this is to pair the existing ensure_transaction() subroutine\n> > with a cleanup subroutine. 
Rather unimaginatively, perhaps, I renamed\n> > it to begin_transaction_step and named the cleanup end_transaction_step.\n> > (Better ideas welcome.)\n>\n> Thanks! The regression test I posted earlier passes with this patch applied.\n>\n\nI have also read the patch and it looks good to me.\n\n> > Somewhat unrelated, but ... am I reading the code correctly that\n> > apply_handle_stream_start and related routines are using Asserts\n> > to check that the remote sent stream-control messages in the correct\n> > order?\n> >\n\nYes. I think you are talking about Assert(!in_streamed_transaction).\nThere is no particular reason that such Asserts are required, so we\ncan change to test-and-elog as you suggested later in your email.\n\n> That seems many degrees short of acceptable.\n>\n> Even if you weren't reading that correctly, this bit:\n>\n> xid = pq_getmsgint(s, 4);\n>\n> Assert(TransactionIdIsValid(xid));\n>\n> simply asserts that the sending server didn't send an invalid subtransaction id.\n>\n\nThis also needs to be changed to test-and-elog.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 10 Jun 2021 09:40:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication of truncate command with trigger causes\n Assert" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Wed, Jun 9, 2021 at 8:44 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> On Jun 9, 2021, at 7:52 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Somewhat unrelated, but ... am I reading the code correctly that\n>>> apply_handle_stream_start and related routines are using Asserts\n>>> to check that the remote sent stream-control messages in the correct\n>>> order?\n\n> This also needs to be changed to test-and-elog.\n\nHere's a proposed patch for this. It looks like pretty much all of the\nbogosity is new with the streaming business. 
You might quibble with\nwhich things I thought deserved elog versus ereport. Another thing\nI'm wondering is how many of these messages really need to be\ntranslated. We could use errmsg_internal and avoid burdening the\ntranslators, perhaps.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 10 Jun 2021 14:50:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: logical replication of truncate command with trigger causes\n Assert" }, { "msg_contents": "On Fri, Jun 11, 2021 at 12:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Wed, Jun 9, 2021 at 8:44 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> >> On Jun 9, 2021, at 7:52 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> Somewhat unrelated, but ... am I reading the code correctly that\n> >>> apply_handle_stream_start and related routines are using Asserts\n> >>> to check that the remote sent stream-control messages in the correct\n> >>> order?\n>\n> > This also needs to be changed to test-and-elog.\n>\n> Here's a proposed patch for this. It looks like pretty much all of the\n> bogosity is new with the streaming business.\n>\n\nExcept for the change in apply_handle_commit, which seems to\nbe from the time it was introduced in commit 7c4f5240\n\n> You might quibble with\n> which things I thought deserved elog versus ereport.\n\nI wonder why you used elog in handle_streamed_transaction and\napply_handle_commit? It seems all the other places use ereport for\nanything wrong it got from the protocol message.\n\n> Another thing\n> I'm wondering is how many of these messages really need to be\n> translated. 
We could use errmsg_internal and avoid burdening the\n> translators, perhaps.\n>\n\nNot sure but I see all existing similar ereport calls don't use errmsg_internal.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 11 Jun 2021 10:59:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication of truncate command with trigger causes\n Assert" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Fri, Jun 11, 2021 at 12:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Another thing\n>> I'm wondering is how many of these messages really need to be\n>> translated. We could use errmsg_internal and avoid burdening the\n>> translators, perhaps.\n\n> Not sure but I see all existing similar ereport calls don't use errmsg_internal.\n\nI was thinking maybe we could mark all these replication protocol\nviolation errors non-translatable. While we don't want to crash on a\nprotocol violation, it shouldn't really be a user-facing case either.\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Jun 2021 11:26:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: logical replication of truncate command with trigger causes\n Assert" }, { "msg_contents": "On Fri, Jun 11, 2021 at 8:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Fri, Jun 11, 2021 at 12:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Another thing\n> >> I'm wondering is how many of these messages really need to be\n> >> translated. We could use errmsg_internal and avoid burdening the\n> >> translators, perhaps.\n>\n> > Not sure but I see all existing similar ereport calls don't use errmsg_internal.\n>\n> I was thinking maybe we could mark all these replication protocol\n> violation errors non-translatable. 
While we don't want to crash on a\n> protocol violation, it shouldn't really be a user-facing case either.\n>\n\nI don't see any problem with that as these are not directly related to\nany user operation. So, +1 for making these non-translatable.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 12 Jun 2021 12:45:28 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication of truncate command with trigger causes\n Assert" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Fri, Jun 11, 2021 at 8:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I was thinking maybe we could mark all these replication protocol\n>> violation errors non-translatable. While we don't want to crash on a\n>> protocol violation, it shouldn't really be a user-facing case either.\n\n> I don't see any problem with that as these are not directly related to\n> any user operation. So, +1 for making these non-translatable.\n\nDone that way. On re-reading the code, there were a bunch more\nAsserts that could be triggered by bad input data, so the committed\npatch has rather more corrections than I posted before.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Jun 2021 13:01:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: logical replication of truncate command with trigger causes\n Assert" }, { "msg_contents": "On 2021-Jun-12, Tom Lane wrote:\n\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Fri, Jun 11, 2021 at 8:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I was thinking maybe we could mark all these replication protocol\n> >> violation errors non-translatable. While we don't want to crash on a\n> >> protocol violation, it shouldn't really be a user-facing case either.\n> \n> > I don't see any problem with that as these are not directly related to\n> > any user operation. 
So, +1 for making these non-translatable.\n> \n> Done that way.\n\nGood call, thanks. Not only it's not very useful to translate such\nmessages, but it's also quite a burden because some of them are\ndifficult to translate.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"Tiene valor aquel que admite que es un cobarde\" (Fernandel)\n\n\n", "msg_date": "Sat, 12 Jun 2021 14:43:08 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: logical replication of truncate command with trigger causes\n Assert" } ]
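[Editorial aside: the fix the thread converges on pairs begin_transaction_step/end_transaction_step so that any user code reached from the apply worker (triggers, TOAST dereferences) always runs under an active snapshot, and the snapshot is always popped on the way out. The sketch below is a toy Python model of that pairing — it is illustrative only, not PostgreSQL code; the names mirror PushActiveSnapshot/PopActiveSnapshot from the C discussion.]

```python
from contextlib import contextmanager

# Toy stand-in for PostgreSQL's active-snapshot stack.
active_snapshot_stack = []

@contextmanager
def transaction_step():
    """Model of begin_transaction_step/end_transaction_step: push a
    snapshot before running user code, pop it unconditionally after,
    even if the user code raises."""
    active_snapshot_stack.append("snapshot")   # PushActiveSnapshot(...)
    try:
        yield
    finally:
        active_snapshot_stack.pop()            # PopActiveSnapshot()

def apply_truncate(fire_trigger):
    with transaction_step():
        # User code (e.g. a statement-level trigger) must see a snapshot;
        # the original bug was that none existed here.
        assert active_snapshot_stack, "user code must see an active snapshot"
        fire_trigger()

apply_truncate(lambda: None)
print(len(active_snapshot_stack))   # 0: snapshot popped on the way out

def failing_trigger():
    raise RuntimeError("trigger failed")

try:
    apply_truncate(failing_trigger)
except RuntimeError:
    pass
print(len(active_snapshot_stack))   # 0: popped even when the trigger errors
```

The try/finally pairing is the key property: there is no code path out of the apply step, error or not, that leaves a stale snapshot pushed.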
[ { "msg_contents": "Hi,\n\ntest deadlock-simple ... ok 20 ms\ntest deadlock-hard ... ok 10624 ms\ntest deadlock-soft ... ok 147 ms\ntest deadlock-soft-2 ... ok 5154 ms\ntest deadlock-parallel ... ok 132 ms\ntest detach-partition-concurrently-1 ... ok 553 ms\ntest detach-partition-concurrently-2 ... ok 234 ms\ntest detach-partition-concurrently-3 ... ok 2389 ms\ntest detach-partition-concurrently-4 ... ok 1876 ms\n\nAny objections to making these new tests line up with the rest?", "msg_date": "Wed, 9 Jun 2021 13:57:45 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Adjust pg_regress output for new long test names" }, { "msg_contents": "On Wed, Jun 09, 2021 at 01:57:45PM +1200, Thomas Munro wrote:\n> Hi,\n> \n> test deadlock-simple ... ok 20 ms\n> test deadlock-hard ... ok 10624 ms\n> test deadlock-soft ... ok 147 ms\n> test deadlock-soft-2 ... ok 5154 ms\n> test deadlock-parallel ... ok 132 ms\n> test detach-partition-concurrently-1 ... ok 553 ms\n> test detach-partition-concurrently-2 ... ok 234 ms\n> test detach-partition-concurrently-3 ... ok 2389 ms\n> test detach-partition-concurrently-4 ... 
ok 1876 ms\n> \n> Any objections to making these new tests line up with the rest?\n\nNo objection, as the output is still way under 80 characters.\n\n\n", "msg_date": "Wed, 9 Jun 2021 10:17:35 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adjust pg_regress output for new long test names" }, { "msg_contents": "On Wed, Jun 09, 2021 at 10:17:35AM +0800, Julien Rouhaud wrote:\n> On Wed, Jun 09, 2021 at 01:57:45PM +1200, Thomas Munro wrote:\n>> Any objections to making these new tests line up with the rest?\n> \n> No objection, as the output is still way under 80 characters.\n\n+1.\n--\nMichael", "msg_date": "Wed, 9 Jun 2021 11:21:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Adjust pg_regress output for new long test names" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> test detach-partition-concurrently-1 ... ok 553 ms\n> test detach-partition-concurrently-2 ... ok 234 ms\n> test detach-partition-concurrently-3 ... ok 2389 ms\n> test detach-partition-concurrently-4 ... ok 1876 ms\n\n> Any objections to making these new tests line up with the rest?\n\n... or we could shorten those file names. I recall an episode\nawhile ago where somebody complained that their version of \"tar\"\ncouldn't handle some of the path names in our tarball, so\nkeeping things from getting to carpal-tunnel-inducing lengths\ndoes have its advantages.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 08 Jun 2021 22:44:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Adjust pg_regress output for new long test names" }, { "msg_contents": "On Wed, Jun 09, 2021 at 01:57:45PM +1200, Thomas Munro wrote:\n> test deadlock-simple ... ok 20 ms\n> test deadlock-hard ... ok 10624 ms\n> test deadlock-soft ... ok 147 ms\n> test deadlock-soft-2 ... ok 5154 ms\n> test deadlock-parallel ... 
ok 132 ms\n> test detach-partition-concurrently-1 ... ok 553 ms\n> test detach-partition-concurrently-2 ... ok 234 ms\n> test detach-partition-concurrently-3 ... ok 2389 ms\n> test detach-partition-concurrently-4 ... ok 1876 ms\n\n> Make the test output visually consistent, as previously done by commit\n> 14378245.\n\nNot bad, but I would instead shorten the names to detach-[1234] or\ndetach-partition-[1234]. The marginal value of the second word is low, and\nthe third word helps even less.\n\n> -\t\t\tstatus(_(\"test %-28s ... \"), tests[0]);\n> +\t\t\tstatus(_(\"test %-32s ... \"), tests[0]);\n\nAs the whitespace gulf widens, it gets harder to match left and right sides\nvisually. We'd cope of course, but wider spacing isn't quite free.\n\n\n", "msg_date": "Tue, 8 Jun 2021 19:51:29 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Adjust pg_regress output for new long test names" }, { "msg_contents": "[Responding to two emails in one]\n\nOn Wed, Jun 9, 2021 at 2:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> ... or we could shorten those file names. I recall an episode\n> awhile ago where somebody complained that their version of \"tar\"\n> couldn't handle some of the path names in our tarball, so\n> keeping things from getting to carpal-tunnel-inducing lengths\n> does have its advantages.\n\nOn Wed, Jun 9, 2021 at 2:51 PM Noah Misch <noah@leadboat.com> wrote:\n> Not bad, but I would instead shorten the names to detach-[1234] or\n> detach-partition-[1234]. 
The marginal value of the second word is low, and\n> the third word helps even less.\n\nAlright, CC'ing Alvaro who added the long names to see if he wants to\nconsider that.\n\nThere's one other case of this phenomenon:\ntuplelock-upgrade-no-deadlock overflows by one character.\n\n\n", "msg_date": "Wed, 9 Jun 2021 15:21:36 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adjust pg_regress output for new long test names" }, { "msg_contents": "On Wed, Jun 09, 2021 at 03:21:36PM +1200, Thomas Munro wrote:\n> On Wed, Jun 9, 2021 at 2:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > ... or we could shorten those file names. I recall an episode\n> > awhile ago where somebody complained that their version of \"tar\"\n> > couldn't handle some of the path names in our tarball, so\n> > keeping things from getting to carpal-tunnel-inducing lengths\n> > does have its advantages.\n> \n> On Wed, Jun 9, 2021 at 2:51 PM Noah Misch <noah@leadboat.com> wrote:\n> > Not bad, but I would instead shorten the names to detach-[1234] or\n> > detach-partition-[1234]. The marginal value of the second word is low, and\n> > the third word helps even less.\n\nBetter still, the numbers can change to something descriptive:\n\ndetach-1 => detach-visibility\ndetach-2 => detach-fk-FOO\ndetach-3 => detach-incomplete\ndetach-4 => detach-fk-BAR\n\nI don't grasp the difference between -2 and -4 enough to suggest concrete FOO\nand BAR words.\n\n> Alright, CC'ing Alvaro who added the long names to see if he wants to\n> consider that.\n\n\n", "msg_date": "Tue, 8 Jun 2021 21:56:58 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Adjust pg_regress output for new long test names" }, { "msg_contents": "On 2021-Jun-08, Noah Misch wrote:\n\n> On Wed, Jun 09, 2021 at 03:21:36PM +1200, Thomas Munro wrote:\n> > On Wed, Jun 9, 2021 at 2:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > ... 
or we could shorten those file names. I recall an episode\n> > > awhile ago where somebody complained that their version of \"tar\"\n> > > couldn't handle some of the path names in our tarball, so\n> > > keeping things from getting to carpal-tunnel-inducing lengths\n> > > does have its advantages.\n\nSure. I'm also the author of tuplelock-upgrade-no-deadlock -- see\ncommit de87a084c0a5. (Oleksii submitted it as \"rowlock-upgrade-deadlock\").\nWe could rename that one too while at it.\n\n> > On Wed, Jun 9, 2021 at 2:51 PM Noah Misch <noah@leadboat.com> wrote:\n> > > Not bad, but I would instead shorten the names to detach-[1234] or\n> > > detach-partition-[1234]. The marginal value of the second word is low, and\n> > > the third word helps even less.\n> \n> Better still, the numbers can change to something descriptive:\n> \n> detach-1 => detach-visibility\n> detach-2 => detach-fk-FOO\n> detach-3 => detach-incomplete\n> detach-4 => detach-fk-BAR\n> \n> I don't grasp the difference between -2 and -4 enough to suggest concrete FOO\n> and BAR words.\n\nLooking at -2, it looks like a very small subset of -4. I probably\nwrote it first and failed to realize I could extend that one rather than\ncreate -4. We could just delete it.\n\nWe also have partition-concurrent-attach.spec; what if we make\neverything a consistent set? We could have\n\npartition-attach\npartition-detach-visibility (-1)\npartition-detach-incomplete (-3)\npartition-detach-fk (-4)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Wed, 9 Jun 2021 09:31:24 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Adjust pg_regress output for new long test names" }, { "msg_contents": "On 09.06.21 04:51, Noah Misch wrote:\n> On Wed, Jun 09, 2021 at 01:57:45PM +1200, Thomas Munro wrote:\n>> test deadlock-simple ... ok 20 ms\n>> test deadlock-hard ... ok 10624 ms\n>> test deadlock-soft ... ok 147 ms\n>> test deadlock-soft-2 ... 
ok 5154 ms\n>> test deadlock-parallel ... ok 132 ms\n>> test detach-partition-concurrently-1 ... ok 553 ms\n>> test detach-partition-concurrently-2 ... ok 234 ms\n>> test detach-partition-concurrently-3 ... ok 2389 ms\n>> test detach-partition-concurrently-4 ... ok 1876 ms\n>> Make the test output visually consistent, as previously done by commit\n>> 14378245.\n> Not bad, but I would instead shorten the names to detach-[1234] or\n> detach-partition-[1234]. The marginal value of the second word is low, and\n> the third word helps even less.\n\nDETACH CONCURRENTLY is a separate feature from plain DETACH.\n\nBut \"partition\" is surely redundant here.\n\n\n", "msg_date": "Wed, 9 Jun 2021 19:36:06 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Adjust pg_regress output for new long test names" }, { "msg_contents": "On 09.06.21 03:57, Thomas Munro wrote:\n> test deadlock-simple ... ok 20 ms\n> test deadlock-hard ... ok 10624 ms\n> test deadlock-soft ... ok 147 ms\n> test deadlock-soft-2 ... ok 5154 ms\n> test deadlock-parallel ... ok 132 ms\n> test detach-partition-concurrently-1 ... ok 553 ms\n> test detach-partition-concurrently-2 ... ok 234 ms\n> test detach-partition-concurrently-3 ... ok 2389 ms\n> test detach-partition-concurrently-4 ... ok 1876 ms\n> \n> Any objections to making these new tests line up with the rest?\n\nCan we scan all the test names first and then pick a suitable length?\n\n\n\n\n", "msg_date": "Wed, 9 Jun 2021 19:37:01 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Adjust pg_regress output for new long test names" }, { "msg_contents": "On Wed, Jun 9, 2021 at 1:37 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> Can we scan all the test names first and then pick a suitable length?\n\nFWIW, I think this discussion of shortening the test case names is\nprobably going in the wrong direction. 
It's true that in many cases\nsuch a thing can be done, but it's also true that the test case\nauthors picked those names because they felt that they described those\ntest cases well. It's not unlikely that future test case authors will\nhave similar feelings and will again pick names that are a little bit\nlonger. It's also not impossible that in shortening the names we will\nmake them less clear. For example, Peter said that \"partition\" was\nredundant in something like \"detach-partition-concurrently-4,\" but\nthat is only true if you think that a partition is the only thing that\ncan be detached. That is true today as far as the SQL grammar is\nconcerned, but from a source code perspective we speak of detaching\nfrom shm_mq objects or DSMs, and there could be more things, internal\nor SQL-visible, in the future.\n\nNow I don't care all that much; this isn't worth getting worked up\nabout. But if it were me, I'd tend to err in the direction of\naccommodating longer test names, and only shorten them if it's clear\nthat someone *really* went overboard.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Jun 2021 16:31:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adjust pg_regress output for new long test names" }, { "msg_contents": "On Wed, Jun 09, 2021 at 09:31:24AM -0400, Alvaro Herrera wrote:\n> On 2021-Jun-08, Noah Misch wrote:\n> > > On Wed, Jun 9, 2021 at 2:51 PM Noah Misch <noah@leadboat.com> wrote:\n> > > > Not bad, but I would instead shorten the names to detach-[1234] or\n> > > > detach-partition-[1234]. 
The marginal value of the second word is low, and\n> > > > the third word helps even less.\n> > \n> > Better still, the numbers can change to something descriptive:\n> > \n> > detach-1 => detach-visibility\n> > detach-2 => detach-fk-FOO\n> > detach-3 => detach-incomplete\n> > detach-4 => detach-fk-BAR\n> > \n> > I don't grasp the difference between -2 and -4 enough to suggest concrete FOO\n> > and BAR words.\n> \n> Looking at -2, it looks like a very small subset of -4. I probably\n> wrote it first and failed to realize I could extend that one rather than\n> create -4. We could just delete it.\n> \n> We also have partition-concurrent-attach.spec; what if we make\n> everything a consistent set? We could have\n> \n> partition-attach\n> partition-detach-visibility (-1)\n> partition-detach-incomplete (-3)\n> partition-detach-fk (-4)\n\nThat works for me. I'd be fine with Peter Eisentraut's tweaks, too.\n\n\n", "msg_date": "Thu, 10 Jun 2021 18:43:44 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Adjust pg_regress output for new long test names" } ]
[ { "msg_contents": "Hello\n\nI recently migrated from version 8.3 of postgreSQL to v11, previously in\nall my queries for passing parameters I used the character :\n\nExample\nWhere id =: searched\n\nIn the new version when I try to make this query it sends me an error\n\nERROR syntax error at or near \":\"\n\nCould someone help me to know how I can configure the parameter passing\ncharacter or, failing that, how I should pass the parameters in this new\nversion.\n\nGreetings\n\n-- \nAtentamente Msc. Hassan Camacho.", "msg_date": "Wed, 9 Jun 2021 05:30:15 -0500", "msg_from": "Hassan Camacho Cadre <hccadre@gmail.com>", "msg_from_op": true, "msg_subject": "How to pass a parameter in a query to postgreSQL 11" }, { "msg_contents": "On Wed, Jun 09, 2021 at 05:30:15AM -0500, Hassan Camacho Cadre wrote:\n> I recently migrated from version 8.3 of postgreSQL to v11, previously in\n> all my queries for passing parameters I used the character :\n> Example\n> Where id =: searched\n\nI guess you migrated to a whole new environment, with many new package\nversions, not just postgres ?\n\nWe don't know how you're issuing queries, but I'm guessing some other\napplication is what changed.\n\nPostgres uses $1 for query parameters, in both v8.3 and in v11.\nhttps://www.postgresql.org/docs/8.3/libpq-exec.html\n\nBTW, this is the list for development of postgres itself. It's much too busy\nto also answer other questions. 
Please raise the question on the -general\nlist, with information about your environment.\nhttps://www.postgresql.org/list/\n\nThanks,\n-- \nJustin\n\n> In the new version when I try to make this query it sends me an error\n> \n> ERROR syntax error at or near \":\"\n> \n> Could someone help me to know how I can configure the parameter passing\n> character or, failing that, how I should pass the parameters in this new\n> version.\n\n\n", "msg_date": "Wed, 9 Jun 2021 09:52:07 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: How to pass a parameter in a query to postgreSQL 11 (offtopic)" } ]
[ { "msg_contents": "Hi.\n\nThis patch allows pushing case expressions to foreign servers, so that \nmore types of updates could be executed directly.\n\nFor example, without patch:\n\nEXPLAIN (VERBOSE, COSTS OFF)\nUPDATE ft2 d SET c2 = CASE WHEN c2 > 0 THEN c2 ELSE 0 END\nWHERE c1 > 1000;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Update on public.ft2 d\n Remote SQL: UPDATE \"S 1\".\"T 1\" SET c2 = $2 WHERE ctid = $1\n -> Foreign Scan on public.ft2 d\n Output: CASE WHEN (c2 > 0) THEN c2 ELSE 0 END, ctid, d.*\n Remote SQL: SELECT \"C 1\", c2, c3, c4, c5, c6, c7, c8, ctid FROM \n\"S 1\".\"T 1\" WHERE ((\"C 1\" > 1000)) FOR UPDATE\n\n\nWith patch:\n\nEXPLAIN (VERBOSE, COSTS OFF)\nUPDATE ft2 d SET c2 = CASE WHEN c2 > 0 THEN c2 ELSE 0 END\nWHERE c1 > 1000;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Update on public.ft2 d\n -> Foreign Update on public.ft2 d\n Remote SQL: UPDATE \"S 1\".\"T 1\" SET c2 = (CASE WHEN (c2 > 0) \nTHEN c2 ELSE 0 END) WHERE ((\"C 1\" > 1000))\n\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Wed, 09 Jun 2021 14:55:19 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Case expression pushdown" }, { "msg_contents": "Looks quite useful to me. 
Can you please add this to the next commitfest?\n\nOn Wed, Jun 9, 2021 at 5:25 PM Alexander Pyhalov\n<a.pyhalov@postgrespro.ru> wrote:\n>\n> Hi.\n>\n> This patch allows pushing case expressions to foreign servers, so that\n> more types of updates could be executed directly.\n>\n> For example, without patch:\n>\n> EXPLAIN (VERBOSE, COSTS OFF)\n> UPDATE ft2 d SET c2 = CASE WHEN c2 > 0 THEN c2 ELSE 0 END\n> WHERE c1 > 1000;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------\n> Update on public.ft2 d\n> Remote SQL: UPDATE \"S 1\".\"T 1\" SET c2 = $2 WHERE ctid = $1\n> -> Foreign Scan on public.ft2 d\n> Output: CASE WHEN (c2 > 0) THEN c2 ELSE 0 END, ctid, d.*\n> Remote SQL: SELECT \"C 1\", c2, c3, c4, c5, c6, c7, c8, ctid FROM\n> \"S 1\".\"T 1\" WHERE ((\"C 1\" > 1000)) FOR UPDATE\n>\n>\n> EXPLAIN (VERBOSE, COSTS OFF)\n> UPDATE ft2 d SET c2 = CASE WHEN c2 > 0 THEN c2 ELSE 0 END\n> WHERE c1 > 1000;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------\n> Update on public.ft2 d\n> -> Foreign Update on public.ft2 d\n> Remote SQL: UPDATE \"S 1\".\"T 1\" SET c2 = (CASE WHEN (c2 > 0)\n> THEN c2 ELSE 0 END) WHERE ((\"C 1\" > 1000))\n>\n>\n> --\n> Best regards,\n> Alexander Pyhalov,\n> Postgres Professional\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 15 Jun 2021 18:54:30 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Case expression pushdown" }, { "msg_contents": "Hi.\n\nAshutosh Bapat писал 2021-06-15 16:24:\n> Looks quite useful to me. Can you please add this to the next \n> commitfest?\n> \n\nAddded to commitfest. 
Here is an updated patch version.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Tue, 15 Jun 2021 19:29:17 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Case expression pushdown" }, { "msg_contents": "On 2021-06-16 01:29, Alexander Pyhalov wrote:\n> Hi.\n> \n> Ashutosh Bapat писал 2021-06-15 16:24:\n>> Looks quite useful to me. Can you please add this to the next \n>> commitfest?\n>> \n> \n> Addded to commitfest. Here is an updated patch version.\n\nThanks for posting the patch.\nI agree with this content.\n\n> + Foreign Scan on public.ft2 (cost=156.58..165.45 rows=394 width=14)\nIt's not a big issue, but is there any intention behind the pattern of\noutputting costs in regression tests?\n\nRegards,\n\n-- \nYuki Seino\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 22 Jun 2021 22:03:02 +0900", "msg_from": "Seino Yuki <seinoyu@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Case expression pushdown" }, { "msg_contents": "Seino Yuki писал 2021-06-22 16:03:\n> On 2021-06-16 01:29, Alexander Pyhalov wrote:\n>> Hi.\n>> \n>> Ashutosh Bapat писал 2021-06-15 16:24:\n>>> Looks quite useful to me. Can you please add this to the next \n>>> commitfest?\n>>> \n>> \n>> Addded to commitfest. Here is an updated patch version.\n> \n> Thanks for posting the patch.\n> I agree with this content.\n> \n>> + Foreign Scan on public.ft2 (cost=156.58..165.45 rows=394 width=14)\n> It's not a big issue, but is there any intention behind the pattern of\n> outputting costs in regression tests?\n\nHi.\n\nNo, I don't think it makes much sense. 
Updated tests (also added case \nwith empty else).\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Tue, 22 Jun 2021 16:39:43 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Case expression pushdown" }, { "msg_contents": "Le 22/06/2021 à 15:39, Alexander Pyhalov a écrit :\n> Seino Yuki писал 2021-06-22 16:03:\n>> On 2021-06-16 01:29, Alexander Pyhalov wrote:\n>>> Hi.\n>>>\n>>> Ashutosh Bapat писал 2021-06-15 16:24:\n>>>> Looks quite useful to me. Can you please add this to the next \n>>>> commitfest?\n>>>>\n>>>\n>>> Addded to commitfest. Here is an updated patch version.\n>>\n>> Thanks for posting the patch.\n>> I agree with this content.\n>>\n>>> + Foreign Scan on public.ft2 (cost=156.58..165.45 rows=394 width=14)\n>> It's not a big issue, but is there any intention behind the pattern of\n>> outputting costs in regression tests?\n>\n> Hi.\n>\n> No, I don't think it makes much sense. Updated tests (also added case \n> with empty else).\n\n\nThe patch doesn't apply anymore to master, I join an update of your \npatch update in attachment. This is your patch rebased and untouched \nminus a comment in the test and renamed to v4.\n\n\nI could have miss something but I don't think that additional struct \nelements case_args in structs foreign_loc_cxt and deparse_expr_cxt are \nnecessary. They look to be useless.\n\nThe patch will also be more clear if the CaseWhen node was handled \nseparately in foreign_expr_walker() instead of being handled in the \nT_CaseExpr case. By this way the T_CaseExpr case just need to call \nrecursively foreign_expr_walker(). I also think that code in \nT_CaseTestExpr should just check the collation, there is nothing more to \ndo here like you have commented the function deparseCaseTestExpr(). 
This \nfunction can be removed as it does nothing if the case_args elements are \nremoved.\n\n\nThere is a problem the regression test with nested CASE clauses:\n\n EXPLAIN (VERBOSE, COSTS OFF)\n SELECT c1,c2,c3 FROM ft2 WHERE CASE CASE WHEN c2 > 0 THEN c2 END\n WHEN 100 THEN 601 WHEN c2 THEN c2 ELSE 0 END > 600 ORDER BY c1;\n\nthe original query use \"WHERE CASE CASE WHEN\" but the remote query is \nnot the same in the plan:\n\n Remote SQL: SELECT \"C 1\", c2, c3 FROM \"S 1\".\"T 1\" WHERE (((CASE WHEN\n ((CASE WHEN (c2 > 0) THEN c2 ELSE NULL::integer END) = 100) THEN 601\n WHEN ((CASE WHEN (c2 > 0) THEN c2 ELSE NULL::integer END) = c2) THEN\n c2 ELSE 0 END) > 600)) ORDER BY \"C 1\" ASC NULLS LAST\n\nHere this is \"WHERE (((CASE WHEN ((CASE WHEN\" I expected it to be \nunchanged to \"WHERE (((CASE (CASE WHEN\".\n\n\nAlso I would like the following regression tests to be added. It test \nthat the CASE clause in aggregate and function is pushed down as well as \nthe aggregate function. This was the original use case that I wanted to \nfix with this feature.\n\n -- CASE in aggregate function, both must be pushed down\n EXPLAIN (VERBOSE, COSTS OFF)\n SELECT sum(CASE WHEN mod(c1, 4) = 0 THEN 1 ELSE 2 END) FROM ft1;\n -- Same but without the ELSE clause\n EXPLAIN (VERBOSE, COSTS OFF)\n SELECT sum(CASE WHEN mod(c1, 4) = 0 THEN 1 END) FROM ft1;\n\n\nFor convenience I'm attaching a new patch v5 that change the code \nfollowing my comments above, fix the nested CASE issue and adds more \nregression tests.\n\n\nBest regards,\n\n-- \nGilles Darold", "msg_date": "Wed, 7 Jul 2021 14:02:33 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": false, "msg_subject": "Re: Case expression pushdown" }, { "msg_contents": "Hi.\n\nGilles Darold писал 2021-07-07 15:02:\n\n> Le 22/06/2021 à 15:39, Alexander Pyhalov a écrit :\n> \n>> Seino Yuki писал 2021-06-22 16:03:\n>> On 2021-06-16 01:29, Alexander Pyhalov wrote:\n>> Hi.\n>> \n>> Ashutosh Bapat писал 2021-06-15 
16:24:\n>> Looks quite useful to me. Can you please add this to the next\n>> commitfest?\n>> \n>> Addded to commitfest. Here is an updated patch version.\n> \n> Thanks for posting the patch.\n> I agree with this content.\n> \n>> + Foreign Scan on public.ft2 (cost=156.58..165.45 rows=394\n>> width=14)\n> It's not a big issue, but is there any intention behind the pattern\n> of\n> outputting costs in regression tests?\n> \n> Hi.\n> \n> No, I don't think it makes much sense. Updated tests (also added case\n> with empty else).\n> \n> The patch doesn't apply anymore to master, I join an update of your\n> patch update in attachment. This is your patch rebased and untouched\n> minus a comment in the test and renamed to v4.\n> \n> I could have miss something but I don't think that additional struct\n> elements case_args in structs foreign_loc_cxt and deparse_expr_cxt are\n> necessary. They look to be useless.\n\nI thought we should compare arg collation and expression collation and \ndidn't suggest, that we can take CaseTestExpr's collation directly, \nwithout deriving it from CaseExpr's arg. Your version of course looks \nsaner.\n\n> \n> The patch will also be more clear if the CaseWhen node was handled\n> separately in foreign_expr_walker() instead of being handled in the\n> T_CaseExpr case. By this way the T_CaseExpr case just need to call\n> recursively foreign_expr_walker(). 
I also think that code in\n> T_CaseTestExpr should just check the collation, there is nothing more\n> to do here like you have commented the function deparseCaseTestExpr().\n> This function can be removed as it does nothing if the case_args\n> elements are removed.\n> \n> There is a problem the regression test with nested CASE clauses:\n> \n>> EXPLAIN (VERBOSE, COSTS OFF)\n>> SELECT c1,c2,c3 FROM ft2 WHERE CASE CASE WHEN c2 > 0 THEN c2 END\n>> WHEN 100 THEN 601 WHEN c2 THEN c2 ELSE 0 END > 600 ORDER BY c1;\n> \n> the original query use \"WHERE CASE CASE WHEN\" but the remote query is\n> not the same in the plan:\n> \n>> Remote SQL: SELECT \"C 1\", c2, c3 FROM \"S 1\".\"T 1\" WHERE (((CASE WHEN\n>> ((CASE WHEN (c2 > 0) THEN c2 ELSE NULL::integer END) = 100) THEN 601\n>> WHEN ((CASE WHEN (c2 > 0) THEN c2 ELSE NULL::integer END) = c2) THEN\n>> c2 ELSE 0 END) > 600)) ORDER BY \"C 1\" ASC NULLS LAST\n> \n> Here this is \"WHERE (((CASE WHEN ((CASE WHEN\" I expected it to be\n> unchanged to \"WHERE (((CASE (CASE WHEN\".\n\nI'm not sure this is an issue (as we change CASE A WHEN B ... to CASE \nWHEN (A=B)),\nand expressions should be free from side effects, but again your version\nlooks better.\n\nThanks for improving the patch, it looks saner now.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Wed, 07 Jul 2021 18:39:02 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Case expression pushdown" }, { "msg_contents": "Le 07/07/2021 à 17:39, Alexander Pyhalov a écrit :\n> Hi.\n>\n> Gilles Darold писал 2021-07-07 15:02:\n>\n>> Le 22/06/2021 à 15:39, Alexander Pyhalov a écrit :\n>>\n>>> Seino Yuki писал 2021-06-22 16:03:\n>>> On 2021-06-16 01:29, Alexander Pyhalov wrote:\n>>> Hi.\n>>>\n>>> Ashutosh Bapat писал 2021-06-15 16:24:\n>>> Looks quite useful to me. Can you please add this to the next\n>>> commitfest?\n>>>\n>>> Addded to commitfest. 
Here is an updated patch version.\n>>\n>> Thanks for posting the patch.\n>> I agree with this content.\n>>\n>>> + Foreign Scan on public.ft2 (cost=156.58..165.45 rows=394\n>>> width=14)\n>>  It's not a big issue, but is there any intention behind the pattern\n>> of\n>> outputting costs in regression tests?\n>>\n>> Hi.\n>>\n>> No, I don't think it makes much sense. Updated tests (also added case\n>> with empty else).\n>>\n>> The patch doesn't apply anymore to master, I join an update of your\n>> patch update in attachment. This is your patch rebased and untouched\n>> minus a comment in the test and renamed to v4.\n>>\n>> I could have miss something but I don't think that additional struct\n>> elements case_args in structs foreign_loc_cxt and deparse_expr_cxt are\n>> necessary. They look to be useless.\n>\n> I thought we should compare arg collation and expression collation and \n> didn't suggest, that we can take CaseTestExpr's collation directly, \n> without deriving it from CaseExpr's arg. Your version of course looks \n> saner.\n>\n>>\n>> The patch will also be more clear if the CaseWhen node was handled\n>> separately in foreign_expr_walker() instead of being handled in the\n>> T_CaseExpr case. By this way the T_CaseExpr case just need to call\n>> recursively foreign_expr_walker(). 
I also think that code in\n>> T_CaseTestExpr should just check the collation, there is nothing more\n>> to do here like you have commented the function deparseCaseTestExpr().\n>> This function can be removed as it does nothing if the case_args\n>> elements are removed.\n>>\n>> There is a problem the regression test with nested CASE clauses:\n>>\n>>> EXPLAIN (VERBOSE, COSTS OFF)\n>>> SELECT c1,c2,c3 FROM ft2 WHERE CASE CASE WHEN c2 > 0 THEN c2 END\n>>> WHEN 100 THEN 601 WHEN c2 THEN c2 ELSE 0 END > 600 ORDER BY c1;\n>>\n>> the original query use \"WHERE CASE CASE WHEN\" but the remote query is\n>> not the same in the plan:\n>>\n>>> Remote SQL: SELECT \"C 1\", c2, c3 FROM \"S 1\".\"T 1\" WHERE (((CASE WHEN\n>>> ((CASE WHEN (c2 > 0) THEN c2 ELSE NULL::integer END) = 100) THEN 601\n>>> WHEN ((CASE WHEN (c2 > 0) THEN c2 ELSE NULL::integer END) = c2) THEN\n>>> c2 ELSE 0 END) > 600)) ORDER BY \"C 1\" ASC NULLS LAST\n>>\n>> Here this is \"WHERE (((CASE WHEN ((CASE WHEN\" I expected it to be\n>> unchanged to \"WHERE (((CASE (CASE WHEN\".\n>\n> I'm not sure this is an issue (as we change CASE A WHEN B ... 
to CASE \n> WHEN (A=B)),\n> and expressions should be free from side effects, but again your version\n> looks better.\n\n\nRight it returns the same result but I think this is confusing to not \nsee the same case form in the remote query.\n\n\n>\n> Thanks for improving the patch, it looks saner now.\n\n\nGreat, I changing the state in the commitfest to \"Ready for committers\".\n\n\n-- \nGilles Darold\nMigOps Inc\n\n\n\n", "msg_date": "Wed, 7 Jul 2021 18:50:37 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": false, "msg_subject": "Re: Case expression pushdown" }, { "msg_contents": "Le 07/07/2021 à 18:50, Gilles Darold a écrit :\n>\n> Great, I changing the state in the commitfest to \"Ready for committers\".\n>\n>\nI'm attaching the v5 patch again as it doesn't appears in the Latest \nattachment list in the commitfest.\n\n\n-- \nGilles Darold\nMigOps Inc", "msg_date": "Wed, 7 Jul 2021 18:55:51 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": false, "msg_subject": "Re: Case expression pushdown" }, { "msg_contents": "Le 07/07/2021 à 18:55, Gilles Darold a écrit :\n> Le 07/07/2021 à 18:50, Gilles Darold a écrit :\n>>\n>> Great, I changing the state in the commitfest to \"Ready for committers\".\n>>\n>>\n> I'm attaching the v5 patch again as it doesn't appears in the Latest \n> attachment list in the commitfest.\n>\n>\nAnd the review summary:\n\n\nThis patch allows pushing CASE expressions to foreign servers, so that:\n\n   - more types of updates could be executed directly\n   - full foreign table scan can be avoid\n   - more push down of aggregates function\n\nThe patch compile and regressions tests with assert enabled passed \nsuccessfully.\nThere is a compiler warning but it is not related to this patch:\n\n         deparse.c: In function ‘foreign_expr_walker.isra.0’:\n         deparse.c:891:28: warning: ‘collation’ may be used \nuninitialized in this function [-Wmaybe-uninitialized]\n           891 |       
outer_cxt->collation = collation;\n               |       ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~\n         deparse.c:874:10: warning: ‘state’ may be used uninitialized in \nthis function [-Wmaybe-uninitialized]\n           874 |  else if (state == outer_cxt->state)\n               |          ^\n\nThe regression test for this feature contains the use cases where push \ndown of CASE clause are useful.\nNested CASE are also part of the regression tests.\n\nThe patch adds insignificant overhead by processing further than before \na case expression but overall it adds a major performance improvement \nfor queries on foreign tables that use a CASE WHEN clause in a predicate \nor in an aggregate function.\n\n\nThis patch does what it claims to do without detect problem, as expected \nthe CASE clause is not pushed when a local table is involved in the CASE \nexpression of if a non default collation is used.\n\nReady for committers.\n\n\n\n", "msg_date": "Wed, 7 Jul 2021 20:28:34 +0200", "msg_from": "Gilles Darold <gillesdarold@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Case expression pushdown" }, { "msg_contents": "Gilles Darold <gilles@migops.com> writes:\n> I'm attaching the v5 patch again as it doesn't appears in the Latest \n> attachment list in the commitfest.\n\nSo this has a few issues:\n\n1. In foreign_expr_walker, you're failing to recurse to either the\n\"arg\" or \"defresult\" subtrees of a CaseExpr, so that it would fail\nto notice unshippable constructs within those.\n\n2. You're also failing to guard against the hazard that a WHEN\nexpression within a CASE-with-arg has been expanded into something\nthat doesn't look like \"CaseTestExpr = something\". As written,\nthis patch would likely dump core in that situation, and if it didn't\nit would send nonsense to the remote server. Take a look at the\ncheck for that situation in ruleutils.c (starting at line 8764\nas of HEAD) and adapt it to this. 
Probably what you want is to\njust deem the CASE un-pushable if it's been modified away from that\nstructure. This is enough of a corner case that optimizing it\nisn't worth a great deal of trouble ... but crashing is not ok.\n\n3. A potentially uncomfortable issue for the CASE-with-arg syntax\nis that the specific equality operator being used appears nowhere\nin the decompiled expression, thus raising the question of whether\nthe remote server will interpret it the same way we did. Given\nthat we restrict the values-to-be-compared to be of shippable\ntypes, maybe this is safe in practice, but I have a bad feeling\nabout it. I wonder if we'd be better off just refusing to ship\nCASE-with-arg at all, which would a-fortiori avoid point 2.\n\n4. I'm not sure that I believe any part of the collation handling.\nThere is the question of what collations will be used for the\nindividual WHEN comparisons, which can probably be left for\nthe recursive checks of the CaseWhen.expr subtrees to handle;\nand then there is the separate issue of whether the CASE's result\ncollation (which arises from the CaseWhen.result exprs plus the\nCaseExpr.defresult expr) can be deemed to be safely derived from\nremote Vars. I haven't totally thought through how that should\nwork, but I'm pretty certain that handling the CaseWhen's within\nseparate recursive invocations of foreign_expr_walker cannot\npossibly get it right. However, you'll likely have to flatten\nthose anyway (i.e., handle them within the loop in the CaseExpr\ncase) while fixing point 2.\n\n5. This is a cosmetic point, but: the locations of the various\nadditions in deparse.c seem to have been chosen with the aid\nof a dartboard. 
We do have a convention for this sort of thing,\nwhich is to lay out code concerned with different node types\nin the same order that the node types are declared in *nodes.h.\nI'm not sufficiently anal to want to fix the existing violations\nof that rule that I see in deparse.c; but the fact that somebody\ngot this wrong before isn't license to make things worse.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 21 Jul 2021 12:49:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Case expression pushdown" }, { "msg_contents": "Tom Lane wrote on 2021-07-21 19:49:\n> Gilles Darold <gilles@migops.com> writes:\n>> I'm attaching the v5 patch again as it doesn't appear in the Latest\n>> attachment list in the commitfest.\n> \n> So this has a few issues:\n\nHi.\n\n> \n> 1. In foreign_expr_walker, you're failing to recurse to either the\n> \"arg\" or \"defresult\" subtrees of a CaseExpr, so that it would fail\n> to notice unshippable constructs within those.\n\nFixed this.\n\n> \n> 2. You're also failing to guard against the hazard that a WHEN\n> expression within a CASE-with-arg has been expanded into something\n> that doesn't look like \"CaseTestExpr = something\". As written,\n> this patch would likely dump core in that situation, and if it didn't\n> it would send nonsense to the remote server. Take a look at the\n> check for that situation in ruleutils.c (starting at line 8764\n> as of HEAD) and adapt it to this. Probably what you want is to\n> just deem the CASE un-pushable if it's been modified away from that\n> structure. This is enough of a corner case that optimizing it\n> isn't worth a great deal of trouble ... but crashing is not ok.\n> \n\nI think I fixed this by copying the check from ruleutils.c.\n\n\n> 3. 
A potentially uncomfortable issue for the CASE-with-arg syntax\n> is that the specific equality operator being used appears nowhere\n> in the decompiled expression, thus raising the question of whether\n> the remote server will interpret it the same way we did. Given\n> that we restrict the values-to-be-compared to be of shippable\n> types, maybe this is safe in practice, but I have a bad feeling\n> about it. I wonder if we'd be better off just refusing to ship\n> CASE-with-arg at all, which would a-fortiori avoid point 2.\n\nI'm not sure how 'case a when b ...' is different from 'case when a=b \n...'\nin this case. If the type of a or b is not shippable, we will not push down\nthis expression in any way. And if they are of builtin types, why do\nthese expressions differ?\n\n> \n> 4. I'm not sure that I believe any part of the collation handling.\n> There is the question of what collations will be used for the\n> individual WHEN comparisons, which can probably be left for\n> the recursive checks of the CaseWhen.expr subtrees to handle;\n> and then there is the separate issue of whether the CASE's result\n> collation (which arises from the CaseWhen.result exprs plus the\n> CaseExpr.defresult expr) can be deemed to be safely derived from\n> remote Vars. I haven't totally thought through how that should\n> work, but I'm pretty certain that handling the CaseWhen's within\n> separate recursive invocations of foreign_expr_walker cannot\n> possibly get it right. However, you'll likely have to flatten\n> those anyway (i.e., handle them within the loop in the CaseExpr\n> case) while fixing point 2.\n\nI've tried to account for the fact that we are interested only in\ncaseWhen->result collations, but I am still not sure that I'm right here.\n\n> \n> 5. This is a cosmetic point, but: the locations of the various\n> additions in deparse.c seem to have been chosen with the aid\n> of a dartboard. 
We do have a convention for this sort of thing,\n> which is to lay out code concerned with different node types\n> in the same order that the node types are declared in *nodes.h.\n> I'm not sufficiently anal to want to fix the existing violations\n> of that rule that I see in deparse.c; but the fact that somebody\n> got this wrong before isn't license to make things worse.\n> \n> \t\t\tregards, tom lane\n\nFixed this.\n\nThanks for review.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Thu, 22 Jul 2021 12:13:54 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Case expression pushdown" }, { "msg_contents": "Alexander Pyhalov <a.pyhalov@postgrespro.ru> writes:\n> [ 0001-Allow-pushing-CASE-expression-to-foreign-server-v6.patch ]\n\nThis doesn't compile cleanly:\n\ndeparse.c: In function 'foreign_expr_walker.isra.4':\ndeparse.c:920:8: warning: 'collation' may be used uninitialized in this function [-Wmaybe-uninitialized]\n if (collation != outer_cxt->collation)\n ^\ndeparse.c:914:3: warning: 'state' may be used uninitialized in this function [-Wmaybe-uninitialized]\n switch (state)\n ^~~~~~\n\nThese uninitialized variables very likely explain the fact that it fails\nregression tests, both for me and for the cfbot. 
Even if this weren't an\noutright bug, we don't tolerate code that produces warnings on common\ncompilers.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Jul 2021 11:18:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Case expression pushdown" }, { "msg_contents": "Tom Lane wrote on 2021-07-26 18:18:\n> Alexander Pyhalov <a.pyhalov@postgrespro.ru> writes:\n>> [ 0001-Allow-pushing-CASE-expression-to-foreign-server-v6.patch ]\n> \n> This doesn't compile cleanly:\n> \n> deparse.c: In function 'foreign_expr_walker.isra.4':\n> deparse.c:920:8: warning: 'collation' may be used uninitialized in\n> this function [-Wmaybe-uninitialized]\n> if (collation != outer_cxt->collation)\n> ^\n> deparse.c:914:3: warning: 'state' may be used uninitialized in this\n> function [-Wmaybe-uninitialized]\n> switch (state)\n> ^~~~~~\n> \n> These uninitialized variables very likely explain the fact that it \n> fails\n> regression tests, both for me and for the cfbot. Even if this weren't \n> an\n> outright bug, we don't tolerate code that produces warnings on common\n> compilers.\n> \n> \t\t\tregards, tom lane\n\nHi.\n\nOf course, this is a patch issue. I don't understand how I overlooked \nthis.\nRebased on master and fixed it. 
Tests are passing here (but they also \npassed for the previous patch version).\n\nWhat exact tests are failing?\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Mon, 26 Jul 2021 19:03:54 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Case expression pushdown" }, { "msg_contents": "On 26/07/2021 at 18:03, Alexander Pyhalov wrote:\n> Tom Lane wrote on 2021-07-26 18:18:\n>> Alexander Pyhalov <a.pyhalov@postgrespro.ru> writes:\n>>> [ 0001-Allow-pushing-CASE-expression-to-foreign-server-v6.patch ]\n>>\n>> This doesn't compile cleanly:\n>>\n>> deparse.c: In function 'foreign_expr_walker.isra.4':\n>> deparse.c:920:8: warning: 'collation' may be used uninitialized in\n>> this function [-Wmaybe-uninitialized]\n>>      if (collation != outer_cxt->collation)\n>>         ^\n>> deparse.c:914:3: warning: 'state' may be used uninitialized in this\n>> function [-Wmaybe-uninitialized]\n>>    switch (state)\n>>    ^~~~~~\n>>\n>> These uninitialized variables very likely explain the fact that it fails\n>> regression tests, both for me and for the cfbot.  Even if this \n>> weren't an\n>> outright bug, we don't tolerate code that produces warnings on common\n>> compilers.\n>>\n>>             regards, tom lane\n>\n> Hi.\n>\n> Of course, this is a patch issue. I don't understand how I overlooked this.\n> Rebased on master and fixed it. 
Tests are passing here (but they also \n> passed for the previous patch version).\n>\n> What exact tests are failing?\n>\n\nI confirm that there is no compilation warning and all regression tests \npass successfully for the v7 patch. I have not checked the previous patch, \nbut this one doesn't fail on cfbot either.\n\n\nBest regards,\n\n-- \nGilles Darold\n\n\n\n", "msg_date": "Wed, 28 Jul 2021 16:29:34 +0200", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": false, "msg_subject": "Re: Case expression pushdown" }, { "msg_contents": "Alexander Pyhalov <a.pyhalov@postgrespro.ru> writes:\n> [ 0001-Allow-pushing-CASE-expression-to-foreign-server-v7.patch ]\n\nI looked this over. It's better than before, but the collation\nhandling is still not at all correct. We have to consider that a\nCASE's arg expression supplies the collation for a contained\nCaseTestExpr, otherwise we'll come to the wrong conclusions about\nwhether \"CASE foreignvar WHEN ...\" is shippable, if the foreignvar\nis what's determining collation of the comparisons.\n\nThis means that the CaseExpr level of recursion has to pass data down\nto the CaseTestExpr level. In the attached, I did that by adding an\nadditional argument to foreign_expr_walker(). That's a bit invasive,\nbut it's not awful. I thought about instead adding fields to the\nforeign_loc_cxt struct. But that seemed considerably messier in the\nend, because we'd then have some fields that are information sourced\nat one recursion level and some that are info sourced at another\nlevel.\n\nI also whacked the regression test cases around a lot. 
They seemed\nto spend a lot of time on irrelevant combinations, while failing to\ncheck the things that matter, namely whether collation-based pushdown\ndecisions are made correctly.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 29 Jul 2021 16:54:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Case expression pushdown" }, { "msg_contents": "Tom Lane wrote on 2021-07-29 23:54:\n> Alexander Pyhalov <a.pyhalov@postgrespro.ru> writes:\n>> [ 0001-Allow-pushing-CASE-expression-to-foreign-server-v7.patch ]\n> \n> I looked this over. It's better than before, but the collation\n> handling is still not at all correct. We have to consider that a\n> CASE's arg expression supplies the collation for a contained\n> CaseTestExpr, otherwise we'll come to the wrong conclusions about\n> whether \"CASE foreignvar WHEN ...\" is shippable, if the foreignvar\n> is what's determining collation of the comparisons.\n> \n> This means that the CaseExpr level of recursion has to pass data down\n> to the CaseTestExpr level. In the attached, I did that by adding an\n> additional argument to foreign_expr_walker(). That's a bit invasive,\n> but it's not awful. I thought about instead adding fields to the\n> foreign_loc_cxt struct. But that seemed considerably messier in the\n> end, because we'd then have some fields that are information sourced\n> at one recursion level and some that are info sourced at another\n> level.\n> \n> I also whacked the regression test cases around a lot. They seemed\n> to spend a lot of time on irrelevant combinations, while failing to\n> check the things that matter, namely whether collation-based pushdown\n> decisions are made correctly.\n> \n> \t\t\tregards, tom lane\n\nHi.\n\nOverall looks good.\nThe only thing I'm confused about is the T_CaseTestExpr case - how can it \nbe that CaseTestExpr collation doesn't match case_arg_cxt->collation?\nDo we need to inspect only case_arg_cxt->state? 
Can we assert that \ncollation == case_arg_cxt->collation?\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Fri, 30 Jul 2021 11:16:53 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Case expression pushdown" }, { "msg_contents": "Alexander Pyhalov <a.pyhalov@postgrespro.ru> writes:\n> The only thing I'm confused about is the T_CaseTestExpr case - how can it \n> be that CaseTestExpr collation doesn't match case_arg_cxt->collation?\n> Do we need to inspect only case_arg_cxt->state? Can we assert that \n> collation == case_arg_cxt->collation?\n\nPerhaps, but:\n\n(1) I'm disinclined to make this code look different from the otherwise-\nidentical coding elsewhere in foreign_expr_walker.\n\n(2) That would create a hard assumption that foreign_expr_walker's\nconclusions about the collation of a subexpression match those of\nassign_query_collations. I'm not quite sure I believe that (and if\nit's true, why aren't we just relying on exprCollation?). Anyway,\nif we're to have an assertion that it's true, it should be in some\nplace that's a lot less out-of-the-way than CaseTestExpr, because\nif the assumption gets violated it might be a long time till we\nnotice.\n\nSo I think we're best off to just write it the way I did, at least\nso far as this patch is concerned. If we want to rethink the way\ncollation gets calculated here, that would be material for a\nseparate patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 30 Jul 2021 10:17:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Case expression pushdown" }, { "msg_contents": "I wrote:\n> Alexander Pyhalov <a.pyhalov@postgrespro.ru> writes:\n>> Do we need to inspect only case_arg_cxt->state? 
Can we assert that \n>> collation == case_arg_cxt->collation?\n\n> Perhaps, but:\n> ...\n\nOh, actually there's a third point: the shakiest part of this logic\nis the assumption that we've correctly matched a CaseTestExpr to\nits source CaseExpr. Seeing that inlining and constant-folding can\nmash things to the point where a WHEN expression doesn't look like\n\"CaseTestExpr = RHS\", it's a little nervous-making to assume there\ncouldn't be another CASE in between. While there are no known problems\nof this sort, if it did happen I'd prefer this code to react as\n\"don't push down\", not as \"assertion failure\".\n\n(There's been speculation in the past about whether we could find\na more bulletproof representation of this kind of CaseExpr. We've\nnot succeeded at that yet though.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 30 Jul 2021 11:48:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Case expression pushdown" } ]
[ { "msg_contents": "Hi Hackers,\n\nMore than a year ago we submitted a patch that offered two primitives\n(ALIGN and NORMALIZE) to support the processing of temporal data with range\ntypes. During the ensuing discussion we decided to withdraw the original\npatch\nand to split it into smaller parts.\n\nIn the context of my BSc thesis, we started working and implementing a\nRange Merge Join (RMJ), which is key for most temporal operations. The RMJ\nis a useful operator in its own right and it greatly benefits any possible\ntemporal extension.\n\nWe have implemented the Range Merge Join algorithm by extending the\nexisting Merge Join to also support range conditions, i.e., BETWEEN-AND\nor @> (containment for range types). Range joins contain a containment\ncondition and may have (optional) equality conditions. For example the\nfollowing query joins employees with a department and work period with\nevents on a specific day for that department:\n\nSELECT emps.name, emps.dept, events.event, events.day\nFROM emps JOIN events ON emps.dept = events.dept\nAND events.day <@ emps.eperiod;\n\nThe resulting query plan is as follows:\n\n\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------\n Range Merge Join (cost=106.73..118.01 rows=3 width=100) (actual rows=6\nloops=1)\n Merge Cond: (emps.dept = events.dept)\n Range Cond: (events.day <@ emps.eperiod)\n -> Sort (cost=46.87..48.49 rows=650 width=96) (actual rows=5 loops=1)\n Sort Key: emps.dept, emps.eperiod\n Sort Method: quicksort Memory: 25kB\n -> Seq Scan on emps (cost=0.00..16.50 rows=650 width=96) (actual\nrows=5 loops=1)\n -> Sort (cost=59.86..61.98 rows=850 width=68) (actual rows=6 loops=1)\n Sort Key: events.dept, events.day\n Sort Method: quicksort Memory: 25kB\n -> Seq Scan on events (cost=0.00..18.50 rows=850 width=68)\n(actual rows=5 loops=1)\n Planning Time: 0.077 ms\n Execution Time: 0.092 ms\n(13 rows)\n\n\nExample queries and instances of tables can be 
found at the end of the mail.\n\nThe range merge join works with range types using <@ and also scalar data\ntypes\nusing \"a.ts BETWEEN b.ts AND b.te\" or \"b.ts <= a.ts AND a.ts <= b.te\".\nCurrently, PostgreSQL does not provide specialized join algorithms for range\nconditions (besides index nested loops), or Hash Join and Merge Joins that\nevaluate an equality condition only.\n\nOur idea is to have a separate range_cond besides the merge_cond for the\nMerge Join that stores the potential range conditions of a query. The state\ndiagram of the Merge Join is then extended to also take into consideration\nthe range_cond. See the simplified state diagram of the Range Merge Join as\nan extension of the Merge Join in the attachment. These additions besides a\nboolean check have no effect on the Merge Join when no range condition is\npresent.\n\nWe provide extensive testing results and further information, including the\nfull BSc Thesis (technical report), describing the implementation and tests\nin detail on http://tpg.inf.unibz.it/project-rmj and\nhttp://tpg.inf.unibz.it/downloads/rmj-report.pdf.\n\nWe performed several experiments and show that depending on the selectivity\nof\nthe range condition the range merge join outperforms existing execution\nalgorithms up to an order of magnitude. We found that the range merge join\nthat\nneeds to find range_cond from inequalities incurs only a very small\noverhead\nin planning time in some TPCH queries (see Table 5.3 in the technical\nreport)\nand in general only a very small overhead for a large number of joins or\nmany\ninequality conditions (see Figure 5.1). 
To check the overhead of our\nextension\nfor the traditional merge join execution time, we executed the TPCH queries\nusing the merge join (hash join disabled) and found no statistically\nsignificant difference (see Table 5.4).\n\nWe are looking forward to your feedback and any suggestions to improve the\npatch.\n\nBest Regards,\n\nThomas Mannhart\n\n\nAttachments: State Diagram and Patch\n\nOPEN POINTS AND TODOs:\n\n- Currently we do not consider parallelization\n- Not all cases for input sort orders are considered yet\n\nEXAMPLE QUERIES:\n\nThe first query uses a range condition using BETWEEN AND only and no\nequality condition.\n\n----------------------------------------------------------------------------------------------\n\nDROP TABLE IF EXISTS marks;\nDROP TABLE IF EXISTS grades;\n\nCREATE TABLE marks (name text, snumber numeric, mark numeric);\nCREATE TABLE grades (mmin numeric, mmax numeric, grade numeric);\n\nINSERT INTO marks (name, snumber, mark) VALUES\n('Anton', 1232, 23.5),\n('Thomas', 4356, 95),\n('Michael', 1125, 72),\n('Hans', 3425, 90);\n\nINSERT INTO grades (mmin, mmax, grade) VALUES\n(0.0, 18, 1),\n(18.5, 36, 2),\n(36.5, 54, 3),\n(54.5, 72, 4),\n(72.5, 90, 5),\n(90.5, 100, 6);\n\nEXPLAIN(ANALYZE, TIMING FALSE)\nSELECT marks.name, marks.snumber, grades.grade\nFROM marks JOIN grades ON marks.mark BETWEEN grades.mmin AND grades.mmax;\n\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------\n Range Merge Join (cost=93.74..920.13 rows=46944 width=96) (actual rows=16\nloops=1)\n Range Cond: ((marks.mark >= grades.mmin) AND (marks.mark <= grades.mmax))\n -> Sort (cost=46.87..48.49 rows=650 width=96) (actual rows=12 loops=1)\n Sort Key: grades.mmin\n Sort Method: quicksort Memory: 25kB\n -> Seq Scan on grades (cost=0.00..16.50 rows=650 width=96)\n(actual rows=12 loops=1)\n -> Sort (cost=46.87..48.49 rows=650 width=96) (actual rows=21 loops=1)\n Sort Key: marks.mark\n Sort Method: quicksort 
Memory: 25kB\n -> Seq Scan on marks (cost=0.00..16.50 rows=650 width=96)\n(actual rows=8 loops=1)\n Planning Time: 0.078 ms\n Execution Time: 0.068 ms\n(12 rows)\n\n----------------------------------------------------------------------------------------------\n\n\nThe second query uses a range and an equality condition and joins the\nrelations using contained in (<@).\n\n----------------------------------------------------------------------------------------------\n\nDROP TABLE IF EXISTS emps;\nDROP TABLE IF EXISTS events;\n\nCREATE TABLE emps (name text, dept text, eperiod daterange);\nCREATE TABLE events (event text, dept text, day date);\n\nINSERT INTO emps (name, dept, eperiod) VALUES\n('Anton', 'Sales', '(2020-01-01, 2020-03-31)'),\n('Thomas', 'Marketing', '(2020-01-01, 2020-06-30)'),\n('Michael', 'Marketing', '(2020-03-01, 2020-12-31)'),\n('Hans', 'Sales', '(2020-01-01, 2020-12-31)'),\n('Thomas', 'Accounting', '(2020-07-01, 2020-12-31)');\n\nINSERT INTO events (event, dept, day) VALUES\n('Fair CH', 'Marketing', '2020-03-05'),\n('Presentation', 'Sales', '2020-06-15'),\n('Fair IT', 'Marketing', '2020-08-03'),\n('Balance Report', 'Accounting', '2020-08-03'),\n('Product launch', 'Marketing', '2020-10-15');\n\nEXPLAIN(ANALYZE, TIMING FALSE)\nSELECT emps.name, emps.dept, events.event, events.day\nFROM emps JOIN events ON emps.dept = events.dept\nAND events.day <@ emps.eperiod;\n\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------\n Range Merge Join (cost=106.73..118.01 rows=3 width=100) (actual rows=6\nloops=1)\n Merge Cond: (emps.dept = events.dept)\n Range Cond: (events.day <@ emps.eperiod)\n -> Sort (cost=46.87..48.49 rows=650 width=96) (actual rows=5 loops=1)\n Sort Key: emps.dept, emps.eperiod\n Sort Method: quicksort Memory: 25kB\n -> Seq Scan on emps (cost=0.00..16.50 rows=650 width=96) (actual\nrows=5 loops=1)\n -> Sort (cost=59.86..61.98 rows=850 width=68) (actual rows=6 loops=1)\n Sort Key: 
events.dept, events.day\n Sort Method: quicksort Memory: 25kB\n -> Seq Scan on events (cost=0.00..18.50 rows=850 width=68)\n(actual rows=5 loops=1)\n Planning Time: 0.077 ms\n Execution Time: 0.092 ms\n(13 rows)\n\n----------------------------------------------------------------------------------------------", "msg_date": "Wed, 9 Jun 2021 17:05:35 +0200", "msg_from": "Thomas <thomasmannhart97@gmail.com>", "msg_from_op": true, "msg_subject": "Patch: Range Merge Join" }, { "msg_contents": "On Thu, 10 Jun 2021 at 03:05, Thomas <thomasmannhart97@gmail.com> wrote:\n> We have implemented the Range Merge Join algorithm by extending the\n> existing Merge Join to also support range conditions, i.e., BETWEEN-AND\n> or @> (containment for range types).\n\nIt shouldn't be a blocker for you, but just so you're aware, there was\na previous proposal for this in [1] and a patch in [2]. I've included\nJeff here just so he's aware of this. Jeff may wish to state his\nintentions with his own patch. It's been a few years now.\n\nI only just glanced over the patch. I'd suggest getting rid of the /*\nThomas */ comments. We use git, so if you need an audit trail about\nchanges then you'll find it in git blame. If you have those for an\ninternal audit trail then you should consider using git. No committer\nwould commit those to PostgreSQL, so they might as well disappear.\n\nFor further review, please add the patch to the July commitfest [3].\nWe should be branching for pg15 sometime before the start of July.\nThere will be more focus on new patches around that time. Further\ndetails in [4].\n\nAlso, I see this is your first post to this list, so welcome, and\nthank you for the contribution. Also, just to set expectations;\npatches like this almost always take a while to get into shape for\nPostgreSQL. Please expect a lot of requests to change things. That's\nfairly standard procedure. 
The process often drags on for months and\nin some less common cases, years.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/6227.1334559170%40sss.pgh.pa.us#82c771950ba486dec911923a5e910000\n[2] https://www.postgresql.org/message-id/flat/CAMp0ubfwAFFW3O_NgKqpRPmm56M4weTEXjprb2gP_NrDaEC4Eg%40mail.gmail.com\n[3] https://commitfest.postgresql.org/33/\n[4] https://wiki.postgresql.org/wiki/CommitFest\n\n\n", "msg_date": "Thu, 10 Jun 2021 15:09:54 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch: Range Merge Join" }, { "msg_contents": "Thank you for the feedback.\nI removed the redundant comments from the patch and added this thread to\nthe July CF [1].\n\nBest Regards,\nThomas Mannhart\n\n[1] https://commitfest.postgresql.org/33/3160/\n\nOn Thu, Jun 10, 2021 at 05:10, David Rowley <\ndgrowleyml@gmail.com> wrote:\n\n> On Thu, 10 Jun 2021 at 03:05, Thomas <thomasmannhart97@gmail.com> wrote:\n> > We have implemented the Range Merge Join algorithm by extending the\n> > existing Merge Join to also support range conditions, i.e., BETWEEN-AND\n> > or @> (containment for range types).\n>\n> It shouldn't be a blocker for you, but just so you're aware, there was\n> a previous proposal for this in [1] and a patch in [2]. I've included\n> Jeff here just so he's aware of this. Jeff may wish to state his\n> intentions with his own patch. It's been a few years now.\n>\n> I only just glanced over the patch. I'd suggest getting rid of the /*\n> Thomas */ comments. We use git, so if you need an audit trail about\n> changes then you'll find it in git blame. If you have those for an\n> internal audit trail then you should consider using git. 
No committer\n> would commit those to PostgreSQL, so they might as well disappear.\n>\n> For further review, please add the patch to the July commitfest [3].\n> We should be branching for pg15 sometime before the start of July.\n> There will be more focus on new patches around that time. Further\n> details in [4].\n>\n> Also, I see this is your first post to this list, so welcome, and\n> thank you for the contribution. Also, just to set expectations;\n> patches like this almost always take a while to get into shape for\n> PostgreSQL. Please expect a lot of requests to change things. That's\n> fairly standard procedure. The process often drags on for months and\n> in some less common cases, years.\n>\n> David\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/6227.1334559170%40sss.pgh.pa.us#82c771950ba486dec911923a5e910000\n> [2]\n> https://www.postgresql.org/message-id/flat/CAMp0ubfwAFFW3O_NgKqpRPmm56M4weTEXjprb2gP_NrDaEC4Eg%40mail.gmail.com\n> [3] https://commitfest.postgresql.org/33/\n> [4] https://wiki.postgresql.org/wiki/CommitFest\n>", "msg_date": "Thu, 10 Jun 2021 11:04:08 +0200", "msg_from": "Thomas <thomasmannhart97@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Patch: Range Merge Join" }, { "msg_contents": "On Thu, 2021-06-10 at 15:09 +1200, David Rowley wrote:\n> It shouldn't be a blocker for you, but just so you're aware, there\n> was\n> a previous proposal for this in [1] and a patch in [2]. I've included\n> Jeff here just so he's aware of this. Jeff may wish to state his\n> intentions with his own patch. It's been a few years now.\n\nGreat, thank you for working on this!\n\nI'll start with the reason I set the work down before: it did not work\nwell with multiple join keys. That might be fine, but I also started\nthinking it was specialized enough that I wanted to look into doing it\nas an extension using the CustomScan mechanism.\n\nDo you have any solution to working better with multiple join keys? 
And\ndo you have thoughts on whether it would be a good candidate for the\nCustomScan extension mechanism, which would make it easier to\nexperiment with?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 10 Jun 2021 19:14:32 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Patch: Range Merge Join" }, { "msg_contents": "On Thu, Jun 10, 2021 at 07:14:32PM -0700, Jeff Davis wrote:\n> On Thu, 2021-06-10 at 15:09 +1200, David Rowley wrote:\n> > It shouldn't be a blocker for you, but just so you're aware, there\n> > was\n> > a previous proposal for this in [1] and a patch in [2]. I've included\n> > Jeff here just so he's aware of this. Jeff may wish to state his\n> > intentions with his own patch. It's been a few years now.\n> \n> Great, thank you for working on this!\n> \n> I'll start with the reason I set the work down before: it did not work\n> well with multiple join keys. That might be fine, but I also started\n> thinking it was specialized enough that I wanted to look into doing it\n> as an extension using the CustomScan mechanism.\n> \n> Do you have any solution to working better with multiple join keys? And\n> do you have thoughts on whether it would be a good candidate for the\n> CustomScan extension mechanism, which would make it easier to\n> experiment with?\n> \n\nHi,\n\nIt seems this has been stalled since Jun 2021. 
I intend to mark this as\nRwF unless someone speaks up in the next hour or so.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Mon, 4 Oct 2021 16:27:54 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: Patch: Range Merge Join" }, { "msg_contents": "> On Mon, Oct 04, 2021 at 04:27:54PM -0500, Jaime Casanova wrote:\n>> On Thu, Jun 10, 2021 at 07:14:32PM -0700, Jeff Davis wrote:\n>> > \n>> > I'll start with the reason I set the work down before: it did not work\n>> > well with multiple join keys. That might be fine, but I also started\n>> > thinking it was specialized enough that I wanted to look into doing it\n>> > as an extension using the CustomScan mechanism.\n>> > \n>> > Do you have any solution to working better with multiple join keys? And\n>> > do you have thoughts on whether it would be a good candidate for the\n>> > CustomScan extension mechanism, which would make it easier to\n>> > experiment with?\n>> > \n>> \n>> Hi,\n>> \n>> It seems this has been stalled since Jun 2021. I intend to mark this as\n>> RwF unless someone speaks up in the next hour or so.\n>> \n\nThomas <thomasmannhart97@gmail.com> wrote to me:\n\n> Hi,\n> \n> I registered this patch for the commitfest in July. It had not been reviewed and moved to the next CF. I would still like to submit it.\n> \n> Regards,\n> Thomas\n>\n\nJust for clarification, RwF doesn't imply rejection of the patch.\nNevertheless, given that there has been no real review I will mark this\npatch as \"Waiting on Author\" and move it to the next CF.\n\nMeanwhile, cfbot (aka http://commitfest.cputube.org) says this doesn't\ncompile. 
Here is a little patch to fix the compilation errors, after\nthat it passes all tests in make check-world.\n\nAlso attached a rebased version of your patch with the fixes so we turn\ncfbot entry green again\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL", "msg_date": "Mon, 4 Oct 2021 19:30:34 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: Patch: Range Merge Join" }, { "msg_contents": "Dear all,\nthanks for the feedback!\n\nWe had a closer look at the previous patches and the CustomScan\ninfrastructure.\n\nCompared to the previous patch, we do not (directly) focus on joins\nwith the overlap (&&) condition in this patch. Instead we consider\njoins with containment (@>) between a range and an element, and joins\nwith conditions over scalars of the form \"right.element BETWEEN\nleft.start AND left.end\", and more generally left.start >(=)\nright.element AND right.element <(=) left.end. We call such conditions\nrange conditions and these conditions can be combined with equality\nconditions in the Range Merge Join.\n\nThe Range Merge Join can use (optional) equality conditions and one\nrange condition of the form shown above. In this case the inputs are\nsorted first by the attributes used for equality and then one input by\nthe range (or start in the case of scalars) and the other input by the\nelement. The Range Merge Join is then a simple extension of the Merge\nJoin that in addition to the (optional) equality attributes also uses\nthe range condition in the merge join states. This is similar to an\nindex-nested loop with scalars for cases when the relation containing\nthe element has an index on the equality attributes followed by the\nelement. 
The Range Merge Join uses sorting and thus does not require\nthe index for this purpose and performs better.\n\nThe patch uses the optimizer estimates to evaluate if the Range Merge\nJoin is beneficial as compared to other execution strategies, but when\nno equality attributes are present, it becomes the only efficient\noption for the above range conditions. If a join contains multiple\nrange conditions, then based on the estimates the most effective\nstrategy is chosen for the Range Merge Join.\n\nAlthough we do not directly focus on joins with the overlap (&&)\ncondition between two ranges, we show in [1] that these joins can be\nevaluated using the union (UNION ALL) of two joins with a range\ncondition, where intuitively, one tests that the start of one input\nfalls within the range of the other and vice versa. We evaluated this\nusing regular (B-tree) indices and compare it to joins with the\noverlap (&&) condition using GiST, SP-GiST and others, and found that\nit performs better. The Range Merge Join would improve this further\nand would not require the creation of an index.\n\nWe did not consider an implementation as a CustomScan, as we feel the\njoin is rather general, can be implemented using a small extension of\nthe existing Merge Join, and would require a substantial duplication\nof the Merge Join code.\n\nKind regards,\nThomas, Anton, Johann, Michael, Peter\n\n[1] https://doi.org/10.1007/s00778-021-00692-3 (open access)\n\n\nAm Di., 5. Okt. 2021 um 02:30 Uhr schrieb Jaime Casanova <\njcasanov@systemguards.com.ec>:\n\n> > On Mon, Oct 04, 2021 at 04:27:54PM -0500, Jaime Casanova wrote:\n> >> On Thu, Jun 10, 2021 at 07:14:32PM -0700, Jeff Davis wrote:\n> >> >\n> >> > I'll start with the reason I set the work down before: it did not work\n> >> > well with multiple join keys. 
That might be fine, but I also started\n> >> > thinking it was specialized enough that I wanted to look into doing it\n> >> > as an extension using the CustomScan mechanism.\n> >> >\n> >> > Do you have any solution to working better with multiple join keys?\n> And\n> >> > do you have thoughts on whether it would be a good candidate for the\n> >> > CustomScan extension mechanism, which would make it easier to\n> >> > experiment with?\n> >> >\n> >>\n> >> Hi,\n> >>\n> >> It seems this has been stalled since jun-2021. I intend mark this as\n> >> RwF unless someone speaks in the next hour or so.\n> >>\n>\n> Thomas <thomasmannhart97@gmail.com> wrote me:\n>\n> > Hi,\n> >\n> > I registered this patch for the commitfest in july. It had not been\n> reviewed and moved to the next CF. I still like to submit it.\n> >\n> > Regards,\n> > Thomas\n> >\n>\n> Just for clarification RwF doesn't imply reject of the patch.\n> Nevertheless, given that there has been no real review I will mark this\n> patch as \"Waiting on Author\" and move it to the next CF.\n>\n> Meanwhile, cfbot (aka http://commitfest.cputube.org) says this doesn't\n> compile. 
Here is a little patch to fix the compilation errors, after\n> that it passes all tests in make check-world.\n>\n> Also attached a rebased version of your patch with the fixes so we turn\n> cfbot entry green again\n>\n> --\n> Jaime Casanova\n> Director de Servicios Profesionales\n> SystemGuards - Consultores de PostgreSQL\n>", "msg_date": "Wed, 10 Nov 2021 15:03:55 +0100", "msg_from": "Thomas <thomasmannhart97@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Patch: Range Merge Join" }, { "msg_contents": "This patch fails to compile due to an incorrect function name in an assertion:\n\n nodeMergejoin.c:297:9: warning: implicit declaration of function 'list_legth' is invalid in C99 [-Wimplicit-function-declaration]\n Assert(list_legth(node->rangeclause) < 3);\n ^\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 17 Nov 2021 15:03:32 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Patch: Range Merge Join" }, { "msg_contents": "Thank you for the feedback and sorry for the oversight. I fixed the bug and\nattached a new version of the patch.\n\nKind Regards, Thomas\n\nAm Mi., 17. Nov. 2021 um 15:03 Uhr schrieb Daniel Gustafsson <\ndaniel@yesql.se>:\n\n> This patch fails to compile due to an incorrect function name in an\n> assertion:\n>\n> nodeMergejoin.c:297:9: warning: implicit declaration of function\n> 'list_legth' is invalid in C99 [-Wimplicit-function-declaration]\n> Assert(list_legth(node->rangeclause) < 3);\n> ^\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>", "msg_date": "Wed, 17 Nov 2021 15:45:26 +0100", "msg_from": "Thomas <thomasmannhart97@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Patch: Range Merge Join" }, { "msg_contents": "On 11/17/21 15:45, Thomas wrote:\n> Thank you for the feedback and sorry for the oversight. I fixed the bug \n> and attached a new version of the patch.\n> \n> Kind Regards, Thomas\n> \n> Am Mi., 17. Nov. 
2021 um 15:03 Uhr schrieb Daniel Gustafsson \n> <daniel@yesql.se <mailto:daniel@yesql.se>>:\n> \n> This patch fails to compile due to an incorrect function name in an\n> assertion:\n> \n>   nodeMergejoin.c:297:9: warning: implicit declaration of function\n> 'list_legth' is invalid in C99 [-Wimplicit-function-declaration]\n>   Assert(list_legth(node->rangeclause) < 3);\n>\n\nThat still doesn't compile with asserts, because MJCreateRangeData has\n\n Assert(list_length(node->rangeclause) < 3);\n\nbut there's no 'node' variable :-/\n\n\nI took a brief look at the patch, and I think there are two main issues \npreventing it from moving forward.\n\n1) no tests\n\nThere's not a *single* regression test exercising the new code, so even \nafter adding Assert(false) to MJCreateRangeData() tests pass just fine. \nClearly, that needs to change.\n\n2) lack of comments\n\nThe patch adds a bunch of functions, but it does not really explain what \nthe functions do (unlike the various surrounding functions). Even if I \ncan work out what the functions do, it's much harder to determine what \nthe \"contract\" is (i.e. what assumptions the function do and what is \nguaranteed).\n\nSimilarly, the patch modifies/reworks large blocks of executor code, \nwithout updating the comments describing what the block does.\n\nSee 0002 for various places that I think are missing comments.\n\n\nAside from that, I have a couple minor comments:\n\n3) I'm not quite sure I like \"Range Merge Join\" to be honest. It's still \na \"Merge Join\" pretty much. What about ditching the \"Range\"? There'll \nstill be \"Range Cond\" key, which should be good enough I think.\n\n4) Some minor whitespace issues (tabs vs. spaces). 
See 0002.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 17 Nov 2021 23:28:43 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Patch: Range Merge Join" }, { "msg_contents": "Hi,\n\nOn Wed, Nov 17, 2021 at 11:28:43PM +0100, Tomas Vondra wrote:\n> On 11/17/21 15:45, Thomas wrote:\n> > Thank you for the feedback and sorry for the oversight. I fixed the bug\n> > and attached a new version of the patch.\n> > \n> > Kind Regards, Thomas\n> > \n> > Am Mi., 17. Nov. 2021 um 15:03�Uhr schrieb Daniel Gustafsson\n> > <daniel@yesql.se <mailto:daniel@yesql.se>>:\n> > \n> > This patch fails to compile due to an incorrect function name in an\n> > assertion:\n> > \n> > � nodeMergejoin.c:297:9: warning: implicit declaration of function\n> > 'list_legth' is invalid in C99 [-Wimplicit-function-declaration]\n> > � Assert(list_legth(node->rangeclause) < 3);\n> > \n> \n> That still doesn't compile with asserts, because MJCreateRangeData has\n> \n> Assert(list_length(node->rangeclause) < 3);\n> \n> but there's no 'node' variable :-/\n> \n> \n> I took a brief look at the patch, and I think there are two main issues\n> preventing it from moving forward.\n> \n> 1) no tests\n> \n> 2) lack of comments\n> \n> 3) I'm not quite sure I like \"Range Merge Join\" to be honest. It's still a\n> \"Merge Join\" pretty much. What about ditching the \"Range\"? There'll still be\n> \"Range Cond\" key, which should be good enough I think.\n> \n> 4) Some minor whitespace issues (tabs vs. spaces). See 0002.\n\nIt's been 2 months since Tomas posted that review.\n\nThomas, do you plan to work on that patch during this commitfest?\n\n\n", "msg_date": "Mon, 17 Jan 2022 15:39:33 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch: Range Merge Join" } ]
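[Editor's note: the merge strategy Thomas describes in this thread, sort one input by the range's start (after any equality keys) and the other by the element, then merge, emitting pairs where start <= element <= end, can be sketched in a few lines. Python is used here purely for illustration; the actual patch is C code extending nodeMergejoin.c, and `range_merge_join` below is a hypothetical helper, not part of the patch.]

```python
def range_merge_join(ranges, points):
    """Join (start, end, tag) ranges with (value, tag) points where
    start <= value <= end, i.e. value BETWEEN start AND end.
    Illustrative sketch of the sort-and-merge strategy: both inputs
    are sorted, then combined in one forward pass."""
    ranges = sorted(ranges, key=lambda r: r[0])   # by range start
    points = sorted(points, key=lambda p: p[0])   # by element value
    out, i, active = [], 0, []
    for value, ptag in points:
        # "open" every range whose start has been reached
        while i < len(ranges) and ranges[i][0] <= value:
            active.append(ranges[i])
            i += 1
        # a range whose end lies behind this point can never match a
        # later (larger) point either, so it is pruned for good
        active = [r for r in active if r[1] >= value]
        # everything still active satisfies start <= value <= end
        out.extend((rtag, ptag) for _, _, rtag in active)
    return out
```

The prune-then-emit step plays the role that key-group restarts play in a plain merge join: each point only scans ranges that can still match it.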
[ { "msg_contents": "Is there a way to get ‘character expansions’ with the ICU collations that are available in PostgreSQL?\r\n\r\nUsing this example on a database with UTF-8 encoding:\r\n\r\nCREATE COLLATION CI_AS (provider = icu, locale=’utf8@colStrength=secondary’, deterministic = false);\r\n\r\nCREATE TABLE MyTable3\r\n(\r\n    ID INT IDENTITY(1, 1),\r\n    Comments VARCHAR(100)\r\n)\r\n\r\n\r\nINSERT INTO MyTable3 (Comments) VALUES ('strasse')\r\nINSERT INTO MyTable3 (Comments) VALUES ('straße')\r\n\r\nSELECT * FROM MyTable3 WHERE Comments COLLATE CI_AS = 'strasse'\r\nSELECT * FROM MyTable3 WHERE Comments COLLATE CI_AS = 'straße'\r\n\r\nWe would like to control whether each SELECT statement finds both records (because the sort key of ‘ß’ equals the sort key of ‘ss’), or whether each SELECT statement finds just one record. ICU supports character expansions and other tailorings that support advanced features like changing the collation order for specific characters, and while CREATE COLLATION doesn’t expose tailoring directives that do either character expansion or specific character reorderings (other than @colReorder to reorder entire categories of characters such as Greek vs Roman), it seems to be the expectation that many <language> <country> pairs such as en_US should already cause ‘ß’ to match ‘ss’, not just to have them sort close together (which they do).\r\n\r\nIf PostgreSQL supports character expansion with ICU collations, can someone provide an example where 'strasse' = 'straße'?", "msg_date": "Wed, 9 Jun 2021 15:31:33 +0000", "msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>", "msg_from_op": true, "msg_subject": "Character expansion with ICU collations" }, { "msg_contents": "On 09.06.21 17:31, Finnerty, Jim wrote:\n> CREATE COLLATION CI_AS (provider = icu, \n> locale=’utf8@colStrength=secondary’, deterministic = false);\n> \n> CREATE TABLE MyTable3\n> (\n> \n>     ID INT IDENTITY(1, 1),\n>     Comments VARCHAR(100)\n> \n> )\n> \n> INSERT INTO MyTable3 (Comments) VALUES ('strasse')\n> INSERT INTO MyTable3 (Comments) VALUES ('straße')\n> SELECT * FROM MyTable3 WHERE Comments COLLATE CI_AS = 'strasse'\n> SELECT * FROM MyTable3 WHERE Comments COLLATE CI_AS = 'straße'\n> \n> We would like to control whether each SELECT statement finds both \n> records (because the sort key of ‘ß’ equals the sort key of ‘ss’), or \n> whether each SELECT statement finds just one record.\n\nYou can have these queries return both rows if you 
use an \naccent-ignoring collation, like this example in the documentation:\n\nCREATE COLLATION ignore_accents (provider = icu, locale = \n'und-u-ks-level1-kc-true', deterministic = false);\n\n\n", "msg_date": "Wed, 9 Jun 2021 19:54:54 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Character expansion with ICU collations" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> You can have these queries return both rows if you use an \n> accent-ignoring collation, like this example in the documentation:\n\n> CREATE COLLATION ignore_accents (provider = icu, locale = \n> 'und-u-ks-level1-kc-true', deterministic = false);\n\nIt occurs to me to wonder whether texteq() still obeys transitivity\nwhen using such a collation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Jun 2021 13:58:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Character expansion with ICU collations" } ]
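[Editor's note: the ‘ß’ matching ‘ss’ behaviour Jim asks about is a standard Unicode expansion, and the same mapping appears in Unicode full case folding, which is easy to observe outside the database. The Python check below illustrates only the character expansion itself; it is not the ICU collation path PostgreSQL uses.]

```python
def ci_equal(a: str, b: str) -> bool:
    # Unicode full case folding (per the Unicode CaseFolding data)
    # expands 'ß' to 'ss', so a folded comparison treats the two
    # spellings as equal. Illustration only, not ICU collation.
    return a.casefold() == b.casefold()

print(ci_equal('straße', 'strasse'))   # True: 'ß' folds to 'ss'
print(ci_equal('straße', 'STRASSE'))   # True
print('straße'.lower())                # 'straße': simple lowercasing does not expand
```

Note the contrast with `str.lower()`, which applies only simple case mapping and leaves 'ß' unchanged, so it would not make the two spellings compare equal.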
[ { "msg_contents": "Good day,\n\nI'm trying to set up a chef recipe to reserve enough HugePages on a linux\nsystem for our PG servers. A given VM will only host one PG cluster and\nthat will be the only thing on that host that uses HugePages. Blogs that\nI've seen suggest that it would be as simple as taking the shared_buffers\nsetting and dividing that by 2MB (huge page size), however I found that I\nneeded some more.\n\nIn my test case, shared_buffers is set to 4003MB (calculated by chef) but\nPG failed to start until I reserved a few hundred more MB. When I checked\nVmPeak, it was 4321MB, so I ended up having to reserve over 2161 huge\npages, over a hundred more than I had originally thought.\n\nI'm told other factors contribute to this additional memory requirement,\nsuch as max_connections, wal_buffers, etc. I'm wondering if anyone has been\nable to come up with a reliable method for determining the HugePages\nrequirements for a PG cluster based on the GUC values (that would be known\nat deployment time).\n\nThanks,\nDon.\n\n-- \nDon Seiler\nwww.seiler.us", "msg_date": "Wed, 9 Jun 2021 11:41:52 -0500", "msg_from": "Don Seiler <don@seiler.us>", "msg_from_op": true, "msg_subject": "Estimating HugePages Requirements?" }, { "msg_contents": "On Thu, Jun 10, 2021 at 12:42 AM Don Seiler <don@seiler.us> wrote:\n>\n> I'm told other factors contribute to this additional memory requirement, such as max_connections, wal_buffers, etc. I'm wondering if anyone has been able to come up with a reliable method for determining the HugePages requirements for a PG cluster based on the GUC values (that would be known at deployment time).\n\nIt also depends on modules like pg_stat_statements and their own\nconfiguration. I think that you can find the required size that your\ncurrent configuration will allocate with:\n\nSELECT sum(allocated_size) FROM pg_shmem_allocations ;\n\n\n", "msg_date": "Thu, 10 Jun 2021 01:23:28 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" 
}, { "msg_contents": "Please ignore, if you have read the blog below, if not, at the end of it\nthere is a github repo which has mem specs for various tpcc benchmarks.\nOf course, your workload expectations may vary from the test scenarios used,\nbut just in case.\n\nSettling the Myth of Transparent HugePages for Databases - Percona Database\nPerformance Blog\n<https://www.percona.com/blog/2019/03/06/settling-the-myth-of-transparent-hugepages-for-databases/>", "msg_date": "Thu, 10 Jun 2021 00:15:40 +0530", "msg_from": "Vijaykumar Jain <vijaykumarjain.github@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Wed, Jun 9, 2021 at 1:45 PM Vijaykumar Jain <\nvijaykumarjain.github@gmail.com> wrote:\n\n> Please ignore, if you have read the blog below, if not, at the end of it\n> there is a github repo which has mem specs for various tpcc benchmarks.\n> Ofcourse, your workload expectations may vary from the test scenarios used,\n> but just in case.\n>\n> Settling the Myth of Transparent HugePages for Databases - Percona\n> Database Performance Blog\n> <https://www.percona.com/blog/2019/03/06/settling-the-myth-of-transparent-hugepages-for-databases/>\n>\n\nThat blog post is about transparent huge pages, which is different than\nHugePages I'm looking at here. 
We already disable THP as a matter of course.\n\n-- \nDon Seiler\nwww.seiler.us", "msg_date": "Wed, 9 Jun 2021 13:52:19 -0500", "msg_from": "Don Seiler <don@seiler.us>", "msg_from_op": true, "msg_subject": "Re: Estimating HugePages Requirements?" 
}, { "msg_contents": "On Wed, Jun 9, 2021 at 7:23 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Jun 10, 2021 at 12:42 AM Don Seiler <don@seiler.us> wrote:\n> >\n> > I'm told other factors contribute to this additional memory requirement, such as max_connections, wal_buffers, etc. I'm wondering if anyone has been able to come up with a reliable method for determining the HugePages requirements for a PG cluster based on the GUC values (that would be known at deployment time).\n>\n> It also depends on modules like pg_stat_statements and their own\n> configuration. I think that you can find the required size that your\n> current configuration will allocate with:\n>\n> SELECT sum(allocated_size) FROM pg_shmem_allocations ;\n\nI wonder how hard it would be to for example expose that through a\ncommandline switch or tool.\n\nThe point being that in order to run the query you suggest, the server\nmust already be running. There is no way to use this to estimate the\nsize that you're going to need after changing the value of\nshared_buffers, which is a very common scenario. (You can change it,\nrestart without using huge pages because it fails, run that query,\nchange huge pages, and restart again -- but that's not exactly...\nconvenient)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Wed, 9 Jun 2021 21:07:09 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" 
}, { "msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> I wonder how hard it would be to for example expose that through a\n> commandline switch or tool.\n\nJust try to start the server and see if it complains.\nFor instance, with shared_buffers=10000000 I get\n\n2021-06-09 15:08:56.821 EDT [1428121] FATAL: could not map anonymous shared memory: Cannot allocate memory\n2021-06-09 15:08:56.821 EDT [1428121] HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 83720568832 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.\n\nOf course, if it *does* start, you can do the other thing.\n\nAdmittedly, we could make that easier somehow; but if it took\n25 years for somebody to ask for this, I'm not sure it's\nworth creating a feature to make it a shade easier.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Jun 2021 15:15:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Wed, Jun 9, 2021 at 9:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Magnus Hagander <magnus@hagander.net> writes:\n> > I wonder how hard it would be to for example expose that through a\n> > commandline switch or tool.\n>\n> Just try to start the server and see if it complains.\n> For instance, with shared_buffers=10000000 I get\n>\n> 2021-06-09 15:08:56.821 EDT [1428121] FATAL: could not map anonymous shared memory: Cannot allocate memory\n> 2021-06-09 15:08:56.821 EDT [1428121] HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. 
To reduce the request size (currently 83720568832 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.\n>\n> Of course, if it *does* start, you can do the other thing.\n\nWell, I have to *stop* the existing one first, most likely, otherwise\nthere won't be enough huge pages (or indeed memory) available. And if\nthen doesn't start, you're looking at extended downtime.\n\nYou can automate this to minimize it (set the value in the conf, stop\nold, start new, if new doesn't start then stop new, reconfigure, start\nold again), but it's *far* from friendly.\n\nThis process works when you're setting up a brand new server with\nnobody using it. It doesn't work well, or at all, when you actually\nhave active users on it..\n\n\n> Admittedly, we could make that easier somehow; but if it took\n> 25 years for somebody to ask for this, I'm not sure it's\n> worth creating a feature to make it a shade easier.\n\nWe haven't had huge page support for 25 years, \"only\" since 9.4 so\nabout 7 years.\n\nAnd for every year that passes, huge pages become more interesting in\nthat in general memory sizes increase so the payoff of using them is\nincreased.\n\nUsing huge pages *should* be a trivial improvement to set up. But it's\nin my experience complicated enough that many just skip it simply for\nthat reason.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Wed, 9 Jun 2021 21:23:50 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" 
}, { "msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Wed, Jun 9, 2021 at 9:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Just try to start the server and see if it complains.\n\n> Well, I have to *stop* the existing one first, most likely, otherwise\n> there won't be enough huge pages (or indeed memory) available.\n\nI'm not following. If you have a production server running, its\npg_shmem_allocations total should already be a pretty good guide\nto what you need to configure HugePages for. You need to know to\nround that up, of course --- but if you aren't building a lot of\nslop into the HugePages configuration anyway, you'll get burned\ndown the road.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Jun 2021 15:28:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Wed, Jun 9, 2021 at 9:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Magnus Hagander <magnus@hagander.net> writes:\n> > On Wed, Jun 9, 2021 at 9:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Just try to start the server and see if it complains.\n>\n> > Well, I have to *stop* the existing one first, most likely, otherwise\n> > there won't be enough huge pages (or indeed memory) available.\n>\n> I'm not following. If you have a production server running, its\n> pg_shmem_allocations total should already be a pretty good guide\n> to what you need to configure HugePages for. 
You need to know to\n> round that up, of course --- but if you aren't building a lot of\n> slop into the HugePages configuration anyway, you'll get burned\n> down the road.\n\nI'm talking about the case when you want to *change* the value for\nshared_buffers (or other parameters that would change the amount of\nrequired huge pages), on a system where you're using huge pages.\npg_shmem_allocations will tell you what you need with the current\nvalue, not what you need with the new value.\n\nBut yes, you can do some math around it and make a well educated\nguess. But it would be very convenient to have the system able to do\nthat for you.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Wed, 9 Jun 2021 21:30:52 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "moving to pgsql-hackers@\r\n\r\nOn 6/9/21, 9:41 AM, \"Don Seiler\" <don@seiler.us> wrote:\r\n> I'm trying to set up a chef recipe to reserve enough HugePages on a\r\n> linux system for our PG servers. A given VM will only host one PG\r\n> cluster and that will be the only thing on that host that uses\r\n> HugePages. Blogs that I've seen suggest that it would be as simple\r\n> as taking the shared_buffers setting and dividing that by 2MB (huge\r\n> page size), however I found that I needed some more.\r\n>\r\n> In my test case, shared_buffers is set to 4003MB (calculated by\r\n> chef) but PG failed to start until I reserved a few hundred more MB.\r\n> When I checked VmPeak, it was 4321MB, so I ended up having to\r\n> reserve over 2161 huge pages, over a hundred more than I had\r\n> originally thought.\r\n>\r\n> I'm told other factors contribute to this additional memory\r\n> requirement, such as max_connections, wal_buffers, etc. 
I'm\r\n> wondering if anyone has been able to come up with a reliable method\r\n> for determining the HugePages requirements for a PG cluster based on\r\n> the GUC values (that would be known at deployment time).\r\n\r\nIn RDS, we've added a pg_ctl option that returns the amount of shared\r\nmemory required. Basically, we start up postmaster just enough to get\r\nan accurate value from CreateSharedMemoryAndSemaphores() and then shut\r\ndown. The patch is quite battle-tested at this point (we first\r\nstarted using it in 2017, and we've been enabling huge pages by\r\ndefault since v10). I'd be happy to clean it up and submit it for\r\ndiscussion in pgsql-hackers@ if there is interest.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 9 Jun 2021 20:52:47 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "\n\n> On Jun 9, 2021, at 1:52 PM, Bossart, Nathan <bossartn@amazon.com> wrote:\n> \n> I'd be happy to clean it up and submit it for\n> discussion in pgsql-hackers@ if there is interest.\n\nYes, I'd like to see it. Thanks for offering.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 9 Jun 2021 15:50:52 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "I agree, it's confusing for many and that confusion arises from the fact\nthat you usually talk of shared_buffers in MB or GB whereas hugepages have\nto be configured in units of 2mb. But once they understand they realize it's\npretty simple.\n\nDon, we have experienced the same not just with postgres but also with\noracle. I haven't been able to get to the root of it, but what we usually do\nis, we add another 100-200 pages and that works for us. If the SGA or\nshared_buffers is high eg 96gb, then we add 250-500 pages. 
Those few\nhundred MBs may be wasted (because the moment you configure hugepages, the\noperating system considers it as used and does not use it any more) but\nnowadays, servers have 64 or 128 gb RAM easily and wasting that 500mb to\n1gb does not hurt really.\n\nHTH\n\nOn Thu, 10 Jun 2021 at 1:01 AM, Magnus Hagander <magnus@hagander.net> wrote:\n\n> On Wed, Jun 9, 2021 at 9:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Magnus Hagander <magnus@hagander.net> writes:\n> > > On Wed, Jun 9, 2021 at 9:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> Just try to start the server and see if it complains.\n> >\n> > > Well, I have to *stop* the existing one first, most likely, otherwise\n> > > there won't be enough huge pages (or indeed memory) available.\n> >\n> > I'm not following. If you have a production server running, its\n> > pg_shmem_allocations total should already be a pretty good guide\n> > to what you need to configure HugePages for. You need to know to\n> > round that up, of course --- but if you aren't building a lot of\n> > slop into the HugePages configuration anyway, you'll get burned\n> > down the road.\n>\n> I'm talking about the case when you want to *change* the value for\n> shared_buffers (or other parameters that would change the amount of\n> required huge pages), on a system where you're using huge pages.\n> pg_shmem_allocations will tell you what you need with the current\n> value, not what you need with the new value.\n>\n> But yes, you can do some math around it and make a well educated\n> guess. But it would be very convenient to have the system able to do\n> that for you.\n>\n> --\n> Magnus Hagander\n> Me: https://www.hagander.net/\n> Work: https://www.redpill-linpro.com/\n>\n>\n>\n\n
", "msg_date": "Thu, 10 Jun 2021 07:33:39 +0530", "msg_from": "P C <puravc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 6/9/21, 3:51 PM, \"Mark Dilger\" <mark.dilger@enterprisedb.com> wrote:\r\n>> On Jun 9, 2021, at 1:52 PM, Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>>\r\n>> I'd be happy to clean it up and submit it for\r\n>> discussion in pgsql-hackers@ if there is interest.\r\n>\r\n> Yes, I'd like to see it. Thanks for offering.\r\n\r\nHere's the general idea. It still needs a bit of polishing, but I'm\r\nhoping this is enough to spark some discussion on the approach.\r\n\r\nNathan", "msg_date": "Thu, 10 Jun 2021 03:09:24 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Wed, Jun 9, 2021, 21:03 P C <puravc@gmail.com> wrote:\n\n> I agree, its confusing for many and that confusion arises from the fact\n> that you usually talk of shared_buffers in MB or GB whereas hugepages have\n> to be configured in units of 2mb. But once they understand they realize its\n> pretty simple.\n>\n> Don, we have experienced the same not just with postgres but also with\n> oracle. I havent been able to get to the root of it, but what we usually do\n> is, we add another 100-200 pages and that works for us. If the SGA or\n> shared_buffers is high eg 96gb, then we add 250-500 pages. 
Those few\n> hundred MBs may be wasted (because the moment you configure hugepages, the\n> operating system considers it as used and does not use it any more) but\n> nowadays, servers have 64 or 128 gb RAM easily and wasting that 500mb to\n> 1gb does not hurt really.\n>\n\nI don't have a problem with the math, just wanted to know if it was\npossible to better estimate what the actual requirements would be at\ndeployment time. My fallback will probably be you did and just pad with an\nextra 512MB by default.\n\nDon.\n\n
", "msg_date": "Wed, 9 Jun 2021 22:55:08 -0500", "msg_from": "Don Seiler <don@seiler.us>", "msg_from_op": true, "msg_subject": "Re: Estimating HugePages Requirements?" 
If this gets through, we don't\n\t * need to be so careful during the actual allocation phase.\n\t */\n\tsize = 100000;\n\tsize = add_size(size, PGSemaphoreShmemSize(numSemas));\n\tsize = add_size(size, SpinlockSemaSize());\n\tsize = add_size(size, hash_estimate_size(SHMEM_INDEX_SIZE,\n\t\t\t\t\t\t\t\t\t\t\t sizeof(ShmemIndexEnt)));\n\tsize = add_size(size, dsm_estimate_size());\n\tsize = add_size(size, BufferShmemSize());\n\tsize = add_size(size, LockShmemSize());\n\tsize = add_size(size, PredicateLockShmemSize());\n\tsize = add_size(size, ProcGlobalShmemSize());\n\tsize = add_size(size, XLOGShmemSize());\n\tsize = add_size(size, CLOGShmemSize());\n\tsize = add_size(size, CommitTsShmemSize());\n\tsize = add_size(size, SUBTRANSShmemSize());\n\tsize = add_size(size, TwoPhaseShmemSize());\n\tsize = add_size(size, BackgroundWorkerShmemSize());\n\tsize = add_size(size, MultiXactShmemSize());\n\tsize = add_size(size, LWLockShmemSize());\n\tsize = add_size(size, ProcArrayShmemSize());\n\tsize = add_size(size, BackendStatusShmemSize());\n\tsize = add_size(size, SInvalShmemSize());\n\tsize = add_size(size, PMSignalShmemSize());\n\tsize = add_size(size, ProcSignalShmemSize());\n\tsize = add_size(size, CheckpointerShmemSize());\n\tsize = add_size(size, AutoVacuumShmemSize());\n\tsize = add_size(size, ReplicationSlotsShmemSize());\n\tsize = add_size(size, ReplicationOriginShmemSize());\n\tsize = add_size(size, WalSndShmemSize());\n\tsize = add_size(size, WalRcvShmemSize());\n\tsize = add_size(size, PgArchShmemSize());\n\tsize = add_size(size, ApplyLauncherShmemSize());\n\tsize = add_size(size, SnapMgrShmemSize());\n\tsize = add_size(size, BTreeShmemSize());\n\tsize = add_size(size, SyncScanShmemSize());\n\tsize = add_size(size, AsyncShmemSize());\n#ifdef EXEC_BACKEND\n\tsize = add_size(size, ShmemBackendArraySize());\n#endif\n\n\t/* freeze the addin request size and include it */\n\taddin_request_allowed = false;\n\tsize = add_size(size, total_addin_request);\n\n /* might as 
well round it off to a multiple of a typical page size */\n size = add_size(size, 8192 - (size % 8192));\n\nBTW, I think it'd be nice if this were a NOTICE:\n| elog(DEBUG1, \"mmap(%zu) with MAP_HUGETLB failed, huge pages disabled: %m\", allocsize);\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 10 Jun 2021 19:23:33 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Thu, Jun 10, 2021 at 7:23 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Wed, Jun 09, 2021 at 10:55:08PM -0500, Don Seiler wrote:\n> > On Wed, Jun 9, 2021, 21:03 P C <puravc@gmail.com> wrote:\n> >\n> > > I agree, its confusing for many and that confusion arises from the fact\n> > > that you usually talk of shared_buffers in MB or GB whereas hugepages\n> have\n> > > to be configured in units of 2mb. But once they understand they\n> realize its\n> > > pretty simple.\n> > >\n> > > Don, we have experienced the same not just with postgres but also with\n> > > oracle. I havent been able to get to the root of it, but what we\n> usually do\n> > > is, we add another 100-200 pages and that works for us. If the SGA or\n> > > shared_buffers is high eg 96gb, then we add 250-500 pages. Those few\n> > > hundred MBs may be wasted (because the moment you configure\n> hugepages, the\n> > > operating system considers it as used and does not use it any more) but\n> > > nowadays, servers have 64 or 128 gb RAM easily and wasting that 500mb\n> to\n> > > 1gb does not hurt really.\n> >\n> > I don't have a problem with the math, just wanted to know if it was\n> > possible to better estimate what the actual requirements would be at\n> > deployment time. 
My fallback will probably be you did and just pad with\n> an\n> > extra 512MB by default.\n>\n> It's because the huge allocation isn't just shared_buffers, but also\n> wal_buffers:\n>\n> | The amount of shared memory used for WAL data that has not yet been\n> written to disk.\n> | The default setting of -1 selects a size equal to 1/32nd (about 3%) of\n> shared_buffers, ...\n>\n> .. and other stuff:\n>\n> src/backend/storage/ipc/ipci.c\n> * Size of the Postgres shared-memory block is estimated via\n> * moderately-accurate estimates for the big hogs, plus 100K for\n> the\n> * stuff that's too small to bother with estimating.\n> *\n> * We take some care during this phase to ensure that the total\n> size\n> * request doesn't overflow size_t. If this gets through, we don't\n> * need to be so careful during the actual allocation phase.\n> */\n> size = 100000;\n> size = add_size(size, PGSemaphoreShmemSize(numSemas));\n> size = add_size(size, SpinlockSemaSize());\n> size = add_size(size, hash_estimate_size(SHMEM_INDEX_SIZE,\n>\n> sizeof(ShmemIndexEnt)));\n> size = add_size(size, dsm_estimate_size());\n> size = add_size(size, BufferShmemSize());\n> size = add_size(size, LockShmemSize());\n> size = add_size(size, PredicateLockShmemSize());\n> size = add_size(size, ProcGlobalShmemSize());\n> size = add_size(size, XLOGShmemSize());\n> size = add_size(size, CLOGShmemSize());\n> size = add_size(size, CommitTsShmemSize());\n> size = add_size(size, SUBTRANSShmemSize());\n> size = add_size(size, TwoPhaseShmemSize());\n> size = add_size(size, BackgroundWorkerShmemSize());\n> size = add_size(size, MultiXactShmemSize());\n> size = add_size(size, LWLockShmemSize());\n> size = add_size(size, ProcArrayShmemSize());\n> size = add_size(size, BackendStatusShmemSize());\n> size = add_size(size, SInvalShmemSize());\n> size = add_size(size, PMSignalShmemSize());\n> size = add_size(size, ProcSignalShmemSize());\n> size = add_size(size, CheckpointerShmemSize());\n> size = add_size(size, 
AutoVacuumShmemSize());\n> size = add_size(size, ReplicationSlotsShmemSize());\n> size = add_size(size, ReplicationOriginShmemSize());\n> size = add_size(size, WalSndShmemSize());\n> size = add_size(size, WalRcvShmemSize());\n> size = add_size(size, PgArchShmemSize());\n> size = add_size(size, ApplyLauncherShmemSize());\n> size = add_size(size, SnapMgrShmemSize());\n> size = add_size(size, BTreeShmemSize());\n> size = add_size(size, SyncScanShmemSize());\n> size = add_size(size, AsyncShmemSize());\n> #ifdef EXEC_BACKEND\n> size = add_size(size, ShmemBackendArraySize());\n> #endif\n>\n> /* freeze the addin request size and include it */\n> addin_request_allowed = false;\n> size = add_size(size, total_addin_request);\n>\n> /* might as well round it off to a multiple of a typical page size\n> */\n> size = add_size(size, 8192 - (size % 8192));\n>\n> BTW, I think it'd be nice if this were a NOTICE:\n> | elog(DEBUG1, \"mmap(%zu) with MAP_HUGETLB failed, huge pages disabled:\n> %m\", allocsize);\n>\n\nGreat detail. I did some trial and error around just a few variables\n(shared_buffers, wal_buffers, max_connections) and came up with a formula\nthat seems to be \"good enough\" for at least a rough default estimate.\n\nThe pseudo-code is basically:\n\nceiling((shared_buffers + 200 + (25 * shared_buffers/1024) +\n10*(max_connections-100)/200 + wal_buffers-16)/2)\n\nThis assumes that all values are in MB and that wal_buffers is set to a\nvalue other than the default of -1 obviously. I decided to default\nwal_buffers to 16MB in our environments since that's what -1 should go to\nbased on the description in the documentation for an instance with\nshared_buffers of the sizes in our deployments.\n\nThis formula did come up a little short (2MB) when I had a low\nshared_buffers value at 2GB. Raising that starting 200 value to something\nlike 250 would take care of that. 
The limited testing I did based on\ndifferent values we see across our production deployments worked otherwise.\nPlease let me know what you folks think. I know I'm ignoring a lot of other\nfactors, especially given what Justin recently shared.\n\nThe remaining trick for me now is to calculate this in chef since\nshared_buffers and wal_buffers attributes are strings with the unit (\"MB\")\nin them, rather than just numerical values. Thinking of changing that\nattribute to be just that and assume/require MB to make the calculations\neasier.\n\n-- \nDon Seiler\nwww.seiler.us\n\nOn Thu, Jun 10, 2021 at 7:23 PM Justin Pryzby <pryzby@telsasoft.com> wrote:On Wed, Jun 09, 2021 at 10:55:08PM -0500, Don Seiler wrote:\n> On Wed, Jun 9, 2021, 21:03 P C <puravc@gmail.com> wrote:\n> \n> > I agree, its confusing for many and that confusion arises from the fact\n> > that you usually talk of shared_buffers in MB or GB whereas hugepages have\n> > to be configured in units of 2mb. But once they understand they realize its\n> > pretty simple.\n> >\n> > Don, we have experienced the same not just with postgres but also with\n> > oracle. I havent been able to get to the root of it, but what we usually do\n> > is, we add another 100-200 pages and that works for us. If the SGA or\n> > shared_buffers is high eg 96gb, then we add 250-500 pages. Those few\n> > hundred MBs  may be wasted (because the moment you configure hugepages, the\n> > operating system considers it as used and does not use it any more) but\n> > nowadays, servers have 64 or 128 gb RAM easily and wasting that 500mb to\n> > 1gb does not hurt really.\n> \n> I don't have a problem with the math, just wanted to know if it was\n> possible to better estimate what the actual requirements would be at\n> deployment time. 
My fallback will probably be you did and just pad with an\n> extra 512MB by default.\n\nIt's because the huge allocation isn't just shared_buffers, but also\nwal_buffers:\n\n| The amount of shared memory used for WAL data that has not yet been written to disk.\n| The default setting of -1 selects a size equal to 1/32nd (about 3%) of shared_buffers, ...\n\n.. and other stuff:\n\nsrc/backend/storage/ipc/ipci.c\n         * Size of the Postgres shared-memory block is estimated via\n         * moderately-accurate estimates for the big hogs, plus 100K for the\n         * stuff that's too small to bother with estimating.\n         *\n         * We take some care during this phase to ensure that the total size\n         * request doesn't overflow size_t.  If this gets through, we don't\n         * need to be so careful during the actual allocation phase.\n         */\n        size = 100000;\n        size = add_size(size, PGSemaphoreShmemSize(numSemas));\n        size = add_size(size, SpinlockSemaSize());\n        size = add_size(size, hash_estimate_size(SHMEM_INDEX_SIZE,\n                                                                                         sizeof(ShmemIndexEnt)));\n        size = add_size(size, dsm_estimate_size());\n        size = add_size(size, BufferShmemSize());\n        size = add_size(size, LockShmemSize());\n        size = add_size(size, PredicateLockShmemSize());\n        size = add_size(size, ProcGlobalShmemSize());\n        size = add_size(size, XLOGShmemSize());\n        size = add_size(size, CLOGShmemSize());\n        size = add_size(size, CommitTsShmemSize());\n        size = add_size(size, SUBTRANSShmemSize());\n        size = add_size(size, TwoPhaseShmemSize());\n        size = add_size(size, BackgroundWorkerShmemSize());\n        size = add_size(size, MultiXactShmemSize());\n        size = add_size(size, LWLockShmemSize());\n        size = add_size(size, ProcArrayShmemSize());\n        size = add_size(size, BackendStatusShmemSize());\n   
     size = add_size(size, SInvalShmemSize());\n        size = add_size(size, PMSignalShmemSize());\n        size = add_size(size, ProcSignalShmemSize());\n        size = add_size(size, CheckpointerShmemSize());\n        size = add_size(size, AutoVacuumShmemSize());\n        size = add_size(size, ReplicationSlotsShmemSize());\n        size = add_size(size, ReplicationOriginShmemSize());\n        size = add_size(size, WalSndShmemSize());\n        size = add_size(size, WalRcvShmemSize());\n        size = add_size(size, PgArchShmemSize());\n        size = add_size(size, ApplyLauncherShmemSize());\n        size = add_size(size, SnapMgrShmemSize());\n        size = add_size(size, BTreeShmemSize());\n        size = add_size(size, SyncScanShmemSize());\n        size = add_size(size, AsyncShmemSize());\n#ifdef EXEC_BACKEND\n        size = add_size(size, ShmemBackendArraySize());\n#endif\n\n        /* freeze the addin request size and include it */\n        addin_request_allowed = false;\n        size = add_size(size, total_addin_request);\n\n        /* might as well round it off to a multiple of a typical page size */\n        size = add_size(size, 8192 - (size % 8192));\n\nBTW, I think it'd be nice if this were a NOTICE:\n| elog(DEBUG1, \"mmap(%zu) with MAP_HUGETLB failed, huge pages disabled: %m\", allocsize);Great detail. I did some trial and error around just a few variables (shared_buffers, wal_buffers, max_connections) and came up with a formula that seems to be \"good enough\" for at least a rough default estimate.The pseudo-code is basically:ceiling((shared_buffers + 200 + (25 * shared_buffers/1024) + 10*(max_connections-100)/200 + wal_buffers-16)/2) This assumes that all values are in MB and that wal_buffers is set to a value other than the default of -1 obviously. 
I decided to default wal_buffers to 16MB in our environments since that's what -1 should go to based on the description in the documentation for an instance with shared_buffers of the sizes in our deployments.This formula did come up a little short (2MB) when I had a low shared_buffers value at 2GB. Raising that starting 200 value to something like 250 would take care of that. The limited testing I did based on different values we see across our production deployments worked otherwise. Please let me know what you folks think. I know I'm ignoring a lot of other factors, especially given what Justin recently shared.The remaining trick for me now is to calculate this in chef since shared_buffers and wal_buffers attributes are strings with the unit (\"MB\") in them, rather than just numerical values. Thinking of changing that attribute to be just that and assume/require MB to make the calculations easier.-- Don Seilerwww.seiler.us", "msg_date": "Mon, 14 Jun 2021 09:16:39 -0500", "msg_from": "Don Seiler <don@seiler.us>", "msg_from_op": true, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 6/9/21, 8:09 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> On 6/9/21, 3:51 PM, \"Mark Dilger\" <mark.dilger@enterprisedb.com> wrote:\r\n>>> On Jun 9, 2021, at 1:52 PM, Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>>>\r\n>>> I'd be happy to clean it up and submit it for\r\n>>> discussion in pgsql-hackers@ if there is interest.\r\n>>\r\n>> Yes, I'd like to see it. Thanks for offering.\r\n>\r\n> Here's the general idea. It still needs a bit of polishing, but I'm\r\n> hoping this is enough to spark some discussion on the approach.\r\n\r\nHere's a rebased version of the patch.\r\n\r\nNathan", "msg_date": "Mon, 9 Aug 2021 22:57:18 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" 
}, { "msg_contents": "On Mon, Aug 9, 2021 at 3:57 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n\n> On 6/9/21, 8:09 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\n> > On 6/9/21, 3:51 PM, \"Mark Dilger\" <mark.dilger@enterprisedb.com> wrote:\n> >>> On Jun 9, 2021, at 1:52 PM, Bossart, Nathan <bossartn@amazon.com>\n> wrote:\n> >>>\n> >>> I'd be happy to clean it up and submit it for\n> >>> discussion in pgsql-hackers@ if there is interest.\n> >>\n> >> Yes, I'd like to see it. Thanks for offering.\n> >\n> > Here's the general idea. It still needs a bit of polishing, but I'm\n> > hoping this is enough to spark some discussion on the approach.\n>\n> Here's a rebased version of the patch.\n>\n> Nathan\n>\n> Hi,\n\n-extern void CreateSharedMemoryAndSemaphores(void);\n+extern Size CreateSharedMemoryAndSemaphores(bool size_only);\n\nShould the parameter be enum / bitmask so that future addition would not\nchange the method signature ?\n\nCheers\n\n
", "msg_date": "Mon, 9 Aug 2021 16:10:27 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 8/9/21, 4:05 PM, \"Zhihong Yu\" <zyu@yugabyte.com> wrote:\r\n> -extern void CreateSharedMemoryAndSemaphores(void);\r\n> +extern Size CreateSharedMemoryAndSemaphores(bool size_only);\r\n>\r\n> Should the parameter be enum / bitmask so that future addition would not change the method signature ?\r\n\r\nI don't have a strong opinion about this. I don't feel that it's\r\nreally necessary, but if reviewers want a bitmask instead, I can\r\nchange it.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 9 Aug 2021 23:48:34 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Thu, Jun 10, 2021 at 07:23:33PM -0500, Justin Pryzby wrote:\n> On Wed, Jun 09, 2021 at 10:55:08PM -0500, Don Seiler wrote:\n> > On Wed, Jun 9, 2021, 21:03 P C <puravc@gmail.com> wrote:\n> > \n> > > I agree, its confusing for many and that confusion arises from the fact\n> > > that you usually talk of shared_buffers in MB or GB whereas hugepages have\n> > > to be configured in units of 2mb. But once they understand they realize its\n> > > pretty simple.\n> > >\n> > > Don, we have experienced the same not just with postgres but also with\n> > > oracle. I havent been able to get to the root of it, but what we usually do\n> > > is, we add another 100-200 pages and that works for us. 
If the SGA or\n> > > shared_buffers is high eg 96gb, then we add 250-500 pages. Those few\n> > > hundred MBs may be wasted (because the moment you configure hugepages, the\n> > > operating system considers it as used and does not use it any more) but\n> > > nowadays, servers have 64 or 128 gb RAM easily and wasting that 500mb to\n> > > 1gb does not hurt really.\n> > \n> > I don't have a problem with the math, just wanted to know if it was\n> > possible to better estimate what the actual requirements would be at\n> > deployment time. My fallback will probably be you did and just pad with an\n> > extra 512MB by default.\n> \n> It's because the huge allocation isn't just shared_buffers, but also\n> wal_buffers:\n> \n> | The amount of shared memory used for WAL data that has not yet been written to disk.\n> | The default setting of -1 selects a size equal to 1/32nd (about 3%) of shared_buffers, ...\n> \n> .. and other stuff:\n\nI wonder if this shouldn't be solved the other way around:\n\nDefine shared_buffers as the exact size to be allocated/requested from the OS\n(regardless of whether they're huge pages or not), and have postgres compute\neverything else based on that. So shared_buffers=2GB would end up being 1950MB\n(or so) of buffer cache. We'd have to check that after the other allocations,\nthere's still at least 128kB left for the buffer cache. Maybe we'd have to\nbump the minimum value of shared_buffers.\n\n> src/backend/storage/ipc/ipci.c\n> \t * Size of the Postgres shared-memory block is estimated via\n> \t * moderately-accurate estimates for the big hogs, plus 100K for the\n> \t * stuff that's too small to bother with estimating.\n> \t *\n> \t * We take some care during this phase to ensure that the total size\n> \t * request doesn't overflow size_t. 
If this gets through, we don't\n> \t * need to be so careful during the actual allocation phase.\n> \t */\n> \tsize = 100000;\n> \tsize = add_size(size, PGSemaphoreShmemSize(numSemas));\n> \tsize = add_size(size, SpinlockSemaSize());\n> \tsize = add_size(size, hash_estimate_size(SHMEM_INDEX_SIZE,\n> \t\t\t\t\t\t\t\t\t\t\t sizeof(ShmemIndexEnt)));\n> \tsize = add_size(size, dsm_estimate_size());\n> \tsize = add_size(size, BufferShmemSize());\n> \tsize = add_size(size, LockShmemSize());\n> \tsize = add_size(size, PredicateLockShmemSize());\n> \tsize = add_size(size, ProcGlobalShmemSize());\n> \tsize = add_size(size, XLOGShmemSize());\n> \tsize = add_size(size, CLOGShmemSize());\n> \tsize = add_size(size, CommitTsShmemSize());\n> \tsize = add_size(size, SUBTRANSShmemSize());\n> \tsize = add_size(size, TwoPhaseShmemSize());\n> \tsize = add_size(size, BackgroundWorkerShmemSize());\n> \tsize = add_size(size, MultiXactShmemSize());\n> \tsize = add_size(size, LWLockShmemSize());\n> \tsize = add_size(size, ProcArrayShmemSize());\n> \tsize = add_size(size, BackendStatusShmemSize());\n> \tsize = add_size(size, SInvalShmemSize());\n> \tsize = add_size(size, PMSignalShmemSize());\n> \tsize = add_size(size, ProcSignalShmemSize());\n> \tsize = add_size(size, CheckpointerShmemSize());\n> \tsize = add_size(size, AutoVacuumShmemSize());\n> \tsize = add_size(size, ReplicationSlotsShmemSize());\n> \tsize = add_size(size, ReplicationOriginShmemSize());\n> \tsize = add_size(size, WalSndShmemSize());\n> \tsize = add_size(size, WalRcvShmemSize());\n> \tsize = add_size(size, PgArchShmemSize());\n> \tsize = add_size(size, ApplyLauncherShmemSize());\n> \tsize = add_size(size, SnapMgrShmemSize());\n> \tsize = add_size(size, BTreeShmemSize());\n> \tsize = add_size(size, SyncScanShmemSize());\n> \tsize = add_size(size, AsyncShmemSize());\n> #ifdef EXEC_BACKEND\n> \tsize = add_size(size, ShmemBackendArraySize());\n> #endif\n> \n> \t/* freeze the addin request size and include it */\n> 
\taddin_request_allowed = false;\n> \tsize = add_size(size, total_addin_request);\n> \n> /* might as well round it off to a multiple of a typical page size */\n> size = add_size(size, 8192 - (size % 8192));\n> \n> BTW, I think it'd be nice if this were a NOTICE:\n> | elog(DEBUG1, \"mmap(%zu) with MAP_HUGETLB failed, huge pages disabled: %m\", allocsize);\n\n\n", "msg_date": "Mon, 9 Aug 2021 18:58:53 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "Hi,\n\nOn 2021-08-09 18:58:53 -0500, Justin Pryzby wrote:\n> Define shared_buffers as the exact size to be allocated/requested from the OS\n> (regardless of whether they're huge pages or not), and have postgres compute\n> everything else based on that. So shared_buffers=2GB would end up being 1950MB\n> (or so) of buffer cache. We'd have to check that after the other allocations,\n> there's still at least 128kB left for the buffer cache. Maybe we'd have to\n> bump the minimum value of shared_buffers.\n\nI don't like that. How much \"other\" shared memory we're going to need is\nvery hard to predict and depends on extensions, configuration options\nlike max_locks_per_transaction, max_connections to a significant\ndegree. This way the user ends up needing to guess at least as much as\nbefore to get to a sensible shared_buffers.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 9 Aug 2021 20:38:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" 
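Justin's "define shared_buffers as the exact total" idea, and the objection that the non-buffer overhead is hard to predict, can be illustrated with a toy calculation (purely illustrative; the function and the overhead figure are not from any patch):

```python
def buffer_cache_mb(total_shmem_mb, other_shmem_mb):
    """Toy version of the idea above: treat the configured size as the
    whole shared memory block and derive the buffer cache from what is
    left. other_shmem_mb stands for everything else (locks, WAL buffers,
    extensions, ...), which -- per the objection above -- depends on many
    GUCs and is hard to predict up front."""
    remaining_mb = total_shmem_mb - other_shmem_mb
    if remaining_mb * 1024 < 128:  # keep at least 128kB of buffer cache
        raise ValueError("total size too small for the non-buffer overhead")
    return remaining_mb

print(buffer_cache_mb(2048, 98))  # the "2GB -> ~1950MB of buffer cache" example
```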
}, { "msg_contents": "Hi,\n\nOn 2021-08-09 22:57:18 +0000, Bossart, Nathan wrote:\n\n> @@ -1026,6 +1031,18 @@ PostmasterMain(int argc, char *argv[])\n> \t */\n> \tInitializeMaxBackends();\n> \n> +\tif (output_shmem)\n> +\t{\n> +\t\tchar output[64];\n> +\t\tSize size;\n> +\n> +\t\tsize = CreateSharedMemoryAndSemaphores(true);\n> +\t\tsprintf(output, \"%zu\", size);\n> +\n> +\t\tputs(output);\n> +\t\tExitPostmaster(0);\n> +\t}\n\nI don't like putting this into PostmasterMain(). Either BootstrapMain()\n(specifically checker mode) or GucInfoMain() seem like better places.\n\n\n> -void\n> -CreateSharedMemoryAndSemaphores(void)\n> +Size\n> +CreateSharedMemoryAndSemaphores(bool size_only)\n> {\n> \tPGShmemHeader *shim = NULL;\n> \n> @@ -161,6 +161,9 @@ CreateSharedMemoryAndSemaphores(void)\n> \t\t/* might as well round it off to a multiple of a typical page size */\n> \t\tsize = add_size(size, 8192 - (size % 8192));\n> \n> +\t\tif (size_only)\n> +\t\t\treturn size;\n> +\n> \t\telog(DEBUG3, \"invoking IpcMemoryCreate(size=%zu)\", size);\n> \n> \t\t/*\n> @@ -288,4 +291,6 @@ CreateSharedMemoryAndSemaphores(void)\n> \t */\n> \tif (shmem_startup_hook)\n> \t\tshmem_startup_hook();\n> +\n> +\treturn 0;\n> }\n\nThat seems like an ugly API to me. Why don't we split the size\ndetermination and shmem creation functions into two?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 9 Aug 2021 20:42:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 8/9/21, 8:43 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n> I don't like putting this into PostmasterMain(). Either BootstrapMain()\r\n> (specifically checker mode) or GucInfoMain() seem like better places.\r\n\r\nI think BootstrapModeMain() makes the most sense. It fits in nicely\r\nwith the --check logic that's already there. 
With v3, the following\r\ncommand can be used to retrieve the amount of shared memory required.\r\n\r\n postgres --output-shmem -D dir\r\n\r\nWhile testing this new option, I noticed that you can achieve similar\r\nresults today with the following command, although this one will\r\nactually try to create the shared memory, too.\r\n\r\n postgres --check -D dir -c log_min_messages=debug3 2> >(grep IpcMemoryCreate)\r\n\r\nIMO the new option is still handy, but I can see the argument that it\r\nmight not be necessary.\r\n\r\n> That seems like an ugly API to me. Why don't we split the size\r\n> determination and shmem creation functions into two?\r\n\r\nI did it this way in v3.\r\n\r\nNathan", "msg_date": "Wed, 11 Aug 2021 23:23:52 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Wed, Aug 11, 2021 at 11:23:52PM +0000, Bossart, Nathan wrote:\n> I think BootstrapModeMain() makes the most sense. It fits in nicely\n> with the --check logic that's already there. With v3, the following\n> command can be used to retrieve the amount of shared memory required.\n> \n> postgres --output-shmem -D dir\n> \n> While testing this new option, I noticed that you can achieve similar\n> results today with the following command, although this one will\n> actually try to create the shared memory, too.\n\nThat may not be the best option.\n\n> IMO the new option is still handy, but I can see the argument that it\n> might not be necessary.\n\nA separate option looks handy. Wouldn't it be better to document it\nin postgres-ref.sgml then?\n--\nMichael", "msg_date": "Fri, 27 Aug 2021 15:46:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" 
}, { "msg_contents": "On Fri, Aug 27, 2021 at 8:46 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Aug 11, 2021 at 11:23:52PM +0000, Bossart, Nathan wrote:\n> > I think BootstrapModeMain() makes the most sense. It fits in nicely\n> > with the --check logic that's already there. With v3, the following\n> > command can be used to retrieve the amount of shared memory required.\n> >\n> > postgres --output-shmem -D dir\n> >\n> > While testing this new option, I noticed that you can achieve similar\n> > results today with the following command, although this one will\n> > actually try to create the shared memory, too.\n>\n> That may not be the best option.\n\nI would say that can be a disastrous option.\n\nFirst of all it would probably not work if you already have something\nrunning -- especially when using huge pages. And if it does work, in\nthat or other scenarios, it can potentially have significant impact on\na running cluster to suddenly allocate many GB of more memory...\n\n\n> > IMO the new option is still handy, but I can see the argument that it\n> > might not be necessary.\n>\n> A separate option looks handy. Wouldn't it be better to document it\n> in postgres-ref.sgml then?\n\nI'd say a lot more than just handy. I don't think the workaround is\nreally all that useful.\n\n(haven't looked at the actual patch yet, just commenting on the principle)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Fri, 27 Aug 2021 16:40:27 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" 
}, { "msg_contents": "On 8/27/21, 7:41 AM, \"Magnus Hagander\" <magnus@hagander.net> wrote:\r\n> On Fri, Aug 27, 2021 at 8:46 AM Michael Paquier <michael@paquier.xyz> wrote:\r\n>> On Wed, Aug 11, 2021 at 11:23:52PM +0000, Bossart, Nathan wrote:\r\n>> > While testing this new option, I noticed that you can achieve similar\r\n>> > results today with the following command, although this one will\r\n>> > actually try to create the shared memory, too.\r\n>>\r\n>> That may not be the best option.\r\n>\r\n> I would say that can be a disastrous option.\r\n>\r\n> First of all it would probably not work if you already have something\r\n> running -- especially when using huge pages. And if it does work, in\r\n> that or other scenarios, it can potentially have significant impact on\r\n> a running cluster to suddenly allocate many GB of more memory...\r\n\r\nThe v3 patch actually didn't work if the server was already running.\r\nI removed that restriction in v4.\r\n\r\n>> > IMO the new option is still handy, but I can see the argument that it\r\n>> > might not be necessary.\r\n>>\r\n>> A separate option looks handy. Wouldn't it be better to document it\r\n>> in postgres-ref.sgml then?\r\n>\r\n> I'd say a lot more than just handy. I don't think the workaround is\r\n> really all that useful.\r\n\r\nI added some documentation in v4.\r\n\r\nNathan", "msg_date": "Fri, 27 Aug 2021 18:16:01 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 2021-08-27 16:40:27 +0200, Magnus Hagander wrote:\n> On Fri, Aug 27, 2021 at 8:46 AM Michael Paquier <michael@paquier.xyz> wrote:\n> I'd say a lot more than just handy. I don't think the workaround is\n> really all that useful.\n\n+1\n\n\n", "msg_date": "Fri, 27 Aug 2021 11:46:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" 
}, { "msg_contents": "On 8/27/21, 11:16 AM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> I added some documentation in v4.\r\n\r\nI realized that my attempt at documenting this new option was missing\r\nsome important context about the meaning of the return value when used\r\nagainst a running server. I added that in v5.\r\n\r\nNathan", "msg_date": "Fri, 27 Aug 2021 19:27:18 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "Hi,\n\nOn 2021-08-27 19:27:18 +0000, Bossart, Nathan wrote:\n> + <varlistentry>\n> + <term><option>--output-shmem</option></term>\n> + <listitem>\n> + <para>\n> + Prints the amount of shared memory required for the given\n> + configuration and exits. This can be used on a running server, but\n> + the return value reflects the amount of shared memory needed based\n> + on the current invocation. It does not return the amount of shared\n> + memory in use by the running server. This must be the first\n> + argument on the command line.\n> + </para>\n> +\n> + <para>\n> + This option is useful for determining the number of huge pages\n> + needed for the server. For more information, see\n> + <xref linkend=\"linux-huge-pages\"/>.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n\nOne thing I wonder is if this wouldn't better be dealt with in a more generic\nway. While this is the most problematic runtime computed GUC, it's not the\nonly one. What if we introduced a new shared_memory_size GUC, and made\n--describe-config output it? Perhaps adding --describe-config=guc-name?\n\nI also wonder if we should output the number of hugepages needed instead of\nthe \"raw\" bytes of shared memory. The whole business about figuring out the\nhuge page size, dividing the shared memory size by that and then rounding up\ncould be removed in that case. 
Due to huge_page_size it's not even immediately\nobvious which huge page size one should use...\n\n\n> diff --git a/src/backend/main/main.c b/src/backend/main/main.c\n> index 3a2a0d598c..c141ae3d1c 100644\n> --- a/src/backend/main/main.c\n> +++ b/src/backend/main/main.c\n> @@ -182,9 +182,11 @@ main(int argc, char *argv[])\n> \t */\n> \n> \tif (argc > 1 && strcmp(argv[1], \"--check\") == 0)\n> -\t\tBootstrapModeMain(argc, argv, true);\n> +\t\tBootstrapModeMain(argc, argv, true, false);\n> +\telse if (argc > 1 && strcmp(argv[1], \"--output-shmem\") == 0)\n> +\t\tBootstrapModeMain(argc, argv, false, true);\n> \telse if (argc > 1 && strcmp(argv[1], \"--boot\") == 0)\n> -\t\tBootstrapModeMain(argc, argv, false);\n> +\t\tBootstrapModeMain(argc, argv, false, false);\n> #ifdef EXEC_BACKEND\n> \telse if (argc > 1 && strncmp(argv[1], \"--fork\", 6) == 0)\n> \t\tSubPostmasterMain(argc, argv);\n\nhelp() needs an update too.\n\n\n> diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c\n> index 3e4ec53a97..b225b1ee70 100644\n> --- a/src/backend/storage/ipc/ipci.c\n> +++ b/src/backend/storage/ipc/ipci.c\n> @@ -75,6 +75,87 @@ RequestAddinShmemSpace(Size size)\n> \ttotal_addin_request = add_size(total_addin_request, size);\n> }\n> \n> +/*\n> + * CalculateShmemSize\n> + *\t\tCalculates the amount of shared memory and number of semaphores needed.\n> + *\n> + * If num_semaphores is not NULL, it will be set to the number of semaphores\n> + * required.\n> + *\n> + * Note that this function freezes the additional shared memory request size\n> + * from loadable modules.\n> + */\n> +Size\n> +CalculateShmemSize(int *num_semaphores)\n> +{\n\nCan you split this into a separate commit? 
It feels fairly uncontroversial to\nme, so I think we could just apply it soon?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Aug 2021 12:38:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 8/27/21, 12:39 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n> One thing I wonder is if this wouldn't better be dealt with in a more generic\r\n> way. While this is the most problematic runtime computed GUC, it's not the\r\n> only one. What if we introduced a new shared_memory_size GUC, and made\r\n> --describe-config output it? Perhaps adding --describe-config=guc-name?\r\n>\r\n> I also wonder if we should output the number of hugepages needed instead of\r\n> the \"raw\" bytes of shared memory. The whole business about figuring out the\r\n> huge page size, dividing the shared memory size by that and then rounding up\r\n> could be removed in that case. Due to huge_page_size it's not even immediately\r\n> obvious which huge page size one should use...\r\n\r\nI like both of these ideas.\r\n\r\n> Can you split this into a separate commit? It feels fairly uncontroversial to\r\n> me, so I think we could just apply it soon?\r\n\r\nI attached a patch for just the uncontroversial part, which is\r\nunfortunately all I have time for today.\r\n\r\nNathan", "msg_date": "Fri, 27 Aug 2021 20:16:40 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Fri, Aug 27, 2021 at 08:16:40PM +0000, Bossart, Nathan wrote:\n> On 8/27/21, 12:39 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\n>> One thing I wonder is if this wouldn't better be dealt with in a more generic\n>> way. While this is the most problematic runtime computed GUC, it's not the\n>> only one. What if we introduced a new shared_memory_size GUC, and made\n>> --describe-config output it? 
Perhaps adding --describe-config=guc-name?\n>>\n>> I also wonder if we should output the number of hugepages needed instead of\n>> the \"raw\" bytes of shared memory. The whole business about figuring out the\n>> huge page size, dividing the shared memory size by that and then rounding up\n>> could be removed in that case. Due to huge_page_size it's not even immediately\n>> obvious which huge page size one should use...\n> \n> I like both of these ideas.\n\nThat pretty much looks like -C in concept, isn't it? Except that you\ncannot get the actual total shared memory value because we'd do this\noperation before loading shared_preload_libraries and miss any amount\nasked by extensions. There is a problem similar when attempting to do\npostgres -C data_checksums, for example, which would output an\nincorrect value even if the cluster has data checksums enabled.\n--\nMichael", "msg_date": "Sat, 28 Aug 2021 11:00:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Sat, Aug 28, 2021 at 11:00:11AM +0900, Michael Paquier wrote:\n> On Fri, Aug 27, 2021 at 08:16:40PM +0000, Bossart, Nathan wrote:\n> > On 8/27/21, 12:39 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\n> >> One thing I wonder is if this wouldn't better be dealt with in a more generic\n> >> way. While this is the most problematic runtime computed GUC, it's not the\n> >> only one. What if we introduced a new shared_memory_size GUC, and made\n> >> --describe-config output it? Perhaps adding --describe-config=guc-name?\n> >>\n> >> I also wonder if we should output the number of hugepages needed instead of\n> >> the \"raw\" bytes of shared memory. The whole business about figuring out the\n> >> huge page size, dividing the shared memory size by that and then rounding up\n> >> could be removed in that case. 
Due to huge_page_size it's not even immediately\n> >> obvious which huge page size one should use...\n> > \n> > I like both of these ideas.\n> \n> That pretty much looks like -C in concept, isn't it? Except that you\n> cannot get the actual total shared memory value because we'd do this\n> operation before loading shared_preload_libraries and miss any amount\n> asked by extensions. There is a problem similar when attempting to do\n> postgres -C data_checksums, for example, which would output an\n> incorrect value even if the cluster has data checksums enabled.\n\nSince we don't want to try to allocate the huge pages, and we also don't want\nto compute based on shared_buffers alone, did anyone consider if pg_controldata\nis the right place to put this ?\n\nIt includes a lot of related stuff:\n\nmax_connections setting: 100\nmax_worker_processes setting: 8\n - (added in 2013: 6bc8ef0b7f1f1df3998745a66e1790e27424aa0c)\nmax_wal_senders setting: 10\nmax_prepared_xacts setting: 2\nmax_locks_per_xact setting: 64\n\nI'm not sure if there's any reason these aren't also shown (?)\nautovacuum_max_workers - added in 2007: e2a186b03\nmax_predicate_locks_per_xact - added in 2011: dafaa3efb\nmax_logical_replication_workers\nmax_replication_slots\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 27 Aug 2021 22:57:22 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 8/27/21, 7:01 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> On Fri, Aug 27, 2021 at 08:16:40PM +0000, Bossart, Nathan wrote:\r\n>> On 8/27/21, 12:39 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n>>> One thing I wonder is if this wouldn't better be dealt with in a more generic\r\n>>> way. While this is the most problematic runtime computed GUC, it's not the\r\n>>> only one. What if we introduced a new shared_memory_size GUC, and made\r\n>>> --describe-config output it? 
Perhaps adding --describe-config=guc-name?\r\n>>>\r\n>>> I also wonder if we should output the number of hugepages needed instead of\r\n>>> the \"raw\" bytes of shared memory. The whole business about figuring out the\r\n>>> huge page size, dividing the shared memory size by that and then rounding up\r\n>>> could be removed in that case. Due to huge_page_size it's not even immediately\r\n>>> obvious which huge page size one should use...\r\n>> \r\n>> I like both of these ideas.\r\n>\r\n> That pretty much looks like -C in concept, isn't it? Except that you\r\n> cannot get the actual total shared memory value because we'd do this\r\n> operation before loading shared_preload_libraries and miss any amount\r\n> asked by extensions. There is a problem similar when attempting to do\r\n> postgres -C data_checksums, for example, which would output an\r\n> incorrect value even if the cluster has data checksums enabled.\r\n\r\nAttached is a hacky attempt at adding a shared_memory_size GUC in a\r\nway that could be used with -C. This should include the amount of\r\nshared memory requested by extensions, too. As long as huge_page_size\r\nis nonzero, it seems easy enough to provide the number of huge pages\r\nneeded as well.\r\n\r\nNathan", "msg_date": "Sat, 28 Aug 2021 05:36:37 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Sat, Aug 28, 2021 at 05:36:37AM +0000, Bossart, Nathan wrote:\n> Attached is a hacky attempt at adding a shared_memory_size GUC in a\n> way that could be used with -C. This should include the amount of\n> shared memory requested by extensions, too. As long as huge_page_size\n> is nonzero, it seems easy enough to provide the number of huge pages\n> needed as well.\n\nYes, the implementation is not good. 
The key thing is that by wanting\nto support shared_memory_size with the -C switch of postgres, we need\nto call process_shared_preload_libraries before output_config_variable. \nThis additionally means to call ApplyLauncherRegister() before that so\nas all the bgworker slots are not taken first. Going through\n_PG_init() also means that we'd better use ChangeToDataDir()\nbeforehand.\n\nAttached is a WIP to show how the order of the operations could be\nchanged, as that's easier to grasp. Even if we don't do that, having\nthe GUC and the refactoring of CalculateShmemSize() would still be\nuseful, as one could just query an existing instance for an estimation\nof huge pages for a cloned one.\n\nThe GUC shared_memory_size should have GUC_NOT_IN_SAMPLE and\nGUC_DISALLOW_IN_FILE, with some documentation, of course. I added the\nflags to the GUC, not the docs. The code setting up the GUC is not\ngood either. It would make sense to just have that in a small wrapper\nof ipci.c, perhaps.\n--\nMichael", "msg_date": "Mon, 30 Aug 2021 16:29:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 8/30/21, 12:29 AM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> Attached is a WIP to show how the order of the operations could be\r\n> changed, as that's easier to grasp. Even if we don't do that, having\r\n> the GUC and the refactoring of CalculateShmemSize() would still be\r\n> useful, as one could just query an existing instance for an estimation\r\n> of huge pages for a cloned one.\r\n>\r\n> The GUC shared_memory_size should have GUC_NOT_IN_SAMPLE and\r\n> GUC_DISALLOW_IN_FILE, with some documentation, of course. I added the\r\n> flags to the GUC, not the docs. The code setting up the GUC is not\r\n> good either. 
It would make sense to just have that in a small wrapper\r\n> of ipci.c, perhaps.\r\n\r\nI moved the GUC calculation to ipci.c, adjusted the docs, and added a\r\nhuge_pages_required GUC. It's still a little rough around the edges,\r\nand I haven't tested it on Windows, but this seems like the direction\r\nthe patch is headed.\r\n\r\nNathan", "msg_date": "Tue, 31 Aug 2021 05:37:52 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Tue, Aug 31, 2021 at 05:37:52AM +0000, Bossart, Nathan wrote:\n> I moved the GUC calculation to ipci.c, adjusted the docs, and added a\n> huge_pages_required GUC. It's still a little rough around the edges,\n> and I haven't tested it on Windows, but this seems like the direction\n> the patch is headed.\n\nHmm. I am not sure about the addition of huge_pages_required, knowing\nthat we would have shared_memory_size. I'd rather let the calculation\npart to the user with a scan of /proc/meminfo.\n\n+#elif defined(WIN32)\n+ hp_size = GetLargePageMinimum();\n+#endif\n+\n+#if defined(MAP_HUGETLB) || defined(WIN32)\n+ hp_required = (size_b / hp_size) + 1;\nAs of [1], there is the following description:\n\"If the processor does not support large pages, the return value is\nzero.\"\nSo there is a problem here.\n\n[1]: https://docs.microsoft.com/en-us/windows/win32/api/memoryapi/nf-memoryapi-getlargepageminimum\n--\nMichael", "msg_date": "Wed, 1 Sep 2021 15:53:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 8/31/21, 11:54 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> Hmm. I am not sure about the addition of huge_pages_required, knowing\r\n> that we would have shared_memory_size. 
I'd rather let the calculation\r\n> part to the user with a scan of /proc/meminfo.\r\n\r\nI included this based on some feedback from Andres upthread [0]. I\r\nwent ahead and split the patch set into 3 pieces in case we end up\r\nleaving it out.\r\n\r\n> +#elif defined(WIN32)\r\n> + hp_size = GetLargePageMinimum();\r\n> +#endif\r\n> +\r\n> +#if defined(MAP_HUGETLB) || defined(WIN32)\r\n> + hp_required = (size_b / hp_size) + 1;\r\n> As of [1], there is the following description:\r\n> \"If the processor does not support large pages, the return value is\r\n> zero.\"\r\n> So there is a problem here.\r\n\r\nI've fixed this in v4.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/20210827193813.oqo5lamvyzahs35o%40alap3.anarazel.de", "msg_date": "Wed, 1 Sep 2021 18:28:21 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Wed, Sep 01, 2021 at 06:28:21PM +0000, Bossart, Nathan wrote:\n> On 8/31/21, 11:54 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n>> Hmm. I am not sure about the addition of huge_pages_required, knowing\n>> that we would have shared_memory_size. I'd rather let the calculation\n>> part to the user with a scan of /proc/meminfo.\n> \n> I included this based on some feedback from Andres upthread [0]. I\n> went ahead and split the patch set into 3 pieces in case we end up\n> leaving it out.\n\nThanks. Anyway, we don't really need huge_pages_required on Windows,\ndo we? The following docs of Windows tell what to do when using large\npages:\nhttps://docs.microsoft.com/en-us/windows/win32/memory/large-page-support\n\nThe backend code does that as in PGSharedMemoryCreate(), now that I\nlook at it. 
And there is no way to change the minimum large page size\nthere as far as I can see because that's decided by the processor, no?\nThere is a case for shared_memory_size on Windows to be able to adjust\nthe sizing of the memory of the host, though.\n\n>> +#elif defined(WIN32)\n>> + hp_size = GetLargePageMinimum();\n>> +#endif\n>> +\n>> +#if defined(MAP_HUGETLB) || defined(WIN32)\n>> + hp_required = (size_b / hp_size) + 1;\n>> As of [1], there is the following description:\n>> \"If the processor does not support large pages, the return value is\n>> zero.\"\n>> So there is a problem here.\n> \n> I've fixed this in v4.\n\nAt the end it would be nice to not finish with two GUCs. Both depend\non the reordering of the actions done by the postmaster, so I'd be\ncurious to hear the thoughts of others on this particular point.\n--\nMichael", "msg_date": "Thu, 2 Sep 2021 16:50:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 9/2/21, 12:54 AM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> Thanks. Anyway, we don't really need huge_pages_required on Windows,\r\n> do we? The following docs of Windows tell what to do when using large\r\n> pages:\r\n> https://docs.microsoft.com/en-us/windows/win32/memory/large-page-support\r\n>\r\n> The backend code does that as in PGSharedMemoryCreate(), now that I\r\n> look at it. 
Both depend\r\n> on the reordering of the actions done by the postmaster, so I'd be\r\n> curious to hear the thoughts of others on this particular point.\r\n\r\nOf course. It'd be great to hear others' thoughts on this stuff.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 2 Sep 2021 16:46:56 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Thu, Sep 02, 2021 at 04:46:56PM +0000, Bossart, Nathan wrote:\n> Yeah, huge_pages_required might not serve much purpose for Windows.\n> We could always set it to -1 for Windows if it seems like it'll do\n> more harm than good.\n\nI'd be fine with this setup on environments where there is no need for\nit.\n\n>> At the end it would be nice to not finish with two GUCs. Both depend\n>> on the reordering of the actions done by the postmaster, so I'd be\n>> curious to hear the thoughts of others on this particular point.\n> \n> Of course. It'd be great to hear others' thoughts on this stuff.\n\nJust to be clear here, the ordering of HEAD is that for the\npostmaster:\n- Load configuration.\n- Handle -C config_param.\n- checkDataDir(), to check permissions of the data dir, etc.\n- checkControlFile(), to see if the control file exists.\n- Switch to data directory as work dir.\n- Lock file creation.\n- Initial read of the control file (where the GUC data_checksums is\nset).\n- Register apply launcher\n- shared_preload_libraries\n\nWith 0002, we have that:\n- Load configuration.\n- checkDataDir(), to check permissions of the data dir, etc.\n- checkControlFile(), to see if the control file exists.\n- Switch to data directory as work dir.\n- Register apply launcher\n- shared_preload_libraries\n- Calculate the shmem GUCs (new step)\n- Handle -C config_param.\n- Lock file creation.\n- Initial read of the control file (where the GUC data_checksums is\nset).\n\nOne thing that would be incorrect upon more review is that we'd 
still\nhave data_checksums wrong with -C, meaning that the initial read of\nthe control file should be moved further up, and getting the control\nfile checks done before registering workers would be better. Keeping\nthe lock file at the end would be fine AFAIK, but should we worry\nabout the interactions with _PG_init() here?\n\n0001, that refactors the calculation of the shmem size into a\ndifferent routine, is fine as-is, so I'd like to move on and apply\nit.\n--\nMichael", "msg_date": "Fri, 3 Sep 2021 10:45:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "At Thu, 2 Sep 2021 16:46:56 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 9/2/21, 12:54 AM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n> > Thanks. Anyway, we don't really need huge_pages_required on Windows,\n> > do we? The following docs of Windows tell what do to when using large\n> > pages:\n> > https://docs.microsoft.com/en-us/windows/win32/memory/large-page-support\n> >\n> > The backend code does that as in PGSharedMemoryCreate(), now that I\n> > look at it. And there is no way to change the minimum large page size\n> > there as far as I can see because that's decided by the processor, no?\n> > There is a case for shared_memory_size on Windows to be able to adjust\n> > the sizing of the memory of the host, though.\n> \n> Yeah, huge_pages_required might not serve much purpose for Windows.\n> We could always set it to -1 for Windows if it seems like it'll do\n> more harm than good.\n\nI agreed to this.\n\n> > At the end it would be nice to not finish with two GUCs. Both depend\n> > on the reordering of the actions done by the postmaster, so I'd be\n> > curious to hear the thoughts of others on this particular point.\n> \n> Of course. 
It'd be great to hear others' thoughts on this stuff.\n\nHonestly, I would be satisfied if the following error message\ncontained the required number of huge pages.\n\nFATAL: could not map anonymous shared memory: Cannot allocate memory\nHINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 148897792 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.\n\nOr emit a different message if huge_pages=on.\n\nFATAL: could not map anonymous shared memory from huge pages\nHINT: This usually means that PostgreSQL's request for huge pages is more than what is available. The required number of 2048kB huge pages for the requested memory size (currently 148897792 bytes) is 71.\n\n\nReturning to this feature, even if I am informed of that via a GUC, I won't\nadd memory by looking at shared_memory_size. Anyway, since shared_buffers\noccupies almost all of the shared memory allocated to postgres, we\nare not supposed to need such a precise adjustment of the required\nsize of shared memory. On the other hand, the available number of huge\npages is configurable and we need to set it as required. On the other\nhand, it might seem to me a bit strange that there's only\nhuge_pages_required and not shared_memory_size from the viewpoint of\ncomprehensiveness or completeness. So my feeling at this point is \"I\nneed only huge_pages_required but might want shared_memory_size just\nfor completeness\".\n\n\nBy the way, I noticed that postgres -C huge_page_size shows 0, which I\nthink should have the number used for the calculation if we show\nhuge_pages_required.\n\nI noticed that postgres -C shared_memory_size showed 137 (= 144703488)\nwhereas the error message above showed 148897792 bytes (142MB). So it\nseems that something is forgotten while calculating\nshared_memory_size. 
As a consequence, launching postgres with\nhuge_pages_required (69 pages) set as vm.nr_hugepages ended up in the\n\"could not map anonymous shared memory\" error.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 03 Sep 2021 14:12:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 9/2/21, 6:46 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> On Thu, Sep 02, 2021 at 04:46:56PM +0000, Bossart, Nathan wrote:\r\n>> Yeah, huge_pages_required might not serve much purpose for Windows.\r\n>> We could always set it to -1 for Windows if it seems like it'll do\r\n>> more harm than good.\r\n>\r\n> I'd be fine with this setup on environments where there is no need for\r\n> it.\r\n\r\nI did this in v5.\r\n\r\n> One thing that would be incorrect upon more review is that we'd still\r\n> have data_checksums wrong with -C, meaning that the initial read of\r\n> the control file should be moved further up, and getting the control\r\n> file checks done before registering workers would be better. Keeping\r\n> the lock file at the end would be fine AFAIK, but should we worry\r\n> about the interactions with _PG_init() here?\r\n\r\nI think we can avoid so much reordering by moving the -C handling\r\ninstead. That should also fix things like data_checksums. I split\r\nthe reordering part out into its own patch in v5.\r\n\r\nYou bring up an interesting point about _PG_init(). Presently, you\r\ncan safely assume that the data directory is locked during _PG_init(),\r\nso there's no need to worry about breaking something on a running\r\nserver. I don't know how important this is. 
Most _PG_init()\r\nfunctions that I've seen will define some GUCs, request some shared\r\nmemory, register some background workers, and/or install some hooks.\r\nThose all seem like safe things to do, but I wouldn't be at all\r\nsurprised to hear examples to the contrary. In any case, it looks\r\nlike the current ordering of these two steps has been there for 15+\r\nyears.\r\n\r\nIf this is a concern, one option would be to disallow running \"-C\r\nshared_memory_size\" on running servers. That would have to extend to\r\nGUCs like data_checksums and huge_pages_required, too.\r\n\r\n> 0001, that refactors the calculation of the shmem size into a\r\n> different routine, is fine as-is, so I'd like to move on and apply\r\n> it.\r\n\r\nSounds good to me.\r\n\r\nNathan", "msg_date": "Fri, 3 Sep 2021 17:36:43 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 9/2/21, 10:12 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> By the way I noticed that postgres -C huge_page_size shows 0, which I\r\n> think should have the number used for the calculation if we show\r\n> huge_page_required.\r\n\r\nI would agree with this if huge_page_size was a runtime-computed GUC,\r\nbut since it's intended for users to explicitly request the huge page\r\nsize, it might be slightly confusing. Perhaps another option would be\r\nto create a new GUC for this. Or maybe it's enough to note that the\r\nvalue will be changed from 0 at runtime if huge pages are supported.\r\nIn any case, it might be best to handle this separately.\r\n\r\n> I noticed that postgres -C shared_memory_size showed 137 (= 144703488)\r\n> whereas the error message above showed 148897792 bytes (142MB). So it\r\n> seems that something is forgotten while calculating\r\n> shared_memory_size. 
As the consequence, launching postgres setting\r\n> huge_pages_required (69 pages) as vm.nr_hugepages ended up in the\r\n> \"could not map anonymous shared memory\" error.\r\n\r\nHm. I'm not seeing this with the v5 patch set, so maybe I\r\ninadvertently fixed it already. Can you check this again with v5?\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 3 Sep 2021 17:46:05 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "Hi,\n\nOn 2021-09-01 15:53:52 +0900, Michael Paquier wrote:\n> Hmm. I am not sure about the addition of huge_pages_required, knowing\n> that we would have shared_memory_size. I'd rather let the calculation\n> part to the user with a scan of /proc/meminfo.\n\n-1. We can easily do better, what do we gain by making the user do this stuff?\nEspecially because the right value also depends on huge_page_size.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 3 Sep 2021 13:20:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Fri, Sep 03, 2021 at 05:36:43PM +0000, Bossart, Nathan wrote:\n> On 9/2/21, 6:46 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n> You bring up an interesting point about _PG_init(). Presently, you\n> can safely assume that the data directory is locked during _PG_init(),\n> so there's no need to worry about breaking something on a running\n> server. I don't know how important this is. Most _PG_init()\n> functions that I've seen will define some GUCs, request some shared\n> memory, register some background workers, and/or install some hooks.\n> Those all seem like safe things to do, but I wouldn't be at all\n> surprised to hear examples to the contrary. In any case, it looks\n> like the current ordering of these two steps has been there for 15+\n> years.\n\nYeah. 
What you are describing here matches what I have seen in the\npast and what we do in core for _PG_init(). Now extensions developers\ncould do more fancy things, like dropping things on-disk to check the\nload state, for whatever reasons. And things could break in such\ncases. Perhaps people should not do that, but it is no fun either to\nbreak code that has been working for years even if that's just a major\nupgrade.\n\n+ * We skip this step if we are just going to print a GUC's value and exit\n+ * a few steps down.\n */\n- CreateDataDirLockFile(true);\n+ if (output_config_variable == NULL)\n+ CreateDataDirLockFile(true);\n\nAnyway, 0002 gives me shivers.\n\n> If this is a concern, one option would be to disallow running \"-C\n> shared_memory_size\" on running servers. That would have to extend to\n> GUCs like data_checksums and huge_pages_required, too.\n\nJust noting this bit from 0003 that would break without 0002:\n-$ <userinput>pmap 4170 | awk '/rw-s/ &amp;&amp; /zero/ {print $2}'</userinput>\n-6490428K\n+$ <userinput>postgres -D $PGDATA -C shared_memory_size</userinput>\n+6339\n\n>> 0001, that refactors the calculation of the shmem size into a\n>> different routine, is fine as-is, so I'd like to move on and apply\n>> it.\n> \n> Sounds good to me.\n\nApplied this one.\n\nWithout concluding on 0002 yet, another thing that we could do is to\njust add the GUCs. These sound rather useful on their own (mixed\nfeelings about huge_pages_required but I can see why it is useful to\navoid the setup steps and the backend already grabs this information),\nparticularly when it comes to cloned setups that share a lot of\nproperties.\n--\nMichael", "msg_date": "Mon, 6 Sep 2021 11:27:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" 
}, { "msg_contents": "On 9/5/21, 7:28 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> On Fri, Sep 03, 2021 at 05:36:43PM +0000, Bossart, Nathan wrote:\r\n>> On 9/2/21, 6:46 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n>>> 0001, that refactors the calculation of the shmem size into a\r\n>>> different routine, is fine as-is, so I'd like to move on and apply\r\n>>> it.\r\n>> \r\n>> Sounds good to me.\r\n>\r\n> Applied this one.\r\n\r\nThanks!\r\n\r\n> Without concluding on 0002 yet, another thing that we could do is to\r\n> just add the GUCs. These sound rather useful on their own (mixed\r\n> feelings about huge_pages_required but I can see why it is useful to\r\n> avoid the setup steps and the backend already grabs this information),\r\n> particularly when it comes to cloned setups that share a lot of\r\n> properties.\r\n\r\nI think this is a good starting point, but I'd like to follow up on\r\nmaking them visible without completely starting the server. The main\r\npurpose for adding these GUCs is to be able to set up huge pages\r\nbefore server startup. Disallowing \"-C huge_pages_required\" on a\r\nrunning server to enable this use-case seems like a modest tradeoff.\r\n\r\nAnyway, I'll restructure the remaining patches to add the GUCs first\r\nand then address the 0002 business separately.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 6 Sep 2021 04:21:51 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 9/5/21, 9:26 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> On 9/5/21, 7:28 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n>> Without concluding on 0002 yet, another thing that we could do is to\r\n>> just add the GUCs. 
These sound rather useful on their own (mixed\r\n>> feelings about huge_pages_required but I can see why it is useful to\r\n>> avoid the setup steps and the backend already grabs this information),\r\n>> particularly when it comes to cloned setups that share a lot of\r\n>> properties.\r\n>\r\n> I think this is a good starting point, but I'd like to follow up on\r\n> making them visible without completely starting the server. The main\r\n> purpose for adding these GUCs is to be able to set up huge pages\r\n> before server startup. Disallowing \"-C huge_pages_required\" on a\r\n> running server to enable this use-case seems like a modest tradeoff.\r\n>\r\n> Anyway, I'll restructure the remaining patches to add the GUCs first\r\n> and then address the 0002 business separately.\r\n\r\nAttached is a new patch set. The first two patches just add the new\r\nGUCs, and the third is an attempt at providing useful values for those\r\nGUCs via -C.\r\n\r\nNathan", "msg_date": "Mon, 6 Sep 2021 23:55:42 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Mon, Sep 06, 2021 at 11:55:42PM +0000, Bossart, Nathan wrote:\n> Attached is a new patch set. The first two patches just add the new\n> GUCs, and the third is an attempt at providing useful values for those\n> GUCs via -C.\n\n+ sprintf(buf, \"%lu MB\", size_mb);\n+ SetConfigOption(\"shared_memory_size\", buf, PGC_INTERNAL, PGC_S_OVERRIDE);\nOne small-ish comment about 0002: there is no need to add the unit\ninto the buffer set as GUC_UNIT_MB would take care of that. The patch\nlooks fine.\n\n+#ifndef WIN32\n+#include <sys/mman.h>\n+#endif\nSo, this is needed in ipci.c to check for MAP_HUGETLB. I am not much\na fan of moving around platform-specific checks when these have\nremained local to each shmem implementation. 
Could it be cleaner to\nadd GetHugePageSize() to win32_shmem.c and make it always declared in\nthe SysV implementation?\n--\nMichael", "msg_date": "Tue, 7 Sep 2021 13:00:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "At Fri, 3 Sep 2021 17:46:05 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 9/2/21, 10:12 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\n> > By the way I noticed that postgres -C huge_page_size shows 0, which I\n> > think should have the number used for the calculation if we show\n> > huge_page_required.\n> \n> I would agree with this if huge_page_size was a runtime-computed GUC,\n> but since it's intended for users to explicitly request the huge page\n> size, it might be slightly confusing. Perhaps another option would be\n> to create a new GUC for this. Or maybe it's enough to note that the\n> value will be changed from 0 at runtime if huge pages are supported.\n> In any case, it might be best to handle this separately.\n\n(Sorry, I was confused, but) yeah, agreed.\n\n> > I noticed that postgres -C shared_memory_size showed 137 (= 144703488)\n> > whereas the error message above showed 148897792 bytes (142MB). So it\n> > seems that something is forgotten while calculating\n> > shared_memory_size. As the consequence, launching postgres setting\n> > huge_pages_required (69 pages) as vm.nr_hugepages ended up in the\n> > \"could not map anonymous shared memory\" error.\n> \n> Hm. I'm not seeing this with the v5 patch set, so maybe I\n> inadvertently fixed it already. Can you check this again with v5?\n\nThanks! 
I confirmed that the numbers match with v5.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 07 Sep 2021 15:24:21 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 9/6/21, 9:00 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> + sprintf(buf, \"%lu MB\", size_mb);\r\n> + SetConfigOption(\"shared_memory_size\", buf, PGC_INTERNAL, PGC_S_OVERRIDE);\r\n> One small-ish comment about 0002: there is no need to add the unit\r\n> into the buffer set as GUC_UNIT_MB would take care of that. The patch\r\n> looks fine.\r\n\r\nI fixed this in v7.\r\n\r\n> +#ifndef WIN32\r\n> +#include <sys/mman.h>\r\n> +#endif\r\n> So, this is needed in ipci.c to check for MAP_HUGETLB. I am not much\r\n> a fan of moving around platform-specific checks when these have\r\n> remained local to each shmem implementation. Could it be cleaner to\r\n> add GetHugePageSize() to win32_shmem.c and make it always declared in\r\n> the SysV implementation?\r\n\r\nI don't know if it's really all that much cleaner, but I did it this\r\nway in v7. IMO it's a little weird that GetHugePageSize() doesn't\r\nreturn the value from GetLargePageMinimum(), but that's what we'd need\r\nto do to avoid setting huge_pages_required for Windows without any\r\nplatform-specific checks.\r\n\r\nNathan", "msg_date": "Tue, 7 Sep 2021 17:08:43 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" 
}, { "msg_contents": "On 9/6/21, 11:24 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> At Fri, 3 Sep 2021 17:46:05 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in\r\n>> On 9/2/21, 10:12 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n>> > I noticed that postgres -C shared_memory_size showed 137 (= 144703488)\r\n>> > whereas the error message above showed 148897792 bytes (142MB). So it\r\n>> > seems that something is forgotten while calculating\r\n>> > shared_memory_size. As the consequence, launching postgres setting\r\n>> > huge_pages_required (69 pages) as vm.nr_hugepages ended up in the\r\n>> > \"could not map anonymous shared memory\" error.\r\n>>\r\n>> Hm. I'm not seeing this with the v5 patch set, so maybe I\r\n>> inadvertently fixed it already. Can you check this again with v5?\r\n>\r\n> Thanks! I confirmed that the numbers match with v5.\r\n\r\nThanks for confirming.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 7 Sep 2021 17:09:08 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Tue, Sep 07, 2021 at 05:08:43PM +0000, Bossart, Nathan wrote:\n> On 9/6/21, 9:00 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n>> + sprintf(buf, \"%lu MB\", size_mb);\n>> + SetConfigOption(\"shared_memory_size\", buf, PGC_INTERNAL, PGC_S_OVERRIDE);\n>> One small-ish comment about 0002: there is no need to add the unit\n>> into the buffer set as GUC_UNIT_MB would take care of that. The patch\n>> looks fine.\n> \n> I fixed this in v7.\n\nSwitched the variable name to shared_memory_size_mb for easier\ngrepping, moved it to a more correct location with the other read-only\nGUCS, and applied 0002. Well, 0001 here.\n\n>> +#ifndef WIN32\n>> +#include <sys/mman.h>\n>> +#endif\n>> So, this is needed in ipci.c to check for MAP_HUGETLB. 
I am not much\n>> a fan of moving around platform-specific checks when these have\n>> remained local to each shmem implementation. Could it be cleaner to\n>> add GetHugePageSize() to win32_shmem.c and make it always declared in\n>> the SysV implementation?\n> \n> I don't know if it's really all that much cleaner, but I did it this\n> way in v7. IMO it's a little weird that GetHugePageSize() doesn't\n> return the value from GetLargePageMinimum(), but that's what we'd need\n> to do to avoid setting huge_pages_required for Windows without any\n> platform-specific checks.\n\nThanks. Keeping MAP_HUGETLB within the SysV portions is an\nimprovement IMO. That's subject to one's taste, perhaps.\n\nAfter sleeping on it, I'd be fine to live with the logic based on the\nnew GUC flag called GUC_RUNTIME_COMPUTED to control if a parameter can\nbe looked at either an earlier or a later stage of the startup\nsequences, with the later stage meaning that such parameters cannot be\nchecked if a server is running. This case was originally broken\nanyway, so it does not make it worse, just better.\n\n+ This can be used on a running server for most parameters. However,\n+ the server must be shut down for some runtime-computed parameters\n+ (e.g., <xref linkend=\"guc-huge-pages-required\"/>).\nPerhaps we should add a couple of extra parameters here, like\nshared_memory_size, and perhaps wal_segment_size? The list does not\nhave to be complete, just meaningful enough.\n--\nMichael", "msg_date": "Wed, 8 Sep 2021 12:50:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" 
}, { "msg_contents": "\n\nOn 2021/09/08 12:50, Michael Paquier wrote:\n> On Tue, Sep 07, 2021 at 05:08:43PM +0000, Bossart, Nathan wrote:\n>> On 9/6/21, 9:00 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n>>> + sprintf(buf, \"%lu MB\", size_mb);\n>>> + SetConfigOption(\"shared_memory_size\", buf, PGC_INTERNAL, PGC_S_OVERRIDE);\n>>> One small-ish comment about 0002: there is no need to add the unit\n>>> into the buffer set as GUC_UNIT_MB would take care of that. The patch\n>>> looks fine.\n>>\n>> I fixed this in v7.\n> \n> Switched the variable name to shared_memory_size_mb for easier\n> grepping, moved it to a more correct location with the other read-only\n> GUCS, and applied 0002. Well, 0001 here.\n\nThanks for adding useful feature!\n\n+\t\t{\"shared_memory_size\", PGC_INTERNAL, RESOURCES_MEM,\n\nWhen reading the applied code, I found the category of shared_memory_size\nis RESOURCES_MEM. Why? This seems right because the parameter is related\nto memory resource. But since its context is PGC_INTERNAL, PRESET_OPTIONS\nis more proper as the category? BTW, the category of any other\nPGC_INTERNAL parameters seems to be PRESET_OPTIONS.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 8 Sep 2021 16:10:41 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 9/7/21, 8:50 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> Switched the variable name to shared_memory_size_mb for easier\r\n> grepping, moved it to a more correct location with the other read-only\r\n> GUCS, and applied 0002. Well, 0001 here.\r\n\r\nThanks! And thanks for cleaning up the small mistake in aa37a43.\r\n\r\n> + This can be used on a running server for most parameters. 
However,\r\n> + the server must be shut down for some runtime-computed parameters\r\n> + (e.g., <xref linkend=\"guc-huge-pages-required\"/>).\r\n> Perhaps we should add a couple of extra parameters here, like\r\n> shared_memory_size, and perhaps wal_segment_size? The list does not\r\n> have to be complete, just meaningful enough.\r\n\r\nGood idea.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 8 Sep 2021 17:48:16 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 9/8/21, 12:11 AM, \"Fujii Masao\" <masao.fujii@oss.nttdata.com> wrote:\r\n> Thanks for adding useful feature!\r\n\r\n:)\r\n\r\n> + {\"shared_memory_size\", PGC_INTERNAL, RESOURCES_MEM,\r\n>\r\n> When reading the applied code, I found the category of shared_memory_size\r\n> is RESOURCES_MEM. Why? This seems right because the parameter is related\r\n> to memory resource. But since its context is PGC_INTERNAL, PRESET_OPTIONS\r\n> is more proper as the category? BTW, the category of any other\r\n> PGC_INTERNAL parameters seems to be PRESET_OPTIONS.\r\n\r\nYeah, I did wonder about this. We're even listing it in the \"Preset\r\nOptions\" section in the docs. I updated this in the new patch set,\r\nwhich is attached.\r\n\r\nNathan", "msg_date": "Wed, 8 Sep 2021 17:52:33 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Wed, Sep 08, 2021 at 04:10:41PM +0900, Fujii Masao wrote:\n> +\t\t{\"shared_memory_size\", PGC_INTERNAL, RESOURCES_MEM,\n> \n> When reading the applied code, I found the category of shared_memory_size\n> is RESOURCES_MEM. Why? This seems right because the parameter is related\n> to memory resource. But since its context is PGC_INTERNAL, PRESET_OPTIONS\n> is more proper as the category? 
BTW, the category of any other\n> PGC_INTERNAL parameters seems to be PRESET_OPTIONS.\n\nYes, that's an oversight from me. I was looking at that yesterday,\nnoticed some exceptions in the GUC list with things not allowed in\nfiles and just concluded that RESOURCES_MEM should be fine, but the\ndocs tell a different story. Thanks, fixed.\n--\nMichael", "msg_date": "Thu, 9 Sep 2021 10:09:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Wed, Sep 08, 2021 at 05:52:33PM +0000, Bossart, Nathan wrote:\n> Yeah, I did wonder about this. We're even listing it in the \"Preset\n> Options\" section in the docs. I updated this in the new patch set,\n> which is attached.\n\nI broke that again, so rebased as v9 attached.\n\nFWIW, I don't have an environment at hand these days to test properly\n0001, so this will have to wait a bit. I really like the approach\ntaken by 0002, and it is independent of the other patch while\nextending support for postgres -c to provide the correct runtime\nvalues. So let's wrap this part first. No need to send a reorganized\npatch set.\n--\nMichael", "msg_date": "Thu, 9 Sep 2021 13:19:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Thu, Sep 09, 2021 at 01:19:14PM +0900, Michael Paquier wrote:\n> I broke that again, so rebased as v9 attached.\n\nWell.\n--\nMichael", "msg_date": "Thu, 9 Sep 2021 13:23:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 9/8/21, 9:19 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> FWIW, I don't have an environment at hand these days to test properly\r\n> 0001, so this will have to wait a bit. 
I really like the approach\r\n> taken by 0002, and it is independent of the other patch while\r\n> extending support for postgres -c to provide the correct runtime\r\n> values. So let's wrap this part first. No need to send a reorganized\r\n> patch set.\r\n\r\nSounds good.\r\n\r\nFor 0001, the biggest thing on my mind at the moment is the name of\r\nthe GUC. \"huge_pages_required\" feels kind of ambiguous. From the\r\nname alone, it could mean either \"the number of huge pages required\"\r\nor \"huge pages are required for the server to run.\" Also, the number\r\nof huge pages required is not actually required if you don't want to\r\nrun the server with huge pages. I think it might be clearer to\r\nsomehow indicate that the value is essentially the size of the main\r\nshared memory area in terms of the huge page size, but I'm not sure\r\nhow to do that concisely. Perhaps it is enough to just make sure the\r\ndescription of \"huge_pages_required\" is detailed enough.\r\n\r\nFor 0002, I have two small concerns. My first concern is that it\r\nmight be confusing to customers when the runtime GUCs cannot be\r\nreturned for a running server. We have the note in the docs, but if\r\nyou're encountering it on the command line, it's not totally clear\r\nwhat the problem is.\r\n\r\n $ postgres -D . -C log_min_messages\r\n warning\r\n $ postgres -D . -C shared_memory_size\r\n 2021-09-09 18:51:21.617 UTC [7924] FATAL: lock file \"postmaster.pid\" already exists\r\n 2021-09-09 18:51:21.617 UTC [7924] HINT: Is another postmaster (PID 7912) running in data directory \"/local/home/bossartn/pgdata\"?\r\n\r\nMy other concern is that by default, viewing the runtime-computed GUCs\r\nwill also emit a LOG.\r\n\r\n $ postgres -D . 
-C shared_memory_size\r\n 142\r\n 2021-09-09 18:53:25.194 UTC [8006] LOG: database system is shut down\r\n\r\nRunning these commands with log_min_messages=debug5 emits way more\r\ninformation for the runtime-computed GUCs than for others, but IMO\r\nthat is alright. However, perhaps we should adjust the logging in\r\n0002 to improve the default user experience. I attached an attempt at\r\nthat.\r\n\r\nWith the attached patch, trying to view a runtime-computed GUC on a\r\nrunning server will look like this:\r\n\r\n $ postgres -D . -C shared_memory_size\r\n 2021-09-09 21:24:21.552 UTC [6224] FATAL: lock file \"postmaster.pid\" already exists\r\n 2021-09-09 21:24:21.552 UTC [6224] DETAIL: Runtime-computed GUC \"shared_memory_size\" cannot be viewed on a running server.\r\n 2021-09-09 21:24:21.552 UTC [6224] HINT: Is another postmaster (PID 3628) running in data directory \"/local/home/bossartn/pgdata\"?\r\n\r\nAnd viewing a runtime-computed GUC on a server that is shut down will\r\nlook like this:\r\n\r\n $ postgres -D . -C shared_memory_size\r\n 142\r\n\r\nI'm not tremendously happy with the patch, but I hope that it at least\r\nhelps with the discussion.\r\n\r\nNathan", "msg_date": "Thu, 9 Sep 2021 21:53:22 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Thu, Sep 09, 2021 at 09:53:22PM +0000, Bossart, Nathan wrote:\n> For 0002, I have two small concerns. My first concern is that it\n> might be confusing to customers when the runtime GUCs cannot be\n> returned for a running server. We have the note in the docs, but if\n> you're encountering it on the command line, it's not totally clear\n> what the problem is.\n\nYeah, that's true. There are more unlikely-to-happen errors that\ncould be triggered and prevent the command to work. 
I have never\ntried using error_context_stack in a code path as early as that, to be\nhonest.\n\n> Running these commands with log_min_messages=debug5 emits way more\n> information for the runtime-computed GUCs than for others, but IMO\n> that is alright. However, perhaps we should adjust the logging in\n> 0002 to improve the default user experience. I attached an attempt at\n> that.\n\nRegistered bgworkers would generate a DEBUG entry, for one.\n\n> I'm not tremendously happy with the patch, but I hope that it at least\n> helps with the discussion.\n\nAs far as the behavior is documented, I'd be fine with the approach to\nkeep the code in its simplest shape. I agree that the message is\nconfusing, still it is not wrong either as we try to query a run-time\nparameter, but we need the lock.\n--\nMichael", "msg_date": "Fri, 10 Sep 2021 11:03:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 9/9/21, 7:03 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> As far as the behavior is documented, I'd be fine with the approach to\r\n> keep the code in its simplest shape. I agree that the message is\r\n> confusing, still it is not wrong either as we try to query a run-time\r\n> parameter, but we need the lock.\r\n\r\nThat seems alright to me.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 10 Sep 2021 02:26:10 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Thu, Sep 9, 2021 at 5:53 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> For 0001, the biggest thing on my mind at the moment is the name of\n> the GUC. \"huge_pages_required\" feels kind of ambiguous. 
From the\n> name alone, it could mean either \"the number of huge pages required\"\n> or \"huge pages are required for the server to run.\" Also, the number\n> of huge pages required is not actually required if you don't want to\n> run the server with huge pages.\n\n+1 to all of that.\n\n> I think it might be clearer to\n> somehow indicate that the value is essentially the size of the main\n> shared memory area in terms of the huge page size, but I'm not sure\n> how to do that concisely. Perhaps it is enough to just make sure the\n> description of \"huge_pages_required\" is detailed enough.\n\nshared_memory_size_in_huge_pages? It's kinda long, but a long name\nthat you can understand without reading the docs is better than a\nshort one where you can't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 Sep 2021 16:01:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 9/10/21, 1:02 PM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> On Thu, Sep 9, 2021 at 5:53 PM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> I think it might be clearer to\r\n>> somehow indicate that the value is essentially the size of the main\r\n>> shared memory area in terms of the huge page size, but I'm not sure\r\n>> how to do that concisely. Perhaps it is enough to just make sure the\r\n>> description of \"huge_pages_required\" is detailed enough.\r\n>\r\n> shared_memory_size_in_huge_pages? It's kinda long, but a long name\r\n> that you can understand without reading the docs is better than a\r\n> short one where you can't.\r\n\r\nI think that's an improvement. 
The only other idea I have at the\r\nmoment is num_huge_pages_required_for_shared_memory.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 10 Sep 2021 23:43:40 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Fri, Sep 10, 2021 at 7:43 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> > shared_memory_size_in_huge_pages? It's kinda long, but a long name\n> > that you can understand without reading the docs is better than a\n> > short one where you can't.\n>\n> I think that's an improvement. The only other idea I have at the\n> moment is num_huge_pages_required_for_shared_memory.\n\nHmm, that to me sounds like maybe only part of shared memory uses huge\npages and maybe we're just giving you the number required for that\npart. I realize that it doesn't work that way but I don't know if\neveryone will.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Sep 2021 11:49:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 9/13/21, 8:59 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> On Fri, Sep 10, 2021 at 7:43 PM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> I think that's an improvement. The only other idea I have at the\r\n>> moment is num_huge_pages_required_for_shared_memory.\r\n>\r\n> Hmm, that to me sounds like maybe only part of shared memory uses huge\r\n> pages and maybe we're just giving you the number required for that\r\n> part. I realize that it doesn't work that way but I don't know if\r\n> everyone will.\r\n\r\nYeah, I agree. What about\r\nhuge_pages_needed_for_shared_memory_size or\r\nhuge_pages_needed_for_main_shared_memory? 
I'm still not stoked about\r\nusing \"required\" or \"needed\" in the name, as it sounds like huge pages\r\nmust be allocated for the server to run, which is only true if\r\nhuge_pages=on. I haven't thought of a better word to use, though.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 13 Sep 2021 18:49:16 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Mon, Sep 13, 2021 at 2:49 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> Yeah, I agree. What about\n> huge_pages_needed_for_shared_memory_size or\n> huge_pages_needed_for_main_shared_memory? I'm still not stoked about\n> using \"required\" or \"needed\" in the name, as it sounds like huge pages\n> must be allocated for the server to run, which is only true if\n> huge_pages=on. I haven't thought of a better word to use, though.\n\nI prefer the first of those to the second. I don't find it\nparticularly better or worse than my previous suggestion of\nshared_memory_size_in_huge_pages.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Sep 2021 16:20:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "\"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> Yeah, I agree. What about\n> huge_pages_needed_for_shared_memory_size or\n> huge_pages_needed_for_main_shared_memory?\n\nSeems like \"huge_pages_needed_for_shared_memory\" would be sufficient.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Sep 2021 16:24:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Mon, Sep 13, 2021 at 04:20:00PM -0400, Robert Haas wrote:\n> On Mon, Sep 13, 2021 at 2:49 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n>> Yeah, I agree. 
What about\n>> huge_pages_needed_for_shared_memory_size or\n>> huge_pages_needed_for_main_shared_memory? I'm still not stoked about\n>> using \"required\" or \"needed\" in the name, as it sounds like huge pages\n>> must be allocated for the server to run, which is only true if\n>> huge_pages=on. I haven't thought of a better word to use, though.\n> \n> I prefer the first of those to the second. I don't find it\n> particularly better or worse than my previous suggestion of\n> shared_memory_size_in_huge_pages.\n\nI am not particularly fond of the use \"needed\" in this context, so I'd\nbe fine with your suggestion of \"shared_memory_size_in_huge_pages.\nSome other ideas I could think of:\n- shared_memory_size_as_huge_pages\n- huge_pages_for_shared_memory_size\n\nHaving shared_memory_size in the GUC name is kind of appealing though\nin terms of grepping, and one gets the relationship with\nshared_memory_size immediately. If the consensus is\nhuge_pages_needed_for_shared_memory_size, I won't fight it, but IMO\nthat's too long.\n--\nMichael", "msg_date": "Tue, 14 Sep 2021 09:08:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 9/13/21, 1:25 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> Seems like \"huge_pages_needed_for_shared_memory\" would be sufficient.\r\n\r\nI think we are down to either shared_memory_size_in_huge_pages or\r\nhuge_pages_needed_for_shared_memory. Robert's argument against\r\nhuge_pages_needed_for_shared_memory was that it might sound like only\r\npart of shared memory uses huge pages and we're only giving the number\r\nrequired for that. 
Speaking of which, isn't that technically true?\r\nFor shared_memory_size_in_huge_pages, the intent is to make it sound\r\nlike we are providing shared_memory_size in terms of the huge page\r\nsize, but I think it could also be interpreted as \"the amount of\r\nshared memory that is currently stored in huge pages.\"\r\n\r\nI personally lean towards huge_pages_needed_for_shared_memory because\r\nit feels the most clear and direct to me. I'm not vehemently opposed\r\nto shared_memory_size_in_huge_pages, though. I don't think either one\r\nis too misleading.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 14 Sep 2021 00:30:22 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "At Tue, 14 Sep 2021 00:30:22 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 9/13/21, 1:25 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n> > Seems like \"huge_pages_needed_for_shared_memory\" would be sufficient.\n> \n> I think we are down to either shared_memory_size_in_huge_pages or\n> huge_pages_needed_for_shared_memory. Robert's argument against\n> huge_pages_needed_for_shared_memory was that it might sound like only\n> part of shared memory uses huge pages and we're only giving the number\n> required for that. Speaking of which, isn't that technically true?\n> For shared_memory_size_in_huge_pages, the intent is to make it sound\n> like we are providing shared_memory_size in terms of the huge page\n> size, but I think it could also be interpreted as \"the amount of\n> shared memory that is currently stored in huge pages.\"\n> \n> I personally lean towards huge_pages_needed_for_shared_memory because\n> it feels the most clear and direct to me. I'm not vehemently opposed\n> to shared_memory_size_in_huge_pages, though. I don't think either one\n> is too misleading.\n\nI like 'in' slightly than 'for' in this context. 
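As an aside, whichever name wins, the value being named is just the size of the main shared memory area rounded up to whole huge pages. A minimal sketch of that arithmetic follows; it is hypothetical, not the committed PostgreSQL code, and assumes the huge page size has already been detected:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hypothetical sketch of the arithmetic behind the GUC under discussion:
 * round the main shared memory size up to whole huge pages.  Not the
 * committed PostgreSQL implementation; the helper name is made up.
 */
static size_t
huge_pages_for_shmem(size_t shmem_bytes, size_t huge_page_bytes)
{
	/* A huge page size of 0 means huge pages are unsupported here. */
	if (huge_page_bytes == 0)
		return 0;

	/* Ceiling division: partial pages still cost a full huge page. */
	return (shmem_bytes + huge_page_bytes - 1) / huge_page_bytes;
}
```

With the 142 MB shared memory figure quoted elsewhere in this thread and 2 MB huge pages, this sketch would report 71; the values are illustrative only.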
I stand by Michael\nthat that name looks somewhat too long especially considering that\nthat name won't be completed on shell command lines, but won't fight\nit, too. On the other hand the full-spelled name can be thought as\none can spell it out from memory easily than a name halfway shortened.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 14 Sep 2021 09:49:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 9/13/21, 5:49 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> At Tue, 14 Sep 2021 00:30:22 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in\r\n>> On 9/13/21, 1:25 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n>> > Seems like \"huge_pages_needed_for_shared_memory\" would be sufficient.\r\n>>\r\n>> I think we are down to either shared_memory_size_in_huge_pages or\r\n>> huge_pages_needed_for_shared_memory. Robert's argument against\r\n>> huge_pages_needed_for_shared_memory was that it might sound like only\r\n>> part of shared memory uses huge pages and we're only giving the number\r\n>> required for that. Speaking of which, isn't that technically true?\r\n>> For shared_memory_size_in_huge_pages, the intent is to make it sound\r\n>> like we are providing shared_memory_size in terms of the huge page\r\n>> size, but I think it could also be interpreted as \"the amount of\r\n>> shared memory that is currently stored in huge pages.\"\r\n>>\r\n>> I personally lean towards huge_pages_needed_for_shared_memory because\r\n>> it feels the most clear and direct to me. I'm not vehemently opposed\r\n>> to shared_memory_size_in_huge_pages, though. I don't think either one\r\n>> is too misleading.\r\n>\r\n> I like 'in' slightly than 'for' in this context. 
I stand by Michael\r\n> that that name looks somewhat too long especially considering that\r\n> that name won't be completed on shell command lines, but won't fight\r\n> it, too. On the other hand the full-spelled name can be thought as\r\n> one can spell it out from memory easily than a name halfway shortened.\r\n\r\nI think I see more support for shared_memory_size_in_huge_pages than\r\nfor huge_pages_needed_for_shared_memory at the moment. I'll update\r\nthe patch set in the next day or two to use\r\nshared_memory_size_in_huge_pages unless something changes in the\r\nmeantime.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 14 Sep 2021 18:00:44 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Tue, Sep 14, 2021 at 06:00:44PM +0000, Bossart, Nathan wrote:\n> I think I see more support for shared_memory_size_in_huge_pages than\n> for huge_pages_needed_for_shared_memory at the moment. I'll update\n> the patch set in the next day or two to use\n> shared_memory_size_in_huge_pages unless something changes in the\n> meantime.\n\nI have been looking at the patch to add the new GUC flag and the new\nsequence for postgres -C, and we could have some TAP tests. There\nwere two places that made sense to me: pg_checksums/t/002_actions.pl\nand recovery/t/017_shm.pl. 
I have chosen the former as it has\ncoverage across more platforms, and used data_checksums for this\npurpose, with an extra positive test to check for the case where a GUC\ncan be queried while the server is running.\n\nThere are four parameters that are updated here:\n- shared_memory_size\n- wal_segment_size\n- data_checksums\n- data_directory_mode\nThat looks sensible, after looking at the full set of GUCs.\n\nAttached is a refreshed patch (commit message is the same as v9 for\nnow), with some minor tweaks and the tests.\n\nThoughts?\n--\nMichael", "msg_date": "Wed, 15 Sep 2021 12:05:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 9/14/21, 8:06 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> Attached is a refreshed patch (commit message is the same as v9 for\r\n> now), with some minor tweaks and the tests.\r\n>\r\n> Thoughts?\r\n\r\nLGTM\r\n\r\n+ This can be used on a running server for most parameters. However,\r\n+ the server must be shut down for some runtime-computed parameters\r\n+ (e.g., <xref linkend=\"guc-shared-memory-size\"/>, and\r\n+ <xref linkend=\"guc-wal-segment-size\"/>).\r\n\r\nnitpick: I think you can remove the comma before the \"and\" in the list\r\nof examples.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 15 Sep 2021 22:31:20 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Wed, Sep 15, 2021 at 10:31:20PM +0000, Bossart, Nathan wrote:\n> + This can be used on a running server for most parameters. 
However,\n> + the server must be shut down for some runtime-computed parameters\n> + (e.g., <xref linkend=\"guc-shared-memory-size\"/>, and\n> + <xref linkend=\"guc-wal-segment-size\"/>).\n> \n> nitpick: I think you can remove the comma before the \"and\" in the list\n> of examples.\n\nFixed that, and applied. Could you rebase the last patch with the\nname suggested for the last GUC, including the docs? It looks like we\nare heading for shared_memory_size_in_huge_pages.\n--\nMichael", "msg_date": "Thu, 16 Sep 2021 11:41:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 9/15/21, 7:42 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> Fixed that, and applied. Could you rebase the last patch with the\r\n> name suggested for the last GUC, including the docs? It looks like we\r\n> are heading for shared_memory_size_in_huge_pages.\r\n\r\nThanks! And done.\r\n\r\nFor the huge pages setup documentation, I considered sending stderr to\r\n/dev/null to eliminate the LOG from the output, but I opted against\r\nthat. That would've looked like this:\r\n\r\n postgres -D $PGDATA -C shared_memory_size_in_huge_pages 2> /dev/null\r\n\r\nOtherwise, there aren't any significant changes in this version of the\r\npatch besides the name change.\r\n\r\nNathan", "msg_date": "Thu, 16 Sep 2021 17:06:11 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "\n+ * and the hugepage-related mmap flags to use into *mmap_flags. 
If huge pages\n+ * is not supported, *hugepagesize and *mmap_flags will be set to 0.\n\nnitpick: *are* not supported, as you say elsewhere.\n\n+ gettext_noop(\"Shows the number of huge pages needed for the main shared memory area.\"),\n\nMaybe this was already discussed, but \"main\" could be misleading.\n\nTo me that sounds like there might be huge pages needed for something other\nthan the \"main\" area and the reported value might turn out to be inadequate,\n(which is exactly the issue these patch are trying to address).\n\nIn particular, this sounds like it's just going to report\nshared_buffers/huge_page_size (since shared buffers is usually the \"main\" use\nof shared memory) - rather than reporting the size of the entire shared memory\nin units of huge_pages.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 16 Sep 2021 12:14:27 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 9/16/21, 10:17 AM, \"Justin Pryzby\" <pryzby@telsasoft.com> wrote:\r\n> + * and the hugepage-related mmap flags to use into *mmap_flags. If huge pages\r\n> + * is not supported, *hugepagesize and *mmap_flags will be set to 0.\r\n>\r\n> nitpick: *are* not supported, as you say elsewhere.\r\n\r\nUpdated. 
I think I originally chose \"is\" because I was referring to\r\nHuge Pages as a singular feature, but that sounds a bit awkward, and I\r\ndon't think there's any substantial difference either way.\r\n\r\n> + gettext_noop(\"Shows the number of huge pages needed for the main shared memory area.\"),\r\n>\r\n> Maybe this was already discussed, but \"main\" could be misleading.\r\n>\r\n> To me that sounds like there might be huge pages needed for something other\r\n> than the \"main\" area and the reported value might turn out to be inadequate,\r\n> (which is exactly the issue these patch are trying to address).\r\n>\r\n> In particular, this sounds like it's just going to report\r\n> shared_buffers/huge_page_size (since shared buffers is usually the \"main\" use\r\n> of shared memory) - rather than reporting the size of the entire shared memory\r\n> in units of huge_pages.\r\n\r\nI'm not sure I agree on this one. The documentation for huge_pages\r\n[0] and shared_memory_type [1] uses the same phrasing multiple times,\r\nand the new shared_memory_size GUC uses it as well [2]. I don't see\r\nanything in the documentation that suggests that shared_buffers is the\r\nonly thing in the main shared memory area, and the documentation for\r\nsetting up huge pages makes no mention of any extra memory that needs\r\nto be considered, either.\r\n\r\nFurthermore, I'm not sure what else we'd call it. 
I don't think it's\r\naccurate to suggest that it's the entirety of shared memory for the\r\nserver, as it's possible to dynamically allocate more as needed (which\r\nIIUC won't use any explicitly allocated huge pages).\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/docs/devel/runtime-config-resource.html#GUC-HUGE-PAGES\r\n[1] https://www.postgresql.org/docs/devel/runtime-config-resource.html#GUC-SHARED-MEMORY-TYPE\r\n[2] https://www.postgresql.org/docs/devel/runtime-config-preset.html#GUC-SHARED-MEMORY-SIZE", "msg_date": "Thu, 16 Sep 2021 21:26:56 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Thu, Sep 16, 2021 at 09:26:56PM +0000, Bossart, Nathan wrote:\n> I'm not sure I agree on this one. The documentation for huge_pages\n> [0] and shared_memory_type [1] uses the same phrasing multiple times,\n> and the new shared_memory_size GUC uses it as well [2]. I don't see\n> anything in the documentation that suggests that shared_buffers is the\n> only thing in the main shared memory area, and the documentation for\n> setting up huge pages makes no mention of any extra memory that needs\n> to be considered, either.\n\nLooks rather sane to me, FWIW. 
I have not tested on Linux properly\nyet (not tempted to take my bets on the buildfarm on a Friday,\neither), but I should be able to handle that at the beginning of next\nweek.\n\n+ GetHugePageSize(&hp_size, &unused);\n+ if (hp_size != 0)\nI'd rather change GetHugePageSize() to be able to accept NULL for the\nparameter values, rather than declaring such variables.\n\n+ To determine the number of huge pages needed, use the\n+ <command>postgres</command> command to see the value of\n+ <xref linkend=\"guc-shared-memory-size-in-huge-pages\"/>.\nWe may want to say as well here that the server should be offline?\nIt would not hurt to duplicate this information with\npostgres-ref.sgml.\n\n+ This setting is supported only on Linux. It is always set to\nNit: This paragraph is missing two <productname>s for Linux. The docs\nare random about that, but these are new entries.\n--\nMichael", "msg_date": "Fri, 17 Sep 2021 11:20:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 9/16/21, 7:21 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> + GetHugePageSize(&hp_size, &unused);\r\n> + if (hp_size != 0)\r\n> I'd rather change GetHugePageSize() to be able to accept NULL for the\r\n> parameter values, rather than declaring such variables.\r\n\r\nDone.\r\n\r\n> + To determine the number of huge pages needed, use the\r\n> + <command>postgres</command> command to see the value of\r\n> + <xref linkend=\"guc-shared-memory-size-in-huge-pages\"/>.\r\n> We may want to say as well here that the server should be offline?\r\n> It would not hurt to duplicate this information with\r\n> postgres-ref.sgml.\r\n\r\nDone.\r\n\r\n> + This setting is supported only on Linux. It is always set to\r\n> Nit: This paragraph is missing two <productname>s for Linux. 
The docs\r\n> are random about that, but these are new entries.\r\n\r\nDone.\r\n\r\nNathan", "msg_date": "Fri, 17 Sep 2021 16:31:44 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "Should we also initialize the shared memory GUCs in bootstrap and\r\nsingle-user mode? I think I missed this in bd17880.\r\n\r\nNathan\r\n\r\ndiff --git a/src/backend/bootstrap/bootstrap.c b/src/backend/bootstrap/bootstrap.c\r\nindex 48615c0ebc..4c4cf44871 100644\r\n--- a/src/backend/bootstrap/bootstrap.c\r\n+++ b/src/backend/bootstrap/bootstrap.c\r\n@@ -324,6 +324,12 @@ BootstrapModeMain(int argc, char *argv[], bool check_only)\r\n\r\n InitializeMaxBackends();\r\n\r\n+ /*\r\n+ * Initialize runtime-computed GUCs that depend on the amount of shared\r\n+ * memory required.\r\n+ */\r\n+ InitializeShmemGUCs();\r\n+\r\n CreateSharedMemoryAndSemaphores();\r\n\r\n /*\r\ndiff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c\r\nindex 0775abe35d..cae0b079b9 100644\r\n--- a/src/backend/tcop/postgres.c\r\n+++ b/src/backend/tcop/postgres.c\r\n@@ -3978,6 +3978,12 @@ PostgresSingleUserMain(int argc, char *argv[],\r\n /* Initialize MaxBackends */\r\n InitializeMaxBackends();\r\n\r\n+ /*\r\n+ * Initialize runtime-computed GUCs that depend on the amount of shared\r\n+ * memory required.\r\n+ */\r\n+ InitializeShmemGUCs();\r\n+\r\n CreateSharedMemoryAndSemaphores();\r\n\r\n /*\r\n\r\n", "msg_date": "Tue, 21 Sep 2021 00:08:22 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Fri, Sep 17, 2021 at 04:31:44PM +0000, Bossart, Nathan wrote:\n> Done.\n\nThanks. I have gone through the last patch this morning, did some\ntests on all the platforms I have at hand (including Linux) and\nfinished by applying it after doing some small tweaks. 
First, I have \nfinished by extending GetHugePageSize() to accept NULL for its first\nargument hugepagesize. A second thing was in the docs, where it is\nstill useful IMO to keep the reference to /proc/meminfo and\n/sys/kernel/mm/hugepages to let users know how the system impacts the\ncalculation of the new GUC.\n\nLet's see what the buildfarm thinks about it.\n--\nMichael", "msg_date": "Tue, 21 Sep 2021 10:48:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Tue, Sep 21, 2021 at 12:08:22AM +0000, Bossart, Nathan wrote:\n> Should we also initialize the shared memory GUCs in bootstrap and\n> single-user mode? I think I missed this in bd17880.\n\nWhy would we need that for the bootstrap mode?\n\nWhile looking at the patch for shared_memory_size, I have looked at\nthose code paths to note that some of the runtime GUCs would be set\nthanks to the load of the control file, but supporting this case\nsounded rather limited to me for --single when it came to shared\nmemory and huge page estimation and we don't load\nshared_preload_libraries in this context either, which could lead to\nwrong estimations. Anyway, I am not going to fight hard if people\nwould like that for the --single mode, even if it may lead to an\nunderestimation of the shmem allocated.\n--\nMichael", "msg_date": "Tue, 21 Sep 2021 11:29:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 9/20/21, 6:48 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> Thanks. I have gone through the last patch this morning, did some\r\n> tests on all the platforms I have at hand (including Linux) and\r\n> finished by applying it after doing some small tweaks. First, I have \r\n> finished by extending GetHugePageSize() to accept NULL for its first\r\n> argument hugepagesize. 
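As an aside, the pattern here is the usual NULL-tolerant out-parameter idiom. A hypothetical sketch of the idea, not the code as committed (the detected values below are made up; the real function probes the operating system):

```c
#include <stddef.h>

/*
 * Hypothetical sketch of an out-parameter function that tolerates NULL
 * for either output, in the spirit of the GetHugePageSize() change
 * described above.  Not the committed PostgreSQL code.
 */
static void
get_huge_page_size(size_t *hugepagesize, int *mmap_flags)
{
	size_t		detected_size = 2 * 1024 * 1024;	/* pretend 2MB pages */
	int			detected_flags = 0;		/* placeholder flag bits */

	/* Callers interested in only one value can pass NULL for the other. */
	if (hugepagesize)
		*hugepagesize = detected_size;
	if (mmap_flags)
		*mmap_flags = detected_flags;
}
```

A caller that only needs one value can then write get_huge_page_size(&hp_size, NULL) instead of declaring an unused variable, which is the simplification requested in review.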
A second thing was in the docs, where it is\r\n> still useful IMO to keep the reference to /proc/meminfo and\r\n> /sys/kernel/mm/hugepages to let users know how the system impacts the\r\n> calculation of the new GUC.\r\n>\r\n> Let's see what the buildfarm thinks about it.\r\n\r\nThanks!\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 21 Sep 2021 15:46:41 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On 9/20/21, 7:29 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> On Tue, Sep 21, 2021 at 12:08:22AM +0000, Bossart, Nathan wrote:\r\n>> Should we also initialize the shared memory GUCs in bootstrap and\r\n>> single-user mode? I think I missed this in bd17880.\r\n>\r\n> Why would we need that for the bootstrap mode?\r\n>\r\n> While looking at the patch for shared_memory_size, I have looked at\r\n> those code paths to note that some of the runtime GUCs would be set\r\n> thanks to the load of the control file, but supporting this case\r\n> sounded rather limited to me for --single when it came to shared\r\n> memory and huge page estimation and we don't load\r\n> shared_preload_libraries in this context either, which could lead to\r\n> wrong estimations. Anyway, I am not going to fight hard if people\r\n> would like that for the --single mode, even if it may lead to an\r\n> underestimation of the shmem allocated.\r\n\r\nI was looking at this from the standpoint of keeping the startup steps\r\nconsistent between the modes. Looking again, I can't think of\r\na strong reason to add it to bootstrap mode. I think the case for\r\nadding it to single-user mode is a bit stronger, as commands like\r\n\"SHOW shared_memory_size;\" currently return 0. 
I lean in favor of\r\nadding it for single-user mode, but it's probably fine either way.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 21 Sep 2021 16:06:38 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Tue, Sep 21, 2021 at 04:06:38PM +0000, Bossart, Nathan wrote:\n> I was looking at this from the standpoint of keeping the startup steps\n> consistent between the modes. Looking again, I can't think of\n> a strong reason to add it to bootstrap mode. I think the case for\n> adding it to single-user mode is a bit stronger, as commands like\n> \"SHOW shared_memory_size;\" currently return 0. I lean in favor of\n> adding it for single-user mode, but it's probably fine either way.\n\nI am not sure either as that's a tradeoff between an underestimation\nand no information. The argument of consistency indeed matters.\nLet's see if others have any opinion to share on this point.\n--\nMichael", "msg_date": "Wed, 22 Sep 2021 12:53:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Tue, Sep 21, 2021 at 11:53 PM Michael Paquier <michael@paquier.xyz> wrote:\n> I am not sure either as that's a tradeoff between an underestimation\n> and no information. The argument of consistency indeed matters.\n> Let's see if others have any opinion to share on this point.\n\nWell, if we think the information won't be safe to use, it's better to\nreport nothing than a wrong value, I think.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 23 Sep 2021 12:57:41 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" 
}, { "msg_contents": "On Thu, Sep 9, 2021 at 11:53 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 9/8/21, 9:19 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n> > FWIW, I don't have an environment at hand these days to test properly\n> > 0001, so this will have to wait a bit. I really like the approach\n> > taken by 0002, and it is independent of the other patch while\n> > extending support for postgres -c to provide the correct runtime\n> > values. So let's wrap this part first. No need to send a reorganized\n> > patch set.\n>\n> Sounds good.\n>\n> For 0001, the biggest thing on my mind at the moment is the name of\n> the GUC. \"huge_pages_required\" feels kind of ambiguous. From the\n> name alone, it could mean either \"the number of huge pages required\"\n> or \"huge pages are required for the server to run.\" Also, the number\n> of huge pages required is not actually required if you don't want to\n> run the server with huge pages. I think it might be clearer to\n> somehow indicate that the value is essentially the size of the main\n> shared memory area in terms of the huge page size, but I'm not sure\n> how to do that concisely. Perhaps it is enough to just make sure the\n> description of \"huge_pages_required\" is detailed enough.\n>\n> For 0002, I have two small concerns. My first concern is that it\n> might be confusing to customers when the runtime GUCs cannot be\n> returned for a running server. We have the note in the docs, but if\n> you're encountering it on the command line, it's not totally clear\n> what the problem is.\n>\n> $ postgres -D . -C log_min_messages\n> warning\n> $ postgres -D . 
-C shared_memory_size\n> 2021-09-09 18:51:21.617 UTC [7924] FATAL: lock file \"postmaster.pid\" already exists\n> 2021-09-09 18:51:21.617 UTC [7924] HINT: Is another postmaster (PID 7912) running in data directory \"/local/home/bossartn/pgdata\"?\n>\n> My other concern is that by default, viewing the runtime-computed GUCs\n> will also emit a LOG.\n>\n> $ postgres -D . -C shared_memory_size\n> 142\n> 2021-09-09 18:53:25.194 UTC [8006] LOG: database system is shut down\n>\n> Running these commands with log_min_messages=debug5 emits way more\n> information for the runtime-computed GUCs than for others, but IMO\n> that is alright. However, perhaps we should adjust the logging in\n> 0002 to improve the default user experience. I attached an attempt at\n> that.\n>\n> With the attached patch, trying to view a runtime-computed GUC on a\n> running server will look like this:\n>\n> $ postgres -D . -C shared_memory_size\n> 2021-09-09 21:24:21.552 UTC [6224] FATAL: lock file \"postmaster.pid\" already exists\n> 2021-09-09 21:24:21.552 UTC [6224] DETAIL: Runtime-computed GUC \"shared_memory_size\" cannot be viewed on a running server.\n> 2021-09-09 21:24:21.552 UTC [6224] HINT: Is another postmaster (PID 3628) running in data directory \"/local/home/bossartn/pgdata\"?\n>\n> And viewing a runtime-computed GUC on a server that is shut down will\n> look like this:\n>\n> $ postgres -D . -C shared_memory_size\n> 142\n\nNothing fixing this ended up actually getting committed, right? That\nis, we still get the extra log output?\n\nAnd in fact, the command documented on\nhttps://www.postgresql.org/docs/devel/kernel-resources.html doesn't\nactually produce the output that the docs show, it also shows the log\nline, in the default config? If we can't fix the extra logging we\nshould at least have our docs represent reality -- maybe by adding a\n\"2>/dev/null\" entry? 
But it'd be better to have it not output that log\nin the first place...\n\n(Of course what I'd really want is to be able to run it on a cluster\nthat's running, but that was discussed downthread so I'm not bringing\nthat one up for changes now)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 14 Mar 2022 16:26:43 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Mon, Mar 14, 2022 at 04:26:43PM +0100, Magnus Hagander wrote:\n> Nothing fixing this ended up actually getting committed, right? That\n> is, we still get the extra log output?\n\nCorrect.\n\n> And in fact, the command documented on\n> https://www.postgresql.org/docs/devel/kernel-resources.html doesn't\n> actually produce the output that the docs show, it also shows the log\n> line, in the default config? If we can't fix the extra logging we\n> should at least have our docs represent reality -- maybe by adding a\n> \"2>/dev/null\" entry? But it'd be better to have it not output that log\n> in the first place...\n\nI attached a patch to adjust the documentation for now. This apparently\ncrossed my mind earlier [0], but I didn't follow through with it for some\nreason.\n\n> (Of course what I'd really want is to be able to run it on a cluster\n> that's running, but that was discussed downthread so I'm not bringing\n> that one up for changes now)\n\nI think it is worth revisiting the extra logging and the ability to view\nruntime-computed GUCs on a running server. 
Should this be an open item for\nv15, or do you think it's alright to leave it for the v16 development\ncycle?\n\n[0] https://postgr.es/m/C45224E1-29C8-414C-A8E6-B718C07ACB94%40amazon.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 14 Mar 2022 10:34:17 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Mon, Mar 14, 2022 at 10:34:17AM -0700, Nathan Bossart wrote:\n> On Mon, Mar 14, 2022 at 04:26:43PM +0100, Magnus Hagander wrote:\n>> And in fact, the command documented on\n>> https://www.postgresql.org/docs/devel/kernel-resources.html doesn't\n>> actually produce the output that the docs show, it also shows the log\n>> line, in the default config? If we can't fix the extra logging we\n>> should at least have our docs represent reality -- maybe by adding a\n>> \"2>/dev/null\" entry? But it'd be better to have it not output that log\n>> in the first place...\n> \n> I attached a patch to adjust the documentation for now. This apparently\n> crossed my mind earlier [0], but I didn't follow through with it for some\n> reason.\n\nAnother thing that we can add is -c log_min_messages=fatal, but my\nmethod is more complicated than a simple redirection, of course :)\n\n>> (Of course what I'd really want is to be able to run it on a cluster\n>> that's running, but that was discussed downthread so I'm not bringing\n>> that one up for changes now)\n> \n> I think it is worth revisiting the extra logging and the ability to view\n> runtime-computed GUCs on a running server. Should this be an open item for\n> v15, or do you think it's alright to leave it for the v16 development\n> cycle?\n\nWell, this is a completely new problem as it opens the door of\npotential concurrent access issues with the data directory lock file\nwhile reading values from the control file. 
And that's not mandatory\nto be able to get those estimations without having to allocate a large\nchunk of memory, which was the primary goal discussed upthread as far\nas I recall. So I would leave that as an item to potentially tackle\nin future versions.\n--\nMichael", "msg_date": "Tue, 15 Mar 2022 11:41:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Tue, Mar 15, 2022 at 3:41 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Mar 14, 2022 at 10:34:17AM -0700, Nathan Bossart wrote:\n> > On Mon, Mar 14, 2022 at 04:26:43PM +0100, Magnus Hagander wrote:\n> >> And in fact, the command documented on\n> >> https://www.postgresql.org/docs/devel/kernel-resources.html doesn't\n> >> actually produce the output that the docs show, it also shows the log\n> >> line, in the default config? If we can't fix the extra logging we\n> >> should at least have our docs represent reality -- maybe by adding a\n> >> \"2>/dev/null\" entry? But it'd be better to have it not output that log\n> >> in the first place...\n> >\n> > I attached a patch to adjust the documentation for now. This apparently\n> > crossed my mind earlier [0], but I didn't follow through with it for some\n> > reason.\n>\n> Another thing that we can add is -c log_min_messages=fatal, but my\n> method is more complicated than a simple redirection, of course :)\n\nEither does work, but yours has more characters :)\n\n\n> >> (Of course what I'd really want is to be able to run it on a cluster\n> >> that's running, but that was discussed downthread so I'm not bringing\n> >> that one up for changes now)\n> >\n> > I think it is worth revisiting the extra logging and the ability to view\n> > runtime-computed GUCs on a running server. 
Should this be an open item for\n> > v15, or do you think it's alright to leave it for the v16 development\n> > cycle?\n>\n> Well, this is a completely new problem as it opens the door of\n> potential concurrent access issues with the data directory lock file\n> while reading values from the control file. And that's not mandatory\n> to be able to get those estimations without having to allocate a large\n> chunk of memory, which was the primary goal discussed upthread as far\n> as I recall. So I would leave that as an item to potentially tackle\n> in future versions.\n\nI think we're talking about two different things here.\n\nI think the \"avoid extra logging\" would be worth seeing if we can\naddress for 15.\n\nThe \"able to run on a live cluster\" seems a lot bigger and more scary\nand not 15 material.\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 15 Mar 2022 23:02:37 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Tue, Mar 15, 2022 at 11:02:37PM +0100, Magnus Hagander wrote:\n> I think we're talking about two different things here.\n> \n> I think the \"avoid extra logging\" would be worth seeing if we can\n> address for 15.\n\nA simple approach could be to just set log_min_messages to PANIC before\nexiting. I've attached a patch for this. With this patch, we'll still see\na FATAL if we try to use 'postgres -C' for a runtime-computed GUC on a\nrunning server, and there will be no extra output as long as the user sets\nlog_min_messages to INFO or higher (i.e., not a DEBUG* value). 
For\ncomparison, 'postgres -C' for a non-runtime-computed GUC does not emit\nextra output as long as the user sets log_min_messages to DEBUG2 or higher.\n\n> The \"able to run on a live cluster\" seems a lot bigger and more scary\n> and not 15 material.\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 15 Mar 2022 15:44:39 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Tue, Mar 15, 2022 at 03:44:39PM -0700, Nathan Bossart wrote:\n> A simple approach could be to just set log_min_messages to PANIC before\n> exiting. I've attached a patch for this. With this patch, we'll still see\n> a FATAL if we try to use 'postgres -C' for a runtime-computed GUC on a\n> running server, and there will be no extra output as long as the user sets\n> log_min_messages to INFO or higher (i.e., not a DEBUG* value). For\n> comparison, 'postgres -C' for a non-runtime-computed GUC does not emit\n> extra output as long as the user sets log_min_messages to DEBUG2 or higher.\n\nI created a commitfest entry for this:\n\n\thttps://commitfest.postgresql.org/38/3596/\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 21 Mar 2022 15:12:05 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Tue, Mar 15, 2022 at 03:44:39PM -0700, Nathan Bossart wrote:\n> A simple approach could be to just set log_min_messages to PANIC before\n> exiting. I've attached a patch for this. With this patch, we'll still see\n> a FATAL if we try to use 'postgres -C' for a runtime-computed GUC on a\n> running server, and there will be no extra output as long as the user sets\n> log_min_messages to INFO or higher (i.e., not a DEBUG* value). 
For\n> comparison, 'postgres -C' for a non-runtime-computed GUC does not emit\n> extra output as long as the user sets log_min_messages to DEBUG2 or higher.\n\n> \t\tputs(config_val ? config_val : \"\");\n> +\n> +\t\t/* don't emit shutdown messages */\n> +\t\tSetConfigOption(\"log_min_messages\", \"PANIC\", PGC_INTERNAL, PGC_S_OVERRIDE);\n> +\n> \t\tExitPostmaster(0);\n\nThat's fancy, but I don't like that much. And this would not protect\nagainst any messages generated before this code path, either,\neven if that should be enough for the current HEAD.\n\nMy solution for the docs is perhaps too confusing for the end-user,\nand we are talking about a Linux-only thing here anyway. So, at the\nend, I am tempted to just add the \"2> /dev/null\" as suggested upthread\nby Nathan and call it a day. Does that sound fine?\n--\nMichael", "msg_date": "Wed, 23 Mar 2022 15:25:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Wed, Mar 23, 2022 at 03:25:48PM +0900, Michael Paquier wrote:\n> My solution for the docs is perhaps too confusing for the end-user,\n> and we are talking about a Linux-only thing here anyway. So, at the\n> end, I am tempted to just add the \"2> /dev/null\" as suggested upthread\n> by Nathan and call it a day.\n\nThis still sounds like the best way to go for now, so done this way as\nof bbd4951.\n--\nMichael", "msg_date": "Thu, 24 Mar 2022 21:10:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Wed, Mar 23, 2022 at 7:25 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Mar 15, 2022 at 03:44:39PM -0700, Nathan Bossart wrote:\n> > A simple approach could be to just set log_min_messages to PANIC before\n> > exiting. I've attached a patch for this. 
With this patch, we'll still see\n> > a FATAL if we try to use 'postgres -C' for a runtime-computed GUC on a\n> > running server, and there will be no extra output as long as the user sets\n> > log_min_messages to INFO or higher (i.e., not a DEBUG* value). For\n> > comparison, 'postgres -C' for a non-runtime-computed GUC does not emit\n> > extra output as long as the user sets log_min_messages to DEBUG2 or higher.\n>\n> > puts(config_val ? config_val : \"\");\n> > +\n> > + /* don't emit shutdown messages */\n> > + SetConfigOption(\"log_min_messages\", \"PANIC\", PGC_INTERNAL, PGC_S_OVERRIDE);\n> > +\n> > ExitPostmaster(0);\n>\n> That's fancy, but I don't like that much. And this would not protect\n> either against any messages generated before this code path, either,\n\nBut neither would the suggestion of redirecting stderr to /dev/null.\nIn fact, doing the redirect, it will *also* throw away any FATAL that\nhappens. In fact, using the 2>/dev/null method, we *also* remove the\nmessage that says there's another postmaster running in this\ndirectory, which is strictly worse than the override of\nlog_min_messages.\n\nThat said, the redirect can be removed without recompiling postgres,\nso it is probably still the better choice as a temporary workaround.\nBut we should really look into getting a better solution in place once\nwe start on 16.\n\n\n\n> My solution for the docs is perhaps too confusing for the end-user,\n> and we are talking about a Linux-only thing here anyway. So, at the\n> end, I am tempted to just add the \"2> /dev/null\" as suggested upthread\n> by Nathan and call it a day. Does that sound fine?\n\nWhat would be a linux only thing?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Thu, 24 Mar 2022 14:07:26 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" 
}, { "msg_contents": "On Thu, Mar 24, 2022 at 02:07:26PM +0100, Magnus Hagander wrote:\n> On Wed, Mar 23, 2022 at 7:25 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> On Tue, Mar 15, 2022 at 03:44:39PM -0700, Nathan Bossart wrote:\n>> > A simple approach could be to just set log_min_messages to PANIC before\n>> > exiting. I've attached a patch for this. With this patch, we'll still see\n>> > a FATAL if we try to use 'postgres -C' for a runtime-computed GUC on a\n>> > running server, and there will be no extra output as long as the user sets\n>> > log_min_messages to INFO or higher (i.e., not a DEBUG* value). For\n>> > comparison, 'postgres -C' for a non-runtime-computed GUC does not emit\n>> > extra output as long as the user sets log_min_messages to DEBUG2 or higher.\n>>\n>> > puts(config_val ? config_val : \"\");\n>> > +\n>> > + /* don't emit shutdown messages */\n>> > + SetConfigOption(\"log_min_messages\", \"PANIC\", PGC_INTERNAL, PGC_S_OVERRIDE);\n>> > +\n>> > ExitPostmaster(0);\n>>\n>> That's fancy, but I don't like that much. And this would not protect\n>> either against any messages generated before this code path, either,\n> \n> But neither would the suggestion of redirecting stderr to /dev/null.\n> In fact, doing the redirect it will *also* throw away any FATAL that\n> happens. In fact, using the 2>/dev/null method, we *also* remove the\n> message that says there's another postmaster running in this\n> directory, which is strictly worse than the override of\n> log_min_messages.\n> \n> That said, the redirect can be removed without recompiling postgres,\n> so it is probably still hte better choice as a temporary workaround.\n> But we should really look into getting a better solution in place once\n> we start on 16.\n\nA couple of other options to consider:\n\n1) Always set log_min_messages to WARNING/ERROR/FATAL for 'postgres -C'.\nWe might need some special logic for handling the case where the user is\ninspecting the log_min_messages parameter. 
With this approach, you'd\nprobably never get extra output unless something was wrong (e.g., database\nalready running when inspecting a runtime-computed GUC). Also, this would\nsilence any extra output that you might see today with non-runtime-computed\nGUCs.\n\n2) Add some way to skip just the shutdown message (e.g., a variable set\nwhen output_config_variable is true). With this approach, you wouldn't get\nextra output by default, but you still might if log_min_messages is set to\nsomething like DEBUG3. This wouldn't impact any extra output that you see\ntoday with non-runtime-computed GUCs.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 24 Mar 2022 13:31:08 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Thu, Mar 24, 2022 at 01:31:08PM -0700, Nathan Bossart wrote:\n> A couple of other options to consider:\n> \n> 1) Always set log_min_messages to WARNING/ERROR/FATAL for 'postgres -C'.\n> We might need some special logic for handling the case where the user is\n> inspecting the log_min_messages parameter. With this approach, you'd\n> probably never get extra output unless something was wrong (e.g., database\n> already running when inspecting a runtime-computed GUC). Also, this would\n> silence any extra output that you might see today with non-runtime-computed\n> GUCs.\n> \n> 2) Add some way to skip just the shutdown message (e.g., a variable set\n> when output_config_variable is true). With this approach, you wouldn't get\n> extra output by default, but you still might if log_min_messages is set to\n> something like DEBUG3. 
This wouldn't impact any extra output that you see\n> today with non-runtime-computed GUCs.\n\nI've attached a first attempt at option 1.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 28 Mar 2022 10:35:03 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Thu, Mar 24, 2022 at 02:07:26PM +0100, Magnus Hagander wrote:\n> But neither would the suggestion of redirecting stderr to /dev/null.\n> In fact, doing the redirect it will *also* throw away any FATAL that\n> happens. In fact, using the 2>/dev/null method, we *also* remove the\n> message that says there's another postmaster running in this\n> directory, which is strictly worse than the override of\n> log_min_messages.\n\nWell, we could also tweak more the command with a redirection of\nstderr to a log file or such, and tell to look at it for errors.\n\n> That said, the redirect can be removed without recompiling postgres,\n> so it is probably still hte better choice as a temporary workaround.\n> But we should really look into getting a better solution in place once\n> we start on 16.\n\nBut do we really need a better or more invasive solution for already\nrunning servers though? A SHOW command would be able to do the work\nalready in this case. This would lack consistency compared to the\noffline case, but we are not without option either. That leaves the\ncase where the server is running, has allocated memory but is not\nready to accept connections, like crash recovery, still this use case\nlooks rather thin to me. \n\n>> My solution for the docs is perhaps too confusing for the end-user,\n>> and we are talking about a Linux-only thing here anyway. So, at the\n>> end, I am tempted to just add the \"2> /dev/null\" as suggested upthread\n>> by Nathan and call it a day. 
Does that sound fine?\n> \n> What would be a linux only thing?\n\nPerhaps not at some point in the future. Now that's under a section\nof the docs only for Linux.\n--\nMichael", "msg_date": "Wed, 20 Apr 2022 07:12:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Wed, Apr 20, 2022, 00:12 Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Mar 24, 2022 at 02:07:26PM +0100, Magnus Hagander wrote:\n> > But neither would the suggestion of redirecting stderr to /dev/null.\n> > In fact, doing the redirect it will *also* throw away any FATAL that\n> > happens. In fact, using the 2>/dev/null method, we *also* remove the\n> > message that says there's another postmaster running in this\n> > directory, which is strictly worse than the override of\n> > log_min_messages.\n>\n> Well, we could also tweak more the command with a redirection of\n> stderr to a log file or such, and tell to look at it for errors.\n>\n\nThat would be a pretty terrible ux though.\n\n\n\n> > That said, the redirect can be removed without recompiling postgres,\n> > so it is probably still hte better choice as a temporary workaround.\n> > But we should really look into getting a better solution in place once\n> > we start on 16.\n>\n> But do we really need a better or more invasive solution for already\n> running servers though? A SHOW command would be able to do the work\n> already in this case. This would lack consistency compared to the\n> offline case, but we are not without option either. That leaves the\n> case where the server is running, has allocated memory but is not\n> ready to accept connections, like crash recovery, still this use case\n> looks rather thin to me.\n\n\n\nI agree that that's a very narrow use case. And I'm not sure the use case of\na running server is even that important here - it's really the offline one\nthat's important. 
Or rather, the really compelling one is when there is a\nserver running but I want to check the value offline because it will\nchange. SHOW doesn't help there because it shows the value based on the\ncurrently running configuration, not the new one after a restart.\n\nI don't agree that the redirect is a solution. It's a workaround.\n\n\n>> My solution for the docs is perhaps too confusing for the end-user,\n> >> and we are talking about a Linux-only thing here anyway. So, at the\n> >> end, I am tempted to just add the \"2> /dev/null\" as suggested upthread\n> >> by Nathan and call it a day. Does that sound fine?\n> >\n> > What would be a linux only thing?\n>\n> Perhaps not at some point in the future. Now that's under a section\n> of the docs only for Linux.\n>\n\n\nHmm. So what's the solution on windows? I guess maybe it's not as important\nthere because there is no limit on huge pages, but generally getting the\nexpected shared memory usage might be useful? Just significantly less\nimportant.\n\n/Magnus", "msg_date": "Fri, 22 Apr 2022 09:49:34 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Fri, Apr 22, 2022 at 09:49:34AM +0200, Magnus Hagander wrote:\n> I agree that thats a very narrow use case. And I'm not sure the use case of\n> a running server is even that important here - it's really the offline one\n> that's important. Or rather, the really compelling one is when there is a\n> server running but I want to check the value offline because it will\n> change. SHOW doesn't help there because it shows the value based on the\n> currently running configuration, not the new one after a restart.\n\nYou mean the case of a server where one would directly change\npostgresql.conf on a running server, and use postgres -C to see how\nmuch the kernel settings need to be changed before the restart?\n\n> Hmm. So what's the solution on windows? I guess maybe it's not as important\n> there because there is no limit on huge pages, but generally getting the\n> expected shared memory usage might be useful? Just significantly less\n> important.\n\nContrary to Linux, we don't need to care about the number of large\npages that are necessary because there is no equivalent of\nvm.nr_hugepages on Windows (see [1]), do we? If that were the case,\nwe'd have a use case for huge_page_size, additionally.\n\nThat's the case where shared_memory_size_in_huge_pages comes in\nhandy, as much as does huge_page_size, and note that\nshared_memory_size works on WIN32.\n\n[1]: https://docs.microsoft.com/en-us/windows/win32/memory/large-page-support\n--\nMichael", "msg_date": "Mon, 25 Apr 2022 09:15:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" 
}, { "msg_contents": "On Mon, Apr 25, 2022 at 2:15 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Apr 22, 2022 at 09:49:34AM +0200, Magnus Hagander wrote:\n> > I agree that thats a very narrow use case. And I'm not sure the use case\n> of\n> > a running server is even that important here - it's really the offline\n> one\n> > that's important. Or rather, the really compelling one is when there is a\n> > server running but I want to check the value offline because it will\n> > change. SHOW doesn't help there because it shows the value based on the\n> > currently running configuration, not the new one after a restart.\n>\n> You mean the case of a server where one would directly change\n> postgresql.conf on a running server, and use postgres -C to see how\n> much the kernel settings need to be changed before the restart?\n>\n\nYes.\n\nAIUI that was the original use-case for this feature. It certainly was for\nme :)\n\n\n\n> Hmm. So what's the solution on windows? I guess maybe it's not as\n> important\n> > there because there is no limit on huge pages, but generally getting the\n> > expected shared memory usage might be useful? Just significantly less\n> > important.\n>\n> Contrary to Linux, we don't need to care about the number of large\n> pages that are necessary because there is no equivalent of\n> vm.nr_hugepages on Windows (see [1]), do we? If that were the case,\n> we'd have a use case for huge_page_size, additionally.\n>\n\nRight, for this one in particular -- that's what I meant with my comment\nabout there not being a limit. But this feature works for other settings as\nwell, not just the huge pages one. 
Exactly what the use-cases are can\nvary, but surely they would have the same problems wrt redirects?\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Mon, 25 Apr 2022 16:55:25 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Mon, Apr 25, 2022 at 04:55:25PM +0200, Magnus Hagander wrote:\n> AIUI that was the original use-case for this feature. It certainly was for\n> me :)\n\nPerhaps we'd be fine with relaxing the requirements here knowing that\nthe control file should never be larger than PG_CONTROL_MAX_SAFE_SIZE\n(aka the read should be atomic so it could be made lockless). At the\nend of the day, to be absolutely correct in the shmem size estimation,\nI think that we are going to need what's proposed here or the sizing\nmay not be right depending on how extensions adjust GUCs after they\nload their _PG_init():\nhttps://www.postgresql.org/message-id/20220419154658.GA2487941@nathanxps13\n\nThat's a bit independent, but not completely unrelated either\ndepending on how exact you want your number of estimated huge pages to\nbe. Just wanted to mention it.\n\n>> Contrary to Linux, we don't need to care about the number of large\n>> pages that are necessary because there is no equivalent of\n>> vm.nr_hugepages on Windows (see [1]), do we? If that were the case,\n>> we'd have a use case for huge_page_size, additionally.\n> \n> Right, for this one in particular -- that's what I meant with my comment\n> about there not being a limit. But this feature works for other settings as\n> well, not just the huge pages one. 
Exactly what the use-cases are can\n> vary, but surely they would have the same problems wrt redirects?\n\nYes, the redirection issue would apply to all the run-time GUCs.\n--\nMichael", "msg_date": "Tue, 26 Apr 2022 10:34:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Tue, Apr 26, 2022 at 10:34:06AM +0900, Michael Paquier wrote:\n> Yes, the redirection issue would apply to all the run-time GUCs.\n\nShould this be tracked as an open item for v15? There was another recent\nreport about the extra log output [0].\n\n[0] https://www.postgresql.org/message-id/YnARlI5nvbziobR4%40momjian.us\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 6 May 2022 10:13:18 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Fri, May 06, 2022 at 10:13:18AM -0700, Nathan Bossart wrote:\n> On Tue, Apr 26, 2022 at 10:34:06AM +0900, Michael Paquier wrote:\n>> Yes, the redirection issue would apply to all the run-time GUCs.\n> \n> Should this be tracked as an open item for v15? There was another recent\n> report about the extra log output [0].\n\nThat makes it for two complaints on two separate threads. So an open\nitem seems adapted to adjust this behavior.\n\nI have looked at the patch posted at [1], and I don't quite understand\nwhy you need the extra dance with log_min_messages. Why don't you\njust set the GUC at the end of the code path in PostmasterMain() where\nwe print non-runtime-computed parameters? 
I am not really worrying\nabout users deciding to set log_min_messages to PANIC in\npostgresql.conf when it comes to postgres -C, TBH, as they'd miss the\nFATAL messages if the command is attempted on a server already\nstarting.\n\nPer se the attached.\n\n[1]: https://www.postgresql.org/message-id/20220328173503.GA137769@nathanxps13\n--\nMichael", "msg_date": "Mon, 9 May 2022 15:53:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Mon, May 09, 2022 at 03:53:24PM +0900, Michael Paquier wrote:\n> I have looked at the patch posted at [1], and I don't quite understand\n> why you need the extra dance with log_min_messages. Why don't you\n> just set the GUC at the end of the code path in PostmasterMain() where\n> we print non-runtime-computed parameters?\n\nThe log_min_messages dance avoids extra output when inspecting\nnon-runtime-computed GUCs, like this:\n\n\t~/pgdata$ postgres -D . 
-C log_min_messages -c log_min_messages=debug5\n\tdebug5\n\t2022-05-10 09:06:04.728 PDT [3715607] DEBUG: shmem_exit(0): 0 before_shmem_exit callbacks to make\n\t2022-05-10 09:06:04.728 PDT [3715607] DEBUG: shmem_exit(0): 0 on_shmem_exit callbacks to make\n\t2022-05-10 09:06:04.728 PDT [3715607] DEBUG: proc_exit(0): 0 callbacks to make\n\t2022-05-10 09:06:04.728 PDT [3715607] DEBUG: exit(0)\n\nAFAICT you need to set log_min_messages to at least DEBUG3 to see extra\noutput for the non-runtime-computed GUCs, so it might not be worth the\nadded complexity.\n\n> I am not really worrying\n> about users deciding to set log_min_messages to PANIC in\n> postgresql.conf when it comes to postgres -C, TBH, as they'd miss the\n> FATAL messages if the command is attempted on a server already\n> starting.\n\nI don't have a strong opinion on this one.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 10 May 2022 09:12:49 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Tue, May 10, 2022 at 09:12:49AM -0700, Nathan Bossart wrote:\n> AFAICT you need to set log_min_messages to at least DEBUG3 to see extra\n> output for the non-runtime-computed GUCs, so it might not be worth the\n> added complexity.\n\nThis set of messages is showing up for ages with zero complaints from\nthe field AFAIK, and nobody would use this level of logging except\ndevelopers. One thing that overriding log_min_messages to FATAL does,\nhowever, is to not show anymore those debug3 messages when querying a\nruntime-computed GUC, but that's the kind of things we'd hide. Your\npatch would hide those entries in both cases. 
Perhaps we could do\nthat, but at the end, I don't really see any need to complicate this\ncode path more than necessary, and this is enough to silence the logs\nin the cases we care about basically all the time, even if the log\nlevels are reduced a bit on a given cluster. Hence, I have applied\nthe simplest solution to just enforce a log_min_messages=FATAL when\nrequesting a runtime GUC.\n--\nMichael", "msg_date": "Wed, 11 May 2022 14:34:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" }, { "msg_contents": "On Wed, May 11, 2022 at 02:34:25PM +0900, Michael Paquier wrote:\n> Hence, I have applied\n> the simplest solution to just enforce a log_min_messages=FATAL when\n> requesting a runtime GUC.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 11 May 2022 08:57:19 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Estimating HugePages Requirements?" } ]
[ { "msg_contents": "Hi,\n\nI have been exploring multirange data types using PostgreSQL 14 Beta 1.\nThus far I'm really happy with the user experience, and it has allowed\nme to simplify some previously onerous queries!\n\nI do have a question about trying to \"unnest\" a multirange type into its\nindividual ranges. For example, I have a query where I want to find the\navailability over a given week. This query may look something like:\n\n SELECT datemultirange(daterange(CURRENT_DATE, CURRENT_DATE + 7))\n - datemultirange(daterange(CURRENT_DATE + 2, CURRENT_DATE + 4))\n as availability;\n\n availability\n ---------------------------------------------------\n {[2021-06-09,2021-06-11),[2021-06-13,2021-06-16)}\n (1 row)\n\nI would like to decompose the returned multirange into its individual\nranges, similarly to how I would \"unnest\" an array:\n\n SELECT * FROM unnest(ARRAY[1,2,3]);\n unnest\n --------\n 1\n 2\n 3\n (3 rows)\n\nSo something like:\n\n SELECT unnest('{[2021-06-09,2021-06-11),\n [2021-06-13,2021-06-16)}')::datemultirange;\n\n unnest\n -------------------------\n [2021-06-09,2021-06-11)\n [2021-06-13,2021-06-16)\n (2 rows)\n\nI looked at the various functions + operators available for the\nmultirange types in the documentation but could not find anything that\ncould perform this action.\n\nDoes this functionality exist?\n\nThanks,\n\nJonathan", "msg_date": "Wed, 9 Jun 2021 14:33:56 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "unnesting multirange data types" }, { "msg_contents": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org> writes:\n> I would like to decompose the returned multirange into its individual\n> ranges, similarly to how I would \"unnest\" an array:\n\n+1 for adding such a feature, but I suppose it's too late for v14.\n\nAFAICS, \"unnest(anymultirange) returns setof anyrange\" could coexist\nalongside the existing variants of unnest(), so I don't see any\nfundamental stumbling block to having it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Jun 2021 15:25:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On 6/9/21 3:25 PM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> I would like to decompose the returned multirange into its individual\n>> ranges, similarly to how I would \"unnest\" an array:\n> \n> +1 for adding such a feature, but I suppose it's too late for v14.\n\nWell, the case I would make for v14 is that, as of right now, the onus\nis on the driver writers / application developers to be able to unpack\nthe multiranges.\n\nMaybe it's not terrible as of this moment -- I haven't tried testing it\nthat far yet -- but it may make it a bit more challenging to work with\nthese types outside of Postgres. I recall a similar issue when initially\ntrying to integrate range types into my apps back in the v9.2 days, and\nI ended up writing some grotty code to handle it. Yes, I worked around\nit, but I preferably wouldn't have had to.\n\nAn \"unnest\" at least lets us bridge the gap a bit, i.e. if you really\nneed to introspect a multirange type, you have a way of getting it into\na familiar format.\n\nI haven't tried manipulating a multirange in a PL like Python, maybe\nsome exploration there would unveil more or less pain, or if it could be\niterated over in PL/pgSQL (I'm suspecting no).\n\nThat all said, for writing queries within Postgres, the multiranges make\na lot of operations way easier. 
I do think a missing \"unnest\" function\ndoes straddle the line of \"omission\" and \"new feature,\" so I can\nunderstand if it does not make it into v14.\n\n> AFAICS, \"unnest(anymultirange) returns setof anyrange\" could coexist\n> alongside the existing variants of unnest(), so I don't see any\n> fundamental stumbling block to having it.\n\nCool. I was initially throwing out \"unnest\" as the name as it mirrors\nwhat we currently have with arrays, and seems to be doing something\nsimilar. Open to other names, but this was the one that I was drawn to.\n\"multirange\" is an \"ordered array of ranges\" after all.\n\nThanks,\n\nJonathan", "msg_date": "Wed, 9 Jun 2021 15:44:39 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On 6/9/21 3:44 PM, Jonathan S. Katz wrote:\n> On 6/9/21 3:25 PM, Tom Lane wrote:\n>> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>>> I would like to decompose the returned multirange into its individual\n>>> ranges, similarly to how I would \"unnest\" an array:\n>>\n>> +1 for adding such a feature, but I suppose it's too late for v14.\n> \n> Well, the case I would make for v14 is that, as of right now, the onus\n> is on the driver writers / application developers to be able to unpack\n> the multiranges.\n> \n> I haven't tried manipulating a multirange in a PL like Python, maybe\n> some exploration there would unveil more or less pain, or if it could be\n> iterated over in PL/pgSQL (I'm suspecting no).\n\nI did a couple more tests around this.\n\nAs suspected, in PL/pgSQL, there is no way to unpack or iterate over a\nmultirange type.\n\nIn PL/Python, both range types and multirange types are treated as\nstrings. 
From there, you can at least ultimately parse and manipulate it\ninto your preferred Python types, but this goes back to my earlier point\nabout putting the onus on the developer to do so.\n\nThanks,\n\nJonathan", "msg_date": "Wed, 9 Jun 2021 16:24:27 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On 2021-Jun-09, Jonathan S. Katz wrote:\n\n> I did a couple more tests around this.\n> \n> As suspected, in PL/pgSQL, there is no way to unpack or iterate over a\n> multirange type.\n\nUh. This is disappointing; the need for some way to unnest or unpack a\nmultirange was mentioned multiple times in the range_agg thread. I had\nassumed that there was some way to cast the multirange to a range array,\nor somehow convert it, but apparently that doesn't work.\n\nIf the supporting pieces are mostly there, then I opine we should add\nsomething.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"There are two moments in a man's life when he should not\nspeculate: when he can afford it and when he cannot\" (Mark Twain)\n\n\n", "msg_date": "Wed, 9 Jun 2021 16:56:17 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On 6/9/21 4:56 PM, Alvaro Herrera wrote:\n> On 2021-Jun-09, Jonathan S. Katz wrote:\n> \n>> I did a couple more tests around this.\n>>\n>> As suspected, in PL/pgSQL, there is no way to unpack or iterate over a\n>> multirange type.\n> \n> Uh. This is disappointing; the need for some way to unnest or unpack a\n> multirange was mentioned multiple times in the range_agg thread. 
I had\n> assumed that there was some way to cast the multirange to a range array,\n> or somehow convert it, but apparently that doesn't work.\n\nJust to be pedantic with examples:\n\n SELECT datemultirange(\n daterange(current_date, current_date + 2),\n daterange(current_date + 5, current_date + 7))::daterange[];\n\n ERROR: cannot cast type datemultirange to daterange[]\n LINE 1: ...2), daterange(current_date + 5, current_date + 7))::daterang...\n\nIF there was an array to cast it into an array, we could then use the\narray looping construct in PL/pgSQL, but if we could only choose one, I\nthink it'd be more natural/less verbose to have an \"unnest\".\n\n> If the supporting pieces are mostly there, then I opine we should add\n> something.\n\nAgreed.\n\nJonathan", "msg_date": "Wed, 9 Jun 2021 19:00:24 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "Hi, all!\n\nOn Thu, Jun 10, 2021 at 2:00 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> On 6/9/21 4:56 PM, Alvaro Herrera wrote:\n> > On 2021-Jun-09, Jonathan S. Katz wrote:\n> >\n> >> I did a couple more tests around this.\n> >>\n> >> As suspected, in PL/pgSQL, there is no way to unpack or iterate over a\n> >> multirange type.\n> >\n> > Uh. This is disappointing; the need for some way to unnest or unpack a\n> > multirange was mentioned multiple times in the range_agg thread. 
I had\n> > assumed that there was some way to cast the multirange to a range array,\n> > or somehow convert it, but apparently that doesn't work.\n>\n> Just to be pedantic with examples:\n>\n> SELECT datemultirange(\n> daterange(current_date, current_date + 2),\n> daterange(current_date + 5, current_date + 7))::daterange[];\n>\n> ERROR: cannot cast type datemultirange to daterange[]\n> LINE 1: ...2), daterange(current_date + 5, current_date + 7))::daterang...\n>\n> IF there was an array to cast it into an array, we could then use the\n> array looping construct in PL/pgSQL, but if we could only choose one, I\n> think it'd be more natural/less verbose to have an \"unnest\".\n>\n> > If the supporting pieces are mostly there, then I opine we should add\n> > something.\n>\n> Agreed.\n\nI agree that unnest(), cast to array and subscription are missing\npoints. Proper subscription support requires expanded object\nhandling. And that seems too late for v14. But unnset() and cast to\narray seems trivial. I've drafted unnest support (attached). I'm\ngoing to add also cast to the array, tests, and docs within a day.\nStay tuned :)\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Thu, 10 Jun 2021 20:24:20 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On 6/10/21 1:24 PM, Alexander Korotkov wrote:\n> Hi, all!\n> \n> On Thu, Jun 10, 2021 at 2:00 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>> On 6/9/21 4:56 PM, Alvaro Herrera wrote:\n>>> On 2021-Jun-09, Jonathan S. Katz wrote:\n>>>\n>>>> I did a couple more tests around this.\n>>>>\n>>>> As suspected, in PL/pgSQL, there is no way to unpack or iterate over a\n>>>> multirange type.\n>>>\n>>> Uh. This is disappointing; the need for some way to unnest or unpack a\n>>> multirange was mentioned multiple times in the range_agg thread. 
I had\n>>> assumed that there was some way to cast the multirange to a range array,\n>>> or somehow convert it, but apparently that doesn't work.\n>>\n>> Just to be pedantic with examples:\n>>\n>> SELECT datemultirange(\n>> daterange(current_date, current_date + 2),\n>> daterange(current_date + 5, current_date + 7))::daterange[];\n>>\n>> ERROR: cannot cast type datemultirange to daterange[]\n>> LINE 1: ...2), daterange(current_date + 5, current_date + 7))::daterang...\n>>\n>> IF there was an array to cast it into an array, we could then use the\n>> array looping construct in PL/pgSQL, but if we could only choose one, I\n>> think it'd be more natural/less verbose to have an \"unnest\".\n>>\n>>> If the supporting pieces are mostly there, then I opine we should add\n>>> something.\n>>\n>> Agreed.\n> \n> I agree that unnest(), cast to array and subscription are missing\n> points. Proper subscription support requires expanded object\n> handling. And that seems too late for v14.\n\nAgreed, the subscripting functionality is too late for v14. (Though\nperhaps someone ambitious could bridge that gap temporarily with the\nability to add subscripting to types!).\n\n> But unnset() and cast to\n> array seems trivial. I've drafted unnest support (attached). I'm\n> going to add also cast to the array, tests, and docs within a day.\n> Stay tuned :)\n\nAwesome. I'll defer to others on the implementation. I'll try to test\nout the patch in a bit to see how it works.\n\nAre there any objections adding this as a v14 open item?\n\nThanks,\n\nJonathan", "msg_date": "Thu, 10 Jun 2021 13:57:42 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Thu, Jun 10, 2021 at 8:57 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> On 6/10/21 1:24 PM, Alexander Korotkov wrote:\n> > I agree that unnest(), cast to array and subscription are missing\n> > points. 
Proper subscription support requires expanded object\n> > handling. And that seems too late for v14.\n>\n> Agreed, the subscripting functionality is too late for v14. (Though\n> perhaps someone ambitious could bridge that gap temporarily with the\n> ability to add subscripting to types!).\n>\n> > But unnset() and cast to\n> > array seems trivial. I've drafted unnest support (attached). I'm\n> > going to add also cast to the array, tests, and docs within a day.\n> > Stay tuned :)\n>\n> Awesome. I'll defer to others on the implementation. I'll try to test\n> out the patch in a bit to see how it works.\n\nGood!\n\n> Are there any objections adding this as a v14 open item?\n\nNo objections, let's add it.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 10 Jun 2021 22:08:21 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "+{ oid => '1293', descr => 'expand mutlirange to set of ranges',\n\ntypo: mutlirange\n\nThanks Jonathan for excercising this implementation sooner than later.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 10 Jun 2021 17:04:16 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Fri, Jun 11, 2021 at 1:04 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> +{ oid => '1293', descr => 'expand mutlirange to set of ranges',\n>\n> typo: mutlirange\n\nFixed, thanks.\n\nThe patch with the implementation of both unnest() and cast to array\nis attached. 
It contains both tests and docs.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Fri, 11 Jun 2021 23:37:58 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Fri, Jun 11, 2021 at 11:37:58PM +0300, Alexander Korotkov wrote:\n> On Fri, Jun 11, 2021 at 1:04 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > +{ oid => '1293', descr => 'expand mutlirange to set of ranges',\n> >\n> > typo: mutlirange\n> \n> Fixed, thanks.\n> \n> The patch with the implementation of both unnest() and cast to array\n> is attached. It contains both tests and docs.\n\n|+ The multirange could be explicitly cast to the array of corresponding\nshould say: \"can be cast to an array of corresponding..\"\n\n|+ * Cast multirange to the array of ranges.\nI think should be: *an array of ranges\n\nPer sqlsmith, this is causing consistent crashes.\nI took one of its less appalling queries and simplified it to this:\n\nselect\npg_catalog.multirange_to_array(\n cast(pg_catalog.int8multirange() as int8multirange)) as c2\nfrom (select 1)x;\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 11 Jun 2021 18:30:18 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "()On Sat, Jun 12, 2021 at 2:30 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Fri, Jun 11, 2021 at 11:37:58PM +0300, Alexander Korotkov wrote:\n> > On Fri, Jun 11, 2021 at 1:04 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > +{ oid => '1293', descr => 'expand mutlirange to set of ranges',\n> > >\n> > > typo: mutlirange\n> >\n> > Fixed, thanks.\n> >\n> > The patch with the implementation of both unnest() and cast to array\n> > is attached. 
It contains both tests and docs.\n>\n> |+ The multirange could be explicitly cast to the array of corresponding\n> should say: \"can be cast to an array of corresponding..\"\n>\n> |+ * Cast multirange to the array of ranges.\n> I think should be: *an array of ranges\n\nThank you for catching this.\n\n> Per sqlsmith, this is causing consistent crashes.\n> I took one of its less appalling queries and simplified it to this:\n>\n> select\n> pg_catalog.multirange_to_array(\n> cast(pg_catalog.int8multirange() as int8multirange)) as c2\n> from (select 1)x;\n\nIt seems that multirange_to_array() doesn't handle empty multiranges.\nI'll post an updated version of the patch tomorrow.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sat, 12 Jun 2021 02:44:13 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Sat, Jun 12, 2021 at 2:44 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> ()On Sat, Jun 12, 2021 at 2:30 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Fri, Jun 11, 2021 at 11:37:58PM +0300, Alexander Korotkov wrote:\n> > > On Fri, Jun 11, 2021 at 1:04 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > >\n> > > > +{ oid => '1293', descr => 'expand mutlirange to set of ranges',\n> > > >\n> > > > typo: mutlirange\n> > >\n> > > Fixed, thanks.\n> > >\n> > > The patch with the implementation of both unnest() and cast to array\n> > > is attached. 
It contains both tests and docs.\n> >\n> > |+ The multirange could be explicitly cast to the array of corresponding\n> > should say: \"can be cast to an array of corresponding..\"\n> >\n> > |+ * Cast multirange to the array of ranges.\n> > I think should be: *an array of ranges\n>\n> Thank you for catching this.\n>\n> > Per sqlsmith, this is causing consistent crashes.\n> > I took one of its less appalling queries and simplified it to this:\n> >\n> > select\n> > pg_catalog.multirange_to_array(\n> > cast(pg_catalog.int8multirange() as int8multirange)) as c2\n> > from (select 1)x;\n>\n> It seems that multirange_to_array() doesn't handle empty multiranges.\n> I'll post an updated version of the patch tomorrow.\n\nA revised patch is attached. Now empty multiranges are handled\nproperly (and it's covered by tests). Typos are fixed as well.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sun, 13 Jun 2021 00:57:41 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On 6/12/21 5:57 PM, Alexander Korotkov wrote:\n> On Sat, Jun 12, 2021 at 2:44 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>> ()On Sat, Jun 12, 2021 at 2:30 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>> On Fri, Jun 11, 2021 at 11:37:58PM +0300, Alexander Korotkov wrote:\n>>>> On Fri, Jun 11, 2021 at 1:04 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>>>>\n>>>>> +{ oid => '1293', descr => 'expand mutlirange to set of ranges',\n>>>>>\n>>>>> typo: mutlirange\n>>>>\n>>>> Fixed, thanks.\n>>>>\n>>>> The patch with the implementation of both unnest() and cast to array\n>>>> is attached. 
It contains both tests and docs.\n>>>\n>>> |+ The multirange could be explicitly cast to the array of corresponding\n>>> should say: \"can be cast to an array of corresponding..\"\n>>>\n>>> |+ * Cast multirange to the array of ranges.\n>>> I think should be: *an array of ranges\n>>\n>> Thank you for catching this.\n>>\n>>> Per sqlsmith, this is causing consistent crashes.\n>>> I took one of its less appalling queries and simplified it to this:\n>>>\n>>> select\n>>> pg_catalog.multirange_to_array(\n>>> cast(pg_catalog.int8multirange() as int8multirange)) as c2\n>>> from (select 1)x;\n>>\n>> It seems that multirange_to_array() doesn't handle empty multiranges.\n>> I'll post an updated version of the patch tomorrow.\n> \n> A revised patch is attached. Now empty multiranges are handled\n> properly (and it's covered by tests). Typos are fixed as well.\n\nTested both against my original cases using both SQL + PL/pgSQL. All\nworked well. I also tested the empty multirange case as well.\n\nOverall the documentation seems to make sense, I'd suggest:\n\n+ <para>\n+ The multirange can be cast to an array of corresponding ranges.\n+ </para>\n\nbecomes:\n\n+ <para>\n+ A multirange can be cast to an array of ranges of the same type.\n+ </para>\n\nAgain, I'll defer to others on the code, but this seems to solve the use\ncase I presented. Thanks for the quick turnaround!\n\nJonathan", "msg_date": "Sat, 12 Jun 2021 18:16:24 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Sun, Jun 13, 2021 at 1:16 AM Jonathan S. 
Katz <jkatz@postgresql.org> wrote:\n> On 6/12/21 5:57 PM, Alexander Korotkov wrote:\n> > On Sat, Jun 12, 2021 at 2:44 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >> ()On Sat, Jun 12, 2021 at 2:30 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >>> On Fri, Jun 11, 2021 at 11:37:58PM +0300, Alexander Korotkov wrote:\n> >>>> On Fri, Jun 11, 2021 at 1:04 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >>>>>\n> >>>>> +{ oid => '1293', descr => 'expand mutlirange to set of ranges',\n> >>>>>\n> >>>>> typo: mutlirange\n> >>>>\n> >>>> Fixed, thanks.\n> >>>>\n> >>>> The patch with the implementation of both unnest() and cast to array\n> >>>> is attached. It contains both tests and docs.\n> >>>\n> >>> |+ The multirange could be explicitly cast to the array of corresponding\n> >>> should say: \"can be cast to an array of corresponding..\"\n> >>>\n> >>> |+ * Cast multirange to the array of ranges.\n> >>> I think should be: *an array of ranges\n> >>\n> >> Thank you for catching this.\n> >>\n> >>> Per sqlsmith, this is causing consistent crashes.\n> >>> I took one of its less appalling queries and simplified it to this:\n> >>>\n> >>> select\n> >>> pg_catalog.multirange_to_array(\n> >>> cast(pg_catalog.int8multirange() as int8multirange)) as c2\n> >>> from (select 1)x;\n> >>\n> >> It seems that multirange_to_array() doesn't handle empty multiranges.\n> >> I'll post an updated version of the patch tomorrow.\n> >\n> > A revised patch is attached. Now empty multiranges are handled\n> > properly (and it's covered by tests). Typos are fixed as well.\n>\n> Tested both against my original cases using both SQL + PL/pgSQL. All\n> worked well. I also tested the empty multirange case as well.\n>\n> Overall the documentation seems to make sense, I'd suggest:\n>\n> + <para>\n> + The multirange can be cast to an array of corresponding ranges.\n> + </para>\n>\n> becomes:\n>\n> + <para>\n> + A multirange can be cast to an array of ranges of the same type.\n> + </para>\n\nThank you. 
This change is incorporated in the attached revision of the patch.\n\nThis thread gave me another lesson about English articles. Hopefully,\nI would be able to make progress in future patches :)\n\n> Again, I'll defer to others on the code, but this seems to solve the use\n> case I presented. Thanks for the quick turnaround!\n\nThank you for the feedback!\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sun, 13 Jun 2021 02:58:30 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Sun, Jun 13, 2021 at 2:58 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Sun, Jun 13, 2021 at 1:16 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> > On 6/12/21 5:57 PM, Alexander Korotkov wrote:\n> > > On Sat, Jun 12, 2021 at 2:44 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > >> ()On Sat, Jun 12, 2021 at 2:30 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >>> On Fri, Jun 11, 2021 at 11:37:58PM +0300, Alexander Korotkov wrote:\n> > >>>> On Fri, Jun 11, 2021 at 1:04 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >>>>>\n> > >>>>> +{ oid => '1293', descr => 'expand mutlirange to set of ranges',\n> > >>>>>\n> > >>>>> typo: mutlirange\n> > >>>>\n> > >>>> Fixed, thanks.\n> > >>>>\n> > >>>> The patch with the implementation of both unnest() and cast to array\n> > >>>> is attached. 
It contains both tests and docs.\n> > >>>\n> > >>> |+ The multirange could be explicitly cast to the array of corresponding\n> > >>> should say: \"can be cast to an array of corresponding..\"\n> > >>>\n> > >>> |+ * Cast multirange to the array of ranges.\n> > >>> I think should be: *an array of ranges\n> > >>\n> > >> Thank you for catching this.\n> > >>\n> > >>> Per sqlsmith, this is causing consistent crashes.\n> > >>> I took one of its less appalling queries and simplified it to this:\n> > >>>\n> > >>> select\n> > >>> pg_catalog.multirange_to_array(\n> > >>> cast(pg_catalog.int8multirange() as int8multirange)) as c2\n> > >>> from (select 1)x;\n> > >>\n> > >> It seems that multirange_to_array() doesn't handle empty multiranges.\n> > >> I'll post an updated version of the patch tomorrow.\n> > >\n> > > A revised patch is attached. Now empty multiranges are handled\n> > > properly (and it's covered by tests). Typos are fixed as well.\n> >\n> > Tested both against my original cases using both SQL + PL/pgSQL. All\n> > worked well. I also tested the empty multirange case as well.\n> >\n> > Overall the documentation seems to make sense, I'd suggest:\n> >\n> > + <para>\n> > + The multirange can be cast to an array of corresponding ranges.\n> > + </para>\n> >\n> > becomes:\n> >\n> > + <para>\n> > + A multirange can be cast to an array of ranges of the same type.\n> > + </para>\n>\n> Thank you. This change is incorporated in the attached revision of the patch.\n>\n> This thread gave me another lesson about English articles. Hopefully,\n> I would be able to make progress in future patches :)\n>\n> > Again, I'll defer to others on the code, but this seems to solve the use\n> > case I presented. Thanks for the quick turnaround!\n>\n> Thank you for the feedback!\n\nI've added the commit message to the patch. 
I'm going to push it if\nno objections.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sun, 13 Jun 2021 14:43:36 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On 6/13/21 7:43 AM, Alexander Korotkov wrote:\n> On Sun, Jun 13, 2021 at 2:58 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>> On Sun, Jun 13, 2021 at 1:16 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n\n>>> Again, I'll defer to others on the code, but this seems to solve the use\n>>> case I presented. Thanks for the quick turnaround!\n>>\n>> Thank you for the feedback!\n> \n> I've added the commit message to the patch. I'm going to push it if\n> no objections.\n\nI went ahead and tried testing a few more cases with the patch, and\neverything seems to work as expected.\n\nI did skim through the code -- I'm much less familiar with this part of\nthe system -- and I did not see anything that I would consider \"obvious\nto correct\" from my perspective.\n\nSo I will continue to go with what I said above: no objections on the\nuse case perspective, but I defer to others on the code.\n\nOne question: if I were to make a custom multirange type (e.g. let's say\nI use \"inet\" to make \"inetrange\" and then a \"inetmultirange\") will this\nmethod still work? It seems so, but I wanted clarify.\n\nThanks,\n\nJonathan", "msg_date": "Sun, 13 Jun 2021 08:26:19 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On 6/13/21 8:26 AM, Jonathan S. Katz wrote:\n\n> One question: if I were to make a custom multirange type (e.g. let's say\n> I use \"inet\" to make \"inetrange\" and then a \"inetmultirange\") will this\n> method still work? 
It seems so, but I wanted clarify.\n\nI went ahead and answered this myself: \"yes\":\n\n CREATE TYPE inetrange AS RANGE (SUBTYPE = inet);\n\n SELECT unnest(inetmultirange(inetrange('192.168.1.1', '192.168.1.5'),\ninetrange('192.168.1.7', '192.168.1.10')));\n unnest\n ----------------------------\n [192.168.1.1,192.168.1.5)\n [192.168.1.7,192.168.1.10)\n (2 rows)\n\nAwesome stuff.\n\nJonathan", "msg_date": "Sun, 13 Jun 2021 08:29:38 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Sat, Jun 12, 2021 at 4:58 PM Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Sun, Jun 13, 2021 at 1:16 AM Jonathan S. Katz <jkatz@postgresql.org>\n> wrote:\n> > On 6/12/21 5:57 PM, Alexander Korotkov wrote:\n> > > On Sat, Jun 12, 2021 at 2:44 AM Alexander Korotkov <\n> aekorotkov@gmail.com> wrote:\n> > >> ()On Sat, Jun 12, 2021 at 2:30 AM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n> > >>> On Fri, Jun 11, 2021 at 11:37:58PM +0300, Alexander Korotkov wrote:\n> > >>>> On Fri, Jun 11, 2021 at 1:04 AM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n> > >>>>>\n> > >>>>> +{ oid => '1293', descr => 'expand mutlirange to set of ranges',\n> > >>>>>\n> > >>>>> typo: mutlirange\n> > >>>>\n> > >>>> Fixed, thanks.\n> > >>>>\n> > >>>> The patch with the implementation of both unnest() and cast to array\n> > >>>> is attached. 
It contains both tests and docs.\n> > >>>\n> > >>> |+ The multirange could be explicitly cast to the array of\n> corresponding\n> > >>> should say: \"can be cast to an array of corresponding..\"\n> > >>>\n> > >>> |+ * Cast multirange to the array of ranges.\n> > >>> I think should be: *an array of ranges\n> > >>\n> > >> Thank you for catching this.\n> > >>\n> > >>> Per sqlsmith, this is causing consistent crashes.\n> > >>> I took one of its less appalling queries and simplified it to this:\n> > >>>\n> > >>> select\n> > >>> pg_catalog.multirange_to_array(\n> > >>> cast(pg_catalog.int8multirange() as int8multirange)) as c2\n> > >>> from (select 1)x;\n> > >>\n> > >> It seems that multirange_to_array() doesn't handle empty multiranges.\n> > >> I'll post an updated version of the patch tomorrow.\n> > >\n> > > A revised patch is attached. Now empty multiranges are handled\n> > > properly (and it's covered by tests). Typos are fixed as well.\n> >\n> > Tested both against my original cases using both SQL + PL/pgSQL. All\n> > worked well. I also tested the empty multirange case as well.\n> >\n> > Overall the documentation seems to make sense, I'd suggest:\n> >\n> > + <para>\n> > + The multirange can be cast to an array of corresponding ranges.\n> > + </para>\n> >\n> > becomes:\n> >\n> > + <para>\n> > + A multirange can be cast to an array of ranges of the same type.\n> > + </para>\n>\n> Thank you. This change is incorporated in the attached revision of the\n> patch.\n>\n> This thread gave me another lesson about English articles. Hopefully,\n> I would be able to make progress in future patches :)\n>\n> > Again, I'll defer to others on the code, but this seems to solve the use\n> > case I presented. Thanks for the quick turnaround!\n>\n> Thank you for the feedback!\n>\n> ------\n> Regards,\n> Alexander Korotkov\n>\n\nHi,\n+ A multirange can be cast to an array of ranges of the same type.\n\nI think 'same type' is not very accurate. 
It should be 'of the subtype'.\n\n+   ObjectAddress myself,\n\nnit: myself -> self\n\n+/* Turn multirange into a set of ranges */\n\nset of ranges: sequence of ranges\n\nCheers", "msg_date": "Sun, 13 Jun 2021 07:57:56 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On 6/13/21 10:57 AM, Zhihong Yu wrote:\n> \n> \n> On Sat, Jun 12, 2021 at 4:58 PM Alexander Korotkov <aekorotkov@gmail.com\n> <mailto:aekorotkov@gmail.com>> wrote:\n> \n>     On Sun, Jun 13, 2021 at 1:16 AM Jonathan S. 
Katz\n> <jkatz@postgresql.org <mailto:jkatz@postgresql.org>> wrote:\n> > On 6/12/21 5:57 PM, Alexander Korotkov wrote:\n> > > On Sat, Jun 12, 2021 at 2:44 AM Alexander Korotkov\n> <aekorotkov@gmail.com <mailto:aekorotkov@gmail.com>> wrote:\n> > >> ()On Sat, Jun 12, 2021 at 2:30 AM Justin Pryzby\n> <pryzby@telsasoft.com <mailto:pryzby@telsasoft.com>> wrote:\n> > >>> On Fri, Jun 11, 2021 at 11:37:58PM +0300, Alexander Korotkov\n> wrote:\n> > >>>> On Fri, Jun 11, 2021 at 1:04 AM Justin Pryzby\n> <pryzby@telsasoft.com <mailto:pryzby@telsasoft.com>> wrote:\n> > >>>>>\n> > >>>>> +{ oid => '1293', descr => 'expand mutlirange to set of ranges',\n> > >>>>>\n> > >>>>> typo: mutlirange\n> > >>>>\n> > >>>> Fixed, thanks.\n> > >>>>\n> > >>>> The patch with the implementation of both unnest() and cast\n> to array\n> > >>>> is attached.  It contains both tests and docs.\n> > >>>\n> > >>> |+   The multirange could be explicitly cast to the array of\n> corresponding\n> > >>> should say: \"can be cast to an array of corresponding..\"\n> > >>>\n> > >>> |+ * Cast multirange to the array of ranges.\n> > >>> I think should be: *an array of ranges\n> > >>\n> > >> Thank you for catching this.\n> > >>\n> > >>> Per sqlsmith, this is causing consistent crashes.\n> > >>> I took one of its less appalling queries and simplified it to\n> this:\n> > >>>\n> > >>> select\n> > >>> pg_catalog.multirange_to_array(\n> > >>>     cast(pg_catalog.int8multirange() as int8multirange)) as c2\n> > >>> from (select 1)x;\n> > >>\n> > >> It seems that multirange_to_array() doesn't handle empty\n> multiranges.\n> > >> I'll post an updated version of the patch tomorrow.\n> > >\n> > > A revised patch is attached.  Now empty multiranges are handled\n> > > properly (and it's covered by tests).  Typos are fixed as well.\n> >\n> > Tested both against my original cases using both SQL + PL/pgSQL. All\n> > worked well. 
I also tested the empty multirange case as well.\n> >\n> > Overall the documentation seems to make sense, I'd suggest:\n> >\n> > +  <para>\n> > +   The multirange can be cast to an array of corresponding ranges.\n> > +  </para>\n> >\n> > becomes:\n> >\n> > +  <para>\n> > +   A multirange can be cast to an array of ranges of the same type.\n> > +  </para>\n> \n> Thank you. This change is incorporated in the attached revision of\n> the patch.\n> \n> This thread gave me another lesson about English articles.  Hopefully,\n> I would be able to make progress in future patches :)\n> \n> > Again, I'll defer to others on the code, but this seems to solve\n> the use\n> > case I presented. Thanks for the quick turnaround!\n> \n> Thank you for the feedback!\n> \n> ------\n> Regards,\n> Alexander Korotkov\n> \n> \n> Hi,\n> +   A multirange can be cast to an array of ranges of the same type.\n> \n> I think 'same type' is not very accurate. It should be 'of the subtype'.\n\nI think that's more technically correct, but it could be confusing to\nthe user. There is an example next to it that shows how this function\nworks, i.e. it returns the type of range that is represented by the\nmultirange.\n\n> +   ObjectAddress myself,\n> \n> nit: myself -> self\n> \n> +/* Turn multirange into a set of ranges */\n> \n> set of ranges: sequence of ranges\n\nI believe \"set of ranges\" is accurate here, as the comparable return is\na \"SETOF rangetype\". Sequences are objects unto themselves.\n\nJonathan", "msg_date": "Sun, 13 Jun 2021 11:25:05 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Sun, Jun 13, 2021 at 11:25:05AM -0400, Jonathan S. 
Katz wrote:\n> On 6/13/21 10:57 AM, Zhihong Yu wrote:\n> > +/* Turn multirange into a set of ranges */\n> > \n> > set of ranges: sequence of ranges\n> \n> I believe \"set of ranges\" is accurate here, as the comparable return is\n> a \"SETOF rangetype\". Sequences are objects unto themselves.\n> \n\nI believe the point was that (in mathematics) a \"set\" is unordered, and a\nsequence is ordered. Also, a \"setof\" tuples in postgres can contain\nduplicates.\n\nThe docs say \"The ranges are read out in storage order (ascending).\", so I\nthink this is just a confusion between what \"set\" means in math vs in postgres.\n\nIn postgres, \"sequence\" usually refers to the object that generarates a\nsequence:\n| CREATE SEQUENCE creates a new sequence number generator.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 13 Jun 2021 10:49:07 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On 6/13/21 11:49 AM, Justin Pryzby wrote:\n> On Sun, Jun 13, 2021 at 11:25:05AM -0400, Jonathan S. Katz wrote:\n>> On 6/13/21 10:57 AM, Zhihong Yu wrote:\n>>> +/* Turn multirange into a set of ranges */\n>>>\n>>> set of ranges: sequence of ranges\n>>\n>> I believe \"set of ranges\" is accurate here, as the comparable return is\n>> a \"SETOF rangetype\". Sequences are objects unto themselves.\n>>\n> \n> I believe the point was that (in mathematics) a \"set\" is unordered, and a\n> sequence is ordered. 
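Whatever one calls it, the output order itself is well-defined: unnest() reads a multirange's ranges back in ascending storage order. A quick illustrative sketch (the literal and the int4multirange type are example choices; assumes PostgreSQL 14 or later, where unnest(anymultirange) landed):

```sql
-- unnest() expands a multirange into a set of ranges.
-- Multirange input is normalized on the way in, so the ranges
-- come back in ascending storage order regardless of how the
-- literal was written.
SELECT unnest('{[5,6),[1,2)}'::int4multirange);
--  unnest
-- --------
--  [1,2)
--  [5,6)
```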
Also, a \"setof\" tuples in postgres can contain\n> duplicates.\n\nThe comment in question is part of the header for the\n\"multirange_unnest\" function in the code and AFAICT it is accurate: it\nis returning a \"set of\" ranges as it's literally calling into the\nset-returning function framework.\n\nI would suggest leaving it as is.\n\n> The docs say \"The ranges are read out in storage order (ascending).\", so I\n> think this is just a confusion between what \"set\" means in math vs in postgres.\n\nThis is nearly identical to the language in the array unnest[1]\nfunction, which is what I believed Alexander borrowed from:\n\n\"Expands an array into a set of rows. The array's elements are read out\nin storage order.\"\n\nIf we tweaked the multirange \"unnest\" function, we could change it to:\n\n+ <para>\n+ Expands a multirange into a set of rows.\n+ The ranges are read out in storage order (ascending).\n+ </para>\n\nto match what the array \"unnest\" function docs, or\n\n+ <para>\n+ Expands a multirange into a set of rows that each\n+ contain an individual range.\n+ The ranges are read out in storage order (ascending).\n+ </para>\n\nto be a bit more specific. However, I think this is also bordering on\noverengineering the text, given there has been a lack of feedback on the\n\"unnest\" array function description being confusing.\n\nThanks,\n\nJonathan\n\n[1] https://www.postgresql.org/docs/current/functions-array.html", "msg_date": "Sun, 13 Jun 2021 14:46:36 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Sun, Jun 13, 2021 at 5:53 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> + ObjectAddress myself,\n>\n> nit: myself -> self\n\nProbably \"self\" is a better name than \"myself\" in this context.\nHowever, you can see that the surrounding code already uses the name\n\"myself\". 
Therefore, I prefer to stay with \"myself\".\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Mon, 14 Jun 2021 00:10:32 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Sun, Jun 13, 2021 at 9:46 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>\n> On 6/13/21 11:49 AM, Justin Pryzby wrote:\n> > On Sun, Jun 13, 2021 at 11:25:05AM -0400, Jonathan S. Katz wrote:\n> >> On 6/13/21 10:57 AM, Zhihong Yu wrote:\n> >>> +/* Turn multirange into a set of ranges */\n> >>>\n> >>> set of ranges: sequence of ranges\n> >>\n> >> I believe \"set of ranges\" is accurate here, as the comparable return is\n> >> a \"SETOF rangetype\". Sequences are objects unto themselves.\n> >>\n> >\n> > I believe the point was that (in mathematics) a \"set\" is unordered, and a\n> > sequence is ordered. Also, a \"setof\" tuples in postgres can contain\n> > duplicates.\n>\n> The comment in question is part of the header for the\n> \"multirange_unnest\" function in the code and AFAICT it is accurate: it\n> is returning a \"set of\" ranges as it's literally calling into the\n> set-returning function framework.\n>\n> I would suggest leaving it as is.\n\n+1\n\n> > The docs say \"The ranges are read out in storage order (ascending).\", so I\n> > think this is just a confusion between what \"set\" means in math vs in postgres.\n>\n> This is nearly identical to the language in the array unnest[1]\n> function, which is what I believed Alexander borrowed from:\n\nYes, that's it! :)\n\n> \"Expands an array into a set of rows. 
The array's elements are read out\n> in storage order.\"\n>\n> If we tweaked the multirange \"unnest\" function, we could change it to:\n>\n> + <para>\n> + Expands a multirange into a set of rows.\n> + The ranges are read out in storage order (ascending).\n> + </para>\n>\n> to match what the array \"unnest\" function docs, or\n>\n> + <para>\n> + Expands a multirange into a set of rows that each\n> + contain an individual range.\n> + The ranges are read out in storage order (ascending).\n> + </para>\n>\n> to be a bit more specific. However, I think this is also bordering on\n> overengineering the text, given there has been a lack of feedback on the\n> \"unnest\" array function description being confusing.\n\nI think it's not necessarily to say about rows here. Our\ndocumentation already has already a number of examples, where we\ndescribe set of returned values without speaking about rows including:\njson_array_elements, json_array_elements_text, json_object_keys,\npg_listening_channels, pg_tablespace_databases...\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Mon, 14 Jun 2021 00:18:48 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Sun, Jun 13, 2021 at 2:10 PM Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Sun, Jun 13, 2021 at 5:53 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > + ObjectAddress myself,\n> >\n> > nit: myself -> self\n>\n> Probably \"self\" is a better name than \"myself\" in this context.\n> However, you can see that the surrounding code already uses the name\n> \"myself\". 
Therefore, I prefer to stay with \"myself\".\n>\n> ------\n> Regards,\n> Alexander Korotkov\n>\n\nHi,\nIs it Okay if I submit a patch changing the 'myself's to 'self' ?\n\nCheers", "msg_date": "Sun, 13 Jun 2021 18:36:42 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Sun, Jun 13, 2021 at 06:36:42PM -0700, Zhihong Yu wrote:\n> On Sun, Jun 13, 2021 at 2:10 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > On Sun, Jun 13, 2021 at 5:53 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > > + ObjectAddress myself,\n> > >\n> > > nit: myself -> self\n> >\n> > Probably \"self\" is a better name than \"myself\" in this context.\n> > However, you can see that the surrounding code already uses the name\n> > \"myself\". 
Therefore, I prefer to stay with \"myself\".\n>\n> Is it Okay if I submit a patch changing the 'myself's to 'self' ?\n\nI think it's too nit-picky to be useful and too much like busy-work.\n\nThe patch wouldn't be applied to backbranches, and the divergence complicates\nfuture backpatches, and can create the possibility to introduce errors.\n\nI already looked for and reported typos introduced in v14, but I can almost\npromise that if someone looks closely at the documentation changes there are\nmore errors to be found, even without testing that the code behaves as\nadvertised.\n\nYou can look for patches which changed docs in v14 like so:\ngit log -p --cherry-pick --stat origin/REL_13_STABLE...origin/master -- doc\n\nBut I recommend reading the changes to documentation in HTML/PDF, since it's\neasy to miss errors while reading SGML.\nhttps://www.postgresql.org/docs/devel/index.html\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 13 Jun 2021 21:29:02 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On 6/13/21 5:18 PM, Alexander Korotkov wrote:\n\n>> \"Expands an array into a set of rows. The array's elements are read out\n>> in storage order.\"\n>>\n>> If we tweaked the multirange \"unnest\" function, we could change it to:\n>>\n>> + <para>\n>> + Expands a multirange into a set of rows.\n>> + The ranges are read out in storage order (ascending).\n>> + </para>\n>>\n>> to match what the array \"unnest\" function docs, or\n>>\n>> + <para>\n>> + Expands a multirange into a set of rows that each\n>> + contain an individual range.\n>> + The ranges are read out in storage order (ascending).\n>> + </para>\n>>\n>> to be a bit more specific. 
However, I think this is also bordering on\n>> overengineering the text, given there has been a lack of feedback on the\n>> \"unnest\" array function description being confusing.\n> \n> I think it's not necessarily to say about rows here. Our\n> documentation already has already a number of examples, where we\n> describe set of returned values without speaking about rows including:\n> json_array_elements, json_array_elements_text, json_object_keys,\n> pg_listening_channels, pg_tablespace_databases...\n\nI do agree -- my main point was that I don't think we need to change\nanything. I proposed alternatives just to show some other ways of\nlooking at it. But as I mentioned, at this point I think it's\noverengineering the text.\n\nIf folks are good with the method + code, I think this is ready.\n\nThanks,\n\nJonathan", "msg_date": "Mon, 14 Jun 2021 08:50:01 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Mon, Jun 14, 2021 at 3:50 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> On 6/13/21 5:18 PM, Alexander Korotkov wrote:\n>\n> >> \"Expands an array into a set of rows. The array's elements are read out\n> >> in storage order.\"\n> >>\n> >> If we tweaked the multirange \"unnest\" function, we could change it to:\n> >>\n> >> + <para>\n> >> + Expands a multirange into a set of rows.\n> >> + The ranges are read out in storage order (ascending).\n> >> + </para>\n> >>\n> >> to match what the array \"unnest\" function docs, or\n> >>\n> >> + <para>\n> >> + Expands a multirange into a set of rows that each\n> >> + contain an individual range.\n> >> + The ranges are read out in storage order (ascending).\n> >> + </para>\n> >>\n> >> to be a bit more specific. 
However, I think this is also bordering on\n> >> overengineering the text, given there has been a lack of feedback on the\n> >> \"unnest\" array function description being confusing.\n> >\n> > I think it's not necessarily to say about rows here. Our\n> > documentation already has already a number of examples, where we\n> > describe set of returned values without speaking about rows including:\n> > json_array_elements, json_array_elements_text, json_object_keys,\n> > pg_listening_channels, pg_tablespace_databases...\n>\n> I do agree -- my main point was that I don't think we need to change\n> anything. I proposed alternatives just to show some other ways of\n> looking at it. But as I mentioned, at this point I think it's\n> overengineering the text.\n>\n> If folks are good with the method + code, I think this is ready.\n\nCool, thank you for the summary. I'll wait for two days since I've\npublished the last revision of the patch [1] (comes tomorrow), and\npush it if no new issues arise.\n\nLinks\n1. https://www.postgresql.org/message-id/CAPpHfdvG%3DJR5kqmZx7KvTyVgtQePX0QJ09TO1y3sN73WOfJf1Q%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Mon, 14 Jun 2021 16:14:40 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Mon, Jun 14, 2021 at 4:14 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Mon, Jun 14, 2021 at 3:50 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> > On 6/13/21 5:18 PM, Alexander Korotkov wrote:\n> >\n> > >> \"Expands an array into a set of rows. 
The array's elements are read out\n> > >> in storage order.\"\n> > >>\n> > >> If we tweaked the multirange \"unnest\" function, we could change it to:\n> > >>\n> > >> + <para>\n> > >> + Expands a multirange into a set of rows.\n> > >> + The ranges are read out in storage order (ascending).\n> > >> + </para>\n> > >>\n> > >> to match what the array \"unnest\" function docs, or\n> > >>\n> > >> + <para>\n> > >> + Expands a multirange into a set of rows that each\n> > >> + contain an individual range.\n> > >> + The ranges are read out in storage order (ascending).\n> > >> + </para>\n> > >>\n> > >> to be a bit more specific. However, I think this is also bordering on\n> > >> overengineering the text, given there has been a lack of feedback on the\n> > >> \"unnest\" array function description being confusing.\n> > >\n> > > I think it's not necessarily to say about rows here. Our\n> > > documentation already has already a number of examples, where we\n> > > describe set of returned values without speaking about rows including:\n> > > json_array_elements, json_array_elements_text, json_object_keys,\n> > > pg_listening_channels, pg_tablespace_databases...\n> >\n> > I do agree -- my main point was that I don't think we need to change\n> > anything. I proposed alternatives just to show some other ways of\n> > looking at it. But as I mentioned, at this point I think it's\n> > overengineering the text.\n> >\n> > If folks are good with the method + code, I think this is ready.\n>\n> Cool, thank you for the summary. I'll wait for two days since I've\n> published the last revision of the patch [1] (comes tomorrow), and\n> push it if no new issues arise.\n\nPushed! Thanks to thread participants for raising this topic and\nreview. 
I'll be around to resolve issues if any.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 15 Jun 2021 16:10:21 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> Pushed! Thanks to thread participants for raising this topic and\n> review. I'll be around to resolve issues if any.\n\nBuildfarm is pretty thoroughly unhappy. Did you do a \"check-world\"\nbefore pushing?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Jun 2021 09:49:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Tue, Jun 15, 2021 at 4:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > Pushed! Thanks to thread participants for raising this topic and\n> > review. I'll be around to resolve issues if any.\n>\n> Buildfarm is pretty thoroughly unhappy. Did you do a \"check-world\"\n> before pushing?\n\nYes, I'm looking at this now.\n\nI did run \"check-world\", but it passed for me. Probably the same\nreason it passed for some buildfarm animals...\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 15 Jun 2021 19:06:50 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> I did run \"check-world\", but it passed for me. 
Probably the same\n> reason it passed for some buildfarm animals...\n\nThe only buildfarm animals that have passed since this went in\nare the ones that don't run the pg_dump or pg_upgrade tests.\n\nIt looks to me like the proximate problem is that you should\nhave taught pg_dump to skip these new auto-generated functions.\nHowever, I fail to see why we need auto-generated functions\nfor this at all. Couldn't we have done it with one polymorphic\nfunction?\n\nI think this ought to be reverted and reviewed more carefully.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Jun 2021 13:18:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On 2021-Jun-15, Tom Lane wrote:\n\n> It looks to me like the proximate problem is that you should\n> have taught pg_dump to skip these new auto-generated functions.\n> However, I fail to see why we need auto-generated functions\n> for this at all. Couldn't we have done it with one polymorphic\n> function?\n\nI think such a function would need to take anycompatiblerangearray,\nwhich is not something we currently have.\n\n> I think this ought to be reverted and reviewed more carefully.\n\nIt seems to me that removing the cast-to-range[] is a sufficient fix,\nand that we can do with only the unnest part for pg14; the casts can be\nadded in 15 (if at all). 
That would mean reverting only half the patch.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Tue, 15 Jun 2021 13:28:27 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Jun-15, Tom Lane wrote:\n>> I think this ought to be reverted and reviewed more carefully.\n\n> It seems to me that removing the cast-to-range[] is a sufficient fix,\n> and that we can do with only the unnest part for pg14; the casts can be\n> added in 15 (if at all). 
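Keeping only the unnest part still leaves a route to an array of ranges: aggregate the unnested elements. A hedged sketch of that workaround (sample values are illustrative; assumes PostgreSQL 14's unnest(anymultirange) and the core array_agg aggregate):

```sql
-- Without a multirange-to-range[] cast, build the array from
-- the set returned by unnest(); the aggregate result is typed
-- as int4range[] rather than integer[].
SELECT array_agg(r ORDER BY r) AS ranges
FROM unnest('{[1,2),[5,6)}'::int4multirange) AS r;
--       ranges
-- -------------------
--  {"[1,2)","[5,6)"}
```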
I did a couple of casts and had no issue;\nI did not test with pg_dump / pg_upgrade, but noting to do so in the\nfuture in cases like this.\n\n> I think \"revert and reconsider\" is the way\n> forward for today.\n\nI don't want the buildfarm broken so I'm fine if this is the best way\nforward. If we can keep the \"unnest\" functionality I would strongly\nsuggest it as that was the premise of the original note to complete the\nutility of multiranges for v14. The casting, while convenient, is not\nneeded.\n\nJonathan", "msg_date": "Tue, 15 Jun 2021 13:59:38 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "\nOn 6/15/21 12:06 PM, Alexander Korotkov wrote:\n> On Tue, Jun 15, 2021 at 4:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Alexander Korotkov <aekorotkov@gmail.com> writes:\n>>> Pushed! Thanks to thread participants for raising this topic and\n>>> review. I'll be around to resolve issues if any.\n>> Buildfarm is pretty thoroughly unhappy. Did you do a \"check-world\"\n>> before pushing?\n> Yes, I'm looking at this now.\n>\n> I did run \"check-world\", but it passed for me. Probably the same\n> reason it passed for some buildfarm animals...\n>\n\nDid you configure with --enable-tap-tests? If not, then `make\ncheck-world` won't run the tests that are failing here.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 15 Jun 2021 14:43:51 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Tue, Jun 15, 2021 at 8:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > I did run \"check-world\", but it passed for me. 
Probably the same\n> > reason it passed for some buildfarm animals...\n>\n> The only buildfarm animals that have passed since this went in\n> are the ones that don't run the pg_dump or pg_upgrade tests.\n>\n> It looks to me like the proximate problem is that you should\n> have taught pg_dump to skip these new auto-generated functions.\n> However, I fail to see why we need auto-generated functions\n> for this at all. Couldn't we have done it with one polymorphic\n> function?\n>\n> I think this ought to be reverted and reviewed more carefully.\n\nThank you for your feedback. I've reverted the patch.\n\nI'm going to have closer look at this tomorrow.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 15 Jun 2021 21:46:26 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "\nOn 6/15/21 1:52 PM, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> On 2021-Jun-15, Tom Lane wrote:\n>>> I think this ought to be reverted and reviewed more carefully.\n>> It seems to me that removing the cast-to-range[] is a sufficient fix,\n>> and that we can do with only the unnest part for pg14; the casts can be\n>> added in 15 (if at all). That would mean reverting only half the patch.\n> Might be a reasonable solution. But right now I'm annoyed that the\n> buildfarm is broken, and I'm also convinced that this didn't get\n> adequate testing. I think \"revert and reconsider\" is the way\n> forward for today.\n>\n> \t\n\n\n\n(RMT hat on) That would be my inclination at this stage. 
The commit\nmessage states that it's trivial, but it seems not to be, and I suspect\nit should not have been done at this stage of the development cycle.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 15 Jun 2021 14:47:40 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Tue, Jun 15, 2021 at 8:28 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2021-Jun-15, Tom Lane wrote:\n>\n> > It looks to me like the proximate problem is that you should\n> > have taught pg_dump to skip these new auto-generated functions.\n> > However, I fail to see why we need auto-generated functions\n> > for this at all. Couldn't we have done it with one polymorphic\n> > function?\n>\n> I think such a function would need to take anycompatiblerangearray,\n> which is not something we currently have.\n\nYes, I've started with polymorphic function\nmultirange_to_array(anymultirange) returning anyarray. But then I got\nthat for int4multirange return type Is integer[] instead of\nint4range[] :)\n\n# select pg_typeof(multirange_to_array('{[1,2),[5,6)}'::int4multirange));\n pg_typeof\n-----------\n integer[]\n(1 row)\n\nSo, a new anyrangearray polymorphic type is required for this\nfunction. Not sure if it worth it to introduce a new polymorphic\nfunction for this use case.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 16 Jun 2021 15:19:08 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Tue, Jun 15, 2021 at 9:43 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 6/15/21 12:06 PM, Alexander Korotkov wrote:\n> > On Tue, Jun 15, 2021 at 4:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> >>> Pushed! 
Thanks to thread participants for raising this topic and\n> >>> review. I'll be around to resolve issues if any.\n> >> Buildfarm is pretty thoroughly unhappy. Did you do a \"check-world\"\n> >> before pushing?\n> > Yes, I'm looking at this now.\n> >\n> > I did run \"check-world\", but it passed for me. Probably the same\n> > reason it passed for some buildfarm animals...\n> >\n>\n> Did you configure with --enable-tap-tests? If not, then `make\n> check-world` won't run the tests that are failing here.\n\nI've rechecked that check-world actually fails on my machine on that\ncommit. I definitely configured with --enable-tap-tests. So, it\nappears that I just did something wrong (run make check-world on a\ndifferent branch or something like that). Sorry for that. I'll\ndouble-check in the future.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 16 Jun 2021 15:35:36 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Tue, Jun 15, 2021 at 8:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > I did run \"check-world\", but it passed for me. Probably the same\n> > reason it passed for some buildfarm animals...\n>\n> It looks to me like the proximate problem is that you should\n> have taught pg_dump to skip these new auto-generated functions.\n\nYes, it appears that pg_dump skips auto-generated functions, but\ndoesn't skip auto-generated casts. It appears to be enough to tune\nquery getCasts() to resolve the issue. 
The revised patch is attached.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Wed, 16 Jun 2021 15:44:29 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Wed, Jun 16, 2021 at 3:44 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Tue, Jun 15, 2021 at 8:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > > I did run \"check-world\", but it passed for me. Probably the same\n> > > reason it passed for some buildfarm animals...\n> >\n> > It looks to me like the proximate problem is that you should\n> > have taught pg_dump to skip these new auto-generated functions.\n>\n> Yes, it appears that pg_dump skips auto-generated functions, but\n> doesn't skip auto-generated casts. It appears to be enough to tune\n> query getCasts() to resolve the issue. The revised patch is attached.\n\nHere is the next revision of the patch: I've adjusted some comments.\n\nIn my point of view this patch is not actually complex. The reason\nwhy it colored buildfarm in red is purely my fault: I messed up with\n\"make check-world\" :(\n\nI've registered it on the commitfest application to make it go through\ncommitfest.cputube.org. My proposal is to re-push it once it goes\nthrough commitfest.cputube.org.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Thu, 17 Jun 2021 19:54:13 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Thu, Jun 17, 2021 at 7:54 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Wed, Jun 16, 2021 at 3:44 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > On Tue, Jun 15, 2021 at 8:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > > > I did run \"check-world\", but it passed for me. 
Probably the same\n> > > > reason it passed for some buildfarm animals...\n> > >\n> > > It looks to me like the proximate problem is that you should\n> > > have taught pg_dump to skip these new auto-generated functions.\n> >\n> > Yes, it appears that pg_dump skips auto-generated functions, but\n> > doesn't skip auto-generated casts. It appears to be enough to tune\n> > query getCasts() to resolve the issue. The revised patch is attached.\n>\n> Here is the next revision of the patch: I've adjusted some comments.\n>\n> In my point of view this patch is not actually complex. The reason\n> why it colored buildfarm in red is purely my fault: I messed up with\n> \"make check-world\" :(\n>\n> I've registered it on the commitfest application to make it go through\n> commitfest.cputube.org. My proposal is to re-push it once it goes\n> through commitfest.cputube.org.\n\nPatch successfully passed commitfest.cputube.org. I'm going to\nre-push it if there are no objections.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 17 Jun 2021 22:28:36 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> Patch successfully passed commitfest.cputube.org. I'm going to\n> re-push it if there are no objections.\n\nI'm still not happy about the way you've done the multirange-to-array\npart. I think we'd be better off improving the polymorphism rules so\nthat that can be handled by one polymorphic function. Obviously that'd\nbe a task for v15, but we've already concluded that just having the\nunnest ability would be minimally sufficient for v14.\n\nSo I think you should trim it down to just the unnest part.\n\nIn any case, beta2 wraps on Monday, and there is very little time\nleft for a full round of buildfarm testing. I almost feel that\nit's too late to consider pushing this today. 
Tomorrow absolutely\nis too late for beta2.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 19 Jun 2021 12:35:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Sat, Jun 19, 2021 at 7:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > Patch successfully passed commitfest.cputube.org. I'm going to\n> > re-push it if there are no objections.\n>\n> I'm still not happy about the way you've done the multirange-to-array\n> part. I think we'd be better off improving the polymorphism rules so\n> that that can be handled by one polymorphic function. Obviously that'd\n> be a task for v15, but we've already concluded that just having the\n> unnest ability would be minimally sufficient for v14.\n>\n> So I think you should trim it down to just the unnest part.\n\nI'm not entirely sure it's worth introducing anyrangearray. There\nmight be not many use-cases of anyrangearray besides this cast\n(normally one should use multirange instead of an array of ranges).\nBut I agree that this subject should be carefully considered for v15.\n\n> In any case, beta2 wraps on Monday, and there is very little time\n> left for a full round of buildfarm testing. I almost feel that\n> it's too late to consider pushing this today. Tomorrow absolutely\n> is too late for beta2.\n\n+1\nI also don't feel comfortable hurrying with unnest part to beta2.\nAccording to the open items wiki page, there should be beta3. 
Does\nunnest part have a chance for beta3?\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sun, 20 Jun 2021 04:12:56 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> I also don't feel comfortable hurrying with unnest part to beta2.\n> According to the open items wiki page, there should be beta3. Does\n> unnest part have a chance for beta3?\n\nHm. I'd prefer to avoid another forced initdb after beta2. On the\nother hand, it's entirely likely that there will be some other thing\nthat forces that; in which case there'd be no reason not to push in\nthe unnest feature as well.\n\nI'd say let's sit on the unnest code for a little bit and see what\nhappens.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 19 Jun 2021 22:05:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Sat, Jun 19, 2021 at 10:05:09PM -0400, Tom Lane wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > I also don't feel comfortable hurrying with unnest part to beta2.\n> > According to the open items wiki page, there should be beta3. Does\n> > unnest part have a chance for beta3?\n> \n> Hm. I'd prefer to avoid another forced initdb after beta2. On the\n> other hand, it's entirely likely that there will be some other thing\n> that forces that; in which case there'd be no reason not to push in\n> the unnest feature as well.\n> \n> I'd say let's sit on the unnest code for a little bit and see what\n> happens.\n\nI think $SUBJECT can't simultaneously offer too little to justify its own\ncatversion bump and also offer enough to bypass feature freeze. If multirange\nis good without $SUBJECT, then $SUBJECT should wait for v15. 
Otherwise, the\nmatter of the catversion bump should not delay commit of $SUBJECT.\n\n\n", "msg_date": "Sun, 20 Jun 2021 01:09:21 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Sun, Jun 20, 2021 at 11:09 AM Noah Misch <noah@leadboat.com> wrote:\n> On Sat, Jun 19, 2021 at 10:05:09PM -0400, Tom Lane wrote:\n> > Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > > I also don't feel comfortable hurrying with unnest part to beta2.\n> > > According to the open items wiki page, there should be beta3. Does\n> > > unnest part have a chance for beta3?\n> >\n> > Hm. I'd prefer to avoid another forced initdb after beta2. On the\n> > other hand, it's entirely likely that there will be some other thing\n> > that forces that; in which case there'd be no reason not to push in\n> > the unnest feature as well.\n> >\n> > I'd say let's sit on the unnest code for a little bit and see what\n> > happens.\n>\n> I think $SUBJECT can't simultaneously offer too little to justify its own\n> catversion bump and also offer enough to bypass feature freeze. If multirange\n> is good without $SUBJECT, then $SUBJECT should wait for v15. 
Otherwise, the\n> matter of the catversion bump should not delay commit of $SUBJECT.\n\nFWIW, there is a patch implementing just unnest() function.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Mon, 21 Jun 2021 01:24:18 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Mon, Jun 21, 2021 at 1:24 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Sun, Jun 20, 2021 at 11:09 AM Noah Misch <noah@leadboat.com> wrote:\n> > On Sat, Jun 19, 2021 at 10:05:09PM -0400, Tom Lane wrote:\n> > > Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > > > I also don't feel comfortable hurrying with unnest part to beta2.\n> > > > According to the open items wiki page, there should be beta3. Does\n> > > > unnest part have a chance for beta3?\n> > >\n> > > Hm. I'd prefer to avoid another forced initdb after beta2. On the\n> > > other hand, it's entirely likely that there will be some other thing\n> > > that forces that; in which case there'd be no reason not to push in\n> > > the unnest feature as well.\n> > >\n> > > I'd say let's sit on the unnest code for a little bit and see what\n> > > happens.\n> >\n> > I think $SUBJECT can't simultaneously offer too little to justify its own\n> > catversion bump and also offer enough to bypass feature freeze. If multirange\n> > is good without $SUBJECT, then $SUBJECT should wait for v15. Otherwise, the\n> > matter of the catversion bump should not delay commit of $SUBJECT.\n>\n> FWIW, there is a patch implementing just unnest() function.\n\nBTW, I found some small inconsistencies in the declaration of\nmultirange operators in the system catalog. 
Nothing critical, but if\nwe decide to bump catversion in beta3, this patch is also nice to\npush.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sun, 27 Jun 2021 02:35:48 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On 2021-Jun-27, Alexander Korotkov wrote:\n\n> BTW, I found some small inconsistencies in the declaration of\n> multirange operators in the system catalog. Nothing critical, but if\n> we decide to bump catversion in beta3, this patch is also nice to\n> push.\n\nHmm, I think you should push this and not bump catversion. That way,\nnobody is forced to initdb if we end up not having a catversion bump for\nsome other reason; but also anybody who initdb's with beta3 or later\nwill get the correct descriptions.\n\nIf you don't push it, everybody will have the wrong descriptions.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sat, 10 Jul 2021 12:34:19 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Sat, Jul 10, 2021 at 7:34 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2021-Jun-27, Alexander Korotkov wrote:\n>\n> > BTW, I found some small inconsistencies in the declaration of\n> > multirange operators in the system catalog. Nothing critical, but if\n> > we decide to bump catversion in beta3, this patch is also nice to\n> > push.\n>\n> Hmm, I think you should push this and not bump catversion. That way,\n> nobody is forced to initdb if we end up not having a catversion bump for\n> some other reason; but also anybody who initdb's with beta3 or later\n> will get the correct descriptions.\n>\n> If you don't push it, everybody will have the wrong descriptions.\n\nTrue, but I'm a bit uncomfortable about user instances with different\ncatalogs but the same catversions. 
On the other hand, initdb's with\nbeta3 or later will be the vast majority among pg14 instances.\n\nDid we have similar precedents in the past?\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sun, 11 Jul 2021 01:00:27 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Sun, Jul 11, 2021 at 01:00:27AM +0300, Alexander Korotkov wrote:\n> On Sat, Jul 10, 2021 at 7:34 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > On 2021-Jun-27, Alexander Korotkov wrote:\n> >\n> > > BTW, I found some small inconsistencies in the declaration of\n> > > multirange operators in the system catalog. Nothing critical, but if\n> > > we decide to bump catversion in beta3, this patch is also nice to\n> > > push.\n> >\n> > Hmm, I think you should push this and not bump catversion. That way,\n> > nobody is forced to initdb if we end up not having a catversion bump for\n> > some other reason; but also anybody who initdb's with beta3 or later\n> > will get the correct descriptions.\n> >\n> > If you don't push it, everybody will have the wrong descriptions.\n> \n> True, but I'm a bit uncomfortable about user instances with different\n> catalogs but the same catversions. 
On the other hand, initdb's with\n> beta3 or later will be the vast majority among pg14 instances.\n> \n> Did we have similar precedents in the past?\n\nIt seems so.\n\nNote in particular 74ab96a45, which adds a new function with no bump.\nAlthough that one may not be a good precedent to follow, or one that's been\nfollowed recently.\n\ncommit 0aac73e6a2602696d23aa7a9686204965f9093dc\nAuthor: Noah Misch <noah@leadboat.com>\nDate: Mon Jun 14 17:29:37 2021 -0700\n\n Copy-edit text for the pg_terminate_backend() \"timeout\" parameter.\n\ncommit b09a64d602a19c9a3cc69e4bb0f8986a6f5facf4\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Thu Sep 20 16:06:18 2018 -0400\n\n Add missing pg_description strings for pg_type entries.\n\ncommit a4627e8fd479ff74fffdd49ad07636b79751be45\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Tue Feb 2 11:39:50 2016 -0500\n\n Fix pg_description entries for jsonb_to_record() and jsonb_to_recordset().\n\ncommit b852dc4cbd09156e2c74786d5b265f03d45bc404\nAuthor: Bruce Momjian <bruce@momjian.us>\nDate: Wed Oct 7 09:06:49 2015 -0400\n\n docs: clarify JSONB operator descriptions\n\ncommit a80889a7359e720107b881bcd3e8fd47f3874e36\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Wed Oct 10 12:19:25 2012 -0400\n\n Set procost to 10 for each of the pg_foo_is_visible() functions.\n\ncommit c246eb5aafe66d5537b468d6da2116c462775faf\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Sat Aug 18 16:14:57 2012 -0400\n\n Make use of LATERAL in information_schema.sequences view.\n\ncommit 74ab96a45ef6259aa6a86a781580edea8488511a\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Wed Jan 25 13:15:29 2012 -0300\n\n Add pg_trigger_depth() function\n\ncommit ddd6ff289f2512f881493b7793853a96955459ff\nAuthor: Bruce Momjian <bruce@momjian.us>\nDate: Tue Mar 15 11:26:20 2011 -0400\n\n Add database comments to template0 and postgres databases, and improve\n\n\n", "msg_date": "Sat, 10 Jul 2021 17:20:21 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": 
false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Sun, Jul 11, 2021 at 01:00:27AM +0300, Alexander Korotkov wrote:\n>> True, but I'm a bit uncomfortable about user instances with different\n>> catalogs but the same catversions. On the other hand, initdb's with\n>> beta3 or later will be the vast majority among pg14 instances.\n>> \n>> Did we have similar precedents in the past?\n\n> It seems so.\n\nIf it's *only* the description strings you want to change, then yeah,\nwe've done that before.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 10 Jul 2021 18:28:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": " On Sun, Jul 11, 2021 at 1:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Sun, Jul 11, 2021 at 01:00:27AM +0300, Alexander Korotkov wrote:\n> >> True, but I'm a bit uncomfortable about user instances with different\n> >> catalogs but the same catversions. On the other hand, initdb's with\n> >> beta3 or later will be the vast majority among pg14 instances.\n> >>\n> >> Did we have similar precedents in the past?\n>\n> > It seems so.\n>\n> If it's *only* the description strings you want to change, then yeah,\n> we've done that before.\n\nMy patch also changes 'oprjoin' from 'scalargtjoinsel' to\n'scalarltjoinsel'. 
Implementation is the same, but 'scalarltjoinsel'\nlooks more logical here.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sun, 11 Jul 2021 02:08:58 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Sun, Jul 11, 2021 at 1:20 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Sun, Jul 11, 2021 at 01:00:27AM +0300, Alexander Korotkov wrote:\n> > On Sat, Jul 10, 2021 at 7:34 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > > On 2021-Jun-27, Alexander Korotkov wrote:\n> > >\n> > > > BTW, I found some small inconsistencies in the declaration of\n> > > > multirange operators in the system catalog. Nothing critical, but if\n> > > > we decide to bump catversion in beta3, this patch is also nice to\n> > > > push.\n> > >\n> > > Hmm, I think you should push this and not bump catversion. That way,\n> > > nobody is forced to initdb if we end up not having a catversion bump for\n> > > some other reason; but also anybody who initdb's with beta3 or later\n> > > will get the correct descriptions.\n> > >\n> > > If you don't push it, everybody will have the wrong descriptions.\n> >\n> > True, but I'm a bit uncomfortable about user instances with different\n> > catalogs but the same catversions. On the other hand, initdb's with\n> > beta3 or later will be the vast majority among pg14 instances.\n> >\n> > Did we have similar precedents in the past?\n>\n> It seems so.\n>\n> Note in particular 74ab96a45, which adds a new function with no bump.\n> Although that one may not be a good precedent to follow, or one that's been\n> followed recently.\n\nJustin, thank you very much for the summary.\n\nGiven we have similar precedents in the past, I'm going to push the\npatch [1] to master and pg14 if no objections.\n\nLinks\n1. 
https://www.postgresql.org/message-id/CAPpHfdv9OZEuZDqOQoUKpXhq%3Dmc-qa4gKCPmcgG5Vvesu7%3Ds1w%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 13 Jul 2021 15:11:16 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Tue, Jul 13, 2021 at 03:11:16PM +0300, Alexander Korotkov wrote:\n> On Sun, Jul 11, 2021 at 1:20 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Sun, Jul 11, 2021 at 01:00:27AM +0300, Alexander Korotkov wrote:\n> > > On Sat, Jul 10, 2021 at 7:34 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > > > On 2021-Jun-27, Alexander Korotkov wrote:\n> > > >\n> > > > > BTW, I found some small inconsistencies in the declaration of\n> > > > > multirange operators in the system catalog. Nothing critical, but if\n> > > > > we decide to bump catversion in beta3, this patch is also nice to\n> > > > > push.\n> > > >\n> > > > Hmm, I think you should push this and not bump catversion. That way,\n> > > > nobody is forced to initdb if we end up not having a catversion bump for\n> > > > some other reason; but also anybody who initdb's with beta3 or later\n> > > > will get the correct descriptions.\n> > > >\n> > > > If you don't push it, everybody will have the wrong descriptions.\n> > >\n> > > True, but I'm a bit uncomfortable about user instances with different\n> > > catalogs but the same catversions. 
On the other hand, initdb's with\n> > > beta3 or later will be the vast majority among pg14 instances.\n> > >\n> > > Did we have similar precedents in the past?\n> >\n> > It seems so.\n> >\n> > Note in particular 74ab96a45, which adds a new function with no bump.\n> > Although that one may not be a good precedent to follow, or one that's been\n> > followed recently.\n> \n> Justin, thank you very much for the summary.\n> \n> Given we have similar precedents in the past, I'm going to push the\n> patch [1] to master and pg14 if no objections.\n\nTo be clear, do you mean with or without this hunk ?\n\n- oprrest => 'multirangesel', oprjoin => 'scalargtjoinsel' },\n+ oprrest => 'multirangesel', oprjoin => 'scalarltjoinsel' },\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 13 Jul 2021 09:07:55 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Tue, Jul 13, 2021 at 5:07 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Tue, Jul 13, 2021 at 03:11:16PM +0300, Alexander Korotkov wrote:\n> > On Sun, Jul 11, 2021 at 1:20 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > On Sun, Jul 11, 2021 at 01:00:27AM +0300, Alexander Korotkov wrote:\n> > > > On Sat, Jul 10, 2021 at 7:34 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > > > > On 2021-Jun-27, Alexander Korotkov wrote:\n> > > > >\n> > > > > > BTW, I found some small inconsistencies in the declaration of\n> > > > > > multirange operators in the system catalog. Nothing critical, but if\n> > > > > > we decide to bump catversion in beta3, this patch is also nice to\n> > > > > > push.\n> > > > >\n> > > > > Hmm, I think you should push this and not bump catversion. 
That way,\n> > > > > nobody is forced to initdb if we end up not having a catversion bump for\n> > > > > some other reason; but also anybody who initdb's with beta3 or later\n> > > > > will get the correct descriptions.\n> > > > >\n> > > > > If you don't push it, everybody will have the wrong descriptions.\n> > > >\n> > > > True, but I'm a bit uncomfortable about user instances with different\n> > > > catalogs but the same catversions. On the other hand, initdb's with\n> > > > beta3 or later will be the vast majority among pg14 instances.\n> > > >\n> > > > Did we have similar precedents in the past?\n> > >\n> > > It seems so.\n> > >\n> > > Note in particular 74ab96a45, which adds a new function with no bump.\n> > > Although that one may not be a good precedent to follow, or one that's been\n> > > followed recently.\n> >\n> > Justin, thank you very much for the summary.\n> >\n> > Given we have similar precedents in the past, I'm going to push the\n> > patch [1] to master and pg14 if no objections.\n>\n> To be clear, do you mean with or without this hunk ?\n>\n> - oprrest => 'multirangesel', oprjoin => 'scalargtjoinsel' },\n> + oprrest => 'multirangesel', oprjoin => 'scalarltjoinsel' },\n\nI mean with this hunk unless I hear objection to it.\n\nThe implementations of scalarltjoinsel and scalargtjoinsel are the\nsame. 
And I don't think they are going to be changed on pg14.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 13 Jul 2021 17:13:26 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On 2021-Jul-13, Alexander Korotkov wrote:\n\n> > To be clear, do you mean with or without this hunk ?\n> >\n> > - oprrest => 'multirangesel', oprjoin => 'scalargtjoinsel' },\n> > + oprrest => 'multirangesel', oprjoin => 'scalarltjoinsel' },\n> \n> I mean with this hunk unless I hear objection to it.\n\n+1 for pushing with that hunk, no catversion bump.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"The Gord often wonders why people threaten never to come back after they've\nbeen told never to return\" (www.actsofgord.com)\n\n\n", "msg_date": "Tue, 13 Jul 2021 10:29:05 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Tue, Jul 13, 2021 at 5:29 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jul-13, Alexander Korotkov wrote:\n>\n> > > To be clear, do you mean with or without this hunk ?\n> > >\n> > > - oprrest => 'multirangesel', oprjoin => 'scalargtjoinsel' },\n> > > + oprrest => 'multirangesel', oprjoin => 'scalarltjoinsel' },\n> >\n> > I mean with this hunk unless I hear objection to it.\n>\n> +1 for pushing with that hunk, no catversion bump.\n\nThank you for the feedback. 
Pushed!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 15 Jul 2021 14:43:16 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On 2021-Jun-19, Tom Lane wrote:\n\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > I also don't feel comfortable hurrying with unnest part to beta2.\n> > According to the open items wiki page, there should be beta3. Does\n> > unnest part have a chance for beta3?\n> \n> Hm. I'd prefer to avoid another forced initdb after beta2. On the\n> other hand, it's entirely likely that there will be some other thing\n> that forces that; in which case there'd be no reason not to push in\n> the unnest feature as well.\n> \n> I'd say let's sit on the unnest code for a little bit and see what\n> happens.\n\n... So, almost a month has gone by, and we still don't have multirange\nunnest(). Looking at the open items list, it doesn't look like we have\nanything that would require a catversion bump. Does that mean that\nwe're going to ship pg14 without multirange unnest?\n\nThat seems pretty sad, as the usability of the feature is greatly\nreduced. Just look at what's being suggested:\n https://postgr.es/m/20210715121508.GA30348@depesz.com\nTo me this screams of an incomplete datatype. 
I far prefer a beta3\ninitdb than shipping 14GA without multirange unnest.\n\nI haven't seen any announcements about beta3, but it's probably not far\noff; I think if we're going to have it, it would be much better to give\nit buildfarm cycles several days in advance and not just the last\nweekend.\n\nWhat do others think?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Ninguna manada de bestias tiene una voz tan horrible como la humana\" (Orual)\n\n\n", "msg_date": "Thu, 15 Jul 2021 11:29:50 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Jun-19, Tom Lane wrote:\n>> I'd say let's sit on the unnest code for a little bit and see what\n>> happens.\n\n> ... So, almost a month has gone by, and we still don't have multirange\n> unnest(). Looking at the open items list, it doesn't look like we have\n> anything that would require a catversion bump. Does that mean that\n> we're going to ship pg14 without multirange unnest?\n\n> That seems pretty sad, as the usability of the feature is greatly\n> reduced. Just look at what's being suggested:\n> https://postgr.es/m/20210715121508.GA30348@depesz.com\n> To me this screams of an incomplete datatype. I far prefer a beta3\n> initdb than shipping 14GA without multirange unnest.\n\nYeah, that seems pretty horrid. 
I still don't like the way the\narray casts were done, but I'd be okay with pushing the unnest\naddition.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 15 Jul 2021 11:47:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Thu, Jul 15, 2021 at 6:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2021-Jun-19, Tom Lane wrote:\n> >> I'd say let's sit on the unnest code for a little bit and see what\n> >> happens.\n>\n> > ... So, almost a month has gone by, and we still don't have multirange\n> > unnest(). Looking at the open items list, it doesn't look like we have\n> > anything that would require a catversion bump. Does that mean that\n> > we're going to ship pg14 without multirange unnest?\n>\n> > That seems pretty sad, as the usability of the feature is greatly\n> > reduced. Just look at what's being suggested:\n> > https://postgr.es/m/20210715121508.GA30348@depesz.com\n> > To me this screams of an incomplete datatype. I far prefer a beta3\n> > initdb than shipping 14GA without multirange unnest.\n>\n> Yeah, that seems pretty horrid. I still don't like the way the\n> array casts were done, but I'd be okay with pushing the unnest\n> addition.\n\nI agree that array casts require better polymorphism and should be\nconsidered for pg15.\n\n+1 for just unnest().\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 15 Jul 2021 19:26:56 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On 7/15/21 12:26 PM, Alexander Korotkov wrote:\n> On Thu, Jul 15, 2021 at 6:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, that seems pretty horrid. 
I still don't like the way the\n>> array casts were done, but I'd be okay with pushing the unnest\n>> addition.\n> \n> +1 for just unnest().\n\n...which was my original ask at the beginning of the thread :) So +1.\n\nThanks,\n\nJonathan", "msg_date": "Thu, 15 Jul 2021 15:26:47 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Thu, Jul 15, 2021 at 10:27 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> On 7/15/21 12:26 PM, Alexander Korotkov wrote:\n> > On Thu, Jul 15, 2021 at 6:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Yeah, that seems pretty horrid. I still don't like the way the\n> >> array casts were done, but I'd be okay with pushing the unnest\n> >> addition.\n> >\n> > +1 for just unnest().\n>\n> ...which was my original ask at the beginning of the thread :) So +1.\n\nThanks for the feedback. I've pushed the unnest() patch to master and\npg14. I've initially forgotten to change catversion.h for master, so\nI made it with an additional commit.\n\nI've double-checked that \"make check-world\" passes on my machine for\nboth master and pg14. And I'm keeping my fingers crossed looking at\nbuildfarm.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sun, 18 Jul 2021 21:20:32 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" }, { "msg_contents": "On Sun, Jul 18, 2021 at 8:20 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> On Thu, Jul 15, 2021 at 10:27 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> > On 7/15/21 12:26 PM, Alexander Korotkov wrote:\n> > > On Thu, Jul 15, 2021 at 6:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> Yeah, that seems pretty horrid. 
I still don't like the way the\n> > >> array casts were done, but I'd be okay with pushing the unnest\n> > >> addition.\n> > >\n> > > +1 for just unnest().\n> >\n> > ...which was my original ask at the beginning of the thread :) So +1.\n>\n> Thanks for the feedback. I've pushed the unnest() patch to master and\n> pg14. I've initially forgotten to change catversion.h for master, so\n> I made it with an additional commit.\n>\n> I've double-checked that \"make check-world\" passes on my machine for\n> both master and pg14. And I'm keeping my fingers crossed looking at\n> buildfarm.\n\nThis patch was closed with \"moved to next commitfest\" in the July\ncommitfest, and is currently sitting as \"Needs review\" in the\nSeptember one.\n\nIf it's committed, it should probably have been closed with that? And\nif there are things still needed, they should perhaps have their own\nCF entry instead since we clearly do have unnest() for multiranges?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Wed, 8 Sep 2021 22:30:04 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: unnesting multirange data types" } ]
[ { "msg_contents": "Hi,\n\nNot sure if there is much chance of debugging this one-off failure in\nwithout a backtrace (long shot: any chance there's still a core\nfile?), but for the record: mandrill choked on a null pointer passed\nto GetMemoryChunkContext() inside a walsender running logical\nreplication. Possibly via pfree(NULL), but there are other paths.\nThat's an animal running with force_parallel_mode and\nRANDOMIZE_ALLOCATED_MEMORY, on AIX with IBM compiler in 32 bit mode,\nso unusual in several ways.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2021-06-06%2015:37:23\n\n\n", "msg_date": "Thu, 10 Jun 2021 10:47:20 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "BF assertion failure on mandrill in walsender, v13" }, { "msg_contents": "On Thu, Jun 10, 2021 at 10:47:20AM +1200, Thomas Munro wrote:\n> Not sure if there is much chance of debugging this one-off failure in\n> without a backtrace (long shot: any chance there's still a core\n> file?)\n\nNo; it was probably in a directory deleted for each run. One would need to\nadd dbx support to the buildfarm client, or perhaps add support for saving\nbuild directories when there's a core, so I can operate dbx manually.\n\n\n", "msg_date": "Wed, 9 Jun 2021 22:47:15 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: BF assertion failure on mandrill in walsender, v13" }, { "msg_contents": "\nOn 6/10/21 1:47 AM, Noah Misch wrote:\n> On Thu, Jun 10, 2021 at 10:47:20AM +1200, Thomas Munro wrote:\n>> Not sure if there is much chance of debugging this one-off failure in\n>> without a backtrace (long shot: any chance there's still a core\n>> file?)\n> No; it was probably in a directory deleted for each run. 
One would need to\n> add dbx support to the buildfarm client, or perhaps add support for saving\n> build directories when there's a core, so I can operate dbx manually.\n>\n>\n\n\nThis is what the setting \"keep_error_builds\" does. In the END handler it\nrenames the build and install directories with a timestamp. Cleanup is\nleft to the user.\n\nI don't have much knowledge of dbx, but I would take a patch for support.\n\n\ncheers\n\n\nandrew\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 10 Jun 2021 09:08:06 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: BF assertion failure on mandrill in walsender, v13" }, { "msg_contents": "On Thu, Jun 10, 2021 at 09:08:06AM -0400, Andrew Dunstan wrote:\n> On 6/10/21 1:47 AM, Noah Misch wrote:\n> > On Thu, Jun 10, 2021 at 10:47:20AM +1200, Thomas Munro wrote:\n> >> Not sure if there is much chance of debugging this one-off failure\n> >> without a backtrace (long shot: any chance there's still a core\n> >> file?)\n> > No; it was probably in a directory deleted for each run. One would need to\n> > add dbx support to the buildfarm client, or perhaps add support for saving\n> > build directories when there's a core, so I can operate dbx manually.\n> \n> This is what the setting \"keep_error_builds\" does. In the END handler it\n> renames the build and install directories with a timestamp. Cleanup is\n> left to the user.\n\nGreat. The machine has ample disk, so I'll add that setting.\n\n\n", "msg_date": "Thu, 10 Jun 2021 18:39:42 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: BF assertion failure on mandrill in walsender, v13" } ]
[ { "msg_contents": "Hi,\n\nOn occasion it comes up that the genetic query optimizer (GEQO) doesn't\nproduce particularly great plans, and is slow ([1] for example). The only\nalternative that has gotten as far as a prototype patch (as far as I know)\nis simulated annealing some years ago, which didn't seem to get far.\n\nThe join problem as it pertains to Postgres has been described within the\ncommunity in\n[Gustaffson, 2017] and [Stroffek & Kovarik, 2007].\n\nThe fact that there is much more interest than code in this area indicates\nthat this is a hard problem. I hadn't given it much thought myself until by\nchance I came across [Neumann, 2018], which describes a number of\ninteresting ideas. The key takeaway is that you want a graceful transition\nbetween exhaustive search and heuristic search. In other words, if the\nspace of possible join orderings is just slightly larger than the maximum\nallowed exhaustive search, then the search should be *almost*\nexhaustive. This not only increases the chances of finding a good plan, but\nalso has three engineering advantages I can think of:\n\n1) It's natural to re-use data structures etc. already used by the existing\ndynamic programming (DP) algorithm. Framing the problem as extending DP\ngreatly lowers the barrier to making realistic progress. If the problem is\nframed as \"find an algorithm as a complete drop-in replacement for GEQO\",\nit's a riskier project in my view.\n\n2) We can still keep GEQO around (with some huge limit by default) for a\nfew years as an escape hatch, while we refine the replacement. If there is\nsome bug that prevents finding a plan, we can emit a WARNING and fall back\nto GEQO. Or if a user encounters a regression in a big query, they can\nlower the limit to restore the plan they had in an earlier version.\n\n3) It actually improves the existing exhaustive search, because the\ncomplexity of the join order problem depends on the query shape: a \"chain\"\nshape (linear) vs. 
a \"star\" shape (as in star schema), for the most common\nexamples. The size of the DP table grows like this (for n >= 4):\n\nChain: (n^3 - n) / 6 (including bushy plans)\nStar: (n - 1) * 2^(n - 2)\n\n n chain star\n--------------------\n 4 10 12\n 5 20 32\n 6 35 80\n 7 56 192\n 8 84 448\n 9 120 1024\n10 165 2304\n11 220 5120\n12 286 11264\n13 364 24576\n14 455 53248\n15 560 114688\n...\n64 43680 290536219160925437952\n\nThe math behind this is described in detail in [Ono & Lohman, 1990]. I\nverified this in Postgres by instrumenting the planner to count how many\ntimes it calls make_join_rel().\n\nImagine having a \"join enumeration budget\" that, if exceeded, prevents\nadvancing to the next join level. Given the above numbers and a query with\nsome combination of chain and star shapes, a budget of 400 can guarantee an\nexhaustive search when there are up to 8 relations. For a pure chain join,\nwe can do an exhaustive search on up to 13 relations, for a similar cost of\ntime and space. Out of curiosity I tested HEAD with a chain query having 64\ntables found in the SQLite tests [2] and found exhaustive search to take\nonly twice as long as GEQO. If we have some 30-way join, and some (> 400)\nbudget, it's actually possible that we will complete the exhaustive search\nand get the optimal plan. This is a bottom-up way of determining the\ncomplexity. Rather than configuring a number-of-relations threshold\nand possibly have exponential behavior blow up in their faces, users can\nconfigure something that somewhat resembles the runtime cost.\n\nNow, I'll walk through one way that a greedy heuristic can integrate with\nDP. In our 30-way join example, if we use up our budget and don't have a\nvalid plan, we'll break out of DP at the last join level we completed.\nSince we already have built a number of joinrels, we build upon that work\nas we proceed. 
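As an aside, the chain/star growth formulas quoted earlier are easy to sanity-check with a few lines of throwaway Python (illustration only, not from any patch):

```python
# Throwaway sanity check for the join enumeration growth formulas
# above (Ono & Lohman, 1990): the number of joinrels created for
# "chain"- and "star"-shaped queries of n base relations.

def chain_size(n: int) -> int:
    # linear ("chain") join of n tables, including bushy plans
    return (n ** 3 - n) // 6

def star_size(n: int) -> int:
    # star-shaped join of n tables
    return (n - 1) * 2 ** (n - 2)

if __name__ == "__main__":
    # reproduces the table quoted above
    print(" n  chain       star")
    print("--------------------")
    for n in range(4, 16):
        print(f"{n:2} {chain_size(n):6} {star_size(n):10}")
```

For a pure chain, chain_size(13) == 364, which is what lets a budget of 400 reach 13 relations.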
The approach I have in mind is described in [Kossmann &\nStocker, 2000], which the authors call \"iterative dynamic programming\"\n(IDP). I'll describe one of the possible variants here. Let's say we only\ngot as far as join level 8, so we've created up to 8-way joinrels. We pick\nthe best few (maybe 5%) of these 8-way joinrels by some measure (doesn't\nhave to be the full cost model) and on top of each of them create a full\nplan quickly: At each join level, we only pick one base relation (again by\nsome measure) to create one new joinrel and then move to the next join\nlevel. This is very fast, even with hundreds of relations.\n\nOnce we have one valid, complete plan, we can technically stop at any time\n(Coding this much is also a good proof-of-concept). How much additional\neffort we expend to find a good plan could be another budget we have. With\na complete plan obtained quickly, we also have an upper bound on the\nmeasure of the cheapest overall plan, so with that we can prune any more\nexpensive plans as we iterate through the 8-way joinrels. Once we have a\nset of complete plans, we pick some of them to improve the part of the plan\npicked during the greedy step. For some size k (typically between 4 and 7),\nwe divide the greedy-step part of the join into k-sized sections. So with\nour 30-table join where we started with an 8-way joinrel, we have 22\ntables. If k=5, we run standard dynamic programming (including the standard\ncost model) on four 5-table sections and then once the last 2-table section.\n\nYou can also think of it like this: We quickly find 5 tables that likely\nwould be good to join next, find the optimal join order among the 5, then\nadd that to our joinrel. We keep doing that until we get a valid plan. 
The\nonly difference is, performing the greedy step to completion allows us to\nprune subsequent bad intermediate steps.\n\nBy \"some measure\" above I'm being a bit hand-wavy, but at least in the\nliterature I've read, fast heuristic algorithms seem to use simpler and\ncheaper-to-compute metrics like intermediate result size or selectivity,\nrather than a full cost function. That's a detail to be worked out. Also,\nit must be said that in choosing among intermediate steps we need to be\ncareful to include things like:\n\n- interesting sort orders\n- partition-wise joins\n- parameterized paths\n\nFurther along the lines of extending DP that's kind of orthogonal to the\nabove is the possibility of doing pruning during the initial DP step.\nLooking again at how quickly the join enumeration for star queries grows\nwith increasing \"n\", it makes sense that a large number of those are bad\nplans. In [Das & Haritsa, 2006], the authors demonstrate a method of\nextending the reach of DP by pruning joinrels at each join level by two\ncriteria:\n\n1) Whether the joinrel contains a hub relation (i.e. is the center of a\nstar)\n2) A skyline function taking into account cost, cardinality, and selectivity\n\nThis way, the worst joinrels of star queries are pruned and the initial\njoin budget I mentioned above goes a lot farther.\n\nThere are quite a few details and variations I left out (testing, for one),\nbut this is enough to show the idea. I plan on working on this during the\nPG15 cycle. 
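To make the greedy completion step above a bit more concrete, here is a toy sketch — the function names and the row-count \"measure\" are placeholders of mine, not anything in the planner:

```python
# Toy sketch of the greedy completion step: starting from a partial
# join (the tables already joined by DP when the budget ran out),
# repeatedly add whichever remaining base relation looks cheapest by
# some quick measure, until every table is joined.  The measure here
# is a caller-supplied estimate of the intermediate result size; a
# real implementation would also have to respect join-clause
# connectivity, interesting sort orders, parameterized paths, etc.

def greedy_complete(joined, remaining, est_rows):
    """Return a complete join order extending `joined`.

    joined    -- tuple of table names already joined by DP
    remaining -- iterable of table names not yet joined
    est_rows  -- est_rows(joined, t): estimated rows after joining t
    """
    order = list(joined)
    left = set(remaining)
    while left:
        # pick the base relation with the smallest estimated
        # intermediate result at this step
        best = min(left, key=lambda t: est_rows(tuple(order), t))
        order.append(best)
        left.discard(best)
    return order

# Tiny demo with made-up cardinalities: the smallest estimated
# intermediate result is always picked next.
sizes = {"a": 100, "b": 5, "c": 50}
print(greedy_complete(("seed",), sizes, lambda joined, t: sizes[t]))
# -> ['seed', 'b', 'c', 'a']
```

A real implementation would of course work on joinrels rather than table names; the point is just the shape of the loop.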
I'd appreciate any feedback on the above.\n--\n\n[1] https://www.postgresql.org/message-id/15658.1241278636@sss.pgh.pa.us\n\n[Stroffek & Kovarik, 2007]\nhttps://www.pgcon.org/2007/schedule/attachments/28-Execution_Plan_Optimization_Techniques_Stroffek_Kovarik.pdf\n\n[Gustaffson, 2017]\nhttps://www.postgresql.eu/events/pgconfeu2017/sessions/session/1586/slides/26/Through_the_Joining_Glass-PGConfeu-DanielGustafsson.pdf\n\n[Neumann, 2018] Adaptive Optimization of Very Large Join Queries.\nhttps://dl.acm.org/doi/10.1145/3183713.3183733\n\n[Ono & Lohman, 1990] Measuring the Complexity of Join Enumeration in Query\nOptimization.\nhttps://www.csd.uoc.gr/~hy460/pdf/MeasuringtheComplexityofJoinEnumerationinQueryOptimization.PDF\n\n[2] https://www.sqlite.org/sqllogictest/file?name=test/select5.test\n(Note: there are no explicit join clauses so \"from\" limits didn't have an\neffect in my quick test.)\n\n[Kossmann & Stocker, 2000] Iterative dynamic programming: a new class of\nquery optimization algorithms. https://doi.org/10.1145/352958.352982\n\n[Das & Haritsa, 2006] Robust Heuristics for Scalable Optimization of\nComplex SQL Queries.\nhttps://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.549.4331&rep=rep1&type=pdf\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 9 Jun 2021 21:21:10 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "a path towards replacing GEQO with something better" }, { "msg_contents": "Hi,\n\nOn 6/10/21 3:21 AM, John Naylor wrote:\n> Hi,\n> \n> On occasion it comes up that the genetic query optimizer (GEQO) doesn't\n> produce particularly great plans, and is slow ([1] for example). The\n> only alternative that has gotten as far as a prototype patch (as far as\n> I know) is simulated annealing some years ago, which didn't seem to get far.\n> \n> The join problem as it pertains to Postgres has been described within\n> the community in\n> [Gustaffson, 2017] and [Stroffek & Kovarik, 2007].\n> \n> The fact that there is much more interest than code in this area\n> indicates that this is a hard problem. I hadn't given it much thought\n> myself until by chance I came across [Neumann, 2018], which describes a\n> number of interesting ideas. The key takeaway is that you want a\n> graceful transition between exhaustive search and heuristic search. In\n> other words, if the space of possible join orderings is just slightly\n> larger than the maximum allowed exhaustive search, then the search\n> should be *almost* exhaustive.\n\nYeah, I think this is one of the places with the annoying \"cliff edge\"\nbehavior in our code base, so an alternative that would degrade a bit\nmore gracefully would be welcome.\n\nI only quickly read the [Neumann, 2018] paper over the weekend, and\noverall it seems like a very interesting/promising approach. 
Of course,\nthe question is how well it can be combined with the rest of our code,\nand various other details from real-world queries (papers often ignore\nsome of those bits for simplicity).\n\n> This not only increases the chances of finding a good plan, but also\n> has three engineering advantages I can think of:\n> \n> 1) It's natural to re-use data structures etc. already used by the\n> existing dynamic programming (DP) algorithm. Framing the problem as\n> extending DP greatly lowers the barrier to making realistic progress. If\n> the problem is framed as \"find an algorithm as a complete drop-in\n> replacement for GEQO\", it's a riskier project in my view.\n> \n\nTrue.\n\n> 2) We can still keep GEQO around (with some huge limit by default) for a\n> few years as an escape hatch, while we refine the replacement. If there\n> is some bug that prevents finding a plan, we can emit a WARNING and fall\n> back to GEQO. Or if a user encounters a regression in a big query, they\n> can lower the limit to restore the plan they had in an earlier version.\n> \n\nNot sure. Keeping significant amounts of code may not be free - both for\nmaintenance and new features. It'd be a bit sad if someone proposed\nimprovements to join planning, but had to do 2x the work to support it\nin both the DP and GEQO branches, or risk incompatibility.\n\nOTOH maybe this concern is unfounded in practice - I don't think we've\ndone very many big changes to geqo in the last few years.\n\n> 3) It actually improves the existing exhaustive search, because the\n> complexity of the join order problem depends on the query shape: a\n> \"chain\" shape (linear) vs. a \"star\" shape (as in star schema), for the\n> most common examples. 
The size of the DP table grows like this (for n >= 4):\n> \n> Chain: (n^3 - n) / 6   (including bushy plans)\n> Star:  (n - 1) * 2^(n - 2)\n> \n>  n  chain       star\n> --------------------\n>  4     10         12\n>  5     20         32\n>  6     35         80\n>  7     56        192\n>  8     84        448\n>  9    120       1024\n> 10    165       2304\n> 11    220       5120\n> 12    286      11264\n> 13    364      24576\n> 14    455      53248\n> 15    560     114688\n> ...\n> 64  43680     290536219160925437952\n> \n> The math behind this is described in detail in [Ono & Lohman, 1990]. I\n> verified this in Postgres by instrumenting the planner to count how many\n> times it calls make_join_rel().\n> \n\nSo, did you verify it for star query with 64 relations? ;-)\n\n> Imagine having a \"join enumeration budget\" that, if exceeded, prevents\n> advancing to the next join level. Given the above numbers and a query\n> with some combination of chain and star shapes, a budget of 400 can\n> guarantee an exhaustive search when there are up to 8 relations. For a\n> pure chain join, we can do an exhaustive search on up to 13 relations,\n> for a similar cost of time and space. Out of curiosity I tested HEAD\n> with a chain query having 64 tables found in the SQLite tests [2] and\n> found exhaustive search to take only twice as long as GEQO. If we have\n> some 30-way join, and some (> 400) budget, it's actually possible that\n> we will complete the exhaustive search and get the optimal plan. This is\n> a bottom-up way of determining the complexity. Rather than configuring a\n> number-of-relations threshold and possibly have exponential behavior\n> blow up in their faces, users can configure something that somewhat\n> resembles the runtime cost.\n> \n\nSound reasonable in principle, I think.\n\nThis reminds me the proposals to have a GUC that'd determine how much\neffort should the planner invest into various optimizations. 
For OLTP it\nmight be quite low, for large OLAP queries it'd be economical to spend\nmore time trying some more expensive optimizations.\n\nThe challenge of course is how / in what units / to define the budget,\nso that it's meaningful and understandable for users. Not sure if\n\"number of join rels generated\" will be clear enough for users. But it\nseems good enough for PoC / development, and hopefully people won't have\nto tweak it very often.\n\nFor JIT we used the query cost, which is a term users are familiar with,\nbut that's possible because we do the decision after the plan is\nbuilt/costed. That doesn't work for join order search :-(\n\n> Now, I'll walk through one way that a greedy heuristic can integrate\n> with DP. In our 30-way join example, if we use up our budget and don't\n> have a valid plan, we'll break out of DP at the last join level\n> we completed. Since we already have built a number of joinrels, we build\n> upon that work as we proceed. The approach I have in mind is described\n> in [Kossmann & Stocker, 2000], which the authors call \"iterative dynamic\n> programming\" (IDP). I'll describe one of the possible variants here.\n> Let's say we only got as far as join level 8, so we've created up to\n> 8-way joinrels. We pick the best few (maybe 5%) of these 8-way joinrels\n> by some measure (doesn't have to be the full cost model) and on top of\n> each of them create a full plan quickly: At each join level, we only\n> pick one base relation (again by some measure) to create one new joinrel\n> and then move to the next join level. This is very fast, even with\n> hundreds of relations.\n> \n> Once we have one valid, complete plan, we can technically stop at any\n> time (Coding this much is also a good proof-of-concept). How much\n> additional effort we expend to find a good plan could be another budget\n> we have.  
With a complete plan obtained quickly, we also have an upper\n> bound on the measure of the cheapest overall plan, so with that we can\n> prune any more expensive plans as we iterate through the 8-way joinrels.\n> Once we have a set of complete plans, we pick some of them to improve\n> the part of the plan picked during the greedy step. For some size k\n> (typically between 4 and 7), we divide the greedy-step part of the join\n> into k-sized sections. So with our 30-table join where we started with\n> an 8-way joinrel, we have 22 tables. If k=5, we run standard dynamic\n> programming (including the standard cost model) on four 5-table sections\n> and then once the last 2-table section.\n> \n> You can also think of it like this: We quickly find 5 tables that likely\n> would be good to join next, find the optimal join order among the 5,\n> then add that to our joinrel. We keep doing that until we get a valid\n> plan. The only difference is, performing the greedy step to completion\n> allows us to prune subsequent bad intermediate steps.\n> \n\nI haven't read the [Kossmann & Stocker, 2000] paper yet, but the\n[Neumann, 2018] paper seems to build on it, and it seems to work with\nmuch larger subtrees of the join tree than k=5.\n\n> By \"some measure\" above I'm being a bit hand-wavy, but at least in the\n> literature I've read, fast heuristic algorithms seem to use simpler and\n> cheaper-to-compute metrics like intermediate result size or selectivity,\n> rather than a full cost function. That's a detail to be worked out.\n> Also, it must be said that in choosing among intermediate steps we need\n> to be careful to include things like:\n> \n> - interesting sort orders\n> - patition-wise joins\n> - parameterized paths\n> \n\nYeah. I think this is going to be somewhat tricky - the paper seems to\nhave very few details about dealing with criteria like this. OTOH the\nquery plans for TPC-H/TPC-DS etc. 
seem to be quite good.\n\nWhat I find fairly interesting is the section in [Neumann, 2018] about\ncardinality estimates, and quality of query plans when the estimates are\noff. The last paragraph in the 6.5 section essentially says that despite\npoor estimates, the proposed algorithm performs better than the simple\n(and cheap) heuristics. I'm not sure what to think about that, because\nmy \"intuitive\" understanding is that the more elaborate the planning is,\nthe more errors it can make when the estimates are off.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 13 Jun 2021 15:50:03 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: a path towards replacing GEQO with something better" }, { "msg_contents": "On Sun, Jun 13, 2021 at 9:50 AM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> > 2) We can still keep GEQO around (with some huge limit by default) for a\n> > few years as an escape hatch, while we refine the replacement. If there\n> > is some bug that prevents finding a plan, we can emit a WARNING and fall\n> > back to GEQO. Or if a user encounters a regression in a big query, they\n> > can lower the limit to restore the plan they had in an earlier version.\n> >\n>\n> Not sure. Keeping significant amounts of code may not be free - both for\n> maintenance and new features. It'd be a bit sad if someone proposed\n> improvements to join planning, but had to do 2x the work to support it\n> in both the DP and GEQO branches, or risk incompatibility.\n\nLooking back again at the commit history, we did modify geqo to support\npartial paths and partition-wise join, so that's a fair concern. 
My concern\nis the risk of plan regressions after an upgrade, even if for a small\nnumber of cases.\n\n> OTOH maybe this concern is unfounded in practice - I don't think we've\n> done very many big changes to geqo in the last few years.\n\nYeah, I get the feeling that it's already de facto obsolete, and we could\nmake it a policy not to consider improvements aside from bug fixes where it\ncan't find a valid plan, or forced API changes. Which I guess is another\nway of saying \"deprecated\".\n\n(I briefly considered turning it into a contrib module, but that seems like\nthe worst of both worlds.)\n\n> This reminds me the proposals to have a GUC that'd determine how much\n> effort should the planner invest into various optimizations. For OLTP it\n> might be quite low, for large OLAP queries it'd be economical to spend\n> more time trying some more expensive optimizations.\n>\n> The challenge of course is how / in what units / to define the budget,\n> so that it's meaningful and understandable for users. Not sure if\n> \"number of join rels generated\" will be clear enough for users. But it\n> seems good enough for PoC / development, and hopefully people won't have\n> to tweak it very often.\n\nI'm also in favor of having some type of \"planner effort\" or \"OLTP to OLAP\nspectrum\" guc, but I'm not yet sure whether it'd be better to have it\nseparate or to couple the joinrel budget to it. If we go that route, I\nimagine there'll be many things that planner_effort changes that we don't\nwant to give a separate knob for. And, I hope with graceful degradation and\na good enough heuristic search, it won't be necessary to change in most\ncases.\n\n> I haven't read the [Kossmann & Stocker, 2000] paper yet, but the\n> [Neumann, 2018] paper seems to build on it, and it seems to work with\n> much larger subtrees of the join tree than k=5.\n\nRight, in particular it builds on \"IDP-2\" from Kossmann & Stocker. 
Okay, so\nNeumann's favorite algorithm stack \"Adaptive\" is complex, and I believe you\nare referring to cases where they can iteratively improve up to 100 rels at\na time because of linearization. That's a separate algorithm (IKKBZ) that\ncomplicates the cost model and also cannot have outer joins. If it has\nouter joins, they use regular DP on subsets of size up to 10. It's not\nsubstantively different from IDP-2, and that's the one I'd like to try to\ngracefully fall back to. Or something similar.\n\n> What I find fairly interesting is the section in [Neumann, 2018] about\n> cardinality estimates, and quality of query plans when the estimates are\n> off. The last paragraph in the 6.5 section essentially says that despite\n> poor estimates, the proposed algorithm performs better than the simple\n> (and cheap) heuristics. I'm not sure what to think about that, because\n> my \"intuitive\" understanding is that the more elaborate the planning is,\n> the more errors it can make when the estimates are off.\n\nYeah, I'm not sure
this part is very useful and seems almost like an\nafterthought. In table 3, all those poor examples are \"pure\" greedy\nalgorithms and don't have iterative refinement added, so it kind of makes\nsense that poor estimates would hurt them more. But they don't compare\nthose with *just* a refinement step added. I also don't know how realistic\ntheir \"estimate fuzzing\" is.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 14 Jun 2021 07:16:58 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: a path towards replacing GEQO with something better" }, { "msg_contents": "\n\nOn 6/14/21 1:16 PM, John Naylor wrote:\n> On Sun, Jun 13, 2021 at 9:50 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>>\n> wrote:\n> \n>> > 2) We can still keep GEQO around (with some huge limit by default) for a\n>> > few years as an escape hatch, while we refine the replacement. If there\n>> > is some bug that prevents finding a plan, we can emit a WARNING and fall\n>> > back to GEQO. Or if a user encounters a regression in a big query, they\n>> > can lower the limit to restore the plan they had in an earlier version.\n>> >\n>>\n>> Not sure. Keeping significant amounts of code may not be free - both for\n>> maintenance and new features. It'd be a bit sad if someone proposed\n>> improvements to join planning, but had to do 2x the work to support it\n>> in both the DP and GEQO branches, or risk incompatibility.\n> \n> Looking back again at the commit history, we did modify geqo to support\n> partial paths and partition-wise join, so that's a fair concern.\n\nRight. I think the question is how complex those changes were.
If it was\nmostly mechanical, it's not a big deal and we can keep both, but if it\nrequires deeper knowledge of the GEQO inner workings it may be an issue\n(planner changes are already challenging enough).\n\n> My concern is the risk of plan regressions after an upgrade, even if\n> for a small number of cases.\n> \n\nI don't know. My impression/experience with GEQO is that getting a good\njoin order for queries with many joins is often a matter of luck, and\nthe risk of getting poor plan just forces me to increase geqo_threshold\nor disable it altogether. 
Or just rephrase the query to nudge the\nplanner to use a good join order (those large queries are often star or\nchain shaped, so it's not that difficult).\n\nSo I'm not sure \"GEQO accidentally produces a better plan for this one\nquery\" is a good argument to keep it around. We should probably evaluate\nthe overall behavior, and then make a decision.\n\nFWIW I think we're facing this very dilemma for every optimizer change,\nto some extent. Every time we make the planner smarter by adding a new\nplan variant or heuristics, we're also increasing the opportunity for\nerrors. And every time we look (or should look) at average behavior and\nworst case behavior ...\n\n>> OTOH maybe this concern is unfounded in practice - I don't think we've\n>> done very many big changes to geqo in the last few years.\n> \n> Yeah, I get the feeling that it's already de facto obsolete, and we\n> could make it a policy not to consider improvements aside from bug fixes\n> where it can't find a valid plan, or forced API changes. Which I guess\n> is another way of saying \"deprecated\".\n> \n> (I briefly considered turning it into a contrib module, but that seems\n> like the worst of both worlds.)\n> \n\nTrue. I'm fine with deprecating / not improving geqo. What would worry\nme is incompatibility, i.e. if geqo could not support some features. I'm\nthinking of storage engines in MySQL not supporting some features,\nleading to a mine field for users. Producing poor plans is fine, IMO.\n\n>> This reminds me the proposals to have a GUC that'd determine how much\n>> effort should the planner invest into various optimizations. For OLTP it\n>> might be quite low, for large OLAP queries it'd be economical to spend\n>> more time trying some more expensive optimizations.\n>>\n>> The challenge of course is how / in what units / to define the budget,\n>> so that it's meaningful and understandable for users. Not sure if\n>> \"number of join rels generated\" will be clear enough for users. 
But it\n>> seems good enough for PoC / development, and hopefully people won't have\n>> to tweak it very often.\n> \n> I'm also in favor of having some type of \"planner effort\" or \"OLTP to\n> OLAP spectrum\" guc, but I'm not yet sure whether it'd be better to have\n> it separate or to couple the joinrel budget to it. If we go that route,\n> I imagine there'll be many things that planner_effort changes that we\n> don't want to give a separate knob for. And, I hope with graceful\n> degradation and a good enough heuristic search, it won't be necessary to\n> change in most cases.\n> \n\nYeah. I'm really worried about having a zillion separate GUC knobs for\ntiny parts of the code. That's impossible to tune, and it also exposes\ndetails about the implementation. And some people just can't resist\ntouching all the available options ;-)\n\nThe thing I like about JIT tunables is that it's specified in \"cost\"\nwhich the users are fairly familiar with. Having another GUC with an\nentirely different unit is not great.\n\nBut as I said - it seems perfectly fine for PoC / development, and we\ncan revisit that later. Adding some sort of \"planner effort\" or multiple\noptimization passes is a huge project on its own.\n\n>> I haven't read the [Kossmann & Stocker, 2000] paper yet, but the\n>> [Neumann, 2018] paper seems to build on it, and it seems to work with\n>> much larger subtrees of the join tree than k=5.\n> \n> Right, in particular it builds on \"IDP-2\" from Kossmann & Stocker. Okay,\n> so Neumann's favorite algorithm stack \"Adaptive\" is complex, and I\n> believe you are referring to cases where they can iteratively improve up\n> to 100 rels at a time because of linearization. That's a separate\n> algorithm (IKKBZ) that complicates the cost model and also cannot have\n> outer joins. If it has outer joins, they use regular DP on subsets of\n> size up to 10. It's not substantively different from IDP-2, and that's\n> the one I'd like to try to gracefully fall back to. 
Or something similar.\n> \n\nYes, that's what I was referring to. You're right it's complex and we\ndon't need to implement all of that - certainly not on day one. The\nlinearization / IKKBZ seems interesting (even if just for inner joins),\nbut better to start with something generic.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 14 Jun 2021 18:10:28 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: a path towards replacing GEQO with something better" }, { "msg_contents": "On Mon, Jun 14, 2021 at 06:10:28PM +0200, Tomas Vondra wrote:\n> On 6/14/21 1:16 PM, John Naylor wrote:\n> > On Sun, Jun 13, 2021 at 9:50 AM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>>\n> > wrote:\n> > \n> >> > 2) We can still keep GEQO around (with some huge limit by default) for a\n> >> > few years as an escape hatch, while we refine the replacement. If there\n> >> > is some bug that prevents finding a plan, we can emit a WARNING and fall\n> >> > back to GEQO. Or if a user encounters a regression in a big query, they\n> >> > can lower the limit to restore the plan they had in an earlier version.\n> >> >\n> >>\n> >> Not sure. Keeping significant amounts of code may not be free - both for\n> >> maintenance and new features. It'd be a bit sad if someone proposed\n> >> improvements to join planning, but had to do 2x the work to support it\n> >> in both the DP and GEQO branches, or risk incompatibility.\n> > \n> > Looking back again at the commit history, we did modify geqo to support\n> > partial paths and partition-wise join, so that's a fair concern.\n> \n> Right. I think the question is how complex those changes were. 
If it was\n> mostly mechanical, it's not a big deal and we can keep both, but if it\n> requires deeper knowledge of the GEQO inner workings it may be an issue\n> (planner changes are already challenging enough).\n\nThe random plan nature of GEQO, along with its \"cliff\", make it\nsomething I would be glad to get rid of if we can get an improved\napproach to large planning needs.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 14 Jun 2021 17:15:23 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: a path towards replacing GEQO with something better" }, { "msg_contents": "On Wed, Jun 9, 2021 at 9:24 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n> 3) It actually improves the existing exhaustive search, because the complexity of the join order problem depends on the query shape: a \"chain\" shape (linear) vs. a \"star\" shape (as in star schema), for the most common examples. The size of the DP table grows like this (for n >= 4):\n>\n> Chain: (n^3 - n) / 6 (including bushy plans)\n> Star: (n - 1) * 2^(n - 2)\n>\n> n chain star\n> --------------------\n> 4 10 12\n> 5 20 32\n> 6 35 80\n> 7 56 192\n> 8 84 448\n> 9 120 1024\n> 10 165 2304\n> 11 220 5120\n> 12 286 11264\n> 13 364 24576\n> 14 455 53248\n> 15 560 114688\n> ...\n> 64 43680 290536219160925437952\n\nI don't quite understand the difference between the \"chain\" case and\nthe \"star\" case. Can you show sample queries for each one? e.g. SELECT\n... FROM a_1, a_2, ..., a_n WHERE <something>?\n\nOne idea I just ran across in\nhttps://15721.courses.cs.cmu.edu/spring2020/papers/22-costmodels/p204-leis.pdf\nis to try to economize by skipping consideration of bushy plans. 
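To get a rough sense of how much that heuristic prunes, here is a toy Python sketch (an illustration only, not PostgreSQL planner code; relations are just numbered 0..n-1) that counts the unordered splits a DP-style exhaustive search would examine for an n-relation "clique" query -- the shape where bushy plans proliferate -- with and without bushy splits:

```python
from itertools import combinations

def dp_join_pairs(n, allow_bushy=True):
    """Count the unordered (outer, inner) splits a DP-style exhaustive
    search examines for an n-relation clique query (every pair joinable).
    Toy model: it counts subset splits, not actual planner paths."""
    total = 0
    for k in range(2, n + 1):                      # joinrel size
        for subset in combinations(range(n), k):
            s = frozenset(subset)
            seen = set()
            for j in range(1, k):                  # size of one side
                for left in map(frozenset, combinations(subset, j)):
                    split = frozenset((left, s - left))
                    if split in seen:              # (A,B) same as (B,A)
                        continue
                    seen.add(split)
                    # a bushy split has more than one rel on each side
                    if not allow_bushy and min(j, k - j) > 1:
                        continue
                    total += 1
    return total
```

For n = 5 this drops the count from 90 to 65, and the gap grows quickly with n (at n = 10 it is 28501 vs. 5065). These are splits considered, not paths or costs, so treat it only as the shape of the savings.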
We\ncould start doing that when some budget is exceeded, similar to what\nyou are proposing here, but probably the budget for skipping\nconsideration of bushy plans would be smaller than the budget for\nswitching to IDP. The idea of skipping bushy plan generation in some\ncases makes sense to me intuitively because most of the plans\nPostgreSQL generates are mostly left-deep, and even when we do\ngenerate bushy plans, they're not always a whole lot better than the\nnearest equivalent left-deep plan. The paper argues that considering\nbushy plans makes measurable gains, but also that failure to consider\nsuch plans isn't catastrophically bad.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Jun 2021 12:15:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: a path towards replacing GEQO with something better" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> One idea I just ran across in\n> https://15721.courses.cs.cmu.edu/spring2020/papers/22-costmodels/p204-leis.pdf\n> is to try to economize by skipping consideration of bushy plans. We\n> could start doing that when some budget is exceeded, similar to what\n> you are proposing here, but probably the budget for skipping\n> consideration of bushy plans would be smaller than the budget for\n> switching to IDP. The idea of skipping bushy plan generation in some\n> cases makes sense to me intuitively because most of the plans\n> PostgreSQL generates are mostly left-deep, and even when we do\n> generate bushy plans, they're not always a whole lot better than the\n> nearest equivalent left-deep plan. The paper argues that considering\n> bushy plans makes measurable gains, but also that failure to consider\n> such plans isn't catastrophically bad.\n\nIt's not catastrophically bad until you hit a query where the only\ncorrect plans are bushy. 
These do exist, and I think they're not\nthat uncommon.\n\nStill, I take your point that maybe we could ratchet down the cost of\nexhaustive search by skimping on this part. Maybe it'd work to skip\nbushy so long as we'd found at least one left-deep or right-deep path\nfor the current rel.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Jun 2021 13:00:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: a path towards replacing GEQO with something better" }, { "msg_contents": "On Tue, Jun 15, 2021 at 1:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Still, I take your point that maybe we could ratchet down the cost of\n> exhaustive search by skimping on this part. Maybe it'd work to skip\n> bushy so long as we'd found at least one left-deep or right-deep path\n> for the current rel.\n\nYes, that sounds better.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Jun 2021 13:11:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: a path towards replacing GEQO with something better" }, { "msg_contents": "On Tue, Jun 15, 2021 at 12:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jun 9, 2021 at 9:24 PM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n> > 3) It actually improves the existing exhaustive search, because the\ncomplexity of the join order problem depends on the query shape: a \"chain\"\nshape (linear) vs. a \"star\" shape (as in star schema), for the most common\nexamples. The size of the DP table grows like this (for n >= 4):\n...\n> I don't quite understand the difference between the \"chain\" case and\n> the \"star\" case. Can you show sample queries for each one? e.g. SELECT\n> ... 
FROM a_1, a_2, ..., a_n WHERE <something>?\n\nThere's a very simple example in the optimizer README:\n\n--\nSELECT *\nFROM tab1, tab2, tab3, tab4\nWHERE tab1.col = tab2.col AND\n tab2.col = tab3.col AND\n tab3.col = tab4.col\n\nTables 1, 2, 3, and 4 are joined as:\n{1 2},{2 3},{3 4}\n{1 2 3},{2 3 4}\n{1 2 3 4}\n(other possibilities will be excluded for lack of join clauses)\n\nSELECT *\nFROM tab1, tab2, tab3, tab4\nWHERE tab1.col = tab2.col AND\n tab1.col = tab3.col AND\n tab1.col = tab4.col\n\nTables 1, 2, 3, and 4 are joined as:\n{1 2},{1 3},{1 4}\n{1 2 3},{1 3 4},{1 2 4}\n{1 2 3 4}\n--\n\nThe first one is chain, and the second is star. Four is the smallest set\nwhere we have a difference. I should now point out an imprecision in my\nlanguage: By \"size of the DP table\", the numbers in my first email refer to\nthe number of joinrels times the number of possible joins (not paths, and\nignoring commutativity). Here are the steps laid out, with cumulative\ncounts:\n\njoin_level, # joins, cumulative # joins:\n\nlinear, n=4\n 2 3 3\n 3 4 7\n 4 3 10\n\nstar, n=4\n 2 3 3\n 3 6 9\n 4 3 12\n\nAnd of course, the chain query also has three at the last level, because it\ntries two left- (or right-) deep joins and one bushy join.\n\n> One idea I just ran across in\n>\nhttps://15721.courses.cs.cmu.edu/spring2020/papers/22-costmodels/p204-leis.pdf\n> is to try to economize by skipping consideration of bushy plans.\n\nThat's a good paper, and it did influence my thinking.\n\nYou likely already know this, but for the archives: If only chain queries\ncould have bushy plans, it wouldn't matter because they are so cheap to\nenumerate. 
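The per-level counts above can be reproduced mechanically. Below is a toy Python sketch (an illustration only, not planner code; relations are numbered 0..n-1 and the join graph is an assumed set of edges) that counts, per DP level, the connected-subset splits that have a join clause across them:

```python
from itertools import combinations

def joins_per_level(n, edges):
    """Per joinrel size k, count unordered splits (S1, S2) of each
    connected k-subset of relations 0..n-1 where both halves are
    connected and some join clause crosses the split, i.e. no cross
    products are considered.  Toy enumeration only."""
    def connected(s):
        if len(s) <= 1:
            return True
        seen, todo = set(), [next(iter(s))]
        while todo:
            v = todo.pop()
            if v not in seen:
                seen.add(v)
                todo += [w for w in s if frozenset((v, w)) in edges]
        return seen == set(s)

    counts = {}
    for k in range(2, n + 1):
        counts[k] = 0
        for subset in map(frozenset, combinations(range(n), k)):
            if not connected(subset):
                continue
            done = set()
            for j in range(1, k):
                for left in map(frozenset, combinations(subset, j)):
                    right = subset - left
                    split = frozenset((left, right))
                    if split in done:
                        continue
                    done.add(split)
                    if (connected(left) and connected(right)
                            and any(frozenset((a, b)) in edges
                                    for a in left for b in right)):
                        counts[k] += 1
    return counts

chain = {frozenset((i, i + 1)) for i in range(3)}  # 0-1-2-3, like tab1..tab4
star = {frozenset((0, i)) for i in range(1, 4)}    # relation 0 is the hub
```

Running it on these two graphs reproduces the levels above -- {2: 3, 3: 4, 4: 3} for the chain and {2: 3, 3: 6, 4: 3} for the star, totals 10 and 12 -- and the totals continue to match (n^3 - n)/6 and (n - 1) * 2^(n - 2) for larger n.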
But, since star queries can introduce a large number of extra\njoins via equivalence (same join column or FK), making them resemble\n\"clique\" queries, bushy joins get excessively numerous.\n\n> We\n> could start doing that when some budget is exceeded, similar to what\n> you are proposing here, but probably the budget for skipping\n> consideration of bushy plans would be smaller than the budget for\n> switching to IDP. The idea of skipping bushy plan generation in some\n> cases makes sense to me intuitively because most of the plans\n> PostgreSQL generates are mostly left-deep, and even when we do\n> generate bushy plans, they're not always a whole lot better than the\n> nearest equivalent left-deep plan. The paper argues that considering\n> bushy plans makes measurable gains, but also that failure to consider\n> such plans isn't catastrophically bad.\n\nI think that makes sense. There are a few things we could do within the\n\"grey zone\" -- too many rels to finish exhaustive search, but not enough to\njustify starting directly with the greedy step -- to increase our chances\nof completing, and that's a very simple one.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 15 Jun 2021 14:15:56 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: a path towards replacing GEQO with something better" }, { "msg_contents": "On Tue, Jun 15, 2021 at 2:16 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> > I don't quite understand the difference between the \"chain\" case and\n> > the \"star\" case.
Can you show sample queries for each one? e.g. SELECT\n> > ... FROM a_1, a_2, ..., a_n WHERE <something>?\n>\n> SELECT  *\n> FROM    tab1, tab2, tab3, tab4\n> WHERE   tab1.col = tab2.col AND\n>     tab2.col = tab3.col AND\n>     tab3.col = tab4.col\n>\n> SELECT  *\n> FROM    tab1, tab2, tab3, tab4\n> WHERE   tab1.col = tab2.col AND\n>     tab1.col = tab3.col AND\n>     tab1.col = tab4.col\n\nI feel like these are completely equivalent. 
Either way, the planner\nis going to deduce that all the \".col\" columns are equal to each other\nvia the equivalence class machinery, and then the subsequent planning\nwill be absolutely identical. Or am I missing something?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Jun 2021 12:01:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: a path towards replacing GEQO with something better" }, { "msg_contents": "On Wed, Jun 16, 2021 at 12:01 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Jun 15, 2021 at 2:16 PM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > > I don't quite understand the difference between the \"chain\" case and\n> > > the \"star\" case. Can you show sample queries for each one? e.g. SELECT\n> > > ... FROM a_1, a_2, ..., a_n WHERE <something>?\n> >\n> > SELECT *\n> > FROM tab1, tab2, tab3, tab4\n> > WHERE tab1.col = tab2.col AND\n> > tab2.col = tab3.col AND\n> > tab3.col = tab4.col\n> >\n> > SELECT *\n> > FROM tab1, tab2, tab3, tab4\n> > WHERE tab1.col = tab2.col AND\n> > tab1.col = tab3.col AND\n> > tab1.col = tab4.col\n>\n> I feel like these are completely equivalent. Either way, the planner\n> is going to deduce that all the \".col\" columns are equal to each other\n> via the equivalence class machinery, and then the subsequent planning\n> will be absolutely identical. Or am I missing something?\n\nI believe the intention of the example is that \".col\" is a place holder for\nsome column (all different). Otherwise, with enough ECs it would result in\nan even bigger set of joinrels than what we see here. 
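To put a number on that, here is a toy sketch (an illustration with relations numbered 0-3, not planner code) counting connected subsets -- a stand-in for the joinrels a DP-style search must build -- for a four-relation chain, before and after transitive equality effectively completes the join graph into a clique:

```python
from itertools import combinations

def joinrels(n, edges):
    """Connected subsets of size >= 2: a toy stand-in for how many
    joinrels a DP-style search has to build for this join graph."""
    def connected(s):
        seen, todo = set(), [min(s)]
        while todo:
            v = todo.pop()
            if v not in seen:
                seen.add(v)
                todo += [w for w in s if frozenset((v, w)) in edges]
        return seen == set(s)
    return sum(1 for k in range(2, n + 1)
               for s in combinations(range(n), k) if connected(s))

# tab1.col = tab2.col, tab2.col = tab3.col, tab3.col = tab4.col
chain = {frozenset((i, i + 1)) for i in range(3)}
# what deduced equalities would imply: every pair of columns joinable
clique = {frozenset(p) for p in combinations(range(4), 2)}

print(joinrels(4, chain), joinrels(4, clique))   # → 6 11
```

Six joinrels for the plain chain become eleven once every pair is joinable, and that gap grows exponentially with more relations.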
If ECs don't actually\ncause additional joinrels to be created, then I'm missing something\nfundamental.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 16 Jun 2021 12:24:05 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: a path towards replacing GEQO with something better" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Wed, Jun 16, 2021 at 12:01 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> I feel like these are completely equivalent. 
Either way, the planner\n>> is going to deduce that all the \".col\" columns are equal to each other\n>> via the equivalence class machinery, and then the subsequent planning\n>> will be absolutely identical. Or am I missing something?\n\n> I believe the intention of the example is that \".col\" is a place holder for\n> some column (all different). Otherwise, with enough ECs it would result in\n> an even bigger set of joinrels than what we see here. If ECs don't actually\n> cause additional joinrels to be created, then I'm missing something\n> fundamental.\n\nYeah, I'm not sure I believe this distinction either. IMV a typical star\nschema is going to involve joins of dimension-table ID columns to\n*different* referencing columns of the fact table(s), so that you won't\nget large equivalence classes.\n\nThere certainly are cases where a query produces large equivalence classes\nthat will lead us to investigate a lot of join paths that we wouldn't have\nconsidered were it not for the EC-deduced join clauses. But I don't\nthink that scenario has much to do with star schemas.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Jun 2021 12:53:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: a path towards replacing GEQO with something better" }, { "msg_contents": "On Wed, Jun 16, 2021 at 12:01 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> I feel like these are completely equivalent. Either way, the planner\n> is going to deduce that all the \".col\" columns are equal to each other\n> via the equivalence class machinery, and then the subsequent planning\n> will be absolutely identical. 
Or am I missing something?\n\nOk, I've modified the examples so it reflects the distinction:\n\nA chain has join predicates linking relations in a linear sequence:\n\nSELECT  *\nFROM    tab1, tab2, tab3, tab4\nWHERE   tab1.a = tab2.b AND\n   \t    tab2.i = tab3.j AND\n        tab3.x = tab4.y\n\nA star has a hub with join predicates to multiple spokes:\n\nSELECT  *\nFROM    tab1, tab2, tab3, tab4\nWHERE   tab1.f1 = tab2.d1 AND\n        tab1.f2 = tab3.d2 AND\n        tab1.f3 = tab4.d3\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 16 Jun 2021 18:02:32 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: a path towards replacing GEQO with something better" }, { "msg_contents": "Hi,\n\nI stumbled across this which may be of interest to this topic and GEQO\nalternative.\n\t\nThe main creator/author of Neo and Bao (ML for Query Optimizer) Ryan Marcus\n(finishing Postdoc and looking for job) recently posted [1] about Bao for\ndistributed systems. 
\n\nBut what was interesting was the links he had to a 2020 Neo YouTube [2],\nwhich discussed how better cardinality estimation / 90% fewer errors (vs.\nPostgres 10) only improved query latency by 2-3%, and how other ML\napproaches made things worse in other scenarios.\n\nOther interesting takeaways from the video (summary):\n\nPostgreSQL Query Optimizer – 40k LOC. PG10 70% worse/slower than Oracle. PG\nhas 3 major flaws in QO, 1 fixed in PG11. Neo 10-25% better than PG QO after\n30hr training (using GPU). Neo drops to 10% better if 3 flaws were / could\nbe fixed.\n\nMS SQL – 1 million LOC.\n\nOracle – 45-55 FTEs working on it. No LOC given by Oracle. Appear to focus\non TPC-DS. NEO better than Oracle after 60hr training (using GPU).\n\nHumans and hand tuning will always beat ML. I.e. Neo (and Bao) are good for\nthose who cannot afford a fulltime DBA doing query optimizing.\n\n\nBao – follow-on work from Neo.\n“This is a prototype implementation of Bao for PostgreSQL. Bao is a learned\nquery optimizer that learns to \"steer\" the PostgreSQL optimizer by issuing\ncoarse-grained query hints. For more information about Bao”\n\nThe Bao GitHub repo is here [3], under the AGPLv3 license (not sure if\nthat’s good or bad).\n\nBao drawbacks… (but may not matter from a GEQO perspective??)\n\n“Of course, Bao does come with some drawbacks. Bao causes query optimization\nto take a little bit more time (~300ms), requiring quite a bit more\ncomputation. We studied this overhead in our SIGMOD paper. For data\nwarehouse workloads, which largely consists of long-running, resource\nintensive queries, Bao’s increased overhead is hardly noticeable. However,\nfor workloads with a lot of short running queries, like OLTP workloads, this\nmight not be the case. 
We are currently working on new approaches to\nmitigate that problem – so stay tuned!”\n\n\n[1] https://rmarcus.info/blog/2021/06/17/bao-distributed.html\n[2] https://www.youtube.com/watch?v=lMb1yNbIopc\nCardinality errors impact on latency - Starting at 8:00, interesting at\n10:10 approx.\n[3] https://github.com/learnedsystems/baoforpostgresql\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Sat, 19 Jun 2021 13:38:58 -0700 (MST)", "msg_from": "AJG <ayden@gera.co.nz>", "msg_from_op": false, "msg_subject": "Re: a path towards replacing GEQO with something better" }, { "msg_contents": "On Mon, Jun 14, 2021 at 12:10 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> >> I haven't read the [Kossmann & Stocker, 2000] paper yet, but the\n> >> [Neumann, 2018] paper seems to build on it, and it seems to work with\n> >> much larger subtrees of the join tree than k=5.\n> >\n> > Right, in particular it builds on \"IDP-2\" from Kossmann & Stocker. Okay,\n> > so Neumann's favorite algorithm stack \"Adaptive\" is complex, and I\n> > believe you are referring to cases where they can iteratively improve up\n> > to 100 rels at a time because of linearization. That's a separate\n> > algorithm (IKKBZ) that complicates the cost model and also cannot have\n> > outer joins. If it has outer joins, they use regular DP on subsets of\n> > size up to 10. It's not substantively different from IDP-2, and that's\n> > the one I'd like to try to gracefully fall back to. Or something\nsimilar.\n> >\n>\n> Yes, that's what I was referring to. You're right it's complex and we\n> don't need to implement all of that - certainly not on day one. 
The\n> linearization / IKKBZ seems interesting (even if just for inner joins),\n> but better to start with something generic.\n\nUpdate for future reference: The authors published a follow-up in 2019 in\nwhich they describe a way to allow non-inner joins to be considered during\nlinearization. Their scheme also allows for incorporating a limited number\nof cross products into the search in a safe way. Unsurprisingly,\nthese features add complexity, and I don't quite understand it yet, but it\nmight be worth evaluating in the future.\n\nhttps://btw.informatik.uni-rostock.de/download/tagungsband/B2-1.pdf\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n", "msg_date": "Thu, 24 Jun 2021 11:04:14 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: a path towards replacing GEQO with something better" } ]
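The dynamic-programming join-order search that this thread treats as the non-GEQO baseline (and that IDP-style algorithms fall back to for small subproblems) can be sketched in a few lines. This is an illustrative toy, not PostgreSQL code: the cost model (sum of input costs plus the product of input cardinalities), the relation names, and the cardinality numbers are all invented for the example.

```python
# Toy "DPsize" dynamic-programming join-order search: for ever-larger
# subsets of relations, keep the cheapest plan that builds that subset
# from two smaller, already-planned subsets linked by a join predicate.
from itertools import combinations

def dp_join_order(rels, edges, card):
    """rels: relation names; edges: set of frozenset({a, b}) join edges;
    card: per-relation cardinality estimates (invented numbers).
    Returns (cost, plan) for the cheapest plan joining all relations
    without cross products; plan is a nested (left, right) tuple."""
    best = {frozenset([r]): (0.0, r) for r in rels}
    for size in range(2, len(rels) + 1):
        for subset in map(frozenset, combinations(sorted(rels), size)):
            for k in range(1, size // 2 + 1):
                for left in map(frozenset, combinations(sorted(subset), k)):
                    right = subset - left
                    # both halves must already have a plan (i.e. be connected)
                    if left not in best or right not in best:
                        continue
                    # require at least one join predicate across the split
                    if not any(frozenset((a, b)) in edges
                               for a in left for b in right):
                        continue
                    lsz = rsz = 1
                    for r in left:
                        lsz *= card[r]
                    for r in right:
                        rsz *= card[r]
                    # toy cost: inputs' costs plus product of input sizes
                    cost = best[left][0] + best[right][0] + lsz * rsz
                    if subset not in best or cost < best[subset][0]:
                        best[subset] = (cost, (best[left][1], best[right][1]))
    return best[frozenset(rels)]

# The four-relation "chain" query shown earlier in the thread.
chain_edges = {frozenset(p) for p in [("tab1", "tab2"), ("tab2", "tab3"),
                                      ("tab3", "tab4")]}
cost, plan = dp_join_order(["tab1", "tab2", "tab3", "tab4"], chain_edges,
                           {"tab1": 10, "tab2": 100, "tab3": 10, "tab4": 100})
```

Enumerating every connected subset this way is exponential in the number of relations, which is why PostgreSQL switches to GEQO above geqo_threshold and why the thread discusses iterative and linearized variants for large join problems.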
[ { "msg_contents": "I thought it might be worth having this conversation before we branch for v15.\n\nIt seems we have no standard as to if we say \"a SQL\" or \"an SQL\".\n\nPersonally, I pronounce the language as es-que-ell, so I'd write \"an\nSQL\". If you say \"sequel\", then you'll think differently. The reason\nI do this is that the language was only briefly named sequel but was\nrenamed to SQL. For me calling it sequel seems wrong or out-dated. End\nof personal opinion.\n\nLet this thread not become the place where you tell me why I'm wrong.\nLet's just get some consensus on something, make a change then move\non.\n\nOverall we seem to mostly write \"a SQL\".\n\n~/pg_src$ git grep -E \"\\s(a|A)\\sSQL\\s\" | wc -l\n855\n~/pg_src$ git grep -E \"\\s(an|An)\\sSQL\\s\" | wc -l\n295\n\nHowever, we mostly use \"an SQL\" in the docs.\n\n~/pg_src$ cd doc/\n~/pg_src/doc$ git grep -E \"\\s(a|A)\\sSQL\\s\" | wc -l\n55\n~/pg_src/doc$ git grep -E \"\\s(an|An)\\sSQL\\s\" | wc -l\n94\n\nI think we should change all 55 instances of \"a SQL\" in the docs to\nuse \"an SQL\" and leave the 800 other instances of \"a SQL\" alone.\nChanging those does not seem worthwhile as it could cause\nback-patching pain.\n\nI mostly think that because of the fact that my personal opinion\nagrees with the majority of instances in the docs. Makes more sense to\nchange 55 places than 94 places.\n\nInteresting reading:\nhttp://patorjk.com/blog/2012/01/26/pronouncing-sql-s-q-l-or-sequel/\n\nFurther, there might be a few more in the docs that we might want to\nconsider changing:\n\ngit grep -E \"\\sa\\s(A|E|F|H|I|L|M|N|O|S|X)[A-Z]{2,5}\\s\"\n\nI see \"a FSM\", \"a FIFO\", \"a SSPI\", \"a SASL\", \"a MCV\", \"a SHA\", \"a SQLDA\"\n\nMy regex foo is not strong enough to think how I might find multiline instances.\n\nDavid\n\n\n", "msg_date": "Thu, 10 Jun 2021 19:26:40 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "\"an SQL\" vs. 
\"a SQL\"" }, { "msg_contents": "On 10.06.21 09:26, David Rowley wrote:\n> It seems we have no standard as to if we say \"a SQL\" or \"an SQL\".\n\nThe SQL standard uses \"an SQL-something\".\n\n> However, we mostly use \"an SQL\" in the docs.\n> \n> ~/pg_src$ cd doc/\n> ~/pg_src/doc$ git grep -E \"\\s(a|A)\\sSQL\\s\" | wc -l\n> 55\n> ~/pg_src/doc$ git grep -E \"\\s(an|An)\\sSQL\\s\" | wc -l\n> 94\n> \n> I think we should change all 55 instances of \"a SQL\" in the docs to\n> use \"an SQL\" and leave the 800 other instances of \"a SQL\" alone.\n> Changing those does not seem worthwhile as it could cause\n> back-patching pain.\n\nagreed\n\n> Further, there might be a few more in the docs that we might want to\n> consider changing:\n> \n> git grep -E \"\\sa\\s(A|E|F|H|I|L|M|N|O|S|X)[A-Z]{2,5}\\s\"\n> \n> I see \"a FSM\", \"a FIFO\", \"a SSPI\", \"a SASL\", \"a MCV\", \"a SHA\", \"a SQLDA\"\n> \n> My regex foo is not strong enough to think how I might find multiline instances.\n\nUm, of those, I pronounce FIFO, SASL, and SHA as words, with an \"a\" article.\n\n\n", "msg_date": "Thu, 10 Jun 2021 10:31:37 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: \"an SQL\" vs. 
\"a SQL\"" }, { "msg_contents": "On Thu, Jun 10, 2021 at 9:31 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 10.06.21 09:26, David Rowley wrote:\n> > It seems we have no standard as to if we say \"a SQL\" or \"an SQL\".\n>\n> The SQL standard uses \"an SQL-something\".\n>\n\nI use both commonly, but the argument for \"an S-Q-L ...\" is strong I think\n- and I definitely think consistency is good.\n\n\n>\n> > However, we mostly use \"an SQL\" in the docs.\n> >\n> > ~/pg_src$ cd doc/\n> > ~/pg_src/doc$ git grep -E \"\\s(a|A)\\sSQL\\s\" | wc -l\n> > 55\n> > ~/pg_src/doc$ git grep -E \"\\s(an|An)\\sSQL\\s\" | wc -l\n> > 94\n> >\n> > I think we should change all 55 instances of \"a SQL\" in the docs to\n> > use \"an SQL\" and leave the 800 other instances of \"a SQL\" alone.\n> > Changing those does not seem worthwhile as it could cause\n> > back-patching pain.\n>\n> agreed\n>\n\n+1 in general, though I would perhaps suggest extending to any user-visible\nmessages in the code. I don't think there's any point in messing with\ncomments etc. I'm not sure what that would do to the numbers though.\n\n\n>\n> > Further, there might be a few more in the docs that we might want to\n> > consider changing:\n> >\n> > git grep -E \"\\sa\\s(A|E|F|H|I|L|M|N|O|S|X)[A-Z]{2,5}\\s\"\n> >\n> > I see \"a FSM\", \"a FIFO\", \"a SSPI\", \"a SASL\", \"a MCV\", \"a SHA\", \"a SQLDA\"\n> >\n> > My regex foo is not strong enough to think how I might find multiline\n> instances.\n>\n> Um, of those, I pronounce FIFO, SASL, and SHA as words, with an \"a\"\n> article.\n>\n\nSame here. I've never heard anyone try to pronounce SSPI, so I would expect\nthat to be \"an SSPI ...\". 
The other remaining ones (FSM, MCV & SQLDA) I\nwould also argue aren't pronounceable, so should use the \"an\" article.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n\n", "msg_date": "Thu, 10 Jun 2021 09:54:12 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "> On 10 Jun 2021, at 10:54, Dave Page <dpage@pgadmin.org> wrote:\n\n> .. I would perhaps suggest extending to any user-visible messages in the code.\n\nI agree, consistent language between docs and user-facing messages is\nimportant.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 10 Jun 2021 10:58:41 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: \"an SQL\" vs. 
\"a SQL\"" }, { "msg_contents": "On Thu, Jun 10, 2021 at 1:27 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n>\n> I think we should change all 55 instances of \"a SQL\" in the docs to\n> use \"an SQL\" and leave the 800 other instances of \"a SQL\" alone.\n\n\n+1\n\nConsistency is good.\n\nRoberto\n\nOn Thu, Jun 10, 2021 at 1:27 AM David Rowley <dgrowleyml@gmail.com> wrote:I think we should change all 55 instances of \"a SQL\" in the docs to\nuse \"an SQL\" and leave the 800 other instances of \"a SQL\" alone.+1Consistency is good.Roberto", "msg_date": "Thu, 10 Jun 2021 08:26:49 -0600", "msg_from": "Roberto Mello <roberto.mello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "On 2021-Jun-10, David Rowley wrote:\n\n> I thought it might be worth having this conversation before we branch for v15.\n> \n> It seems we have no standard as to if we say \"a SQL\" or \"an SQL\".\n\nI was just reading the standard a couple of days ago and happened to\nnotice that the standard itself in some places uses \"a SQL\" and in other\nplaces \"an SQL\". 
I didn't stop to make an analysis of that, so I don't\nknow how prevalent each form is -- I just giggled and moved on.\n\n> My regex foo is not strong enough to think how I might find multiline instances.\n\nThis catches some of these:\n\nag \"\\sa[\\s*]*\\n[\\s*]*(A|E|F|H|I|L|M|N|O|S|X)[A-Z]{2,5}\\s\"\n\nYou get a bunch of \"a NULL\" or \"a NOT\" and so on, but here's a few valid ones:\n\ncontrib/tablefunc/tablefunc.c:316: * crosstab - create a crosstab of rowids and values columns from a\ncontrib/tablefunc/tablefunc.c:317: * SQL statement returning one rowid column, one category column,\n\ncontrib/tablefunc/tablefunc.c:607: * crosstab - create a crosstab of rowids and values columns from a\ncontrib/tablefunc/tablefunc.c:608: * SQL statement returning one rowid column, one category column,\n\ndoc/src/sgml/plpgsql.sgml\n1127: The result of a\n1128: SQL command yielding a single row (possibly of multiple\n\nsrc/backend/catalog/pg_subscription.c:438:\t\t\t * translator: first %s is a SQL ALTER command and second %s is a\nsrc/backend/catalog/pg_subscription.c:439:\t\t\t * SQL DROP command\n\nsrc/backend/replication/logical/logical.c:126:\t * 1) We need to be able to correctly and quickly identify the timeline a\nsrc/backend/replication/logical/logical.c:127:\t *\t LSN belongs to\n\nsrc/backend/libpq/auth.c:847:\t * has. If it's an MD5 hash, we must do MD5 authentication, and if it's a\nsrc/backend/libpq/auth.c:848:\t * SCRAM secret, we must do SCRAM authentication.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Thu, 10 Jun 2021 10:35:05 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: \"an SQL\" vs. 
\"a SQL\"" }, { "msg_contents": "On Fri, 11 Jun 2021 at 02:04, David Rowley <dgrowleyml@gmail.com> wrote:\n> I came up with the attached patch.\n\nFurther searching using:\n\ngit grep -E \"\\s(an|An)\\s(F|H|L|M|N|S|X)[A-Z]{2,5}\"\n\n(i.e vowel sounding, but not actually starting with a vowel then\nmanually looking for pronounceable ones.)\n\n- by a response from client in an SASLResponse message. The particulars of\n+ by a response from client in a SASLResponse message. The particulars of\n\n- An SHA1 hash of the random prefix and data is appended.\n+ A SHA1 hash of the random prefix and data is appended.\n\n- requires an MIT Kerberos installation and opens TCP/IP listen sockets.\n+ requires a MIT Kerberos installation and opens TCP/IP listen sockets.\n\nI think all of these should use \"a\" rather than \"an\".\n\nDavid\n\n\n", "msg_date": "Fri, 11 Jun 2021 02:42:34 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "On Thu, 10 Jun 2021 at 10:43, David Rowley <dgrowleyml@gmail.com> wrote:\n\n> - requires an MIT Kerberos installation and opens TCP/IP listen\n> sockets.\n> + requires a MIT Kerberos installation and opens TCP/IP listen\n> sockets.\n>\n> I think all of these should use \"a\" rather than \"an\".\n>\n\n“A MIT …”? As far as I know it is pronounced M - I - T, which would imply\nthat it should use “an”. The following page seems believable and is pretty\nunequivocal on the issue:\n\nhttps://mitadmissions.org/blogs/entry/como_se_dice/\n\nOn Thu, 10 Jun 2021 at 10:43, David Rowley <dgrowleyml@gmail.com> wrote:-       requires an MIT Kerberos installation and opens TCP/IP listen sockets.\n+       requires a MIT Kerberos installation and opens TCP/IP listen sockets.\n\nI think all of these should use \"a\" rather than \"an\".\n“A MIT …”? As far as I know it is pronounced M - I - T, which would imply that it should use “an”. 
The following page seems believable and is pretty\nunequivocal on the issue:\n\nhttps://mitadmissions.org/blogs/entry/como_se_dice/\n\n", "msg_date": "Thu, 10 Jun 2021 10:48:11 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "On Fri, 11 Jun 2021 at 02:35, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jun-10, David Rowley wrote:\n> > My regex foo is not strong enough to think how I might find multiline instances.\n>\n> This catches some of these:\n>\n> ag \"\\sa[\\s*]*\\n[\\s*]*(A|E|F|H|I|L|M|N|O|S|X)[A-Z]{2,5}\\s\"\n\nThanks. 
I've left all the .c file comments alone for no and looks like\nI got the doc/src/sgml/plpgsql.sgml one already.\n\nDavid\n\n\n", "msg_date": "Fri, 11 Jun 2021 02:51:23 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Jun-10, David Rowley wrote:\n>> It seems we have no standard as to if we say \"a SQL\" or \"an SQL\".\n\n> I was just reading the standard a couple of days ago and happened to\n> notice that the standard itself in some places uses \"a SQL\" and in other\n> places \"an SQL\". I didn't stop to make an analysis of that, so I don't\n> know how prevalent each form is -- I just giggled and moved on.\n\nIndeed. I think this is entirely pointless; there's zero hope that\nany consistency you might establish right now will persist very long.\nThe largest effect of this proposed patch will be to create\nback-patching headaches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Jun 2021 10:53:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "On Fri, 11 Jun 2021 at 02:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Indeed. I think this is entirely pointless; there's zero hope that\n> any consistency you might establish right now will persist very long.\n> The largest effect of this proposed patch will be to create\n> back-patching headaches.\n\nhmm. Yet we do have other standards which we do manage to maintain.\n\nI did limit the scope to just the docs and error messages. My thoughts\nwere that someone fudging a backpatch on the docs seems less likely to\ncause a nuclear meltdown than someone doing the same in .c code.\n\nDavid\n\n\n", "msg_date": "Fri, 11 Jun 2021 03:00:51 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"an SQL\" vs. 
\"a SQL\"" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Fri, 11 Jun 2021 at 02:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Indeed. I think this is entirely pointless; there's zero hope that\n>> any consistency you might establish right now will persist very long.\n\n> hmm. Yet we do have other standards which we do manage to maintain.\n\nIf there were some semblance of an overall consensus on the spelling,\nI'd be fine with weeding out the stragglers. But when the existing\nusages are only about 2-to-1 in one direction or the other, I feel\nquite confident in predicting that incoming patches are often going\nto get this wrong. Especially so if the convention you want to\nestablish in the docs is contrary to the majority usage in the code\ncomments --- how is that not going to confuse people?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Jun 2021 11:24:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "On Fri, 11 Jun 2021 at 03:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If there were some semblance of an overall consensus on the spelling,\n> I'd be fine with weeding out the stragglers. But when the existing\n> usages are only about 2-to-1 in one direction or the other, I feel\n> quite confident in predicting that incoming patches are often going\n> to get this wrong.\n\nI'm pretty sure you're right and we will get some inconsistencies\ncreeping back in. I'm not really sure why you think that will be hard\nto fix though. If we catch them soon enough then we won't need to\nworry about causing future backpatching pain.\n\n> Especially so if the convention you want to\n> establish in the docs is contrary to the majority usage in the code\n> comments --- how is that not going to confuse people?\n\nWhy would someone go and gawk at code comments to clear up their\nconfusion about what they should write in the docs? 
I think any sane\nperson that's looking for inspiration would look at the docs first.\n\nI really think it's worth the trouble here to be consistent in our\npublic-facing documents. When I read [1] earlier and the blog started\ntalking about Oracle documentation using sequel consistently before\ngoing on to talk about MySQL's documentation, I started to get a bit\nworried that the author might mention something about our lack of\nconsistency. I was glad to see they missed us out of that. However,\nmaybe that's because we are inconsistent.\n\nIf you really feel that strongly about not changing this then I can\ndrop this. However, I'll likely growl every time I see \"a SQL\" in the\ndocs from now on.\n\nDavid\n\n[1] http://patorjk.com/blog/2012/01/26/pronouncing-sql-s-q-l-or-sequel/\n\n\n", "msg_date": "Fri, 11 Jun 2021 03:52:46 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> If you really feel that strongly about not changing this then I can\n> drop this. However, I'll likely growl every time I see \"a SQL\" in the\n> docs from now on.\n\n[ shrug... ] I'm not going to stand in your way. However, I'm also\nunlikely to worry about this point when copy-editing docs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Jun 2021 12:04:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "On 11/06/21 2:48 am, Isaac Morland wrote:\n> On Thu, 10 Jun 2021 at 10:43, David Rowley <dgrowleyml@gmail.com \n> <mailto:dgrowleyml@gmail.com>> wrote:\n>\n> -      requires an MIT Kerberos installation and opens TCP/IP\n> listen sockets.\n> +       requires a MIT Kerberos installation and opens TCP/IP\n> listen sockets.\n>\n> I think all of these should use \"a\" rather than \"an\".\n>\n>\n> “A MIT …”? 
As far as I know it is pronounced M - I - T, which would \n> imply that it should use “an”. The following page seems believable and \n> is pretty unequivocal on the issue:\n>\n> https://mitadmissions.org/blogs/entry/como_se_dice/ \n> <https://mitadmissions.org/blogs/entry/como_se_dice/>\n>\nThe rule is, in English, is that if the word sounds like it starts with \na vowel then use 'an' rather than 'a'.  Though some people think that \nthe rule only applies to words beginning with a vowel, which is a \nmisunderstanding.\n\nSo 'an SQL' and 'an MIT'  are correct.   IMHO\n\n\nCheers,\nGavin\n\n\n\n", "msg_date": "Fri, 11 Jun 2021 08:10:49 +1200", "msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>", "msg_from_op": false, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "On Thu, 10 Jun 2021 at 16:11, Gavin Flower <GavinFlower@archidevsys.co.nz>\nwrote:\n\n> On 11/06/21 2:48 am, Isaac Morland wrote:\n>\n\n\n> > “A MIT …”? As far as I know it is pronounced M - I - T, which would\n> > imply that it should use “an”. The following page seems believable and\n> > is pretty unequivocal on the issue:\n> >\n> > https://mitadmissions.org/blogs/entry/como_se_dice/\n> > <https://mitadmissions.org/blogs/entry/como_se_dice/>\n> >\n> The rule is, in English, is that if the word sounds like it starts with\n> a vowel then use 'an' rather than 'a'. Though some people think that\n> the rule only applies to words beginning with a vowel, which is a\n> misunderstanding.\n>\n> So 'an SQL' and 'an MIT' are correct. IMHO\n>\n\nRight, spelling is irrelevant, it's about whether the word begins with a\nvowel *sound*. Or so I've always understood and I'm pretty sure if you\nlisten to what people actually say that's what you'll generally hear. So \"A\nuranium mine\" not \"An uranium mine\" since \"uranium\" begins with a \"y-\"\nsound just like \"yesterday\". 
The fact that \"u\" is a vowel is irrelevant.\nBut then there is \"an historic occasion\" so go figure.\n\nOn Thu, 10 Jun 2021 at 16:11, Gavin Flower <GavinFlower@archidevsys.co.nz> wrote:On 11/06/21 2:48 am, Isaac Morland wrote: > “A MIT …”? As far as I know it is pronounced M - I - T, which would \n> imply that it should use “an”. The following page seems believable and \n> is pretty unequivocal on the issue:\n>\n> https://mitadmissions.org/blogs/entry/como_se_dice/ \n> <https://mitadmissions.org/blogs/entry/como_se_dice/>\n>\nThe rule is, in English, is that if the word sounds like it starts with \na vowel then use 'an' rather than 'a'.  Though some people think that \nthe rule only applies to words beginning with a vowel, which is a \nmisunderstanding.\n\nSo 'an SQL' and 'an MIT'  are correct.   IMHORight, spelling is irrelevant, it's about whether the word begins with a vowel *sound*. Or so I've always understood and I'm pretty sure if you listen to what people actually say that's what you'll generally hear. So \"A uranium mine\" not \"An uranium mine\" since \"uranium\" begins with a \"y-\" sound just like \"yesterday\". The fact that \"u\" is a vowel is irrelevant. But then there is \"an historic occasion\" so go figure.", "msg_date": "Thu, 10 Jun 2021 16:17:55 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "On 11/06/21 8:17 am, Isaac Morland wrote:\n> On Thu, 10 Jun 2021 at 16:11, Gavin Flower \n> <GavinFlower@archidevsys.co.nz <mailto:GavinFlower@archidevsys.co.nz>> \n> wrote:\n>\n> On 11/06/21 2:48 am, Isaac Morland wrote:\n>\n> > “A MIT …”? As far as I know it is pronounced M - I - T, which would\n> > imply that it should use “an”. 
The following page seems\n> believable and\n> > is pretty unequivocal on the issue:\n> >\n> > https://mitadmissions.org/blogs/entry/como_se_dice/\n> <https://mitadmissions.org/blogs/entry/como_se_dice/>\n> > <https://mitadmissions.org/blogs/entry/como_se_dice/\n> <https://mitadmissions.org/blogs/entry/como_se_dice/>>\n> >\n> The rule is, in English, is that if the word sounds like it starts\n> with\n> a vowel then use 'an' rather than 'a'.  Though some people think that\n> the rule only applies to words beginning with a vowel, which is a\n> misunderstanding.\n>\n> So 'an SQL' and 'an MIT'  are correct.   IMHO\n>\n>\n> Right, spelling is irrelevant, it's about whether the word begins with \n> a vowel *sound*. Or so I've always understood and I'm pretty sure if \n> you listen to what people actually say that's what you'll generally \n> hear. So \"A uranium mine\" not \"An uranium mine\" since \"uranium\" begins \n> with a \"y-\" sound just like \"yesterday\". The fact that \"u\" is a vowel \n> is irrelevant. But then there is \"an historic occasion\" so go figure.\n>\nThe 'h' in 'historic' is silent, at least it used to be -- I think now \nit is almost silent.  So using 'an historic occasion' is correct.\n\n\n\n", "msg_date": "Fri, 11 Jun 2021 08:32:58 +1200", "msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>", "msg_from_op": false, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "Gavin Flower <GavinFlower@archidevsys.co.nz> writes:\n> On 11/06/21 8:17 am, Isaac Morland wrote:\n>> ... But then there is \"an historic occasion\" so go figure.\n\n> The 'h' in 'historic' is silent, at least it used to be -- I think now \n> it is almost silent. So using 'an historic occasion' is correct.\n\nIt's silent according to the Brits, I believe. 
In America, the\npronunciation varies.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Jun 2021 17:32:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "\nOn 6/10/21 5:32 PM, Tom Lane wrote:\n> Gavin Flower <GavinFlower@archidevsys.co.nz> writes:\n>> On 11/06/21 8:17 am, Isaac Morland wrote:\n>>> ... But then there is \"an historic occasion\" so go figure.\n>> The 'h' in 'historic' is silent, at least it used to be -- I think now \n>> it is almost silent. So using 'an historic occasion' is correct.\n> It's silent according to the Brits, I believe. In America, the\n> pronunciation varies.\n>\n> \t\t\t\n\n\nI suspect \"an historic\" is bordering on archaic even in the UK these days.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 10 Jun 2021 17:39:00 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "On Fri, 11 Jun 2021 at 02:48, Isaac Morland <isaac.morland@gmail.com> wrote:\n>\n> On Thu, 10 Jun 2021 at 10:43, David Rowley <dgrowleyml@gmail.com> wrote:\n>>\n>> - requires an MIT Kerberos installation and opens TCP/IP listen sockets.\n>> + requires a MIT Kerberos installation and opens TCP/IP listen sockets.\n>>\n>> I think all of these should use \"a\" rather than \"an\".\n>\n>\n> “A MIT …”? As far as I know it is pronounced M - I - T, which would imply that it should use “an”. The following page seems believable and is pretty unequivocal on the issue:\n>\n> https://mitadmissions.org/blogs/entry/como_se_dice/\n\nOops. I'm not sure what I was thinking there. 
I'd just been listening\nto something in German, so maybe I'd had the German word in mind\ninstead.\n\nDavid\n\n\n", "msg_date": "Fri, 11 Jun 2021 11:26:28 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "On Fri, 11 Jun 2021 at 09:39, Andrew Dunstan <andrew@dunslane.net> wrote:\n> I suspect \"an historic\" is bordering on archaic even in the UK these days.\n\nYeah, that's a weird one. Maybe\nhttps://en.wikipedia.org/wiki/H-dropping is to blame.\n\nDavid\n\n\n", "msg_date": "Fri, 11 Jun 2021 11:38:02 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "On Fri, 11 Jun 2021 at 04:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> However, I'm also\n> unlikely to worry about this point when copy-editing docs.\n\nI'm sorry to hear that. Maybe keeping this consistent will be one of\nthose endless jobs like keeping the source code pgindented. We still\ntry to keep that in order despite the audience for the source code\nbeing much smaller than the audience for our documents.\n\nAnyway, I'll set an alarm for this time next year so I can check on\nhow many inconsistencies have crept back in over the development\ncycle.\n\nIn the meantime, I've pushed the fixes to master.\n\nDavid\n\n\n", "msg_date": "Fri, 11 Jun 2021 13:44:40 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "On Thu, Jun 10, 2021 at 05:39:00PM -0400, Andrew Dunstan wrote:\n> I suspect \"an historic\" is bordering on archaic even in the UK these days.\n\nDon't trigger me on the difference between \"historic\" and \"historical\"! 
;-)\n\n(Hey, not every day I get to trim quoted text to one line --- see recent\npgsql-general discussion of the topic.)\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 11 Jun 2021 12:34:38 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "On Thu, 10 Jun 2021, 15:35 Alvaro Herrera, <alvherre@alvh.no-ip.org> wrote:\n\n> src/backend/libpq/auth.c:847: * has. If it's an MD5 hash, we must do\n> MD5 authentication, and if it's a\n> src/backend/libpq/auth.c:848: * SCRAM secret, we must do SCRAM\n> authentication.\n>\n\nNot sure whether you were just listing examples and you weren't suggesting\nthis should be changed, but surely \"SCRAM\" is pronounced \"scram\" and is\nthus \"a SCRAM\"?\n\nGeoff", "msg_date": "Sun, 13 Jun 2021 07:36:54 +0100", "msg_from": "Geoff Winkless <pgsqladmin@geoff.dj>", "msg_from_op": false, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "On Sun, Jun 13, 2021 at 07:36:54AM +0100, Geoff Winkless wrote:\n> On Thu, 10 Jun 2021, 15:35 Alvaro Herrera, <alvherre@alvh.no-ip.org> wrote:\n>> src/backend/libpq/auth.c:847: * has. 
If it's an MD5 hash, we must do\n>> MD5 authentication, and if it's a\n>> src/backend/libpq/auth.c:848: * SCRAM secret, we must do SCRAM\n>> authentication.\n> \n> Not sure whether you were just listing examples and you weren't suggesting\n> this should be changed, but surely \"SCRAM\" is pronounced \"scram\" and is\n> thus \"a SCRAM\"?\n\nRFC 5802 uses \"a SCRAM something\" commonly, but \"a SCRAM\" alone does\nnot make sense:\nhttps://datatracker.ietf.org/doc/html/rfc5802\n\nThe sentences quoted above look fine to me.\n--\nMichael", "msg_date": "Sun, 13 Jun 2021 20:13:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "\nOn 6/13/21 7:13 AM, Michael Paquier wrote:\n> On Sun, Jun 13, 2021 at 07:36:54AM +0100, Geoff Winkless wrote:\n>> On Thu, 10 Jun 2021, 15:35 Alvaro Herrera, <alvherre@alvh.no-ip.org> wrote:\n>>> src/backend/libpq/auth.c:847: * has. If it's an MD5 hash, we must do\n>>> MD5 authentication, and if it's a\n>>> src/backend/libpq/auth.c:848: * SCRAM secret, we must do SCRAM\n>>> authentication.\n>> Not sure whether you were just listing examples and you weren't suggesting\n>> this should be changed, but surely \"SCRAM\" is pronounced \"scram\" and is\n>> thus \"a SCRAM\"?\n> RFC 5802 uses \"a SCRAM something\" commonly, but \"a SCRAM\" alone does\n> not make sense:\n> https://datatracker.ietf.org/doc/html/rfc5802\n>\n> The sentences quoted above look fine to me.\n\n\nI don't think anyone was suggesting SCRAM should be used as a noun\nrather than as an adjective. But adjectives can be preceded by an\nindefinite article just as nouns can. The discussion simply left out the\nimplied following noun.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 13 Jun 2021 07:36:28 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: \"an SQL\" vs. 
\"a SQL\"" }, { "msg_contents": "On Fri, 11 Jun 2021 at 13:44, David Rowley <dgrowleyml@gmail.com> wrote:\n> Anyway, I'll set an alarm for this time next year so I can check on\n> how many inconsistencies have crept back in over the development\n> cycle.\n\nThat alarm went off today.\n\nThere seem to be only 3 \"a SQL\"s in the docs to change to \"an SQL\".\n\nThis is a pretty old thread, so here's a link [1] to the discussion.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvpML27UqFXnrYO1MJddsKVMQoiZisPvsAGhKE_tsKXquw@mail.gmail.com", "msg_date": "Tue, 11 Apr 2023 17:43:04 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "On Tue, Apr 11, 2023 at 05:43:04PM +1200, David Rowley wrote:\n> That alarm went off today.\n> \n> There seem to be only 3 \"a SQL\"s in the docs to change to \"an SQL\".\n> \n> This is a pretty old thread, so here's a link [1] to the discussion.\n\nGood catches!\n--\nMichael", "msg_date": "Tue, 11 Apr 2023 15:00:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "On Tue, 11 Apr 2023 at 17:43, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 11 Jun 2021 at 13:44, David Rowley <dgrowleyml@gmail.com> wrote:\n> > Anyway, I'll set an alarm for this time next year so I can check on\n> > how many inconsistencies have crept back in over the development\n> > cycle.\n>\n> That alarm went off today.\n>\n> There seem to be only 3 \"a SQL\"s in the docs to change to \"an SQL\".\n>\n> This is a pretty old thread, so here's a link [1] to the discussion.\n>\n> [1] https://postgr.es/m/CAApHDvpML27UqFXnrYO1MJddsKVMQoiZisPvsAGhKE_tsKXquw@mail.gmail.com\n\nLink to the old thread above.\n\nThere's just 1 instance of \"a SQL\" that crept into PG16 after\nd866f0374. 
This probably means I'd be better off doing this in June a\nfew weeks before branching...\n\nPatch attached.\n\nDavid", "msg_date": "Tue, 9 Apr 2024 16:18:00 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" }, { "msg_contents": "On Tue, 9 Apr 2024 at 16:18, David Rowley <dgrowleyml@gmail.com> wrote:\n> There's just 1 instance of \"a SQL\" that crept into PG16 after\n> d866f0374. This probably means I'd be better off doing this in June a\n> few weeks before branching...\n>\n> Patch attached.\n\nPushed.\n\nDavid\n\n\n", "msg_date": "Wed, 10 Apr 2024 11:58:04 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"an SQL\" vs. \"a SQL\"" } ]
[ { "msg_contents": "The following bug has been logged on the website:\n\nBug reference: 17056\nLogged by: Alexander Lakhin\nEmail address: exclusion@gmail.com\nPostgreSQL version: 14beta1\nOperating system: Ubuntu 20.04\nDescription: \n\nWhen executing the following query (based on excerpt from\nforeign_data.sql):\r\n\r\nCREATE FOREIGN DATA WRAPPER dummy;\r\nCREATE SERVER s0 FOREIGN DATA WRAPPER dummy;\r\nCREATE FOREIGN TABLE ft1 (c1 integer NOT NULL) SERVER s0;\r\nALTER FOREIGN TABLE ft1 ADD COLUMN c8 integer DEFAULT 0;\r\nALTER FOREIGN TABLE ft1 ALTER COLUMN c8 TYPE char(10);\r\n\r\nThe server crashes with the stack trace:\r\n\r\nCore was generated by `postgres: law regression [local] ALTER FOREIGN TABLE \n '.\r\nProgram terminated with signal SIGSEGV, Segmentation fault.\r\n#0 pg_detoast_datum (datum=0x0) at fmgr.c:1724\r\n1724 if (VARATT_IS_EXTENDED(datum))\r\n(gdb) bt\r\n#0 pg_detoast_datum (datum=0x0) at fmgr.c:1724\r\n#1 0x000055f03f919267 in construct_md_array\n(elems=elems@entry=0x7ffc24b6c3f0, nulls=nulls@entry=0x0, \r\n ndims=ndims@entry=1, dims=dims@entry=0x7ffc24b6c340,\nlbs=lbs@entry=0x7ffc24b6c344, elmtype=elmtype@entry=1042, \r\n elmlen=-1, elmbyval=false, elmalign=105 'i') at arrayfuncs.c:3397\r\n#2 0x000055f03f91952f in construct_array (elems=elems@entry=0x7ffc24b6c3f0,\nnelems=nelems@entry=1, \r\n elmtype=elmtype@entry=1042, elmlen=<optimized out>, elmbyval=<optimized\nout>, elmalign=<optimized out>)\r\n at arrayfuncs.c:3328\r\n#3 0x000055f03f6f3db7 in ATExecAlterColumnType (tab=0x7ffc24b6c400,\ntab@entry=0x55f03ff27a20, \r\n rel=rel@entry=0x7f2035994618, cmd=<optimized out>,\nlockmode=lockmode@entry=8) at tablecmds.c:12276\r\n#4 0x000055f03f705f24 in ATExecCmd (wqueue=wqueue@entry=0x7ffc24b6c700,\ntab=tab@entry=0x55f03ff27a20, \r\n cmd=<optimized out>, lockmode=lockmode@entry=8,\ncur_pass=cur_pass@entry=1, context=context@entry=0x7ffc24b6c810)\r\n at tablecmds.c:4985\r\n#5 0x000055f03f7063bb in 
ATRewriteCatalogs\n(wqueue=wqueue@entry=0x7ffc24b6c700, lockmode=lockmode@entry=8, \r\n context=context@entry=0x7ffc24b6c810) at\n../../../src/include/nodes/nodes.h:604\r\n#6 0x000055f03f706618 in ATController\n(parsetree=parsetree@entry=0x55f03fe163d8, rel=rel@entry=0x7f2035994618, \r\n cmds=0x55f03fe163a0, recurse=true, lockmode=lockmode@entry=8,\ncontext=context@entry=0x7ffc24b6c810)\r\n at tablecmds.c:4376\r\n#7 0x000055f03f7066a2 in AlterTable (stmt=stmt@entry=0x55f03fe163d8,\nlockmode=lockmode@entry=8, \r\n context=context@entry=0x7ffc24b6c810) at tablecmds.c:4023\r\n#8 0x000055f03f8f7d47 in ProcessUtilitySlow\n(pstate=pstate@entry=0x55f03ff278b0, pstmt=pstmt@entry=0x55f03fe166e8, \r\n queryString=queryString@entry=0x55f03fe15690 \"ALTER FOREIGN TABLE ft1\nALTER COLUMN c8 TYPE char(10);\", \r\n context=context@entry=PROCESS_UTILITY_TOPLEVEL, params=params@entry=0x0,\nqueryEnv=queryEnv@entry=0x0, \r\n dest=0x55f03fe167b8, qc=0x7ffc24b6cd20) at utility.c:1284\r\n#9 0x000055f03f8f77bf in standard_ProcessUtility (pstmt=0x55f03fe166e8, \r\n queryString=0x55f03fe15690 \"ALTER FOREIGN TABLE ft1 ALTER COLUMN c8 TYPE\nchar(10);\", \r\n context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\ndest=0x55f03fe167b8, qc=0x7ffc24b6cd20)\r\n at utility.c:1034\r\n#10 0x000055f03f8f789e in ProcessUtility (pstmt=pstmt@entry=0x55f03fe166e8,\nqueryString=<optimized out>, \r\n context=context@entry=PROCESS_UTILITY_TOPLEVEL, params=<optimized out>,\nqueryEnv=<optimized out>, \r\n dest=dest@entry=0x55f03fe167b8, qc=0x7ffc24b6cd20) at utility.c:525\r\n#11 0x000055f03f8f3c65 in PortalRunUtility\n(portal=portal@entry=0x55f03fe790f0, pstmt=pstmt@entry=0x55f03fe166e8, \r\n isTopLevel=isTopLevel@entry=true,\nsetHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x55f03fe167b8,\n\r\n qc=qc@entry=0x7ffc24b6cd20) at pquery.c:1159\r\n#12 0x000055f03f8f48c0 in PortalRunMulti\n(portal=portal@entry=0x55f03fe790f0, isTopLevel=isTopLevel@entry=true, \r\n 
setHoldSnapshot=setHoldSnapshot@entry=false,\ndest=dest@entry=0x55f03fe167b8, altdest=altdest@entry=0x55f03fe167b8, \r\n qc=qc@entry=0x7ffc24b6cd20) at pquery.c:1305\r\n#13 0x000055f03f8f559b in PortalRun (portal=portal@entry=0x55f03fe790f0,\ncount=count@entry=9223372036854775807, \r\n isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true,\ndest=dest@entry=0x55f03fe167b8, \r\n altdest=altdest@entry=0x55f03fe167b8, qc=0x7ffc24b6cd20) at\npquery.c:779\r\n#14 0x000055f03f8f1825 in exec_simple_query (\r\n query_string=query_string@entry=0x55f03fe15690 \"ALTER FOREIGN TABLE ft1\nALTER COLUMN c8 TYPE char(10);\")\r\n at postgres.c:1214\r\n#15 0x000055f03f8f37f7 in PostgresMain (argc=argc@entry=1,\nargv=argv@entry=0x7ffc24b6cf10, dbname=<optimized out>, \r\n username=<optimized out>) at postgres.c:4486\r\n#16 0x000055f03f84ee79 in BackendRun (port=port@entry=0x55f03fe36d20) at\npostmaster.c:4491\r\n#17 0x000055f03f852008 in BackendStartup (port=port@entry=0x55f03fe36d20) at\npostmaster.c:4213\r\n#18 0x000055f03f85224f in ServerLoop () at postmaster.c:1745\r\n#19 0x000055f03f85379c in PostmasterMain (argc=3, argv=<optimized out>) at\npostmaster.c:1417\r\n#20 0x000055f03f7949f9 in main (argc=3, argv=0x55f03fe0f950) at main.c:209", "msg_date": "Thu, 10 Jun 2021 20:00:01 +0000", "msg_from": "PG Bug reporting form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "BUG #17056: Segmentation fault on altering the type of the foreign\n table column with a default" }, { "msg_contents": "PG Bug reporting form <noreply@postgresql.org> writes:\n> CREATE FOREIGN DATA WRAPPER dummy;\n> CREATE SERVER s0 FOREIGN DATA WRAPPER dummy;\n> CREATE FOREIGN TABLE ft1 (c1 integer NOT NULL) SERVER s0;\n> ALTER FOREIGN TABLE ft1 ADD COLUMN c8 integer DEFAULT 0;\n> ALTER FOREIGN TABLE ft1 ALTER COLUMN c8 TYPE char(10);\n\nHmm. The equivalent DDL on a plain table works fine, but this is\ncrashing in the code that manipulates attmissingval. 
I suspect some\nconfusion about whether a foreign table column should even *have*\nattmissingval. Andrew, any thoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Jun 2021 18:10:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #17056: Segmentation fault on altering the type of the\n foreign table column with a default" }, { "msg_contents": "\nOn 6/10/21 6:10 PM, Tom Lane wrote:\n> PG Bug reporting form <noreply@postgresql.org> writes:\n>> CREATE FOREIGN DATA WRAPPER dummy;\n>> CREATE SERVER s0 FOREIGN DATA WRAPPER dummy;\n>> CREATE FOREIGN TABLE ft1 (c1 integer NOT NULL) SERVER s0;\n>> ALTER FOREIGN TABLE ft1 ADD COLUMN c8 integer DEFAULT 0;\n>> ALTER FOREIGN TABLE ft1 ALTER COLUMN c8 TYPE char(10);\n> Hmm. The equivalent DDL on a plain table works fine, but this is\n> crashing in the code that manipulates attmissingval. I suspect some\n> confusion about whether a foreign table column should even *have*\n> attmissingval. Andrew, any thoughts?\n>\n> \t\t\t\n\n\nMy initial thought would be that it should not. If the foreign table has\nrows with missing columns then it should be up to the foreign server to\nsupply them transparently. We have no notion what the foreign semantics\nof missing columns are.\n\n\nI can take a look at a fix tomorrow. My inclination would be simply to\nskip setting attmissingval for foreign tables.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 10 Jun 2021 18:55:05 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: BUG #17056: Segmentation fault on altering the type of the\n foreign table column with a default" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 6/10/21 6:10 PM, Tom Lane wrote:\n>> Hmm. The equivalent DDL on a plain table works fine, but this is\n>> crashing in the code that manipulates attmissingval. 
I suspect some\n>> confusion about whether a foreign table column should even *have*\n>> attmissingval. Andrew, any thoughts?\n\n> My initial thought would be that it should not. If the foreign table has\n> rows with missing columns then it should be up to the foreign server to\n> supply them transparently. We have no notion what the foreign semantics\n> of missing columns are.\n\nYeah, that was kind of what I thought. Probably only RELKIND_RELATION\nrels should ever have attmissingval; but certainly, anything without\nlocal storage should not.\n\n> I can take a look at a fix tomorrow. My inclination would be simply to\n> skip setting attmissingval for foreign tables.\n\nSeems like in addition to that, we'll need a defense in this specific\ncode to cope with the case where the foreign column already has an\nattmissingval. Or maybe, the logic to not store a new one will be enough\nto keep us from reaching this crash; but we need to be sure it is enough.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Jun 2021 19:11:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #17056: Segmentation fault on altering the type of the\n foreign table column with a default" }, { "msg_contents": "\nOn 6/10/21 7:11 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 6/10/21 6:10 PM, Tom Lane wrote:\n>>> Hmm. The equivalent DDL on a plain table works fine, but this is\n>>> crashing in the code that manipulates attmissingval. I suspect some\n>>> confusion about whether a foreign table column should even *have*\n>>> attmissingval. Andrew, any thoughts?\n>> My initial thought would be that it should not. If the foreign table has\n>> rows with missing columns then it should be up to the foreign server to\n>> supply them transparently. We have no notion what the foreign semantics\n>> of missing columns are.\n> Yeah, that was kind of what I thought. 
Probably only RELKIND_RELATION\n> rels should ever have attmissingval; but certainly, anything without\n> local storage should not.\n>\n>> I can take a look at a fix tomorrow. My inclination would be simply to\n>> skip setting attmissingval for foreign tables.\n> Seems like in addition to that, we'll need a defense in this specific\n> code to cope with the case where the foreign column already has an\n> attmissingval. Or maybe, the logic to not store a new one will be enough\n> to keep us from reaching this crash; but we need to be sure it is enough.\n\n\nThe first piece could be fairly simply accomplished by something like this\n\ndiff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c\nindex afa830d924..ac89efefe8 100644\n--- a/src/backend/catalog/heap.c\n+++ b/src/backend/catalog/heap.c\n@@ -2287,7 +2287,8 @@ StoreAttrDefault(Relation rel, AttrNumber attnum,\n        valuesAtt[Anum_pg_attribute_atthasdef - 1] = true;\n        replacesAtt[Anum_pg_attribute_atthasdef - 1] = true;\n \n-       if (add_column_mode && !attgenerated)\n+       if (rel->rd_rel->relkind == RELKIND_RELATION  && add_column_mode &&\n+           !attgenerated)\n        {\n            expr2 = expression_planner(expr2);\n            estate = CreateExecutorState();\n\n\nI'm guessing we want to exclude materialized views and partitioned\ntables as well as things without local storage.\n\nHow to ignore something that's got into the catalog that shouldn't be\nthere is less clear. 
At the point where we fetch missing values all we\nhave access to is a TupleDesc.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 11 Jun 2021 17:59:07 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: BUG #17056: Segmentation fault on altering the type of the\n foreign table column with a default" }, { "msg_contents": "\nOn 6/11/21 5:59 PM, Andrew Dunstan wrote:\n>\n> How to ignore something that's got into the catalog that shouldn't be\n> there is less clear. At the point where we fetch missing values all we\n> have access to is a TupleDesc.\n>\n>\n\nOn further reflection I guess we'll need to make that check at the point\nwhere we fill in the TupleDesc.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 11 Jun 2021 18:03:28 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: BUG #17056: Segmentation fault on altering the type of the\n foreign table column with a default" }, { "msg_contents": "On 6/10/21 7:11 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 6/10/21 6:10 PM, Tom Lane wrote:\n>>> Hmm. The equivalent DDL on a plain table works fine, but this is\n>>> crashing in the code that manipulates attmissingval. I suspect some\n>>> confusion about whether a foreign table column should even *have*\n>>> attmissingval. Andrew, any thoughts?\n>> My initial thought would be that it should not. If the foreign table has\n>> rows with missing columns then it should be up to the foreign server to\n>> supply them transparently. We have no notion what the foreign semantics\n>> of missing columns are.\n> Yeah, that was kind of what I thought. Probably only RELKIND_RELATION\n> rels should ever have attmissingval; but certainly, anything without\n> local storage should not.\n>\n>> I can take a look at a fix tomorrow. 
My inclination would be simply to\n>> skip setting attmissingval for foreign tables.\n> Seems like in addition to that, we'll need a defense in this specific\n> code to cope with the case where the foreign column already has an\n> attmissingval. Or maybe, the logic to not store a new one will be enough\n> to keep us from reaching this crash; but we need to be sure it is enough.\n\n\nOk, I think the attached is the least we need to do. Given this I\nhaven't been able to induce a crash even when the catalog is hacked with\nbogus missing values on a foreign table. But I'm not 100% convinced I\nhave fixed all the places that need to be fixed.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sat, 12 Jun 2021 17:40:38 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: BUG #17056: Segmentation fault on altering the type of the\n foreign table column with a default" }, { "msg_contents": "Hi,\n\nOn 2021-06-12 17:40:38 -0400, Andrew Dunstan wrote:\n> Ok, I think the attached is the least we need to do. Given this I\n> haven't been able to induce a crash even when the catalog is hacked with\n> bogus missing values on a foreign table. But I'm not 100% convinced I\n> have fixed all the places that need to be fixed.\n\nHm. There's a few places that look at atthasmissing and just assume that\nthere's corresponding information about the missing field. And as far as\nI can see the proposed changes in RelationBuildTupleDesc() don't unset\natthasmissing, they just prevent the constraint part of the tuple desc\nfrom being filled. 
Wouldn't this cause problems if we reach code like\n\nDatum\ngetmissingattr(TupleDesc tupleDesc,\n\t\t\t int attnum, bool *isnull)\n{\n\tForm_pg_attribute att;\n\n\tAssert(attnum <= tupleDesc->natts);\n\tAssert(attnum > 0);\n\n\tatt = TupleDescAttr(tupleDesc, attnum - 1);\n\n\tif (att->atthasmissing)\n\t{\n\t\tAttrMissing *attrmiss;\n\n\t\tAssert(tupleDesc->constr);\n\t\tAssert(tupleDesc->constr->missing);\n\n\t\tattrmiss = tupleDesc->constr->missing + (attnum - 1);\n\n\t\tif (attrmiss->am_present)\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 12 Jun 2021 14:50:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: BUG #17056: Segmentation fault on altering the type of the\n foreign table column with a default" }, { "msg_contents": "On 6/12/21 5:50 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-06-12 17:40:38 -0400, Andrew Dunstan wrote:\n>> Ok, I think the attached is the least we need to do. Given this I\n>> haven't been able to induce a crash even when the catalog is hacked with\n>> bogus missing values on a foreign table. But I'm not 100% convinced I\n>> have fixed all the places that need to be fixed.\n> Hm. There's a few places that look at atthasmissing and just assume that\n> there's corresponding information about the missing field. And as far as\n> I can see the proposed changes in RelationBuildTupleDesc() don't unset\n> atthasmissing, they just prevent the constraint part of the tuple desc\n> from being filled. Wouldn't this cause problems if we reach code like\n>\n\nYes, you're right. 
This version should take care of things better.\n\n\nThanks for looking.\n\n\ncheers\n\n\nandrew", "msg_date": "Sat, 12 Jun 2021 21:59:24 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: BUG #17056: Segmentation fault on altering the type of the\n foreign table column with a default" }, { "msg_contents": "reposting to -hackers to get more eyeballs.\n\n\nSummary: only RELKIND_RELATION type relations should have attributes\nwith atthasmissing/attmissingval\n\n\n\n-------- Forwarded Message --------\nSubject: \tRe: BUG #17056: Segmentation fault on altering the type of the\nforeign table column with a default\nDate: \tSat, 12 Jun 2021 21:59:24 -0400\nFrom: \tAndrew Dunstan <andrew@dunslane.net>\nTo: \tAndres Freund <andres@anarazel.de>\nCC: \tTom Lane <tgl@sss.pgh.pa.us>, exclusion@gmail.com,\npgsql-bugs@lists.postgresql.org\n\n\n\n\nOn 6/12/21 5:50 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-06-12 17:40:38 -0400, Andrew Dunstan wrote:\n>> Ok, I think the attached is the least we need to do. Given this I\n>> haven't been able to induce a crash even when the catalog is hacked with\n>> bogus missing values on a foreign table. But I'm not 100% convinced I\n>> have fixed all the places that need to be fixed.\n> Hm. There's a few places that look at atthasmissing and just assume that\n> there's corresponding information about the missing field. And as far as\n> I can see the proposed changes in RelationBuildTupleDesc() don't unset\n> atthasmissing, they just prevent the constraint part of the tuple desc\n> from being filled. Wouldn't this cause problems if we reach code like\n>\n\nYes, you're right. 
This version should take care of things better.\n\n\nThanks for looking.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 14 Jun 2021 07:33:39 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Fwd: BUG #17056: Segmentation fault on altering the type of the\n foreign table column with a default" }, { "msg_contents": "On 2021-Jun-12, Andrew Dunstan wrote:\n\n> +\t/* Don't do anything unless it's a RELKIND type relation */\n> +\tif (tablerel->rd_rel->relkind != RELKIND_RELATION)\n> +\t{\n> +\t\ttable_close(tablerel, AccessExclusiveLock);\n> +\t\treturn;\n> +\t}\n\n\"RELKIND type relation\" is the wrong phrase ... maybe \"it's a plain\ntable\" is good enough? (Ditto in RelationBuildTupleDesc).\n\n> \t/*\n> \t * Here we go --- change the recorded column type and collation. (Note\n> \t * heapTup is a copy of the syscache entry, so okay to scribble on.) First\n> -\t * fix up the missing value if any.\n> +\t * fix up the missing value if any. There shouldn't be any missing values\n> +\t * for anything except RELKIND_RELATION relations, but if there are, ignore\n> +\t * them.\n> \t */\n> -\tif (attTup->atthasmissing)\n> +\tif (rel->rd_rel->relkind == RELKIND_RELATION && attTup->atthasmissing)\n\nWould it be sensible to have a macro \"AttributeHasMissingVal(rel,\nattTup)\", to use instead of reading atthasmissing directly? 
The macro\nwould check the relkind, and also serve as documentation that said check\nis necessary.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Uno puede defenderse de los ataques; contra los elogios se esta indefenso\"\n\n\n", "msg_date": "Mon, 14 Jun 2021 15:13:21 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: BUG #17056: Segmentation fault on altering the type of the\n foreign table column with a default" }, { "msg_contents": "\nOn 6/14/21 3:13 PM, Alvaro Herrera wrote:\n> On 2021-Jun-12, Andrew Dunstan wrote:\n>\n>> +\t/* Don't do anything unless it's a RELKIND type relation */\n>> +\tif (tablerel->rd_rel->relkind != RELKIND_RELATION)\n>> +\t{\n>> +\t\ttable_close(tablerel, AccessExclusiveLock);\n>> +\t\treturn;\n>> +\t}\n> \"RELKIND type relation\" is the wrong phrase ... maybe \"it's a plain\n> table\" is good enough? (Ditto in RelationBuildTupleDesc).\n\n\n\nOK, will change.\n\n\n\n>\n>> \t/*\n>> \t * Here we go --- change the recorded column type and collation. (Note\n>> \t * heapTup is a copy of the syscache entry, so okay to scribble on.) First\n>> -\t * fix up the missing value if any.\n>> +\t * fix up the missing value if any. There shouldn't be any missing values\n>> +\t * for anything except RELKIND_RELATION relations, but if there are, ignore\n>> +\t * them.\n>> \t */\n>> -\tif (attTup->atthasmissing)\n>> +\tif (rel->rd_rel->relkind == RELKIND_RELATION && attTup->atthasmissing)\n> Would it be sensible to have a macro \"AttributeHasMissingVal(rel,\n> attTup)\", to use instead of reading atthasmissing directly? 
The macro\n> would check the relkind, and also serve as documentation that said check\n> is necessary.\n\n\n\nWell AFAIK this is the only place we actually need this combination of\ntests, and I'm not a huge fan of defining a macro to use in one spot.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 14 Jun 2021 16:29:06 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: BUG #17056: Segmentation fault on altering the type of the\n foreign table column with a default" } ]
[ { "msg_contents": "In the middle of GIN index testing, there are some selects that are on\na different table array_op_test that doesn't even have an index. They\nprobably were supposed to be selects to table array_index_op_test like\nthe other ones around the area.\n\nFix that. The expected output should stay the same because both tables\nuse the same array.data.\n---\n src/test/regress/expected/create_index.out | 12 ++++++------\n src/test/regress/sql/create_index.sql | 12 ++++++------\n 2 files changed, 12 insertions(+), 12 deletions(-)\n\ndiff --git a/src/test/regress/expected/create_index.out b/src/test/regress/expected/create_index.out\nindex 49f2a158c1..cfdf73179f 100644\n--- a/src/test/regress/expected/create_index.out\n+++ b/src/test/regress/expected/create_index.out\n@@ -904,23 +904,23 @@ SELECT * FROM array_index_op_test WHERE i <@ '{}' ORDER BY seqno;\n 101 | {} | {}\n (1 row)\n \n-SELECT * FROM array_op_test WHERE i = '{NULL}' ORDER BY seqno;\n+SELECT * FROM array_index_op_test WHERE i = '{NULL}' ORDER BY seqno;\n seqno | i | t \n -------+--------+--------\n 102 | {NULL} | {NULL}\n (1 row)\n \n-SELECT * FROM array_op_test WHERE i @> '{NULL}' ORDER BY seqno;\n+SELECT * FROM array_index_op_test WHERE i @> '{NULL}' ORDER BY seqno;\n seqno | i | t \n -------+---+---\n (0 rows)\n \n-SELECT * FROM array_op_test WHERE i && '{NULL}' ORDER BY seqno;\n+SELECT * FROM array_index_op_test WHERE i && '{NULL}' ORDER BY seqno;\n seqno | i | t \n -------+---+---\n (0 rows)\n \n-SELECT * FROM array_op_test WHERE i <@ '{NULL}' ORDER BY seqno;\n+SELECT * FROM array_index_op_test WHERE i <@ '{NULL}' ORDER BY seqno;\n seqno | i | t \n -------+----+----\n 101 | {} | {}\n@@ -1195,13 +1195,13 @@ SELECT * FROM array_index_op_test WHERE t = '{}' ORDER BY seqno;\n 101 | {} | {}\n (1 row)\n \n-SELECT * FROM array_op_test WHERE i = '{NULL}' ORDER BY seqno;\n+SELECT * FROM array_index_op_test WHERE i = '{NULL}' ORDER BY seqno;\n seqno | i | t \n -------+--------+--------\n 102 | 
{NULL} | {NULL}\n (1 row)\n \n-SELECT * FROM array_op_test WHERE i <@ '{NULL}' ORDER BY seqno;\n+SELECT * FROM array_index_op_test WHERE i <@ '{NULL}' ORDER BY seqno;\n seqno | i | t \n -------+----+----\n 101 | {} | {}\ndiff --git a/src/test/regress/sql/create_index.sql b/src/test/regress/sql/create_index.sql\nindex 8bc76f7c6f..9474dabf9e 100644\n--- a/src/test/regress/sql/create_index.sql\n+++ b/src/test/regress/sql/create_index.sql\n@@ -295,10 +295,10 @@ SELECT * FROM array_index_op_test WHERE i = '{}' ORDER BY seqno;\n SELECT * FROM array_index_op_test WHERE i @> '{}' ORDER BY seqno;\n SELECT * FROM array_index_op_test WHERE i && '{}' ORDER BY seqno;\n SELECT * FROM array_index_op_test WHERE i <@ '{}' ORDER BY seqno;\n-SELECT * FROM array_op_test WHERE i = '{NULL}' ORDER BY seqno;\n-SELECT * FROM array_op_test WHERE i @> '{NULL}' ORDER BY seqno;\n-SELECT * FROM array_op_test WHERE i && '{NULL}' ORDER BY seqno;\n-SELECT * FROM array_op_test WHERE i <@ '{NULL}' ORDER BY seqno;\n+SELECT * FROM array_index_op_test WHERE i = '{NULL}' ORDER BY seqno;\n+SELECT * FROM array_index_op_test WHERE i @> '{NULL}' ORDER BY seqno;\n+SELECT * FROM array_index_op_test WHERE i && '{NULL}' ORDER BY seqno;\n+SELECT * FROM array_index_op_test WHERE i <@ '{NULL}' ORDER BY seqno;\n \n CREATE INDEX textarrayidx ON array_index_op_test USING gin (t);\n \n@@ -331,8 +331,8 @@ SELECT * FROM array_index_op_test WHERE t && '{AAAAAAA80240}' ORDER BY seqno;\n SELECT * FROM array_index_op_test WHERE i @> '{32}' AND t && '{AAAAAAA80240}' ORDER BY seqno;\n SELECT * FROM array_index_op_test WHERE i && '{32}' AND t @> '{AAAAAAA80240}' ORDER BY seqno;\n SELECT * FROM array_index_op_test WHERE t = '{}' ORDER BY seqno;\n-SELECT * FROM array_op_test WHERE i = '{NULL}' ORDER BY seqno;\n-SELECT * FROM array_op_test WHERE i <@ '{NULL}' ORDER BY seqno;\n+SELECT * FROM array_index_op_test WHERE i = '{NULL}' ORDER BY seqno;\n+SELECT * FROM array_index_op_test WHERE i <@ '{NULL}' ORDER BY seqno;\n \n RESET 
enable_seqscan;\n RESET enable_indexscan;\n-- \n2.24.1\n\n\n\n", "msg_date": "Thu, 10 Jun 2021 16:29:15 -0700", "msg_from": "Jason Kim <git@jasonk.me>", "msg_from_op": true, "msg_subject": "[PATCH] Fix select from wrong table array_op_test" }, { "msg_contents": "Jason Kim <git@jasonk.me> writes:\n> In the middle of GIN index testing, there are some selects that are on\n> a different table array_op_test that doesn't even have an index. They\n> probably were supposed to be selects to table array_index_op_test like\n> the other ones around the area.\n\nI think it's probably intentional, else why have two tables at all?\nI suppose the point of these test cases is to confirm that you get the\nsame results with or without use of an index.\n\nCertainly, there's more than one way to do that. Perhaps we should\nhave only one table and perform the variant tests by manipulating\nenable_indexscan et al. But I think what you did here is defeating\nthe intent.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Jun 2021 11:31:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix select from wrong table array_op_test" }, { "msg_contents": "On 11/06/2021 18:31, Tom Lane wrote:\n> Jason Kim <git@jasonk.me> writes:\n>> In the middle of GIN index testing, there are some selects that are on\n>> a different table array_op_test that doesn't even have an index. They\n>> probably were supposed to be selects to table array_index_op_test like\n>> the other ones around the area.\n> \n> I think it's probably intentional, else why have two tables at all?\n> I suppose the point of these test cases is to confirm that you get the\n> same results with or without use of an index.\n\nWe already have these same queries in the 'arrays' test against the \n'array_op_test' table, though. 
It sure looks like a copy-paste error to \nme as well.\n\n- Heikki\n\n\n\n", "msg_date": "Fri, 11 Jun 2021 19:00:41 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix select from wrong table array_op_test" }, { "msg_contents": "On 2021-06-11T19:00:41+0300, Heikki Linnakangas wrote:\n> We already have these same queries in the 'arrays' test against the\n> 'array_op_test' table, though. It sure looks like a copy-paste error to me\n> as well.\n\nThat's reason enough, but another reason is that I don't think GIN_CAT_NULL_KEY\nis covered without this change.\n\n\n", "msg_date": "Mon, 14 Jun 2021 15:33:06 -0700", "msg_from": "Jason Kim <git@jasonk.me>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix select from wrong table array_op_test" } ]
[ { "msg_contents": "The docs currently say (introduced in commit 91fa853):\n\n\"In the event of a backend-detected error during copy-both mode, the\nbackend will issue an ErrorResponse message, discard frontend messages\nuntil a Sync message is received, and then issue ReadyForQuery and\nreturn to normal processing.\"\n\nBut that doesn't seem to be correct: Sync is only used for the extended\nquery protocol, and CopyBoth can only be initiated with the simple\nquery protocol. So the actual behavior seems to be more like a \"COPY\nFROM STDIN\" initiated with the simple query protocol:\n\n\"In the event of a backend-detected error during copy-in mode\n(including receipt of a CopyFail message), the backend will issue an\nErrorResponse message. ... If the COPY command was issued in a simple\nQuery message, the rest of that message is discarded and ReadyForQuery\nis issued ... any subsequent CopyData, CopyDone, or CopyFail messages\nissued by the frontend will simply be dropped.\"\n\nIf the client does send a Sync, it results in an extra ReadyForQuery\nmessage.\n\nDiagnosed and reported by Petros Angelatos (petrosagg on Github).\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 10 Jun 2021 18:26:56 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Replication protocol doc fix" }, { "msg_contents": "On Thu, Jun 10, 2021 at 9:26 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> The docs currently say (introduced in commit 91fa853):\n>\n> \"In the event of a backend-detected error during copy-both mode, the\n> backend will issue an ErrorResponse message, discard frontend messages\n> until a Sync message is received, and then issue ReadyForQuery and\n> return to normal processing.\"\n>\n> But that doesn't seem to be correct: Sync is only used for the extended\n> query protocol, and CopyBoth can only be initiated with the simple\n> query protocol.\n\nMy impression was that CopyBoth can be initiated either way, but if\nyou use the 
extended query protocol, then the result is a hopeless\nmess, because the protocol is badly designed:\n\nhttps://www.postgresql.org/message-id/CA+Tgmoa4eA+cPXaiGQmEBp9XisVd3ZE9dbvnbZEvx9UcMiw2tg@mail.gmail.com\n\nBut I think you're correct in saying that the discard-until-Sync\nbehavior only happens if the extended query protocol is used, so I\nagree that the current text is wrong.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Jun 2021 16:57:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication protocol doc fix" }, { "msg_contents": "On Fri, 2021-06-11 at 16:57 -0400, Robert Haas wrote:\n> My impression was that CopyBoth can be initiated either way, \n\nThe docs say: \"In either physical replication or logical replication\nwalsender mode, only the simple query protocol can be used.\" Is there\nsome way to initiate CopyBoth outside of walsender?\n\n> but if\n> you use the extended query protocol, then the result is a hopeless\n> mess, because the protocol is badly designed:\n> \n> \nhttps://www.postgresql.org/message-id/CA+Tgmoa4eA+cPXaiGQmEBp9XisVd3ZE9dbvnbZEvx9UcMiw2tg@mail.gmail.com\n\nIt seems like you're saying that CopyIn and CopyBoth are both equally\nbroken in extended query mode. 
Is that right?\n\n> But I think you're correct in saying that the discard-until-Sync\n> behavior only happens if the extended query protocol is used, so I\n> agree that the current text is wrong.\n\nShould we just document how CopyBoth works with the simple query\nprotocol, or should we make it match the CopyIn docs?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 11 Jun 2021 15:12:29 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Replication protocol doc fix" }, { "msg_contents": "On Fri, Jun 11, 2021 at 6:12 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Fri, 2021-06-11 at 16:57 -0400, Robert Haas wrote:\n> > My impression was that CopyBoth can be initiated either way,\n>\n> The docs say: \"In either physical replication or logical replication\n> walsender mode, only the simple query protocol can be used.\" Is there\n> some way to initiate CopyBoth outside of walsender?\n\nCurrently, no, or at least not to my knowledge. I just meant that\nthere seems to be nothing in the protocol specification which prevents\nCopyBothResponse from being sent in response to a query sent using the\nextended protocol.\n\n> > but if\n> > you use the extended query protocol, then the result is a hopeless\n> > mess, because the protocol is badly designed:\n> >\n> https://www.postgresql.org/message-id/CA+Tgmoa4eA+cPXaiGQmEBp9XisVd3ZE9dbvnbZEvx9UcMiw2tg@mail.gmail.com\n>\n> It seems like you're saying that CopyIn and CopyBoth are both equally\n> broken in extended query mode. Is that right?\n\nYeah.\n\n> > But I think you're correct in saying that the discard-until-Sync\n> > behavior only happens if the extended query protocol is used, so I\n> > agree that the current text is wrong.\n>\n> Should we just document how CopyBoth works with the simple query\n> protocol, or should we make it match the CopyIn docs?\n\nI think it would make sense to make it match the CopyIn docs. 
Possibly\nthe CopyOut docs should be made more similar as well.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Jun 2021 10:51:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication protocol doc fix" }, { "msg_contents": "On Mon, 2021-06-14 at 10:51 -0400, Robert Haas wrote:\n> but if\n> > > you use the extended query protocol, then the result is a\n> > > hopeless\n> > > mess, because the protocol is badly designed:\n> > > \n\nAfter looking in more detail, I think I understand a bit better.\nClients don't differentiate between:\n\n* A normal command, where you know that you've sent everything that you\nwill send. In this case, the client needs to send the Sync message in\norder to get the ReadyForQuery message.\n\n* A command that initiates CopyIn/CopyBoth, where you are going to send\nmore data after the command. In this case, sending the Sync eagerly is\nwrong, and you can't pipeline more queries in the middle of\nCopyIn/CopyBoth mode. Instead, the client should send Sync after\nreceiving an ErrorResponse, or after sending a CopyDone/CopyFail\n(right?).\n\nOne thing I don't fully understand is what would happen if the client\nissued the Sync as the *first* message in an extended-protocol series.\n\n> > > But I think you're correct in saying that the discard-until-Sync\n> > > behavior only happens if the extended query protocol is used, so\n> > > I\n> > > agree that the current text is wrong.\n> > \n> > Should we just document how CopyBoth works with the simple query\n> > protocol, or should we make it match the CopyIn docs?\n> \n> I think it would make sense to make it match the CopyIn docs.\n> Possibly\n> the CopyOut docs should be made more similar as well.\n\nI attached a doc patch that hopefully clarifies this point as well as\nthe weirdness around CopyIn/CopyBoth and the extended protocol. 
I\nreorganized the sections, as well.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 16 Jun 2021 14:15:35 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Replication protocol doc fix" }, { "msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> One thing I don't fully understand is what would happen if the client\n> issued the Sync as the *first* message in an extended-protocol series.\n\nThat'd cause the backend to send ReadyForQuery, which'd likely\nconfuse the client.\n\n> But I think you're correct in saying that the discard-until-Sync\n> behavior only happens if the extended query protocol is used,\n\nCertainly, because otherwise there is no Sync.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Jun 2021 17:25:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replication protocol doc fix" }, { "msg_contents": "On Wed, Jun 16, 2021 at 5:15 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> * A normal command, where you know that you've sent everything that you\n> will send. In this case, the client needs to send the Sync message in\n> order to get the ReadyForQuery message.\n>\n> * A command that initiates CopyIn/CopyBoth, where you are going to send\n> more data after the command. In this case, sending the Sync eagerly is\n> wrong, and you can't pipeline more queries in the middle of\n> CopyIn/CopyBoth mode. Instead, the client should send Sync after\n> receiving an ErrorResponse, or after sending a CopyDone/CopyFail\n> (right?).\n\nWell, that's one view of it. I would argue that the protocol ought not\nto be designed in such a way that the client has to guess what\nresponse the server might send back. How is it supposed to know? If\nthe user says, hey, go run this via the extended query protocol, we\ndon't want libpq to have to try to parse the query text and figure out\nwhether it looks COPY-ish. 
That's expensive, hacky, and might create\ncross-version compatibility hazards if, say, a new replication command\nthat uses the copy protocol is added. Nor do we want the user to have\nto specify what it thinks the server is going to do. Right now, we\nhave this odd situation where the client indeed does not try to guess\nwhat the server will do and always send Sync, but the server acts as\nif the client is doing what you propose here - only sending the\nCopyDone/CopyFail at the end of everything associated with the\ncommand.\n\n> One thing I don't fully understand is what would happen if the client\n> issued the Sync as the *first* message in an extended-protocol series.\n\nI don't think that will break anything, because I think you can send a\nSync message to try to reestablish protocol synchronization whenever\nyou want. But I don't think it will accomplish anything either,\nbecause presumably you've already got protocol synchronization at the\nbeginning of the sequence. The tricky part is getting resynchronized\nafter you've done some stuff.\n\n> I attached a doc patch that hopefully clarifies this point as well as\n> the weirdness around CopyIn/CopyBoth and the extended protocol. I\n> reorganized the sections, as well.\n\nOn a casual read-through this seems pretty reasonable, but it\nessentially documents that libpq is doing the wrong thing by sending\nSync unconditionally. As I say above, I disagree with that from a\nphilosophical perspective. 
Then again, unless we're willing to\nredefine the wire protocol, I don't have an alternative to offer.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Jun 2021 12:42:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication protocol doc fix" }, { "msg_contents": "On Thu, 2021-06-17 at 12:42 -0400, Robert Haas wrote:\n> On a casual read-through this seems pretty reasonable, but it\n> essentially documents that libpq is doing the wrong thing by sending\n> Sync unconditionally. As I say above, I disagree with that from a\n> philosophical perspective. Then again, unless we're willing to\n> redefine the wire protocol, I don't have an alternative to offer.\n\nWhat if we simply mandate that a Sync must be sent before the server\nwill respond with CopyInResponse/CopyBothResponse, and the client must\nsend another Sync after CopyDone/CopyFail (or after receiving an\nErrorResponse, if the client isn't going to send a CopyDone/CopyFail)?\n\nThis will follow what libpq is already doing today, as far as I can\ntell, and it will leave the server in a definite state.\n\nIn theory, it could break a client that issues Parse+Bind+Execute for a\nCopyIn/CopyBoth command without a Sync, but I'm not sure there are any\nclients that do that, and it's arguable whether the documentation\npermitted that or not anyway.\n\nI hacked together a quick patch; attached.\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 17 Jun 2021 16:37:51 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Replication protocol doc fix" }, { "msg_contents": "On Thu, Jun 17, 2021 at 7:37 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> What if we simply mandate that a Sync must be sent before the server\n> will respond with CopyInResponse/CopyBothResponse, and the client must\n> send another Sync after CopyDone/CopyFail (or after receiving an\n> ErrorResponse, if the client isn't going to 
send a CopyDone/CopyFail)?\n\nI am not sure whether this works or not. Holding off cancel interrupts\nacross possible network I/O seems like a non-starter. We have to be\nable to kill off connections that have wedged. Also, if we have to\npostpone sending ErrorResponse until we see the Sync, that's also bad:\nI think we need to be able to error out whenever. But, hmm, maybe it's\nOK to send ErrorResponse either before or after sending\nCopy{In,Both}Response. Then the client knows that if ErrorResponse\nshows up before Copy{In,Both}Response, the server sent it before\nconsuming the Sync and will stop skipping messages when it sees the\nSync; whereas if the ErrorResponse shows up after the\nCopy{In,Both}Response then the client knows the Sync was eaten and it\nhas to send another one.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Jun 2021 12:25:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication protocol doc fix" }, { "msg_contents": "On Wed, 2021-06-30 at 12:25 -0400, Robert Haas wrote:\n> I am not sure whether this works or not. Holding off cancel\n> interrupts\n> across possible network I/O seems like a non-starter. We have to be\n> able to kill off connections that have wedged.\n\nI was following a pattern that I saw in CopyGetData() and\nSocketBackend(). If I understand correctly, the idea is to avoid a\ncancel leaving part of a message unread, which would desync the\nprotocol.\n\n> Also, if we have to\n> postpone sending ErrorResponse until we see the Sync, that's also\n> bad:\n> I think we need to be able to error out whenever.\n\nI think we could still send an ErrorResponse whenever we want, and then\njust ignore messages until we get a Sync (just like for an ordinary\nextended protocol sequence).\n\n> But, hmm, maybe it's\n> OK to send ErrorResponse either before or after sending\n> Copy{In,Both}Response. 
Then the client knows that if ErrorResponse\n> shows up before Copy{In,Both}Response, the server sent it before\n> consuming the Sync and will stop skipping messages when it sees the\n> Sync; whereas if the ErrorResponse shows up after the\n> Copy{In,Both}Response then the client knows the Sync was eaten and it\n> has to send another one.\n\nThat's what I had in mind.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 01 Jul 2021 22:55:40 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Replication protocol doc fix" }, { "msg_contents": "On Fri, Jul 2, 2021 at 1:55 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Wed, 2021-06-30 at 12:25 -0400, Robert Haas wrote:\n> > I am not sure whether this works or not. Holding off cancel\n> > interrupts\n> > across possible network I/O seems like a non-starter. We have to be\n> > able to kill off connections that have wedged.\n>\n> I was following a pattern that I saw in CopyGetData() and\n> SocketBackend(). If I understand correctly, the idea is to avoid a\n> cancel leaving part of a message unread, which would desync the\n> protocol.\n\nRight, that seems like a good goal. Thinking about this a little more,\nit's only holding off *cancel* interrupts, not *all* interrupts, so\npresumably you can still terminate the backend in this state. That's\nnot so bad, and it's not clear how we could do any better. So I\nwithdraw my previous complaint about this point.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 2 Jul 2021 08:44:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication protocol doc fix" }, { "msg_contents": "On Fri, 2021-07-02 at 08:44 -0400, Robert Haas wrote:\n> Right, that seems like a good goal. Thinking about this a little\n> more,\n> it's only holding off *cancel* interrupts, not *all* interrupts, so\n> presumably you can still terminate the backend in this state. 
That's\n> not so bad, and it's not clear how we could do any better. So I\n> withdraw my previous complaint about this point.\n\nFurther thoughts on this? I don't feel comfortable making this change\nwithout a stronger endorsement.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 30 Jul 2021 14:55:29 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Replication protocol doc fix" }, { "msg_contents": "Hi,\n\nOn 2021-06-17 16:37:51 -0700, Jeff Davis wrote:\n> In theory, it could break a client that issues Parse+Bind+Execute for a\n> CopyIn/CopyBoth command without a Sync, but I'm not sure there are any\n> clients that do that, and it's arguable whether the documentation\n> permitted that or not anyway.\n\nI'm worried about that breaking things and us only noticing down the\nroad. This doesn't fix a problem that we are actively hitting, and as\nyou say it's arguably compliant to do it differently. Potential protocol\nincompatibilities are a dangerous area. I think before doing something\nlike this we ought to at least verify that the most popular native\ndrivers won't have a problem with the change. 
Maybe pgjdbc, npgsql, the\npopular go ones and rust-postgres?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 30 Jul 2021 17:09:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replication protocol doc fix" }, { "msg_contents": "The commitfest CI times out on all platforms and never finishes when running\nmake check with this patch, so unless the patch is dropped due to concerns\nalready raised then that seems like a good thing to fix.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 3 Nov 2021 12:14:56 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Replication protocol doc fix" }, { "msg_contents": "> On 3 Nov 2021, at 12:14, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n> The commitfest CI times out on all platforms and never finishes when running\n> make check with this patch, so unless the patch is dropped due to concerns\n> already raised then that seems like a good thing to fix.\n\nAs the thread has stalled, I'm marking this Returned with Feedback. Please\nfeel free to resubmit when/if a new patch is available.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 1 Dec 2021 11:59:00 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Replication protocol doc fix" } ]
[ { "msg_contents": "Hi\n\n\nI by chance found a timing issue that exists in PG9.6\nduring my process to make a patch-set that should be back-patched to 9.6.\nI don't judge this is unique in 9.6 or not from HEAD yet.\n(Then, I don't know like this is solved by HEAD already but not back-patched but can it be ?)\nPlease tell me about this if you know something.\n\nThe logs I got :\n\n* ./contrib/test_decoding/regression_output/regression.out\n\ntest ddl ... ok\ntest xact ... ok\ntest rewrite ... ok\ntest toast ... FAILED (test process exited with exit code 2)\ntest permissions ... FAILED (test process exited with exit code 2)\ntest decoding_in_xact ... FAILED (test process exited with exit code 2)\ntest decoding_into_rel ... FAILED (test process exited with exit code 2)\ntest binary ... FAILED (test process exited with exit code 2)\ntest prepared ... FAILED (test process exited with exit code 2)\ntest replorigin ... FAILED (test process exited with exit code 2)\ntest time ... FAILED (test process exited with exit code 2)\ntest messages ... FAILED (test process exited with exit code 2)\ntest spill ...\n\n* ./contrib/test_decoding/regression_output/regression.diffs\n\n\n*** /(where/I/put/PG)/contrib/test_decoding/expected/toast.out 2021-06-11 00:19:17.917565307 +0000\n--- /(where/I/pug/PG)/contrib/test_decoding/./regression_output/results/toast.out 2021-06-11 00:37:45.642565307 +0000\n***************\n*** 348,364 ****\n DROP TABLE toasted_several;\n SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\\1..\\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')\n WHERE data NOT LIKE '%INSERT: %';\n! regexp_replace\n! ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n! BEGIN\n! 
table public.toasted_several: UPDATE: old-key: id[integer]:1 toasted_key[text]:'98765432109876543210..7654321098765432109876543210987654321098765432109876543210' toasted_col2[text]:unchanged-toast-datum\n! table public.toasted_several: DELETE: id[integer]:1 toasted_key[text]:'98765432109876543210987654321..876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210'\n! COMMIT\n! (4 rows)\n!\n! SELECT pg_drop_replication_slot('regression_slot');\n! pg_drop_replication_slot\n! --------------------------\n!\n! (1 row)\n!\n--- 348,355 ----\n DROP TABLE toasted_several;\n SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\\1..\\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')\n WHERE data NOT LIKE '%INSERT: %';\n! FATAL: could not open relation mapping file \"global/pg_filenode.map\": No such file or directory\n! server closed the connection unexpectedly\n! This probably means the server terminated abnormally\n! before or while processing the request.\n! connection to server was lost\n....\n\n\nI just checkouted the stable 9.6 branch\nand configured with --enable-cassert --enable-debug --enable-tap-tests CFLAGS=-O0 --prefix=/where/I/put/binary.\nThen, I ran build and make check-world in parallel. There is no core file.\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n", "msg_date": "Fri, 11 Jun 2021 02:00:27 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": true, "msg_subject": "timing sensitive failure of test_decoding RT that exists in PG9.6" } ]
[ { "msg_contents": "StartLogicalReplication() calls CreateDecodingContext(), which says:\n\n else if (start_lsn < slot->data.confirmed_flush)\n {\n /*\n * It might seem like we should error out in this case, but it's\n * pretty common for a client to acknowledge a LSN it doesn't\nhave to\n * do anything for, and thus didn't store persistently, because the\n * xlog records didn't result in anything relevant for logical\n * decoding. Clients have to be able to do that to support\nsynchronous\n * replication.\n */\n ...\n start_lsn = slot->data.confirmed_flush;\n }\n\nBut what about LSNs that are way in the past? Physical replication will\nthrow an error in that case (e.g. \"requested WAL segment %s has already\nbeen removed\"), but StartLogicalReplication() ends up just starting\nfrom confirmed_flush, which doesn't seem right.\n\nI'm not sure I understand the comment overall. Why would the client\nrequest something that it has already acknowledged, and why would the\nserver override that and just advance to the confirmed_lsn?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 10 Jun 2021 19:08:10 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Question about StartLogicalReplication() error path" }, { "msg_contents": "On Fri, Jun 11, 2021 at 7:38 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> StartLogicalReplication() calls CreateDecodingContext(), which says:\n>\n> else if (start_lsn < slot->data.confirmed_flush)\n> {\n> /*\n> * It might seem like we should error out in this case, but it's\n> * pretty common for a client to acknowledge a LSN it doesn't\n> have to\n> * do anything for, and thus didn't store persistently, because the\n> * xlog records didn't result in anything relevant for logical\n> * decoding. Clients have to be able to do that to support\n> synchronous\n> * replication.\n> */\n> ...\n> start_lsn = slot->data.confirmed_flush;\n> }\n>\n..\n..\n>\n> I'm not sure I understand the comment overall. 
Why would the client\n> request something that it has already acknowledged,\n>\n\nBecause sometimes clients don't have to do anything for xlog records.\nOne example is WAL for DDL where logical decoding didn't produce\nanything for the client but later with keepalive we send the LSN of\nWAL where DDL has finished and the client just responds with the\nposition sent by the server as it doesn't have any other pending\ntransactions.\n\n> and why would the\n> server override that and just advance to the confirmed_lsn?\n>\n\nI think because there is no need to process the WAL that has been\nconfirmed by the client. Do you see any problems with this scheme?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 11 Jun 2021 10:13:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question about StartLogicalReplication() error path" }, { "msg_contents": "On Fri, 2021-06-11 at 10:13 +0530, Amit Kapila wrote:\n> Because sometimes clients don't have to do anything for xlog records.\n> One example is WAL for DDL where logical decoding didn't produce\n> anything for the client but later with keepalive we send the LSN of\n> WAL where DDL has finished and the client just responds with the\n> position sent by the server as it doesn't have any other pending\n> transactions.\n\nIf I understand correctly, in this situation it avoids the cost of a\nwrite on the client just to update its stored LSN progress value when\nthere's no real data to be written. In that case the client would need\nto rely on the server's confirmed_flush_lsn instead of its own stored\nLSN progress value.\n\nThat's a reasonable thing for the *client* to do explicitly, e.g. by\njust reading the slot's confirmed_flush_lsn and comparing to its own\nstored lsn. 
But I don't think it's reasonable for the server to just\nskip over data requested by the client because it thinks it knows best.\n\n> I think because there is no need to process the WAL that has been\n> confirmed by the client. Do you see any problems with this scheme?\n\nSeveral:\n\n* Replication setups are complex, and it can be easy to misconfigure\nsomething or have a bug in some control code. An error is valuable to\ndetect the problem closer to the source.\n\n* There are plausible configurations where things could go badly wrong.\nFor instance, if you are storing the decoded data in another postgres\nserver with syncrhonous_commit=off, and acknowledging LSNs before they\nare durable. A crash of the destination system would be consistent, but\nit would be missing some data earlier than the confirmed_flush_lsn. The\nclient would then request the data starting at its stored lsn progress\nvalue, but the server would skip ahead to the confirmed_flush_lsn;\nsilently missing data.\n\n* It's contradicted by the docs: \"Instructs server to start streaming\nWAL for logical replication, starting at WAL location XXX/XXX.\"\n\n* The comment acknowledges that a user might expect an error in that\ncase; but doesn't really address why the user would expect an error,\nand why it's OK to violate that expectation.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 10 Jun 2021 23:22:53 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Question about StartLogicalReplication() error path" }, { "msg_contents": "On Fri, Jun 11, 2021 at 2:23 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> * The comment acknowledges that a user might expect an error in that\n> case; but doesn't really address why the user would expect an error,\n> and why it's OK to violate that expectation.\n\nThis code was written by Andres, so he'd be the best person to comment\non it, but it seems to me that the comment does explain this, and that\nit's basically the 
same explanation as what Amit said. If the client\ndoesn't have to do anything for a certain range of WAL and just\nacknowledges it, it would under your proposal have to also durably\nrecord that it had chosen to do nothing, which might cause extra\nfsyncs, potentially lots of extra fsyncs if this happens frequently\ne.g. because most tables are filtered out and the replicated ones are\nonly modified occasionally. I'm not sure that it would be a good\ntrade-off to have a tighter sanity check at the expense of adding that\noverhead.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Jun 2021 13:15:11 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question about StartLogicalReplication() error path" }, { "msg_contents": "Hi,\n\nOn 2021-06-11 13:15:11 -0400, Robert Haas wrote:\n> On Fri, Jun 11, 2021 at 2:23 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> > * The comment acknowledges that a user might expect an error in that\n> > case; but doesn't really address why the user would expect an error,\n> > and why it's OK to violate that expectation.\n> \n> This code was written by Andres, so he'd be the best person to comment\n> on it, but it seems to me that the comment does explain this, and that\n> it's basically the same explanation as what Amit said. If the client\n> doesn't have to do anything for a certain range of WAL and just\n> acknowledges it, it would under your proposal have to also durably\n> record that it had chosen to do nothing, which might cause extra\n> fsyncs, potentially lots of extra fsyncs if this happens frequently\n> e.g. 
because most tables are filtered out and the replicated ones are\n> only modified occasionally.\n\nYes, that's the motivation.\n\n\n> I'm not sure that it would be a good trade-off to have a tighter\n> sanity check at the expense of adding that overhead.\n\nEspecially because it very well might break existing working setups...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 11 Jun 2021 10:40:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Question about StartLogicalReplication() error path" }, { "msg_contents": "On Fri, 2021-06-11 at 13:15 -0400, Robert Haas wrote:\n> on it, but it seems to me that the comment does explain this, and\n> that\n> it's basically the same explanation as what Amit said.\n\nIt only addresses the \"pro\" side of the behavior, not the \"con\". It's a\nbit like saying \"Given that we are in the U.S., it might seem like we\nshould be driving on the right side of the road, but that side has\ntraffic and we are in a hurry.\"\n\nWhy might it seem that we should error out? If we don't error out, what\nbad things might happen? How do these \"con\"s weigh against the \"pro\"s?\n\n> I'm not sure that it would be a good\n> trade-off to have a tighter sanity check at the expense of adding\n> that\n> overhead.\n\nIt doesn't add any overhead.\n\nAll the client would have to do is \"SELECT confirmed_flush_lsn FROM\npg_replication_slots WHERE slot_name='myslot'\", and compare it to the\nstored value. If the stored value is earlier than the\nconfirmed_flush_lsn, the *client* can decide to start replication at\nthe confirmed_flush_lsn. 
That makes sense because the client knows more\nabout its behavior and configuration, and whether that's a safe choice\nor not.\n\nThe only difference is whether the server is safe-by-default with\nintuitive semantics that match the documentation, or unsafe-by-default\nwith unexpected semantics that don't match the documentation.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 11 Jun 2021 11:49:19 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Question about StartLogicalReplication() error path" }, { "msg_contents": "On Fri, 2021-06-11 at 10:40 -0700, Andres Freund wrote:\n> Especially because it very well might break existing working\n> setups...\n\nPlease address my concerns[1].\n\nRegards,\n\tJeff Davis\n\n[1] \nhttps://www.postgresql.org/message-id/e22a4606333ce1032e29fe2fb1aa9036e6f0ca98.camel%40j-davis.com\n\n\n\n", "msg_date": "Fri, 11 Jun 2021 11:56:06 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Question about StartLogicalReplication() error path" }, { "msg_contents": "On 2021-06-11 11:49:19 -0700, Jeff Davis wrote:\n> All the client would have to do is \"SELECT confirmed_flush_lsn FROM\n> pg_replication_slots WHERE slot_name='myslot'\", and compare it to the\n> stored value.\n\nThat doesn't work as easily in older versions because there was no SQL\nsupport in replication connections until PG 10...\n\n\n", "msg_date": "Fri, 11 Jun 2021 11:56:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Question about StartLogicalReplication() error path" }, { "msg_contents": "On Fri, 2021-06-11 at 11:56 -0700, Andres Freund wrote:\n> That doesn't work as easily in older versions because there was no\n> SQL\n> support in replication connections until PG 10...\n\n9.6 will be EOL this year. 
I don't really see why such old versions are\nrelevant to this discussion.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 11 Jun 2021 12:07:24 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Question about StartLogicalReplication() error path" }, { "msg_contents": "On Fri, Jun 11, 2021 at 2:49 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> It doesn't add any overhead.\n>\n> All the client would have to do is \"SELECT confirmed_flush_lsn FROM\n> pg_replication_slots WHERE slot_name='myslot'\", and compare it to the\n> stored value. If the stored value is earlier than the\n> confirmed_flush_lsn, the *client* can decide to start replication at\n> the confirmed_flush_lsn. That makes sense because the client knows more\n> about its behavior and configuration, and whether that's a safe choice\n> or not.\n\nTrue, but it doesn't seem very nice to force the client to depend on SQL\nwhen that wouldn't otherwise be needed. The SQL is a lot more likely\nto fail than a replication command, for example due to some\npermissions issue. So I think if we want to make this an optional\nbehavior, it would be better to add a flag to the START_REPLICATION\ncommand to say whether it's OK for the server to fast-forward like this.\n\nYou seem to see this as some kind of major problem and I guess I don't\nagree. I think it's pretty clear what the motivation was for the\ncurrent behavior, because I believe it's well-explained by the comment\nand the three people who have tried to answer your question. I also\nthink it's pretty clear why somebody might find it surprising: someone\nmight think that fast-forwarding is harmful and risky rather than a\nuseful convenience. As evidence for the fact that someone might think\nthat, I offer the fact that you seem to think exactly that thing. I\nalso think that there's pretty good evidence that the behavior as it\nexists is not really so bad. 
As far as I know, and I certainly might\nhave missed something, you're the first one to complain about behavior\nthat we've had for quite a long time now, and you seem to be saying\nthat it might cause problems for somebody, not that you know it\nactually did. So, I don't know, I'm not opposed to talking about\npotential improvements here, but to the extent that you're suggesting\nthis is unreasonable behavior, I think that's too harsh.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Jun 2021 16:05:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question about StartLogicalReplication() error path" }, { "msg_contents": "Hi,\n\nOn 2021-06-11 16:05:10 -0400, Robert Haas wrote:\n> You seem to see this as some kind of major problem and I guess I don't\n> agree. I think it's pretty clear what the motivation was for the\n> current behavior, because I believe it's well-explained by the comment\n> and the three people who have tried to answer your question. I also\n> think it's pretty clear why somebody might find it surprising: someone\n> might think that fast-forwarding is harmful and risky rather than a\n> useful convenience. As evidence for the fact that someone might think\n> that, I offer the fact that you seem to think exactly that thing. I\n> also think that there's pretty good evidence that the behavior as it\n> exists is not really so bad. As far as I know, and I certainly might\n> have missed something, you're the first one to complain about behavior\n> that we've had for quite a long time now, and you seem to be saying\n> that it might cause problems for somebody, not that you know it\n> actually did. So, I don't know, I'm not opposed to talking about\n> potential improvements here, but to the extent that you're suggesting\n> this is unreasonable behavior, I think that's too harsh.\n\nYea. 
I think it'd be a different matter if streaming logical decoding\nhad been added this cycle and we'd started out with supporting queries\nover replication connection - but it's been long enough that it's likely\nthat people rely on the current behaviour, and I don't see the gain in\nreliability outweigh the compat issues.\n\nYour argument that one can just check kinda goes both ways - you can do\nthat with the current behaviour too...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 11 Jun 2021 13:42:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Question about StartLogicalReplication() error path" }, { "msg_contents": "On 2021-06-11 12:07:24 -0700, Jeff Davis wrote:\n> On Fri, 2021-06-11 at 11:56 -0700, Andres Freund wrote:\n> > That doesn't work as easily in older versions because there was no\n> > SQL\n> > support in replication connections until PG 10...\n> \n> 9.6 will be EOL this year. I don't really see why such old versions are\n> relevant to this discussion.\n\nIt's relevant to understand how we ended up here.\n\n\n", "msg_date": "Fri, 11 Jun 2021 13:43:51 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Question about StartLogicalReplication() error path" }, { "msg_contents": "On Fri, Jun 11, 2021 at 11:52 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Fri, 2021-06-11 at 10:13 +0530, Amit Kapila wrote:\n> > I think because there is no need to process the WAL that has been\n> > confirmed by the client. Do you see any problems with this scheme?\n>\n> Several:\n>\n> * Replication setups are complex, and it can be easy to misconfigure\n> something or have a bug in some control code. 
An error is valuable to\n> detect the problem closer to the source.\n>\n> * There are plausible configurations where things could go badly wrong.\n> For instance, if you are storing the decoded data in another postgres\n> server with syncrhonous_commit=off, and acknowledging LSNs before they\n> are durable. A crash of the destination system would be consistent, but\n> it would be missing some data earlier than the confirmed_flush_lsn. The\n> client would then request the data starting at its stored lsn progress\n> value, but the server would skip ahead to the confirmed_flush_lsn;\n> silently missing data.\n>\n\nAFAIU, currently, in such a case, the subscriber (client) won't\nadvance the flush location (confirmed_flush_lsn). So, we won't lose\nany data.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 12 Jun 2021 16:17:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question about StartLogicalReplication() error path" }, { "msg_contents": "On Sat, 2021-06-12 at 16:17 +0530, Amit Kapila wrote:\n> AFAIU, currently, in such a case, the subscriber (client) won't\n> advance the flush location (confirmed_flush_lsn). So, we won't lose\n> any data.\n\nI think you are talking about the official Logical Replication\nspecifically, rather than an arbitrary client that's using the logical\nreplication protocol based on the protocol docs.\n\n\nIt seems that there's not much agreement in a behavior change here. I\nsuggest one or more of the following:\n\n 1. change the logical rep protocol docs to match the current behavior\n a. also briefly explain in the docs why it's different from\nphysical replication (which does always start at the provided LSN as\nfar as I can tell)\n\n 2. 
Change the comment to add something like \"Starting at a different\nLSN than requested might not catch certain kinds of client errors.\nClients should be careful to check confirmed_flush_lsn if starting at\nthe requested LSN is required.\"\n\n 3. upgrade DEBUG1 message to a WARNING\n\nCan I get agreement on any of the above suggestions?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 14 Jun 2021 09:50:32 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Question about StartLogicalReplication() error path" }, { "msg_contents": "On Mon, Jun 14, 2021 at 12:50 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> It seems that there's not much agreement in a behavior change here. I\n> suggest one or more of the following:\n>\n> 1. change the logical rep protocol docs to match the current behavior\n> a. also briefly explain in the docs why it's different from\n> physical replication (which does always start at the provided LSN as\n> far as I can tell)\n>\n> 2. Change the comment to add something like \"Starting at a different\n> LSN than requested might not catch certain kinds of client errors.\n> Clients should be careful to check confirmed_flush_lsn if starting at\n> the requested LSN is required.\"\n>\n> 3. upgrade DEBUG1 message to a WARNING\n>\n> Can I get agreement on any of the above suggestions?\n\nI'm happy to hear other opinions, but I think I would be inclined to\nvote in favor of #1 and/or #2 but against #3.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Jun 2021 13:13:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question about StartLogicalReplication() error path" }, { "msg_contents": "On Mon, 2021-06-14 at 13:13 -0400, Robert Haas wrote:\n> I'm happy to hear other opinions, but I think I would be inclined to\n> vote in favor of #1 and/or #2 but against #3.\n\nWhat about upgrading it to, say, LOG? 
It seems like it would happen\npretty infrequently, and in the event something strange happens, might\nrule out some possibilities.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 14 Jun 2021 14:51:35 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Question about StartLogicalReplication() error path" }, { "msg_contents": "At Mon, 14 Jun 2021 14:51:35 -0700, Jeff Davis <pgsql@j-davis.com> wrote in \n> On Mon, 2021-06-14 at 13:13 -0400, Robert Haas wrote:\n> > I'm happy to hear other opinions, but I think I would be inclined to\n> > vote in favor of #1 and/or #2 but against #3.\n> \n> What about upgrading it to, say, LOG? It seems like it would happen\n> pretty infrequently, and in the event something strange happens, might\n> rule out some possibilities.\n\nI don't think the message is needed, but I don't oppose it as far as\nthe level is LOG and the messages were changed as something like this:\n\n\n-\t\telog(DEBUG1, \"cannot stream from %X/%X, minimum is %X/%X, forwarding\",\n+\t\telog(LOG, \"%X/%X has been already streamed, forwarding to %X/%X\",\n\nFWIW, I most prefer #1. I see #2 as optional, and see #3 as the above.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 15 Jun 2021 15:19:54 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question about StartLogicalReplication() error path" }, { "msg_contents": "On Tue, Jun 15, 2021 at 3:21 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Mon, 2021-06-14 at 13:13 -0400, Robert Haas wrote:\n> > I'm happy to hear other opinions, but I think I would be inclined to\n> > vote in favor of #1 and/or #2 but against #3.\n>\n> What about upgrading it to, say, LOG? 
It seems like it would happen\n> pretty infrequently, and in the event something strange happens, might\n> rule out some possibilities.\n>\n\nI don't see any problem with changing it to LOG if that helps\nespecially because it won't happen too often.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 16 Jun 2021 11:31:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question about StartLogicalReplication() error path" }, { "msg_contents": "On Tue, 2021-06-15 at 15:19 +0900, Kyotaro Horiguchi wrote:\n> I don't think the message is neded, but I don't oppose it as far as\n> the level is LOG and the messages were changed as something like\n> this:\n> \n> \n> - elog(DEBUG1, \"cannot stream from %X/%X, minimum is\n> %X/%X, forwarding\",\n> + elog(LOG, \"%X/%X has been already streamed,\n> forwarding to %X/%X\",\n> \n> FWIW, I most prefer #1. I see #2 as optional. and see #3 as the\n> above.\n\nAttached.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 16 Jun 2021 15:55:46 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Question about StartLogicalReplication() error path" }, { "msg_contents": "On Thu, Jun 17, 2021 at 4:25 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Tue, 2021-06-15 at 15:19 +0900, Kyotaro Horiguchi wrote:\n> > I don't think the message is neded, but I don't oppose it as far as\n> > the level is LOG and the messages were changed as something like\n> > this:\n> >\n> >\n> > - elog(DEBUG1, \"cannot stream from %X/%X, minimum is\n> > %X/%X, forwarding\",\n> > + elog(LOG, \"%X/%X has been already streamed,\n> > forwarding to %X/%X\",\n> >\n> > FWIW, I most prefer #1. I see #2 as optional. 
and see #3 as the\n> > above.\n>\n> Attached.\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 19 Jun 2021 15:30:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question about StartLogicalReplication() error path" } ]
[ { "msg_contents": "Hi,\n\nThe Release Management Team (Peter Geoghegan, Andrew Dunstan and\nmyself) proposes that the date of the PostgreSQL 14 Beta 2 release\nwill be **Thursday June 24, 2021**, which aligns with the past\npractice.\n\nThanks,\n--\nMichael", "msg_date": "Fri, 11 Jun 2021 13:19:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Release 14 Beta 2" } ]
[ { "msg_contents": "Hi Hackers,\n\nI played pgbench with wrong parameters, and I found a bug candidate.\nThe latest commit in my source is 3a09d75.\n\n1. Do initdb and start.\n2. Initialize schema and data with \"scale factor\" = 1.\n3. Execute the following command many times:\n\n$ pgbench -c 101 -j 10 postgres\n\nThen, sometimes a negative \"initial connection time\" was returned.\nLatency average is also strange.\n\n```\n$ pgbench -c 101 -j 10 postgres\nstarting vacuum...end.\npgbench: error: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed: FATAL: sorry, too many clients already\npgbench (PostgreSQL) 14.0\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 101\nnumber of threads: 10\nnumber of transactions per client: 10\nnumber of transactions actually processed: 910/1010\nlatency average = 41387686.662 ms\ninitial connection time = -372896921.586 ms\ntps = 0.002440 (without initial connection time)\n```\n\nI searched pgbench.c and found the reason.\nWhen a thread fails to get some connections, it does not fill any value into thread->bench_start in threadRun().\nAnd if the failure occurs in the final thread (this means threads[nthreads - 1]->bench_start is zero),\nthe following if-statement sets bench_start to zero.\n\n```\n 6494 /* first recorded benchmarking start time */\n 6495 if (bench_start == 0 || thread->bench_start < bench_start)\n 6496 bench_start = thread->bench_start;\n```\n\nThe wrong bench_start propagates to printResults() and then some invalid values appear.\n\n```\n 6509 printResults(&stats, pg_time_now() - bench_start, conn_total_duration,\n 6510 bench_start - start_time, latency_late);\n```\n\nI cannot judge whether we have to fix it, but I attach the patch.\nThis simply ignores a result when thread->bench_start is zero.\n\nWhat do you think?\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED", "msg_date": "Fri, 11 Jun 2021 08:58:45 +0000", "msg_from": 
"\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": true, "msg_subject": "pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "\n\nHello Hayato-san,\n\n> I played pgbench with wrong parameters,\n\nThat's good:-)\n\n> and I found bug-candidate.\n>\n> 1. Do initdb and start.\n> 2. Initialize schema and data with \"scale factor\" = 1.\n> 3. execute following command many times:\n>\n> $ pgbench -c 101 -j 10 postgres\n>\n> Then, sometimes the negative \" initial connection time\" was returned.\n> Lateyncy average is also strange.\n>\n> ```\n> $ pgbench -c 101 -j 10 postgres\n> starting vacuum...end.\n> pgbench: error: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed: FATAL: sorry, too many clients already\n\nHmmm.\n\nAFAICR there was a decision to generate a report even if something went \nvery wrong, in this case some client could not connect, so some values \nare not initialized, hence the absurd figures, as you show below.\n\nMaybe we should revisit this decision.\n\n> initial connection time = -372896921.586 ms\n\n> I sought pgbench.c and found a reason.\n\n> When a thread failed to get some connections, they do not fill any values to thread->bench_start in threadRun().\n> And if the failure is caused in the final thread (this means threads[nthreads - 1]->bench_start is zero),\n> the following if-statement sets bench_start to zero.\n\n> I cannot distinguish whether we have to fix it, but I attache the patch.\n> This simply ignores a result when therad->bench_start is zero.\n\n> How do you think?\n\nHmmm. Possibly. Another option could be not to report anything after some \nerrors. I'm not sure, because it would depend on the use case. 
I guess the \ncommand returned an error status as well.\n\nI'm going to give it some thoughts.\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 11 Jun 2021 16:57:35 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "Dear Fabien,\n\nThank you for replying!\n\n> Hmmm. Possibly. Another option could be not to report anything after some \n> errors. I'm not sure, because it would depend on the use case. I guess the \n> command returned an error status as well.\n\nI did not know any use cases and decisions , but I vote to report nothing when error occurs.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n", "msg_date": "Mon, 14 Jun 2021 00:42:12 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "Hello Kuroda-san,\n\nOn Fri, 11 Jun 2021 08:58:45 +0000\n\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com> wrote:\n\n> Hi Hackers,\n> \n> I played pgbench with wrong parameters, and I found bug-candidate.\n> The latest commit in my source is 3a09d75.\n> \n> 1. Do initdb and start.\n> 2. Initialize schema and data with \"scale factor\" = 1.\n> 3. 
execute following command many times:\n> \n> $ pgbench -c 101 -j 10 postgres\n> \n> Then, sometimes the negative \" initial connection time\" was returned.\n> Lateyncy average is also strange.\n> \n> ```\n> $ pgbench -c 101 -j 10 postgres\n> starting vacuum...end.\n> pgbench: error: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed: FATAL: sorry, too many clients already\n> pgbench (PostgreSQL) 14.0\n> transaction type: <builtin: TPC-B (sort of)>\n> scaling factor: 1\n> query mode: simple\n> number of clients: 101\n> number of threads: 10\n> number of transactions per client: 10\n> number of transactions actually processed: 910/1010\n> latency average = 41387686.662 ms\n> initial connection time = -372896921.586 ms\n> tps = 0.002440 (without initial connection time)\n> ```\n> \n> I sought pgbench.c and found a reason.\n> When a thread failed to get some connections, they do not fill any values to thread->bench_start in threadRun().\n> And if the failure is caused in the final thread (this means threads[nthreads - 1]->bench_start is zero),\n> the following if-statement sets bench_start to zero.\n> \n> ```\n> 6494 /* first recorded benchmarking start time */\n> 6495 if (bench_start == 0 || thread->bench_start < bench_start)\n> 6496 bench_start = thread->bench_start;\n> ```\n> \n> The wrong bench_start propagates to printResult() and then some invalid values are appered.\n> \n> ```\n> 6509 printResults(&stats, pg_time_now() - bench_start, conn_total_duration,\n> 6510 bench_start - start_time, latency_late);\n> ```\n> \n> I cannot distinguish whether we have to fix it, but I attache the patch.\n> This simply ignores a result when therad->bench_start is zero.\n\n\n +\t\t/* skip if the thread faild to get connection */\n+\t\tif (thread->bench_start == 0)\n+\t\t\tcontinue;\n\nIt detects if a thread failed to get the initial connection by thread->bench_start == 0, but this assumes the initial value is zero. 
For ensuring this, I think it is better to initialize it in an early state, for example like this.\n\n@@ -6419,6 +6419,7 @@ main(int argc, char **argv)\n initRandomState(&thread->ts_throttle_rs);\n initRandomState(&thread->ts_sample_rs);\n thread->logfile = NULL; /* filled in later */\n+ thread->bench_start = 0;\n thread->latency_late = 0;\n initStats(&thread->stats, 0)\n\ntypo: faild -> failed\n\nRegards,\nYugo Nagata\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 14 Jun 2021 17:05:18 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "On Mon, 14 Jun 2021 00:42:12 +0000\n\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com> wrote:\n\n> Dear Fabien,\n> \n> Thank you for replying!\n> \n> > Hmmm. Possibly. Another option could be not to report anything after some \n> > errors. I'm not sure, because it would depend on the use case. I guess the \n> > command returned an error status as well.\n> \n> I did not know any use cases and decisions , but I vote to report nothing when error occurs.\n\nI would prefer to abort the thread whose connection got an error and report\nresults for other threads, as handled when doConnect fails in CSTATE_START_TX\nstate. \n\nIn this case, we have to set the state to CSTATE_ABORT before going to 'done'\nas fixed in the attached patch, in order to ensure that exit status is 2 and the\nresult reports \"pgbench: fatal: Run was aborted; the above results are incomplete.\" \n\nOtherwise, if we want pgbench to exit immediately when a connection error occurs, \nwe have tocall exit(1) to ensure the exit code is 1, of course. 
Anyway, it is wrong\nthat the current pgbench exits successfully with exit code 0 when doConnect fails.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Mon, 14 Jun 2021 17:07:02 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "\n>>> Hmmm. Possibly. Another option could be not to report anything after some\n>>> errors. I'm not sure, because it would depend on the use case. I guess the\n>>> command returned an error status as well.\n>>\n>> I did not know any use cases and decisions , but I vote to report nothing when error occurs.\n>\n> I would prefer to abort the thread whose connection got an error and report\n> results for other threads, as handled when doConnect fails in CSTATE_START_TX\n> state.\n\nIt is unclear to me whether it makes much sense to report performance when \nthings go wrong. At least when a one-connection-per-client bench is run, \nISTM that it should not proceed, because the bench could not even start \nas prescribed. When connections break while the bench has already started, \nmaybe it makes more sense to proceed, although I guess that maybe \nreattempting connections would also make sense in such a case.\n\n> In this case, we have to set the state to CSTATE_ABORT before going to 'done'\n> as fixed in the attached patch, in order to ensure that exit status is 2 and the\n> result reports \"pgbench: fatal: Run was aborted; the above results are incomplete.\"\n\nHmmm. I agree that at least reporting that there was an issue is a good \nidea.\n\n> Otherwise, if we want pgbench to exit immediately when a connection error occurs,\n> we have tocall exit(1) to ensure the exit code is 1, of course. 
Anyway, it is wrong\n> that thecurrent pgbench exit successfully with exit code 0 when doConnnect fails.\n\nIndeed, I can only agree on this one.\n\n-- \nFabien.\n\n\n", "msg_date": "Mon, 14 Jun 2021 11:30:14 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "Hello Fabien,\n\nOn Mon, 14 Jun 2021 11:30:14 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> >>> Hmmm. Possibly. Another option could be not to report anything after some\n> >>> errors. I'm not sure, because it would depend on the use case. I guess the\n> >>> command returned an error status as well.\n> >>\n> >> I did not know any use cases and decisions , but I vote to report nothing when error occurs.\n> >\n> > I would prefer to abort the thread whose connection got an error and report\n> > results for other threads, as handled when doConnect fails in CSTATE_START_TX\n> > state.\n> \n> It is unclear to me whether it makes much sense to report performance when \n> things go wrong. At least when a one connection per client bench is run \n> ISTM that it should not proceed, because the bench could not even start \n> as prescribe. \n\nI agreed that when an initial connections fails we cannot start a bench\nin the condition that the user wants and that we should stop early to let\nthe user know it and check the conf. \n\nI attached a patch, which is a fusion of my previous patch that changes the\nstate to CSTATE_ABORT when the socket get failure during the bench, and a\npart of your patch attached in [1] that exits for initial failures.\n\n[1] https://www.postgresql.org/message-id/alpine.DEB.2.22.394.2106141011100.1338009%40pseudo\n\n> When connection break while the bench has already started, \n> maybe it makes more sense to proceed, \n\nThe result would be incomplete also in this case. 
However, the reason why\nit is worth to proceed is that such information is still useful for users,\nor we don't want to waste the bench that has already started?\n\n> although I guess that maybe \n> reattempting connections would make also sense in such case.\n\nThis might become possible after pgbench gets the feature to retry in deadlock\nor serialization errors. I am working on rebase of the patch [2] and I will\nsubmit this in a few days.\n\n[2] https://www.postgresql.org/message-id/20210524112910.444fbfdfbff747bd3b9720ee@sraoss.co.jp\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Thu, 17 Jun 2021 00:59:34 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "\nHello Yugo-san,\n\n>> When connection break while the bench has already started,\n>> maybe it makes more sense to proceed,\n>\n> The result would be incomplete also in this case. However, the reason why\n> it is worth to proceed is that such information is still useful for users,\n> or we don't want to waste the bench that has already started?\n\nHmmm. It depends on what the user is testing. If one is interested in \nclient resilience under errors, the bench should probably attempt a \nreconnect. If one is interested in best performance when all is well,\nthen clearly something is amiss and there is no point to go on.\n\n>> although I guess that maybe reattempting connections would make also \n>> sense in such case.\n>\n> This might become possible after pgbench gets the feature to retry in deadlock\n> or serialization errors.\n\nYes, I agree that part of the needed infrastructure would be in place for \nthat. As reconnecting is already in place under -c, so possibly it is just \na matter of switching between states with some care.\n\n> I am working on rebase of the patch [2] and I will submit this in a few \n> days.\n\nOk. 
Very good, I look forward to your submission! I'll be sure to look at \nit.\n\n-- \nFabien.\n\n\n", "msg_date": "Wed, 16 Jun 2021 20:25:31 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "On Wed, 16 Jun 2021 20:25:31 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> \n> Hello Yugo-san,\n> \n> >> When connection break while the bench has already started,\n> >> maybe it makes more sense to proceed,\n> >\n> > The result would be incomplete also in this case. However, the reason why\n> > it is worth to proceed is that such information is still useful for users,\n> > or we don't want to waste the bench that has already started?\n> \n> Hmmm. It depends on what the user is testing. If one is interested in \n> client resilience under errors, the bench should probably attempt a \n> reconnect. If one is interested in best performance when all is well,\n> then clearly something is amiss and there is no point to go on.\n\nAgreed. After --max-tries options is implemented on pgbench, we would be\nable to add a new feature to allow users to choose if we try to reconnect\nor not. However, we don't have it yet for now, so we should just abort\nthe client and report the abortion at the end of the bench when a connection\nor socket error occurs during the bench, as same the existing behaviour.\n\nBy the way, the issue of inital connection erros reported in this thread\nwill be fixed by the patch attached in my previous post (a major part are\nwritten by you :-) ). 
Is this acceptable for you?\n\n\nRegards,\nYugo Nagata\n \n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Thu, 17 Jun 2021 12:55:56 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "\nHello Yugo-san,\n\n> By the way, the issue of initial connection errors reported in this thread\n> will be fixed by the patch attached in my previous post (a major part was\n> written by you :-)\n\nThat does not, on its own, ensure that it is bug free:-)\n\n> ). Is this acceptable for you?\n\nI disagree on two counts:\n\nFirst, thread[0] should not appear.\n\nSecond, currently the *only* function to change the client state is \nadvanceConnectionState, so it can be checked there and any bug is only \nthere. We had issues before when several functions were doing updates, \nand it was a mess to understand what was going on. I really want that it \nstays that way, so I disagree with setting the state to ABORTED from \nthreadRun. Moreover I do not see that it brings a feature, so ISTM that it \nis not an actual issue not to do it?\n\n-- \nFabien.\n\n\n", "msg_date": "Thu, 17 Jun 2021 10:37:05 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "Hello Fabien,\n\nOn Thu, 17 Jun 2021 10:37:05 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> > ). Is this acceptable for you?\n> \n> I disagree on two counts:\n> \n> First, thread[0] should not appear.\n> \n> Second, currently the *only* function to change the client state is \n> advanceConnectionState, so it can be checked there and any bug is only \n> there. We had issues before when several functions were doing updates, \n> and it was a mess to understand what was going on. 
I really want that it \n> stays that way, so I disagree with setting the state to ABORTED from \n> threadRun. Moreover I do not see that it brings a feature, so ISTM that it \n> is not an actual issue not to do it?\n\nOk. I gave up to change the state in threadRun. Instead, I changed the\ncondition at the end of bench, which enables to report abortion due to\nsocket errors.\n\n+@@ -6480,7 +6490,7 @@ main(int argc, char **argv)\n+ #endif\t\t\t\t\t\t\t/* ENABLE_THREAD_SAFETY */\n+ \n+ \t\tfor (int j = 0; j < thread->nstate; j++)\n+-\t\t\tif (thread->state[j].state == CSTATE_ABORTED)\n++\t\t\tif (thread->state[j].state != CSTATE_FINISHED)\n+ \t\t\t\texit_code = 2;\n+ \n+ \t\t/* aggregate thread level stats */\n\nDoes this make sense?\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Thu, 17 Jun 2021 18:16:42 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "\n>> Second, currently the *only* function to change the client state is\n>> advanceConnectionState, so it can be checked there and any bug is only\n>> there. We had issues before when several functions where doing updates,\n>> and it was a mess to understand what was going on. I really want that it\n>> stays that way, so I disagree with setting the state to ABORTED from\n>> threadRun. Moreover I do not see that it brings a feature, so ISTM that it\n>> is not an actual issue not to do it?\n>\n> Ok. I gave up to change the state in threadRun. 
Instead, I changed the\n> condition at the end of bench, which enables to report abortion due to\n> socket errors.\n>\n> +@@ -6480,7 +6490,7 @@ main(int argc, char **argv)\n> + #endif\t\t\t\t\t\t\t/* ENABLE_THREAD_SAFETY */\n> +\n> + \t\tfor (int j = 0; j < thread->nstate; j++)\n> +-\t\t\tif (thread->state[j].state == CSTATE_ABORTED)\n> ++\t\t\tif (thread->state[j].state != CSTATE_FINISHED)\n> + \t\t\t\texit_code = 2;\n> +\n> + \t\t/* aggregate thread level stats */\n>\n> Does this make sense?\n\nYes, definitely.\n\n-- \nFabien.\n\n\n", "msg_date": "Thu, 17 Jun 2021 11:52:04 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "At Thu, 17 Jun 2021 11:52:04 +0200 (CEST), Fabien COELHO <coelho@cri.ensmp.fr> wrote in \n> > Ok. I gave up to change the state in threadRun. Instead, I changed the\n> > condition at the end of bench, which enables to report abortion due to\n> > socket errors.\n> >\n> > +@@ -6480,7 +6490,7 @@ main(int argc, char **argv)\n> > + #endif\t\t\t\t\t\t\t/* ENABLE_THREAD_SAFETY */\n> > +\n> > + \t\tfor (int j = 0; j < thread->nstate; j++)\n> > +-\t\t\tif (thread->state[j].state == CSTATE_ABORTED)\n> > ++\t\t\tif (thread->state[j].state != CSTATE_FINISHED)\n> > + \t\t\t\texit_code = 2;\n> > +\n> > + \t\t/* aggregate thread level stats */\n> >\n> > Does this make sense?\n> \n> Yes, definitely.\n\nI sought for a simple way to enforce all client finishes with the\nstates abort or finished but I didn't find. So +1 for the\nchange. However, as a matter of style. if we touch the code maybe we\nwant to enclose the if statement.\n\nDoing this means we regard any state other than CSTATE_FINISHED as\naborted. So, the current goto's to done in threadRun are effectively\naborting a part or the all clients running on the thread. 
So for\nexample the following place:\n\npgbench.c:6713\n> /* must be something wrong */\n> pg_log_error(\"%s() failed: %m\", SOCKET_WAIT_METHOD);\n> goto done;\n\nShould say such like \"thread %d aborted: %s() failed: ...\".\n\n\nI'm not sure what is the consensus here about the case where aborted\nclient can reconnect to the same server. This patch doesn't do that. However, I think causing reconnection needs more work than accepted as a fix while beta.\n\n\n====\n\n+ /* as the bench is already running, we do not abort */\n pg_log_error(\"client %d aborted while establishing connection\", st->id);\n st->state = CSTATE_ABORTED;\n\nThe comment looks strange that it is saying \"we don't abort\" while\nsetting the state to CSTATE_ABORT then showing \"client %d aborted\".\n\n\n====\n if ((con = doConnect()) == NULL)\n+ {\n+ pg_log_fatal(\"connection for initialization failed\");\n exit(1);\n\ndoConnect() prints an error message given from libpq. So the\nadditional message is redundant.\n\n\n====\n errno = THREAD_BARRIER_INIT(&barrier, nthreads);\n if (errno != 0)\n+ {\n pg_log_fatal(\"could not initialize barrier: %m\");\n+ exit(1);\n\nThis is a run-time error. Maybe we should return 2 in that case.\n\n\n===\n if (thread->logfile == NULL)\n {\n pg_log_fatal(\"could not open logfile \\\"%s\\\": %m\", logpath);\n- goto done;\n+ exit(1);\n\nMaybe we should exit with 2 in this case. If we exit in this case, we might\nalso want to exit when fclose() fails. (Currently the error of\nfclose() is ignored.)\n\n\n\n===\n+ /* coldly abort on connection failure */\n+ pg_log_fatal(\"cannot create connection for thread %d client %d\",\n+ thread->tid, i);\n+ exit(1);\n\nIt seems to me that the \"thread %d client %d(not client id but the\nclient index within the thread)\" doesn't make sense to users. 
Even if\nwe showed a message like that, it should show only the global client\nid (cstate->id).\n\nI think that we should return with 2 here but we return with 1\nin another place for the same reason..\n\n\n===\n /* must be something wrong */\n- pg_log_fatal(\"%s() failed: %m\", SOCKET_WAIT_METHOD);\n+ pg_log_error(\"%s() failed: %m\", SOCKET_WAIT_METHOD);\n goto done;\n\nWhy doesn't a fatal error cause an immediate exit? (And if we change\nthis to fatal, we also need to change similar errors in the same\nfunction to fatal.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 18 Jun 2021 17:26:03 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "\nHello,\n\n> Doing this means we regard any state other than CSTATE_FINISHED as\n> aborted. So, the current goto's to done in threadRun are effectively\n> aborting a part or the all clients running on the thread. So for\n> example the following place:\n>\n> pgbench.c:6713\n>> /* must be something wrong */\n>> pg_log_error(\"%s() failed: %m\", SOCKET_WAIT_METHOD);\n>> goto done;\n>\n> Should say such like \"thread %d aborted: %s() failed: ...\".\n\nYep, possibly.\n\n> I'm not sure what is the consensus here about the case where aborted\n> client can reconnect to the same server. This patch doesn't do that.\n\nNon trivial future work.\n\n> However, I think causing reconnection needs more work than accepted as \n> a fix while beta.\n\nIt is an entire project which requires some thinking about.\n\n> + /* as the bench is already running, we do not abort */\n> pg_log_error(\"client %d aborted while establishing connection\", st->id);\n> st->state = CSTATE_ABORTED;\n>\n> The comment looks strange that it is saying \"we don't abort\" while\n> setting the state to CSTATE_ABORT then showing \"client %d aborted\".\n\nIndeed. 
There is abort from the client, which just means that it stops \nsending transactions, and abort for the process, which is basically \n\"exit(1)\".\n\n> ====\n> if ((con = doConnect()) == NULL)\n> + {\n> + pg_log_fatal(\"connection for initialization failed\");\n> exit(1);\n>\n> doConnect() prints an error message given from libpq. So the\n> additional message is redundant.\n\nThis is not the same for me: doConnect may fail but we may decide to go \nretry the connection later, or just one client may be disconnected but \nothers are going on, which is different from deciding to stop the whole \nprogram, which deserves a message on its own.\n\n> ====\n> errno = THREAD_BARRIER_INIT(&barrier, nthreads);\n> if (errno != 0)\n> + {\n> pg_log_fatal(\"could not initialize barrier: %m\");\n> + exit(1);\n>\n> This is a run-time error. Maybe we should return 2 in that case.\n\nHmmm. Yep.\n\n> ===\n> if (thread->logfile == NULL)\n> {\n> pg_log_fatal(\"could not open logfile \\\"%s\\\": %m\", logpath);\n> - goto done;\n> + exit(1);\n>\n> Maybe we should exit with 2 in this case.\n\nYep.\n\n> If we exit in this case, we might also want to exit when fclose() fails. \n> (Currently the error of fclose() is ignored.)\n\nNot sure. I'd leave it at that for now.\n\n> ===\n> + /* coldly abort on connection failure */\n> + pg_log_fatal(\"cannot create connection for thread %d client %d\",\n> + thread->tid, i);\n> + exit(1);\n>\n> It seems to me that the \"thread %d client %d(not client id but the\n> client index within the thread)\" doesn't make sense to users. Even if\n> we showed a message like that, it should show only the global client\n> id (cstate->id).\n\nThis is not obvious to me. 
I think that we should be homogeneous with what \nis already done around.\n\n> I think that we should return with 2 here but we return with 1\n> in another place for the same reason..\n\nPossibly.\n\n> /* must be something wrong */\n> - pg_log_fatal(\"%s() failed: %m\", SOCKET_WAIT_METHOD);\n> + pg_log_error(\"%s() failed: %m\", SOCKET_WAIT_METHOD);\n> goto done;\n>\n> Why doesn't a fatal error cause an immediate exit?\n\nGood point. I do not know, but I would expect it to be the case, and \nAFAICR it does not.\n\n> (And if we change this to fatal, we also need to change similar errors \n> in the same function to fatal.)\n\nPossibly.\n\nI'll look into it over the week-end.\n\nThanks for the feedback!\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 18 Jun 2021 14:54:27 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": ">>> /* must be something wrong */\n>>> pg_log_error(\"%s() failed: %m\", SOCKET_WAIT_METHOD);\n>>> goto done;\n>> \n>> Should say such like \"thread %d aborted: %s() failed: ...\".\n\nAfter having a lookg, there are already plenty such cases. I'd say not to \nchange anything for beta, and think of it for the next round.\n\n>> ====\n>> errno = THREAD_BARRIER_INIT(&barrier, nthreads);\n>> if (errno != 0)\n>> + {\n>> pg_log_fatal(\"could not initialize barrier: %m\");\n>> + exit(1);\n>> \n>> This is a run-time error. Maybe we should return 2 in that case.\n\nI think that you are right, but there are plenty such places where exit \nshould be 2 instead of 1 if the doc is followed:\n\n\"\"\"Errors during the run such as database errors or problems in the script \nwill result in exit status 2.\"\"\"\n\nMy beta take is to let these as they are, i.e. 
pretty inconsistent all \nover pgbench, and schedule a cleanup on the next round.\n\n>> ===\n>> if (thread->logfile == NULL)\n>> {\n>> pg_log_fatal(\"could not open logfile \\\"%s\\\": %m\", logpath);\n>> - goto done;\n>> + exit(1);\n>> \n>> Maybe we should exit with 2 in this case.\n>\n> Yep.\n\nThe bench is not even started, this is not really run time yet, 1 seems \nok. The failure may be due to a typo in the path which comes from the \nuser.\n\n>> If we exit in this case, we might also want to exit when fclose() fails. \n>> (Currently the error of fclose() is ignored.)\n>\n> Not sure. I'd leave it at that for now.\n\nI stand by this.\n\n>> + /* coldly abort on connection failure */\n>> + pg_log_fatal(\"cannot create connection for thread %d client %d\",\n>> + thread->tid, i);\n>> + exit(1);\n>> \n>> It seems to me that the \"thread %d client %d(not client id but the\n>> client index within the thread)\" doesn't make sense to users. Even if\n>> we showed a message like that, it should show only the global client\n>> id (cstate->id).\n>\n> This is not obvious to me. I think that we should be homogeneous with what is \n> already done around.\n\nok for only giving the global client id.\n\n>> I think that we should return with 2 here but we return with 1\n>> in another place for the same reason..\n>\n> Possibly.\n\nAgain for this one, the bench has not really started, so 1 seems fine.\n\n>> /* must be something wrong */\n>> - pg_log_fatal(\"%s() failed: %m\", SOCKET_WAIT_METHOD);\n>> + pg_log_error(\"%s() failed: %m\", SOCKET_WAIT_METHOD);\n>> goto done;\n>> \n>> Why doesn't a fatal error cause an immediate exit?\n>\n> Good point. 
I do not know, but I would expect it to be the case, and AFAICR \n> it does not.\n>\n>> (And if we change this to fatal, we also need to change similar errors in \n>> the same function to fatal.)\n>\n> Possibly.\n\nOn second look, I think that error is fine, indeed we do not stop the \nprocess, so \"fatal\" it is not;\n\nAttached Yugo-san patch with some updates discussed in the previous mails, \nso as to move things along.\n\n-- \nFabien.", "msg_date": "Fri, 18 Jun 2021 15:58:48 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "Hello Horiguchi-san, Fabien,\n\nOn Fri, 18 Jun 2021 15:58:48 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> \n> >>> /* must be something wrong */\n> >>> pg_log_error(\"%s() failed: %m\", SOCKET_WAIT_METHOD);\n> >>> goto done;\n> >> \n> >> Should say such like \"thread %d aborted: %s() failed: ...\".\n> \n> After having a lookg, there are already plenty such cases. I'd say not to \n> change anything for beta, and think of it for the next round.\n\nAgreed. Basically, I think the existing message should be left as is.\n\n> >> ====\n> >> errno = THREAD_BARRIER_INIT(&barrier, nthreads);\n> >> if (errno != 0)\n> >> + {\n> >> pg_log_fatal(\"could not initialize barrier: %m\");\n> >> + exit(1);\n> >> \n> >> This is a run-time error. Maybe we should return 2 in that case.\n> \n> I think that you are right, but there are plenty such places where exit \n> should be 2 instead of 1 if the doc is followed:\n> \n> \"\"\"Errors during the run such as database errors or problems in the script \n> will result in exit status 2.\"\"\"\n> \n> My beta take is to let these as they are, i.e. 
pretty inconsistent all \n> over pgbench, and schedule a cleanup on the next round.\n\nAs with Fabien's comment below about thread->logfile, \n\n> >> ===\n> >> if (thread->logfile == NULL)\n> >> {\n> >> pg_log_fatal(\"could not open logfile \\\"%s\\\": %m\", logpath);\n> >> - goto done;\n> >> + exit(1);\n> >> \n> >> Maybe we should exit with 2 in this case.\n> >\n> > Yep.\n> \n> The bench is not even started, this is not really run time yet, 1 seems \n> ok. The failure may be due to a typo in the path which comes from the \n> user.\n\nthe bench is not started at THREAD_BARRIER_INIT, so I think exit(1) is ok. \n\n> \n> >> /* must be something wrong */\n> >> - pg_log_fatal(\"%s() failed: %m\", SOCKET_WAIT_METHOD);\n> >> + pg_log_error(\"%s() failed: %m\", SOCKET_WAIT_METHOD);\n> >> goto done;\n> >> \n> >> Why doesn't a fatal error cause an immediate exit?\n> >\n> > Good point. I do not know, but I would expect it to be the case, and AFAICR \n> > it does not.\n> >\n> >> (And if we change this to fatal, we also need to change similar errors in \n> >> the same function to fatal.)\n> >\n> > Possibly.\n> \n> On second look, I think that error is fine, indeed we do not stop the \n> process, so \"fatal\" it is not;\n\nI replaced this 'fatal' with 'error' because we are aborting the client\ninstead of exit(1). When pgbench was rewritten to use the common logging API\nby commit 30a3e772b40, somehow pg_log_fatal was used, but I am\nwondering if it should have been pg_log_error.\n\n> Attached Yugo-san patch with some updates discussed in the previous mails, \n> so as to move things along.\n\nThank you for the update. 
I agree with this fix.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Sat, 19 Jun 2021 00:46:05 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "Hello,\n\nOn Fri, 18 Jun 2021 15:58:48 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> Attached Yugo-san patch with some updates discussed in the previous mails, \n> so as to move things along.\n\nI attached the patch rebased to a change due to 856de3b39cf.\n\nRegards,\nYugo Nagata\n\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Thu, 29 Jul 2021 13:23:25 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "\n\nOn 2021/07/29 13:23, Yugo NAGATA wrote:\n> Hello,\n> \n> On Fri, 18 Jun 2021 15:58:48 +0200 (CEST)\n> Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> \n>> Attached Yugo-san patch with some updates discussed in the previous mails,\n>> so as to move things along.\n> \n> I attached the patch rebased to a change due to 856de3b39cf.\n\n+\t\tpg_log_fatal(\"connection for initialization failed\");\n+\t\tpg_log_fatal(\"setup connection failed\");\n+\t\t\t\tpg_log_fatal(\"cannot create connection for client %d\",\n\nThese fatal messages output when doConnect() fails should be a bit more consistent each other? 
For example,\n\n could not create connection for initialization\n could not create connection for setup\n could not create connection for client %d\n\nI'm not sure, but *if* \"xxx failed\" is more proper for pgbench, what about\n\n connection for initialization failed\n connection for setup failed\n connection for client %d failed\n\n\n> Exit status 1 indicates static problems such as invalid command-line options.\n> Errors during the run such as database errors or problems in the script will\n> result in exit status 2.\n\nWhile reading the code and docs related to the patch, I found\nthese descriptions in pgbench docs. The first description needs to be\nupdated? Because even database error (e.g., failure of connection for setup)\ncan result in exit status 1 if it happens before the benchmark actually runs.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 7 Sep 2021 02:34:17 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "Hello Fujii-san,\n\nOn Tue, 7 Sep 2021 02:34:17 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> On 2021/07/29 13:23, Yugo NAGATA wrote:\n> > Hello,\n> > \n> > On Fri, 18 Jun 2021 15:58:48 +0200 (CEST)\n> > Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > \n> >> Attached Yugo-san patch with some updates discussed in the previous mails,\n> >> so as to move things along.\n> > \n> > I attached the patch rebased to a change due to 856de3b39cf.\n> \n> +\t\tpg_log_fatal(\"connection for initialization failed\");\n> +\t\tpg_log_fatal(\"setup connection failed\");\n> +\t\t\t\tpg_log_fatal(\"cannot create connection for client %d\",\n> \n> These fatal messages output when doConnect() fails should be a bit more consistent each other? 
For example,\n> \n> could not create connection for initialization\n> could not create connection for setup\n> could not create connection for client %d\n\nOk. I fixed it as you suggested.\n\n> > Exit status 1 indicates static problems such as invalid command-line options.\n> > Errors during the run such as database errors or problems in the script will\n> > result in exit status 2.\n> \n> While reading the code and docs related to the patch, I found\n> these descriptions in pgbench docs. The first description needs to be\n> updated? Because even database error (e.g., failure of connection for setup)\n> can result in exit status 1 if it happens before the benchmark actually runs.\n\nThat makes sense. Failures of setup connection or initial connection don't\nseem 'static problems'. I rewrote this description to explain exit status 1\nindicates also internal errors and early errors.\n\n Exit status 1 indicates static problems such as invalid command-line options\n or internal errors which are supposed to never occur. Early errors that occur\n when starting benchmark such as initial connection failures also exit with\n status 1.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Fri, 24 Sep 2021 07:26:45 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "\n\nOn 2021/09/24 7:26, Yugo NAGATA wrote:\n> That makes sense. Failures of setup connection or initial connection don't\n> seem 'static problems'. I rewrote this description to explain exit status 1\n> indicates also internal errors and early errors.\n>\n> Exit status 1 indicates static problems such as invalid command-line options\n> or internal errors which are supposed to never occur. Early errors that occur\n> when starting benchmark such as initial connection failures also exit with\n> status 1.\n\nLGTM. 
Barring any objection, I will commit the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 24 Sep 2021 11:26:12 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "\n\nOn 2021/09/24 11:26, Fujii Masao wrote:\n> \n> \n> On 2021/09/24 7:26, Yugo NAGATA wrote:\n>> That makes sense. Failures of setup connection or initial connection don't\n>> seem 'static problems'. I rewrote this description to explain exit status 1\n>> indicates also internal errors and early errors.\n>>\n>>   Exit status 1 indicates static problems such as invalid command-line options\n>>   or internal errors which are supposed to never occur.  Early errors that occur\n>>   when starting benchmark such as initial connection failures also exit with\n>>   status 1.\n> \n> LGTM. Barring any objection, I will commit the patch.\n\nI extracted two changes from the patch and pushed (also back-patched) them.\n\nThe remainings are the changes of handling of initial connection or\nlogfile open failures. I agree to push them at least for the master.\nBut I'm not sure if they should be back-patched. Without these changes,\neven when those failures happen, pgbench proceeds the benchmark and\nreports the result. But with the changes, pgbench exits immediately in\nthat case. I'm not sure if there are people who expect this behavior,\nbut if there are, maybe we should not change it at least at stable branches.\nThought?\n\nBTW, when logfile fails to be opened, pgbench gets stuck due to commit\naeb57af8e6. 
So even if we decided not to back-patch those changes,\nwe should improve the handling of logfile open failure, to fix the issue.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 29 Sep 2021 22:11:53 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "On Wed, Sep 29, 2021 at 10:11:53PM +0900, Fujii Masao wrote:\n> BTW, when logfile fails to be opened, pgbench gets stuck due to commit\n> aeb57af8e6. So even if we decided not to back-patch those changes,\n> we should improve the handling of logfile open failure, to fix the issue.\n\nThere is an entry in the CF for this thread:\nhttps://commitfest.postgresql.org/34/3219/\n\nI have moved that to the next one as some pieces are missing. If you\nare planning to handle the rest, could you register your name as a\ncommitter?\n--\nMichael", "msg_date": "Fri, 1 Oct 2021 15:27:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "On 2021/10/01 15:27, Michael Paquier wrote:\n> On Wed, Sep 29, 2021 at 10:11:53PM +0900, Fujii Masao wrote:\n>> BTW, when logfile fails to be opened, pgbench gets stuck due to commit\n>> aeb57af8e6. So even if we decided not to back-patch those changes,\n>> we should improve the handling of logfile open failure, to fix the issue.\n> \n> There is an entry in the CF for this thread:\n> https://commitfest.postgresql.org/34/3219/\n> \n> I have moved that to the next one as some pieces are missing. 
If you\n> are planning to handle the rest, could you register your name as a\n> committer?\n\nThanks for letting me know that!\nI registered myself as a committer of the patch again.\n\n\n\tpg_time_usec_t conn_duration;\t/* cumulated connection and deconnection\n\t\t\t\t\t\t\t\t\t * delays */\n\nBTW, while reading the patch, I found the above comment in pgbench.c.\n\"deconnection\" seems a valid word in French (?), but isn't it better to\nreplace it with \"disconnection\"? Patch attached.\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Sat, 9 Oct 2021 00:41:33 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "On 2021/09/29 22:11, Fujii Masao wrote:\n> \n> \n> On 2021/09/24 11:26, Fujii Masao wrote:\n>>\n>>\n>> On 2021/09/24 7:26, Yugo NAGATA wrote:\n>>> That makes sense. Failures of setup connection or initial connection don't\n>>> seem 'static problems'. I rewrote this description to explain exit status 1\n>>> indicates also internal errors and early errors.\n>>>\n>>>   Exit status 1 indicates static problems such as invalid command-line options\n>>>   or internal errors which are supposed to never occur.  Early errors that occur\n>>>   when starting benchmark such as initial connection failures also exit with\n>>>   status 1.\n>>\n>> LGTM. Barring any objection, I will commit the patch.\n> \n> I extracted two changes from the patch and pushed (also back-patched) them.\n> \n> The remainings are the changes of handling of initial connection or\n> logfile open failures. I agree to push them at least for the master.\n> But I'm not sure if they should be back-patched. Without these changes,\n> even when those failures happen, pgbench proceeds the benchmark and\n> reports the result. 
But with the changes, pgbench exits immediately in\n> that case. I'm not sure if there are people who expect this behavior,\n> but if there are, maybe we should not change it at least at stable branches.\n> Thought?\n\nThe current behavior should be improved, but not a bug.\nSo I don't think that the patch needs to be back-patched.\nBarring any objection, I will push the attached patch to the master.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Mon, 1 Nov 2021 23:01:54 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "\n\nOn 2021/10/09 0:41, Fujii Masao wrote:\n> \n> \n> On 2021/10/01 15:27, Michael Paquier wrote:\n>> On Wed, Sep 29, 2021 at 10:11:53PM +0900, Fujii Masao wrote:\n>>> BTW, when logfile fails to be opened, pgbench gets stuck due to commit\n>>> aeb57af8e6. So even if we decided not to back-patch those changes,\n>>> we should improve the handling of logfile open failure, to fix the issue.\n>>\n>> There is an entry in the CF for this thread:\n>> https://commitfest.postgresql.org/34/3219/\n>>\n>> I have moved that to the next one as some pieces are missing.  If you\n>> are planning to handle the rest, could you register your name as a\n>> committer?\n> \n> Thanks for letting me know that!\n> I registered myself as a committer of the patch again.\n> \n> \n> \tpg_time_usec_t conn_duration;\t/* cumulated connection and deconnection\n> \t\t\t\t\t\t\t\t\t * delays */\n> \n> BTW, while reading the patch, I found the above comment in pgbench.c.\n> \"deconnection\" seems a valid word in French (?), but isn't it better to\n> replace it with \"disconnection\"? 
Patch attached.\n\nBarring any objection, I will push this patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 1 Nov 2021 23:02:44 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "\n\nOn 2021/11/01 23:01, Fujii Masao wrote:\n>> The remainings are the changes of handling of initial connection or\n>> logfile open failures. I agree to push them at least for the master.\n>> But I'm not sure if they should be back-patched. Without these changes,\n>> even when those failures happen, pgbench proceeds the benchmark and\n>> reports the result. But with the changes, pgbench exits immediately in\n>> that case. I'm not sure if there are people who expect this behavior,\n>> but if there are, maybe we should not change it at least at stable branches.\n>> Thought?\n> \n> The current behavior should be improved, but not a bug.\n> So I don't think that the patch needs to be back-patched.\n> Barring any objection, I will push the attached patch to the master.\n\nPushed. Thanks!\n\nI also pushed the typo-fix patch that I proposed upthread.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 2 Nov 2021 23:11:39 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" }, { "msg_contents": "On Tue, 2 Nov 2021 23:11:39 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> \n> \n> On 2021/11/01 23:01, Fujii Masao wrote:\n> >> The remainings are the changes of handling of initial connection or\n> >> logfile open failures. I agree to push them at least for the master.\n> >> But I'm not sure if they should be back-patched. 
Without these changes,\n> >> even when those failures happen, pgbench proceeds the benchmark and\n> >> reports the result. But with the changes, pgbench exits immediately in\n> >> that case. I'm not sure if there are people who expect this behavior,\n> >> but if there are, maybe we should not change it at least at stable branches.\n> >> Thought?\n> > \n> > The current behavior should be improved, but not a bug.\n> > So I don't think that the patch needs to be back-patched.\n> > Barring any objection, I will push the attached patch to the master.\n> \n> Pushed. Thanks!\n\nThanks!\n\n> \n> I also pushed the typo-fix patch that I proposed upthread.\n> \n> Regards,\n> \n> -- \n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n> \n> \n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Thu, 4 Nov 2021 09:31:52 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench bug candidate: negative \"initial connection time\"" } ]
[ { "msg_contents": "Hi all,\n\nFollowing up with the recent thread that dealt with the same $subject\nfor the TAP tests, I have gone through pg_regress.c:\nhttps://www.postgresql.org/message-id/YLbjjRpucIeZ78VQ@paquier.xyz\n\nThe list of environment variables that had better be reset when using\na temporary instance is very close to TestLib.pm, leading to the\nattached. Please note that that the list of unsetted parameters has\nbeen reorganized to be consistent with the TAP tests, and that I have\nadded comments referring one and the other.\n\nThoughts?\n--\nMichael", "msg_date": "Fri, 11 Jun 2021 21:07:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "pg_regress.c also sensitive to various PG* environment variables" }, { "msg_contents": "On 2021-Jun-11, Michael Paquier wrote:\n\n> Following up with the recent thread that dealt with the same $subject\n> for the TAP tests, I have gone through pg_regress.c:\n> https://www.postgresql.org/message-id/YLbjjRpucIeZ78VQ@paquier.xyz\n\nGood idea.\n\n> The list of environment variables that had better be reset when using\n> a temporary instance is very close to TestLib.pm, leading to the\n> attached. Please note that that the list of unsetted parameters has\n> been reorganized to be consistent with the TAP tests, and that I have\n> added comments referring one and the other.\n> \n> Thoughts?\n\nI think if they're to be kept in sync, then the exceptions should be\nnoted. I mean, where PGCLIENTENCODING would otherwise be, I'd add\n/* PGCLIENTENCODING set above */\n/* See below for PGHOSTADDR */\nand so on (PGHOST and PGPORT probably don't need this because they're\nimmediately below; not sure; but I would put them in alphabetical order\nin both lists for sure and then that wouldn't apply). 
Otherwise I would\nthink that it's an omission and would set to fix it.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Fri, 11 Jun 2021 10:08:20 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg_regress.c also sensitive to various PG* environment variables" }, { "msg_contents": "On Fri, Jun 11, 2021 at 10:08:20AM -0400, Alvaro Herrera wrote:\n> I think if they're to be kept in sync, then the exceptions should be\n> noted. I mean, where PGCLIENTENCODING would otherwise be, I'd add\n> /* PGCLIENTENCODING set above */\n> /* See below for PGHOSTADDR */\n> and so on (PGHOST and PGPORT probably don't need this because they're\n> immediately below; not sure; but I would put them in alphabetical order\n> in both lists for sure and then that wouldn't apply). Otherwise I would\n> think that it's an omission and would set to fix it.\n\nGood idea, thanks. I'll add comments for each one that cannot be\nunsetted.\n--\nMichael", "msg_date": "Sat, 12 Jun 2021 09:10:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pg_regress.c also sensitive to various PG* environment variables" }, { "msg_contents": "On Sat, Jun 12, 2021 at 09:10:06AM +0900, Michael Paquier wrote:\n> Good idea, thanks. I'll add comments for each one that cannot be\n> unsetted.\n\nAnd done, finally.\n--\nMichael", "msg_date": "Sun, 13 Jun 2021 20:14:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pg_regress.c also sensitive to various PG* environment variables" } ]
[ { "msg_contents": "I noticed that we are getting random failures [1][2][3] in the\nrecovery test on hoverfly. The failures are in 022_crash_temp_files\nand 013_crash_restart. Both the tests failed due to same reason:\n\nack Broken pipe: write( 13, 'SELECT 1' ) at\n/home/nm/src/build/IPC-Run-0.94/lib/IPC/Run/IO.pm line 558.\n\nIt seems the error happens in both the tests when after issuing a\nKILL, we are trying to reconnect. Can we do anything for this?\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hoverfly&dt=2021-06-11%2006%3A59%3A59\n[2] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hoverfly&dt=2021-06-06%2007%3A09%3A53\n[3] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hoverfly&dt=2021-06-05%2008%3A40%3A49\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 11 Jun 2021 17:38:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "recovery test failures on hoverfly" }, { "msg_contents": "On Fri, Jun 11, 2021 at 05:38:34PM +0530, Amit Kapila wrote:\n> It seems the error happens in both the tests when after issuing a\n> KILL, we are trying to reconnect. Can we do anything for this?\n\nThis is the same problem as c757a3da and 6d41dd0, where we write a\nquery to a pipe but the kill, causing a failure, makes the test fail\nwith a SIGPIPE in IPC::Run as a query is sent down to a pipe.\n\nI think that using SELECT 1 to test if the server has been restarted\nis a bit crazy. I would suggest to use instead a loop based on\npg_isready.\n--\nMichael", "msg_date": "Fri, 11 Jun 2021 21:20:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: recovery test failures on hoverfly" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Jun 11, 2021 at 05:38:34PM +0530, Amit Kapila wrote:\n>> It seems the error happens in both the tests when after issuing a\n>> KILL, we are trying to reconnect. 
Can we do anything for this?\n\n> This is the same problem as c757a3da and 6d41dd0, where we write a\n> query to a pipe but the kill, causing a failure, makes the test fail\n> with a SIGPIPE in IPC::Run as a query is sent down to a pipe.\n\nIndeed.\n\n> I think that using SELECT 1 to test if the server has been restarted\n> is a bit crazy. I would suggest to use instead a loop based on\n> pg_isready.\n\nThe precedent of the previous fixes would seem to suggest seeing if\nwe can replace 'SELECT 1' with \"undef\". Not sure if that'll work\nwithout annoying changes to poll_query_until, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Jun 2021 10:37:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovery test failures on hoverfly" }, { "msg_contents": "I wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> This is the same problem as c757a3da and 6d41dd0, where we write a\n>> query to a pipe but the kill, causing a failure, makes the test fail\n>> with a SIGPIPE in IPC::Run as a query is sent down to a pipe.\n\n> The precedent of the previous fixes would seem to suggest seeing if\n> we can replace 'SELECT 1' with \"undef\". Not sure if that'll work\n> without annoying changes to poll_query_until, though.\n\nI noticed that elver failed this same way today, so that got me\nannoyed enough to pursue a fix. Using \"undef\" as poll_query_until's\ninput almost works, except it turns out that it fails to notice psql\nconnection failures in that case! It is *only* looking at psql's\nstdout, not at either stderr or the exit status, which seems seriously\nbogus in its own right; not least because poll_query_until's own\ndocumentation claims it will continue waiting after an error, which\nis exactly what it's not doing. So I propose the attached.\n\n(I first tried to make it check $result == 0, but it seems there are a\nlot of cases where psql returns status 1 in these tests. 
That seems\npretty bogus too, but probably beta is no time to change that\nbehavior.)\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 11 Jun 2021 18:28:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovery test failures on hoverfly" }, { "msg_contents": ">> Michael Paquier <michael@paquier.xyz> writes:\n>>> This is the same problem as c757a3da and 6d41dd0, where we write a\n>>> query to a pipe but the kill, causing a failure, makes the test fail\n>>> with a SIGPIPE in IPC::Run as a query is sent down to a pipe.\n\nAfter checking the git logs, I realized that this failure is actually\nnew since 11e9caff8: before that, poll_query_until passed the query\non the command line not stdin, so it wasn't vulnerable to SIGPIPE.\nSo that explains why we only recently started to see this.\n\nThe fix I proposed seems to work fine in all branches, so I went\nahead and pushed it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Jun 2021 15:15:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovery test failures on hoverfly" }, { "msg_contents": "\nOn 6/12/21 3:15 PM, Tom Lane wrote:\n>>> Michael Paquier <michael@paquier.xyz> writes:\n>>>> This is the same problem as c757a3da and 6d41dd0, where we write a\n>>>> query to a pipe but the kill, causing a failure, makes the test fail\n>>>> with a SIGPIPE in IPC::Run as a query is sent down to a pipe.\n> After checking the git logs, I realized that this failure is actually\n> new since 11e9caff8: before that, poll_query_until passed the query\n> on the command line not stdin, so it wasn't vulnerable to SIGPIPE.\n> So that explains why we only recently started to see this.\n>\n> The fix I proposed seems to work fine in all branches, so I went\n> ahead and pushed it.\n>\n> \t\t\t\n\n\nI'm a bit dubious about this. It doesn't seem more robust to insist that\nwe pass undef in certain cases. 
If passing the SQL via stdin is fragile,\nas we also found to be the case with passing it via the command line,\nperhaps we should try passing it via a tmp file. Then there would\npresumably be no SIGPIPE.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 12 Jun 2021 17:19:38 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: recovery test failures on hoverfly" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I'm a bit dubious about this. It doesn't seem more robust to insist that\n> we pass undef in certain cases.\n\nTrue, it'd be nicer if that didn't matter; mainly because people\nwill get it wrong in future.\n\n> If passing the SQL via stdin is fragile,\n> as we also found to be the case with passing it via the command line,\n> perhaps we should try passing it via a tmp file. Then there would\n> presumably be no SIGPIPE.\n\nSeems kind of inefficient. Maybe writing and reading a file would\nbe a negligible cost compared to everything else involved, but\nI'm not sure.\n\nAnother angle is that the SIGPIPE complaints aren't necessarily\na bad thing: if psql doesn't read what we send, it's good to\nknow about that. IMO the real problem is that the errors are\nso darn nonrepeatable. I wonder if there is a way to make them\nmore reproducible?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Jun 2021 17:28:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovery test failures on hoverfly" }, { "msg_contents": "\nOn 6/12/21 5:28 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> I'm a bit dubious about this. 
It doesn't seem more robust to insist that\n>> we pass undef in certain cases.\n> True, it'd be nicer if that didn't matter; mainly because people\n> will get it wrong in future.\n\n\nRight, that's what I'm worried about.\n\n\n>\n>> If passing the SQL via stdin is fragile,\n>> as we also found to be the case with passing it via the command line,\n>> perhaps we should try passing it via a tmp file. Then there would\n>> presumably be no SIGPIPE.\n> Seems kind of inefficient. Maybe writing and reading a file would\n> be a negligible cost compared to everything else involved, but\n> I'm not sure.\n\n\nWell, in poll_query_until we would of course set up the file outside the\nloop. I suspect the cost would in fact be negligible.\n\n\nNote, too that the psql and safe_psql methods also pass the query via stdin.\n\n\n>\n> Another angle is that the SIGPIPE complaints aren't necessarily\n> a bad thing: if psql doesn't read what we send, it's good to\n> know about that. IMO the real problem is that the errors are\n> so darn nonrepeatable. I wonder if there is a way to make them\n> more reproducible?\n>\n> \t\t\t\n\n\nI don't know.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 12 Jun 2021 17:50:46 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: recovery test failures on hoverfly" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Note, too that the psql and safe_psql methods also pass the query via stdin.\n\nYeah. We need all of these to act the same, IMO. Recall that\nthe previous patches that introduced the undef hack were changing\ncallers of those routines, not poll_query_until.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Jun 2021 17:57:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recovery test failures on hoverfly" } ]
[ { "msg_contents": ">>\r\n You can have these queries return both rows if you use an\r\n accent-ignoring collation, like this example in the documentation:\r\n\r\n CREATE COLLATION ignore_accents (provider = icu, locale =\r\n 'und-u-ks-level1-kc-true', deterministic = false);\r\n<<\r\n\r\nIndeed. Is the dependency between the character expansion capability and accent-insensitive collations documented anywhere?\r\n\r\nAnother unexpected dependency appears to be @colCaseFirst=upper. If specified in combination with colStrength=secondary, it appears that the upper/lower case ordering is random within a group of characters that are secondary equal, e.g. 'A' < 'a', but 'b' < 'B', 'c' < 'C', ... , but then 'L' < 'l'. It is not even consistently ordered with respect to case. If I make it a nondeterministic CS_AI collation, then it sorts upper before lower consistently. The rule seems to be that you can't sort by case within a group that is case-insensitive. \r\n\r\nCan a CI collation be ordered upper case first, or is this a limitation of ICU?\r\n\r\nFor example, this is part of the sort order that I'd like to achieve with ICU, with the code point in column 1 and dense_rank() shown in the rightmost column indicating that 'b' = 'B', for example:\r\n\r\n66\tB\tB\t138\t151\r\n98\tb\tb\t138\t151 <- so within a group that is CI_AS equal, the sort order needs to be upper case first\r\n67\tC\tC\t139\t152\r\n99\tc\tc\t139\t152\r\n199\tÇ\tÇ\t140\t153\r\n231\tç\tç\t140\t153\r\n68\tD\tD\t141\t154\r\n100\td\td\t141\t154\r\n208\tÐ\tÐ\t142\t199\r\n240\tð\tð\t142\t199\r\n69\tE\tE\t143\t155\r\n101\te\te\t143\t155\r\n\r\nCan this sort order be achieved with ICU?\r\n\r\nMore generally, is there any interest in leveraging the full power of ICU tailoring rules to get whatever order someone may need, subject to the limitations of ICU itself? 
what would be required to extend CREATE COLLATION to accept an optional sequence of tailoring rules that we would store in the pg_collation catalog and apply along with the modifiers in the locale string?\r\n\r\n /Jim\r\n\r\n\r\n\r\n", "msg_date": "Fri, 11 Jun 2021 20:05:39 +0000", "msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Character expansion with ICU collations" }, { "msg_contents": "On 11.06.21 22:05, Finnerty, Jim wrote:\n>>>\n> You can have these queries return both rows if you use an\n> accent-ignoring collation, like this example in the documentation:\n> \n> CREATE COLLATION ignore_accents (provider = icu, locale =\n> 'und-u-ks-level1-kc-true', deterministic = false);\n> <<\n> \n> Indeed. Is the dependency between the character expansion capability and accent-insensitive collations documented anywhere?\n\nThe above is merely a consequence of what the default collation elements \nfor 'ß' are.\n\nExpansion isn't really a relevant concept in collation. Any character \ncan map to 1..N collation elements. The collation algorithm doesn't \ncare how many it is.\n\n> Can a CI collation be ordered upper case first, or is this a limitation of ICU?\n\nI don't know the authoritative answer to that, but to me it doesn't make \nsense, since the effect of a case-insensitive collation is to throw away \nthe third-level weights, so there is nothing left for \"upper case first\" \nto operate on.\n\n> More generally, is there any interest in leveraging the full power of ICU tailoring rules to get whatever order someone may need, subject to the limitations of ICU itself? 
what would be required to extend CREATE COLLATION to accept an optional sequence of tailoring rules that we would store in the pg_collation catalog and apply along with the modifiers in the locale string?\n\nyes\n\n\n", "msg_date": "Fri, 11 Jun 2021 22:29:54 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Character expansion with ICU collations" } ]
[ { "msg_contents": "Hi,\n\nI just tried to add a, seemingly innocuous, assertion to\nProcArrayAdd()/Remove() that proc->pgprocno is < arrayP->maxProcs. That\nquickly fails.\n\nThe reason for that is that PGPROC are created in the following order\n1) MaxBackends normal (*[1]) backends\n2) NUM_AUXILIARY_PROCS auxiliary processes\n3) max_prepared_xacts prepared transactions.\n\nand pgprocnos are assigned sequentially - they are needed to index into\nProcGlobal->allProcs.\n\nIn contrast to that procarray.c initializes maxProcs to\n#define PROCARRAY_MAXPROCS\t(MaxBackends + max_prepared_xacts)\ni.e. without the aux processes.\n\nWhich means that some of the prepared transactions have pgprocnos that are\nbigger than ProcArrayStruct->maxProcs. Hence my assertion failure.\n\n\nThis is obviously not a bug, but is quite hard to understand / confusing. I\nthink I made a similar mistake before.\n\nI'd at least like to add a comment with a warning somewhere in ProcArrayStruct\nor such.\n\nAn alternative approach would be to change the PGPROC order to instead be 1)\naux, b) normal backends, 3) prepared xacts and give aux processes a negative\nor invalid pgprocno.\n\nOne small advantage of that would be that we'd not need to \"skip\" over the\n\"aux process hole\" between normal and prepared xact PGPROCs in various\nprocarray.c routines that iterate over procs.\n\nGreetings,\n\nAndres Freund\n\n[1] well, kinda. It's user backends followed by autovacuum worker, launcher,\nworker processes and wal senders.\n\n\n", "msg_date": "Fri, 11 Jun 2021 18:44:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "ProcArrayStruct->pgprocnos vs ->maxProcs vs PGPROC ordering" } ]
[ { "msg_contents": "Hi,\n\nRemoving legitimate warnings can it be worth it?\n\n-1 CAST can be wrong, when there is an invalid value defined\n(InvalidBucket, InvalidBlockNumber).\nI think depending on the compiler -1 CAST may be different from\nInvalidBucket or InvalidBlockNumber.\n\npg_rewind is one special case.\nAll cases of XLogSegNo (uint64) initialization are zero, but in pg_rewind\nwas used -1?\nI did not find it InvalidXLogSegNo!\nNot tested.\n\nTrivial patch attached.\n\nbest regards,\nRanier Vilela", "msg_date": "Fri, 11 Jun 2021 23:05:29 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Signed vs. Unsigned (some)" }, { "msg_contents": "At Fri, 11 Jun 2021 23:05:29 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> Hi,\n> \n> Removing legitimate warnings can it be worth it?\n\n From what the warning comes from? And what is the exact message?\n\n> -1 CAST can be wrong, when there is an invalid value defined\n> (InvalidBucket, InvalidBlockNumber).\n> I think depending on the compiler -1 CAST may be different from\n> InvalidBucket or InvalidBlockNumber.\n\nThe definitions are not ((type) -1) but ((type) 0xFFFFFFFF) so\nactually they might be different if we forget to widen the constant\nwhen widening the types. Regarding to the compiler behavior, I think\nwe are assuming C99[1] and C99 defines that -1 is converted to\nUxxx_MAX. (6.3.1.3 Singed and unsigned integers)\n\nI'm +0.2 on it. It might be worthwhile as a matter of style.\n\n> pg_rewind is one special case.\n> All cases of XLogSegNo (uint64) initialization are zero, but in pg_rewind\n> was used -1?\n> I did not find it InvalidXLogSegNo!\n\nI'm not sure whether that is a thinko that the variable is signed or\nthat it is intentional to assign the maximum value. Anyway, actually\nthere's no need for initializing the variable at all. So I don't think\nit's worth changing the initial value. 
If any compiler actually\ncomplains about the assignment changing it to zero seems reasonable.\n\n> Not tested.\n> \n> Trivial patch attached.\n\nPlease don't quickly update the patch responding to my comments alone.\nI might be a minority.\n\n[1] http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1124.pdf\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 15 Jun 2021 17:17:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Signed vs. Unsigned (some)" }, { "msg_contents": "Hi Kyotaro,\n\nThanks for taking a look.\n\nEm ter., 15 de jun. de 2021 às 05:17, Kyotaro Horiguchi <\nhorikyota.ntt@gmail.com> escreveu:\n\n> At Fri, 11 Jun 2021 23:05:29 -0300, Ranier Vilela <ranier.vf@gmail.com>\n> wrote in\n> > Hi,\n> >\n> > Removing legitimate warnings can it be worth it?\n>\n> From what the warning comes from? And what is the exact message?\n>\nmsvc 64 bits compiler (Level4)\nwarning C4245: '=': conversion from 'int' to 'Bucket', signed/unsigned\nmismatch\n\n\n> > -1 CAST can be wrong, when there is an invalid value defined\n> > (InvalidBucket, InvalidBlockNumber).\n> > I think depending on the compiler -1 CAST may be different from\n> > InvalidBucket or InvalidBlockNumber.\n>\n> The definitions are not ((type) -1) but ((type) 0xFFFFFFFF) so\n> actually they might be different if we forget to widen the constant\n> when widening the types. Regarding to the compiler behavior, I think\n> we are assuming C99[1] and C99 defines that -1 is converted to\n> Uxxx_MAX. (6.3.1.3 Singed and unsigned integers)\n>\n> I'm +0.2 on it. 
It might be worthwhile as a matter of style.\n>\nI think about more than style.\nThis is one of the tricks that should not be used.\n\n\n> > pg_rewind is one special case.\n> > All cases of XLogSegNo (uint64) initialization are zero, but in pg_rewind\n> > was used -1?\n> > I did not find it InvalidXLogSegNo!\n>\n> I'm not sure whether that is a thinko that the variable is signed or\n> that it is intentional to assign the maximum value.\n\nIt is a thinko.\n\n Anyway, actually\n> there's no need for initializing the variable at all. So I don't think\n> it's worth changing the initial value.\n\nIt is the case of removing the initialization then?\n\n\n> If any compiler actually\n> complains about the assignment changing it to zero seems reasonable.\n>\nSame case.\nmsvc 64 bits compiler (Level4)\nwarning C4245: '=': initialization from 'int' to 'XLogSegNo',\nsigned/unsigned mismatch\n\n\n> > Not tested.\n> >\n> > Trivial patch attached.\n>\n> Please don't quickly update the patch responding to my comments alone.\n> I might be a minority.\n>\nOk.\n\nbest regards,\nRanier Vilela\n\n", "msg_date": "Tue, 15 Jun 2021 07:38:57 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Signed vs. Unsigned (some)" }, { "msg_contents": "On 15.06.21 10:17, Kyotaro Horiguchi wrote:\n> The definitions are not ((type) -1) but ((type) 0xFFFFFFFF) so\n> actually they might be different if we forget to widen the constant\n> when widening the types. Regarding to the compiler behavior, I think\n> we are assuming C99[1] and C99 defines that -1 is converted to\n> Uxxx_MAX. (6.3.1.3 Singed and unsigned integers)\n>\n> I'm +0.2 on it. 
It might be worthwhile as a matter of style.\n\nI think since we have the constants we should use them.\n\n>> pg_rewind is one special case.\n>> All cases of XLogSegNo (uint64) initialization are zero, but in pg_rewind\n>> was used -1?\n>> I did not find it InvalidXLogSegNo!\n> \n> I'm not sure whether that is a thinko that the variable is signed or\n> that it is intentional to assign the maximum value. Anyway, actually\n> there's no need for initializing the variable at all. So I don't think\n> it's worth changing the initial value. If any compiler actually\n> complains about the assignment changing it to zero seems reasonable.\n> \n>> Not tested.\n\nI think this case needs some analysis and explanation what is going on. \nI agree that the existing code looks a bit fishy, but we shouldn't just \nchange it to something else without understanding what is going on.\n\n\n", "msg_date": "Wed, 16 Jun 2021 10:48:20 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Signed vs. Unsigned (some)" }, { "msg_contents": "Em qua., 16 de jun. de 2021 às 05:48, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> escreveu:\n\n> On 15.06.21 10:17, Kyotaro Horiguchi wrote:\n> > The definitions are not ((type) -1) but ((type) 0xFFFFFFFF) so\n> > actually they might be different if we forget to widen the constant\n> > when widening the types. Regarding to the compiler behavior, I think\n> > we are assuming C99[1] and C99 defines that -1 is converted to\n> > Uxxx_MAX. (6.3.1.3 Singed and unsigned integers)\n> >\n> > I'm +0.2 on it. 
It might be worthwhile as a matter of style.\n>\n> I think since we have the constants we should use them.\n>\n> >> pg_rewind is one special case.\n> >> All cases of XLogSegNo (uint64) initialization are zero, but in\n> pg_rewind\n> >> was used -1?\n> >> I did not find it InvalidXLogSegNo!\n> >\n> > I'm not sure whether that is a thinko that the variable is signed or\n> > that it is intentional to assign the maximum value. Anyway, actually\n> > there's no need for initializing the variable at all. So I don't think\n> > it's worth changing the initial value. If any compiler actually\n> > complains about the assignment changing it to zero seems reasonable.\n> >\n> >> Not tested.\n>\n> I think this case needs some analysis and explanation what is going on.\n> I agree that the existing code looks a bit fishy, but we shouldn't just\n> change it to something else without understanding what is going on.\n>\nYes, sure.\nI think everyone agrees that they have to understand to change something.\n\nI am acting as a firefighter for small fires.\nI believe the real contribution to Postgres would be to convince them to\nchange the default build flags.\nLast night I tested a full build on Ubuntu, with clang 10.\nSurprise, no warning, all clear, with -Wall enabled (by default).\nNo wonder these problems end up inside the code, no one sees them.\nEveryone is happy to compile Postgres and not see any warnings.\nBut add -Wpedantinc and -Wextra and you'll get more trouble than rabbits in\na magician's hat.\nOf course most are bogus, but they are there, and the new ones, the result\nof the new code that has just been modified, will not enter.\nTom once complained that small scissors don't cut the grass.\nBut small defects piling up lead to big problems.\n\nI believe Postgres will benefit enormously from enabling all the warnings\nat compile time, at least the new little bugs will have some chance of not\ngetting into the codebase.\n\nregards,\nRanier Vilela\n\n", "msg_date": "Wed, 16 Jun 2021 08:51:16 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Signed vs. Unsigned (some)" }, { "msg_contents": "On 16.06.21 10:48, Peter Eisentraut wrote:\n> On 15.06.21 10:17, Kyotaro Horiguchi wrote:\n>> The definitions are not ((type) -1) but ((type) 0xFFFFFFFF) so\n>> actually they might be different if we forget to widen the constant\n>> when widening the types.  Regarding to the compiler behavior, I think\n>> we are assuming C99[1] and C99 defines that -1 is converted to\n>> Uxxx_MAX. (6.3.1.3 Singed and unsigned integers)\n>>\n>> I'm +0.2 on it.  It might be worthwhile as a matter of style.\n> \n> I think since we have the constants we should use them.\n\nI have pushed the InvalidBucket changes.\n\nThe use of InvalidBlockNumber with vac_update_relstats() looks a bit \nfishy to me. We are using in the same call 0 as the default for \nnum_all_visible_pages, and we generally elsewhere also use 0 as the \nstarting value for relpages, so it's not clear to me why it should be -1 \nor InvalidBlockNumber here. I'd rather leave it \"slightly wrong\" for \nnow so it can be checked again.\n\n\n", "msg_date": "Fri, 2 Jul 2021 12:09:23 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Signed vs. 
Unsigned (some)" }, { "msg_contents": "Em sex., 2 de jul. de 2021 às 07:09, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> escreveu:\n\n> On 16.06.21 10:48, Peter Eisentraut wrote:\n> > On 15.06.21 10:17, Kyotaro Horiguchi wrote:\n> >> The definitions are not ((type) -1) but ((type) 0xFFFFFFFF) so\n> >> actually they might be different if we forget to widen the constant\n> >> when widening the types. Regarding to the compiler behavior, I think\n> >> we are assuming C99[1] and C99 defines that -1 is converted to\n> >> Uxxx_MAX. (6.3.1.3 Singed and unsigned integers)\n> >>\n> >> I'm +0.2 on it. It might be worthwhile as a matter of style.\n> >\n> > I think since we have the constants we should use them.\n>\n> I have pushed the InvalidBucket changes.\n>\nNice. Thanks.\n\n\n> The use of InvalidBlockNumber with vac_update_relstats() looks a bit\n> fishy to me. We are using in the same call 0 as the default for\n> num_all_visible_pages, and we generally elsewhere also use 0 as the\n> starting value for relpages, so it's not clear to me why it should be -1\n> or InvalidBlockNumber here.\n\nIt seems to me that the only use in vac_update_relstats is to mark relpages\nas invalid (dirty = true).\n\n\n> I'd rather leave it \"slightly wrong\" for\n> now so it can be checked again.\n>\nIdeally InvalidBlockNumber should be 0.\nMaybe in the long run this will be fixed.\n\nregards,\nRanier Vilela", "msg_date": "Fri, 2 Jul 2021 08:08:56 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Signed vs. Unsigned (some)" }, { "msg_contents": "On Fri, Jul 02, 2021 at 12:09:23PM +0200, Peter Eisentraut wrote:\n> On 16.06.21 10:48, Peter Eisentraut wrote:\n> > On 15.06.21 10:17, Kyotaro Horiguchi wrote:\n> > > The definitions are not ((type) -1) but ((type) 0xFFFFFFFF) so\n> > > actually they might be different if we forget to widen the constant\n> > > when widening the types.  Regarding to the compiler behavior, I think\n> > > we are assuming C99[1] and C99 defines that -1 is converted to\n> > > Uxxx_MAX. (6.3.1.3 Singed and unsigned integers)\n> > > \n> > > I'm +0.2 on it.  It might be worthwhile as a matter of style.\n> > \n> > I think since we have the constants we should use them.\n> \n> I have pushed the InvalidBucket changes.\n> \n> The use of InvalidBlockNumber with vac_update_relstats() looks a bit fishy\n> to me. We are using in the same call 0 as the default for\n> num_all_visible_pages, and we generally elsewhere also use 0 as the starting\n> value for relpages, so it's not clear to me why it should be -1 or\n> InvalidBlockNumber here. 
I'd rather leave it \"slightly wrong\" for now so it\n> can be checked again.\n\nThere's two relevant changes:\n\n|commit 3d351d916b20534f973eda760cde17d96545d4c4\n|Author: Tom Lane <tgl@sss.pgh.pa.us>\n|Date: Sun Aug 30 12:21:51 2020 -0400\n|\n| Redefine pg_class.reltuples to be -1 before the first VACUUM or ANALYZE.\n\n|commit 0e69f705cc1a3df273b38c9883fb5765991e04fe\n|Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n|Date: Fri Apr 9 11:29:08 2021 -0400\n|\n| Set pg_class.reltuples for partitioned tables\n\n3d35 also affects partitioned tables, and 0e69 appears to do the right thing by\npreserving relpages=-1 during auto-analyze.\n\nNote that Alvaro's commit message and comment refer to relpages, but should\nhave said reltuples - comment fixed at 7ef8b52cf.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 2 Jul 2021 09:58:50 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Signed vs. Unsigned (some)" }, { "msg_contents": "On 2021-Jul-02, Justin Pryzby wrote:\n\n> On Fri, Jul 02, 2021 at 12:09:23PM +0200, Peter Eisentraut wrote:\n\n> > The use of InvalidBlockNumber with vac_update_relstats() looks a bit fishy\n> > to me. We are using in the same call 0 as the default for\n> > num_all_visible_pages, and we generally elsewhere also use 0 as the starting\n> > value for relpages, so it's not clear to me why it should be -1 or\n> > InvalidBlockNumber here. I'd rather leave it \"slightly wrong\" for now so it\n> > can be checked again.\n\n> |commit 0e69f705cc1a3df273b38c9883fb5765991e04fe\n> |Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> |Date: Fri Apr 9 11:29:08 2021 -0400\n> |\n> | Set pg_class.reltuples for partitioned tables\n> \n> 3d35 also affects partitioned tables, and 0e69 appears to do the right thing by\n> preserving relpages=-1 during auto-analyze.\n\nI suppose the question is what is the value used for. BlockNumber is\ntypedef'd uint32, an unsigned variable, so using -1 for it is quite\nfishy. 
The weird thing is that in vac_update_relstats we cast it to\n(int32) when storing it in the pg_class tuple, so that's quite fishy\ntoo.\n\nWhat we really want is for table_block_relation_estimate_size to work\nproperly. What that does is get the signed-int32 value from pg_class\nand cast it back to BlockNumber. If that assignment gets -1 again, then\nit's all fine. I didn't test it.\n\nI think changing the vac_update_relstats call I added in 0e69f705cc1a to\nInvalidBlockNumber is fine. I didn't verify any other places.\n\nI think storing BlockNumber values >= 2^31 in an int32 catalog column is\nasking for trouble. We'll have to fix that at some point.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 2 Jul 2021 12:29:45 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Signed vs. Unsigned (some)" }, { "msg_contents": "Em sex., 2 de jul. de 2021 às 13:29, Alvaro Herrera <alvherre@alvh.no-ip.org>\nescreveu:\n\n> On 2021-Jul-02, Justin Pryzby wrote:\n>\n> > On Fri, Jul 02, 2021 at 12:09:23PM +0200, Peter Eisentraut wrote:\n>\n> > > The use of InvalidBlockNumber with vac_update_relstats() looks a bit\n> fishy\n> > > to me. We are using in the same call 0 as the default for\n> > > num_all_visible_pages, and we generally elsewhere also use 0 as the\n> starting\n> > > value for relpages, so it's not clear to me why it should be -1 or\n> > > InvalidBlockNumber here. I'd rather leave it \"slightly wrong\" for now\n> so it\n> > > can be checked again.\n>\n> > |commit 0e69f705cc1a3df273b38c9883fb5765991e04fe\n> > |Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > |Date: Fri Apr 9 11:29:08 2021 -0400\n> > |\n> > | Set pg_class.reltuples for partitioned tables\n> >\n> > 3d35 also affects partitioned tables, and 0e69 appears to do the right\n> thing by\n> > preserving relpages=-1 during auto-analyze.\n>\n> I suppose the question is what is the value used for. 
BlockNumber is\n> typedef'd uint32, an unsigned variable, so using -1 for it is quite\n> fishy. The weird thing is that in vac_update_relstats we cast it to\n> (int32) when storing it in the pg_class tuple, so that's quite fishy\n> too.\n>\n\n> What we really want is for table_block_relation_estimate_size to work\n> properly. What that does is get the signed-int32 value from pg_class\n> and cast it back to BlockNumber. If that assignment gets -1 again, then\n> it's all fine. I didn't test it.\n>\nIt seems to me that it is happening, but it is risky to make comparisons\nbetween different types.\n\n1)\n#define InvalidBlockNumber 0xFFFFFFFF\n\nint main()\n{\n unsigned int num_pages;\n int rel_pages;\n\n num_pages = -1;\n rel_pages = (int) num_pages;\n printf(\"num_pages = %u\\n\", num_pages);\n printf(\"rel_pages = %d\\n\", rel_pages);\n printf(\"(num_pages == InvalidBlockNumber) => %u\\n\", (num_pages ==\nInvalidBlockNumber));\n printf(\"(rel_pages == InvalidBlockNumber) => %u\\n\", (rel_pages ==\nInvalidBlockNumber));\n}\n\nnum_pages = 4294967295\nrel_pages = -1\n(num_pages == InvalidBlockNumber) => 1\n(rel_pages == InvalidBlockNumber) => 1 /* 17:68: warning: comparison\nbetween signed and unsigned integer expressions [-Wsign-compare] */\n\nIf num_pages is promoted to uint64 and rel_pages to int64:\n2)\n#define InvalidBlockNumber 0xFFFFFFFF\n\nint main()\n{\n unsigned long int num_pages;\n long int rel_pages;\n\n num_pages = -1;\n rel_pages = (int) num_pages;\n printf(\"num_pages = %lu\\n\", num_pages);\n printf(\"rel_pages = %ld\\n\", rel_pages);\n printf(\"(num_pages == InvalidBlockNumber) => %u\\n\", (num_pages ==\nInvalidBlockNumber));\n printf(\"(rel_pages == InvalidBlockNumber) => %u\\n\", (rel_pages ==\nInvalidBlockNumber));\n}\n\nnum_pages = 18446744073709551615\nrel_pages = -1\n(num_pages == InvalidBlockNumber) => 0\n(rel_pages == InvalidBlockNumber) => 0 /* 17:68: warning: comparison\nbetween signed and unsigned integer expressions [-Wsign-compare] 
*/\n\nAs Kyotaro said:\n\"they might be different if we forget to widen the constant\nwhen widening the types\"\n\nregards,\nRanier Vilela", "msg_date": "Fri, 2 Jul 2021 15:12:08 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Signed vs. Unsigned (some)" }, { "msg_contents": "Em sex., 11 de jun. 
de 2021 às 23:05, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Hi,\n>\n> Removing legitimate warnings can it be worth it?\n>\n> -1 CAST can be wrong, when there is an invalid value defined\n> (InvalidBucket, InvalidBlockNumber).\n> I think depending on the compiler -1 CAST may be different from\n> InvalidBucket or InvalidBlockNumber.\n>\n> pg_rewind is one special case.\n> All cases of XLogSegNo (uint64) initialization are zero, but in pg_rewind\n> was used -1?\n> I did not find it InvalidXLogSegNo!\n> Not tested.\n>\n> Trivial patch attached.\n>\nAfter a long time, finally a small part is accepted and fixed.\nhttps://github.com/postgres/postgres/commit/302612a6c74fb16f26d094ff47e9c59cf412740c\n\nregards,\nRanier Vilela", "msg_date": "Sun, 13 Feb 2022 17:19:46 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Signed vs. Unsigned (some)" } ]
[ { "msg_contents": "Per sqlsmith.\n\npostgres=# SELECT pg_filenode_relation(0,0);\nERROR: unexpected duplicate for tablespace 0, relfilenode 0\n\npostgres=# \\errverbose \nERROR: XX000: unexpected duplicate for tablespace 0, relfilenode 0\nLOCATION: RelidByRelfilenode, relfilenodemap.c:220\n\nThe usual expectation is that sql callable functions should return null rather\nthan hitting elog(). This also means that sqlsmith has one fewer\nfalse-positive error.\n\nI think it should return NULL if passed invalid relfilenode, rather than\nsearching pg_class and then writing a pretty scary message about duplicates.\n\ndiff --git a/src/backend/utils/cache/relfilenodemap.c b/src/backend/utils/cache/relfilenodemap.c\nindex 56d7c73d33..5a5cf853bd 100644\n--- a/src/backend/utils/cache/relfilenodemap.c\n+++ b/src/backend/utils/cache/relfilenodemap.c\n@@ -146,6 +146,9 @@ RelidByRelfilenode(Oid reltablespace, Oid relfilenode)\n \tScanKeyData skey[2];\n \tOid\t\t\trelid;\n \n+\tif (!OidIsValid(relfilenode))\n+\t\treturn InvalidOid;\n+\n \tif (RelfilenodeMapHash == NULL)\n \t\tInitializeRelfilenodeMap();\n \n\n\n", "msg_date": "Fri, 11 Jun 2021 21:33:25 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "pg_filenode_relation(0,0) elog" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Per sqlsmith.\n> postgres=# SELECT pg_filenode_relation(0,0);\n> ERROR: unexpected duplicate for tablespace 0, relfilenode 0\n\nUgh.\n\n> The usual expectation is that sql callable functions should return null rather\n> than hitting elog().\n\nAgreed, but you should put the short-circuit into the SQL-callable\nfunction, ie pg_filenode_relation. 
Lower-level callers ought not be\npassing junk data.\n\nLikely it should check the reltablespace, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Jun 2021 23:51:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_filenode_relation(0,0) elog" }, { "msg_contents": "On Fri, Jun 11, 2021 at 11:51:35PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > Per sqlsmith.\n> > postgres=# SELECT pg_filenode_relation(0,0);\n> > ERROR: unexpected duplicate for tablespace 0, relfilenode 0\n> \n> Ugh.\n> \n> > The usual expectation is that sql callable functions should return null rather\n> > than hitting elog().\n> \n> Agreed, but you should put the short-circuit into the SQL-callable\n> function, ie pg_filenode_relation. Lower-level callers ought not be\n> passing junk data.\n\nRight. I spent inadequate time reading output of git grep.\n\n> Likely it should check the reltablespace, too.\n\nI don't think so. The docs say:\nhttps://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-DBLOCATION\n|For a relation in the database's default tablespace, the tablespace can be specified as zero.\n\nAlso, that would breaks expected/alter_table.out for the same reason.\n\ndiff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c\nindex 3c70bb5943..144aca1099 100644\n--- a/src/backend/utils/adt/dbsize.c\n+++ b/src/backend/utils/adt/dbsize.c\n@@ -905,6 +905,9 @@ pg_filenode_relation(PG_FUNCTION_ARGS)\n \tOid\t\t\trelfilenode = PG_GETARG_OID(1);\n \tOid\t\t\theaprel = InvalidOid;\n \n+\tif (!OidIsValid(relfilenode))\n+\t\tPG_RETURN_NULL();\n+\n \theaprel = RelidByRelfilenode(reltablespace, relfilenode);\n \n \tif (!OidIsValid(heaprel))\n\n\n", "msg_date": "Sat, 12 Jun 2021 10:12:34 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_filenode_relation(0,0) elog" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> 
writes:\n> On Fri, Jun 11, 2021 at 11:51:35PM -0400, Tom Lane wrote:\n>> Likely it should check the reltablespace, too.\n\n> I don't think so. The docs say:\n> https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-DBLOCATION\n> |For a relation in the database's default tablespace, the tablespace can be specified as zero.\n\nRight, my mistake. Pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Jun 2021 13:30:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_filenode_relation(0,0) elog" } ]
[ { "msg_contents": "Ok! Thanks\n----------\n\n\n\n\n--------------Original message--------------\nFrom: \"Andres Freund\" <andres@anarazel.de>\nSent: Saturday, June 12, 2021, 12:43 PM\nTo: \"盏一\" <w@hidva.com>\nCc: \"pgsql-hackers\" <pgsql-hackers@postgresql.org>\nSubject: Re: use \`proc->pgxactoff\` as the value of \`index\` in \`ProcArrayRemove()\`\n-----------------------------------\n\nHi,\n\nOn 2021-05-07 04:36:25 +0800, 盏一 wrote:\n> > Sounds like a plan! Do you want to write a patch?\n> Add the patch.\n\nThanks for the patch. I finally pushed an edited version of it. There\nwere other loops over ->pgprocnos, so I put assertions in those - that\ngains us a good bit more checking than we had before...\n\nI also couldn't resist to do some small formatting cleanups - I found\nthe memmove calls just too hard to read.\n\nI took the authorship information as you had it in the diff you attached\n- I hope that's OK?\n\nGreetings,\n\nAndres Freund", "msg_date": "Sat, 12 Jun 2021 12:52:13 +0800", "msg_from": "\"=?utf-8?B?55uP5LiA?=\" <w@hidva.com>", "msg_from_op": true, "msg_subject": "Re: use \`proc->pgxactoff\` as the value of \`index\` in\n \`ProcArrayRemove()\`" } ]
[ { "msg_contents": "Hi all,\n\nI am trying to implement a sort support function for geometry data types in\nPostGIS with the new feature \`SortSupport\`. However, I have a question\nabout this.\n\nI think it is hardly to apply a sort support function to a complex data\ntype without the \`abbrev_converter\` to simply the data structure into a\nsingle \`Datum\`. However, I do not know how the system determines when to\napply the converter.\n\nI appreciate any answers or suggestions. I am looking forward to hearing\nfrom you.\n\nBest regards,\nHan", "msg_date": "Sat, 12 Jun 2021 14:51:05 +0800", "msg_from": "Han Wang <hanwgeek@gmail.com>", "msg_from_op": true, "msg_subject": "Questions about support function and abbreviate" }, { "msg_contents": "Hello,\n\nthe abbrev_converter is applied whenever it is defined. The values are\nsorted using the abbreviated comparator first using the shortened version,\nand if there is a tie the system asks the real full comparator to resolve\nit.\n\nThis article seems to be rather comprehensive:\nhttps://brandur.org/sortsupport\n\nOn Sat, Jun 12, 2021 at 9:51 AM Han Wang <hanwgeek@gmail.com> wrote:\n\n> Hi all,\n>\n> I am trying to implement a sort support function for geometry data types\n> in PostGIS with the new feature \`SortSupport\`. However, I have a question\n> about this.\n>\n> I think it is hardly to apply a sort support function to a complex data\n> type without the \`abbrev_converter\` to simply the data structure into a\n> single \`Datum\`. 
However, I do not know how the system determines when to\n> apply the converter.\n>\n> I appreciate any answers or suggestions. I am looking forward to hearing\n> from you.\n>\n> Best regards,\n> Han\n>\n\n\n-- \nDarafei \"Komяpa\" Praliaskouski\nOSM BY Team - http://openstreetmap.by/", "msg_date": "Sat, 12 Jun 2021 09:55:13 +0300", "msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>", "msg_from_op": false, "msg_subject": "Re: Questions about support function and abbreviate" }, { "msg_contents": "Hi Darafei,\n\nThanks for your reply.\n\nHowever, I still don't get the full picture of this. Let me make my\nquestion more clear.\n\nFirst of all, in the *\`gistproc.c\n<https://github.com/postgres/postgres/blob/master/src/backend/access/gist/gistproc.c#L1761>\`*\nof Postgres, it shows that the \`abbreviate\` attributes should be set before\nthe \`abbrev_converter\` defined. 
So I would like to know where to define a\n\`SortSupport\` structure with \`abbreviate\` is \`true\`.\n\nSecondly, in the support functions of internal data type \`Point\`, the\n\`abbrev_full_copmarator\` just z-order hash the point first like the\n\`abbrev_converter\` doing and then compare the hash value. So I don't know\nthe difference between \`full_comparator\` and \`comparator\` after\n\`abbrev_converter\`.\n\nBest regards,\nHan\n\nOn Sat, Jun 12, 2021 at 2:55 PM Darafei \"Komяpa\" Praliaskouski <\nme@komzpa.net> wrote:\n\n> Hello,\n>\n> the abbrev_converter is applied whenever it is defined. The values are\n> sorted using the abbreviated comparator first using the shortened version,\n> and if there is a tie the system asks the real full comparator to resolve\n> it.\n>\n> This article seems to be rather comprehensive:\n> https://brandur.org/sortsupport\n>\n> On Sat, Jun 12, 2021 at 9:51 AM Han Wang <hanwgeek@gmail.com> wrote:\n>\n>> Hi all,\n>>\n>> I am trying to implement a sort support function for geometry data types\n>> in PostGIS with the new feature \`SortSupport\`. However, I have a question\n>> about this.\n>>\n>> I think it is hardly to apply a sort support function to a complex data\n>> type without the \`abbrev_converter\` to simply the data structure into a\n>> single \`Datum\`. However, I do not know how the system determines when to\n>> apply the converter.\n>>\n>> I appreciate any answers or suggestions. I am looking forward to hearing\n>> from you.\n>>\n>> Best regards,\n>> Han\n>>\n>\n>\n> --\n> Darafei \"Komяpa\" Praliaskouski\n> OSM BY Team - http://openstreetmap.by/\n>", "msg_date": "Sat, 12 Jun 2021 15:42:30 +0800", "msg_from": "Han Wang <hanwgeek@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions about support function and abbreviate" }, { "msg_contents": "Hi Han,\n\nDarafei already provided a good answer to your question, I will add just a\nfew things with the hope of making things more clear for your use case.\n\nSortSupport implementation in PostgreSQL allows to make comparisons at\nbinary level in a dedicated region of memory where data can be quickly\naccessed through\nreferences to actual data in the heap called \"sort tuples\". 
Those\nreferences have a space to include the data of a length of a native pointer\nof a system, which is 8 bytes\nfor 64 bit systems. Although that represents enough space for standard data\ntypes like integers or floats, it's not enough for longer data types, or\nvarlena data like\ngeometries.\n\nIn this last case, we need to pass to sort tuples an abbreviated version of\nthe key which should include the most representative part. This is the\nscope of the abbreviated\nattributes which need to be provided to create the abbreviated keys.\n\nTo answer more specifically to your question, the four abbreviated\nattributes represent\n\n* comparator --> the access method which should\nbe used of comparison of abbreviated keys\n* abbrev_converter --> the method which creates the abbreviations\n(NOTE in src/backend/access/gist/gistproc.c it just consider the first 32\nbits of the hash of a geometry)\n* abbrev_abort --> the method which should check if the\nabbreviation has to be done or not even in cases the length is greater than\nthe size of the native pointer (NOTE,\n it is not\nimplemented in src/backend/access/gist/gistproc.c, which means that\nabbreviation is always worth)\n* abbrev_full_comparator --> the method which should be used for\ncomparisons in case of fall back into not abbreviated keys (NOTE, this\nattribute coincides to the comparator one\n in case the\nabbreviate flag is set to false)\n\nHope it helps,\nGiuseppe.\n\n\nIl giorno sab 12 giu 2021 alle ore 08:43 Han Wang <hanwgeek@gmail.com> ha\nscritto:\n\n> Hi Darafei,\n>\n> Thanks for your reply.\n>\n> However, I still don't get the full picture of this. Let me make my\n> question more clear.\n>\n> First of all, in the *`gistproc.c\n> <https://github.com/postgres/postgres/blob/master/src/backend/access/gist/gistproc.c#L1761>`*\n> of Postgres, it shows that the `abbreviate` attributes should be set before\n> the `abbrev_converter` defined. 
So I would like to know where to define a\n> `SortSupport` structure with `abbreviate` is `true`.\n>\n> Secondly, in the support functions of internal data type `Point`, the\n> `abbrev_full_copmarator` just z-order hash the point first like the\n> `abbrev_converter` doing and then compare the hash value. So I don't know\n> the difference between `full_comparator` and `comparator` after\n> `abbrev_converter`.\n>\n> Best regards,\n> Han\n>\n> On Sat, Jun 12, 2021 at 2:55 PM Darafei \"Komяpa\" Praliaskouski <\n> me@komzpa.net> wrote:\n>\n>> Hello,\n>>\n>> the abbrev_converter is applied whenever it is defined. The values are\n>> sorted using the abbreviated comparator first using the shortened version,\n>> and if there is a tie the system asks the real full comparator to resolve\n>> it.\n>>\n>> This article seems to be rather comprehensive:\n>> https://brandur.org/sortsupport\n>>\n>> On Sat, Jun 12, 2021 at 9:51 AM Han Wang <hanwgeek@gmail.com> wrote:\n>>\n>>> Hi all,\n>>>\n>>> I am trying to implement a sort support function for geometry data types\n>>> in PostGIS with the new feature `SortSupport`. However, I have a question\n>>> about this.\n>>>\n>>> I think it is hardly to apply a sort support function to a complex data\n>>> type without the `abbrev_converter` to simply the data structure into a\n>>> single `Datum`. However, I do not know how the system determines when to\n>>> apply the converter.\n>>>\n>>> I appreciate any answers or suggestions. 
I am looking forward to hearing\n>>> from you.\n>>>\n>>> Best regards,\n>>> Han\n>>>\n>>\n>>\n>> --\n>> Darafei \"Komяpa\" Praliaskouski\n>> OSM BY Team - http://openstreetmap.by/\n>>\n>\n", "msg_date": "Sun, 13 Jun 2021 00:45:45 +0100", "msg_from": "Giuseppe Broccolo <g.broccolo.7@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions about support function and abbreviate" } ]
[ { "msg_contents": "Hi all,\n\nwrasse has just failed with what looks like a timing error with a\nreplication slot drop:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2021-06-12%2006%3A16%3A30\n\nHere is the error:\nerror running SQL: 'psql:<stdin>:1: ERROR: could not drop replication\nslot \"tap_sub\" on publisher: ERROR: replication slot \"tap_sub\" is\nactive for PID 1641'\n\nIt seems to me that this just lacks a poll_query_until() doing some\nslot monitoring? I have not checked in details if we need to do that\nin more places. The code path that failed has been added in 7c4f524\nfrom 2017.\n\nThanks,\n--\nMichael", "msg_date": "Sat, 12 Jun 2021 16:43:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Failure in subscription test 004_sync.pl" }, { "msg_contents": "On Sat, Jun 12, 2021 at 1:13 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> wrasse has just failed with what looks like a timing error with a\n> replication slot drop:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2021-06-12%2006%3A16%3A30\n>\n> Here is the error:\n> error running SQL: 'psql:<stdin>:1: ERROR: could not drop replication\n> slot \"tap_sub\" on publisher: ERROR: replication slot \"tap_sub\" is\n> active for PID 1641'\n>\n> It seems to me that this just lacks a poll_query_until() doing some\n> slot monitoring?\n>\n\nI think it is showing a race condition issue in the code. In\nDropSubscription, we first stop the worker that is receiving the WAL,\nand then in a separate connection with the publisher, it tries to drop\nthe slot which leads to this error. The reason is that walsender is\nstill active as we just wait for wal receiver (or apply worker) to\nstop. 
Normally, as soon as the apply worker is stopped the walsender\ndetects it and exits but in this case, it took some time to exit, and\nin the meantime, we tried to drop the slot which is still in use by\nwalsender.\n\nIf we want to fix this, we might want to wait till the slot is active\non the publisher before trying to drop it but not sure if it is a good\nidea. In the worst case, if the user retries this operation (Drop\nSubscription), it will succeed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 12 Jun 2021 18:27:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Failure in subscription test 004_sync.pl" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Sat, Jun 12, 2021 at 1:13 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> wrasse has just failed with what looks like a timing error with a\n>> replication slot drop:\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2021-06-12%2006%3A16%3A30\n\n> If we want to fix this, we might want to wait till the slot is active\n> on the publisher before trying to drop it but not sure if it is a good\n> idea. 
In the worst case, if the user retries this operation (Drop\n> Subscription), it will succeed.\n\nwrasse's not the only animal reporting this type of failure.\nSee also\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=komodoensis&dt=2021-06-12%2011%3A32%3A04\n\nerror running SQL: 'psql:<stdin>:1: ERROR: could not drop replication slot \"pg_16387_sync_16384_6972886888894805957\" on publisher: ERROR: replication slot \"pg_16387_sync_16384_6972886888894805957\" is active for PID 2971625'\nwhile running 'psql -XAtq -d port=60321 host=/tmp/vdQmH7ijFI dbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'DROP SUBSCRIPTION testsub2' at /home/bf/build/buildfarm-komodoensis/HEAD/pgsql.build/../pgsql/src/test/perl/PostgresNode.pm line 1771.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=curculio&dt=2021-06-11%2020%3A30%3A28\n\nerror running SQL: 'psql:<stdin>:1: ERROR: could not drop replication slot \"testsub2\" on publisher: ERROR: replication slot \"testsub2\" is active for PID 27175'\nwhile running 'psql -XAtq -d port=59579 host=/tmp/9Qchjsykek dbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'DROP SUBSCRIPTION testsub2' at /home/pgbf/buildroot/HEAD/pgsql.build/src/test/subscription/../../../src/test/perl/PostgresNode.pm line 1771.\n\nThose are both in the t/100_bugs.pl test script, but otherwise they\nlook like the exact same thing.\n\nI don't think that it's optional to fix a problem that occurs as\noften as three times in 10 days in the buildfarm.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Jun 2021 13:51:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Failure in subscription test 004_sync.pl" }, { "msg_contents": "On Sat, Jun 12, 2021 at 9:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Jun 12, 2021 at 1:13 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > wrasse has just failed with what looks like a timing error with a\n> > replication slot drop:\n> > 
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2021-06-12%2006%3A16%3A30\n> >\n> > Here is the error:\n> > error running SQL: 'psql:<stdin>:1: ERROR: could not drop replication\n> > slot \"tap_sub\" on publisher: ERROR: replication slot \"tap_sub\" is\n> > active for PID 1641'\n> >\n> > It seems to me that this just lacks a poll_query_until() doing some\n> > slot monitoring?\n> >\n>\n> I think it is showing a race condition issue in the code. In\n> DropSubscription, we first stop the worker that is receiving the WAL,\n> and then in a separate connection with the publisher, it tries to drop\n> the slot which leads to this error. The reason is that walsender is\n> still active as we just wait for wal receiver (or apply worker) to\n> stop. Normally, as soon as the apply worker is stopped the walsender\n> detects it and exits but in this case, it took some time to exit, and\n> in the meantime, we tried to drop the slot which is still in use by\n> walsender.\n\nThat might be possible.\n\nThat's weird since DROP SUBSCRIPTION executes DROP_REPLICATION_SLOT\ncommand with WAIT option. I found a bug that is possibly an oversight\nof commit 1632ea4368. The commit changed the code around the error as\nfollows:\n\n if (active_pid != MyProcPid)\n {\n- if (behavior == SAB_Error)\n+ if (!nowait)\n ereport(ERROR,\n (errcode(ERRCODE_OBJECT_IN_USE),\n errmsg(\"replication slot \\\"%s\\\" is active for PID %d\",\n NameStr(s->data.name), active_pid)));\n- else if (behavior == SAB_Inquire)\n- return active_pid;\n\n /* Wait here until we get signaled, and then restart */\n ConditionVariableSleep(&s->active_cv,\n\nThe condition should be the opposite; we should raise the error when\n'nowait' is true. I think this is the cause of the test failure. 
Even\nif DROP SUBSCRIPTION tries to drop the slot with the WAIT option, we\ndon't wait but raise the error.\n\nAttached a small patch fixes it.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Mon, 14 Jun 2021 14:11:14 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Failure in subscription test 004_sync.pl" }, { "msg_contents": "On Mon, Jun 14, 2021 at 10:41 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> >\n> > I think it is showing a race condition issue in the code. In\n> > DropSubscription, we first stop the worker that is receiving the WAL,\n> > and then in a separate connection with the publisher, it tries to drop\n> > the slot which leads to this error. The reason is that walsender is\n> > still active as we just wait for wal receiver (or apply worker) to\n> > stop. Normally, as soon as the apply worker is stopped the walsender\n> > detects it and exits but in this case, it took some time to exit, and\n> > in the meantime, we tried to drop the slot which is still in use by\n> > walsender.\n>\n> There might be possible.\n>\n> That's weird since DROP SUBSCRIPTION executes DROP_REPLICATION_SLOT\n> command with WAIT option. I found a bug that is possibly an oversight\n> of commit 1632ea4368.\n>\n..\n>\n> The condition should be the opposite; we should raise the error when\n> 'nowait' is true. I think this is the cause of the test failure. Even\n> if DROP SUBSCRIPTION tries to drop the slot with the WAIT option, we\n> don't wait but raise the error.\n>\n> Attached a small patch fixes it.\n>\n\nYes, this should fix the recent buildfarm failures. 
Alvaro, would you\nlike to take care of this?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 14 Jun 2021 14:18:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Failure in subscription test 004_sync.pl" }, { "msg_contents": "On 2021-Jun-14, Amit Kapila wrote:\n\n> On Mon, Jun 14, 2021 at 10:41 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> > The condition should be the opposite; we should raise the error when\n> > 'nowait' is true. I think this is the cause of the test failure. Even\n> > if DROP SUBSCRIPTION tries to drop the slot with the WAIT option, we\n> > don't wait but raise the error.\n> \n> Yes, this should fix the recent buildfarm failures. Alvaro, would you\n> like to take care of this?\n\nUgh, thanks for CCing me. Yes, I'll get this fixed ASAP.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"Saca el libro que tu religi�n considere como el indicado para encontrar la\noraci�n que traiga paz a tu alma. Luego rebootea el computador\ny ve si funciona\" (Carlos Ducl�s)\n\n\n", "msg_date": "Mon, 14 Jun 2021 10:40:31 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Failure in subscription test 004_sync.pl" }, { "msg_contents": "On 2021-Jun-14, Masahiko Sawada wrote:\n\n> The condition should be the opposite; we should raise the error when\n> 'nowait' is true. I think this is the cause of the test failure. Even\n> if DROP SUBSCRIPTION tries to drop the slot with the WAIT option, we\n> don't wait but raise the error.\n\nHi, thanks for diagnosing this and producing a patch! I ended up\nturning the condition around, so that all three stanzas still test\n\"!nowait\"; which seems a bit easier to follow.\n\nTBH I'm quite troubled by the fact that this test only failed once on\neach animal; they all had a lot of successful runs after that. 
I wonder\nif this is because coverage is insufficient, or is it just bad luck.\n\nI also wonder if this bug is what caused the random failures in the test\ncase I tried to add. I should look at that some more now ...\n\n-- \nÁlvaro Herrera Valdivia, Chile\nAl principio era UNIX, y UNIX habló y dijo: \"Hello world\\n\".\nNo dijo \"Hello New Jersey\\n\", ni \"Hello USA\\n\".\n\n\n", "msg_date": "Mon, 14 Jun 2021 16:40:10 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Failure in subscription test 004_sync.pl" } ]
[ { "msg_contents": "Much of nodeAgg.c does not really know the difference between the\naggregate's combine function and the aggregate's transition function.\nThis was done on purpose so that we can keep as much code the same as\npossible between partial aggregate and finalize aggregate.\n\nWe can take this a bit further with the attached patch which managed a\nnet reduction of about 3 dozen lines of code.\n\n3 files changed, 118 insertions(+), 155 deletions(-)\n\nI also did some renaming to try and make it more clear about when\nwe're talking about aggtransfn and when we're just talking about the\ntransition function that's being used, which in the finalize aggregate\ncase will be the combine function.\n\nI proposed this a few years ago in [1], but at the time we went with a\nmore minimal patch to fix the bug being reported there with plans to\ncome back and do a bit more once we branched.\n\nI've rebased this and I'd like to propose this small cleanup for pg15.\n\nThe patch is basically making build_pertrans_for_aggref() oblivious to\nif it's working with the aggtransfn or the aggcombinefn and all the\ncode that needs to treat them differently is moved up into\nExecInitAgg(). This allows us to just completely get rid of\nbuild_aggregate_combinefn_expr() and just use\nbuild_aggregate_transfn_expr() instead.\n\nI feel this is worth doing as nodeAgg.c has grown quite a bit over the\nyears. Shrinking it down a bit and maybe making it a bit more readable\nseems like a worthy goal. Heikki took a big step forward towards that\ngoal in 0a2bc5d61e. 
This, arguably, helps a little more.\n\nI've included Andres and Horiguchi-san because they were part of the\ndiscussion on the original thread.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAKJS1f-Qu2Q9g6Xfcf5dctg99oDkbV9LyW4Lym9C4L1vEHTN%3Dg%40mail.gmail.com", "msg_date": "Sat, 12 Jun 2021 23:03:43 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Small clean up in nodeAgg.c" }, { "msg_contents": "On Sat, 12 Jun 2021 at 23:03, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've rebased this and I'd like to propose this small cleanup for pg15.\n\nNow that the pg15 branch is open, does anyone have any objections to this patch?\n\nDavid\n\n\n", "msg_date": "Thu, 1 Jul 2021 10:53:42 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Small clean up in nodeAgg.c" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Now that the pg15 branch is open, does anyone have any objections to this patch?\n\nJust reading it over quickly, I noticed a comment mentioning \n\"aggcombinedfn\" which I suppose should be \"aggcombinefn\".\nNo particular opinion on whether this is a net reduction\nof logical complexity.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Jun 2021 19:09:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Small clean up in nodeAgg.c" }, { "msg_contents": "On Thu, 1 Jul 2021 at 11:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Just reading it over quickly, I noticed a comment mentioning\n> \"aggcombinedfn\" which I suppose should be \"aggcombinefn\".\n\nThanks. I've fixed that locally.\n\n> No particular opinion on whether this is a net reduction\n> of logical complexity.\n\nI had another look over it and I think we do need to be more clear\nabout when we're talking about aggtransfn and aggcombinefn. 
The\nexisting code uses variables name aggtransfn when the value stored\ncould be the value for the aggcombinefn. Additionally, the other\nchange to remove the special case build_aggregate_combinefn_expr()\nfunction seems good in a sense of reusing more code and reducing the\namount of code in that file.\n\nUnless anyone thinks differently about this, I plan on pushing the\npatch in the next day or so.\n\nDavid\n\n\n", "msg_date": "Fri, 2 Jul 2021 22:30:53 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Small clean up in nodeAgg.c" }, { "msg_contents": "On Fri, 2 Jul 2021 at 22:30, David Rowley <dgrowleyml@gmail.com> wrote:\n> Unless anyone thinks differently about this, I plan on pushing the\n> patch in the next day or so.\n\nPushed (63b1af943)\n\nDavid\n\n\n", "msg_date": "Sun, 4 Jul 2021 18:49:04 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Small clean up in nodeAgg.c" } ]
[ { "msg_contents": "Back in f0705bb62, we added pg_nextpower2_32 and pg_nextpower2_64 to\nefficiently obtain the next power of 2 of a given number using an\nintrinsic function to find the left-most 1 bit.\n\nIn d025cf88b and 02a2e8b44, I added some usages of these new functions\nbut I didn't quite get all of them done. The attached replaces all\nof the remaining ones that I'm happy enough to go near.\n\nThe ones that I left behind are ones in the form of:\n\nwhile (reqsize >= buflen)\n{\n buflen *= 2;\n buf = repalloc(buf, buflen);\n}\n\nThe reason I left those behind is that I was too scared that I might\nintroduce an opportunity to wrap buflen back around to zero again. At\nthe moment the repalloc() would prevent that as we'd go above\nMaxAllocSize before we wrapped buflen back to zero again. All the\nother places I touched does not change the risk of that happening.\n\nIt would be nice to get rid of doing that repalloc() in a loop, but it\nwould need a bit more study to ensure we couldn't wrap or we'd need to\nadd some error checking code that raised an ERROR if it did wrap. I\ndon't want to touch those as part of this effort.\n\nI've also fixed up a few places that were just doubling the size of a\nbuffer but used a \"while\" loop to do this when a simple \"if\" would\nhave done. 
Using an \"if\" is ever so slightly more optimal since the\ncondition will be checked once rather than twice when the buffer needs\nto increase in size.\n\nI'd like to fix these for PG15.\n\nDavid", "msg_date": "Sun, 13 Jun 2021 00:31:31 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Use pg_nextpower2_* in a few more places" }, { "msg_contents": "On Sat, Jun 12, 2021 at 5:32 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> Back in f0705bb62, we added pg_nextpower2_32 and pg_nextpower2_64 to\n> efficiently obtain the next power of 2 of a given number using an\n> intrinsic function to find the left-most 1 bit.\n>\n> In d025cf88b and 02a2e8b44, I added some usages of these new functions\n> but I didn't quite get all of them done. The attached replaces all\n> of the remaining ones that I'm happy enough to go near.\n>\n> The ones that I left behind are ones in the form of:\n>\n> while (reqsize >= buflen)\n> {\n> buflen *= 2;\n> buf = repalloc(buf, buflen);\n> }\n>\n> The reason I left those behind is that I was too scared that I might\n> introduce an opportunity to wrap buflen back around to zero again. At\n> the moment the repalloc() would prevent that as we'd go above\n> MaxAllocSize before we wrapped buflen back to zero again. All the\n> other places I touched does not change the risk of that happening.\n>\n> It would be nice to get rid of doing that repalloc() in a loop, but it\n> would need a bit more study to ensure we couldn't wrap or we'd need to\n> add some error checking code that raised an ERROR if it did wrap. I\n> don't want to touch those as part of this effort.\n>\n> I've also fixed up a few places that were just doubling the size of a\n> buffer but used a \"while\" loop to do this when a simple \"if\" would\n> have done. 
Using an \"if\" is ever so slightly more optimal since the\n> condition will be checked once rather than twice when the buffer needs\n> to increase in size.\n>\n> I'd like to fix these for PG15.\n>\n> David\n>\nHi,\n\n- newalloc = Max(LWLockTrancheNamesAllocated, 8);\n- while (newalloc <= tranche_id)\n- newalloc *= 2;\n+ newalloc = pg_nextpower2_32(Max(8, tranche_id + 1));\n\nShould LWLockTrancheNamesAllocated be included in the Max() expression (in\ncase it gets to a high value) ?\n\nCheers\n", "msg_date": "Sat, 12 Jun 2021 05:55:13 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Use pg_nextpower2_* in a few more places" }, { "msg_contents": "Thanks for having a look.\n\nOn Sun, 13 Jun 2021 at 00:50, Zhihong Yu <zyu@yugabyte.com> wrote:\n> -       newalloc = Max(LWLockTrancheNamesAllocated, 8);\n> -       while (newalloc <= tranche_id)\n> -           newalloc *= 2;\n> +       newalloc = pg_nextpower2_32(Max(8, tranche_id + 1));\n>\n> Should LWLockTrancheNamesAllocated be included in the Max() expression (in case it gets to a high value) ?\n\nI think the new code will produce the same result as the old code in all cases.\n\nAll the old code did was finding the next power of 2 that's >= 8 and\nlarger than tranche_id.  LWLockTrancheNamesAllocated is just a hint at\nwhere the old code should start searching from.  The new code does not\nneed that hint. 
All it seems to do is save the old code from having to\nstart the loop at 8 each time we need more space.\n\nDavid\n\n\n", "msg_date": "Sun, 13 Jun 2021 01:40:13 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use pg_nextpower2_* in a few more places" }, { "msg_contents": "On Sat, Jun 12, 2021 at 6:40 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> Thanks for having a look.\n>\n> On Sun, 13 Jun 2021 at 00:50, Zhihong Yu <zyu@yugabyte.com> wrote:\n> > - newalloc = Max(LWLockTrancheNamesAllocated, 8);\n> > - while (newalloc <= tranche_id)\n> > - newalloc *= 2;\n> > + newalloc = pg_nextpower2_32(Max(8, tranche_id + 1));\n> >\n> > Should LWLockTrancheNamesAllocated be included in the Max() expression\n> (in case it gets to a high value) ?\n>\n> I think the new code will produce the same result as the old code in all\n> cases.\n>\n> All the old code did was finding the next power of 2 that's >= 8 and\n> larger than tranche_id. LWLockTrancheNamesAllocated is just a hint at\n> where the old code should start searching from. The new code does not\n> need that hint. 
All it seems to do is save the old code from having to\n> start the loop at 8 each time we need more space.\n>\n> David\n>\nHi,\nMaybe add an assertion after the assignment, that newalloc >=\n LWLockTrancheNamesAllocated.\n\nCheers\n", "msg_date": "Sat, 12 Jun 2021 07:13:09 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Use pg_nextpower2_* in a few more places" }, { "msg_contents": "On Sun, 13 Jun 2021 at 02:08, Zhihong Yu <zyu@yugabyte.com> wrote:\n> Maybe add an assertion after the assignment, that newalloc >=  LWLockTrancheNamesAllocated.\n\nI don't quite see how it would be possible for that to ever fail.  I\ncould understand adding an Assert() if some logic was outside the\nfunction and we wanted to catch something outside of the function's\ncontrol, but that's not the case here. All the logic is within a few\nlines.\n\nMaybe it would help if we look at the if condition that this code\nexecutes under:\n\n/* If necessary, create or enlarge array. 
*/\nif (tranche_id >= LWLockTrancheNamesAllocated)\n\nSo since we're doing:\n\n+       newalloc = pg_nextpower2_32(Max(8, tranche_id + 1));\n\nassuming pg_nextpower2_32 does not give us something incorrect, then I\ndon't quite see why Assert(newalloc >=  LWLockTrancheNamesAllocated)\ncould ever fail.\n\nCan you explain why you think it might?\n\nDavid\n\n\n", "msg_date": "Sun, 13 Jun 2021 02:35:08 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use pg_nextpower2_* in a few more places" }, { "msg_contents": "On Sat, Jun 12, 2021 at 7:35 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Sun, 13 Jun 2021 at 02:08, Zhihong Yu <zyu@yugabyte.com> wrote:\n> > Maybe add an assertion after the assignment, that newalloc >=\n> LWLockTrancheNamesAllocated.\n>\n> I don't quite see how it would be possible for that to ever fail.  I\n> could understand adding an Assert() if some logic was outside the\n> function and we wanted to catch something outside of the function's\n> control, but that's not the case here. All the logic is within a few\n> lines.\n>\n> Maybe it would help if we look at the if condition that this code\n> executes under:\n>\n> /* If necessary, create or enlarge array. */\n> if (tranche_id >= LWLockTrancheNamesAllocated)\n>\n> So since we're doing:\n>\n> +       newalloc = pg_nextpower2_32(Max(8, tranche_id + 1));\n>\n> assuming pg_nextpower2_32 does not give us something incorrect, then I\n> don't quite see why Assert(newalloc >=  LWLockTrancheNamesAllocated)\n> could ever fail.\n>\n> Can you explain why you think it might?\n>\n> David\n>\nHi,\nInteresting, the quoted if () line was not shown in the patch.\nPardon my not checking this line.\n\nIn that case, the assertion is not needed.\n", "msg_date": "Sat, 12 Jun 2021 07:44:18 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Use pg_nextpower2_* in a few more places" }, { "msg_contents": "On Sun, 13 Jun 2021 at 00:31, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> Back in f0705bb62, we added pg_nextpower2_32 and pg_nextpower2_64 to\n> efficiently obtain the next power of 2 of a given number using an\n> intrinsic function to find the left-most 1 bit.\n>\n> In d025cf88b and 02a2e8b44, I added some usages of these new functions\n> but I didn't quite get all of them done. The attached replaces all\n> of the remaining ones that I'm happy enough to go near.\n\n> I'd like to fix these for PG15.\n\nI had another look over this patch and it looks ok to me. I plan to\npush it in the next day or so.\n\nDavid\n\n\n", "msg_date": "Thu, 1 Jul 2021 00:24:19 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use pg_nextpower2_* in a few more places" } ]
[ { "msg_contents": "Hi,\n\nWith the recent changes at procarray.c, I take a look in.\nmsvc compiler, has some warnings about signed vs unsigned.\n\nSo.\n\n1. Size_t is weird, because all types are int.\n2. Wouldn't it be better to initialize static variables?\n3. There are some shadowing parameters.\n4. Possible loop beyond numProcs?\n\n- for (size_t pgxactoff = 0; pgxactoff < numProcs; pgxactoff++)\n+ for (int pgxactoff = 0; pgxactoff < numProcs; pgxactoff++)\n\nI think no functional behavior changed.\nPatch attached.\n\nbest regards,\nRanier Vilela", "msg_date": "Sat, 12 Jun 2021 10:55:22 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "Hi,\n\nOn 2021-06-12 10:55:22 -0300, Ranier Vilela wrote:\n> With the recent changes at procarray.c, I take a look in.\n> msvc compiler, has some warnings about signed vs unsigned.\n\n> 1. Size_t is weird, because all types are int.\n\nNot sure why I ended up using size_t here. There are cases where using a\nnatively sized integer can lead to better code being generated, so I'd\nwant to see some evaluation of the code generation effects.\n\n\n> 2. Wouldn't it be better to initialize static variables?\n\nNo, explicit initialization needs additional space in the binary, and\nstatic variables are always zero initialized.\n\n\n> 3. There are some shadowing parameters.\n\nHm, yea, that's not great. Those are from\n\ncommit 0e141c0fbb211bdd23783afa731e3eef95c9ad7a\nAuthor: Robert Haas <rhaas@postgresql.org>\nDate: 2015-08-06 11:52:51 -0400\n\n Reduce ProcArrayLock contention by removing backends in batches.\n\nAmit, Robert, I assume you don't mind changing this?\n\n\n> 4. Possible loop beyond numProcs?\n\nWhat are you referring to here?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 12 Jun 2021 12:27:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "Hi Andres, thanks for taking a look.\n\nEm sáb., 12 de jun. de 2021 às 16:27, Andres Freund <andres@anarazel.de>\nescreveu:\n\n> Hi,\n>\n> On 2021-06-12 10:55:22 -0300, Ranier Vilela wrote:\n> > With the recent changes at procarray.c, I take a look in.\n> > msvc compiler, has some warnings about signed vs unsigned.\n>\n> > 1. Size_t is weird, because all types are int.\n>\n> Not sure why I ended up using size_t here. There are cases where using a\n> natively sized integer can lead to better code being generated, so I'd\n> want to see some evaluation of the code generation effects.\n>\n Yes, sure.\n\n>\n>\n> > 2. Wouldn't it be better to initialize static variables?\n>\n> No, explicit initialization needs additional space in the binary, and\n> static variables are always zero initialized.\n>\nYes, I missed this part.\nBut I was worried about this line:\n\n/* hasn't been updated yet */\nif (!TransactionIdIsValid(ComputeXidHorizonsResultLastXmin))\n\nThe first run with ComputeXidHorizonsResultLastXmin = 0, is ok?\n\n\n>\n> > 3. There are some shadowing parameters.\n>\n> Hm, yea, that's not great. Those are from\n>\n> commit 0e141c0fbb211bdd23783afa731e3eef95c9ad7a\n> Author: Robert Haas <rhaas@postgresql.org>\n> Date: 2015-08-06 11:52:51 -0400\n>\n> Reduce ProcArrayLock contention by removing backends in batches.\n>\n> Amit, Robert, I assume you don't mind changing this?\n>\n\n\n>\n>\n> > 4. Possible loop beyond numProcs?\n>\n> What are you referring to here?\n>\nMy mistake.\n\nbest regards,\nRanier Vilela\n\nHi Andres, thanks for taking a look.Em sáb., 12 de jun. de 2021 às 16:27, Andres Freund <andres@anarazel.de> escreveu:Hi,\n\nOn 2021-06-12 10:55:22 -0300, Ranier Vilela wrote:\n> With the recent changes at procarray.c, I take a look in.\n> msvc compiler, has some warnings about signed vs unsigned.\n\n> 1. Size_t is weird, because all types are int.\n\nNot sure why I ended up using size_t here. There are cases where using a\nnatively sized integer can lead to better code being generated, so I'd\nwant to see some evaluation of the code generation effects. Yes, sure.\n\n\n> 2. Wouldn't it be better to initialize static variables?\n\nNo, explicit initialization needs additional space in the binary, and\nstatic variables are always zero initialized.Yes, I missed this part.But I was worried about this line:\t/* hasn't been updated yet */\tif (!TransactionIdIsValid(ComputeXidHorizonsResultLastXmin))The first run with\nComputeXidHorizonsResultLastXmin = 0, is ok?\n\n\n> 3. There are some shadowing parameters.\n\nHm, yea, that's not great. Those are from\n\ncommit 0e141c0fbb211bdd23783afa731e3eef95c9ad7a\nAuthor: Robert Haas <rhaas@postgresql.org>\nDate: 2015-08-06 11:52:51 -0400\n\n    Reduce ProcArrayLock contention by removing backends in batches.\n\nAmit, Robert, I assume you don't mind changing this? \n\n\n> 4. Possible loop beyond numProcs?\n\nWhat are you referring to here?My mistake. best regards,Ranier Vilela", "msg_date": "Sun, 13 Jun 2021 09:43:43 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "Em dom., 13 de jun. de 2021 às 09:43, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Hi Andres, thanks for taking a look.\n>\n> Em sáb., 12 de jun. de 2021 às 16:27, Andres Freund <andres@anarazel.de>\n> escreveu:\n>\n>> Hi,\n>>\n>> On 2021-06-12 10:55:22 -0300, Ranier Vilela wrote:\n>> > With the recent changes at procarray.c, I take a look in.\n>> > msvc compiler, has some warnings about signed vs unsigned.\n>>\n>> > 1. Size_t is weird, because all types are int.\n>>\n>> Not sure why I ended up using size_t here. There are cases where using a\n>> natively sized integer can lead to better code being generated, so I'd\n>> want to see some evaluation of the code generation effects.\n>>\n> Yes, sure.\n>\nI'm a little confused by the msvc compiler, but here's the difference in\ncode generation.\nApart from the noise caused by unnecessary changes regarding the names.\n\nMicrosoft (R) C/C++ Optimizing Compiler Versão 19.28.29915 para x64\n\ndiff attached.\n\nregards,\nRanier Vilela", "msg_date": "Sun, 13 Jun 2021 22:21:32 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "I took it a step further.\n\nTransactions\n\nHEAD patched\n10002207 10586781\n10146167 10388685\n10048919 10333359\n10065764,3333333 10436275 3,55021946687555\n\nTPS\nHEAD patched\n33469,016009 35399,010472\n33950,624679 34733,252336\n33639,8429 34578,495043\n33686,4945293333 34903,5859503333 3,48700968070122\n\n3,55% Is it worth touch procarray.c for real?\n\nWith msvc 64 bits, the asm generated:\nHEAD\n213.731 bytes procarray.asm\npatched\n212.035 bytes procarray.asm\n\nPatch attached.\n\nregards,\nRanier Vilela", "msg_date": "Mon, 14 Jun 2021 21:01:19 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "Em seg., 14 de jun. de 2021 às 21:01, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> I took it a step further.\n>\n> Transactions\n>\n> HEAD patched\n> 10002207 10586781\n> 10146167 10388685\n> 10048919 10333359\n> 10065764,3333333 10436275 3,55021946687555\n>\n> TPS\n> HEAD patched\n> 33469,016009 35399,010472\n> 33950,624679 34733,252336\n> 33639,8429 34578,495043\n> 33686,4945293333 34903,5859503333 3,48700968070122\n>\n> 3,55% Is it worth touch procarray.c for real?\n>\n> With msvc 64 bits, the asm generated:\n> HEAD\n> 213.731 bytes procarray.asm\n> patched\n> 212.035 bytes procarray.asm\n>\n> Patch attached.\n>\nAdded to next CF (https://commitfest.postgresql.org/33/3169/)\n\nregards,\nRanier Vilela\n\nEm seg., 14 de jun. de 2021 às 21:01, Ranier Vilela <ranier.vf@gmail.com> escreveu:I took it a step further.Transactions HEAD patched\t10002207 10586781\t10146167 10388685\t10048919 10333359\t10065764,3333333\t10436275 3,55021946687555 TPSHEAD patched\t33469,016009 35399,010472\t33950,624679 34733,252336\t33639,8429 34578,495043\t33686,4945293333\t34903,5859503333\t3,487009680701223,55% Is it worth touch procarray.c for real?With msvc 64 bits, the asm generated:HEAD213.731 bytes procarray.asmpatched212.035 bytes procarray.asmPatch attached.Added to next CF (https://commitfest.postgresql.org/33/3169/)regards,Ranier Vilela", "msg_date": "Tue, 15 Jun 2021 07:57:12 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "Hi hackers,\n\n>> Patch attached.\n> Added to next CF (https://commitfest.postgresql.org/33/3169/)\n\nThe proposed code casts `const` variables to non-`const`. I'm surprised\nMSVC misses it. Also, there were some issues with the code formatting. The\ncorrected patch is attached.\n\nThe patch is listed under the \"Performance\" topic on CF. However, I can't\nverify any changes in the performance because there were no benchmarks\nattached that I could reproduce. By looking at the code and the first\nmessage in the thread, I assume this is in fact a refactoring.\n\nPersonally I don't believe that changes like:\n\n- for (int i = 0; i < nxids; i++)\n+ int i;\n+ for (i = 0; i < nxids; i++)\n\n.. or:\n\n- for (int index = myoff; index < arrayP->numProcs; index++)\n+ numProcs = arrayP->numProcs;\n+ for (index = myoff; index < numProcs; index++)\n\n... are of any value, but other changes may be. I choose to keep the patch\nas-is except for the named defects and let the committer decide which\nchanges, if any, are worth committing.\n\nI'm updating the status to \"Ready for Committer\".\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 15 Jul 2021 14:38:32 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "Em qui., 15 de jul. de 2021 às 08:38, Aleksander Alekseev <\naleksander@timescale.com> escreveu:\n\n> Hi hackers,\n>\n> >> Patch attached.\n> > Added to next CF (https://commitfest.postgresql.org/33/3169/)\n>\n> Hi Aleksander, thanks for taking a look at this.\n\n\n> The proposed code casts `const` variables to non-`const`. I'm surprised\n> MSVC misses it.\n>\nI lost where. Can you show me?\n\n\n> Also, there were some issues with the code formatting. The corrected patch\n> is attached.\n>\nSorry, thanks for correcting.\n\n\n> The patch is listed under the \"Performance\" topic on CF. However, I can't\n> verify any changes in the performance because there were no benchmarks\n> attached that I could reproduce. By looking at the code and the first\n> message in the thread, I assume this is in fact a refactoring.\n>\nMy mistake, a serious fault.\nBut the benchmark came from:\npgbench -i -p 5432 -d postgres\npgbench -c 50 -T 300 -S -n\n\n\n>\n> Personally I don't believe that changes like:\n>\n> - for (int i = 0; i < nxids; i++)\n> + int i;\n> + for (i = 0; i < nxids; i++)\n>\nYeah, it seems to me that this style will be consolidated in Postgres 'for\n(int i = 0;'.\n\n\n>\n> .. or:\n>\n> - for (int index = myoff; index < arrayP->numProcs; index++)\n> + numProcs = arrayP->numProcs;\n> + for (index = myoff; index < numProcs; index++)\n>\nThe rationale here is to cache arrayP->numProcs to local variable, which\nimproves performance.\n\n\n>\n> ... are of any value, but other changes may be. I choose to keep the patch\n> as-is except for the named defects and let the committer decide which\n> changes, if any, are worth committing.\n>\n> I'm updating the status to \"Ready for Committer\".\n>\nThank you.\n\n regards,\nRanier Vilela\n\nEm qui., 15 de jul. de 2021 às 08:38, Aleksander Alekseev <aleksander@timescale.com> escreveu:Hi hackers,>> Patch attached.> Added to next CF (https://commitfest.postgresql.org/33/3169/)Hi Aleksander, thanks for taking a look at this. The proposed code casts `const` variables to non-`const`. I'm surprised MSVC misses it.I lost where. Can you show me?  Also, there were some issues with the code formatting. The corrected patch is attached.Sorry, thanks for correcting. The patch is listed under the \"Performance\" topic on CF. However, I can't verify any changes in the performance because there were no benchmarks attached that I could reproduce. By looking at the code and the first message in the thread, I assume this is in fact a refactoring.My mistake, a serious fault.But the benchmark came from:pgbench -i -p 5432 -d postgrespgbench -c 50 -T 300 -S -n Personally I don't believe that changes like:-               for (int i = 0; i < nxids; i++)+               int     i;+               for (i = 0; i < nxids; i++)Yeah, it seems to me that this style will be consolidated in Postgres 'for (int i = 0;'. .. or:-       for (int index = myoff; index < arrayP->numProcs; index++)+       numProcs = arrayP->numProcs;+       for (index = myoff; index < numProcs; index++)The rationale here is to cache arrayP->numProcs to local variable, which improves performance. ... are of any value, but other changes may be. I choose to keep the patch as-is except for the named defects and let the committer decide which changes, if any, are worth committing.I'm updating the status to \"Ready for Committer\".Thank you. regards,Ranier Vilela", "msg_date": "Thu, 15 Jul 2021 09:31:15 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "On Thu, 15 Jul 2021 at 23:38, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> I'm updating the status to \"Ready for Committer\".\n\nI think that might be a bit premature. I can't quite see how changing\nthe pids List to a const List makes any sense, especially when the\ncode goes and calls lappend_int() on it to assign it some different\nvalue.\n\nThere are also problems in BackendPidGetProcWithLock around consts.\n\nMuch of this patch kinda feels like another one of those \"I've got a\nfancy new static analyzer\" patches. Unfortunately, it just introduces\na bunch of compiler warnings as a result of the changes it makes.\n\nI'd suggest splitting each portion of the patch out into parts related\nto what it aims to achieve. For example, it looks like there's some\nrenaming going on to remove a local variable from shadowing a function\nparameter. Yet the patch is claiming performance improvements. I\ndon't see how that part relates to performance. The changes to\nProcArrayClearTransaction() seem also unrelated to performance.\n\nI'm not sure what the point of changing things like for (int i =0...\nto move the variable declaration somewhere else is about. That just\nseems like needless stylistic changes that achieve nothing but more\nheadaches for committers doing backpatching work.\n\nI'd say if this patch wants to be taken seriously it better decide\nwhat it's purpose is, because to me it looks just like a jumble of\nrandom changes that have no clear purpose.\n\nI'm going to set this back to waiting on author.\n\nDavid\n\n\n", "msg_date": "Fri, 16 Jul 2021 00:44:58 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "Em qui., 15 de jul. de 2021 às 09:45, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Thu, 15 Jul 2021 at 23:38, Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n> > I'm updating the status to \"Ready for Committer\".\n>\n> I think that might be a bit premature. I can't quite see how changing\n> the pids List to a const List makes any sense, especially when the\n> code goes and calls lappend_int() on it to assign it some different\n> value.\n>\n> There are also problems in BackendPidGetProcWithLock around consts.\n>\n> Much of this patch kinda feels like another one of those \"I've got a\n> fancy new static analyzer\" patches. Unfortunately, it just introduces\n> a bunch of compiler warnings as a result of the changes it makes.\n>\n> I'd suggest splitting each portion of the patch out into parts related\n> to what it aims to achieve. For example, it looks like there's some\n> renaming going on to remove a local variable from shadowing a function\n> parameter. Yet the patch is claiming performance improvements. I\n> don't see how that part relates to performance. The changes to\n> ProcArrayClearTransaction() seem also unrelated to performance.\n>\n> I'm not sure what the point of changing things like for (int i =0...\n> to move the variable declaration somewhere else is about. That just\n> seems like needless stylistic changes that achieve nothing but more\n> headaches for committers doing backpatching work.\n>\n> I'd say if this patch wants to be taken seriously it better decide\n> what it's purpose is, because to me it looks just like a jumble of\n> random changes that have no clear purpose.\n>\n> I'm going to set this back to waiting on author.\n>\nI understood.\nI will try to address all concerns in the new version.\n\nregards,\nRanier Vilela\n\nEm qui., 15 de jul. de 2021 às 09:45, David Rowley <dgrowleyml@gmail.com> escreveu:On Thu, 15 Jul 2021 at 23:38, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> I'm updating the status to \"Ready for Committer\".\n\nI think that might be a bit premature.  I can't quite see how changing\nthe pids List to a const List makes any sense, especially when the\ncode goes and calls lappend_int() on it to assign it some different\nvalue.\n\nThere are also problems in BackendPidGetProcWithLock around consts.\n\nMuch of this patch kinda feels like another one of those \"I've got a\nfancy new static analyzer\" patches.  Unfortunately, it just introduces\na bunch of compiler warnings as a result of the changes it makes.\n\nI'd suggest splitting each portion of the patch out into parts related\nto what it aims to achieve.  For example,  it looks like there's some\nrenaming going on to remove a local variable from shadowing a function\nparameter.  Yet the patch is claiming performance improvements.  I\ndon't see how that part relates to performance. 
The changes to\nProcArrayClearTransaction() seem also unrelated to performance.\n\nI'm not sure what the point of changing things like for (int i =0...\nto move the variable declaration somewhere else is about.  That just\nseems like needless stylistic changes that achieve nothing but more\nheadaches for committers doing backpatching work.\n\nI'd say if this patch wants to be taken seriously it better decide\nwhat it's purpose is, because to me it looks just like a jumble of\nrandom changes that have no clear purpose.\n\nI'm going to set this back to waiting on author.I understood.I will try to address all concerns in the new version. regards,Ranier Vilela", "msg_date": "Thu, 15 Jul 2021 09:54:53 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "Thanks, David.\n\n> I lost where. Can you show me?\n\nSee the attached warnings.txt.\n\n> But the benchmark came from:\n> pgbench -i -p 5432 -d postgres\n> pgbench -c 50 -T 300 -S -n\n\nI'm afraid this tells nothing unless you also provide the\nconfiguration files and the hardware description, and also some\ninformation on how you checked that there is no performance\ndegradation on all the other supported platforms and possible\nconfigurations. Benchmarking is a very complicated topic - trust me,\nbeen there!\n\nIt would be better to submit two separate patches, the one that\naddresses Size_t and another that addresses shadowing. Refactorings\nonly, nothing else.\n\nRegarding the code formatting, please see src/tools/pgindent.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 15 Jul 2021 16:01:15 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "Em qui., 15 de jul. de 2021 às 10:01, Aleksander Alekseev <\naleksander@timescale.com> escreveu:\n\n> Thanks, David.\n>\n> > I lost where. Can you show me?\n>\n> See the attached warnings.txt.\n>\nThank you.\n\n\n>\n> > But the benchmark came from:\n> > pgbench -i -p 5432 -d postgres\n> > pgbench -c 50 -T 300 -S -n\n>\n> I'm afraid this tells nothing unless you also provide the\n> configuration files and the hardware description, and also some\n> information on how you checked that there is no performance\n> degradation on all the other supported platforms and possible\n> configurations.\n\n\n\n> Benchmarking is a very complicated topic - trust me,\n> been there!\n>\nAbsolutely.\n\n\n>\n> It would be better to submit two separate patches, the one that\n> addresses Size_t and another that addresses shadowing. Refactorings\n> only, nothing else.\n>\n> Regarding the code formatting, please see src/tools/pgindent.\n>\nI will try.\n\nregards,\nRanier Vilela\n\nEm qui., 15 de jul. de 2021 às 10:01, Aleksander Alekseev <aleksander@timescale.com> escreveu:Thanks, David.\n\n> I lost where. Can you show me?\n\nSee the attached warnings.txt.Thank you. \n\n> But the benchmark came from:\n> pgbench -i -p 5432 -d postgres\n> pgbench -c 50 -T 300 -S -n\n\nI'm afraid this tells nothing unless you also provide the\nconfiguration files and the hardware description, and also some\ninformation on how you checked that there is no performance\ndegradation on all the other supported platforms and possible\nconfigurations.  Benchmarking is a very complicated topic - trust me,\nbeen there!Absolutely. \n\nIt would be better to submit two separate patches, the one that\naddresses Size_t and another that addresses shadowing. Refactorings\nonly, nothing else.\n\nRegarding the code formatting, please see src/tools/pgindent.I will try.regards,Ranier Vilela", "msg_date": "Thu, 15 Jul 2021 10:04:14 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "Em qui., 15 de jul. de 2021 às 10:04, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em qui., 15 de jul. de 2021 às 10:01, Aleksander Alekseev <\n> aleksander@timescale.com> escreveu:\n>\n>> Thanks, David.\n>>\n>> > I lost where. Can you show me?\n>>\n>> See the attached warnings.txt.\n>>\n> Thank you.\n>\n>\n>>\n>> > But the benchmark came from:\n>> > pgbench -i -p 5432 -d postgres\n>> > pgbench -c 50 -T 300 -S -n\n>>\n>> I'm afraid this tells nothing unless you also provide the\n>> configuration files and the hardware description, and also some\n>> information on how you checked that there is no performance\n>> degradation on all the other supported platforms and possible\n>> configurations.\n>\n>\n>\n>> Benchmarking is a very complicated topic - trust me,\n>> been there!\n>>\n> Absolutely.\n>\n>\n>>\n>> It would be better to submit two separate patches, the one that\n>> addresses Size_t and another that addresses shadowing. Refactorings\n>> only, nothing else.\n>>\n>> Regarding the code formatting, please see src/tools/pgindent.\n>>\n> I will try.\n>\nHere are the two patches.\nAs suggested, reclassified as refactoring only.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 15 Jul 2021 22:03:30 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "Hi Rainer,\n\n> Here are the two patches.\n> As suggested, reclassified as refactoring only.\n\nPlease don't change the status of the patch on CF application before\nit was reviewed. It will only slow things down.\n\nYour patch seems to have some problems on FreeBSD. Please see\nhttp://commitfest.cputube.org/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 16 Jul 2021 15:05:19 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "Em sex., 16 de jul. de 2021 às 09:05, Aleksander Alekseev <\naleksander@timescale.com> escreveu:\n\n> Hi Rainer,\n>\n> > Here are the two patches.\n> > As suggested, reclassified as refactoring only.\n>\n> Please don't change the status of the patch on CF application before\n> it was reviewed. It will only slow things down.\n>\nHi Aleksander,\nSorry, lack of practice.\n\n\n> Your patch seems to have some problems on FreeBSD. Please see\n> http://commitfest.cputube.org/\n\nI saw.\nVery strange, in all other architectures, it went well.\nI will have to install a FreeBSD to be able to debug.\n\nThanks for your review.\n\nbest regards,\nRanier Vilela\n\nEm sex., 16 de jul. de 2021 às 09:05, Aleksander Alekseev <aleksander@timescale.com> escreveu:Hi Rainer,\n\n> Here are the two patches.\n> As suggested, reclassified as refactoring only.\n\nPlease don't change the status of the patch on CF application before\nit was reviewed. It will only slow things down.Hi Aleksander,Sorry, lack of practice.\n\nYour patch seems to have some problems on FreeBSD. Please see\nhttp://commitfest.cputube.org/I saw.Very strange, in all other architectures, it went well.I will have to install a FreeBSD to be able to debug.Thanks for your review.best regards,Ranier Vilela", "msg_date": "Fri, 16 Jul 2021 09:41:41 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "Em sex., 16 de jul. de 2021 às 09:41, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em sex., 16 de jul. de 2021 às 09:05, Aleksander Alekseev <\n> aleksander@timescale.com> escreveu:\n>\n>> Hi Rainer,\n>>\n>> > Here are the two patches.\n>> > As suggested, reclassified as refactoring only.\n>>\n>> Please don't change the status of the patch on CF application before\n>> it was reviewed. It will only slow things down.\n>>\n> Hi Aleksander,\n> Sorry, lack of practice.\n>\n>\n>> Your patch seems to have some problems on FreeBSD. Please see\n>> http://commitfest.cputube.org/\n>\n> I saw.\n> Very strange, in all other architectures, it went well.\n> I will have to install a FreeBSD to be able to debug.\n>\nThere are a typo in\n0001-Promove-unshadowing-of-two-variables-PGPROC-type.patch\n\n- ProcArrayEndTransactionInternal(proc, proc->procArrayGroupMemberXid);\n+ ProcArrayEndTransactionInternal(nextproc,\nnextproc->procArrayGroupMemberXid);\n\nAttached new version v1, with fix.\nNow pass check-world at FreeBSD 13 with clang 11.\n\nregards,\nRanier Vilela", "msg_date": "Tue, 20 Jul 2021 19:16:10 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nThe patch was tested on MacOS against master `80ba4bb3`.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Fri, 23 Jul 2021 09:52:33 +0000", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "Hi hackers,\n\nThe following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: tested, passed\n>\n> The patch was tested on MacOS against master `80ba4bb3`.\n>\n> The new status of this patch is: Ready for Committer\n>\n\nThe second patch seems fine too. I'm attaching both patches to trigger\ncfbot and to double-check them.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Fri, 23 Jul 2021 13:02:11 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "Em sex., 23 de jul. de 2021 às 07:02, Aleksander Alekseev <\naleksander@timescale.com> escreveu:\n\n> Hi hackers,\n>\n> The following review has been posted through the commitfest application:\n>> make installcheck-world: tested, passed\n>> Implements feature: tested, passed\n>> Spec compliant: tested, passed\n>> Documentation: tested, passed\n>>\n>> The patch was tested on MacOS against master `80ba4bb3`.\n>>\n>> The new status of this patch is: Ready for Committer\n>>\n>\n> The second patch seems fine too. I'm attaching both patches to trigger\n> cfbot and to double-check them.\n>\nThanks Aleksander, for reviewing this.\n\nregards,\nRanier Vilela\n\nEm sex., 23 de jul. de 2021 às 07:02, Aleksander Alekseev <aleksander@timescale.com> escreveu:Hi hackers,The following review has been posted through the commitfest application:\nmake installcheck-world:  tested, passed\nImplements feature:       tested, passed\nSpec compliant:           tested, passed\nDocumentation:            tested, passed\n\nThe patch was tested on MacOS against master `80ba4bb3`.\n\nThe new status of this patch is: Ready for Committer\nThe second patch seems fine too. I'm attaching both patches to trigger cfbot and to double-check them.Thanks Aleksander, for reviewing this.regards,Ranier Vilela", "msg_date": "Fri, 23 Jul 2021 08:07:10 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "\n\nOn 2021/07/23 20:07, Ranier Vilela wrote:\n> Em sex., 23 de jul. de 2021 às 07:02, Aleksander Alekseev <aleksander@timescale.com <mailto:aleksander@timescale.com>> escreveu:\n> \n>     Hi hackers,\n> \n>         The following review has been posted through the commitfest application:\n>         make installcheck-world:  tested, passed\n>         Implements feature:       tested, passed\n>         Spec compliant:           tested, passed\n>         Documentation:            tested, passed\n> \n>         The patch was tested on MacOS against master `80ba4bb3`.\n> \n>         The new status of this patch is: Ready for Committer\n> \n> \n>     The second patch seems fine too. I'm attaching both patches to trigger cfbot and to double-check them.\n> \n> Thanks Aleksander, for reviewing this.\n\nI looked at these patches because they are marked as ready for committer.\nThey don't change any actual behavior, but look valid to me in term of coding.\nBarring any objection, I will commit them.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Sat, 11 Sep 2021 12:21:22 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "\n\nOn 2021/09/11 12:21, Fujii Masao wrote:\n> \n> \n> On 2021/07/23 20:07, Ranier Vilela wrote:\n>> Em sex., 23 de jul. de 2021 às 07:02, Aleksander Alekseev <aleksander@timescale.com <mailto:aleksander@timescale.com>> escreveu:\n>>\n>>     Hi hackers,\n>>\n>>         The following review has been posted through the commitfest application:\n>>         make installcheck-world:  tested, passed\n>>         Implements feature:       tested, passed\n>>         Spec compliant:           tested, passed\n>>         Documentation:            tested, passed\n>>\n>>         The patch was tested on MacOS against master `80ba4bb3`.\n>>\n>>         The new status of this patch is: Ready for Committer\n>>\n>>\n>>     The second patch seems fine too. I'm attaching both patches to trigger cfbot and to double-check them.\n>>\n>> Thanks Aleksander, for reviewing this.\n> \n> I looked at these patches because they are marked as ready for committer.\n> They don't change any actual behavior, but look valid to me in term of coding.\n> Barring any objection, I will commit them.\n\n> No need to backpatch, why this patch is classified as\n> refactoring only.\n\nI found this in the commit log in the patch. I agree that these patches\nare refactoring ones. But I'm thinking that it's worth doing back-patch,\nto make future back-patching easy. Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 15 Sep 2021 13:08:00 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "Em qua., 15 de set. de 2021 às 01:08, Fujii Masao <\nmasao.fujii@oss.nttdata.com> escreveu:\n\n>\n>\n> On 2021/09/11 12:21, Fujii Masao wrote:\n> >\n> >\n> > On 2021/07/23 20:07, Ranier Vilela wrote:\n> >> Em sex., 23 de jul. 
de 2021 às 07:02, Aleksander Alekseev <\n> aleksander@timescale.com <mailto:aleksander@timescale.com>> escreveu:\n> >>\n> >> Hi hackers,\n> >>\n> >> The following review has been posted through the commitfest\n> application:\n> >> make installcheck-world: tested, passed\n> >> Implements feature: tested, passed\n> >> Spec compliant: tested, passed\n> >> Documentation: tested, passed\n> >>\n> >> The patch was tested on MacOS against master `80ba4bb3`.\n> >>\n> >> The new status of this patch is: Ready for Committer\n> >>\n> >>\n> >> The second patch seems fine too. I'm attaching both patches to\n> trigger cfbot and to double-check them.\n> >>\n> >> Thanks Aleksander, for reviewing this.\n> >\n> > I looked at these patches because they are marked as ready for committer.\n> > They don't change any actual behavior, but look valid to me in term of\n> coding.\n> > Barring any objection, I will commit them.\n>\n> > No need to backpatch, why this patch is classified as\n> > refactoring only.\n>\n> I found this in the commit log in the patch. I agree that these patches\n> are refactoring ones. But I'm thinking that it's worth doing back-patch,\n> to make future back-patching easy. Thought?\n>\nThanks for picking this.\n\nI don't see anything against it being more work for the committer.\n\nregards,\nRanier Vilela\n\nEm qua., 15 de set. de 2021 às 01:08, Fujii Masao <masao.fujii@oss.nttdata.com> escreveu:\n\nOn 2021/09/11 12:21, Fujii Masao wrote:\n> \n> \n> On 2021/07/23 20:07, Ranier Vilela wrote:\n>> Em sex., 23 de jul. 
de 2021 às 07:02, Aleksander Alekseev <aleksander@timescale.com <mailto:aleksander@timescale.com>> escreveu:\n>>\n>>     Hi hackers,\n>>\n>>         The following review has been posted through the commitfest application:\n>>         make installcheck-world:  tested, passed\n>>         Implements feature:       tested, passed\n>>         Spec compliant:           tested, passed\n>>         Documentation:            tested, passed\n>>\n>>         The patch was tested on MacOS against master `80ba4bb3`.\n>>\n>>         The new status of this patch is: Ready for Committer\n>>\n>>\n>>     The second patch seems fine too. I'm attaching both patches to trigger cfbot and to double-check them.\n>>\n>> Thanks Aleksander, for reviewing this.\n> \n> I looked at these patches because they are marked as ready for committer.\n> They don't change any actual behavior, but look valid to me in term of coding.\n> Barring any objection, I will commit them.\n\n> No need to backpatch, why this patch is classified as\n> refactoring only.\n\nI found this in the commit log in the patch. I agree that these patches\nare refactoring ones. But I'm thinking that it's worth doing back-patch,\nto make future back-patching easy. Thought?Thanks for picking this. I don't see anything against it being more work for the committer.regards,Ranier Vilela", "msg_date": "Wed, 15 Sep 2021 09:27:02 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "\n\nOn 2021/09/15 21:27, Ranier Vilela wrote:\n> I found this in the commit log in the patch. I agree that these patches\n> are refactoring ones. But I'm thinking that it's worth doing back-patch,\n> to make future back-patching easy. Thought?\n> \n> Thanks for picking this.\n\nPushed. 
Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 16 Sep 2021 13:13:40 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" }, { "msg_contents": "Em qui., 16 de set. de 2021 às 01:13, Fujii Masao <\nmasao.fujii@oss.nttdata.com> escreveu:\n\n>\n>\n> On 2021/09/15 21:27, Ranier Vilela wrote:\n> >     I found this in the commit log in the patch. I agree that these\n> patches\n> >     are refactoring ones. But I'm thinking that it's worth doing\n> back-patch,\n> >     to make future back-patching easy. Thought?\n> >\n> > Thanks for picking this.\n>\n> Pushed. Thanks!\n>\nThank you.\n\nI will close this item if it is not already closed.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 16 Sep 2021 08:03:23 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Signed vs Unsigned (take 2) (src/backend/storage/ipc/procarray.c)" } ]
[ { "msg_contents": "A few years ago I wrote a patch to implement the missing aggregate\ncombine functions for array_agg and string_agg [1]. In the end, the\npatch was rejected due to some concern [2] that if we allow these\naggregates to run in parallel then it might mess up the order in which\nvalues are being aggregated by some unsuspecting user who didn't\ninclude an ORDER BY clause in the aggregate. It was mentioned at the\ntime that due to nodeAgg.c performing so terribly with ORDER BY\naggregates that we couldn't possibly ask users who were upset by this\nchange to include an ORDER BY in their aggregate functions.\n\nI'd still quite like to finish off the remaining aggregate functions\nthat still don't have a combine function, so to get that going again\nI've written some code that gets the query planner onboard with\npicking a plan that allows nodeAgg.c to skip doing any internal sorts\nwhen the input results are already sorted according to what's required\nby the aggregate function.\n\nOf course, the query could have many aggregates all with differing\nORDER BY clauses. Since we reuse the same input for each, it might not\nbe possible to feed each aggregate with suitably sorted input. When\nthe input is not sorted, nodeAgg.c still must perform the sort as it\ndoes today. So unfortunately we can't remove the nodeAgg.c code for\nthat.\n\nThe best I could come up with is just picking a sort order that suits\nthe first ORDER BY aggregate, then spin through the remaining ones to\nsee if there's any with a more strict order requirement that we can\nuse to suit that one and the first one. 
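Outside the planner, that selection rule can be sketched in a few lines of standalone C (not PostgreSQL code — the function names and data layout here are invented for illustration): start with the first aggregate's ORDER BY keys and upgrade the pick to any later aggregate's keys that have the current pick as a prefix.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Does "shorter" (slen keys) form a prefix of "longer" (llen keys)? */
static bool
is_prefix(const char **shorter, int slen, const char **longer, int llen)
{
	int			i;

	if (slen > llen)
		return false;
	for (i = 0; i < slen; i++)
	{
		if (strcmp(shorter[i], longer[i]) != 0)
			return false;
	}
	return true;
}

/*
 * Start with the first aggregate's ORDER BY keys, then upgrade the pick
 * to any later aggregate's key list that has the current pick as a
 * prefix; sorting by the stricter key list still satisfies every
 * earlier pick.  aggs[i] is the key list of aggregate i, lens[i] its
 * length.  Returns the index of the key list that should drive the sort.
 */
static int
choose_sort_order(const char ***aggs, const int *lens, int nagg)
{
	int			pick = 0;
	int			i;

	for (i = 1; i < nagg; i++)
	{
		if (is_prefix(aggs[pick], lens[pick], aggs[i], lens[i]))
			pick = i;
	}
	return pick;
}
```

Running choose_sort_order() over the key lists {(a), (a,b)} picks (a,b), which suits both aggregates; over {(a), (c)} it falls back to (a), leaving the second aggregate to sort internally.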
The planner does something\nsimilar for window functions already, although it's not quite as\ncomprehensive to look beyond the first window for windows with a more\nstrict sort requirement.\n\nThis allows us to give presorted input to both aggregates in the following case:\n\nSELECT agg(a ORDER BY a),agg2(a ORDER BY a,b) ...\n\nbut just the first agg in this one:\n\nSELECT agg(a ORDER BY a),agg2(a ORDER BY c) ...\n\nIn order to make DISTINCT work, I had to add a new expression\nevaluation operator to allow filtering if the current value is the\nsame as the last value. The current unpatched code implements\ndistinct when reading back the sorted tuplestore data. Since I don't\nhave a tuplestore with the pre-sorted version, I needed to cache the\nlast Datum, or last tuple somewhere. I ended up putting that in the\nAggStatePerTransData struct. I'm not quite sure if I like that, but I\ndon't really see where else it could go.\n\nWhen testing the performance of all this I found that when a suitable\nindex exists to provide pre-sorted input for the aggregation that the\nperformance does improve. Unfortunately, it looks like things get more\ncomplex when no index exists. In this case, since we're setting\npathkeys to tell the planner we need a plan that provides pre-sorted\ninput to the aggregates, the planner will add a sort below the\naggregate node. I initially didn't see any problem with that as it\njust moves the sort to a Sort node rather than having it done\nimplicitly inside nodeAgg.c. The problem is, it just does not perform\nas well. I guess this is because when the sort is done inside\nnodeAgg.c that the transition function is called in a tight loop while\nreading records back from the tuplestore. In the patched version,\nthere's an additional node transition in between nodeAgg and nodeSort\nand that causes slower performance. For now, I'm not quite sure what\nto do about that. 
We set the plan pathkeys well before we could\npossibly decide if asking for pre-sorted input for the aggregates\nwould be a good idea or not.\n\nPlease find attached my WIP patch. It's WIP due to what I mentioned\nin the above paragraph and also because I've not bothered to add JIT\nsupport for the new expression evaluation steps.\n\nI'll add this to the July commitfest.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAKJS1f9sx_6GTcvd6TMuZnNtCh0VhBzhX6FZqw17TgVFH-ga_A%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/6538.1522096067%40sss.pgh.pa.us#c228ed67026faa15209c76dada035da4", "msg_date": "Sun, 13 Jun 2021 03:07:18 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "> \n> This allows us to give presorted input to both aggregates in the following\n> case:\n> \n> SELECT agg(a ORDER BY a),agg2(a ORDER BY a,b) ...\n> \n> but just the first agg in this one:\n> \n> SELECT agg(a ORDER BY a),agg2(a ORDER BY c) ...\n\nI don't know if it's acceptable, but in the case where you add both an \naggregate with an ORDER BY clause, and another aggregate without the clause, \nthe output for the unordered one will change and use the same ordering, maybe \nsuprising the unsuspecting user. Would that be acceptable ?\n\n> When testing the performance of all this I found that when a suitable\n> index exists to provide pre-sorted input for the aggregation that the\n> performance does improve. Unfortunately, it looks like things get more\n> complex when no index exists. In this case, since we're setting\n> pathkeys to tell the planner we need a plan that provides pre-sorted\n> input to the aggregates, the planner will add a sort below the\n> aggregate node. I initially didn't see any problem with that as it\n> just moves the sort to a Sort node rather than having it done\n> implicitly inside nodeAgg.c. 
The problem is, it just does not perform\n> as well. I guess this is because when the sort is done inside\n> nodeAgg.c that the transition function is called in a tight loop while\n> reading records back from the tuplestore. In the patched version,\n> there's an additional node transition in between nodeAgg and nodeSort\n> and that causes slower performance. For now, I'm not quite sure what\n> to do about that. We set the plan pathkeys well before we could\n> possibly decide if asking for pre-sorted input for the aggregates\n> would be a good idea or not.\n\nI was curious about the performance implication of that additional transition, \nand could not reproduce a signifcant difference. I may be doing something \nwrong: how did you highlight it ?\n\nRegards,\n\n--\nRonan Dunklau\n\n\n\n\n", "msg_date": "Fri, 02 Jul 2021 09:53:52 +0200", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Fri, 2 Jul 2021 at 19:54, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> I don't know if it's acceptable, but in the case where you add both an\n> aggregate with an ORDER BY clause, and another aggregate without the clause,\n> the output for the unordered one will change and use the same ordering, maybe\n> suprising the unsuspecting user. Would that be acceptable ?\n\nThat's a good question. There was an argument in [1] that mentions\nthat there might be a group of people who rely on aggregation being\ndone in a certain order where they're not specifying an ORDER BY\nclause in the aggregate. If that group of people exists, then it's\npossible they might be upset in the scenario that you describe.\n\nI also think that it's going to be pretty hard to make significant\ngains in this area if we are too scared to make changes to undefined\nbehaviour. 
You wouldn't have to look too hard in the pgsql-general\nmailing list to find someone complaining that their query output is in\nthe wrong order on some query that does not have an ORDER BY. We\npretty much always tell those people that the order is undefined\nwithout an ORDER BY. I'm not too sure why Tom in [1] classes the ORDER\nBY aggregate people any differently. We'll be stuck forever here and\nin many other areas if we're too scared to change the order of\naggregation. You could argue that something like parallelism has\nchanged that for people already. I think the multi-batch Hash\nAggregate code could also change this.\n\n> I was curious about the performance implication of that additional transition,\n> and could not reproduce a significant difference. I may be doing something\n> wrong: how did you highlight it ?\n\nIt was pretty basic. I just created a table with two columns and no\nindex and did something like SELECT a,SUM(b ORDER BY b) from ab GROUP\nBY 1; the new code will include a Sort due to lack of any index and\nthe old code would have done a sort inside nodeAgg.c. 
I imagine that\nthe overhead comes from the fact that in the patched version nodeAgg.c\nmust ask its subnode (nodeSort.c) for the next tuple each time,\nwhereas unpatched nodeAgg.c already has all the tuples in a tuplestore\nand can fetch them very quickly in a tight loop.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/6538.1522096067%40sss.pgh.pa.us\n\n\n", "msg_date": "Fri, 2 Jul 2021 20:39:44 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On 2/07/21 8:39 pm, David Rowley wrote:\n> On Fri, 2 Jul 2021 at 19:54, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n>> I don't know if it's acceptable, but in the case where you add both an\n>> aggregate with an ORDER BY clause, and another aggregate without the clause,\n>> the output for the unordered one will change and use the same ordering, maybe\n>> suprising the unsuspecting user. Would that be acceptable ?\n> That's a good question. There was an argument in [1] that mentions\n> that there might be a group of people who rely on aggregation being\n> done in a certain order where they're not specifying an ORDER BY\n> clause in the aggregate. If that group of people exists, then it's\n> possible they might be upset in the scenario that you describe.\n\n[...]\n\nI've always worked on the assumption that if I do not specify an ORDER \nBY clause then the rdbms is expected to present rows in the order most \nefficient for it to do so. If pg orders rows when not requested then \nthis could add extra elapsed time to the query, which might be \nsignificant for large queries.\n\nI don't know of any rdbms that guarantees the order of returned rows \nwhen an ORDER BY clause is not used.\n\nSo I think that pg has no obligation to protect the sensibilities of \nnaive users in this case, especially at the expense of users that want \nqueries to complete as quickly as possible.  
IMHO\n\n\nCheers,\nGavin\n\n\n\n", "msg_date": "Sat, 3 Jul 2021 08:37:40 +1200", "msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "Gavin Flower <GavinFlower@archidevsys.co.nz> writes:\n> On 2/07/21 8:39 pm, David Rowley wrote:\n>> That's a good question. There was an argument in [1] that mentions\n>> that there might be a group of people who rely on aggregation being\n>> done in a certain order where they're not specifying an ORDER BY\n>> clause in the aggregate. If that group of people exists, then it's\n>> possible they might be upset in the scenario that you describe.\n\n> So I think that pg has no obligation to protect the sensibilities of \n> naive users in this case, especially at the expense of users that want \n> queries to complete as quickly as possible.  IMHO\n\nI agree. We've broken such expectations in the past and I don't\nhave much hesitation about breaking them again.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 02 Jul 2021 16:51:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "Le vendredi 2 juillet 2021, 10:39:44 CEST David Rowley a écrit :\n> On Fri, 2 Jul 2021 at 19:54, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> > I don't know if it's acceptable, but in the case where you add both an\n> > aggregate with an ORDER BY clause, and another aggregate without the\n> > clause, the output for the unordered one will change and use the same\n> > ordering, maybe suprising the unsuspecting user. Would that be acceptable\n> > ?\n> \n> That's a good question. There was an argument in [1] that mentions\n> that there might be a group of people who rely on aggregation being\n> done in a certain order where they're not specifying an ORDER BY\n> clause in the aggregate. 
If that group of people exists, then it's\n> possible they might be upset in the scenario that you describe.\n> \n> I also think that it's going to be pretty hard to make significant\n> gains in this area if we are too scared to make changes to undefined\n> behaviour. You wouldn't have to look too hard in the pgsql-general\n> mailing list to find someone complaining that their query output is in\n> the wrong order on some query that does not have an ORDER BY. We\n> pretty much always tell those people that the order is undefined\n> without an ORDER BY. I'm not too sure why Tom in [1] classes the ORDER\n> BY aggregate people any differently. We'll be stuck forever here and\n> in many other areas if we're too scared to change the order of\n> aggregation. You could argue that something like parallelism has\n> changed that for people already. I think the multi-batch Hash\n> Aggregate code could also change this.\n\nI would agree with that.\n\n> \n> > I was curious about the performance implication of that additional\n> > transition, and could not reproduce a signifcant difference. I may be\n> > doing something wrong: how did you highlight it ?\n> \n> It was pretty basic. I just created a table with two columns and no\n> index and did something like SELECT a,SUM(b ORDER BY b) from ab GROUP\n> BY 1; the new code will include a Sort due to lack of any index and\n> the old code would have done a sort inside nodeAgg.c. I imagine that\n> the overhead comes from the fact that in the patched version nodeAgg.c\n> must ask its subnode (nodeSort.c) for the next tuple each time,\n> whereas unpatched nodeAgg.c already has all the tuples in a tuplestore\n> and can fetch them very quickly in a tight loop.\n\nOk, I reproduced that case, just not using a group by: by adding the group by \na sort node is added in both cases (master and your patch), except that with \nyour patch we sort on both keys and that doesn't really incur a performance \npenalty. 
\n\nI think the overhead occurs because in the ExecAgg case, we use the \ntuplesort_*_datum API as an optimization when we have a single column as an \ninput, which the ExecSort code doesn't. Maybe it would be worth it to try to \nuse that API in sort nodes too, when it can be done. \n\n\n> \n> David\n> \n> [1] https://www.postgresql.org/message-id/6538.1522096067%40sss.pgh.pa.us\n\n\n-- \nRonan Dunklau\n\n\n\n\n\n", "msg_date": "Mon, 05 Jul 2021 08:38:28 +0200", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "> Ok, I reproduced that case, just not using a group by: by adding the group\n> by a sort node is added in both cases (master and your patch), except that\n> with your patch we sort on both keys and that doesn't really incur a\n> performance penalty.\n> \n> I think the overhead occurs because in the ExecAgg case, we use the\n> tuplesort_*_datum API as an optimization when we have a single column as an\n> input, which the ExecSort code doesn't. Maybe it would be worth it to try to\n> use that API in sort nodes too, when it can be done.\n\nPlease find attached a POC patch to do just that.\n\nThe switch to the single-datum tuplesort is done when there is only one \nattribute, it is byval (to avoid having to deal with copy of the references \neverywhere) and we are not in bound mode (to also avoid having to move things \naround).\n\nA naive run on make check passes on this, but I may have overlooked things.\n\nShould I add this separately to the commitfest ?\n\nFor the record, the times I got on my laptop, on master VS david's patch VS \nboth. Values are an average of 100 runs, as reported by pgbench --no-vacuum -f \n<file.sql> -t 100. 
There is a good amount of noise, but the simple \"select one \nordered column case\" seems worth the optimization.\n\nOnly shared_buffers and work_mem have been set to 2GB each.\n\nSetup 1: single table, 1 000 000 tuples, no index\nCREATE TABLE tbench (\n a int,\n b int\n);\n\nINSERT INTO tbench (a, b) SELECT a, b FROM generate_series(1, 100) a, \ngenerate_series(1, 10000) b;\n\n\nTest 1: Single-column ordered select (order by b since the table is already \nsorted by a)\nselect b from tbench order by b;\n\nmaster: 303.661ms\nwith mine: 148.571ms\n\nTest 2: Ordered sum (using b so that the input is not presorted)\nselect sum(b order by b) from tbench;\n\nmaster: 112.379ms\nwith david's patch: 144.469ms\nwith david's patch + mine: 97ms\n\nTest 3: Ordered sum + group by\nselect b, sum(a order by a) from tbench GROUP BY b;\n\nmaster: 316.117ms\nwith david's patch: 297.079\nwith david's patch + mine: 294.601\n\nSetup 2: same as before, but adding an index on (b, a)\nCREATE INDEX ON tbench (b, a);\n\nTest 2: Ordered sum:\nselect sum(a order by a) from tbench;\n\nmaster: 111.847 ms\nwith david's patch: 48.088\nwith david's patch + mine: 47.678 ms\n\nTest 3: Ordered sum + group by:\nselect a, sum(b order by b) from tbench GROUP BY a;\n\nmaster: 76.873 ms\nwith david's patch: 61.105\nwith david's patch + mine: 62.672 ms\n\n\n-- \nRonan Dunklau", "msg_date": "Mon, 05 Jul 2021 14:07:05 +0200", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Sat, Jun 12, 2021 at 11:07 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> A few years ago I wrote a patch to implement the missing aggregate\n> combine functions for array_agg and string_agg [1]. 
In the end, the\n> patch was rejected due to some concern [2] that if we allow these\n> aggregates to run in parallel then it might mess up the order in which\n> values are being aggregated by some unsuspecting user who didn't\n> include an ORDER BY clause in the aggregate. It was mentioned at the\n> time that due to nodeAgg.c performing so terribly with ORDER BY\n> aggregates that we couldn't possibly ask users who were upset by this\n> change to include an ORDER BY in their aggregate functions.\n>\n> I'd still quite like to finish off the remaining aggregate functions\n> that still don't have a combine function, so to get that going again\n> I've written some code that gets the query planner onboard with\n> picking a plan that allows nodeAgg.c to skip doing any internal sorts\n> when the input results are already sorted according to what's required\n> by the aggregate function.\n>\n> Of course, the query could have many aggregates all with differing\n> ORDER BY clauses. Since we reuse the same input for each, it might not\n> be possible to feed each aggregate with suitably sorted input. When\n> the input is not sorted, nodeAgg.c still must perform the sort as it\n> does today. So unfortunately we can't remove the nodeAgg.c code for\n> that.\n>\n> The best I could come up with is just picking a sort order that suits\n> the first ORDER BY aggregate, then spin through the remaining ones to\n> see if there's any with a more strict order requirement that we can\n> use to suit that one and the first one. The planner does something\n> similar for window functions already, although it's not quite as\n> comprehensive to look beyond the first window for windows with a more\n> strict sort requirement.\n\nI think this is a reasonable choice. 
I could imagine a more complex\nmethod, say, counting the number of aggregates benefiting from a given\nsort, and choosing the one that benefits the most (and this could be\nfurther complicated by ranking based on \"cost\" -- not costing in the\nnormal sense since we don't have that at this point), but I think it'd\ntake a lot of convincing that that was valuable.\n\n> This allows us to give presorted input to both aggregates in the following case:\n>\n> SELECT agg(a ORDER BY a),agg2(a ORDER BY a,b) ...\n>\n> but just the first agg in this one:\n>\n> SELECT agg(a ORDER BY a),agg2(a ORDER BY c) ...\n>\n> In order to make DISTINCT work, I had to add a new expression\n> evaluation operator to allow filtering if the current value is the\n> same as the last value. The current unpatched code implements\n> distinct when reading back the sorted tuplestore data. Since I don't\n> have a tuplestore with the pre-sorted version, I needed to cache the\n> last Datum, or last tuple somewhere. I ended up putting that in the\n> AggStatePerTransData struct. I'm not quite sure if I like that, but I\n> don't really see where else it could go.\n\nThat sounds like what nodeIncrementalSort.c's isCurrentGroup() does,\nexcept it's just implemented inline. Not anything you need to change\nin this patch, but noting it in case it triggered a thought valuable\nfor you for me later on.\n\n> When testing the performance of all this I found that when a suitable\n> index exists to provide pre-sorted input for the aggregation that the\n> performance does improve. Unfortunately, it looks like things get more\n> complex when no index exists. In this case, since we're setting\n> pathkeys to tell the planner we need a plan that provides pre-sorted\n> input to the aggregates, the planner will add a sort below the\n> aggregate node. I initially didn't see any problem with that as it\n> just moves the sort to a Sort node rather than having it done\n> implicitly inside nodeAgg.c. 
The problem is, it just does not perform\n> as well. I guess this is because when the sort is done inside\n> nodeAgg.c that the transition function is called in a tight loop while\n> reading records back from the tuplestore. In the patched version,\n> there's an additional node transition in between nodeAgg and nodeSort\n> and that causes slower performance. For now, I'm not quite sure what\n> to do about that. We set the plan pathkeys well before we could\n> possibly decide if asking for pre-sorted input for the aggregates\n> would be a good idea or not.\n\nThis opens up another path for significant plan benefits too: if\nthere's now an explicit sort node, then it's possible for that node to\nbe an incremental sort node, which isn't something nodeAgg is capable\nof utilizing currently.\n\n> Please find attached my WIP patch. It's WIP due to what I mentioned\n> in the above paragraph and also because I've not bothered to add JIT\n> support for the new expression evaluation steps.\n\nI looked this over (though didn't get a chance to play with it).\n\nI'm wondering about the changes to the test output in tuplesort.out.\nIt looks like where a merge join used to be proceeding with an\nexplicit sort with DESC it's now sorting with (implicit) ASC, and then\nan explicit sort node using DESC is above the merge join node. Two\nthoughts:\n\n1. It seems like losing the \"proper\" sort order in the JOIN node isn't\npreferable. Given both are full sorts, it may not actually be a\nsignificant performance difference on its own (it can't short\ncircuit), but it means precluding using incremental sort in the new\nsort node.\n2. In an ideal world we'd push the more complex sort down into the\nmerge join rather than doing a sort on \"col1\" and then a sort on\n\"col1, col2\". 
That's likely beyond this patch, but I haven't\ninvestigated at all.\n\nThanks for your work on this; both this patch and the original one\nyou're trying to enable seem like great additions.\n\nJames\n\n\n", "msg_date": "Mon, 5 Jul 2021 15:12:51 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Mon, Jul 5, 2021 at 8:08 AM Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n>\n> > Ok, I reproduced that case, just not using a group by: by adding the group\n> > by a sort node is added in both cases (master and your patch), except that\n> > with your patch we sort on both keys and that doesn't really incur a\n> > performance penalty.\n> >\n> > I think the overhead occurs because in the ExecAgg case, we use the\n> > tuplesort_*_datum API as an optimization when we have a single column as an\n> > input, which the ExecSort code doesn't. Maybe it would be worth it to try to\n> > use that API in sort nodes too, when it can be done.\n>\n> Please find attached a POC patch to do just that.\n>\n> The switch to the single-datum tuplesort is done when there is only one\n> attribute, it is byval (to avoid having to deal with copy of the references\n> everywhere) and we are not in bound mode (to also avoid having to move things\n> around).\n>\n> A naive run on make check pass on this, but I may have overlooked things.\n>\n> Should I add this separately to the commitfest ?\n\nIt seems like a pretty obvious win on its own, and, I'd expect, will\nneed less discussion than David's patch, so my vote is to make it a\nseparate thread. 
The patch tester wants the full series attached each\ntime, and even without that it's difficult to discuss multiple patches\non a single thread.\n\nIf you make a separate thread and CF entry, please CC me and add me as\na reviewer on the CF entry.\n\nOne thing from a quick read through of the patch: your changes near\nthe end of ExecSort, in ExecInitSort, and in execnodes.h need\nindentation fixes.\n\nThanks,\nJames\n\n\n", "msg_date": "Mon, 5 Jul 2021 15:33:59 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "> If you make a separate thread and CF entry, please CC me and add me as\n> a reviewer on the CF entry.\n\nOk, I started a new thread and added it to the next CF: https://\ncommitfest.postgresql.org/34/3237/\n\n\n\n-- \nRonan Dunklau\n\n\n\n\n", "msg_date": "Tue, 06 Jul 2021 08:19:38 +0200", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Mon, 5 Jul 2021 at 18:38, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> I think the overhead occurs because in the ExecAgg case, we use the\n> tuplesort_*_datum API as an optimization when we have a single column as an\n> input, which the ExecSort code doesn't. Maybe it would be worth it to try to\n> use that API in sort nodes too, when it can be done.\n\nThat's a really great find! 
Looks like I was wrong to assume that the\nextra overhead was from transitioning between nodes.\n\nI ran the performance results locally here with:\n\ncreate table t1(a int not null);\ncreate table t2(a int not null, b int not null);\ncreate table t3(a int not null, b int not null, c int not null);\n\ninsert into t1 select x from generate_Series(1,1000000)x;\ninsert into t2 select x,x from generate_Series(1,1000000)x;\ninsert into t3 select x,x,1 from generate_Series(1,1000000)x;\nvacuum freeze analyze t1,t2,t3;\n\nselect1: select sum(a order by a) from t1;\nselect2: select sum(a order by b) from t2;\nselect3: select c,sum(a order by b) from t3 group by c;\n\nmaster = https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=8aafb02616753f5c6c90bbc567636b73c0cbb9d4\npatch1 = https://www.postgresql.org/message-id/attachment/123546/wip_planner_support_for_orderby_distinct_aggs_v0.patch\npatch2 = https://www.postgresql.org/message-id/attachment/124238/0001-Allow-Sort-nodes-to-use-the-fast-single-datum-tuples.patch\n\nThe attached graph shows the results.\n\nIt's very good to see that with both patches applied there's no\nregression. I'm a bit surprised there's much performance gain here\ngiven that I didn't add any indexes to provide any presorted input.\n\nDavid", "msg_date": "Tue, 6 Jul 2021 19:26:49 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Sun, 13 Jun 2021 at 03:07, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> Please find attached my WIP patch. It's WIP due to what I mentioned\n> in the above paragraph and also because I've not bothered to add JIT\n> support for the new expression evaluation steps.\n\nI've split this patch into two parts.\n\n0001 Adds planner support for ORDER BY aggregates.\n\n0002 is a WIP patch for DISTINCT support. 
This still lacks JIT\nsupport and I'm still not certain of the best place to store the\nprevious value or tuple to determine if the current one is distinct\nfrom it.\n\nThe 0001 patch is fairly simple and does not require much in the way\nof changes in the planner aside from standard_qp_callback().\nSurprisingly the executor does not need a great deal of work here\neither. It's just mostly about skipping the normal agg(.. ORDER BY)\ncode when the Aggref is presorted.\n\nDavid", "msg_date": "Tue, 13 Jul 2021 00:04:22 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Mon, Jul 12, 2021 at 09:04, David Rowley <dgrowleyml@gmail.com>\nwrote:\n\n> On Sun, 13 Jun 2021 at 03:07, David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > Please find attached my WIP patch. It's WIP due to what I mentioned\n> > in the above paragraph and also because I've not bothered to add JIT\n> > support for the new expression evaluation steps.\n>\n> I've split this patch into two parts.\n>\nHi, I'll take a look.\n\n\n> 0001 Adds planner support for ORDER BY aggregates.\n>\n/* Normal transition function without ORDER BY / DISTINCT. 
*/\nIs it possible to avoid entering the loop to initialize args if 'argno >=\npertrans->numTransInputs'?\nLike this:\n\nif (!pertrans->aggsortrequired && argno < pertrans->numTransInputs)\n\nAnd what if argno is '>' than pertrans->numTransInputs?\nShouldn't the test be inside the loop?\n\n+ /*\n+ * Don't initialize args for any ORDER BY clause that might\n+ * exist in a presorted aggregate.\n+ */\n+ if (argno >= pertrans->numTransInputs)\n+ break;\n\nI think that we can reduce the scope of variable 'sortlist' or simply\nremove it?\n\na)\n+ /* Determine pathkeys for aggregate functions with an ORDER BY */\n+ if (parse->groupingSets == NIL && root->numOrderedAggs > 0 &&\n+ (qp_extra->groupClause == NIL || root->group_pathkeys))\n+ {\n+ ListCell *lc;\n+ List *pathkeys = NIL;\n+\n+ foreach(lc, root->agginfos)\n+ {\n+ AggInfo *agginfo = (AggInfo *) lfirst(lc);\n+ Aggref *aggref = agginfo->representative_aggref;\n+ List *sortlist;\n+\n\nor better\n\nb)\n+ /* Determine pathkeys for aggregate functions with an ORDER BY */\n+ if (parse->groupingSets == NIL && root->numOrderedAggs > 0 &&\n+ (qp_extra->groupClause == NIL || root->group_pathkeys))\n+ {\n+ ListCell *lc;\n+ List *pathkeys = NIL;\n+\n+ foreach(lc, root->agginfos)\n+ {\n+ AggInfo *agginfo = (AggInfo *) lfirst(lc);\n+ Aggref *aggref = agginfo->representative_aggref;\n+\n+ if (AGGKIND_IS_ORDERED_SET(aggref->aggkind))\n+ continue;\n+\n+ /* DISTINCT aggregates not yet supported by the planner */\n+ if (aggref->aggdistinct != NIL)\n+ continue;\n+\n+ if (aggref->aggorder == NIL)\n+ continue;\n+\n+ /*\n+ * Find the pathkeys with the most sorted derivative of the first\n+ * Aggref. For example, if we determine the pathkeys for the first\n+ * Aggref to be {a}, and we find another with {a,b}, then we use\n+ * {a,b} since it's useful for more Aggrefs than just {a}. We\n+ * currently ignore anything that might have a longer list of\n+ * pathkeys than the first Aggref if it is not contained in the\n+ * pathkeys for the first agg. 
We can't practically plan for all\n+ * orders of each Aggref, so this seems like the best compromise.\n+ */\n+ if (pathkeys == NIL)\n+ {\n+ pathkeys = make_pathkeys_for_sortclauses(root, aggref->aggorder ,\n+ aggref->args);\n+ aggref->aggpresorted = true;\n+ }\n+ else\n+ {\n+ List *pathkeys2 = make_pathkeys_for_sortclauses(root,\n+ aggref->aggorder,\n+ aggref->args);\n\n\n> 0002 is a WIP patch for DISTINCT support. This still lacks JIT\n> support and I'm still not certain of the best where to store the\n> previous value or tuple to determine if the current one is distinct\n> from it.\n>\nIn the patch 0002, I think that can reduce the scope of variable 'aggstate'?\n\n+ EEO_CASE(EEOP_AGG_PRESORTED_DISTINCT_SINGLE)\n+ {\n+ AggStatePerTrans pertrans = op->d.agg_presorted_distinctcheck.pertrans;\n+ Datum value = pertrans->transfn_fcinfo->args[1].value;\n+ bool isnull = pertrans->transfn_fcinfo->args[1].isnull;\n+\n+ if (!pertrans->haslast ||\n+ pertrans->lastisnull != isnull ||\n+ !DatumGetBool(FunctionCall2Coll(&pertrans->equalfnOne,\n+ pertrans->aggCollation,\n+ pertrans->lastdatum, value)))\n+ {\n+ if (pertrans->haslast && !pertrans->inputtypeByVal)\n+ pfree(DatumGetPointer(pertrans->lastdatum));\n+\n+ pertrans->haslast = true;\n+ if (!isnull)\n+ {\n+ AggState *aggstate = castNode(AggState, state->parent);\n+\n+ /*\n+ * XXX is it worth having a dedicted ByVal version of this\n+ * operation so that we can skip switching memory contexts\n+ * and do a simple assign rather than datumCopy below?\n+ */\n+ MemoryContext oldContext;\n+\n+ oldContext =\nMemoryContextSwitchTo(aggstate->curaggcontext->ecxt_per_tuple_memory);\n\nWhat do you think?\n\nregards,\nRanier Vilela", "msg_date": "Mon, 12 Jul 2021 20:04:26 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "Thanks for having a look at this.\n\nOn Tue, 13 Jul 2021 at 11:04, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>> 0001 Adds planner support for ORDER BY aggregates.\n>\n> /* Normal transition function without ORDER BY / DISTINCT. */\n> Is it possible to avoid entering to initialize args if 'argno >= pertrans->numTransInputs'?\n> Like this:\n>\n> if (!pertrans->aggsortrequired && argno < pertrans->numTransInputs)\n>\n> And if argos is '>' that pertrans->numTransInputs?\n> The test shouldn't be, inside the loop?\n>\n> + /*\n> + * Don't initialize args for any ORDER BY clause that might\n> + * exist in a presorted aggregate.\n> + */\n> + if (argno >= pertrans->numTransInputs)\n> + break;\n\nThe idea is to stop the loop before processing any Aggref arguments\nthat might belong to the ORDER BY clause. We must still process other\narguments up to the ORDER BY args though, so we can't skip this loop.\n\nNote that we're doing argno++ inside the loop. If we had a\nfor_each_to() macro, similar to for_each_from(), but allowed us to\nspecify an end element then we could use that instead, but we don't\nand we still must initialize the transition arguments.\n\n> I think that or can reduce the scope of variable 'sortlist' or simply remove it?\n\nI've adjusted the scope of this. 
I didn't want to remove it because\nit's kinda useful to have it that way otherwise the 0002 patch would\nneed to add it.\n\n>> 0002 is a WIP patch for DISTINCT support. This still lacks JIT\n>> support and I'm still not certain of the best where to store the\n>> previous value or tuple to determine if the current one is distinct\n>> from it.\n>\n> In the patch 0002, I think that can reduce the scope of variable 'aggstate'?\n>\n> + EEO_CASE(EEOP_AGG_PRESORTED_DISTINCT_SINGLE)\n\nYeah, that could be done.\n\nI've attached the updated patches.\n\nDavid", "msg_date": "Tue, 13 Jul 2021 16:44:12 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Tue, Jul 13, 2021 at 01:44, David Rowley <dgrowleyml@gmail.com>\nwrote:\n\n> Thanks for having a look at this.\n>\n> On Tue, 13 Jul 2021 at 11:04, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >> 0001 Adds planner support for ORDER BY aggregates.\n> >\n> > /* Normal transition function without ORDER BY / DISTINCT. 
*/\n> > Is it possible to avoid entering to initialize args if 'argno >=\n> pertrans->numTransInputs'?\n> > Like this:\n> >\n> > if (!pertrans->aggsortrequired && argno < pertrans->numTransInputs)\n> >\n> > And if argos is '>' that pertrans->numTransInputs?\n> > The test shouldn't be, inside the loop?\n> >\n> > + /*\n> > + * Don't initialize args for any ORDER BY clause that might\n> > + * exist in a presorted aggregate.\n> > + */\n> > + if (argno >= pertrans->numTransInputs)\n> > + break;\n>\n> The idea is to stop the loop before processing any Aggref arguments\n> that might belong to the ORDER BY clause.\n\nYes, I get the idea.\n\nWe must still process other\n> arguments up to the ORDER BY args though,\n\nI may have misunderstood, but the other arguments are under\npertrans->numTransInputs?\n\n\n> so we can't skip this loop.\n>\nThe question not answered is if *argno* can '>=' that\npertrans->numTransInputs,\nbefore entering the loop?\nIf *can*, the loop might be useless in that case.\n\n\n>\n> Note that we're doing argno++ inside the loop.\n\nAnother question is, if *argno* can '>' that pertrans->numTransInputs,\nbefore the loop, the test will fail?\nif (argno == pertrans->numTransInputs)\n\n\n>\n> > I think that or can reduce the scope of variable 'sortlist' or simply\n> remove it?\n>\n> I've adjusted the scope of this. I didn't want to remove it because\n> it's kinda useful to have it that way otherwise the 0002 patch would\n> need to add it.\n>\nNice.\n\n\n> >> 0002 is a WIP patch for DISTINCT support. This still lacks JIT\n> >> support and I'm still not certain of the best where to store the\n> >> previous value or tuple to determine if the current one is distinct\n> >> from it.\n> >\n> > In the patch 0002, I think that can reduce the scope of variable\n> 'aggstate'?\n> >\n> > + EEO_CASE(EEOP_AGG_PRESORTED_DISTINCT_SINGLE)\n>\n> Yeah, that could be done.\n>\n> I've attached the updated patches.\n>\nThanks.\n\nregards,\nRanier Vilela", "msg_date": "Tue, 13 Jul 2021 08:45:20 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Tue, 13 Jul 2021 at 23:45, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> The question not answered is if *argno* can '>=' that pertrans->numTransInputs,\n> before entering the loop?\n> If *can*, the loop might be useless in that case.\n>\n>>\n>>\n>> Note that we're doing argno++ inside the loop.\n>\n> Another question is, if *argno* can '>' that pertrans->numTransInputs,\n> before the loop, the test will fail?\n> if (argno == pertrans->numTransInputs)\n\nargno is *always* 0 before the loop starts.\n\nDavid\n\n\n", "msg_date": "Wed, 14 Jul 2021 13:15:27 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "Em ter., 13 de jul. 
de 2021 às 22:15, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Tue, 13 Jul 2021 at 23:45, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > The question not answered is if *argno* can '>=' that\n> pertrans->numTransInputs,\n> > before entering the loop?\n> > If *can*, the loop might be useless in that case.\n> >\n> >>\n> >>\n> >> Note that we're doing argno++ inside the loop.\n> >\n> > Another question is, if *argno* can '>' that pertrans->numTransInputs,\n> > before the loop, the test will fail?\n> > if (argno == pertrans->numTransInputs)\n>\n> argno is *always* 0 before the loop starts.\n>\nGood. Thanks.\n\nregards,\nRanier Vilela", "msg_date": "Tue, 13 Jul 2021 22:45:22 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Tuesday, July 13, 2021 at 06:44:12 CEST, David Rowley wrote:\n> I've attached the updated patches.\n\nThe approach of building a pathkey for the first order by we find, then \nappending to it as needed seems sensible but I'm a bit worried about users \nstarting to rely on this as an optimization. Even if we don't document it, \npeople may start to change the order of their target lists to \"force\" a \nspecific sort on the lower nodes. 
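For example (a hypothetical pair of queries against the regression database's tenk2 table — not taken from any patch), simply swapping the two aggregates in the target list would be enough to change which ORDER BY the planner tries to satisfy with the sort below the aggregate node:

```sql
-- The first ordered aggregate wins, so the input would be sorted on (ten, two):
explain select sum(unique1 order by ten, two), sum(unique1 order by four)
from tenk2;

-- Swapping the target list entries would make the planner sort on (four) instead:
explain select sum(unique1 order by four), sum(unique1 order by ten, two)
from tenk2;
```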
How confident are we that we won't change this \nor that we will be willing to break it ?\n\nGenerating all possible pathkeys and costing the resulting plans would be too \nexpensive, but maybe a more \"stable\" (and limited) approach would be fine, like \ngenerating the pathkeys only if every ordered aggref shares the same prefix. I \ndon't think there would be any ambiguity here.\n\n-- \nRonan Dunklau\n\n\n\n\n", "msg_date": "Thu, 15 Jul 2021 15:02:00 +0200", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Fri, 16 Jul 2021 at 01:02, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> The approach of building a pathkey for the first order by we find, then\n> appending to it as needed seems sensible but I'm a bit worried about users\n> starting to rely on this as an optimization. Even if we don't document it,\n> people may start to change the order of their target lists to \"force\" a\n> specific sort on the lower nodes. How confident are we that we won't change this\n> or that we will be willing to break it ?\n\nThat's a good question. I mainly did it that way because Windowing\nfunctions work similarly based on the position of items in the\ntargetlist. The situation there is slightly more complex as it\ndepends on the SortGroupClause->tleSortGroupRef.\n\n> Generating all possible pathkeys and costing the resulting plans would be too\n> expensive, but maybe a more \"stable\" (and limited) approach would be fine, like\n> generating the pathkeys only if every ordered aggref shares the same prefix. I\n> don't think there would be any ambiguity here.\n\nI think that's a bad idea as it would leave a lot on the table. I\ndon't see any reason to make it that restrictive. 
Remember that before\nthis, every Aggref with a sort clause must perform its own sort.\nSo it's not like we'll ever increase the number of sorts here as a\nresult.\n\nWhat we maybe could consider instead would be to pick the first Aggref,\nthen look for the most sorted derivative of that, then tally up the\nnumber of Aggrefs that can be sorted using those pathkeys, then repeat\nthat process for the remaining Aggrefs that didn't have the same\nprefix, then use the pathkeys for the set with the most Aggrefs. We\ncould still tiebreak on the targetlist position so at least it's not\nrandom which ones we pick. Now that we have a list of Aggrefs that are\ndeduplicated in the planner thanks to 0a2bc5d61e it should be fairly\neasy to do that.\n\nDavid\n\n\n", "msg_date": "Fri, 16 Jul 2021 18:04:03 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Fri, 16 Jul 2021 at 18:04, David Rowley <dgrowleyml@gmail.com> wrote:\n> What we maybe could consider instead would be to pick the first Aggref,\n> then look for the most sorted derivative of that, then tally up the\n> number of Aggrefs that can be sorted using those pathkeys, then repeat\n> that process for the remaining Aggrefs that didn't have the same\n> prefix, then use the pathkeys for the set with the most Aggrefs. We\n> could still tiebreak on the targetlist position so at least it's not\n> random which ones we pick. Now that we have a list of Aggrefs that are\n> deduplicated in the planner thanks to 0a2bc5d61e it should be fairly\n> easy to do that.\n\nI've attached a patch which does as I mention above.\n\nI'm still not sold on whether this is better than just going with the order\nof the first aggregate. The problem might be that a plan could change\nas new aggregates are added to the end of the target list. 
It feels\nlike there might be a bit less control over it than the previous\nversion. Remember that suiting more aggregates is not always better as\nthere might be an index that could provide presorted input for another\nset of aggregates which would overall reduce the number of sorts.\nHowever, maybe it's not too big an issue as for any aggregates that\nare not presorted we're left doing 1 sort per Aggref, so reducing the\nnumber of those might be more important than selecting the order that\nhas an index to support it.\n\nI've left off the 0002 patch this time as I think the lack of JIT\nsupport for DISTINCT was causing the CF bot to fail. I'd quite like\nto confirm that theory.\n\nDavid", "msg_date": "Fri, 16 Jul 2021 22:00:44 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Fri, 16 Jul 2021 at 22:00, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 16 Jul 2021 at 18:04, David Rowley <dgrowleyml@gmail.com> wrote:\n> > What we maybe could consider instead would be to pick the first Aggref\n> > then look for the most sorted derivative of that then tally up the\n> > number of Aggrefs that can be sorted using those pathkeys, then repeat\n> > that process for the remaining Aggrefs that didn't have the same\n> > prefix then use the pathkeys for the set with the most Aggrefs. We\n> > could still tiebreak on the targetlist position so at least it's not\n> > random which ones we pick. Now that we have a list of Aggrefs that are\n> > deduplicated in the planner thanks to 0a2bc5d61e it should be fairly\n> > easy to do that.\n>\n> I've attached a patch which does as I mention above.\n\nLooks like I did a sloppy job of that. 
I messed up the condition in\nstandard_qp_callback() that sets the ORDER BY aggregate pathkeys so\nthat it accidentally set them when there was an unsortable GROUP BY\nclause, as highlighted by the postgres_fdw tests failing. I've now\nadded a comment to explain why the condition is the way it is so that\nI don't forget again.\n\nHere's a cleaned-up version that passes make check-world.\n\nDavid", "msg_date": "Sat, 17 Jul 2021 14:36:09 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "> Looks like I did a sloppy job of that. I messed up the condition in\n> standard_qp_callback() that sets the ORDER BY aggregate pathkeys so\n> that it accidentally set them when there was an unsortable GROUP BY\n> clause, as highlighted by the postgres_fdw tests failing. I've now\n> added a comment to explain why the condition is the way it is so that\n> I don't forget again.\n> \n> Here's a cleaned-up version that passes make check-world.\n> \n\nI've noticed that when the ORDER BY is a grouping key (which to be honest \ndoesn't seem to make much sense to me), the sort key is duplicated, as \ndemonstrated by one of the modified tests (partition_aggregate.sql). \n\nThis leads to additional sort nodes being added when there is no necessity to \ndo so. 
In the case of sort and index paths, the duplicate keys are not \nconsidered; I think the same should apply here.\n\nIt means the logic for appending the order by pathkeys to the existing group \nby pathkeys would ideally need to remove the redundant group by keys from the \norder by keys, considering this example:\n\nregression=# explain select sum(unique1 order by ten, two), sum(unique1 order \nby four), sum(unique1 order by two, four) from tenk2 group by ten;\n QUERY PLAN \n------------------------------------------------------------------------\n GroupAggregate (cost=1109.39..1234.49 rows=10 width=28)\n Group Key: ten\n -> Sort (cost=1109.39..1134.39 rows=10000 width=16)\n Sort Key: ten, ten, two\n -> Seq Scan on tenk2 (cost=0.00..445.00 rows=10000 width=16)\n\n\nWe would ideally like to sort on ten, two, four to satisfy the first and last \naggref at once. Stripping the common prefix (ten) would eliminate this problem. \n\nAlso, existing regression tests cover the first problem (order by a grouping \nkey) but I feel like they should be extended with a case similar to the above \nto check which pathkeys are used in the \"multiple ordered aggregates + group \nby\" cases. 
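Something along these lines (just a sketch reusing tenk2, not an actual test from the patch) could cover that — the thing to verify is that a single Sort node on (ten, two, four) serves the grouping key plus both compatible aggregate ORDER BYs, with no duplicated sort keys:

```sql
explain (costs off)
select sum(unique1 order by two, four), sum(unique1 order by two)
from tenk2 group by ten;
```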
\n\n\n-- \nRonan Dunklau\n\n\n\n\n", "msg_date": "Mon, 19 Jul 2021 08:32:34 +0200", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Mon, 19 Jul 2021 at 18:32, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> It means the logic for appending the order by pathkeys to the existing group\n> by pathkeys would ideally need to remove the redundant group by keys from the\n> order by keys, considering this example:\n>\n> regression=# explain select sum(unique1 order by ten, two), sum(unique1 order\n> by four), sum(unique1 order by two, four) from tenk2 group by ten;\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> GroupAggregate (cost=1109.39..1234.49 rows=10 width=28)\n> Group Key: ten\n> -> Sort (cost=1109.39..1134.39 rows=10000 width=16)\n> Sort Key: ten, ten, two\n> -> Seq Scan on tenk2 (cost=0.00..445.00 rows=10000 width=16)\n>\n>\n> We would ideally like to sort on ten, two, four to satisfy the first and last\n> aggref at once. Stripping the common prefix (ten) would eliminate this problem.\n\nhmm, yeah. That's not great. This comes from the way I'm doing\nlist_concat on the pathkeys from the GROUP BY with the ones from the\nordered aggregates. If it were possible to use\nmake_pathkeys_for_sortclauses() to make these in one go, that would\nfix the problem since pathkey_is_redundant() would skip the 2nd \"ten\".\nUnfortunately, it's not possible to pass the combined list of\nSortGroupClauses to make_pathkeys_for_sortclauses since they're not\nfrom the same targetlist. 
Aggrefs have their own targetlist and the\nSortGroupClauses for the Aggref reference that tlist.\n\nI think to do this we'd need something like pathkeys_append() in\npathkeys.c which had a loop and appended the pathkey only if\npathkey_is_redundant returns false.\n\n> Also, existing regression tests cover the first problem (order by a grouping\n> key) but I feel like they should be extended with a case similar as the above\n> to check which pathkeys are used in the \"multiple ordered aggregates + group\n> by\" cases.\n\nIt does seem like a bit of a weird case to go to a lot of effort to\nmake work, but it would be nice if it did work without having to\ncontort the code too much.\n\nDavid\n\n\n", "msg_date": "Tue, 20 Jul 2021 23:06:40 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Mon, 19 Jul 2021 at 18:32, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> regression=# explain select sum(unique1 order by ten, two), sum(unique1 order\n> by four), sum(unique1 order by two, four) from tenk2 group by ten;\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> GroupAggregate (cost=1109.39..1234.49 rows=10 width=28)\n> Group Key: ten\n> -> Sort (cost=1109.39..1134.39 rows=10000 width=16)\n> Sort Key: ten, ten, two\n> -> Seq Scan on tenk2 (cost=0.00..445.00 rows=10000 width=16)\n>\n>\n> We would ideally like to sort on ten, two, four to satisfy the first and last\n> aggref at once. Stripping the common prefix (ten) would eliminate this problem.\n\nThanks for finding this. I've made a few changes to make this case\nwork as you describe. Please see attached v6 patches.\n\nBecause I had to add additional code to take the GROUP BY pathkeys\ninto account when choosing the best ORDER BY agg pathkeys, the\nfunction that does that became a little bigger. 
To try to reduce the\ncomplexity of it, I got rid of all the processing from the initial\nloop and instead it now uses the logic from the 2nd loop to find the\nbest pathkeys. The reason I'd not done that in the first place was\nbecause I'd thought I could get away without building an additional\nBitmapset for simple cases, but that's probably fairly cheap compared\nto building Pathkeys. With the additional complexity for the GROUP\nBY pathkeys, the extra code seemed not worth it.\n\nThe 0001 patch is the ORDER BY aggregate code. 0002 is to fix up some\nbroken regression tests in postgres_fdw that 0001 caused. It appears\nthat 0001 uncovered a bug in the postgres_fdw code. I've reported\nthat in [1]. If that turns out to be a bug then it'll need to be fixed\nbefore this can progress.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvr4OeC2DBVY--zVP83-K=bYrTD7F8SZDhN4g+pj2f2S-A@mail.gmail.com", "msg_date": "Wed, 21 Jul 2021 14:52:43 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "Le mercredi 21 juillet 2021, 04:52:43 CEST David Rowley a écrit :\n> Thanks for finding this. I've made a few changes to make this case\n> work as you describe. Please see attached v6 patches.\n> \n> Because I had to add additional code to take the GROUP BY pathkeys\n> into account when choosing the best ORDER BY agg pathkeys, the\n> function that does that became a little bigger. To try to reduce the\n> complexity of it, I got rid of all the processing from the initial\n> loop and instead it now uses the logic from the 2nd loop to find the\n> best pathkeys. The reason I'd not done that in the first place was\n> because I'd thought I could get away without building an additional\n> Bitmapset for simple cases, but that's probably fairly cheap compared\n> to building Pathkeys. 
With the additional complexity for the GROUP\n> BY pathkeys, the extra code seemed not worth it.\n> \n> The 0001 patch is the ORDER BY aggregate code. 0002 is to fix up some\n> broken regression tests in postgres_fdw that 0001 caused. It appears\n> that 0001 uncovered a bug in the postgres_fdw code. I've reported\n> that in [1]. If that turns out to be a bug then it'll need to be fixed\n> before this can progress.\n\nI tested the 0001 patch against both HEAD and my proposed bugfix for \npostgres_fdw.\n\nThere is a problem that the ordered aggregate is not pushed down anymore. The \nunderlying Sort node is correctly pushed down though. \n\nThis comes from the fact that postgres_fdw grouping path never contains any \npathkey. Since the cost is fuzzily the same between the pushed-down aggregate \nand the locally performed one, the tie is broken against the pathkeys.\n\nIdeally we would add the group pathkeys to the grouping path, but this would \nadd an additional ORDER BY expression matching the GROUP BY. Moreover, some \ntriaging of the pathkeys would be necessary since we now mix the sort-in-\naggref pathkeys with the group_pathkeys.\n\n-- \nRonan Dunklau\n\n\n\n\n", "msg_date": "Wed, 21 Jul 2021 16:01:03 +0200", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Thu, 22 Jul 2021 at 02:01, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> I tested the 0001 patch against both HEAD and my proposed bugfix for\n> postgres_fdw.\n>\n> There is a problem that the ordered aggregate is not pushed down anymore. The\n> underlying Sort node is correctly pushed down though.\n>\n> This comes from the fact that postgres_fdw grouping path never contains any\n> pathkey. 
Since the cost is fuzzily the same between the pushed-down aggregate\n> and the locally performed one, the tie is broken against the pathkeys.\n\nI think this might be more down to a lack of any penalty cost for\nfetching foreign tuples. Looking at create_foreignscan_path(), I don't\nsee anything that adds anything to account for fetching the tuples\nfrom the foreign server. If there was something like that then there'd\nbe more of a preference to perform the remote aggregation so that\nfewer rows must arrive from the remote server.\n\nI tested by adding: total_cost += cpu_tuple_cost * rows * 100; in\ncreate_foreignscan_path() and I got the plan with the remote\naggregation. That's a fairly large penalty of 1.0 per row. Much bigger\nthan parallel_tuple_cost's default value.\n\nI'm a bit undecided on how much this patch needs to get involved in\nadjusting foreign scan costs. The problem is that we've given the\nexecutor a new path to consider and nobody has done any proper\ncostings for the foreign scan so that it properly prefers paths that\nhave to pull fewer foreign tuples. This is a pretty similar problem\nto what parallel_tuple_cost aims to fix. Also similar to how we had to\nadd APPEND_CPU_COST_MULTIPLIER to have partition-wise aggregates\nprefer grouping at the partition level rather than at the partitioned\ntable level.\n\n> Ideally we would add the group pathkeys to the grouping path, but this would\n> add an additional ORDER BY expression matching the GROUP BY. Moreover, some\n> triaging of the pathkeys would be necessary since we now mix the sort-in-\n> aggref pathkeys with the group_pathkeys.\n\nI think you're talking about passing pathkeys into\ncreate_foreign_upper_path in add_foreign_grouping_paths. If so, I\ndon't really see how it would be safe to add pathkeys to the foreign\ngrouping path. What if the foreign server did a Hash Aggregate? 
The\nrows might come back in any random order.\n\nI kinda think that to fix this properly would need a new foreign\nserver option such as foreign_tuple_cost. I'd feel better about\nsomething like that if some of the people with a vested interest in\nthe FDW code were watching more closely. So far we've not managed to\nentice any of them with the bug report yet, but it's maybe early days\nyet.\n\nDavid\n\n\n", "msg_date": "Thu, 22 Jul 2021 19:38:50 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "Le jeudi 22 juillet 2021, 09:38:50 CEST David Rowley a écrit :\n> On Thu, 22 Jul 2021 at 02:01, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> > I tested the 0001 patch against both HEAD and my proposed bugfix for\n> > postgres_fdw.\n> > \n> > There is a problem that the ordered aggregate is not pushed down anymore.\n> > The underlying Sort node is correctly pushed down though.\n> > \n> > This comes from the fact that postgres_fdw grouping path never contains\n> > any\n> > pathkey. Since the cost is fuzzily the same between the pushed-down\n> > aggregate and the locally performed one, the tie is broken against the\n> > pathkeys.\n> I think this might be more down to a lack of any penalty cost for\n> fetching foreign tuples. Looking at create_foreignscan_path(), I don't\n> see anything that adds anything to account for fetching the tuples\n> from the foreign server. If there was something like that then there'd\n> be more of a preference to perform the remote aggregation so that\n> fewer rows must arrive from the remote server.\n> \n> I tested by adding: total_cost += cpu_tuple_cost * rows * 100; in\n> create_foreignscan_path() and I got the plan with the remote\n> aggregation. That's a fairly large penalty of 1.0 per row. 
Much bigger\n> than parallel_tuple_cost's default value.\n> \n> I'm a bit undecided on how much this patch needs to get involved in\n> adjusting foreign scan costs. The problem is that we've given the\n> executor a new path to consider and nobody has done any proper\n> costings for the foreign scan so that it properly prefers paths that\n> have to pull fewer foreign tuples. This is a pretty similar problem\n> to what parallel_tuple_cost aims to fix. Also similar to how we had to\n> add APPEND_CPU_COST_MULTIPLIER to have partition-wise aggregates\n> prefer grouping at the partition level rather than at the partitioned\n> table level.\n> \n> > Ideally we would add the group pathkeys to the grouping path, but this\n> > would add an additional ORDER BY expression matching the GROUP BY.\n> > Moreover, some triaging of the pathkeys would be necessary since we now\n> > mix the sort-in- aggref pathkeys with the group_pathkeys.\n> \n> I think you're talking about passing pathkeys into\n> create_foreign_upper_path in add_foreign_grouping_paths. If so, I\n> don't really see how it would be safe to add pathkeys to the foreign\n> grouping path. What if the foreign server did a Hash Aggregate? The\n> rows might come back in any random order.\n\nYes, I was suggesting to add a new path with the pathkeys factored in, which \nif chosen over the non-ordered path would result in an additional ORDER BY \nclause to prevent a HashAggregate. But that doesn't seem a good idea after \nall.\n\n> \n> I kinda think that to fix this properly would need a new foreign\n> server option such as foreign_tuple_cost. I'd feel better about\n> something like that with some of the people with a vested interest in\n> the FDW code were watching more closely. So far we've not managed to\n> entice any of them with the bug report yet, but it's maybe early days\n> yet.\n\nWe already have that in the form of fdw_tuple_cost as a server option if I'm \nnot mistaken ? 
It works as expected when the number of tuples is notably \nreduced by the foreign group by.\n\nThe problem arise when the cardinality of the groups is equal to the input's \ncardinality. I think even in that case we should try to use a remote aggregate \nsince it's a computation that will not happen on the local server. I also \nthink we're more likely to have up to date statistics remotely than the ones \ncollected locally on the foreign tables, and the estimated number of groups \nwould be more accurate on the remote side than the local one.\n\n-- \nRonan Dunklau\n\n\n\n\n", "msg_date": "Thu, 22 Jul 2021 10:42:49 +0200", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "Le jeudi 22 juillet 2021, 10:42:49 CET Ronan Dunklau a écrit :\n> Le jeudi 22 juillet 2021, 09:38:50 CEST David Rowley a écrit :\n> > On Thu, 22 Jul 2021 at 02:01, Ronan Dunklau <ronan.dunklau@aiven.io> \nwrote:\n> > > I tested the 0001 patch against both HEAD and my proposed bugfix for\n> > > postgres_fdw.\n> > > \n> > > There is a problem that the ordered aggregate is not pushed down\n> > > anymore.\n> > > The underlying Sort node is correctly pushed down though.\n> > > \n> > > This comes from the fact that postgres_fdw grouping path never contains\n> > > any\n> > > pathkey. Since the cost is fuzzily the same between the pushed-down\n> > > aggregate and the locally performed one, the tie is broken against the\n> > > pathkeys.\n> > \n> > I think this might be more down to a lack of any penalty cost for\n> > fetching foreign tuples. Looking at create_foreignscan_path(), I don't\n> > see anything that adds anything to account for fetching the tuples\n> > from the foreign server. 
If there was something like that then there'd\n> > be more of a preference to perform the remote aggregation so that\n> > fewer rows must arrive from the remote server.\n> > \n> > I tested by adding: total_cost += cpu_tuple_cost * rows * 100; in\n> > create_foreignscan_path() and I got the plan with the remote\n> > aggregation. That's a fairly large penalty of 1.0 per row. Much bigger\n> > than parallel_tuple_cost's default value.\n> > \n> > I'm a bit undecided on how much this patch needs to get involved in\n> > adjusting foreign scan costs. The problem is that we've given the\n> > executor a new path to consider and nobody has done any proper\n> > costings for the foreign scan so that it properly prefers paths that\n> > have to pull fewer foreign tuples. This is a pretty similar problem\n> > to what parallel_tuple_cost aims to fix. Also similar to how we had to\n> > add APPEND_CPU_COST_MULTIPLIER to have partition-wise aggregates\n> > prefer grouping at the partition level rather than at the partitioned\n> > table level.\n> > \n> > > Ideally we would add the group pathkeys to the grouping path, but this\n> > > would add an additional ORDER BY expression matching the GROUP BY.\n> > > Moreover, some triaging of the pathkeys would be necessary since we now\n> > > mix the sort-in- aggref pathkeys with the group_pathkeys.\n> > \n> > I think you're talking about passing pathkeys into\n> > create_foreign_upper_path in add_foreign_grouping_paths. If so, I\n> > don't really see how it would be safe to add pathkeys to the foreign\n> > grouping path. What if the foreign server did a Hash Aggregate? The\n> > rows might come back in any random order.\n> \n> Yes, I was suggesting to add a new path with the pathkeys factored in, which\n> if chosen over the non-ordered path would result in an additional ORDER BY\n> clause to prevent a HashAggregate. 
But that doesn't seem a good idea after\n> all.\n> \n> > I kinda think that to fix this properly would need a new foreign\n> > server option such as foreign_tuple_cost. I'd feel better about\n> > something like that with some of the people with a vested interest in\n> > the FDW code were watching more closely. So far we've not managed to\n> > entice any of them with the bug report yet, but it's maybe early days\n> > yet.\n> \n> We already have that in the form of fdw_tuple_cost as a server option if I'm\n> not mistaken ? It works as expected when the number of tuples is notably\n> reduced by the foreign group by.\n> \n> The problem arise when the cardinality of the groups is equal to the input's\n> cardinality. I think even in that case we should try to use a remote\n> aggregate since it's a computation that will not happen on the local\n> server. I also think we're more likely to have up to date statistics\n> remotely than the ones collected locally on the foreign tables, and the\n> estimated number of groups would be more accurate on the remote side than\n> the local one.\n\nI took some time to toy with this again.\n\nAt first I thought that charging a discount in foreign grouping paths for \nAggref targets (since they are computed remotely) would be a good idea, \nsimilar to what is done for the grouping keys.\n\nBut in the end, it's probably not something we would like to do. Yes, the \ngroup planning will be more accurate on the remote side generally (better \nstatistics than locally for estimating the number of groups) but executing the \ngrouping locally when the number of groups is close to the input's cardinality \n(ex: group by unique_key) gives us a form of parallelism which can actually \nperform better. 
\n\nFor the other cases where there is fewer output than input tuples, that is, \nwhen an actual grouping takes place, adjusting fdw_tuple_cost might be enough \nto tune the behavior to what is desirable.\n\n\n-- \nRonan Dunklau\n\n\n\n\n", "msg_date": "Thu, 04 Nov 2021 08:59:00 +0100", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "This patch is now failing in the sqljson regression test. It looks\nlike it's just the ordering of the elements in json_arrayagg() calls\nwhich may actually be a faulty test that needs more ORDER BY clauses\nrather than any issues with the code. Nonetheless it's something that\nneeds to be addressed before this patch could be applied.\n\nGiven it's gotten some feedback from Ronan and this regression test\nfailure I'll move it to Waiting on Author but we're near the end of\nthe CF and it'll probably be moved forward soon.\n\nOn Thu, 4 Nov 2021 at 04:00, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n>\n> Le jeudi 22 juillet 2021, 10:42:49 CET Ronan Dunklau a écrit :\n> > Le jeudi 22 juillet 2021, 09:38:50 CEST David Rowley a écrit :\n> > > On Thu, 22 Jul 2021 at 02:01, Ronan Dunklau <ronan.dunklau@aiven.io>\n> wrote:\n> > > > I tested the 0001 patch against both HEAD and my proposed bugfix for\n> > > > postgres_fdw.\n> > > >\n> > > > There is a problem that the ordered aggregate is not pushed down\n> > > > anymore.\n> > > > The underlying Sort node is correctly pushed down though.\n> > > >\n> > > > This comes from the fact that postgres_fdw grouping path never contains\n> > > > any\n> > > > pathkey. Since the cost is fuzzily the same between the pushed-down\n> > > > aggregate and the locally performed one, the tie is broken against the\n> > > > pathkeys.\n> > >\n> > > I think this might be more down to a lack of any penalty cost for\n> > > fetching foreign tuples. 
Looking at create_foreignscan_path(), I don't\n> > > see anything that adds anything to account for fetching the tuples\n> > > from the foreign server. If there was something like that then there'd\n> > > be more of a preference to perform the remote aggregation so that\n> > > fewer rows must arrive from the remote server.\n> > >\n> > > I tested by adding: total_cost += cpu_tuple_cost * rows * 100; in\n> > > create_foreignscan_path() and I got the plan with the remote\n> > > aggregation. That's a fairly large penalty of 1.0 per row. Much bigger\n> > > than parallel_tuple_cost's default value.\n> > >\n> > > I'm a bit undecided on how much this patch needs to get involved in\n> > > adjusting foreign scan costs. The problem is that we've given the\n> > > executor a new path to consider and nobody has done any proper\n> > > costings for the foreign scan so that it properly prefers paths that\n> > > have to pull fewer foreign tuples. This is a pretty similar problem\n> > > to what parallel_tuple_cost aims to fix. Also similar to how we had to\n> > > add APPEND_CPU_COST_MULTIPLIER to have partition-wise aggregates\n> > > prefer grouping at the partition level rather than at the partitioned\n> > > table level.\n> > >\n> > > > Ideally we would add the group pathkeys to the grouping path, but this\n> > > > would add an additional ORDER BY expression matching the GROUP BY.\n> > > > Moreover, some triaging of the pathkeys would be necessary since we now\n> > > > mix the sort-in- aggref pathkeys with the group_pathkeys.\n> > >\n> > > I think you're talking about passing pathkeys into\n> > > create_foreign_upper_path in add_foreign_grouping_paths. If so, I\n> > > don't really see how it would be safe to add pathkeys to the foreign\n> > > grouping path. What if the foreign server did a Hash Aggregate? 
The\n> > > rows might come back in any random order.\n> >\n> > Yes, I was suggesting to add a new path with the pathkeys factored in, which\n> > if chosen over the non-ordered path would result in an additional ORDER BY\n> > clause to prevent a HashAggregate. But that doesn't seem a good idea after\n> > all.\n> >\n> > > I kinda think that to fix this properly would need a new foreign\n> > > server option such as foreign_tuple_cost. I'd feel better about\n> > > something like that with some of the people with a vested interest in\n> > > the FDW code were watching more closely. So far we've not managed to\n> > > entice any of them with the bug report yet, but it's maybe early days\n> > > yet.\n> >\n> > We already have that in the form of fdw_tuple_cost as a server option if I'm\n> > not mistaken ? It works as expected when the number of tuples is notably\n> > reduced by the foreign group by.\n> >\n> > The problem arise when the cardinality of the groups is equal to the input's\n> > cardinality. I think even in that case we should try to use a remote\n> > aggregate since it's a computation that will not happen on the local\n> > server. I also think we're more likely to have up to date statistics\n> > remotely than the ones collected locally on the foreign tables, and the\n> > estimated number of groups would be more accurate on the remote side than\n> > the local one.\n>\n> I took some time to toy with this again.\n>\n> At first I thought that charging a discount in foreign grouping paths for\n> Aggref targets (since they are computed remotely) would be a good idea,\n> similar to what is done for the grouping keys.\n>\n> But in the end, it's probably not something we would like to do. 
Yes, the\n> group planning will be more accurate on the remote side generally (better\n> statistics than locally for estimating the number of groups) but executing the\n> grouping locally when the number of groups is close to the input's cardinality\n> (ex: group by unique_key) gives us a form of parallelism which can actually\n> perform better.\n>\n> For the other cases where there is fewer output than input tuples, that is,\n> when an actual grouping takes place, adjusting fdw_tuple_cost might be enough\n> to tune the behavior to what is desirable.\n>\n>\n> --\n> Ronan Dunklau\n>\n>\n>\n>\n\n\n-- \ngreg\n\n\n", "msg_date": "Wed, 30 Mar 2022 13:35:57 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Thu, 31 Mar 2022 at 06:36, Greg Stark <stark@mit.edu> wrote:\n>\n> This patch is now failing in the sqljson regression test. It looks\n> like it's just the ordering of the elements in json_arrayagg() calls\n> which may actually be a faulty test that needs more ORDER BY clauses\n> rather than any issues with the code. Nonetheless it's something that\n> needs to be addressed before this patch could be applied.\n>\n> Given it's gotten some feedback from Ronan and this regression test\n> failure I'll move it to Waiting on Author but we're near the end of\n> the CF and it'll probably be moved forward soon.\n\nThanks for mentioning this and for keeping tabs on it.\n\nThis patch in general is more than there's realistic time for in this\nCF. I'd very much like to get the DISTINCT part working too. Not just\nthe ORDER BY. 
I've pushed this one out to July's CF now.\n\nDavid\n\n\n", "msg_date": "Thu, 7 Apr 2022 16:09:41 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Thu, 4 Nov 2021 at 20:59, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> I took some time to toy with this again.\n>\n> At first I thought that charging a discount in foreign grouping paths for\n> Aggref targets (since they are computed remotely) would be a good idea,\n> similar to what is done for the grouping keys.\n\nI've been working on this patch again. There was a bit of work to do\nto rebase it atop db0d67db2. The problem there was that since this\npatch appends pathkeys to suit ORDER BY / DISTINCT aggregates to the\nquery's group_pathkeys, db0d67db2 goes and tries to rearrange those,\nbut fails to find the SortGroupClause corresponding to the PathKey in\ngroup_pathkeys. I wish the code I came up with to make that work was a\nbit nicer, but what's there at least seems to work. There are a few\nmore making copies of Lists than I'd like.\n\nI've also went and added LLVM support to make JIT work with the new\nDISTINCT expression evaluation step types.\n\nAlso, James mentioned in [1] about the Merge Join plan change that\nthis patch was causing in an existing test. I looked into that and\nfound the cause. The plan change is not really the fault of this\npatch, so I've proposed a fix for to make that work more efficiently\nin [2]. The basics there are that select_outer_pathkeys_for_merge()\npre-dates Incremental Sorts and didn't consider prefixes of the\nquery_pathkeys after matching all the join quals. The patch on that\nthread relaxes that rule and makes this patch produce an Incremental\nSort plan for the query in question.\n\nAnother annoying part of this patch is that I've added an\n\"aggpresorted\" field to Aggref, which the planner sets. 
That's a\nparse node type and it would be nicer not to have the planner mess\naround with those. We maybe could wrap up the Aggrefs in some planner\nstruct and pass those to the executor instead. That would require a\nbit more churn than what I've got in the attached.\n\nI've attached the v7 patch.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAAaqYe-yxXkXVPJkRw1nDA=CJBw28jvhACRyDcU10dAOcdSj=Q@mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAApHDvrtZu0PHVfDPFM4Yx3jNR2Wuwosv+T2zqa7LrhhBr2rRg@mail.gmail.com", "msg_date": "Wed, 20 Jul 2022 17:26:39 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Thu, 4 Nov 2021 at 20:59, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> I took some time to toy with this again.\n>\n> At first I thought that charging a discount in foreign grouping paths for\n> Aggref targets (since they are computed remotely) would be a good idea,\n> similar to what is done for the grouping keys.\n>\n> But in the end, it's probably not something we would like to do. Yes, the\n> group planning will be more accurate on the remote side generally (better\n> statistics than locally for estimating the number of groups) but executing the\n> grouping locally when the number of groups is close to the input's cardinality\n> (ex: group by unique_key) gives us a form of parallelism which can actually\n> perform better.\n>\n> For the other cases where there is fewer output than input tuples, that is,\n> when an actual grouping takes place, adjusting fdw_tuple_cost might be enough\n> to tune the behavior to what is desirable.\n\nI've now looked into this issue. 
With the patched code, the remote\naggregate path loses out in add_path() due to the fact that the local\naggregate path compares fuzzily the same as the remote aggregate path.\nSince the local aggregate path is now fetching the rows from the\nforeign server with a SQL query containing an ORDER BY clause (per my\nchange to query_pathkeys being picked up in\nget_useful_pathkeys_for_relation()), add_path now prefers that path\ndue to it having pathkeys and the remote aggregate query not having\nany (PATHKEYS_BETTER2).\n\nIt seems what's going on is that quite simply the default\nfdw_tuple_cost is unrealistically low. Let's look.\n\n#define DEFAULT_FDW_TUPLE_COST 0.01\n\nWhich is even lower than DEFAULT_PARALLEL_TUPLE_COST (0.1) and the\nsame as cpu_tuple_cost!\n\nAfter some debugging, I see add_path() switches to the, seemingly\nbetter, remote aggregate plan again if I multiple fdw_tuple_cost by\n28. Anything below that sticks to the (inferior) local aggregate plan.\n\nThere's also another problem going on that would make that situation\nbetter. The query planner expects the following query to produce 6\nrows:\n\nSELECT array_agg(\"C 1\" ORDER BY \"C 1\" USING OPERATOR(public.<^) NULLS\nLAST), c2 FROM \"S 1\".\"T 1\" WHERE ((\"C 1\" < 100)) AND ((c2 = 6)) GROUP\nBY c2;\n\nYou might expect the planner to think there'd just be 1 row due to the\n\"c2 = 6\" and \"GROUP BY c2\", but it thinks there will be more than\nthat. If estimate_num_groups() knew about EquivalenceClasses and\nchecked ec_has_const, then it might be able to do better, but it\ndoesn't, so:\n\nGroupAggregate (cost=11.67..11.82 rows=6 width=36)\n\nIf I force that estimate to be 1 row instead of 6, then I only need a\nfdw_tuple_cost to be 12 times the default to get it to switch to the\nremote aggregate plan.\n\nI think we should likely just patch master and change\nDEFAULT_FDW_TUPLE_COST to at the very least 0.2, which is 20x higher\nthan today's setting. 
I'd be open to a much higher setting such as 0.5\n(50x).\n\nDavid\n\n\n", "msg_date": "Fri, 22 Jul 2022 16:00:15 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Wed, Jul 20, 2022 at 1:27 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> I've been working on this patch again. There was a bit of work to do\n> to rebase it atop db0d67db2. The problem there was that since this\n> patch appends pathkeys to suit ORDER BY / DISTINCT aggregates to the\n> query's group_pathkeys, db0d67db2 goes and tries to rearrange those,\n> but fails to find the SortGroupClause corresponding to the PathKey in\n> group_pathkeys. I wish the code I came up with to make that work was a\n> bit nicer, but what's there at least seems to work. There are a few\n> more making copies of Lists than I'd like.\n\n\nWe may need to do more checks when adding members to 'aggindexes' to\nrecord we've found pathkeys for an aggregate, because 'currpathkeys' may\ninclude pathkeys for some later aggregates. I can see this problem with\nthe query below:\n\n select max(b order by b), max(a order by a) from t group by a;\n\nWhen processing the first aggregate, we compose the 'currpathkeys' as\n{a, b} and mark this aggregate in 'aggindexes'. When it comes to the\nsecond aggregate, we compose its pathkeys as {a} and decide that it is\nnot stronger than 'currpathkeys'. So the second aggregate is not\nrecorded in 'aggindexes'. As a result, we fail to mark aggpresorted for\nthe second aggregate.\n\nThanks\nRichard", "msg_date": "Fri, 22 Jul 2022 17:33:24 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Fri, 22 Jul 2022 at 21:33, Richard Guo <guofenglinux@gmail.com> wrote:\n> I can see this problem with\n> the query below:\n>\n> select max(b order by b), max(a order by a) from t group by a;\n>\n> When processing the first aggregate, we compose the 'currpathkeys' as\n> {a, b} and mark this aggregate in 'aggindexes'. When it comes to the\n> second aggregate, we compose its pathkeys as {a} and decide that it is\n> not stronger than 'currpathkeys'. So the second aggregate is not\n> recorded in 'aggindexes'. As a result, we fail to mark aggpresorted for\n> the second aggregate.\n\nYeah, you're right. 
I have a missing check to see if currpathkeys are\nbetter than the pathkeys for the current aggregate. In your example\ncase we'd have still processed the 2nd aggregate the old way instead\nof realising we could take the new pre-sorted path for faster\nprocessing.\n\nI've adjusted that in the attached to make it properly include the\ncase where currpathkeys are better.\n\nThanks for the review.\n\nDavid", "msg_date": "Tue, 26 Jul 2022 11:38:44 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Mon, Jul 25, 2022 at 4:39 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Fri, 22 Jul 2022 at 21:33, Richard Guo <guofenglinux@gmail.com> wrote:\n> > I can see this problem with\n> > the query below:\n> >\n> > select max(b order by b), max(a order by a) from t group by a;\n> >\n> > When processing the first aggregate, we compose the 'currpathkeys' as\n> > {a, b} and mark this aggregate in 'aggindexes'. When it comes to the\n> > second aggregate, we compose its pathkeys as {a} and decide that it is\n> > not stronger than 'currpathkeys'. So the second aggregate is not\n> > recorded in 'aggindexes'. As a result, we fail to mark aggpresorted for\n> > the second aggregate.\n>\n> Yeah, you're right. I have a missing check to see if currpathkeys are\n> better than the pathkeys for the current aggregate. 
In your example\n> case we'd have still processed the 2nd aggregate the old way instead\n> of realising we could take the new pre-sorted path for faster\n> processing.\n>\n> I've adjusted that in the attached to make it properly include the\n> case where currpathkeys are better.\n>\n> Thanks for the review.\n>\n> David\n>\nHi,\n\nsort order the the planner chooses is simply : there is duplicate `the`\n\n+ /* mark this aggregate is covered by 'currpathkeys'\n*/\n\nis covered by -> as covered by\n\nCheers
", "msg_date": "Mon, 25 Jul 2022 17:07:31 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Tue, Jul 26, 2022 at 7:38 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Fri, 22 Jul 2022 at 21:33, Richard Guo <guofenglinux@gmail.com> wrote:\n> > I can see this problem with\n> > the query below:\n> >\n> >     select max(b order by b), max(a order by a) from t group by a;\n> >\n> > When processing the first aggregate, we compose the 'currpathkeys' as\n> > {a, b} and mark this aggregate in 'aggindexes'. When it comes to the\n> > second aggregate, we compose its pathkeys as {a} and decide that it is\n> > not stronger than 'currpathkeys'. So the second aggregate is not\n> > recorded in 'aggindexes'. As a result, we fail to mark aggpresorted for\n> > the second aggregate.\n>\n> Yeah, you're right. I have a missing check to see if currpathkeys are\n> better than the pathkeys for the current aggregate. In your example\n> case we'd have still processed the 2nd aggregate the old way instead\n> of realising we could take the new pre-sorted path for faster\n> processing.\n>\n> I've adjusted that in the attached to make it properly include the\n> case where currpathkeys are better.\n\n\nThanks. 
Verified problem is solved in v8 patch.\n\nAlso I'm wondering if it's possible to take into consideration the\nordering indicated by existing indexes when determining the pathkeys. So\nthat for the query below we can avoid the Incremental Sort node if we\nconsider that there is an index on t(a, c):\n\n# explain (costs off) select max(b order by b), max(c order by c) from t\ngroup by a;\n                 QUERY PLAN\n---------------------------------------------\n GroupAggregate\n   Group Key: a\n   ->  Incremental Sort\n         Sort Key: a, b\n         Presorted Key: a\n         ->  Index Scan using t_a_c_idx on t\n(6 rows)\n\nThanks\nRichard
", "msg_date": "Tue, 26 Jul 2022 15:39:25 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Tue, 26 Jul 2022 at 12:01, Zhihong Yu <zyu@yugabyte.com> wrote:\n> sort order the the planner chooses is simply : there is duplicate `the`\n\nI think the first \"the\" should be \"that\"\n\n> + /* mark this aggregate is covered by 'currpathkeys' */\n>\n> is covered by -> as covered by\n\nI think it was shortened from \"mark that this aggregate\", but I\ndropped \"that\" to get the comment to fit on a single line. Swapping\n\"is\" for \"as\" makes it better. Thanks.\n\nDavid\n\n\n", "msg_date": "Wed, 27 Jul 2022 10:37:26 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Tue, 26 Jul 2022 at 19:39, Richard Guo <guofenglinux@gmail.com> wrote:\n> Also I'm wondering if it's possible to take into consideration the\n> ordering indicated by existing indexes when determining the pathkeys. 
So\n> that for the query below we can avoid the Incremental Sort node if we\n> consider that there is an index on t(a, c):\n>\n> # explain (costs off) select max(b order by b), max(c order by c) from t group by a;\n> QUERY PLAN\n> ---------------------------------------------\n> GroupAggregate\n> Group Key: a\n> -> Incremental Sort\n> Sort Key: a, b\n> Presorted Key: a\n> -> Index Scan using t_a_c_idx on t\n> (6 rows)\n\nThat would be nice but I'm not going to add anything to this patch\nwhich does anything like that. I think the patch, as it is, is a good\nmeaningful step forward to improve the performance of ordered\naggregates.\n\nThere are other things in the planner that could gain from what you\ntalk about. For example, choosing the evaluation order of WindowFuncs.\nPerhaps it would be better to try to tackle those two problems\ntogether rather than try to sneak something half-baked along with this\npatch.\n\nDavid\n\n\n", "msg_date": "Wed, 27 Jul 2022 10:45:44 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Wed, Jul 27, 2022 at 6:46 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 26 Jul 2022 at 19:39, Richard Guo <guofenglinux@gmail.com> wrote:\n> > Also I'm wondering if it's possible to take into consideration the\n> > ordering indicated by existing indexes when determining the pathkeys. 
So\n> > that for the query below we can avoid the Incremental Sort node if we\n> > consider that there is an index on t(a, c):\n> >\n> > # explain (costs off) select max(b order by b), max(c order by c) from t\n> group by a;\n> > QUERY PLAN\n> > ---------------------------------------------\n> > GroupAggregate\n> > Group Key: a\n> > -> Incremental Sort\n> > Sort Key: a, b\n> > Presorted Key: a\n> > -> Index Scan using t_a_c_idx on t\n> > (6 rows)\n>\n> That would be nice but I'm not going to add anything to this patch\n> which does anything like that. I think the patch, as it is, is a good\n> meaningful step forward to improve the performance of ordered\n> aggregates.\n\n\nConcur with that.\n\n\n> There are other things in the planner that could gain from what you\n> talk about. For example, choosing the evaluation order of WindowFuncs.\n> Perhaps it would be better to try to tackle those two problems\n> together rather than try to sneak something half-baked along with this\n> patch.\n\n\nThat makes sense. The patch looks in a good shape to me in this part.\n\nThanks\nRichard
", "msg_date": "Wed, 27 Jul 2022 11:16:36 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Wed, 27 Jul 2022 at 15:16, Richard Guo <guofenglinux@gmail.com> wrote:\n> That makes sense. The patch looks in a good shape to me in this part.\n\nThanks for giving it another look.\n\nI'm also quite happy with the patch now. The 2 plan changes are\nexplained. I have a patch on another thread [1] for the change in the\nMerge Join plan. I'd like to consider that separately from this\npatch.\n\nThe postgres_fdw changes are explained in [2]. 
This can be fixed by\nsetting fdw_tuple_cost to something more realistic in the foreign\nserver settings on the test.\n\nI'd like to take a serious look at pushing this patch on the first few\ndays of August, so if anyone is following along here that might have\nobjections, can you do so before then?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvrtZu0PHVfDPFM4Yx3jNR2Wuwosv+T2zqa7LrhhBr2rRg@mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAApHDvpXiXLxg4TsA8P_4etnuGQqAAbHWEOM4hGe=DCaXmi_jA@mail.gmail.com\n\n\n", "msg_date": "Fri, 29 Jul 2022 06:49:53 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I'd like to take a serious look at pushing this patch on the first few\n> days of August, so if anyone is following along here that might have\n> objections, can you do so before then?\n\nAre you going to push the other patch (adjusting\nselect_outer_pathkeys_for_merge) first, so that we can see the residual\nplan changes that this patch creates? I'm not entirely comfortable\nwith the regression test changes as posted. Likewise, it might be\nbetter to fix DEFAULT_FDW_TUPLE_COST beforehand, to detangle what\nthe effects of that are.\n\nAlso, I think it's bad style to rely on aggpresorted defaulting to false.\nYou should explicitly initialize it anywhere that an Aggref node is\nconstructed. 
It looks like there are just two places to fix\n(parse_expr.c and parse_func.c).\n\nNothing else jumped out at me in a quick scan.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 31 Jul 2022 11:49:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Mon, 1 Aug 2022 at 03:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Are you going to push the other patch (adjusting\n> select_outer_pathkeys_for_merge) first, so that we can see the residual\n> plan changes that this patch creates? I'm not entirely comfortable\n> with the regression test changes as posted.\n\nYes, I pushed that earlier.\n\n> Likewise, it might be\n> better to fix DEFAULT_FDW_TUPLE_COST beforehand, to detangle what\n> the effects of that are.\n\nI chatted to Andres and Thomas about this last week and their view\nmade me think it might not be quite as clear-cut as \"just bump it up a\nbunch because it's ridiculously low\" that I had in mind. They\nmentioned about file_fdw and another one that appears to work on\nmmapped segments, which I don't recall if any names were mentioned.\nCertainly that's not a reason not to change it, but it's not quite as\nclear-cut as I thought. I'll open a thread with some reasonable\nevidence to get a topic going and see where we end up. In the\nmeantime I've just coded it to do a temporary adjustment to the\nfdw_tuple_cost foreign server setting just before the test in\nquestion.\n\n> Also, I think it's bad style to rely on aggpresorted defaulting to false.\n> You should explicitly initialize it anywhere that an Aggref node is\n> constructed. It looks like there are just two places to fix\n> (parse_expr.c and parse_func.c).\n\nOoops. I'm normally good at remembering that. Not this time!\n\n> Nothing else jumped out at me in a quick scan.\n\nThanks for the quick scan. I did another few myself and adjusted a\nsmall number of things. 
Mostly comments and using things like\nlfirst_node and list_nth_node instead of lfirst and list_nth with a\ncast.\n\nI've now pushed the patch.\n\nThank you to everyone who looked at this.\n\nDavid\n\n\n", "msg_date": "Tue, 2 Aug 2022 23:21:04 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Mon, 1 Aug 2022 at 03:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Likewise, it might be\n>> better to fix DEFAULT_FDW_TUPLE_COST beforehand, to detangle what\n>> the effects of that are.\n\n> I chatted to Andres and Thomas about this last week and their view\n> made me think it might not be quite as clear-cut as \"just bump it up a\n> bunch because it's ridiculously low\" that I had in mind. They\n> mentioned about file_fdw and another one that appears to work on\n> mmapped segments, which I don't recall if any names were mentioned.\n\nUm ... DEFAULT_FDW_TUPLE_COST is postgres_fdw-specific, so I do not\nsee what connection some other FDW would have to it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Aug 2022 09:19:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Wed, 3 Aug 2022 at 01:19, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I chatted to Andres and Thomas about this last week and their view\n> > made me think it might not be quite as clear-cut as \"just bump it up a\n> > bunch because it's ridiculously low\" that I had in mind. They\n> > mentioned about file_fdw and another one that appears to work on\n> > mmapped segments, which I don't recall if any names were mentioned.\n>\n> Um ... 
DEFAULT_FDW_TUPLE_COST is postgres_fdw-specific, so I do not\n> see what connection some other FDW would have to it.\n\nI should have devoted more brain cells to that one.\n\nAnyway, I started a thread at [1].\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvopVjjfh5c1Ed2HRvDdfom2dEpMwwiu5-f1AnmYprJngA@mail.gmail.com\n\n\n", "msg_date": "Wed, 3 Aug 2022 02:59:05 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": ">\n>\n> Hi, David:\n\nI was looking at the final patch and noticed that setno field\nin agg_presorted_distinctcheck struct is never used.\n\nLooks like it was copied from neighboring struct.\n\nCan you take a look at the patch ?\n\nThanks\n\n>", "msg_date": "Tue, 2 Aug 2022 11:02:51 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Tue, Aug 2, 2022 at 11:02 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>> Hi, David:\n>\n> I was looking at the final patch and noticed that setno field\n> in agg_presorted_distinctcheck struct is never used.\n>\n> Looks like it was copied from neighboring struct.\n>\n> Can you take a look at the patch ?\n>\n> Thanks\n>\n>>\n>\n> Looks like setoff field is not used either.\n\nCheers", "msg_date": "Tue, 2 Aug 2022 12:38:22 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Wed, 3 Aug 2022 at 07:31, Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Tue, Aug 2, 2022 at 11:02 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>> I was looking at the final patch and noticed that setno field in agg_presorted_distinctcheck struct is never used.\n\n> Looks like setoff field is not used either.\n\nThanks for the report. 
It seems transno was unused too.\n\nI just pushed a commit to remove all 3.\n\nDavid\n\n\n", "msg_date": "Wed, 3 Aug 2022 09:48:49 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Tue, Aug 02, 2022 at 11:21:04PM +1200, David Rowley wrote:\n> I've now pushed the patch.\n\nI've not studied the patch at all.\n\nBut in a few places, it removes the locally-computed group_pathkeys:\n\n- List *group_pathkeys = root->group_pathkeys;\n\nHowever it doesn't do that here:\n\n /*\n * Instead of operating directly on the input relation, we can\n * consider finalizing a partially aggregated path.\n */\n if (partially_grouped_rel != NULL)\n {\n foreach(lc, partially_grouped_rel->pathlist)\n {\n ListCell *lc2;\n Path *path = (Path *) lfirst(lc);\n Path *path_original = path;\n \n List *pathkey_orderings = NIL;\n \n List *group_pathkeys = root->group_pathkeys;\n\nI noticed because that creates a new shadow variable, which seems accidental.\n\nmake src/backend/optimizer/plan/planner.o COPT=-Wshadow=compatible-local\n\nsrc/backend/optimizer/plan/planner.c:6642:14: warning: declaration of ‘group_pathkeys’ shadows a previous local [-Wshadow=compatible-local]\n 6642 | List *group_pathkeys = root->group_pathkeys;\n | ^~~~~~~~~~~~~~\nsrc/backend/optimizer/plan/planner.c:6438:12: note: shadowed declaration is here\n 6438 | List *group_pathkeys;\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 16 Aug 2022 20:57:55 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Wed, 17 Aug 2022 at 13:57, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> But in a few places, it removes the locally-computed group_pathkeys:\n>\n> - List *group_pathkeys = root->group_pathkeys;\n\n> I noticed because that creates a new shadow variable, which seems 
accidental.\n\nThanks for the report.\n\nI've just pushed a fix for this that basically just removes the line\nyou quoted. Really I should have been using the version of\ngroup_pathkeys that stripped off the pathkeys from the ORDER BY /\nDISTINCT aggregates that is calculated earlier in that function. In\npractice, there was no actual bug here as the wrong variable was only\nbeing used in the code path that was handling partial paths. We never\ncreate any partial paths when there are aggregates with ORDER BY /\nDISTINCT clauses, so in that code path, the two versions of the\ngroup_pathkeys variable would have always been set to the same thing.\n\nIt makes sense just to get rid of the shadowed variable since the\nvalue of it will be the same anyway.\n\nDavid\n\n\n", "msg_date": "Thu, 18 Aug 2022 11:38:25 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "Hello,\n\nWhile playing with the patch I found a situation where the performance \nmay be degraded compared to previous versions.\n\nThe test case below.\nIf you create a proper index for the query (a,c), version 16 wins. 
On my \nnotebook, the query runs ~50% faster.\nBut if there is no index (a,c), but only (a,b), in previous versions the \nplanner uses it, but with this patch a full table scan is selected.\n\n\ncreate table t (a text, b text, c text);\ninsert into t (a,b,c) select x,y,x from generate_series(1,100) as x, \ngenerate_series(1,10000) y;\ncreate index on t (a,b);\nvacuum analyze t;\n\nexplain (analyze, buffers)\nselect a, array_agg(c order by c) from t group by a;\n\n\nv 14.5\n                                                              QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n  GroupAggregate  (cost=0.42..46587.76 rows=100 width=34) (actual \ntime=3.077..351.526 rows=100 loops=1)\n    Group Key: a\n    Buffers: shared hit=193387 read=2745\n    ->  Index Scan using t_a_b_idx on t  (cost=0.42..41586.51 \nrows=1000000 width=4) (actual time=0.014..155.095 rows=1000000 loops=1)\n          Buffers: shared hit=193387 read=2745\n  Planning:\n    Buffers: shared hit=9\n  Planning Time: 0.059 ms\n  Execution Time: 351.581 ms\n(9 rows)\n\n\nv 16\n                                                        QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n  GroupAggregate  (cost=128728.34..136229.59 rows=100 width=34) (actual \ntime=262.930..572.915 rows=100 loops=1)\n    Group Key: a\n    Buffers: shared hit=5396, temp read=1950 written=1964\n    ->  Sort  (cost=128728.34..131228.34 rows=1000000 width=4) (actual \ntime=259.423..434.105 rows=1000000 loops=1)\n          Sort Key: a, c\n          Sort Method: external merge  Disk: 15600kB\n          Buffers: shared hit=5396, temp read=1950 written=1964\n          ->  Seq Scan on t  (cost=0.00..15396.00 rows=1000000 width=4) \n(actual time=0.005..84.104 rows=1000000 loops=1)\n                Buffers: shared hit=5396\n  Planning:\n    Buffers: shared 
hit=9\n  Planning Time: 0.055 ms\n  Execution Time: 575.146 ms\n(13 rows)\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Sat, 5 Nov 2022 11:51:23 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "Le samedi 5 novembre 2022, 09:51:23 CET Pavel Luzanov a écrit :\n> While playing with the patch I found a situation where the performance\n> may be degraded compared to previous versions.\n> \n> The test case below.\n> If you create a proper index for the query (a,c), version 16 wins. On my\n> notebook, the query runs ~50% faster.\n> But if there is no index (a,c), but only (a,b), in previous versions the\n> planner uses it, but with this patch a full table scan is selected.\n\nHello,\n\nIn your exact use case, the combo incremental-sort + Index scan is evaluated \nto cost more than doing a full sort + seqscan. \n\nIf you try for example to create an index on (b, a) and group by b, you will \nget the expected behaviour:\n\nro=# create index on t (b, a);\nCREATE INDEX\nro=# explain select b, array_agg(c order by c) from t group by b;\n QUERY PLAN \n-----------------------------------------------------------------------------------------\n GroupAggregate (cost=10.64..120926.80 rows=9970 width=36)\n Group Key: b\n -> Incremental Sort (cost=10.64..115802.17 rows=1000000 width=6)\n Sort Key: b, c\n Presorted Key: b\n -> Index Scan using t_b_a_idx on t (cost=0.42..47604.12 \nrows=1000000 width=6)\n(6 rows)\n\nI think we can trace that back to incremental sort being pessimistic about \nit's performance. 
If you try the same query, but with set enable_seqscan = off, \nyou will get a full sort over an index scan:\n\n QUERY PLAN \n-----------------------------------------------------------------------------------------\n GroupAggregate (cost=154944.94..162446.19 rows=100 width=34)\n Group Key: a\n -> Sort (cost=154944.94..157444.94 rows=1000000 width=4)\n Sort Key: a, c\n -> Index Scan using t_a_b_idx on t (cost=0.42..41612.60 \nrows=1000000 width=4)\n(5 rows)\n\n\nThis probably comes from the overly pessimistic behaviour that the number of \ntuples per group will be 1.5 times as much as we should estimate:\n\n\t/*\n\t * Estimate average cost of sorting of one group where presorted \nkeys are\n\t * equal. Incremental sort is sensitive to distribution of tuples \nto the\n\t * groups, where we're relying on quite rough assumptions. Thus, \nwe're\n\t * pessimistic about incremental sort performance and increase its \naverage\n\t * group size by half.\n\t */\n\nI can't see why an incrementalsort could be more expensive than a full sort, \nusing the same presorted path. It looks to me that in that case we should \nalways prefer an incrementalsort. Maybe we should bound incremental sorts cost \nto make sure they are never more expensive than the full sort ?\n\nAlso, prior to this commit I don't think it made a real difference, because \nworst case scenario we would have missed an incremental sort, which we didn't \nhave beforehand. But with this patch, we may actually replace a \"hidden\" \nincremental sort which was done in the agg codepath by a full sort. 
\n\nBest regards,\n\n--\nRonan Dunklau\n\n\n\n\n", "msg_date": "Mon, 07 Nov 2022 15:53:42 +0100", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "Hi,\n\nOn 07.11.2022 17:53, Ronan Dunklau wrote:\n> In your exact use case, the combo incremental-sort + Index scan is evaluated\n> to cost more than doing a full sort + seqscan.\n\n> I think we can trace that back to incremental sort being pessimistic about\n> it's performance. If you try the same query, but with set enable_seqscan = off,\n> you will get a full sort over an index scan:\n>\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------\n> GroupAggregate (cost=154944.94..162446.19 rows=100 width=34)\n> Group Key: a\n> -> Sort (cost=154944.94..157444.94 rows=1000000 width=4)\n> Sort Key: a, c\n> -> Index Scan using t_a_b_idx on t (cost=0.42..41612.60\n> rows=1000000 width=4)\n> (5 rows)\n\nYou are right. By disabling seq scan, we can get this plan. But compare \nit with the plan in v15:\n\npostgres@db(15.0)=# explain\nselect a, array_agg(c order by c) from t group by a;\n                                     QUERY PLAN\n-----------------------------------------------------------------------------------\n  GroupAggregate  (cost=0.42..46667.56 rows=100 width=34)\n    Group Key: a\n    ->  Index Scan using t_a_b_idx on t  (cost=0.42..41666.31 \nrows=1000000 width=4)\n(3 rows)\n\nThe total plan cost in v16 is ~4 times higher, while the index scan cost \nremains the same.\n\n> I can't see why an incrementalsort could be more expensive than a full sort,\n> using the same presorted path.\n\nThe only reason I can see is the number of buffers to read. 
In the plan \nwith incremental sort we read the whole index, ~190000 buffers.\nAnd the plan with seq scan only required ~5000 (I think due to buffer \nring optimization).\n\nPerhaps this behavior is preferable. Especially when many concurrent \nqueries are running. The less buffer cache is busy, the better. But in \nsingle-user mode this query is slower.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Mon, 7 Nov 2022 19:58:50 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "Le lundi 7 novembre 2022, 17:58:50 CET Pavel Luzanov a écrit :\n> > I can't see why an incrementalsort could be more expensive than a full\n> > sort, using the same presorted path.\n> \n> The only reason I can see is the number of buffers to read. In the plan\n> with incremental sort we read the whole index, ~190000 buffers.\n> And the plan with seq scan only required ~5000 (I think due to buffer\n> ring optimization).\n\nWhat I meant here is that disabling seqscans, the planner still chooses a full \nsort over a partial sort. The underlying index is the same, it is just a \nmatter of choosing a Sort node over an IncrementalSort node. This, I think, is \nwrong: I can't see how it could be worse to use an incrementalsort in that \ncase. \n\nIt makes sense to prefer a SeqScan over an IndexScan if you are going to sort \nthe whole table anyway. But in that case we shouldn't. What happened before is \nthat some sort of incremental sort was always chosen, because it was hidden as \nan implementation detail of the agg node. But now it has to compete on a cost \nbasis with the full sort, and that costing is wrong in that case. \n\nMaybe the original costing code for incremental sort was a bit too \npessimistic. 
\n\n--\nRonan Dunklau\n\n\n\n\n", "msg_date": "Mon, 07 Nov 2022 18:30:16 +0100", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On 07.11.2022 20:30, Ronan Dunklau wrote:\n> What I meant here is that disabling seqscans, the planner still chooses a full\n> sort over a partial sort. The underlying index is the same, it is just a\n> matter of choosing a Sort node over an IncrementalSort node. This, I think, is\n> wrong: I can't see how it could be worse to use an incrementalsort in that\n> case.\n\nI finally get your point. And I agree with you.\n\n> Maybe the original costing code for incremental sort was a bit too\n> pessimistic.\n\nIn this query, incremental sorting lost just a little bit in cost: \n164468.95 vs 162504.23.\n\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n  GroupAggregate  (cost=155002.98..162504.23 rows=100 width=34) (actual \ntime=296.591..568.270 rows=100 loops=1)\n    Group Key: a\n    ->  Sort  (cost=155002.98..157502.98 rows=1000000 width=4) (actual \ntime=293.810..454.170 rows=1000000 loops=1)\n          Sort Key: a, c\n          Sort Method: external merge  Disk: 15560kB\n          ->  Index Scan using t_a_b_idx on t (cost=0.42..41670.64 \nrows=1000000 width=4) (actual time=0.021..156.441 rows=1000000 loops=1)\n  Settings: enable_seqscan = 'off'\n  Planning Time: 0.074 ms\n  Execution Time: 569.957 ms\n(9 rows)\n\nset enable_sort=off;\nSET\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n  GroupAggregate  (cost=1457.58..164468.95 rows=100 width=34) (actual \ntime=6.623..408.833 rows=100 loops=1)\n    Group Key: a\n    ->  Incremental Sort  (cost=1457.58..159467.70 rows=1000000 width=4) \n(actual 
time=2.652..298.530 rows=1000000 loops=1)\n          Sort Key: a, c\n          Presorted Key: a\n          Full-sort Groups: 100  Sort Method: quicksort  Average Memory: \n27kB  Peak Memory: 27kB\n          Pre-sorted Groups: 100  Sort Method: quicksort  Average \nMemory: 697kB  Peak Memory: 697kB\n          ->  Index Scan using t_a_b_idx on t (cost=0.42..41670.64 \nrows=1000000 width=4) (actual time=0.011..155.260 rows=1000000 loops=1)\n  Settings: enable_seqscan = 'off', enable_sort = 'off'\n  Planning Time: 0.044 ms\n  Execution Time: 408.867 ms\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Mon, 7 Nov 2022 23:37:55 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Tue, 8 Nov 2022 at 03:53, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> I can't see why an incrementalsort could be more expensive than a full sort,\n> using the same presorted path. It looks to me that in that case we should\n> always prefer an incrementalsort. Maybe we should bound incremental sorts cost\n> to make sure they are never more expensive than the full sort ?\n\nThe only thing that I could think of that would cause incremental sort\nto be more expensive than sort is the tuplesort_reset() calls that are\nperformed between sorts. 
However, I see cost_incremental_sort()\naccounts for those already with:\n\nrun_cost += 2.0 * cpu_tuple_cost * input_groups;\n\nAlso, I see at the top of incremental_sort.sql there's a comment claiming:\n\n-- When we have to sort the entire table, incremental sort will\n-- be slower than plain sort, so it should not be used.\n\nI'm just unable to verify that's true by doing the following:\n\n$ echo \"select * from (select * from tenk1 order by four) t order by\nfour, ten;\" > bench.sql\n\n$ pgbench -n -f bench.sql -T 60 -M prepared regression | grep -E \"^tps\"\ntps = 102.136151 (without initial connection time)\n\n$ # disable sort so that the test performs Sort -> Incremental Sort rather\n$ # than Sort -> Sort\n$ psql -c \"alter system set enable_sort=0;\" regression\n$ psql -c \"select pg_reload_conf();\" regression\n\n$ pgbench -n -f bench.sql -T 60 -M prepared regression | grep -E \"^tps\"\ntps = 112.378761 (without initial connection time)\n\nWhen I disable sort, the plan changes to use Incremental Sort and\nexecution becomes faster, not slower like the comment claims it will.\nPerhaps this was true during the development of Incremental sort and\nthen something was changed to speed things up. I do recall reviewing\nthat patch many years ago and hinting about the invention of\ntuplesort_reset(). I don't recall, but I assume the patch must have\nbeen creating a new tuplesort each group before that.\n\nAlso, I was looking at add_paths_to_grouping_rel() and I saw that if\npresorted_keys > 0 that we'll create both a Sort and Incremental Sort\npath. If we assume Incremental Sort is always better when it can be\ndone, then it seems useless to create the Sort path when Incremental\nSort is possible. When working on making Incremental Sorts work for\nwindow functions I did things that way. Maybe we should just make\nadd_paths_to_grouping_rel() work the same way.\n\nRegarding the 1.5 factor in cost_incremental_sort(), I assume this is\nfor skewed groups. 
Imagine there's 1 huge group and 99 tiny ones.\nHowever, even if that were the case, I imagine the performance would\nstill be around the same performance as the non-incremental variant of\nsort.\n\nI've been playing around with the attached patch which does:\n\n1. Adjusts add_paths_to_grouping_rel so that we don't add a Sort path\nwhen we can add an Incremental Sort path instead. This removes quite a\nfew redundant lines of code.\n2. Removes the * 1.5 fuzz-factor in cost_incremental_sort()\n3. Does various other code tidy stuff in cost_incremental_sort().\n4. Removes the test from incremental_sort.sql that was ensuring the\ninferior Sort -> Sort plan was being used instead of the superior Sort\n-> Incremental Sort plan.\n\nI'm not really that 100% confident in the removal of the * 1.5 thing.\nI wonder if there's some reason we're not considering that might cause\na performance regression if we're to remove it.\n\nDavid", "msg_date": "Tue, 8 Nov 2022 14:31:12 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "\nOn 08.11.2022 04:31, David Rowley wrote:\n> I've been playing around with the attached patch which does:\n>\n> 1. Adjusts add_paths_to_grouping_rel so that we don't add a Sort path\n> when we can add an Incremental Sort path instead. This removes quite a\n> few redundant lines of code.\n> 2. Removes the * 1.5 fuzz-factor in cost_incremental_sort()\n> 3. Does various other code tidy stuff in cost_incremental_sort().\n> 4. 
Removes the test from incremental_sort.sql that was ensuring the\n> inferior Sort -> Sort plan was being used instead of the superior Sort\n> -> Incremental Sort plan.\n\nI can confirm that with this patch, the plan with incremental sorting \nbeats the others.\n\nHere are the test results with my previous example.\n\nScript:\n\ncreate table t (a text, b text, c text);\ninsert into t (a,b,c) select x,y,x from generate_series(1,100) as x, \ngenerate_series(1,10000) y;\ncreate index on t (a);\nvacuum analyze t;\nreset all;\n\nexplain (settings, analyze)\nselect a, array_agg(c order by c) from t group by a;\n\n\\echo set enable_incremental_sort=off;\nset enable_incremental_sort=off;\n\nexplain (settings, analyze)\nselect a, array_agg(c order by c) from t group by a;\n\n\\echo set enable_seqscan=off;\nset enable_seqscan=off;\n\nexplain (settings, analyze)\nselect a, array_agg(c order by c) from t group by a;\n\nScript output:\n\nCREATE TABLE\nINSERT 0 1000000\nCREATE INDEX\nVACUUM\nRESET\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n  GroupAggregate  (cost=957.60..113221.24 rows=100 width=34) (actual \ntime=6.088..381.777 rows=100 loops=1)\n    Group Key: a\n    ->  Incremental Sort  (cost=957.60..108219.99 rows=1000000 width=4) \n(actual time=2.387..272.332 rows=1000000 loops=1)\n          Sort Key: a, c\n          Presorted Key: a\n          Full-sort Groups: 100  Sort Method: quicksort  Average Memory: \n27kB  Peak Memory: 27kB\n          Pre-sorted Groups: 100  Sort Method: quicksort  Average \nMemory: 697kB  Peak Memory: 697kB\n          ->  Index Scan using t_a_idx on t (cost=0.42..29279.42 \nrows=1000000 width=4) (actual time=0.024..128.083 rows=1000000 loops=1)\n  Planning Time: 0.070 ms\n  Execution Time: 381.815 ms\n(10 rows)\n\nset enable_incremental_sort=off;\nSET\n                                                        QUERY 
PLAN\n------------------------------------------------------------------------------------------------------------------------\n  GroupAggregate  (cost=128728.34..136229.59 rows=100 width=34) (actual \ntime=234.044..495.537 rows=100 loops=1)\n    Group Key: a\n    ->  Sort  (cost=128728.34..131228.34 rows=1000000 width=4) (actual \ntime=231.172..383.393 rows=1000000 loops=1)\n          Sort Key: a, c\n          Sort Method: external merge  Disk: 15600kB\n          ->  Seq Scan on t  (cost=0.00..15396.00 rows=1000000 width=4) \n(actual time=0.005..78.189 rows=1000000 loops=1)\n  Settings: enable_incremental_sort = 'off'\n  Planning Time: 0.041 ms\n  Execution Time: 497.230 ms\n(9 rows)\n\nset enable_seqscan=off;\nSET\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n  GroupAggregate  (cost=142611.77..150113.02 rows=100 width=34) (actual \ntime=262.250..527.260 rows=100 loops=1)\n    Group Key: a\n    ->  Sort  (cost=142611.77..145111.77 rows=1000000 width=4) (actual \ntime=259.551..417.154 rows=1000000 loops=1)\n          Sort Key: a, c\n          Sort Method: external merge  Disk: 15560kB\n          ->  Index Scan using t_a_idx on t (cost=0.42..29279.42 \nrows=1000000 width=4) (actual time=0.012..121.995 rows=1000000 loops=1)\n  Settings: enable_incremental_sort = 'off', enable_seqscan = 'off'\n  Planning Time: 0.041 ms\n  Execution Time: 528.950 ms\n(9 rows)\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Tue, 8 Nov 2022 09:39:12 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Tue, Nov 8, 2022 at 9:31 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> I've been playing around with the attached patch which does:\n>\n> 1. 
Adjusts add_paths_to_grouping_rel so that we don't add a Sort path\n> when we can add an Incremental Sort path instead. This removes quite a\n> few redundant lines of code.\n\n\nFor unsorted paths, the original logic here is to explicitly add a Sort\npath only for the cheapest-total path. This patch changes that and may\nadd a Sort path for other paths besides the cheapest-total path. I\nthink this may introduce some unnecessary path candidates.\n\nI think it's good that this patch removes redundant code. ISTM we can\ndo the same when we try to finalize the partially aggregated paths from\npartially_grouped_rel.\n\nThanks\nRichard\n\n", "msg_date": "Tue, 8 Nov 2022 14:51:00 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "Le mardi 8 novembre 2022, 02:31:12 CET David Rowley a écrit :\n> 1. Adjusts add_paths_to_grouping_rel so that we don't add a Sort path\n> when we can add an Incremental Sort path instead. This removes quite a\n> few redundant lines of code.\n\nThis seems sensible\n\n> 2. Removes the * 1.5 fuzz-factor in cost_incremental_sort()\n> 3. 
Does various other code tidy stuff in cost_incremental_sort().\n> 4. Removes the test from incremental_sort.sql that was ensuring the\n> inferior Sort -> Sort plan was being used instead of the superior Sort\n> -> Incremental Sort plan.\n> \n> I'm not really that 100% confident in the removal of the * 1.5 thing.\n> I wonder if there's some reason we're not considering that might cause\n> a performance regression if we're to remove it.\n\nI'm not sure about it either. It seems to me that we were afraid of \nregressions, and having this overcharged just made us miss a new optimization \nwithout changing existing plans. With ordered aggregates, the balance is a bit \ntrickier and we are at risk of either regressing on aggregate plans, or more \ncommon ordered ones.\n\n--\nRonan Dunklau\n\n\n\n\n\n\n", "msg_date": "Tue, 08 Nov 2022 09:43:57 +0100", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Tue, 8 Nov 2022 at 19:51, Richard Guo <guofenglinux@gmail.com> wrote:\n> For unsorted paths, the original logic here is to explicitly add a Sort\n> path only for the cheapest-total path. This patch changes that and may\n> add a Sort path for other paths besides the cheapest-total path. I\n> think this may introduce in some unnecessary path candidates.\n\nYeah, you're right. The patch shouldn't change that. 
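To make the rule being reinstated here concrete — an explicit full Sort is considered only on top of the cheapest-total input path, while an Incremental Sort can be considered on top of any partially presorted path — here is a toy Python model (purely illustrative, not the actual add_paths_to_grouping_rel() code; the names and costs are made up):

```python
# Toy model of the path-generation rule discussed above; not PostgreSQL code.
from dataclasses import dataclass

@dataclass(frozen=True)
class Path:
    name: str
    total_cost: float
    n_presorted_keys: int  # leading pathkeys already in the required order

def sort_candidates(paths):
    """Return (path, node_type) candidates that produce fully sorted output.

    A full Sort is only worth considering on the cheapest-total path, but an
    Incremental Sort can pay off on any path with some presorted keys.
    """
    cheapest = min(paths, key=lambda p: p.total_cost)
    candidates = [(cheapest, "Sort")]
    for p in paths:
        if p.n_presorted_keys > 0:
            candidates.append((p, "IncrementalSort"))
    return candidates
```

For example, with a cheap unordered seq-scan path and a pricier index path presorted on the first key, this model adds a full Sort only on the former and an Incremental Sort on the latter — it never adds a full Sort on a non-cheapest path.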
I've adjusted\nthe attached patch so that part works more like it did before.\n\nv2 attached.\n\nThanks\n\nDavid", "msg_date": "Wed, 9 Nov 2022 14:58:53 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Wed, 9 Nov 2022 at 14:58, David Rowley <dgrowleyml@gmail.com> wrote:\n> v2 attached.\n\nI've been looking at this again and this time around understand why\nthe * 1.5 pessimism factor was included in the incremental sort code.\n\nIf we create a table with a very large skew in the number of rows per\nwhat will be our pre-sorted groups.\n\ncreate table ab (a int not null, b int not null);\ninsert into ab select 0,x from generate_Series(1,999000)x union all\nselect x%1000+1,0 from generate_Series(999001,1000000)x;\n\nHere the 0 group has close to 1 million rows, but the remaining groups\n1-1000 have just 1 row each. The planner only knows there are about\n1001 distinct values in \"a\" and assumes an even distribution of rows\nbetween those values.\n\nWith:\nexplain (analyze, timing off) select * from ab order by a,b;\n\nIn master, the plan is:\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Sort (cost=122490.27..124990.27 rows=1000000 width=8) (actual\nrows=1000000 loops=1)\n Sort Key: a, b\n Sort Method: quicksort Memory: 55827kB\n -> Index Scan using ab_a_idx on ab (cost=0.42..22832.42\nrows=1000000 width=8) (actual rows=1000000 loops=1)\n Planning Time: 0.069 ms\n Execution Time: 155.469 ms\n\nWith the v2 patch it's:\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Incremental Sort (cost=2767.38..109344.55 rows=1000000 width=8)\n(actual rows=1000000 loops=1)\n Sort Key: a, b\n Presorted Key: a\n Full-sort Groups: 33 Sort Method: quicksort Average Memory: 27kB\nPeak 
Memory: 27kB\n Pre-sorted Groups: 1 Sort Method: quicksort Average Memory:\n55795kB Peak Memory: 55795kB\n -> Index Scan using ab_a_idx on ab (cost=0.42..22832.42\nrows=1000000 width=8) (actual rows=1000000 loops=1)\n Planning Time: 0.072 ms\n Execution Time: 163.614 ms\n\nSo there is a performance regression.\n\nSometimes teaching the planner new tricks means that it might use\nthose tricks at a bad time. Normally we put in an off switch for\nthese situations to allow users an escape hatch. We have\nenable_incremental_sort for this. It seems like incremental sort has\ntried to avoid this problem by always considering the same \"Sort\"\npaths that we did prior to incremental sort, and also considers\nincremental sort for pre-sorted paths with the 1.5 pessimism factor.\nThe v2 patch takes away that safety net.\n\nI think what we need to do is: Do our best to give incremental sort\nthe most realistic costs we can and accept that it might choose a\nworse plan in some cases. Users can turn it off if they really have no\nother means to convince the planner it's wrong.\n\nAdditionally, I think we also need to add a GUC such as\nenable_presorted_aggregate. 
People can use that when their Index Scan\n-> Incremental Sort -> Aggregate plan is worse than their previous Seq\nScan -> Sort -> Aggregate plan that they were getting in < 16.\nTurning off enable_incremental_sort alone won't give them the same\naggregate plan that they had in pg15 as we always set the\nquery_pathkeys to request a sort order that will suit the order by /\ndistinct aggregates.\n\nI'll draft up a patch for the enable_presorted_aggregate.\n\nDavid\n\n\n", "msg_date": "Tue, 13 Dec 2022 20:53:40 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Tue, 13 Dec 2022 at 20:53, David Rowley <dgrowleyml@gmail.com> wrote:\n> I think what we need to do is: Do our best to give incremental sort\n> the most realistic costs we can and accept that it might choose a\n> worse plan in some cases. Users can turn it off if they really have no\n> other means to convince the planner it's wrong.\n>\n> Additionally, I think what we also need to add a GUC such as\n> enable_presorted_aggregate. People can use that when their Index Scan\n> -> Incremental Sort -> Aggregate plan is worse than their previous Seq\n> Scan -> Sort -> Aggregate plan that they were getting in < 16.\n> Turning off enable_incremental_sort alone won't give them the same\n> aggregate plan that they had in pg15 as we always set the\n> query_pathkeys to request a sort order that will suit the order by /\n> distinct aggregates.\n>\n> I'll draft up a patch for the enable_presorted_aggregate.\n\nI've attached a patch series for this.\n\nv3-0001 can be ignored here. I've posted about that in [1]. Any\ndiscussion about that patch should take place over there. The patch\nis required to get the 0002 patch to pass isolation check\n\nv3-0002 removes the 1.5 x cost pessimism from incremental sort and\nalso rewrites how we make incremental sort paths. 
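As a rough sanity check on removing the 1.5 pessimism factor, the comparison work of sorting each presorted group versus one full sort can be modelled in a few lines of Python (an idealised n*log2(n) count only — it ignores tuplesort_reset() overhead, memory and I/O, so the numbers here are assumptions rather than measurements):

```python
# Toy comparison-count model of incremental sort vs a single full sort.
import math

def full_sort_comparisons(n):
    # Idealised comparison count for sorting n rows in one pass.
    return n * math.log2(n) if n > 1 else 0.0

def incremental_sort_comparisons(group_sizes):
    # Incremental sort orders each presorted group independently.
    return sum(full_sort_comparisons(g) for g in group_sizes)

n = 1_000_000
skewed = [999_000] + [1] * 1000   # like the "ab" table: one huge group
uniform = [n // 1001] * 1001      # what the planner assumes: even groups

skew_ratio = incremental_sort_comparisons(skewed) / full_sort_comparisons(n)
even_ratio = incremental_sort_comparisons(uniform) / full_sort_comparisons(n)
# Under heavy skew the per-group work converges towards (but does not
# exceed) the full-sort work; with even groups it is roughly half.
```

In this simple model, even the heavily skewed case only approaches the full-sort comparison count rather than exceeding it, which is consistent with not penalising incremental sort by a blanket 1.5x.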
I've now gone\nthrough the remaining places where we create an incremental sort path\nto give all those the same treatment that I'd added to\nadd_paths_to_grouping_rel(). There was a 1 or 2 plan changes in the\nregression tests. One was the isolation test change, which I claim to\nbe a broken test and should be fixed another way. The other was\nperforming a Sort on the cheapest input path which had presorted keys.\nThat plan now uses an Incremental Sort to make use of the presorted\nkeys. I'm happy to see just how much redundant code this removes.\nAbout 200 lines.\n\nv3-0003 adds the enable_presorted_aggregate GUC.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvrbDhObhLV+=U_K_-t+2Av2av1aL9d+2j_3AO-XndaviA@mail.gmail.com", "msg_date": "Fri, 16 Dec 2022 00:10:44 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Fri, 16 Dec 2022 at 00:10, David Rowley <dgrowleyml@gmail.com> wrote:\n> v3-0002 removes the 1.5 x cost pessimism from incremental sort and\n> also rewrites how we make incremental sort paths. I've now gone\n> through the remaining places where we create an incremental sort path\n> to give all those the same treatment that I'd added to\n> add_paths_to_grouping_rel(). There was a 1 or 2 plan changes in the\n> regression tests. One was the isolation test change, which I claim to\n> be a broken test and should be fixed another way. The other was\n> performing a Sort on the cheapest input path which had presorted keys.\n> That plan now uses an Incremental Sort to make use of the presorted\n> keys. I'm happy to see just how much redundant code this removes.\n> About 200 lines.\n\nI've now pushed this patch. Thanks for the report and everyone for all\nthe useful discussion. Also Richard for the review.\n\n> v3-0003 adds the enable_presorted_aggregate GUC.\n\nThis I've moved off to [1]. 
We tend to have lengthy discussions about\nGUCs, what to name them and if we actually need them. I didn't want to\nbury that discussion in this old and already long thread.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvqzuHerD8zN1Qu=d66e3bp1=9iFn09ZiQ3Zug_Phi6yLQ@mail.gmail.com\n\n\n", "msg_date": "Fri, 16 Dec 2022 15:26:20 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "While doing some random testing, I noticed that the following is broken in HEAD:\n\nSELECT COUNT(DISTINCT random()) FROM generate_series(1,10);\n\nERROR: ORDER/GROUP BY expression not found in targetlist\n\nIt appears to have been broken by 1349d279, though I haven't looked at\nthe details.\n\nI'm somewhat surprised that a case as simple as this wasn't covered by\nany pre-existing regression tests.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 10 Jan 2023 10:11:50 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Tue, Jan 10, 2023 at 6:12 PM Dean Rasheed <dean.a.rasheed@gmail.com>\nwrote:\n\n> While doing some random testing, I noticed that the following is broken in\n> HEAD:\n>\n> SELECT COUNT(DISTINCT random()) FROM generate_series(1,10);\n>\n> ERROR: ORDER/GROUP BY expression not found in targetlist\n>\n> It appears to have been broken by 1349d279, though I haven't looked at\n> the details.\n\n\nYeah, this is definitely broken. For this query, we try to sort the\nscan/join path by random() before performing the Aggregate, which is an\noptimization implemented in 1349d2790b. 
However the scan/join plan's\ntlist does not contain random(), which I think we need to fix.\n\nThanks\nRichard\n\n", "msg_date": "Wed, 11 Jan 2023 10:45:51 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Wed, 11 Jan 2023 at 15:46, Richard Guo <guofenglinux@gmail.com> wrote:\n> However the scan/join plan's\n> tlist does not contain random(), which I think we need to fix.\n\nI was wondering if that's true and considered that we don't want to\nevaluate random() for the sort and then again when doing the aggregate\ntransitions, but I see that does not really work before 1349d279, per:\n\npostgres=# set enable_presorted_aggregate=0;\nSET\npostgres=# select string_agg(random()::text, ',' order by random())\nfrom generate_series(1,3);\n string_agg\n-----------------------------------------------------------\n 0.8659110018246505,0.15612649559563474,0.2022878955613403\n(1 row)\n\nI'd have expected those random numbers to be concatenated in ascending order.\n\nRunning: select random() from generate_Series(1,3) order by random();\ngives me the results in the order I'd have expected.\n\nI think whatever the fix is here, we should likely ensure that the\nresults are consistent regardless of which 
Aggrefs are the presorted\nones. Perhaps the easiest way to do that, and to ensure the\nvolatile functions are called the same number of times, would just be\nto never choose Aggrefs with volatile functions when doing\nmake_pathkeys_for_groupagg().\n\nDavid\n\n\n", "msg_date": "Wed, 11 Jan 2023 17:12:45 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I think whatever the fix is here, we should likely ensure that the\n> results are consistent regardless of which Aggrefs are the presorted\n> ones. Perhaps the easiest way to do that, and to ensure we call the\n> volatile functions are called the same number of times would just be\n> to never choose Aggrefs with volatile functions when doing\n> make_pathkeys_for_groupagg().\n\nThere's existing logic in equivclass.c and other places that tries\nto draw very tight lines around what we'll assume about volatile\nsort expressions (pathkeys). It sounds like there's someplace in\nthis recent patch that didn't get that memo.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Jan 2023 23:32:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Wed, 11 Jan 2023 at 17:32, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I think whatever the fix is here, we should likely ensure that the\n> > results are consistent regardless of which Aggrefs are the presorted\n> > ones. 
Perhaps the easiest way to do that, and to ensure we call the\n> > volatile functions are called the same number of times would just be\n> > to never choose Aggrefs with volatile functions when doing\n> > make_pathkeys_for_groupagg().\n>\n> There's existing logic in equivclass.c and other places that tries\n> to draw very tight lines around what we'll assume about volatile\n> sort expressions (pathkeys). It sounds like there's someplace in\n> this recent patch that didn't get that memo.\n\nI'm not sure I did a good job of communicating my thoughts there. What\nI mean is, having volatile functions in the aggregate's ORDER BY or\nDISTINCT clause didn't seem very well behaved prior to the presorted\naggregates patch. If I go and fix the bug with the missing targetlist\nitems, then a query such as:\n\nselect string_agg(random()::text, ',' order by random()) from\ngenerate_series(1,3);\n\nshould start putting the random() numbers in order where it didn't\nprior to 1349d279. Perhaps users might be happy that those are in\norder, however, if they then go and change the query to:\n\nselect sum(a order by a),string_agg(random()::text, ',' order by\nrandom()) from generate_series(1,3);\n\nthen they might become unhappy again that their string_agg is not\nordered the way they specified because the planner opted to sort by\n\"a\" rather than \"random()\" after the initial scan.\n\nI'm wondering if 1349d279 should have just never opted to presort\nAggrefs which have volatile functions so that the existing behaviour\nof unordered output is given always and nobody is fooled into thinking\nthis works correctly only to be disappointed later when they add some\nother aggregate to their query or if we should fix both. 
Certainly,\nit seems much easier to do the former.\n\nDavid\n\n\n", "msg_date": "Wed, 11 Jan 2023 18:23:50 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Wed, Jan 11, 2023 at 12:13 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> postgres=# set enable_presorted_aggregate=0;\n> SET\n> postgres=# select string_agg(random()::text, ',' order by random())\n> from generate_series(1,3);\n> string_agg\n> -----------------------------------------------------------\n> 0.8659110018246505,0.15612649559563474,0.2022878955613403\n> (1 row)\n>\n> I'd have expected those random numbers to be concatenated in ascending\n> order.\n\n\nselect string_agg(\n random()::text, -- position 1\n ','\n order by random() -- position 2\n )\nfrom generate_series(1,3);\n\nI traced this query a bit and found that when executing the aggregation\nthe random() function in the aggregate expression (position 1) and in\nthe order by clause (position 2) are calculated separately. And the\nsorting is performed based on the function results from the order by\nclause. In the final output, what we see is the function results from\nthe aggregate expression. 
Thus we'll notice the output is not sorted.\n\nI'm not sure if this is expected or broken though.\n\nBTW, if we explicitly add ::text for random() in the order by clause, as\n\nselect string_agg(\n        random()::text,\n        ','\n        order by random()::text\n        )\nfrom generate_series(1,3);\n\nThe random() function will be calculated only once for each tuple, and\nwe can get a sorted output.\n\nThanks\nRichard\n\n
", "msg_date": "Wed, 11 Jan 2023 16:47:05 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Wed, 11 Jan 2023 at 05:24, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I'm wondering if 1349d279 should have just never opted to presort\n> Aggrefs which have volatile functions so that the existing behaviour\n> of unordered output is given always and nobody is fooled into thinking\n> this works correctly only to be disappointed later when they add some\n> other aggregate to their query or if we should fix both. Certainly,\n> it seems much easier to do the former.\n>\n\nI took a look at this, and I agree that the best solution is probably\nto have make_pathkeys_for_groupagg() ignore Aggrefs that contain\nvolatile functions. Not only is that the simplest solution, preserving\nthe old behaviour, I think it's required for correctness.\n\nAside from the fact that I don't think such aggregates would benefit\nfrom the optimisation introduced by 1349d279, I think it would be\nincorrect if there was more than one such aggregate having the same\nsort expression, because I think that volatile sorting should be\nevaluated separately for each aggregate. 
For example:\n\nSELECT string_agg(a::text, ',' ORDER BY random()),\n string_agg(a::text, ',' ORDER BY random())\nFROM generate_series(1,3) s(a);\n\n string_agg | string_agg\n------------+------------\n 2,1,3 | 3,2,1\n(1 row)\n\nso pre-sorting wouldn't be right (or at least it would change existing\nbehaviour in a surprising way).\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 17 Jan 2023 00:16:10 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Tue, 17 Jan 2023 at 13:16, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Wed, 11 Jan 2023 at 05:24, David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > I'm wondering if 1349d279 should have just never opted to presort\n> > Aggrefs which have volatile functions so that the existing behaviour\n> > of unordered output is given always and nobody is fooled into thinking\n> > this works correctly only to be disappointed later when they add some\n> > other aggregate to their query or if we should fix both. Certainly,\n> > it seems much easier to do the former.\n> >\n>\n> I took a look at this, and I agree that the best solution is probably\n> to have make_pathkeys_for_groupagg() ignore Aggrefs that contain\n> volatile functions.\n\nThanks for giving that some additional thought. 
I've just pushed a\nfix which adjusts things that way.\n\nDavid\n\n\n", "msg_date": "Tue, 17 Jan 2023 16:39:38 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Tue, Jan 17, 2023 at 11:39 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 17 Jan 2023 at 13:16, Dean Rasheed <dean.a.rasheed@gmail.com>\n> wrote:\n> > I took a look at this, and I agree that the best solution is probably\n> > to have make_pathkeys_for_groupagg() ignore Aggrefs that contain\n> > volatile functions.\n>\n> Thanks for giving that some additional thought. I've just pushed a\n> fix which adjusts things that way.\n\n\nThis makes a lot of sense. I agree that we shouldn't do pre-sorting for\nvolatile sort expressions, especially when there are multiple aggregates\nwith the same volatile sort expression.\n\nNot related to this specific issue, but I find sorting by volatile\nexpression is confusing in different scenarios. Consider the two\nqueries given by David\n\nQuery 1:\nselect string_agg(random()::text, ',' order by random()) from\ngenerate_series(1,3);\n\nQuery 2:\nselect random()::text from generate_series(1,3) order by random();\n\nConsidering the targetlist as Aggref->args or Query->targetList, in both\nqueries we would add an additional TargetEntry (as resjunk column) for\nthe ORDER BY item 'random()', because it's not present in the existing\ntargetlist. Note that the existing TargetEntry for 'random()::text' is\na CoerceViaIO expression which is an explicit cast, so we cannot strip\nit and match it to the ORDER BY item. Thus we would have two random()\nFuncExprs in the final targetlist, for both queries.\n\nIn query 1 we call random() twice for each tuple, one for the original\nTargetEntry 'random()::text', and one for the TargetEntry of the ORDER\nBY item 'random()', and do the sorting according to the second call\nresults. 
Thus we would notice the final output is unsorted because it's\nfrom the first random() call.\n\nHowever, in query 2 we have the ORDER BY item 'random()' in the\nscan/join node's targetlist. And then for the two random() FuncExprs in\nthe final targetlist, set_plan_references would adjust both of them to\nrefer to the outputs of the scan/join node. Thus random() is actually\ncalled only once for each tuple and we would find the final output is\nsorted.\n\nIt seems we fail to keep consistent about the behavior of sorting by\nvolatile expression in the two scenarios.\n\nBTW, I wonder if we should have checked CoercionForm before\nfix_upper_expr_mutator steps into CoerceViaIO->arg to adjust the expr\nthere. It seems parser checks it and only strips implicit coercions\nwhen matching TargetEntry expr to ORDER BY item.\n\nThanks\nRichard\n\n
", "msg_date": "Tue, 17 Jan 2023 14:29:56 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> BTW, I wonder if we should have checked CoercionForm before\n> fix_upper_expr_mutator steps into CoerceViaIO->arg to adjust the expr\n> there.\n\nI will just quote what it says in primnodes.h:\n\n * NB: equal() ignores CoercionForm fields, therefore this *must* not carry\n * any semantically significant information.\n\nIf you think the planner should act differently for different values of\nCoercionForm, you are mistaken. 
Maybe this is evidence of some\nprevious bit of brain-fade, but if so we need to fix that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 17 Jan 2023 02:05:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Tue, Jan 17, 2023 at 3:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > BTW, I wonder if we should have checked CoercionForm before\n> > fix_upper_expr_mutator steps into CoerceViaIO->arg to adjust the expr\n> > there.\n>\n> I will just quote what it says in primnodes.h:\n>\n> * NB: equal() ignores CoercionForm fields, therefore this *must* not carry\n> * any semantically significant information.\n>\n> If you think the planner should act differently for different values of\n> CoercionForm, you are mistaken. Maybe this is evidence of some\n> previous bit of brain-fade, but if so we need to fix that.\n\n\nAccording to this comment in primnodes.h, the planner is not supposed to\ntreat implicit and explicit casts differently. In this case\nset_plan_references is doing its job correctly, to adjust both random()\nFuncExprs in targetlist to refer to subplan's output for query 2. 
As a\nconsequence we would get a sorted output.\n\nI'm still confused that when the same scenario happens with ORDER BY in\nan aggregate function, like in query 1, the result is different in that\nwe would get an unsorted output.\n\nI wonder if we should avoid this inconsistent behavior.\n\nThanks\nRichard\n\n
", "msg_date": "Wed, 18 Jan 2023 17:37:01 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Wed, 18 Jan 2023 at 22:37, Richard Guo <guofenglinux@gmail.com> wrote:\n> I'm still confused that when the same scenario happens with ORDER BY in\n> an aggregate function, like in query 1, the result is different in that\n> we would get an unsorted output.\n>\n> I wonder if we should avoid this inconsistent behavior.\n\nIt certainly seems pretty strange that aggregates with an ORDER BY\nbehave differently from the query's ORDER BY. I'd have expected that\nto be the same. I've not looked to see why there's a difference, but\nsuspect that we thought about how we want it to work for the query's\nORDER BY and when ORDER BY aggregates were added, that behaviour was\nnot considered.\n\nLikely finding the code or location where that code should be would\nhelp us understand if something was just forgotten in the aggregate's\ncase.\n\nIt's probably another question as to if we should be adjusting this\nbehaviour now.\n\nDavid\n\n\n", "msg_date": "Wed, 18 Jan 2023 22:49:10 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "On Wed, 18 Jan 2023 at 09:49, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Wed, 18 Jan 2023 at 22:37, Richard Guo <guofenglinux@gmail.com> wrote:\n> > I'm still confused that when the same scenario happens with ORDER BY in\n> > an aggregate function, like in query 1, the result is different in that\n> > we 
would get an unsorted output.\n> >\n> > I wonder if we should avoid this inconsistent behavior.\n>\n> It certainly seems pretty strange that aggregates with an ORDER BY\n> behave differently from the query's ORDER BY. I'd have expected that\n> to be the same. I've not looked to see why there's a difference, but\n> suspect that we thought about how we want it to work for the query's\n> ORDER BY and when ORDER BY aggregates were added, that behaviour was\n> not considered.\n>\n\nI think the behaviour of an ORDER BY in the query can also be pretty\nsurprising. For example, consider:\n\nSELECT ARRAY[random(), random(), random()]\nFROM generate_series(1, 3);\n\n array\n-------------------------------------------------------------\n {0.2335800863701647,0.14688842754711273,0.2975659224823368}\n {0.10616525384762876,0.8371175798972244,0.2936178886154661}\n {0.21679841321788262,0.5254761982948826,0.7789412240118161}\n(3 rows)\n\nwhich produces 9 different random values, as expected, and compare that to:\n\nSELECT ARRAY[random(), random(), random()]\nFROM generate_series(1, 3)\nORDER BY random();\n\n array\n---------------------------------------------------------------\n {0.01952216253949679,0.01952216253949679,0.01952216253949679}\n {0.6735145595500629,0.6735145595500629,0.6735145595500629}\n {0.9406665780147616,0.9406665780147616,0.9406665780147616}\n(3 rows)\n\nwhich now only has 3 distinct random values. 
It's pretty\ncounterintuitive that adding an ORDER BY clause changes the contents\nof the rows returned, not just their order.\n\nThe trouble is, if we tried to fix that, we'd risk changing some other\nbehaviour that users may have come to rely on.\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 18 Jan 2023 10:52:24 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> I think the behaviour of an ORDER BY in the query can also be pretty\n> surprising.\n\nIndeed. The fundamental question is this: in\n\n> SELECT ARRAY[random(), random(), random()]\n> FROM generate_series(1, 3)\n> ORDER BY random();\n\nare those four occurrences of random() supposed to refer to the\nsame value, or not? This only matters for volatile functions\nof course; with stable or immutable functions, textually-equal\nsubexpressions should have the same value in any given row.\n\nIt is very clear what we are supposed to do for\n\nSELECT random() FROM ... ORDER BY 1;\n\nwhich sadly isn't legal SQL anymore. It gets fuzzy as soon\nas we have\n\nSELECT random() FROM ... ORDER BY random();\n\nYou could make an argument either way for those being the\nsame value or not, but historically we've concluded that\nit's more useful to deem them the same value. Then the\nbehavior you show is not such a surprising extension,\nalthough it could be argued that such matches should only\nextend to identical top-level targetlist entries.\n\n> The trouble is, if we tried to fix that, we'd risk changing some other\n> behaviour that users may have come to rely on.\n\nYeah. 
I'm hesitant to try to adjust semantics here;\nwe're much more likely to get complaints than kudos.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 18 Jan 2023 10:46:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add proper planner support for ORDER BY / DISTINCT aggregates" } ]
[ { "msg_contents": "So far as I can find, just about everyplace that deals with replication\nconnections has slipshod error reporting. An example from worker.c is\n\n LogRepWorkerWalRcvConn = walrcv_connect(MySubscription->conninfo, true,\n MySubscription->name, &err);\n if (LogRepWorkerWalRcvConn == NULL)\n ereport(ERROR,\n (errmsg(\"could not connect to the publisher: %s\", err)));\n\nBecause of the lack of any errcode() call, this failure will be reported\nas XX000 ERRCODE_INTERNAL_ERROR, which is surely not appropriate.\nworker.c is in good company though, because EVERY caller of walrcv_connect\nis equally slipshod.\n\nShall we just use ERRCODE_CONNECTION_FAILURE for these failures, or\nwould it be better to invent another SQLSTATE code? Arguably,\nERRCODE_CONNECTION_FAILURE is meant for failures of client connections;\nbut on the other hand, a replication connection is a sort of client.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Jun 2021 11:42:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "SQLSTATE for replication connection failures" }, { "msg_contents": "On Sat, Jun 12, 2021 at 9:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> So far as I can find, just about everyplace that deals with replication\n> connections has slipshod error reporting. An example from worker.c is\n>\n> LogRepWorkerWalRcvConn = walrcv_connect(MySubscription->conninfo, true,\n> MySubscription->name, &err);\n> if (LogRepWorkerWalRcvConn == NULL)\n> ereport(ERROR,\n> (errmsg(\"could not connect to the publisher: %s\", err)));\n>\n> Because of the lack of any errcode() call, this failure will be reported\n> as XX000 ERRCODE_INTERNAL_ERROR, which is surely not appropriate.\n> worker.c is in good company though, because EVERY caller of walrcv_connect\n> is equally slipshod.\n>\n> Shall we just use ERRCODE_CONNECTION_FAILURE for these failures, or\n> would it be better to invent another SQLSTATE code? 
Arguably,\n> ERRCODE_CONNECTION_FAILURE is meant for failures of client connections;\n> but on the other hand, a replication connection is a sort of client.\n>\n\nYour reasoning sounds good to me. So, +1 for using ERRCODE_CONNECTION_FAILURE.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 14 Jun 2021 14:47:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQLSTATE for replication connection failures" }, { "msg_contents": "On Mon, Jun 14, 2021 at 6:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Jun 12, 2021 at 9:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > So far as I can find, just about everyplace that deals with replication\n> > connections has slipshod error reporting. An example from worker.c is\n> >\n> > LogRepWorkerWalRcvConn = walrcv_connect(MySubscription->conninfo, true,\n> > MySubscription->name, &err);\n> > if (LogRepWorkerWalRcvConn == NULL)\n> > ereport(ERROR,\n> > (errmsg(\"could not connect to the publisher: %s\", err)));\n> >\n> > Because of the lack of any errcode() call, this failure will be reported\n> > as XX000 ERRCODE_INTERNAL_ERROR, which is surely not appropriate.\n> > worker.c is in good company though, because EVERY caller of walrcv_connect\n> > is equally slipshod.\n> >\n> > Shall we just use ERRCODE_CONNECTION_FAILURE for these failures, or\n> > would it be better to invent another SQLSTATE code? Arguably,\n> > ERRCODE_CONNECTION_FAILURE is meant for failures of client connections;\n> > but on the other hand, a replication connection is a sort of client.\n> >\n>\n> Your reasoning sounds good to me. 
So, +1 for using ERRCODE_CONNECTION_FAILURE.\n\n+1\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 16 Jun 2021 15:49:23 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQLSTATE for replication connection failures" }, { "msg_contents": "Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> On Mon, Jun 14, 2021 at 6:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>> Shall we just use ERRCODE_CONNECTION_FAILURE for these failures, or\n>>> would it be better to invent another SQLSTATE code? Arguably,\n>>> ERRCODE_CONNECTION_FAILURE is meant for failures of client connections;\n>>> but on the other hand, a replication connection is a sort of client.\n\n>> Your reasoning sounds good to me. So, +1 for using ERRCODE_CONNECTION_FAILURE.\n\n> +1\n\nDone that way. I also fixed some nearby ereports that were missing\nerrcodes; some of them seemed more like PROTOCOL_VIOLATIONs than\nCONNECTION_FAILUREs, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Jun 2021 11:53:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: SQLSTATE for replication connection failures" } ]
[ { "msg_contents": "This confused me for a minute so took the opportunity to clean it up.\n\nSince: 2594cf0e8c04406ffff19b1651c5a406d376657c\n---\n src/backend/commands/variable.c | 12 +++++++-----\n 1 file changed, 7 insertions(+), 5 deletions(-)\n\ndiff --git a/src/backend/commands/variable.c b/src/backend/commands/variable.c\nindex 0c85679420..56f15d2e37 100644\n--- a/src/backend/commands/variable.c\n+++ b/src/backend/commands/variable.c\n@@ -589,11 +589,12 @@ check_transaction_deferrable(bool *newval, void **extra, GucSource source)\n bool\n check_random_seed(double *newval, void **extra, GucSource source)\n {\n-\t*extra = malloc(sizeof(int));\n-\tif (!*extra)\n+\tbool *doit = *extra = malloc(sizeof(bool));\n+\tif (doit == NULL)\n \t\treturn false;\n+\n \t/* Arm the assign only if source of value is an interactive SET */\n-\t*((int *) *extra) = (source >= PGC_S_INTERACTIVE);\n+\t*doit = (source >= PGC_S_INTERACTIVE);\n \n \treturn true;\n }\n@@ -601,10 +602,11 @@ check_random_seed(double *newval, void **extra, GucSource source)\n void\n assign_random_seed(double newval, void *extra)\n {\n+\tbool *doit = (bool *)extra;\n \t/* We'll do this at most once for any setting of the GUC variable */\n-\tif (*((int *) extra))\n+\tif (*doit)\n \t\tDirectFunctionCall1(setseed, Float8GetDatum(newval));\n-\t*((int *) extra) = 0;\n+\t*doit = false;\n }\n \n const char *\n-- \n2.17.0\n\n\n", "msg_date": "Sat, 12 Jun 2021 11:13:15 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "[PATCH] check_random_seed: use a boolean not an int.." } ]
[ { "msg_contents": "Hi,\n\nI found that pgbench could get stuck when every transaction\ncomes to be skipped and the number of transactions is not limited\nby the -t option.\n\nFor example, when I use a large rate (-R) for throttling and a\nsmall latency limit (-L) value with a duration (-T), pgbench\ngot stuck.\n\n $ pgbench -T 5 -R 100000000 -L 1;\n\nWhen we specify the number of transactions by -t, it doesn't get\nstuck because the number of skipped transactions is counted and\nchecked during the loop. However, the timer expiration is not\nchecked in the loop although it is checked before and after a\nsleep for throttling. \n\nI think it is better to check the timer expiration even in the loop\nof transaction skips and to finish pgbench successfully because we\nshould correctly report how many transactions are processed and\nskipped also in this case, and getting stuck would not be good\nanyway.\n\nI attached a patch for this fix.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Sun, 13 Jun 2021 04:01:51 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Avoid stuck of pbgench due to skipped transactions" }, { "msg_contents": "Hello Yugo-san,\n\n> For example, when I use a large rate (-R) for throttling and a\n> small latency limit (-L) value with a duration (-T), pgbench\n> got stuck.\n>\n> $ pgbench -T 5 -R 100000000 -L 1;\n\nIndeed, it does not get out of the catchup loop for a long time because \neven scheduling takes more time than the expected transaction time!\n\n> I think it is better to check the timer expiration even in the loop\n> of transaction skips and to finish pgbench successfully because we\n> should correctly report how many transactions are processed and\n> skipped also in this case, and getting stuck would not be good\n> anyway.\n>\n> I attached a patch for this fix.\n\nThe patch mostly works for me, and I agree that the bench should not be in \na loop on any 
parameters, even when \"crazy\" parameters are given…\n\nHowever I'm not sure this is the right way to handle this issue.\n\nThe catch-up loop can be dropped and the automaton can loop over itself to \nreschedule. Doing that as the attached fixes this issue and also makes \nprogress reporting work properly in more cases, and reduces the number of \nlines of code. I did not add a test case because time sensitive tests have \nbeen removed (which is too bad, IMHO).\n\n-- \nFabien.", "msg_date": "Sun, 13 Jun 2021 08:56:59 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Avoid stuck of pbgench due to skipped transactions" }, { "msg_contents": "Hello Fabien,\n\nOn Sun, 13 Jun 2021 08:56:59 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> > I attached a patch for this fix.\n> \n> The patch mostly works for me, and I agree that the bench should not be in \n> a loop on any parameters, even when \"crazy\" parameters are given…\n> \n> However I'm not sure this is the right way to handle this issue.\n> \n> The catch-up loop can be dropped and the automaton can loop over itself to \n> reschedule. Doing that as the attached fixes this issue and also makes \n> progress reporting work properly in more cases, and reduces the number of \n> lines of code. I did not add a test case because time sensitive tests have \n> been removed (which is too bad, IMHO).\n\nI agree with your way to fix. 
However, the progress reporting didn't work\nbecause we cannot return from advanceConnectionState to threadRun and just\nbreak the loop.\n\n+\t\t\t\t\t\t/* otherwise loop over PREPARE_THROTTLE */\n \t\t\t\t\t\tbreak;\n\nI attached the fixed patch that uses return instead of break, and I confirmed\nthat this made the progress reporting work properly.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Mon, 14 Jun 2021 11:20:37 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Avoid stuck of pbgench due to skipped transactions" }, { "msg_contents": ">>> I attached a patch for this fix.\n>>\n>> The patch mostly works for me, and I agree that the bench should not be in\n>> a loop on any parameters, even when \"crazy\" parameters are given…\n>>\n>> However I'm not sure this is the right way to handle this issue.\n>>\n>> The catch-up loop can be dropped and the automaton can loop over itself to\n>> reschedule. Doing that as the attached fixes this issue and also makes\n>> progress reporting work properly in more cases, and reduces the number of\n>> lines of code. I did not add a test case because time sensitive tests have\n>> been removed (which is too bad, IMHO).\n>\n> I agree with your way to fix. 
However, the progress reporting didn't work\n> because we cannot return from advanceConnectionState to threadRun and just\n> break the loop.\n>\n> +\t\t\t\t\t\t/* otherwise loop over PREPARE_THROTTLE */\n> \t\t\t\t\t\tbreak;\n>\n> I attached the fixed patch that uses return instead of break, and I confirmed\n> that this made the progress reporting work properly.\n\nI'm hesitating to do such a structural change for a degenerate case linked \nto \"insane\" parameters, as pg is unlikely to reach 100 million tps, ever.\nIt seems to me enough that the command is not blocked in such cases.\n\n-- \nFabien.", "msg_date": "Mon, 14 Jun 2021 08:47:40 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Avoid stuck of pbgench due to skipped transactions" }, { "msg_contents": "On Mon, 14 Jun 2021 08:47:40 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> \n> >>> I attached a patch for this fix.\n> >>\n> >> The patch mostly works for me, and I agree that the bench should not be in\n> >> a loop on any parameters, even when \"crazy\" parameters are given…\n> >>\n> >> However I'm not sure this is the right way to handle this issue.\n> >>\n> >> The catch-up loop can be dropped and the automaton can loop over itself to\n> >> reschedule. Doing that as the attached fixes this issue and also makes\n> >> progress reporting work properly in more cases, and reduces the number of\n> >> lines of code. I did not add a test case because time sensitive tests have\n> >> been removed (which is too bad, IMHO).\n> >\n> > I agree with your way to fix. 
However, the progress reporting didn't work\n> > because we cannot return from advanceConnectionState to threadRun and just\n> > break the loop.\n> >\n> > +\t\t\t\t\t\t/* otherwise loop over PREPARE_THROTTLE */\n> > \t\t\t\t\t\tbreak;\n> >\n> > I attached the fixed patch that uses return instead of break, and I confirmed\n> > that this made the progress reporting work properly.\n> \n> I'm hesitating to do such a structural change for a degenerate case linked \n> to \"insane\" parameters, as pg is unlikely to reach 100 million tps, ever.\n> It seems to me enough that the command is not blocked in such cases.\n\nSure. The change from \"break\" to \"return\" is just for making the progress\nreporting work in the loop, as you mentioned. 
However, my original intention\n> is avoiding stuck in a corner-case where an unrealistic parameter was used, and\n> I agree with you that this change is not so necessary for handling such a\n> special situation. \n\nI attached the v2 patch to clarify that I withdrew the v3 patch.\n\nRegards\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Thu, 17 Jun 2021 01:23:49 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Avoid stuck of pbgench due to skipped transactions" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, failed\nSpec compliant: not tested\nDocumentation: not tested\n\nLooks fine to me, as a way of catching this edge case.", "msg_date": "Tue, 22 Jun 2021 19:22:38 +0000", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid stuck of pbgench due to skipped transactions" }, { "msg_contents": "Hello Greg,\n\nOn Tue, 22 Jun 2021 19:22:38 +0000\nGreg Sabino Mullane <htamfids@gmail.com> wrote:\n\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, failed\n> Implements feature: tested, failed\n> Spec compliant: not tested\n> Documentation: not tested\n> \n> Looks fine to me, as a way of catching this edge case.\n\nThank you for looking into this!\n\n'make installcheck-world' and 'Implements feature' are marked \"failed\",\nbut did you find any problem on this patch?\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Wed, 23 Jun 2021 09:36:58 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Avoid stuck of pbgench due to skipped transactions" }, { "msg_contents": "Apologies, just saw this. I found no problems, those \"failures\" were just\nme missing checkboxes on the commitfest interface. 
+1 on the patch.\n\nCheers,\nGreg\n\n", "msg_date": "Tue, 10 Aug 2021 10:50:20 -0400", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid stuck of pbgench due to skipped transactions" }, { "msg_contents": "On Tue, 10 Aug 2021 10:50:20 -0400\nGreg Sabino Mullane <htamfids@gmail.com> wrote:\n\n> Apologies, just saw this. I found no problems, those \"failures\" were just\n> me missing checkboxes on the commitfest interface. +1 on the patch.\n\nThank you!\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Fri, 13 Aug 2021 01:01:44 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Avoid stuck of pbgench due to skipped transactions" }, { "msg_contents": "\n\nOn 2021/06/17 1:23, Yugo NAGATA wrote:\n> I attached the v2 patch to clarify that I withdrew the v3 patch.\n\nThanks for the patch!\n\n+\t\t\t\t\t\t\t * For very unrealistic rates under -T, some skipped\n+\t\t\t\t\t\t\t * transactions are not counted because the catchup\n+\t\t\t\t\t\t\t * loop is not fast enough just to do the scheduling\n+\t\t\t\t\t\t\t * and counting at the expected speed.\n+\t\t\t\t\t\t\t *\n+\t\t\t\t\t\t\t * We do not bother with such a degenerate case.\n+\t\t\t\t\t\t\t */\n\nISTM that the patch changes pgbench so that it can skip counting\nsome skipped transactions here even for realistic rates under -T.\nOf course, which would happen very rarely. Is this understanding right?\n\nOn the other hand, even without the patch, in the first place, there seems\nno guarantee that all the skipped transactions are counted under -T.\nWhen the timer is exceeded in CSTATE_END_TX, a client ends without\nchecking outstanding skipped transactions.
Therefore the \"issue\" that\nsome skipped transactions are not counted is not one the patch newly introduces.\nSo that behavior change by the patch would be acceptable.\nIs this understanding right?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 3 Sep 2021 21:56:12 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Avoid stuck of pbgench due to skipped transactions" }, { "msg_contents": "\nHello Fujii-san,\n\n> ISTM that the patch changes pgbench so that it can skip counting\n> some skipped transactions here even for realistic rates under -T.\n> Of course, which would happen very rarely. Is this understanding right?\n\nYes. The point is to get out of the scheduling loop when time has expired, \nas soon as it is known, instead of looping there for some possibly long time.\n\n> On the other hand, even without the patch, in the first place, there seems\n> no guarantee that all the skipped transactions are counted under -T.\n> When the timer is exceeded in CSTATE_END_TX, a client ends without\n> checking outstanding skipped transactions.\n\nIndeed. But that should be very few transactions under latency limit.\n\n> Therefore the \"issue\" that some skipped transactions are not counted is \n> not one the patch newly introduces.\n\nYep. The patch counts fewer of them though, because of the early exit \nintroduced in the patch in the scheduling state. Before it could be stuck \nin the \"while (late) { count; schedule; }\" loop.\n\n> So that behavior change by the patch would be acceptable.
Is this \n> understanding right?\n\nI think so.\n\n-- \nFabien.\n\n\n", "msg_date": "Sat, 4 Sep 2021 08:27:00 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Avoid stuck of pbgench due to skipped transactions" }, { "msg_contents": "\n\nOn 2021/09/04 15:27, Fabien COELHO wrote:\n> \n> Hello Fujii-san,\n> \n>> ISTM that the patch changes pgbench so that it can skip counting\n>> some skipped transactions here even for realistic rates under -T.\n>> Of course, which would happen very rarely. Is this understanding right?\n> \n> Yes. The point is to get out of the scheduling loop when time has expired, as soon it is known, instead of looping there for some possibly long time.\n\nThanks for checking my understanding!\n\n+\t\t\t\t\t\t\t * For very unrealistic rates under -T, some skipped\n+\t\t\t\t\t\t\t * transactions are not counted because the catchup\n+\t\t\t\t\t\t\t * loop is not fast enough just to do the scheduling\n+\t\t\t\t\t\t\t * and counting at the expected speed.\n+\t\t\t\t\t\t\t *\n+\t\t\t\t\t\t\t * We do not bother with such a degenerate case.\n\nSo this comment is a bit misleading? What about updating this as follows?\n\n------------------------------\nStop counting skipped transactions under -T as soon as the timer is exceeded.\nBecause otherwise it can take a very long time to count all of them especially\nwhen quite a lot of them happen with unrealistically high rate setting in -R,\nwhich would prevent pgbench from ending immediately. Because of this behavior,\nnote that there is no guarantee that all skipped transactions are counted\nunder -T though there is under -t. This is OK in practice because it's very\nunlikely to happen with realistic setting.\n------------------------------\n\n\n>> So that behavior change by the patch would be acceptable. 
Is this understanding right?\n> \n> I think so.\n\n+1\n\nOne question is: which version do we want to back-patch to?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 7 Sep 2021 01:10:44 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Avoid stuck of pbgench due to skipped transactions" }, { "msg_contents": "\nHello Fujii-san,\n\n> Stop counting skipped transactions under -T as soon as the timer is \n> exceeded. Because otherwise it can take a very long time to count all of \n> them especially when quite a lot of them happen with unrealistically \n> high rate setting in -R, which would prevent pgbench from ending \n> immediately. Because of this behavior, note that there is no guarantee \n> that all skipped transactions are counted under -T though there is under \n> -t. This is OK in practice because it's very unlikely to happen with \n> realistic setting.\n\nOk, I find this text quite clear.\n\n> One question is: which version do we want to back-patch to?\n\nIf we consider it a \"very minor bug fix\" which is triggered by somewhat \nunrealistic options, I'd say 14 & dev, or possibly only dev.\n\n-- \nFabien.\n\n\n", "msg_date": "Tue, 7 Sep 2021 11:24:39 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Avoid stuck of pbgench due to skipped transactions" }, { "msg_contents": "On 2021/09/07 18:24, Fabien COELHO wrote:\n> \n> Hello Fujii-san,\n> \n>> Stop counting skipped transactions under -T as soon as the timer is exceeded. Because otherwise it can take a very long time to count all of them especially when quite a lot of them happen with unrealistically high rate setting in -R, which would prevent pgbench from ending immediately.
Because of this behavior, note that there is no guarantee that all skipped transactions are counted under -T though there is under -t. This is OK in practice because it's very unlikely to happen with realistic setting.\n> \n> Ok, I find this text quite clear.\n\nThanks for the check! So attached is the updated version of the patch.\n\n\n>> One question is: which version do we want to back-patch to?\n> \n> If we consider it a \"very minor bug fix\" which is triggered by somewhat unrealistic options, I'd say 14 & dev, or possibly only dev.\n\nAgreed. Since it's hard to imagine the issue happens in practice,\nwe don't need to bother back-patching to the stable branches.\nSo I'm thinking of committing the patch to 15dev and 14.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 8 Sep 2021 23:40:35 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Avoid stuck of pbgench due to skipped transactions" }, { "msg_contents": "\n\nOn 2021/09/08 23:40, Fujii Masao wrote:\n> Agreed. Since it's hard to imagine the issue happens in practice,\n> we don't need to bother back-patching to the stable branches.\n> So I'm thinking of committing the patch to 15dev and 14.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 10 Sep 2021 01:30:34 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Avoid stuck of pbgench due to skipped transactions" } ]
[ { "msg_contents": "Re: \r\n >> Can a CI collation be ordered upper case first, or is this a limitation of ICU?\r\n\r\n > I don't know the authoritative answer to that, but to me it doesn't make\r\n > sense, since the effect of a case-insensitive collation is to throw away\r\n > the third-level weights, so there is nothing left for \"upper case first\"\r\n > to operate on.\r\n\r\nIt wouldn't make sense for the ICU sort key of a CI collation itself because the sort keys need to be binary equal, but what the collation of interest does is equivalent to adding a secondary \"C\"-collated expression to the ORDER BY clause. For example:\r\n\r\nSELECT ... ORDER BY expr COLLATE ci_as;\r\n\r\nIs ordered as if the query had been written:\r\n\r\nSELECT ... ORDER BY expr COLLATE ci_as, expr COLLATE \"C\";\r\n\r\nRe: \r\n > tailoring rules\r\n >> yes\r\n\r\nIt looks like the relevant API call is ucol_openRules(), \r\n Interface documented here: https://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/ucol_8h.html\r\n example usage from C here: https://android.googlesource.com/platform/external/icu/+/db20b09/source/test/cintltst/citertst.c\r\n\r\nfor example:\r\n\r\n /* Test with an expanding character sequence */\r\n u_uastrcpy(rule, \"&a < b < c/abd < d\");\r\n c2 = ucol_openRules(rule, u_strlen(rule), UCOL_OFF, UCOL_DEFAULT_STRENGTH, NULL, &status);\r\n\r\nand a reordering rule test:\r\n\r\n u_uastrcpy(rule, \"&z < AB\");\r\n coll = ucol_openRules(rule, u_strlen(rule), UCOL_OFF, UCOL_DEFAULT_STRENGTH, NULL, &status);\r\n\r\nthat looks encouraging. It returns a UCollator object, like ucol_open(const char *localeString, ...), so it's an alternative to ucol_open(). One of the parameters is the equivalent of colStrength, so then the question would be, how are the other keyword/value pairs like colCaseFirst, colAlternate, etc. specified via the rules argument? In the same way with the exception of colStrength?\r\n\r\ne.g. 
is \"colAlternate=shifted;&z < AB\" a valid rules string?\r\n\r\nThe ICU documentation says simply:\r\n\r\n\" rules\tA string describing the collation rules. For the syntax of the rules please see users guide.\"\r\n\r\nTransform rules are documented here: http://userguide.icu-project.org/transforms/general/rules\r\n\r\nBut there are no examples of using the keyword/value pairs that may appear in a locale string with the transform rules, and there's no locale argument on ucol_openRules. How can the keyword/value pairs that may appear in the locale string be applied in combination with tailoring rules (with the exception of colStrength)?\r\n\r\n\r\n\r\n\r\n\r\n", "msg_date": "Sat, 12 Jun 2021 19:39:25 +0000", "msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Character expansion with ICU collations" }, { "msg_contents": "I have a proposal for how to support tailoring rules in ICU collations: The ucol_openRules() function is an alternative to the ucol_open() function that PostgreSQL calls today, but it takes the collation strength as one of its parameters, so the locale string would need to be parsed before creating the collator. After the collator is created using either ucol_openRules or ucol_open, the ucol_setAttribute() function may be used to set individual attributes from keyword=value pairs in the locale string as it does now, except that the strength probably can't be changed after opening the collator with ucol_openRules. So the logic in pg_locale.c would need to be reorganized a little bit, but that sounds straightforward.\r\n\r\nOne simple solution would be to have the tailoring rules be specified as a new keyword=value pair, such as colTailoringRules=<rulestring>. Since the <rulestring> may contain single quote characters or PostgreSQL escape characters, any single quote characters or escapes would need to be escaped using PostgreSQL escape rules.
If colTailoringRules is present, colStrength would also be known prior to opening the collator, or would default to tertiary, and we would keep a local flag indicating that we should not process the colStrength keyword again, if specified. \r\n\r\nRepresenting the TailoringRules as just another keyword=value in the locale string means that we don't need any change to the catalog to store it. It's just part of the locale specification. I think we wouldn't even need to bump the catversion.\r\n\r\nAre there any tailoring rules, such as expansions and contractions, that we should disallow? I realize that we don't handle nondeterministic collations in LIKE or regular expression operations as of PG14, but given expr LIKE 'a%' on a database with a UTF-8 encoding and arbitrary tailoring rules that include expansions and contractions, is it still guaranteed that expr must sort BETWEEN 'a' AND ('a' || E'\\uFFFF') ?\r\n\r\n", "msg_date": "Mon, 21 Jun 2021 13:23:38 +0000", "msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Character expansion with ICU collations" } ]
[ { "msg_contents": "Hi,\n\nCurrently we don't allow a sub-transaction to be spawned from inside a\nparallel worker (and also from a leader who is in parallel mode). This\nimposes a restriction that pl/pgsql functions that use an exception block\ncan't be marked parallel safe, even when the exception block is there only\nto catch trivial errors such as divide-by-zero. I tried to look at the\nimplications of removing this sub-transaction restriction, and came up with\nthe attached WIP patch after considering the below points. I may have\nmissed other points, or may have assumed something wrong. So comments are\nwelcome.\n\n- TBLOCK_PARALLEL_INPROGRESS\n\nNow that there can be an in-progress sub-transaction in a parallel worker,\nthe sub-transaction states need to be accounted for. Rather than having new\ntransaction states such as TBLOCK_PARALLEL_SUBINPROGRESS, I removed the\nexisting TBLOCK_PARALLEL_INPROGRESS from the code. At a couple of places\nis_parallel_worker is set if state is TBLOCK_PARALLEL_INPROGRESS. Instead,\nfor now, I have used (ParallelCurrentXids != NULL) to identify if it's a\nworker in a valid transaction state. Maybe we can improve on this.\nIn EndTransactionBlock(), there is a fatal error thrown if it's a\nparallel-worker in-progress. This seems to be a can't-have case. So I have\nremoved this check. Need to further think how we can retain this check.\n\n\n- IsInParallelMode()\n\nOn HEAD, the parallel worker cannot have any sub-transactions, so\nCurrentTransactionState always points to the TopTransactionStateData. And\nwhen ParallelWorkerMain() calls EnterParallelMode(), from that point\nonwards IsInParallelMode() always returns true. 
But with the patch,\nCurrentTransactionState can point to some nest level down below, and in\nthat TransactionState, parallelModeLevel would be 0, so IsInParallelMode()\nwill return false in a sub-transaction, unless some other function happens\nto explicitly call EnterParallelMode().\n\nOne option for making IsInParallelMode() always return true for worker is\nto just check whether the worker is in a transaction (ParallelCurrentXids\n!= NULL). Or else, check only the TopTransactionData->parallelModeLevel.\nStill another option is for the new TransactionState to inherit\nparallelModeLevel from its parent. I chose this option. This avoids\nadditional conditions in IsInParallelMode() specifically for worker.\n\nDoes this inherit-parent-parallelModeLevel option affect the leader code ?\nThe functions calling EnterParallelMode() are : begin_parallel_vacuum,\n_bt_begin_parallel, ParallelWorkerMain, CommitTransaction, ExecutePlan().\nAfter entering Parallel mode, it does not look like a subtransaction will\nbe spawned at these places. If at all it does, on HEAD, the\nIsInParallelMode() will return false, which does not sound right. For all\nthe child transactions, this function should return true. So w.r.t. this,\nin fact inheriting parent's parallelModeLevel looks better.\n\nOperations that are not allowed to run in worker would continue to be\ndisallowed in a worker sub-transaction as well. E.g. assigning new xid,\nheap_insert, etc. These places already are using IsInParallelMode() which\ntakes care of guarding against such operations.\n\nJust for archival ...\nExecutePlan() is called with use_parallel_mode=true when there was a gather\nplan created, in which case it enters Parallel mode.
From here, if the\nbackend happens to start a new subtransaction for some reason, it does\nsound right for the Parallel mode to be true for this sub-transaction,\nalthough I am not sure if there can be such a case.\nIn worker, as far as I understand, ExecutePlan() always gets called with\nuse_parallel_mode=false, because there is no gather plan in the worker. So\nit does not enter Parallel mode. But because the worker is already in\nparallel mode, it does not matter.\n\n\n- List of ParallelContexts (pcxt_list) :\nParallelContext is created only in backends. So there are no implications\nof the pcxt_list w.r.t. parallel workers spawning a subtransaction, because\npcxt_list will always be empty in workers.\n\n\n- ParallelCurrentXids :\n\nA parallel worker always maintains a global flat sorted list of xids which\nrepresent all the xids that are considered as current xids (i.e. the ones\nthat are returned by TransactionIdIsCurrentTransactionId() in a leader). So\nthis global list should continue to work no matter what is the\nsub-transaction nest level, since there won't be new xids created in the\nworker.\n\n- Savepoints :\n\nHaven't considered savepoints. The restriction is retained for savepoints.\n\nThanks\n-Amit Khandekar\nHuawei Technologies", "msg_date": "Mon, 14 Jun 2021 09:50:03 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Relaxing the sub-transaction restriction in parallel query" } ]
[ { "msg_contents": "Hi,\n\nTState has a field called \"conn_duration\" and this is, the comment says,\n\"cumulated connection and deconnection delays\". This value is summed over\nthreads and reported as \"average connection time\" under -C/--connect.\nIf this option is not specified, the value is never used.\n\nHowever, I found that conn_duration is calculated even when -C/--connect\nis not specified, which is wasteful. So we can remove this code as fixed in\nthe attached patch.\n\nIn addition, deconnection delays are not cumulated even under -C/--connect\nin spite of being mentioned in the comment. I also fixed this in the attached patch.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Mon, 14 Jun 2021 15:11:55 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Fix around conn_duration in pgbench" }, { "msg_contents": "Hello Yugo-san,\n\n> TState has a field called \"conn_duration\" and this is, the comment says,\n> \"cumulated connection and deconnection delays\". This value is summed over\n> threads and reported as \"average connection time\" under -C/--connect.\n> If this option is not specified, the value is never used.\n\nYep.\n\n> However, I found that conn_duration is calculated even when -C/--connect\n> is not specified, which is wasteful. So we can remove this code as fixed in\n> the attached patch.\n\nI'm fine with the implied code simplification, but it deserves a comment.\n\n> In addition, deconnection delays are not cumulated even under -C/--connect\n> in spite of being mentioned in the comment. I also fixed this in the attached patch.\n\nI'm fine with that, even if it only concerns is_connect. I've updated the \ncommand to work whether now is initially set or not. I'm unsure whether \nclosing a pg connection actually waits for anything, so probably the \nimpact is rather small anyway.
It cannot be usefully measured when \n!is_connect, because threads do that when they feel like it, whereas on \nconnection we can use barriers to have something which makes sense.\n\nAlso, there is the issue of connection failures: the attached version adds \nan error message and exit for initial connections consistently.\nThis is not done with is_connect, though, and I'm unsure what we should \nreally do.\n\n-- \nFabien.", "msg_date": "Mon, 14 Jun 2021 10:57:07 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "On Mon, 14 Jun 2021 10:57:07 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n \n> > However, I found that conn_duration is calculated even when -C/--connect\n> > is not specified, which is wasteful. So we can remove this code as fixed in\n> > the attached patch.\n> \n> I'm fine with the implied code simplification, but it deserves a comment.\n\nThank you for adding comments!\n \n> > In addition, deconnection delays are not cumulated even under -C/--connect\n> > in spite of being mentioned in the comment. I also fixed this in the attached patch.\n> \n> I'm fine with that, even if it only concerns is_connect. I've updated the \n> command to work whether now is initially set or not. \n\nOk. I agree with your update.
\n \n> Also, there is the issue of connection failures: the attached version adds \n> an error message and exit for initial connections consistently.\n> This is not done with is_connect, though, and I'm unsure what we should \n> really do.\n\nWell, as to connection failures, I think that we should discuss this in the other\nthread [1] where this issue was originally raised or in a new thread because\nwe can discuss this as a separate issue from the originally proposed patch.\n\n[1] https://www.postgresql.org/message-id/flat/TYCPR01MB5870057375ACA8A73099C649F5349%40TYCPR01MB5870.jpnprd01.prod.outlook.com.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Tue, 15 Jun 2021 23:24:00 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nThis patch looks fine to me. master 48cb244fb9\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Tue, 29 Jun 2021 13:21:54 +0000", "msg_from": "Asif Rehman <asifr.rehman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "Hello Asif,\n\nOn Tue, 29 Jun 2021 13:21:54 +0000\nAsif Rehman <asifr.rehman@gmail.com> wrote:\n\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: not tested\n> \n> This patch looks fine to me.
master 48cb244fb9\n> \n> The new status of this patch is: Ready for Committer\n\nThank you for reviewing this patch!\n\nThe previous patch included fixes about connection failures, but this part\nwas moved to another patch discussed in [1].\n\n[1] https://www.postgresql.org/message-id/alpine.DEB.2.22.394.2106181535400.3146194%40pseudo\n\nI attached the updated patch.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Wed, 30 Jun 2021 14:35:37 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "On Wed, 30 Jun 2021 14:35:37 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> Hello Asif,\n> \n> On Tue, 29 Jun 2021 13:21:54 +0000\n> Asif Rehman <asifr.rehman@gmail.com> wrote:\n> \n> > The following review has been posted through the commitfest application:\n> > make installcheck-world: tested, passed\n> > Implements feature: tested, passed\n> > Spec compliant: tested, passed\n> > Documentation: not tested\n> > \n> > This patch looks fine to me. master 48cb244fb9\n> > \n> > The new status of this patch is: Ready for Committer\n> \n> Thank you for reviewing this patch!\n> \n> The previous patch included fixes about connection failures, but this part\n> was moved to another patch discussed in [1].\n> \n> [1] https://www.postgresql.org/message-id/alpine.DEB.2.22.394.2106181535400.3146194%40pseudo\n> \n> I attached the updated patch.\n\nI am sorry but I attached the other patch.
Attached in this post\nis the latest patch.\n\nRegards,\nYugo Nagata\n\n\n> \n> Regards,\n> Yugo Nagata\n> \n> -- \n> Yugo NAGATA <nagata@sraoss.co.jp>\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Wed, 30 Jun 2021 14:43:04 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "\n\nOn 2021/06/30 14:43, Yugo NAGATA wrote:\n> On Wed, 30 Jun 2021 14:35:37 +0900\n> Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> \n>> Hello Asif,\n>>\n>> On Tue, 29 Jun 2021 13:21:54 +0000\n>> Asif Rehman <asifr.rehman@gmail.com> wrote:\n>>\n>>> The following review has been posted through the commitfest application:\n>>> make installcheck-world: tested, passed\n>>> Implements feature: tested, passed\n>>> Spec compliant: tested, passed\n>>> Documentation: not tested\n>>>\n>>> This patch looks fine to me. master 48cb244fb9\n>>>\n>>> The new status of this patch is: Ready for Committer\n>>\n>> Thank you for reviewing this patch!\n>>\n>> The previous patch included fixes about connection failures, but this part\n>> was moved to another patch discussed in [1].\n>>\n>> [1] https://www.postgresql.org/message-id/alpine.DEB.2.22.394.2106181535400.3146194%40pseudo\n>>\n>> I attached the updated patch.\n> \n> I am sorry but I attached the other patch. Attached in this post\n> is the latest patch.\n\n \t\t\tcase CSTATE_FINISHED:\n+\t\t\t\t/* per-thread last disconnection time is not measured */\n\nCould you tell me why we don't need to do this measurement?\n\n\n-\t\t/* no connection delay to record */\n-\t\tthread->conn_duration = 0;\n+\t\t/* connection delay is measured globally between the barriers */\n\nThis comment is really correct?
I was thinking that the measurement is not necessary here because this is the case where -C option is not specified.\n\n\ndone:\n\tstart = pg_time_now();\n\tdisconnect_all(state, nstate);\n\tthread->conn_duration += pg_time_now() - start;\n\nWe should measure the disconnection time here only when -C option specified (i.e., is_connect variable is true)? Though, I'm not sure how much this change is helpful to reduce the performance overhead....\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 27 Jul 2021 03:04:35 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "Hello Fujii-san,\n\nThank you for looking at it.\n\nOn Tue, 27 Jul 2021 03:04:35 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> \t\t\tcase CSTATE_FINISHED:\n> +\t\t\t\t/* per-thread last disconnection time is not measured */\n> \n> Could you tell me why we don't need to do this measurement?\n\nWe don't need to do it because it is already done in CSTATE_END_TX state when\nthe transaction successfully finished. Also, we don't need it when the thread\nis aborted (that is, in CSTATE_ABORTED case) because we can't report complete\nresults anyway in such cases.\n\nI updated the comment.\n \n> -\t\t/* no connection delay to record */\n> -\t\tthread->conn_duration = 0;\n> +\t\t/* connection delay is measured globally between the barriers */\n> \n> This comment is really correct?
I was thinking that the measurement is not necessary here because this is the case where -C option is not specified.\n\nThis comment means that, when -C is not specified, the connection delay is\nmeasured between the barrier point where the benchmark starts\n\n /* READY */\n THREAD_BARRIER_WAIT(&barrier);\n\nand the barrier point where all the threads finish making initial connections.\n\n /* GO */\n THREAD_BARRIER_WAIT(&barrier);\n\n\n> done:\n> \tstart = pg_time_now();\n> \tdisconnect_all(state, nstate);\n> \tthread->conn_duration += pg_time_now() - start;\n>\n> We should measure the disconnection time here only when -C option specified (i.e., is_connect variable is true)? Though, I'm not sure how much this change is helpful to reduce the performance overhead....\n\nYou are right. We are measuring the disconnection time only when -C option is\nspecified, but it is already done at the end of transaction (i.e., CSTATE_END_TX). \nWe need disconnection here only when we get an error. \nTherefore, we don't need the measurement here.\n\nI attached the updated patch.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Tue, 27 Jul 2021 11:02:47 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "\n\nOn 2021/07/27 11:02, Yugo NAGATA wrote:\n> Hello Fujii-san,\n> \n> Thank you for looking at it.\n> \n> On Tue, 27 Jul 2021 03:04:35 +0900\n> Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n>> \t\t\tcase CSTATE_FINISHED:\n>> +\t\t\t\t/* per-thread last disconnection time is not measured */\n>>\n>> Could you tell me why we don't need to do this measurement?\n> \n> We don't need to do it because it is already done in CSTATE_END_TX state when\n> the transaction successfully finished.
Also, we don't need it when the thread\n> is aborted (that is, in CSTATE_ABORTED case) because we can't report complete\n> results anyway in such cases.\n\nUnderstood.\n\n\n> I updated the comment.\n\nThanks!\n\n+\t\t\t\t * Per-thread last disconnection time is not measured because it\n+\t\t\t\t * is already done when the transaction successfully finished.\n+\t\t\t\t * Also, we don't need it when the thread is aborted because we\n+\t\t\t\t * can't report complete results anyway in such cases.\n\nWhat about commenting a bit more explicitly like the following?\n\n--------------------------------------------\nIn CSTATE_FINISHED state, this disconnect_all() is no-op under -C/--connect because all the connections that this thread established should have already been closed at the end of transactions. So we don't need to measure the disconnection delays here.\n\nIn CSTATE_ABORTED state, the measurement is no longer necessary because we cannot report complete results anyways in this case.\n--------------------------------------------\n\n\n> \n>> -\t\t/* no connection delay to record */\n>> -\t\tthread->conn_duration = 0;\n>> +\t\t/* connection delay is measured globally between the barriers */\n>>\n>> This comment is really correct? I was thinking that the measurement is not necessary here because this is the case where -C option is not specified.\n> \n> This comment means that, when -C is not specified, the connection delay is\n> measured between the barrier point where the benchmark starts\n> \n> /* READY */\n> THREAD_BARRIER_WAIT(&barrier);\n> \n> and the barrier point where all the threads finish making initial connections.\n> \n> /* GO */\n> THREAD_BARRIER_WAIT(&barrier);\n\nOk, so you're commenting about the initial connection delay that's\nmeasured when -C is not specified. But I'm not sure if this comment\nhere is really helpful.
Seem rather confusing??\n\n\n> \n> \n>> done:\n>> \tstart = pg_time_now();\n>> \tdisconnect_all(state, nstate);\n>> \tthread->conn_duration += pg_time_now() - start;\n>>\n>> We should measure the disconnection time here only when -C option specified (i.e., is_connect variable is true)? Though, I'm not sure how much this change is helpful to reduce the performance overhead....\n> \n> You are right. We are measuring the disconnection time only when -C option is\n> specified, but it is already done at the end of transaction (i.e., CSTATE_END_TX).\n> We need disconnection here only when we get an error.\n> Therefore, we don't need the measurement here.\n\nOk.\n\nI found another disconnect_all().\n\n\t/* XXX should this be connection time? */\n\tdisconnect_all(state, nclients);\n\nThe measurement is also not necessary here.\nSo the above comment should be removed or updated?\n\n \n> I attached the updated patch.\n\nThanks!\n\nRegards.\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 28 Jul 2021 00:20:21 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "Hello Fujii-san,\n\nOn Wed, 28 Jul 2021 00:20:21 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> \n> \n> On 2021/07/27 11:02, Yugo NAGATA wrote:\n> > Hello Fujii-san,\n> > \n> > Thank you for looking at it.\n> > \n> > On Tue, 27 Jul 2021 03:04:35 +0900\n> > Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n \n> +\t\t\t\t * Per-thread last disconnection time is not measured because it\n> +\t\t\t\t * is already done when the transaction successfully finished.\n> +\t\t\t\t * Also, we don't need it when the thread is aborted because we\n> +\t\t\t\t * can't report complete results anyway in such cases.\n> \n> What about commenting a bit more explicitly like the following?\n> \n> 
--------------------------------------------\n> In CSTATE_FINISHED state, this disconnect_all() is no-op under -C/--connect because all the connections that this thread established should have already been closed at the end of transactions. So we don't need to measure the disconnection delays here.\n> \n> In CSTATE_ABORTED state, the measurement is no longer necessary because we cannot report complete results anyways in this case.\n> --------------------------------------------\n\nThank you for the suggestion. I updated the comment. \n \n> > \n> >> -\t\t/* no connection delay to record */\n> >> -\t\tthread->conn_duration = 0;\n> >> +\t\t/* connection delay is measured globally between the barriers */\n> >>\n> >> This comment is really correct? I was thinking that the measurement is not necessary here because this is the case where -C option is not specified.\n> > \n> > This comment means that, when -C is not specified, the connection delay is\n> > measured between the barrier point where the benchmark starts\n> > \n> > /* READY */\n> > THREAD_BARRIER_WAIT(&barrier);\n> > \n> > and the barrier point where all the thread finish making initial connections.\n> > \n> > /* GO */\n> > THREAD_BARRIER_WAIT(&barrier);\n> \n> Ok, so you're commenting about the initial connection delay that's\n> measured when -C is not specified. But I'm not sure if this comment\n> here is really helpful. Seem rather confusing??\n\nOk. I removed this comment.\n\n\n> I found another disconnect_all().\n> \n> \t/* XXX should this be connection time? */\n> \tdisconnect_all(state, nclients);\n> \n> The measurement is also not necessary here.\n> So the above comment should be removed or updated?\n\nI think this disconnect_all will be a no-op because all connections should\nbe already closed in threadRun(), but I left it just to be sure that\nconnections are all cleaned-up. I updated the comment for explaining above.\n\nI attached the updated patch. 
Could you please look at this?\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Wed, 28 Jul 2021 16:15:11 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "\n\nOn 2021/07/28 16:15, Yugo NAGATA wrote:\n>> I found another disconnect_all().\n>>\n>> \t/* XXX should this be connection time? */\n>> \tdisconnect_all(state, nclients);\n>>\n>> The measurement is also not necessary here.\n>> So the above comment should be removed or updated?\n> \n> I think this disconnect_all will be a no-op because all connections should\n> be already closed in threadRun(), but I left it just to be sure that\n> connections are all cleaned-up. I updated the comment for explaining above.\n> \n> I attached the updated patch. Could you please look at this?\n\nThanks for updating the patch! LGTM.\n\nBarring any objection, I will commit the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 29 Jul 2021 02:23:22 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "\n\nOn 2021/07/29 2:23, Fujii Masao wrote:\n> \n> \n> On 2021/07/28 16:15, Yugo NAGATA wrote:\n>>> I found another disconnect_all().\n>>>\n>>> ����/* XXX should this be connection time? */\n>>> ����disconnect_all(state, nclients);\n>>>\n>>> The measurement is also not necessary here.\n>>> So the above comment should be removed or updated?\n>>\n>> I think this disconnect_all will be a no-op because all connections should\n>> be already closed in threadRun(), but I left it just to be sure that\n>> connections are all cleaned-up. I updated the comment for explaining above.\n>>\n>> I attached the updated patch. Could you please look at this?\n> \n> Thanks for updating the patch! 
LGTM.\n\nThis patch needs to be back-patched because it fixes the bug\nin measurement of disconnection delays. Thought?\n\nBut the patch fails to be back-patched to v13 or before because\npgbench's time logic was changed by commit 547f04e734.\nDo you have the patches that can be back-patched to v13 or before?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 30 Jul 2021 02:01:08 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "Hello Fujii-san,\n\nOn Fri, 30 Jul 2021 02:01:08 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> \n> \n> On 2021/07/29 2:23, Fujii Masao wrote:\n> > \n> > \n> > On 2021/07/28 16:15, Yugo NAGATA wrote:\n> >>> I found another disconnect_all().\n> >>>\n> >>>     /* XXX should this be connection time? */\n> >>>     disconnect_all(state, nclients);\n> >>>\n> >>> The measurement is also not necessary here.\n> >>> So the above comment should be removed or updated?\n> >>\n> >> I think this disconnect_all will be a no-op because all connections should\n> >> be already closed in threadRun(), but I left it just to be sure that\n> >> connections are all cleaned-up. I updated the comment for explaining above.\n> >>\n> >> I attached the updated patch. Could you please look at this?\n> > \n> > Thanks for updating the patch! LGTM.\n> \n> This patch needs to be back-patched because it fixes the bug\n> in measurement of disconnection delays. Thought?\n\nThis patch fixes three issues of connection time measurement:\n\n1. The initial connection time is measured and stored into conn_duration\n but the result is never used.\n2. The disconnection time are not measured even when -C is specified.\n3. 
The disconnection time measurement at the end of threadRun() is\n not necessary.\n\nThe first one exists only in v14 and master, but others are also in v13 and\nbefore. So, I think we can back-patch these fixes.\n\n> But the patch fails to be back-patched to v13 or before because\n> pgbench's time logic was changed by commit 547f04e734.\n> Do you have the patches that can be back-patched to v13 or before?\n\nI attached the patch for v13.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Fri, 30 Jul 2021 14:43:43 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "\n\nOn 2021/07/30 14:43, Yugo NAGATA wrote:\n> This patch fixes three issues of connection time measurement:\n> \n> 1. The initial connection time is measured and stored into conn_duration\n> but the result is never used.\n> 2. The disconnection time are not measured even when -C is specified.\n> 3. The disconnection time measurement at the end of threadRun() is\n> not necessary.\n> \n> The first one exists only in v14 and master, but others are also in v13 and\n> before. So, I think we can back-patch these fixes.\n\nYes, you're right.\n\n> \n>> But the patch fails to be back-patched to v13 or before because\n>> pgbench's time logic was changed by commit 547f04e734.\n>> Do you have the patches that can be back-patched to v13 or before?\n> \n> I attached the patch for v13.\n\nThanks for the patch!\n\n+\t\t\t\t/*\n+\t\t\t\t * In CSTATE_FINISHED state, this finishCon() is a no-op\n+\t\t\t\t * under -C/--connect because all the connections that this\n+\t\t\t\t * thread established should have already been closed at the end\n+\t\t\t\t * of transactions. 
So we don't need to measure the disconnection\n+\t\t\t\t * delays here.\n\nIn v13, the disconnection time needs to be measured in CSTATE_FINISHED\nbecause the connection can be closed here when -C is not specified?\n\n\n\t/* tps is about actually executed transactions */\n\ttps_include = ntx / time_include;\n\ttps_exclude = ntx /\n\t\t(time_include - (INSTR_TIME_GET_DOUBLE(conn_total_time) / nclients));\n\nBTW, this is a bit different topic from the patch, but in v13,\ntps excluding connection establishing is calculated in the above way.\nThe total connection time is divided by the number of clients,\nbut why do we need this division? Do you have any idea?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 30 Jul 2021 15:26:51 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "On Fri, 30 Jul 2021 15:26:51 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> \n> \n> On 2021/07/30 14:43, Yugo NAGATA wrote:\n> > This patch fixes three issues of connection time measurement:\n> > \n> > 1. The initial connection time is measured and stored into conn_duration\n> > but the result is never used.\n> > 2. The disconnection time are not measured even when -C is specified.\n> > 3. The disconnection time measurement at the end of threadRun() is\n> > not necessary.\n> > \n> > The first one exists only in v14 and master, but others are also in v13 and\n> > before. 
So, I think we can back-patch these fixes.\n> \n> Yes, you're right.\n> \n> > \n> >> But the patch fails to be back-patched to v13 or before because\n> >> pgbench's time logic was changed by commit 547f04e734.\n> >> Do you have the patches that can be back-patched to v13 or before?\n> > \n> > I attached the patch for v13.\n> \n> Thanks for the patch!\n> \n> +\t\t\t\t/*\n> +\t\t\t\t * In CSTATE_FINISHED state, this finishCon() is a no-op\n> +\t\t\t\t * under -C/--connect because all the connections that this\n> +\t\t\t\t * thread established should have already been closed at the end\n> +\t\t\t\t * of transactions. So we don't need to measure the disconnection\n> +\t\t\t\t * delays here.\n> \n> In v13, the disconnection time needs to be measured in CSTATE_FINISHED\n> because the connection can be closed here when -C is not specified?\n\nWhen -C is not specified, the disconnection time is not measured even in\nthe patch for v14+. I don't think it is a problem because the \ndisconnection delay at the end of benchmark almost doesn't affect the tps.\n\n> \n> \t/* tps is about actually executed transactions */\n> \ttps_include = ntx / time_include;\n> \ttps_exclude = ntx /\n> \t\t(time_include - (INSTR_TIME_GET_DOUBLE(conn_total_time) / nclients));\n> \n> BTW, this is a bit different topic from the patch, but in v13,\n> tps excluding connection establishing is calculated in the above way.\n> The total connection time is divided by the number of clients,\n> but why do we need this division? Do you have any idea?\n\n\nconn_total_time is a sum of connection delays measured over all threads\nthat are running concurrently. So, we try to get the average connection\ndelays by dividing by the number of clients, I think. 
However, I am not\nsure this is the right way though, and in fact it was revised in the\nrecent commit so that we don't report the \"tps excluding connection\nestablishing\" especially when -C is specified.\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Sun, 1 Aug 2021 14:50:43 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "\n\nOn 2021/08/01 14:50, Yugo NAGATA wrote:\n> When -C is not specified, the disconnection time is not measured even in\n> the patch for v14+. I don't think it is a problem because the\n> disconnection delay at the end of benchmark almost doesn't affect the tps.\n\nWhat about v13 or before? That is, in v13, even when -C is not specified,\nboth the connection and disconnection delays are measured. Right?\nIf right, the time required to close the connection in CSTATE_FINISHED\nstate should also be measured?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 5 Aug 2021 16:16:48 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "Hello Fujii-san,\n\nOn Thu, 5 Aug 2021 16:16:48 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> \n> \n> On 2021/08/01 14:50, Yugo NAGATA wrote:\n> > When -C is not specified, the disconnection time is not measured even in\n> > the patch for v14+. I don't think it is a problem because the\n> > disconnection delay at the end of benchmark almost doesn't affect the tps.\n> \n> What about v13 or before? That is, in v13, even when -C is not specified,\n> both the connection and disconnection delays are measured. Right?\n\nNo. 
Although there is code measuring thread->conn_time around\ndisconnect_all() in v13 or before:\n\ndone:\n INSTR_TIME_SET_CURRENT(start);\n disconnect_all(state, nstate);\n INSTR_TIME_SET_CURRENT(end);\n INSTR_TIME_ACCUM_DIFF(thread->conn_time, end, start);\n\nthis is a no-op because finishCon() is already called at CSTATE_ABORTED or \nCSTATE_FINISHED. Therefore, in the end, the disconnection delay is not\nmeasured even in v13.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Thu, 5 Aug 2021 18:02:19 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "\n\nOn 2021/08/05 18:02, Yugo NAGATA wrote:\n> this is a no-op because finishCon() is already called at CSTATE_ABORTED or\n> CSTATE_FINISHED. Therefore, in the end, the disconnection delay is not\n> measured even in v13.\n\nYes, but I was thinking that's a bug that we should fix.\nIOW, I was thinking that, in v13, both connection and disconnection delays\nshould be measured whether -C is specified or not, *per spec*.\nBut, in v13, the disconnection delays are not measured in the cases\nwhere -C is specified and not specified. 
So I was thinking that this is\na bug and we should fix those both cases.\n\nBut you're thinking that, in v13, the disconnection delays don't need to\nbe measured because they are not measured for now?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 11 Aug 2021 13:56:27 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "\n\nOn 2021/08/11 13:56, Fujii Masao wrote:\n> Yes, but I was thinking that's a bug that we should fix.\n> IOW, I was thinking that, in v13, both connection and disconnection delays\n> should be measured whether -C is specified or not, *per spec*.\n> But, in v13, the disconnection delays are not measured in the cases\n> where -C is specified and not specified. So I was thinking that this is\n> a bug and we should fix those both cases.\n> \n> But you're thinking that, in v13, the disconnection delays don't need to\n> be measured because they are not measured for now?\n\nPlease let me clarify my thought.\n\nIn master and v14,\n\n# Expected behavior\n(1) Both connection and disconnection delays should be measured\n only when -C is specified, but not otherwise.\n(2) When -C is specified, since each transaction establishes and closes\n a connection, those delays should be measured for each transaction.\n\n# Current behavior\n(1) Connection delay is measured whether -C is specified or not.\n(2) Even when -C is specified, disconnection delay is NOT measured\n at the end of transaction.\n\n# What the patch should do\n(1) Make pgbench skip measuring connection and disconnection delays\n if not necessary (i.e., -C is not specified).\n(2) Make pgbench measure the disconnection delays whenever\n the connection is closed at the end of transaction, when -C is specified.\n\nIn v13 or before,\n\n# Expected behavior\n(1) Both connection and 
disconnection delays should be measured\n whether -C is specified or not. Because information about those delays\n is used for the benchmark result report.\n(2) When -C is specified, since each transaction establishes and closes\n a connection, those delays should be measured for each transaction.\n\n# Current behavior\n(1)(2) Disconnection delay is NOT measured whether -C is specified or not.\n\n# What the patch should do\n(1)(2) Make pgbench measure the disconnection delays whenever\n the connection is closed at the end of transaction (for -C case)\n and the end of thread (for NOT -C case).\n\nThought?\n\nAnyway, I changed the status of this patch to \"Waiting on Author\" in CF.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 20 Aug 2021 02:05:27 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "On Fri, 20 Aug 2021 02:05:27 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> \n> On 2021/08/11 13:56, Fujii Masao wrote:\n> > Yes, but I was thinking that's a bug that we should fix.\n> > IOW, I was thinking that, in v13, both connection and disconnection delays\n> > should be measured whether -C is specified or not, *per spec*.\n> > But, in v13, the disconnection delays are not measured in the cases\n> > where -C is specified and not specified. 
So I was thinking that this is\n> > a bug and we should fix those both cases.\n> > \n> > But you're thinking that, in v13, the disconnection delays don't need to\n> > be measured because they are not measured for now?\n> \n> Please let me clarify my thought.\n\nThank you for your clarification.\n\n> \n> In master and v14,\n> \n> # Expected behavior\n> (1) Both connection and disconnection delays should be measured\n> only when -C is specified, but not otherwise.\n> (2) When -C is specified, since each transaction establishes and closes\n> a connection, those delays should be measured for each transaction.\n> \n> # Current behavior\n> (1) Connection delay is measured whether -C is specified or not.\n> (2) Even when -C is specified, disconnection delay is NOT measured\n> at the end of transaction.\n> \n> # What the patch should do\n> (1) Make pgbench skip measuring connection and disconnection delays\n> if not necessary (i.e., -C is not specified).\n> (2) Make pgbench measure the disconnection delays whenever\n> the connection is closed at the end of transaction, when -C is specified.\n\nI agree with you. This is what the patch for pg14 does. We don't need to measure\ndisconnection delay when -C is not specified because the output just reports\n\"initial connection time\".\n\n> In v13 or before,\n> \n> # Expected behavior\n> (1) Both connection and disconnection delays should be measured\n> whether -C is specified or not. Because information about those delays\n> is used for the benchmark result report.\n> (2) When -C is specified, since each transaction establishes and closes\n> a connection, those delays should be measured for each transaction.\n> \n> # Current behavior\n> (1)(2) Disconnection delay is NOT measured whether -C is specified or not.\n> \n> # What the patch should do\n> (1)(2) Make pgbench measure the disconnection delays whenever\n> the connection is closed at the end of transaction (for -C case)\n> and the end of thread (for NOT -C case).\n\nOk. 
That makes sense. The output reports \"including connections establishing\"\nand \"excluding connections establishing\" regardless with -C, so we should\nmeasure delays in the same way.\n\nI updated the patch for pg13 to measure disconnection delay when -C is not\nspecified. I attached the updated patch for pg13 as well as one for pg14\nwhich is same as attached before.\n\n> \n> Anyway, I changed the status of this patch to \"Waiting on Author\" in CF.\n\nI returned the status to \"Ready for Committer\". \nCould you please review this?\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Thu, 26 Aug 2021 12:13:13 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": ">> Anyway, I changed the status of this patch to \"Waiting on Author\" in CF.\n> \n> I returned the status to \"Ready for Committer\". \n> Could you please review this?\n\nAccording to the patch tester, the patch does not apply.\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Mon, 30 Aug 2021 14:22:49 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "On Mon, 30 Aug 2021 14:22:49 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> >> Anyway, I changed the status of this patch to \"Waiting on Author\" in CF.\n> > \n> > I returned the status to \"Ready for Committer\". \n> > Could you please review this?\n> \n> According to the patch tester, the patch does not apply.\n\nWell, that's because both the patch for PG14 and one for PG13\nare discussed here.\n\n> --\n> Tatsuo Ishii\n> SRA OSS, Inc. 
Japan\n> English: http://www.sraoss.co.jp/index_en.php\n> Japanese:http://www.sraoss.co.jp\n> \n> \n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 30 Aug 2021 14:43:50 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "> On Mon, 30 Aug 2021 14:22:49 +0900 (JST)\n> Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> \n>> >> Anyway, I changed the status of this patch to \"Waiting on Author\" in CF.\n>> > \n>> > I returned the status to \"Ready for Committer\". \n>> > Could you please review this?\n>> \n>> According to the patch tester, the patch does not apply.\n> \n> Well, that's because both the patch for PG14 and one for PG13\n> are discussed here.\n\nOh, ok. So the patch tester is not smart enough to identify each patch\nfor particular branches.\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Mon, 30 Aug 2021 15:03:16 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "\n\nOn 2021/08/26 12:13, Yugo NAGATA wrote:\n> Ok. That makes sense. The output reports \"including connections establishing\"\n> and \"excluding connections establishing\" regardless with -C, so we should\n> measure delays in the same way.\n\nOn second thought, it's more reasonable and less confusing not to\nmeasure the disconnection delays at all? Since whether the benchmark result\nshould include the disconnection delays or not is undocumented,\nprobably we cannot say strongly the current behavior (i.e., the disconnection\ndelays are not measured) is a bug. 
Also since the result has not included\nthe disconnection delays so far, the proposed change might slightly change\nthe benchmark numbers reported, which might confuse the users.\nISTM that at least it's unwise to change long-stable branches for this... Thought?\n\n\n> I updated the patch for pg13 to measure disconnection delay when -C is not\n> specified. I attached the updated patch for pg13 as well as one for pg14\n> which is same as attached before.\n\nThanks! I pushed the part of the patch, which gets rid of unnecessary\nmeasure of connection delays from pgbench.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 30 Aug 2021 23:36:30 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": ">> Ok. That makes sense. The output reports \"including connections \n>> establishing\" and \"excluding connections establishing\" regardless with \n>> -C, so we should measure delays in the same way.\n>\n> On second thought, it's more reasonable and less confusing not to\n> measure the disconnection delays at all? Since whether the benchmark result\n> should include the disconnection delays or not is not undocumented,\n> probably we cannot say strongly the current behavior (i.e., the disconnection\n> delays are not measured) is a bug. Also since the result has not included\n> the disconnection delays so far, the proposed change might slightly change\n> the benchmark numbers reported, which might confuse the users.\n> ISTM that at least it's unwise to change long-stable branches for this... \n> Thought?\n\nMy 0.02€: From a benchmarking perspective, ISTM that it makes sense to \ninclude disconnection times, which are clearly linked to connections, \nespecially with -C. 
So I'd rather have the more meaningful figure even at \nthe price of a small change in an undocumented feature.\n\n-- \nFabien.", "msg_date": "Tue, 31 Aug 2021 07:01:58 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "Hello Fujii-san,\n\nOn Mon, 30 Aug 2021 23:36:30 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> \n> \n> On 2021/08/26 12:13, Yugo NAGATA wrote:\n> > Ok. That makes sense. The output reports \"including connections establishing\"\n> > and \"excluding connections establishing\" regardless with -C, so we should\n> > measure delays in the same way.\n> \n> On second thought, it's more reasonable and less confusing not to\n> measure the disconnection delays at all? Since whether the benchmark result\n> should include the disconnection delays or not is not undocumented,\n> probably we cannot say strongly the current behavior (i.e., the disconnection\n> delays are not measured) is a bug. Also since the result has not included\n> the disconnection delays so far, the proposed change might slightly change\n> the benchmark numbers reported, which might confuse the users.\n> ISTM that at least it's unwise to change long-stable branches for this... Thought?\n\nOk. I agree with you that it is better to not change the behavior of pg13 or\nbefore at least. As for pg14 or later, I wonder that we can change it when pg14\nis released because the output was already change in the commit 547f04e734,\nalthough, I am not persisting to measure disconnection delay since the effect\nto tps would be very slight. At least, if we decide to not measure disconnection\ndelays, I think we should fix as so, like the attached patch.\n\n> > I updated the patch for pg13 to measure disconnection delay when -C is not\n> > specified. I attached the updated patch for pg13 as well as one for pg14\n> > which is same as attached before.\n> \n> Thanks! 
I pushed the part of the patch, which gets rid of unnecessary\n> measure of connection delays from pgbench.\n\nThank you!\n\n\nRegards, Yugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Tue, 31 Aug 2021 14:15:10 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": ">>> Ok. That makes sense. The output reports \"including connections\n>>> establishing\" and \"excluding connections establishing\" regardless with\n>>> -C, so we should measure delays in the same way.\n>>\n>> On second thought, it's more reasonable and less confusing not to\n>> measure the disconnection delays at all? Since whether the benchmark\n>> result\n>> should include the disconnection delays or not is not undocumented,\n>> probably we cannot say strongly the current behavior (i.e., the\n>> disconnection\n>> delays are not measured) is a bug. Also since the result has not\n>> included\n>> the disconnection delays so far, the proposed change might slightly\n>> change\n>> the benchmark numbers reported, which might confuse the users.\n>> ISTM that at least it's unwise to change long-stable branches for\n>> this... Thought?\n> \n> My 0.02€: From a benchmarking perspective, ISTM that it makes sense to\n> include disconnection times, which are clearly linked to connections,\n> especially with -C. So I'd rather have the more meaningful figure even\n> at the price of a small change in an undocumented feature.\n\n+1. The aim of -C is trying to measure connection overhead which\nnaturally includes disconnection overhead.\n--\nTatsuo Ishii\nSRA OSS, Inc. 
Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Tue, 31 Aug 2021 14:18:48 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "Hello Fabien, Ishii-san,\n\nOn Tue, 31 Aug 2021 14:18:48 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> >>> Ok. That makes sense. The output reports \"including connections\n> >>> establishing\" and \"excluding connections establishing\" regardless with\n> >>> -C, so we should measure delays in the same way.\n> >>\n> >> On second thought, it's more reasonable and less confusing not to\n> >> measure the disconnection delays at all? Since whether the benchmark\n> >> result\n> >> should include the disconnection delays or not is not undocumented,\n> >> probably we cannot say strongly the current behavior (i.e., the\n> >> disconnection\n> >> delays are not measured) is a bug. Also since the result has not\n> >> included\n> >> the disconnection delays so far, the proposed change might slightly\n> >> change\n> >> the benchmark numbers reported, which might confuse the users.\n> >> ISTM that at least it's unwise to change long-stable branches for\n> >> this... Thought?\n> > \n> > My 0.02€: From a benchmarking perspective, ISTM that it makes sense to\n> > include disconnection times, which are clearly linked to connections,\n> > especially with -C. So I'd rather have the more meaningful figure even\n> > at the price of a small change in an undocumented feature.\n> \n> +1. The aim of -C is trying to measure connection overhead which\n> naturally includes disconnection overhead.\n\nI think it is better to measure disconnection delays when -C is specified in\npg 14. This seems not necessary when -C is not specified because pgbench just\nreports \"initial connection time\".\n\nHowever, what about pg13 or later? 
Do you think we should also change the\nbehavior of pg13 or later? If so, should we measure disconnection delay even\nwhen -C is not specified in pg13?\n\nRegards,\nYugo Nagata\n\n> --\n> Tatsuo Ishii\n> SRA OSS, Inc. Japan\n> English: http://www.sraoss.co.jp/index_en.php\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Tue, 31 Aug 2021 14:28:35 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": ">> > My 0.02€: From a benchmarking perspective, ISTM that it makes sense to\n>> > include disconnection times, which are clearly linked to connections,\n>> > especially with -C. So I'd rather have the more meaningful figure even\n>> > at the price of a small change in an undocumented feature.\n>> \n>> +1. The aim of -C is trying to measure connection overhead which\n>> naturally includes disconnection overhead.\n> \n> I think it is better to measure disconnection delays when -C is specified in\n> pg 14. This seems not necessary when -C is not specified because pgbench just\n> reports \"initial connection time\".\n\nOk.\n\n> However, what about pg13 or later? Do you think we should also change the\n> behavior of pg13 or later? If so, should we measure disconnection delay even\n> when -C is not specified in pg13?\n\nYou mean \"pg13 or before\"?\n--\nTatsuo Ishii\nSRA OSS, Inc. 
Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Tue, 31 Aug 2021 14:46:42 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "On Tue, 31 Aug 2021 14:46:42 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> >> > My 0.02€: From a benchmarking perspective, ISTM that it makes sense to\n> >> > include disconnection times, which are clearly linked to connections,\n> >> > especially with -C. So I'd rather have the more meaningful figure even\n> >> > at the price of a small change in an undocumented feature.\n> >> \n> >> +1. The aim of -C is trying to measure connection overhead which\n> >> naturally includes disconnection overhead.\n> > \n> > I think it is better to measure disconnection delays when -C is specified in\n> > pg 14. This seems not necessary when -C is not specified because pgbench just\n> > reports \"initial connection time\".\n> \n> Ok.\n> \n> > However, what about pg13 or later? Do you think we should also change the\n> > behavior of pg13 or later? If so, should we measure disconnection delay even\n> > when -C is not specified in pg13?\n> \n> You mean \"pg13 or before\"?\n\nSorry, you are right. I mean \"pg13 or before\".\n\n> --\n> Tatsuo Ishii\n> SRA OSS, Inc. Japan\n> English: http://www.sraoss.co.jp/index_en.php\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Tue, 31 Aug 2021 15:03:26 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": ">> >> > My 0.02€: From a benchmarking perspective, ISTM that it makes sense to\n>> >> > include disconnection times, which are clearly linked to connections,\n>> >> > especially with -C. 
So I'd rather have the more meaningful figure even\n>> >> > at the price of a small change in an undocumented feature.\n>> >> \n>> >> +1. The aim of -C is trying to measure connection overhead which\n>> >> naturally includes disconnection overhead.\n>> > \n>> > I think it is better to measure disconnection delays when -C is specified in\n>> > pg 14. This seems not necessary when -C is not specified because pgbench just\n>> > reports \"initial connection time\".\n>> \n>> Ok.\n>> \n>> > However, what about pg13 or later? Do you think we should also change the\n>> > behavior of pg13 or later? If so, should we measure disconnection delay even\n>> > when -C is not specified in pg13?\n>> \n>> You mean \"pg13 or before\"?\n> \n> Sorry, you are right. I mean \"pg13 or before\".\n\nI would think we should leave as it is for pg13 and before to not surprise users.\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Tue, 31 Aug 2021 15:39:18 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "On Tue, 31 Aug 2021 15:39:18 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> >> >> > My 0.02€: From a benchmarking perspective, ISTM that it makes sense to\n> >> >> > include disconnection times, which are clearly linked to connections,\n> >> >> > especially with -C. So I'd rather have the more meaningful figure even\n> >> >> > at the price of a small change in an undocumented feature.\n> >> >> \n> >> >> +1. The aim of -C is trying to measure connection overhead which\n> >> >> naturally includes disconnection overhead.\n> >> > \n> >> > I think it is better to measure disconnection delays when -C is specified in\n> >> > pg 14. 
This seems not necessary when -C is not specified because pgbench just\n> >> > reports \"initial connection time\".\n> >> \n> >> Ok.\n> >> \n> >> > However, what about pg13 or later? Do you think we should also change the\n> >> > behavior of pg13 or later? If so, should we measure disconnection delay even\n> >> > when -C is not specified in pg13?\n> >> \n> >> You mean \"pg13 or before\"?\n> > \n> > Sorry, you are right. I mean \"pg13 or before\".\n> \n> I would think we should leave as it is for pg13 and before to not surprise users.\n\nOk. Thank you for your opinion. I also agree with not changing the behavior of\nlong-stable branches, and I think this is the same opinion as Fujii-san.\n\nAttached is the patch to fix to measure disconnection delays that can be applied to\npg14 or later.\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Tue, 31 Aug 2021 16:03:05 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "\n>> I would think we should leave as it is for pg13 and before to not surprise users.\n>\n> Ok. Thank you for your opinion. I also agree with not changing the behavior of\n> long-stable branches, and I think this is the same opinion as Fujii-san.\n>\n> Attached is the patch to fix to measure disconnection delays that can be applied to\n> pg14 or later.\n\nI agree that this is not a bug fix, so this is not a matter suitable for \nfor backpatching. Maybe for pg14.\n\n-- \nFabien.\n\n\n", "msg_date": "Tue, 31 Aug 2021 09:56:39 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "On 2021/08/31 16:56, Fabien COELHO wrote:\n> \n>>> I would think we should leave as it is for pg13 and before to not surprise users.\n>>\n>> Ok. Thank you for your opinion. 
I also agree with not changing the behavior of\n>> long-stable branches, and I think this is the same opinion as Fujii-san.\n>>\n>> Attached is the patch to fix to measure disconnection delays that can be applied to\n>> pg14 or later.\n> \n> I agree that this is not a bug fix, so this is not a matter suitable for for backpatching. Maybe for pg14.\n\n+1. So we reached the consensus!\n\nAttached is the updated version of the patch. Based on Nagata-san's latest patch,\nI just modified the comment slightly and ran pgindent. Barring any objection,\nI will commit this patch only in master and v14.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 1 Sep 2021 01:10:37 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "\n\nOn 2021/09/01 1:10, Fujii Masao wrote:\n> +1. So we reached the consensus!\n> \n> Attached is the updated version of the patch. Based on Nagata-san's latest patch,\n> I just modified the comment slightly and ran pgindent. Barring any objection,\n> I will commit this patch only in master and v14.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 1 Sep 2021 17:07:43 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Fix around conn_duration in pgbench" }, { "msg_contents": "On Wed, 1 Sep 2021 17:07:43 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> \n> \n> On 2021/09/01 1:10, Fujii Masao wrote:\n> > +1. So we reached the consensus!\n> > \n> > Attached is the updated version of the patch. Based on Nagata-san's latest patch,\n> > I just modified the comment slightly and ran pgindent. 
Barring any objection,\n> > I will commit this patch only in master and v14.\n> \n> Pushed. Thanks!\n\nThank you!\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Wed, 1 Sep 2021 18:33:44 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Fix around conn_duration in pgbench" } ]
[ { "msg_contents": "\nI've been looking at the recent spate of intermittent failures on my\nCygwin animal lorikeet. Most of them look something like this, where\nthere's 'VACUUM FULL pg_class' and an almost simultaneous \"CREATE TABLE'\nwhich fails.\n\n\n2021-06-14 05:04:00.220 EDT [60c71b7f.e8bf:60] pg_regress/vacuum LOG: statement: VACUUM FULL pg_class;\n2021-06-14 05:04:00.222 EDT [60c71b80.e8c0:7] pg_regress/typed_table LOG: statement: CREATE TABLE persons OF person_type;\n2021-06-14 05:04:00.232 EDT [60c71b80.e8c1:3] pg_regress/inherit LOG: statement: CREATE TABLE a (aa TEXT);\n*** starting debugger for pid 59584, tid 9640\n2021-06-14 05:04:14.678 EDT [60c71b53.e780:4] LOG: server process (PID 59584) exited with exit code 127\n2021-06-14 05:04:14.678 EDT [60c71b53.e780:5] DETAIL: Failed process was running: CREATE TABLE persons OF person_type;\n2021-06-14 05:04:14.678 EDT [60c71b53.e780:6] LOG: terminating any other active server processes\n\n\nGetting stack traces in this platform can be very difficult. I'm going\nto try forcing complete serialization of the regression tests\n(MAX_CONNECTIONS=1) to see if the problem goes away. Any other\nsuggestions might be useful. Note that we're not getting the same issue\non REL_13_STABLE, where the same group pf tests run together (inherit\ntyped_table, vacuum)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 14 Jun 2021 08:19:19 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "recent failures on lorikeet" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I've been looking at the recent spate of intermittent failures on my\n> Cygwin animal lorikeet. 
Most of them look something like this, where\n> there's 'VACUUM FULL pg_class' and an almost simultaneous \"CREATE TABLE'\n> which fails.\n\nDo you have any idea what \"exit code 127\" signifies on that platform?\n(BTW, not all of them look like that; many are reported as plain\nsegfaults.) I hadn't spotted the association with a concurrent \"VACUUM\nFULL pg_class\" before, that does seem interesting.\n\n> Getting stack traces in this platform can be very difficult. I'm going\n> to try forcing complete serialization of the regression tests\n> (MAX_CONNECTIONS=1) to see if the problem goes away. Any other\n> suggestions might be useful. Note that we're not getting the same issue\n> on REL_13_STABLE, where the same group pf tests run together (inherit\n> typed_table, vacuum)\n\nIf it does go away, that'd be interesting, but I don't see how it gets\nus any closer to a fix. Seems like a stack trace is a necessity to\nnarrow it down.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 14 Jun 2021 09:39:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recent failures on lorikeet" }, { "msg_contents": "\nOn 6/14/21 9:39 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> I've been looking at the recent spate of intermittent failures on my\n>> Cygwin animal lorikeet. Most of them look something like this, where\n>> there's 'VACUUM FULL pg_class' and an almost simultaneous \"CREATE TABLE'\n>> which fails.\n> Do you have any idea what \"exit code 127\" signifies on that platform?\n> (BTW, not all of them look like that; many are reported as plain\n> segfaults.) I hadn't spotted the association with a concurrent \"VACUUM\n> FULL pg_class\" before, that does seem interesting.\n>\n>> Getting stack traces in this platform can be very difficult. I'm going\n>> to try forcing complete serialization of the regression tests\n>> (MAX_CONNECTIONS=1) to see if the problem goes away. 
Any other\n>> suggestions might be useful. Note that we're not getting the same issue\n>> on REL_13_STABLE, where the same group pf tests run together (inherit\n>> typed_table, vacuum)\n> If it does go away, that'd be interesting, but I don't see how it gets\n> us any closer to a fix. Seems like a stack trace is a necessity to\n> narrow it down.\n>\n> \t\t\t\n\n\nSome have given stack traces and some not, not sure why. The one from\nJune 13 has this:\n\n\n---- backtrace ----\n??\n??:0\nWaitOnLock\nsrc/backend/storage/lmgr/lock.c:1831\nLockAcquireExtended\nsrc/backend/storage/lmgr/lock.c:1119\nLockRelationOid\nsrc/backend/storage/lmgr/lmgr.c:135\nrelation_open\nsrc/backend/access/common/relation.c:59\ntable_open\nsrc/backend/access/table/table.c:43\nScanPgRelation\nsrc/backend/utils/cache/relcache.c:322\nRelationBuildDesc\nsrc/backend/utils/cache/relcache.c:1039\nRelationIdGetRelation\nsrc/backend/utils/cache/relcache.c:2045\nrelation_open\nsrc/backend/access/common/relation.c:59\ntable_open\nsrc/backend/access/table/table.c:43\nExecInitPartitionInfo\nsrc/backend/executor/execPartition.c:510\nExecPrepareTupleRouting\nsrc/backend/executor/nodeModifyTable.c:2311\nExecModifyTable\nsrc/backend/executor/nodeModifyTable.c:2559\nExecutePlan\nsrc/backend/executor/execMain.c:1557\n\n\n\nThe line in lmgr.c is where the process title gets changed to \"waiting\".\nI recently stopped setting process title on this animal on REL_13_STABLE\nand its similar errors have largely gone away. I can do the same on\nHEAD. 
But it does make me wonder what the heck has changed to make this\ncode fragile.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 14 Jun 2021 12:33:18 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: recent failures on lorikeet" }, { "msg_contents": "\nOn 6/14/21 12:33 PM, Andrew Dunstan wrote:\n>\n> The line in lmgr.c is where the process title gets changed to \"waiting\".\n> I recently stopped setting process title on this animal on REL_13_STABLE\n> and its similar errors have largely gone away. I can do the same on\n> HEAD. But it does make me wonder what the heck has changed to make this\n> code fragile.\n\n\nOf course I meant the line (1831) in lock.c.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 14 Jun 2021 12:41:36 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: recent failures on lorikeet" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> The line in lmgr.c is where the process title gets changed to \"waiting\".\n> I recently stopped setting process title on this animal on REL_13_STABLE\n> and its similar errors have largely gone away.\n\nOooh, that certainly seems like a smoking gun.\n\n> I can do the same on\n> HEAD. But it does make me wonder what the heck has changed to make this\n> code fragile.\n\nSo what we've got there is\n\n old_status = get_ps_display(&len);\n new_status = (char *) palloc(len + 8 + 1);\n memcpy(new_status, old_status, len);\n strcpy(new_status + len, \" waiting\");\n set_ps_display(new_status);\n new_status[len] = '\\0'; /* truncate off \" waiting\" */\n\nLine 1831 is the strcpy, but it seems entirely impossible that that\ncould fail, unless palloc has shirked its job. 
I'm thinking that\nthe crash is really in the memcpy --- looking at the other lines\nin your trace, fingering the line after the call seems common.\n\nWhat that'd have to imply is that get_ps_display() messed up,\nreturning a bad pointer or a bad length.\n\nA platform-specific problem in get_ps_display() seems plausible\nenough. The apparent connection to a concurrent VACUUM FULL seems\npretty hard to explain that way ... but maybe that's a mirage.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 14 Jun 2021 13:18:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recent failures on lorikeet" }, { "msg_contents": "I wrote:\n> What that'd have to imply is that get_ps_display() messed up,\n> returning a bad pointer or a bad length.\n> A platform-specific problem in get_ps_display() seems plausible\n> enough. The apparent connection to a concurrent VACUUM FULL seems\n> pretty hard to explain that way ... but maybe that's a mirage.\n\nIf I understand correctly that you're only seeing this in v13 and\nHEAD, then it seems like bf68b79e5 (Refactor ps_status.c API)\ndeserves a hard look.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 14 Jun 2021 13:29:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: recent failures on lorikeet" } ]
[ { "msg_contents": "Hello hackers,\n\nI have a doubt regarding the positioning of clientAuthentication hook\nin function ClientAuthentication. Particularly, in case of hba errors,\ni.e. cases uaReject or uaImplicitReject it errors out, leading to no\ncalls to any functions attached to the authentication hook. Can't we\nprocess the hook function first and then error out...?\n\n-- \nRegards,\nRafia Sabih\n\n\n", "msg_date": "Mon, 14 Jun 2021 14:51:39 +0200", "msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>", "msg_from_op": true, "msg_subject": "Position of ClientAuthentication hook" }, { "msg_contents": "On Mon, Jun 14, 2021 at 8:51 AM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> I have a doubt regarding the positioning of clientAuthentication hook\n> in function ClientAuthentication. Particularly, in case of hba errors,\n> i.e. cases uaReject or uaImplicitReject it errors out, leading to no\n> calls to any functions attached to the authentication hook. Can't we\n> process the hook function first and then error out...?\n\nMaybe. One potential problem is that if the hook errors out, the\noriginal error would be lost and only the error thrown by the hook\nwould be logged or visible to the client. Whether or not that's a\nproblem depends, I suppose, on what you're trying to do with the hook.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Jun 2021 15:04:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Position of ClientAuthentication hook" }, { "msg_contents": "On Mon, 14 Jun 2021 at 21:04, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jun 14, 2021 at 8:51 AM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> > I have a doubt regarding the positioning of clientAuthentication hook\n> > in function ClientAuthentication. Particularly, in case of hba errors,\n> > i.e. 
cases uaReject or uaImplicitReject it errors out, leading to no\n> > calls to any functions attached to the authentication hook. Can't we\n> > process the hook function first and then error out...?\n>\n> Maybe. One potential problem is that if the hook errors out, the\n> original error would be lost and only the error thrown by the hook\n> would be logged or visible to the client. Whether or not that's a\n> problem depends, I suppose, on what you're trying to do with the hook.\n\nThanks Robert for this quick clarification.\n\n\n-- \nRegards,\nRafia Sabih\n\n\n", "msg_date": "Wed, 16 Jun 2021 11:20:12 +0200", "msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Position of ClientAuthentication hook" } ]
[ { "msg_contents": "In the past people have tried to ensure that the isolation tests\nwould pass regardless of the prevailing default_transaction_isolation\nsetting. (That was sort of the point, in fact, for the earliest\ntests using that infrastructure.)\n\nThis seems to have been forgotten about lately, as all of these tests\nfail with default_transaction_isolation = serializable:\n\ntest detach-partition-concurrently-1 ... FAILED 504 ms\ntest detach-partition-concurrently-3 ... FAILED 2224 ms\ntest detach-partition-concurrently-4 ... FAILED 1600 ms\ntest fk-partitioned-2 ... FAILED 133 ms\ntest lock-update-delete ... FAILED 538 ms\ntest tuplelock-update ... FAILED 10223 ms\ntest tuplelock-upgrade-no-deadlock ... FAILED 664 ms\ntest tuplelock-partition ... FAILED 49 ms\n\n(drop-index-concurrently-1 also failed until just now, but\nI resurrected its variant expected-file.)\n\nSo:\n\n* Do we still care about that policy?\n\n* If so, who's going to fix the above-listed problems?\n\n* Should we try to get some routine testing of this case\nin place?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 14 Jun 2021 22:09:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Isolation tests vs. SERIALIZABLE isolation level" }, { "msg_contents": "On Tue, Jun 15, 2021 at 2:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> * Do we still care about that policy?\n\n> * If so, who's going to fix the above-listed problems?\n\n> * Should we try to get some routine testing of this case\n> in place?\n\nI wondered the same in commit 37929599 (the same problem for\nsrc/test/regress, which now passes but only in master, not the back\nbranches). 
I doubt it will find real bugs very often, and I doubt\nmany people would enjoy the slowdown if it were always on, but it\nmight make sense to have something like PG_TEST_EXTRA that can be used\nto run the tests at all three levels, and then turn that on in a few\nstrategic places like CI and a BF animal or two.\n\n\n", "msg_date": "Tue, 15 Jun 2021 14:50:28 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Isolation tests vs. SERIALIZABLE isolation level" } ]
[ { "msg_contents": "This is a followup to the conversation at [1], in which we speculated\nabout constraining the isolationtester's behavior by annotating the\nspecfiles, in order to eliminate common buildfarm failures such as [2]\nand reduce the need to use long delays to stabilize the test results.\n\nI've spent a couple days hacking on this idea, and I think it has worked\nout really well. On my machine, the time needed for \"make installcheck\"\nin src/test/isolation drops from ~93 seconds to ~26 seconds, as a result\nof removing all the multiple-second delays we used before. Also,\nwhile I'm not fool enough to claim that this will reduce the rate of\nbogus failures to zero, I do think it addresses all the repeating\nfailures we've seen lately.\n\nIn the credit-where-credit-is-due department, this owes some inspiration\nto the patch Asim Praveen offered at [3], though this takes the idea a\ngood bit further.\n\nThis is still WIP to some extent, as I've not spent much time looking at\nspecfiles other than the ones with big delays; there may be additional\nimprovements possible in some places. Also, I've not worried about\nwhether the tests pass in serializable mode, since we have problems there\nalready [4]. 
But this seemed like a good point at which to solicit\nfeedback and see what the cfbot thinks of it.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/2527507.1598237598%40sss.pgh.pa.us\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2021-06-13%2016%3A31%3A57\n[3] https://www.postgresql.org/message-id/F8DC434A-9141-451C-857F-148CCA1D42AD%40vmware.com\n[4] https://www.postgresql.org/message-id/324309.1623722988%40sss.pgh.pa.us", "msg_date": "Mon, 14 Jun 2021 22:57:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Improving the isolationtester: fewer failures, less delay " }, { "msg_contents": "Hi,\n\nOn 2021-06-14 22:57:08 -0400, Tom Lane wrote:\n> This is a followup to the conversation at [1], in which we speculated\n> about constraining the isolationtester's behavior by annotating the\n> specfiles, in order to eliminate common buildfarm failures such as [2]\n> and reduce the need to use long delays to stabilize the test results.\n> \n> I've spent a couple days hacking on this idea, and I think it has worked\n> out really well. On my machine, the time needed for \"make installcheck\"\n> in src/test/isolation drops from ~93 seconds to ~26 seconds, as a result\n> of removing all the multiple-second delays we used before.\n\nVery cool stuff. All the reliability things aside, isolationtester\nfrequently is the slowest test in a parallel check world...\n\n\n> Also, while I'm not fool enough to claim that this will reduce the\n> rate of bogus failures to zero, I do think it addresses all the\n> repeating failures we've seen lately.\n\nAnd it should make it easier to fix some others and also to make it\neasier to write some tests that were too hard to get to reliable today.\n\n\n> This is still WIP to some extent, as I've not spent much time looking at\n> specfiles other than the ones with big delays; there may be additional\n> improvements possible in some places. 
Also, I've not worried about\n> whether the tests pass in serializable mode, since we have problems there\n> already [4]. But this seemed like a good point at which to solicit\n> feedback and see what the cfbot thinks of it.\n\nAre there spec output changes / new failures, if you apply the patch,\nbut do not apply the changes to the spec files?\n\n\nWill look at the patch itself in a bit.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Jun 2021 12:03:42 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Improving the isolationtester: fewer failures, less delay" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-06-14 22:57:08 -0400, Tom Lane wrote:\n>> This is still WIP to some extent, as I've not spent much time looking at\n>> specfiles other than the ones with big delays; there may be additional\n>> improvements possible in some places. Also, I've not worried about\n>> whether the tests pass in serializable mode, since we have problems there\n>> already [4]. But this seemed like a good point at which to solicit\n>> feedback and see what the cfbot thinks of it.\n\n> Are there spec output changes / new failures, if you apply the patch,\n> but do not apply the changes to the spec files?\n\nIf you make only the code changes, there are a bunch of diffs stemming\nfrom the removal of the 'error in steps' message prefix. 
If you just\nmechanically remove those from the .out files without touching the .spec\nfiles, most tests pass, but I don't recall whether that's 100% the case.\n\n> Will look at the patch itself in a bit.\n\nI'll have a v2 in a little bit --- the cfbot pointed out that there\nwere some contrib tests I'd missed fixing, and I found a couple of\nother improvements.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Jun 2021 15:14:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Improving the isolationtester: fewer failures, less delay" }, { "msg_contents": "\nOn 6/14/21 10:57 PM, Tom Lane wrote:\n> This is a followup to the conversation at [1], in which we speculated\n> about constraining the isolationtester's behavior by annotating the\n> specfiles, in order to eliminate common buildfarm failures such as [2]\n> and reduce the need to use long delays to stabilize the test results.\n>\n> I've spent a couple days hacking on this idea, and I think it has worked\n> out really well. On my machine, the time needed for \"make installcheck\"\n> in src/test/isolation drops from ~93 seconds to ~26 seconds, as a result\n> of removing all the multiple-second delays we used before. Also,\n> while I'm not fool enough to claim that this will reduce the rate of\n> bogus failures to zero, I do think it addresses all the repeating\n> failures we've seen lately.\n>\n> In the credit-where-credit-is-due department, this owes some inspiration\n> to the patch Asim Praveen offered at [3], though this takes the idea a\n> good bit further.\n>\n> This is still WIP to some extent, as I've not spent much time looking at\n> specfiles other than the ones with big delays; there may be additional\n> improvements possible in some places. Also, I've not worried about\n> whether the tests pass in serializable mode, since we have problems there\n> already [4]. 
But this seemed like a good point at which to solicit\n> feedback and see what the cfbot thinks of it.\n>\n> \t\n\n\nCool stuff. Minor gripe while we're on $subject - I wish we'd rename it.\nIt's long outgrown the original purpose that gave it its name, and\nkeeping the name makes it unnecessarily obscure. Yes, I know Lisp still\nhas car and cdr, but we don't need to follow that example.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 15 Jun 2021 15:23:53 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Improving the isolationtester: fewer failures, less delay" }, { "msg_contents": "I wrote:\n> I'll have a v2 in a little bit --- the cfbot pointed out that there\n> were some contrib tests I'd missed fixing, and I found a couple of\n> other improvements.\n\nHere 'tis. This passes check-world, unlike v1 (mea culpa for not\nchecking that). I also cleaned up the variant expected-files,\nso it's now no worse than HEAD as far as failures in serializable\nmode go.\n\nI played a bit more with insert-conflict-specconflict.spec, too.\nIt now seems proof against delays inserted anywhere in the\nlock-acquiring subroutines.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 15 Jun 2021 17:09:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Improving the isolationtester: fewer failures, less delay" }, { "msg_contents": "Hi,\n\nOnly halfway related: I wonder if we should remove the automatic\npermutation stuff - it's practically never useful. Probably not worth\nchanging...\n\n\nOn 2021-06-15 17:09:00 -0400, Tom Lane wrote:\n> +The general form of a permutation entry is\n> +\n> +\t\"step_name\" [ ( marker [ , marker ... ] ) ]\n> +\n> +where each marker defines a \"blocking condition\". 
The step will not be\n> +reported as completed before all the blocking conditions are satisfied.\n\nMinor suggestion: I think the following would be a bit easier to read if\nthere first were a list of markers, and then separately the longer\ndescriptions. Right now it's a bit hard to see which paragraph\nintroduces a new type of marker, and which just adds further commentary.\n\n\n> +\t\t\t\t/*\n> +\t\t\t\t * Check for other steps that have finished. We must do this\n> +\t\t\t\t * if oldstep completed; while if it did not, we need to poll\n> +\t\t\t\t * all the active steps in hopes of unblocking oldstep.\n> +\t\t\t\t */\n\nSomehow I found the second sentence a bit hard to parse - I think it's\nthe \"while ...\" sounding a bit odd to me.\n\n\n> +\t\t\t\t/*\n> +\t\t\t\t * If the target session is still busy, apply a timeout to\n> +\t\t\t\t * keep from hanging indefinitely, which could happen with\n> +\t\t\t\t * incorrect blocker annotations. Use the same 2 *\n> +\t\t\t\t * max_step_wait limit as try_complete_step does for deciding\n> +\t\t\t\t * to die. 
(We don't bother with trying to cancel anything,\n> +\t\t\t\t * since it's unclear what to cancel in this case.)\n> +\t\t\t\t */\n> +\t\t\t\tif (iconn->active_step != NULL)\n> +\t\t\t\t{\n> +\t\t\t\t\tstruct timeval current_time;\n> +\t\t\t\t\tint64\t\ttd;\n> +\n> +\t\t\t\t\tgettimeofday(&current_time, NULL);\n> +\t\t\t\t\ttd = (int64) current_time.tv_sec - (int64) start_time.tv_sec;\n> +\t\t\t\t\ttd *= USECS_PER_SEC;\n> +\t\t\t\t\ttd += (int64) current_time.tv_usec - (int64) start_time.tv_usec;\n>+\t\t\t\t\tif (td > 2 * max_step_wait)\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\tfprintf(stderr, \"step %s timed out after %d seconds\\n\",\n> +\t\t\t\t\t\t\t\ticonn->active_step->name,\n> +\t\t\t\t\t\t\t\t(int) (td / USECS_PER_SEC));\n> +\t\t\t\t\t\texit(1);\n> +\t\t\t\t\t}\n> +\t\t\t\t}\n> +\t\t\t}\n> \t\t}\n\nIt might be worth printing out the list of steps the being waited for\nwhen reaching the timeout - it seems like it'd now be easier to end up\nwith a bit hard to debug specs. And one cannot necessarily look at\npg_locks or such anymore to debug em.\n\n\n> @@ -833,6 +946,19 @@ try_complete_step(TestSpec *testspec, Step *step, int flags)\n> \t\t}\n> \t}\n> \n> +\t/*\n> +\t * The step is done, but we won't report it as complete so long as there\n> +\t * are blockers.\n> +\t */\n> +\tif (step_has_blocker(pstep))\n> +\t{\n> +\t\tif (!(flags & STEP_RETRY))\n> +\t\t\tprintf(\"step %s: %s <waiting ...>\\n\",\n> +\t\t\t\t step->name, step->sql);\n> +\t\treturn true;\n> +\t}\n\nMight be a bug in my mental state machine: Will this work correctly for\nPSB_ONCE, where we'll already returned before?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Jun 2021 18:18:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Improving the isolationtester: fewer failures, less delay" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Only halfway related: I wonder if we should remove the automatic\n> permutation stuff - 
it's practically never useful. Probably not worth\n> changing...\n\nWhere it is useful, it saves a lot of error-prone typing ...\n\n> Minor suggestion: I think the folliwing would be a bit easier to read if\n> there first were a list of markers, and then separately the longer\n> descriptions. Right now it's a bit hard to see which paragraph\n> introduces a new type of marker, and which just adds further commentary.\n\nOK, will do. Will act on your other cosmetic points too, tomorrow or so.\n\n>> +\tif (step_has_blocker(pstep))\n>> +\t{\n>> +\t\tif (!(flags & STEP_RETRY))\n>> +\t\t\tprintf(\"step %s: %s <waiting ...>\\n\",\n>> +\t\t\t\t step->name, step->sql);\n>> +\t\treturn true;\n>> +\t}\n\n> Might be a bug in my mental state machine: Will this work correctly for\n> PSB_ONCE, where we'll already returned before?\n\nThis bit ignores PSB_ONCE. Once we've dropped out of try_complete_step\nthe first time, PSB_ONCE is done affecting things. (I'm not in love\nwith that symbol name, if you have a better idea.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Jun 2021 21:22:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Improving the isolationtester: fewer failures, less delay" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> [ assorted review comments ]\n\nHere's a v3 responding to your comments, plus some other cleanup:\n\n* don't use C99-style declarations-in-for, to ease planned backpatch\n\n* make use of (*) annotation in multiple-cic.spec, to get rid of\nthe need for a variant expected-file for it\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 16 Jun 2021 15:47:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Improving the isolationtester: fewer failures, less delay" } ]
[ { "msg_contents": "Hi,\n\nI thought about using the dual, but wasn't sure how many languages\nsupport it.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate", "msg_date": "Tue, 15 Jun 2021 04:59:24 +0000", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": true, "msg_subject": "Use singular number when appropriate" }, { "msg_contents": "On 15/06/2021 07:59, David Fetter wrote:\n> Hi,\n> \n> I thought about using the dual, but wasn't sure how many languages\n> support it.\n>\n> \tif (fail_count == 0 && fail_ignore_count == 0)\n> \t\tsnprintf(buf, sizeof(buf),\n> \t\t\t\t _(\" %s %d test%s passed. \"),\n> \t\t\t\t success_count == 1 ? \"The\" : \"All\",\n> \t\t\t\t success_count,\n> \t\t\t\t success_count == 1 ? \"\" : \"s\");\n\nConstructing sentences like that is bad practice for translations. See \nhttps://www.gnu.org/software/gettext/manual/html_node/Plural-forms.html.\n\n- Heikki\n\n\n", "msg_date": "Tue, 15 Jun 2021 08:16:41 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Use singular number when appropriate" }, { "msg_contents": "On Tue, Jun 15, 2021 at 04:59:24AM +0000, David Fetter wrote:\n> Hi,\n> \n> I thought about using the dual, but wasn't sure how many languages\n> support it.\n\nI don't think that you can assume that appending something will work in all\nlanguages. Note that it also doesn't always work in english (e.g. this/these),\nas seen in this inconsistent change:\n\n-\t\t\t\t _(\" %d of %d tests failed, %d of these failures ignored. \"),\n\n+\t\t\t\t _(\" %d of %d test%s failed, %d of these failures ignored. 
\"),\n\n\n", "msg_date": "Tue, 15 Jun 2021 13:18:16 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use singular number when appropriate" }, { "msg_contents": "On Tue, 2021-06-15 at 04:59 +0000, David Fetter wrote:\n> I thought about using the dual, but wasn't sure how many languages\n> support it.\n\nI think none of the languages for which we cater uses the dual.\nBut I guess you were joking, since the tests are not translated ...\n\n> \tif (fail_count == 0 && fail_ignore_count == 0)\n> \t\tsnprintf(buf, sizeof(buf),\n> -\t\t\t\t _(\" All %d tests passed. \"),\n> -\t\t\t\t success_count);\n> +\t\t\t\t _(\" %s %d test%s passed. \"),\n> +\t\t\t\t success_count == 1 ? \"The\" : \"All\",\n> +\t\t\t\t success_count,\n> +\t\t\t\t success_count == 1 ? \"\" : \"s\");\n\n... and that wouldn't be translatable.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 15 Jun 2021 09:37:11 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Use singular number when appropriate" }, { "msg_contents": "On Tue, Jun 15, 2021 at 09:37:11AM +0200, Laurenz Albe wrote:\n> On Tue, 2021-06-15 at 04:59 +0000, David Fetter wrote:\n> > I thought about using the dual, but wasn't sure how many languages\n> > support it.\n> \n> I think none of the languages for which we cater uses the dual. But\n> I guess you were joking, since the tests are not translated ...\n\nI was.\n\n> > \tif (fail_count == 0 && fail_ignore_count == 0)\n> > \t\tsnprintf(buf, sizeof(buf),\n> > -\t\t\t\t _(\" All %d tests passed. \"),\n> > -\t\t\t\t success_count);\n> > +\t\t\t\t _(\" %s %d test%s passed. \"),\n> > +\t\t\t\t success_count == 1 ? \"The\" : \"All\",\n> > +\t\t\t\t success_count,\n> > +\t\t\t\t success_count == 1 ? \"\" : \"s\");\n> \n> ... 
and that wouldn't be translatable.\n\nThanks, will rearrange.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Tue, 15 Jun 2021 15:30:57 +0000", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": true, "msg_subject": "Re: Use singular number when appropriate" } ]
[ { "msg_contents": "Currently, CREATE DATABASE forces a checkpoint, then copies all the\nfiles, then forces another checkpoint. The comments in the createdb()\nfunction explain the reasons for this. The attached patch fixes this\nproblem by making CREATE DATABASE completely WAL-logged so that now we\ncan avoid checkpoints. The patch modifies both CREATE DATABASE and\nALTER DATABASE..SET TABLESPACE to be fully WAL-logged.\n\nOne main advantage of this change is that it will be cheaper. Forcing\ncheckpoints on an idle system is no big deal, but when the system is\nunder heavy write load, it's very expensive. Another advantage is that\nit makes things better for features like TDE, which might want the\npages in the source database to be encrypted using a different key or\nnonce than the pages in the target database.\n\n\nDesign Idea:\n-----------------\nFirst, create the target database directory along with the version\nfile and WAL-log this operation. Create the \"relation map file\" in\nthe target database and copy the content from the source database. For\nthis, we can use some modified versions of the write_relmap_file() and\nWAL-log the relmap create operation along with the file content. Now,\nread the relmap file to find the relfilenode for pg_class and then we\nread pg_class block by block and decode the tuples. For reading the\npg_class blocks, we can use ReadBufferWithoutRelCache() so that we\ndon't need the relcache. Nothing prevents us from checking visibility\nfor tuples in another database because CLOG is global to the cluster.\nAnd nothing prevents us from deforming those tuples because the column\ndefinitions for pg_class have to be the same in every database. Then\nwe can get the relfilenode of every file we need to copy, and prepare\na list of all such relfilenode. Next, for each relfilenode in the\nsource database, create a respective relfilenode in the target\ndatabase (for all forks) using smgrcreate, which is already a\nWAL-logged operation. 
Now read the source relfilenode block by block\nusing ReadBufferWithoutRelCache() and copy the block to the target\nrelfilenode using smgrextend() and WAL-log them using log_newpage().\nFor the source database, we can not directly use the smgrread(),\nbecause there could be some dirty buffers so we will have to read them\nthrough the buffer manager interface, otherwise, we will have to flush\nall the dirty buffers.\n\nWAL sequence using pg_waldump\n----------------------------------------------------\n1. (new wal to create db dir and write PG_VERSION file)\nrmgr: Database desc: CREATE create dir 1663/16394\n\n2. (new wal to create and write relmap file)\nrmgr: RelMap desc: CREATE database 16394 tablespace 1663 size 512\n\n3. (create relfilenode)\nrmgr: Storage desc: CREATE base/16394/16384\nrmgr: Storage desc: CREATE base/16394/2619\n\n4. (write page data)\nrmgr: XLOG desc: FPI , blkref #0: rel 1663/16394/2619 blk 0 FPW\nrmgr: XLOG desc: FPI , blkref #0: rel 1663/16394/2619 blk 1 FPW\n............\n5. 
(create other forks)\nrmgr: Storage desc: CREATE base/16394/2619_fsm\nrmgr: Storage CREATE base/16394/2619_vm\n.............\n\nI have attached a POC patch, which shows this idea, with this patch\nall basic sanity testing and the \"check-world\" is passing.\n\nOpen points:\n-------------------\n- This is a POC patch so needs more refactoring/cleanup and testing.\n- Might need to relook into the SMGR level API usage.\n\n\nCredits:\n-----------\nThanks to Robert Haas, for suggesting this idea and the high-level design.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 15 Jun 2021 16:50:24 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "[Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Jun 15, 2021 at 04:50:24PM +0530, Dilip Kumar wrote:\n> Currently, CREATE DATABASE forces a checkpoint, then copies all the\n> files, then forces another checkpoint. The comments in the createdb()\n> function explain the reasons for this. The attached patch fixes this\n> problem by making CREATE DATABASE completely WAL-logged so that now we\n> can avoid checkpoints. The patch modifies both CREATE DATABASE and\n> ALTER DATABASE..SET TABLESPACE to be fully WAL-logged.\n> \n> One main advantage of this change is that it will be cheaper. Forcing\n> checkpoints on an idle system is no big deal, but when the system is\n> under heavy write load, it's very expensive. Another advantage is that\n> it makes things better for features like TDE, which might want the\n> pages in the source database to be encrypted using a different key or\n> nonce than the pages in the target database.\n\nI only had a quick look at the patch but AFAICS your patch makes the new\nbehavior mandatory. Wouldn't it make sense to have a way to use the previous\napproach? 
People wanting to copy a somewhat big database with a\nslow replication may prefer to pay 2 checkpoints rather than stream everything.\nSame for people who have an otherwise idle system (I often use that to make\ntemporary backups and/or prepare multiple datasets and most of the time the\ncheckpoint is basically free).\n\n\n", "msg_date": "Tue, 15 Jun 2021 19:31:05 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On 15/06/2021 14:20, Dilip Kumar wrote:\n> Design Idea:\n> -----------------\n> First, create the target database directory along with the version\n> file and WAL-log this operation. Create the \"relation map file\" in\n> the target database and copy the content from the source database. For\n> this, we can use some modified versions of the write_relmap_file() and\n> WAL-log the relmap create operation along with the file content. Now,\n> read the relmap file to find the relfilenode for pg_class and then we\n> read pg_class block by block and decode the tuples. For reading the\n> pg_class blocks, we can use ReadBufferWithoutRelCache() so that we\n> don't need the relcache. Nothing prevents us from checking visibility\n> for tuples in another database because CLOG is global to the cluster.\n> And nothing prevents us from deforming those tuples because the column\n> definitions for pg_class have to be the same in every database. Then\n> we can get the relfilenode of every file we need to copy, and prepare\n> a list of all such relfilenode.\n\nI guess that would work, but you could also walk the database directory \nlike copydir() does. How you find the relations to copy is orthogonal to \nwhether you WAL-log them or use checkpoints. 
And whether you use the \nbuffer cache is also orthogonal to the rest of the proposal; you could \nissue FlushDatabaseBuffers() instead of a checkpoint.\n\n> Next, for each relfilenode in the\n> source database, create a respective relfilenode in the target\n> database (for all forks) using smgrcreate, which is already a\n> WAL-logged operation. Now read the source relfilenode block by block\n> using ReadBufferWithoutRelCache() and copy the block to the target\n> relfilenode using smgrextend() and WAL-log them using log_newpage().\n> For the source database, we can not directly use the smgrread(),\n> because there could be some dirty buffers so we will have to read them\n> through the buffer manager interface, otherwise, we will have to flush\n> all the dirty buffers.\n\nYeah, WAL-logging the contents of the source database would certainly be \nless weird than the current system. As Julien also pointed out, the \nquestion is, are there people using on \"CREATE DATABASE foo TEMPLATE \nbar\" to copy a large source database, on the premise that it's fast \nbecause it skips WAL-logging?\n\nIn principle, we could have both mechanisms, and use the new WAL-logged \nsystem if the database is small, and the old system with checkpoints if \nit's large. But I don't like idea of having to maintain both.\n\n- Heikki\n\n\n", "msg_date": "Tue, 15 Jun 2021 15:04:08 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Jun 15, 2021 at 5:34 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 15/06/2021 14:20, Dilip Kumar wrote:\n> > Design Idea:\n. Then\n> > we can get the relfilenode of every file we need to copy, and prepare\n> > a list of all such relfilenode.\n>\n> I guess that would work, but you could also walk the database directory\n> like copydir() does. 
How you find the relations to copy is orthogonal to\n> whether you WAL-log them or use checkpoints. And whether you use the\n> buffer cache is also orthogonal to the rest of the proposal; you could\n> issue FlushDatabaseBuffers() instead of a checkpoint.\n\nYeah, that would also work, but I thought since we are already\navoiding the checkpoint so let's avoid FlushDatabaseBuffers() also and\ndirectly use the lower level buffer manager API which doesn't need\nrelcache. And I am using pg_class to identify the useful relfilenode\nso that we can avoid processing some unwanted relfilenode but yeah I\nagree that this is orthogonal to whether we use checkpoint or not.\n\n> Yeah, WAL-logging the contents of the source database would certainly be\n> less weird than the current system. As Julien also pointed out, the\n> question is, are there people using on \"CREATE DATABASE foo TEMPLATE\n> bar\" to copy a large source database, on the premise that it's fast\n> because it skips WAL-logging?\n>\n> In principle, we could have both mechanisms, and use the new WAL-logged\n> system if the database is small, and the old system with checkpoints if\n> it's large. But I don't like idea of having to maintain both.\n\nYeah, I agree in some cases, where we don't have many dirty buffers,\ncheckpointing can be faster. I think code wise maintaining two\napproaches will not be a very difficult job because the old approach\njust calls copydir(), but I am thinking about how can we decide which\napproach is better in which scenario. I don't think we can take calls\njust based on the database size? It would also depend upon many other\nfactors e.g. how busy your system is, how many total dirty buffers are\nthere in the cluster right? 
because checkpoint will affect the\nperformance of the operation going on in other databases in the\ncluster.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Jun 2021 18:11:23 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Am I mistaken in thinking that this would allow CREATE DATABASE to run\ninside a transaction block now, further reducing the DDL commands that are\nnon-transactional?", "msg_date": "Tue, 15 Jun 2021 09:18:58 -0400", "msg_from": "Adam Brusselback <adambrusselback@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "\nOn 6/15/21 8:04 AM, Heikki Linnakangas wrote:\n>\n> Yeah, WAL-logging the contents of the source database would certainly\n> be less weird than the current system. As Julien also pointed out, the\n> question is, are there people using on \"CREATE DATABASE foo TEMPLATE\n> bar\" to copy a large source database, on the premise that it's fast\n> because it skips WAL-logging?\n\n\nI'm 100% certain there are. It's not even a niche case.\n\n\n>\n> In principle, we could have both mechanisms, and use the new\n> WAL-logged system if the database is small, and the old system with\n> checkpoints if it's large. But I don't like idea of having to maintain\n> both.\n>\n>\n\n
But I don't like idea of having to maintain\n> both.\n>\n>\n\nRather than use size, I'd be inclined to say use this if the source\ndatabase is marked as a template, and use the copydir approach for\nanything that isn't.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 15 Jun 2021 09:31:22 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Jun 15, 2021 at 9:31 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> Rather than use size, I'd be inclined to say use this if the source\n> database is marked as a template, and use the copydir approach for\n> anything that isn't.\n\nLooks like a good approach.\n\n\n", "msg_date": "Tue, 15 Jun 2021 22:07:32 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "At Tue, 15 Jun 2021 22:07:32 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Tue, Jun 15, 2021 at 9:31 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> >\n> > Rather than use size, I'd be inclined to say use this if the source\n> > database is marked as a template, and use the copydir approach for\n> > anything that isn't.\n> \n> Looks like a good approach.\n\nIf we are willing to maintain the two methods.\n\nCouldn't we just skip the checkpoints if the database is known to\n\"clean\", which means no page has been loaded for the database since\nstartup? We can use the \"template\" mark to reject connections to the\ndatabase. 
(I'm afraid that we also should prevent vacuum to visit the\ntemplate databases, but...)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 16 Jun 2021 15:27:21 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Jun 16, 2021 at 03:27:21PM +0900, Kyotaro Horiguchi wrote:\n> \n> If we are willing to maintain the two methods.\n> Couldn't we just skip the checkpoints if the database is known to\n> \"clean\", which means no page has been loaded for the database since\n> startup? We can use the \"template\" mark to reject connections to the\n> database. (I'm afraid that we also should prevent vacuum to visit the\n> template databases, but...)\n\nThere's already a datallowconn for that purpose. Modifying template databases\nis a common practice and we shouldn't prevent that.\n\nBut the database currently not accepting connections doesn't mean that\nthere is no dirty buffer and/or pending unlink, so it doesn't look like\nsomething that could be optimized, at least for the majority of use cases.\n\n\n", "msg_date": "Wed, 16 Jun 2021 14:48:18 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Jun 15, 2021 at 7:01 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> Rather than use size, I'd be inclined to say use this if the source\n> database is marked as a template, and use the copydir approach for\n> anything that isn't.\n\nYeah, that is possible, on the other thought wouldn't it be good to\nprovide control to the user by providing two different commands, e.g.\nCOPY DATABASE for the existing method (copydir) and CREATE DATABASE\nfor the new method (fully wal logged)?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: 
http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Jun 2021 13:22:12 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2021-06-15 16:50:24 +0530, Dilip Kumar wrote:\n> The patch modifies both CREATE DATABASE and ALTER DATABASE..SET\n> TABLESPACE to be fully WAL-logged.\n\nGenerally quite a bit in favor of this - the current approach is very\nheavyweight, slow and I think we have a few open corner bugs related to\nit.\n\n\n> Design Idea:\n> -----------------\n> First, create the target database directory along with the version\n> file and WAL-log this operation.\n\nWhat happens if you crash / promote at this point?\n\n\n> Create the \"relation map file\" in the target database and copy the\n> content from the source database. For this, we can use some modified\n> versions of the write_relmap_file() and WAL-log the relmap create\n> operation along with the file content. Now, read the relmap file to\n> find the relfilenode for pg_class and then we read pg_class block by\n> block and decode the tuples.\n\nThis doesn't seem like a great approach - you're not going to be able to\nuse much of the normal infrastructure around processing tuples. So it\nseems like it'd end up with quite a bit of special case code that needs\nto maintained in parallel.\n\n\n> Now read the source relfilenode block by block using\n> ReadBufferWithoutRelCache() and copy the block to the target\n> relfilenode using smgrextend() and WAL-log them using log_newpage().\n> For the source database, we can not directly use the smgrread(),\n> because there could be some dirty buffers so we will have to read them\n> through the buffer manager interface, otherwise, we will have to flush\n> all the dirty buffers.\n\nI think we might need a bit more batching for the WAL logging. 
There are\ncases of template database considerably bigger than the default and the\noverhead of logging each write separately seems likely to be noticeable.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Jun 2021 14:58:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2021-06-15 18:11:23 +0530, Dilip Kumar wrote:\n> On Tue, Jun 15, 2021 at 5:34 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >\n> > On 15/06/2021 14:20, Dilip Kumar wrote:\n> > > Design Idea:\n> . Then\n> > > we can get the relfilenode of every file we need to copy, and prepare\n> > > a list of all such relfilenode.\n> >\n> > I guess that would work, but you could also walk the database directory\n> > like copydir() does. How you find the relations to copy is orthogonal to\n> > whether you WAL-log them or use checkpoints. And whether you use the\n> > buffer cache is also orthogonal to the rest of the proposal; you could\n> > issue FlushDatabaseBuffers() instead of a checkpoint.\n> \n> Yeah, that would also work, but I thought since we are already\n> avoiding the checkpoint so let's avoid FlushDatabaseBuffers() also and\n> directly use the lower level buffer manager API which doesn't need\n> recache. And I am using pg_class to identify the useful relfilenode\n> so that we can avoid processing some unwanted relfilenode but yeah I\n> agree that this is orthogonal to whether we use checkpoint or not.\n\nIt's not entirely obvious to me that it's important to avoid\nFlushDatabaseBuffers() on its own. Forcing a checkpoint is problematic because\nit unnecessarily writes out dirty buffers in other databases, triggers FPWs\netc. Normally a database used as a template won't have a meaningful amount of\ndirty buffers itself, so the FlushDatabaseBuffers() shouldn't trigger a lot of\nwrites. 
Of course, there is the matter of FlushDatabaseBuffers() not being\ncheap with a large shared_buffers - but I suspect that's not a huge factor\ncompared to the rest of the database creation cost.\n\nI think the better argument for going through shared buffers is that it might\nbe worth doing so for the *target* database. A common use of frequently\ncreating databases, in particular with a non-default template database, is to\nrun regression tests with pre-created schema / data - writing out all that data\njust to have it then dropped a few seconds later after the regression test\ncompleted is wasteful.\n\n\n\n> > In principle, we could have both mechanisms, and use the new WAL-logged\n> > system if the database is small, and the old system with checkpoints if\n> > it's large. But I don't like idea of having to maintain both.\n> \n> Yeah, I agree in some cases, where we don't have many dirty buffers,\n> checkpointing can be faster.\n\nI don't think the main issue is the speed of checkpointing itself? The reason\nto maintain the old paths is that the \"new approach\" is bloating WAL volume,\nno? Right now cloning a 1TB database costs a few hundred bytes of WAL and about\n1TB of write IO. With the proposed approach, the write volume approximately\ndoubles, because there'll also be about 1TB in WAL.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Jun 2021 15:13:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "\n\nOn 6/15/21 3:31 PM, Andrew Dunstan wrote:\n> \n> On 6/15/21 8:04 AM, Heikki Linnakangas wrote:\n>>\n>> Yeah, WAL-logging the contents of the source database would certainly\n>> be less weird than the current system. 
As Julien also pointed out, the\n>> question is, are there people using on \"CREATE DATABASE foo TEMPLATE\n>> bar\" to copy a large source database, on the premise that it's fast\n>> because it skips WAL-logging?\n> \n> \n> I'm 100% certain there are. It's not even a niche case.\n> \n> \n>>\n>> In principle, we could have both mechanisms, and use the new\n>> WAL-logged system if the database is small, and the old system with\n>> checkpoints if it's large. But I don't like idea of having to maintain\n>> both.\n>>\n>>\n> \n> Rather than use size, I'd be inclined to say use this if the source\n> database is marked as a template, and use the copydir approach for\n> anything that isn't.\n> \n\n\nI think we should be asking what is the benefit of that use case, and \nperhaps try addressing that without having to maintain two entirely \ndifferent ways to do CREATE DATABASE. It's not like we're sure the \ncurrent code is 100% reliable in various corner cases, I doubt having \ntwo separate approaches will improve the situation :-/\n\nI can see three reasons why people want to skip the WAL logging:\n\n1) it's faster, because there's no CPU and I/O for building the WAL\n\n I wonder if some optimization / batching could help with (1), as\n suggested by Andres elsewhere in this thread.\n\n2) it saves the amount of WAL (could matter with large template \ndatabases and WAL archiving, etc.)\n\n We can't really do much about this - we need to log all the data. But\n the batching from (1) might help a bit too, I guess.\n\n3) saves the amount of WAL that needs to be copied to standby, so that \nthere's no increase of replication lag, etc. particularly when the \nnetwork link has limited bandwidth\n\n I think this is a more general issue - some operations that may\n generate a lot of WAL, and we generally assume it's better to do\n that rather than hold exclusive locks for long time. 
But maybe we\n could have some throttling, to limit the amount of WAL per second,\n similarly to what we have for plain vacuum.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 17 Jun 2021 00:20:50 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Jun 17, 2021 at 3:28 AM Andres Freund\n<andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-06-15 16:50:24 +0530, Dilip Kumar wrote:\n> > The patch modifies both CREATE DATABASE and ALTER DATABASE..SET\n> > TABLESPACE to be fully WAL-logged.\n>\n> Generally quite a bit in favor of this - the current approach is very\n> heavyweight, slow and I think we have a few open corner bugs related to\n> it.\n\nGreat!\n\n>\n> > Design Idea:\n> > -----------------\n> > First, create the target database directory along with the version\n> > file and WAL-log this operation.\n>\n> What happens if you crash / promote at this point?\n\nI will check this.\n\n> > Create the \"relation map file\" in the target database and copy the\n> > content from the source database. For this, we can use some modified\n> > versions of the write_relmap_file() and WAL-log the relmap create\n> > operation along with the file content. Now, read the relmap file to\n> > find the relfilenode for pg_class and then we read pg_class block by\n> > block and decode the tuples.\n>\n> This doesn't seem like a great approach - you're not going to be able to\n> use much of the normal infrastructure around processing tuples. So it\n> seems like it'd end up with quite a bit of special case code that needs\n> to maintained in parallel.\n\nYeah, this needs some special-purpose code but it is not too much\ncode. 
I agree that instead of scanning the pg_class we can scan all\nthe tablespaces and under that identify the source database directory\nas we do now. And from there we can copy each relfilenode block by\nblock with wal log. Honestly, these both seem like a special-purpose\ncode. Another problem with directly scanning the directory is, how we\nare supposed to get the \"relpersistence\" which is stored in pg_class\ntuple right?\n\n>\n> > Now read the source relfilenode block by block using\n> > ReadBufferWithoutRelCache() and copy the block to the target\n> > relfilenode using smgrextend() and WAL-log them using log_newpage().\n> > For the source database, we can not directly use the smgrread(),\n> > because there could be some dirty buffers so we will have to read them\n> > through the buffer manager interface, otherwise, we will have to flush\n> > all the dirty buffers.\n>\n> I think we might need a bit more batching for the WAL logging. There are\n> cases of template database considerably bigger than the default and the\n> overhead of logging each write separately seems likely to be noticable.\n\nYeah, we can do that, and instead of using log_newpage() we can use\nlog_newpages(), to log multiple pages at once.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Jun 2021 11:15:13 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Jun 17, 2021 at 3:43 AM Andres Freund <andres@anarazel.de> wrote:\n\n> > Yeah, that would also work, but I thought since we are already\n> > avoiding the checkpoint so let's avoid FlushDatabaseBuffers() also and\n> > directly use the lower level buffer manager API which doesn't need\n> > recache. 
And I am using pg_class to identify the useful relfilenode\n> > so that we can avoid processing some unwanted relfilenode but yeah I\n> > agree that this is orthogonal to whether we use checkpoint or not.\n>\n> It's not entirely obvious to me that it's important to avoid\n> FlushDatabaseBuffers() on its own. Forcing a checkpoint is problematic because\n> it unnecessarily writes out dirty buffers in other databases, triggers FPWs\n> etc. Normally a database used as a template won't have a meaningful amount of\n> dirty buffers itself, so the FlushDatabaseBuffers() shouldn't trigger a lot of\n> writes. Of course, there is the matter of FlushDatabaseBuffers() not being\n> cheap with a large shared_buffers - but I suspect that's not a huge factor\n> compared to the rest of the database creation cost.\n\nOkay so if I think from that POW, then maybe we can just\nFlushDatabaseBuffers() and then directly use smgrread() calls.\n\n> I think the better argument for going through shared buffers is that it might\n> be worth doing so for the *target* database. A common use of frequently\n> creating databases, in particular with a non-default template database, is to\n> run regression tests with pre-created schema / data - writing out all that data\n> just to have it then dropped a few seconds later after the regression test\n> completed is wasteful.\n\nOkay, I am not sure how common this use case is but for this use case\nit makes sense to use bufmgr for the target database.\n\n> > > In principle, we could have both mechanisms, and use the new WAL-logged\n> > > system if the database is small, and the old system with checkpoints if\n> > > it's large. But I don't like idea of having to maintain both.\n> >\n> > Yeah, I agree in some cases, where we don't have many dirty buffers,\n> > checkpointing can be faster.\n>\n> I don't think the main issue is the speed of checkpointing itself? 
The reaoson\n> to maintain the old paths is that the \"new approach\" is bloating WAL volume,\n> no? Right now cloning a 1TB database costs a few hundred bytes of WAL and about\n> 1TB of write IO. With the proposed approach, the write volume approximately\n> doubles, because there'll also be about 1TB in WAL.\n\nMake sense.\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Jun 2021 13:23:36 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On 17/06/2021 08:45, Dilip Kumar wrote:\n> Another problem with directly scanning the directory is, how we\n> are supposed to get the \"relpersistence\" which is stored in pg_class\n> tuple right?\n\nYou only need relpersistence if you want to use the buffer cache, right? \nI think that's a good argument for not using it.\n\n- Heikki\n\n\n", "msg_date": "Thu, 17 Jun 2021 12:20:39 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Jun 17, 2021 at 2:50 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 17/06/2021 08:45, Dilip Kumar wrote:\n> > Another problem with directly scanning the directory is, how we\n> > are supposed to get the \"relpersistence\" which is stored in pg_class\n> > tuple right?\n>\n> You only need relpersistence if you want to use the buffer cache, right?\n> I think that's a good argument for not using it.\n\nYeah, that is the one place, another place I am using it to decide\nwhether to WAL log the new page while writing into the target\nrelfilenode, if it is unlogged relation then I am not WAL logging. 
But\nnow, I think that is not the right idea, during creating the database\nwe should WAL log all the pages irrespective of whether the table is\nlogged or unlogged.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Jun 2021 18:34:04 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Jun 16, 2021 at 6:13 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't think the main issue is the speed of checkpointing itself? The reaoson\n> to maintain the old paths is that the \"new approach\" is bloating WAL volume,\n> no? Right now cloning a 1TB database costs a few hundred bytes of WAL and about\n> 1TB of write IO. With the proposed approach, the write volume approximately\n> doubles, because there'll also be about 1TB in WAL.\n\nThis is a good point, but on the other hand, I think this smells a lot\nlike the wal_level=minimal optimization where we don't need to log\ndata being bulk-loaded into a table created in the same transaction if\nwal_level=minimal. In theory, that optimization has a lot of value,\nbut in practice it gets a lot of bad press on this list, because (1)\nsometimes doing the fsync is more expensive than writing the extra WAL\nwould have been and (2) most people want to run with\nwal_level=replica/logical so it ends up being a code path that isn't\nused much and is therefore more likely than average to have bugs\nnobody's terribly interested in fixing (except Noah ... thanks Noah!).\nIf we add features in the future, lke TDE or perhaps incremental\nbackup, that rely on new pages getting new LSNs instead of recycled\nones, this may turn into the same kind of wart. And as with that\noptimization, you're probably not even better off unless the database\nis pretty big, and you might be worse off if you have to do fsyncs or\nflush buffers synchronously. 
I'm not severely opposed to keeping both\nmethods around, so if that's really what people want to do, OK, but I\nguess I wonder whether we're really going to be happy with that\ndecision down the road.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Jun 2021 13:41:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Jun 17, 2021 at 5:20 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> You only need relpersistence if you want to use the buffer cache, right?\n> I think that's a good argument for not using it.\n\nI think the root of the problem with this feature is that it doesn't\ngo through shared_buffers, so in my opinion, it would be better if we\ncan make it all go through shared_buffers. It seems like you're\nadvocating a middle ground where half of the operation goes through\nshared_buffers and the other half doesn't, but that sounds like\ngetting rid of half of the hack when we could have gotten rid of all\nof it. I think things that don't go through shared_buffers are bad,\nand we should be making an effort to get rid of them where we can\nreasonably do so. I believe I've both introduced and fixed my share of\nbugs that were caused by such cases, and I think the behavior of the\nwhole system would be a lot easier to reason about if we had fewer of\nthose, or none.\n\nI can also think of at least one significant advantage of driving this\noff the remote database's pg_class rather than the filesystem\ncontents. It's a known defect of PostgreSQL that if you create a table\nand then crash, you leave behind a dead file that never gets removed.\nIf you now copy the database that contains that orphaned file, you\nwould ideally prefer not to copy that file, but if you do a copy based\non the filesystem contents, then you will. 
If you drive the copy off\nof pg_class, you won't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Jun 2021 13:53:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2021-06-17 13:53:38 -0400, Robert Haas wrote:\n> On Thu, Jun 17, 2021 at 5:20 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > You only need relpersistence if you want to use the buffer cache, right?\n> > I think that's a good argument for not using it.\n\nDo we really need pg_class to figure this out? Can't we just check if\nthe relation has an init fork?\n\n\n> I can also think of at least one significant advantage of driving this\n> off the remote database's pg_class rather than the filesystem\n> contents. It's a known defect of PostgreSQL that if you create a table\n> and then crash, you leave behind a dead file that never gets removed.\n> If you now copy the database that contains that orphaned file, you\n> would ideally prefer not to copy that file, but if you do a copy based\n> on the filesystem contents, then you will. If you drive the copy off\n> of pg_class, you won't.\n\nI'm very unconvinced this is the place to tackle the issue of orphan\nrelfilenodes. It'd be one thing if it were doable by existing code,\ne.g. because we supported cross-database relation accesses fully, but we\ndon't.\n\nAdding a hacky special case implementation for cross-database relation\naccesses that violates all kinds of assumptions (like holding a lock on\na relation when accessing it / pinning pages, processing relcache\ninvals, ...) doesn't seem like a good plan.\n\nI don't think this is an academic concern: You need to read from shared\nbuffers to read the \"remote\" pg_class, otherwise you'll potentially miss\nchanges. 
But it's not correct to read in pages or to pin pages without\nholding a lock, and there's code that relies on that (see\ne.g. InvalidateBuffer()).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Jun 2021 11:17:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Jun 17, 2021 at 2:17 PM Andres Freund <andres@anarazel.de> wrote:\n> Adding a hacky special case implementation for cross-database relation\n> accesses that violates all kinds of assumptions (like holding a lock on\n> a relation when accessing it / pinning pages, processing relcache\n> invals, ...) doesn't seem like a good plan.\n\nI agree that we don't want hacky code that violates assumptions, but\nbypassing shared_buffers is a bit hacky, too. Can't we lock the\nrelations as we're copying them? We know pg_class's OID a fortiori,\nand we can find out all the other OIDs as we go.\n\nI'm just thinking that the hackiness of going around shared_buffers\nfeels irreducible, but maybe the hackiness in the patch is something\nthat can be solved with more engineering.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Jun 2021 14:22:52 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2021-06-17 14:22:52 -0400, Robert Haas wrote:\n> On Thu, Jun 17, 2021 at 2:17 PM Andres Freund <andres@anarazel.de> wrote:\n> > Adding a hacky special case implementation for cross-database relation\n> > accesses that violates all kinds of assumptions (like holding a lock on\n> > a relation when accessing it / pinning pages, processing relcache\n> > invals, ...) 
doesn't seem like a good plan.\n> \n> I agree that we don't want hacky code that violates assumptions, but\n> bypassing shared_buffers is a bit hacky, too. Can't we lock the\n> relations as we're copying them? We know pg_class's OID a fortiori,\n> and we can find out all the other OIDs as we go.\n\nWe possibly can - but I'm not sure that won't end up violating some\nother assumptions.\n\n\n> I'm just thinking that the hackiness of going around shared_buffers\n> feels irreducible, but maybe the hackiness in the patch is something\n> that can be solved with more engineering.\n\nWhich bypassing of shared buffers are you talking about here? We'd still\nhave to solve a subset of the issues around locking (at least on the\nsource side), but I don't think we need to read pg_class contents to be\nable to go through shared_buffers? As I suggested, we can use the init\nfork presence to infer relpersistence?\n\nOr do you mean that looking at the filesystem at all is bypassing shared\nbuffers?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Jun 2021 11:48:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Jun 17, 2021 at 2:48 PM Andres Freund <andres@anarazel.de> wrote:\n> Or do you mean that looking at the filesystem at all is bypassing shared\n> buffers?\n\nThis is what I mean. 
I think we will end up in a better spot if we can\navoid doing that without creating too much ugliness elsewhere.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Jun 2021 15:20:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Jun 18, 2021 at 12:50 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jun 17, 2021 at 2:48 PM Andres Freund <andres@anarazel.de> wrote:\n> > Or do you mean that looking at the filesystem at all is bypassing shared\n> > buffers?\n>\n> This is what I mean. I think we will end up in a better spot if we can\n> avoid doing that without creating too much ugliness elsewhere.\n>\n\nThe patch was not getting applied on head so I have rebased it, along\nwith that now I have used bufmgr layer for writing writing/logging\ndestination pages as well instead of directly using sgmr layer.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 6 Jul 2021 15:00:53 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Jul 6, 2021 at 3:00 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Jun 18, 2021 at 12:50 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Thu, Jun 17, 2021 at 2:48 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Or do you mean that looking at the filesystem at all is bypassing shared\n> > > buffers?\n> >\n> > This is what I mean. 
I think we will end up in a better spot if we can\n> > avoid doing that without creating too much ugliness elsewhere.\n> >\n>\n> The patch was not getting applied on head so I have rebased it, along\n> with that now I have used bufmgr layer for writing writing/logging\n> destination pages as well instead of directly using sgmr layer.\n\nI have done further cleanup of the patch and also divided it into 3 patches.\n\n0001 - Currently, write_relmap_file and load_relmap_file are tightly\ncoupled with shared_map and local_map. As part of the higher level\npatch set we need remap read/write interfaces that are not dependent\nupon shared_map and local_map, and we should be able to pass map\nmemory as an external parameter instead.\n\n0002- Support new interfaces in relmapper, 1) Support copying the\nrelmap file from one database path to the other database path. 2) Like\nRelationMapOidToFilenode, provide another interface which do the same\nbut instead of getting it for the database we are connected to it will\nget it for the input database path. These interfaces are required for\nthe next patch for supporting the wal logged created database.\n\n0003- The main patch for WAL logging the created database operation.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 2 Sep 2021 11:36:21 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Sep 2, 2021 at 2:06 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> 0003- The main patch for WAL logging the created database operation.\n\nAndres pointed out that this approach ends up accessing relations\nwithout taking a lock on them. 
It doesn't look like you did anything\nabout that.\n\n+ /* Built-in oids are mapped directly */\n+ if (classForm->oid < FirstGenbkiObjectId)\n+ relfilenode = classForm->oid;\n+ else if (OidIsValid(classForm->relfilenode))\n+ relfilenode = classForm->relfilenode;\n+ else\n+ continue;\n\nAm I missing something, or is this totally busted?\n\n[rhaas pgsql]$ createdb\n[rhaas pgsql]$ psql\npsql (15devel)\nType \"help\" for help.\n\nrhaas=# select oid::regclass from pg_class where relfilenode not in\n(0, oid) and oid < 10000;\n oid\n-----\n(0 rows)\n\nrhaas=# vacuum full pg_attrdef;\nVACUUM\nrhaas=# select oid::regclass from pg_class where relfilenode not in\n(0, oid) and oid < 10000;\n oid\n--------------------------------\n pg_attrdef_adrelid_adnum_index\n pg_attrdef_oid_index\n pg_toast.pg_toast_2604\n pg_toast.pg_toast_2604_index\n pg_attrdef\n(5 rows)\n\n /*\n+ * Now drop all buffers holding data of the target database; they should\n+ * no longer be dirty so DropDatabaseBuffers is safe.\n\nThe way things worked before, this was true, but now AFAICS it's\nfalse. I'm not sure whether that means that DropDatabaseBuffers() here\nis actually unsafe or whether it just means that you haven't updated\nthe comment to explain the reason.\n\n+ * Since we copy the file directly without looking at the shared buffers,\n+ * we'd better first flush out any pages of the source relation that are\n+ * in shared buffers. We assume no new changes will be made while we are\n+ * holding exclusive lock on the rel.\n\nDitto.\n\n+ /* As always, WAL must hit the disk before the data update does. */\n\nActually, the way it's coded now, part of the on-disk changes are done\nbefore WAL is issued, and part are done after. I doubt that's the\nright idea. There's nothing special about writing the actual payload\nbytes vs. 
the other on-disk changes (creating directories and files).\nIn any case the ordering deserves a better-considered comment than\nthis one.\n\n+ XLogRegisterData((char *) PG_MAJORVERSION, nbytes);\n\nSurely this is utterly pointless.\n\n+ CopyDatabase(src_dboid, dboid, src_deftablespace, dst_deftablespace);\n PG_END_ENSURE_ERROR_CLEANUP(createdb_failure_callback,\n PointerGetDatum(&fparms));\n\nI'd leave braces around the code for which we're ensuring error\ncleanup even if it's just one line.\n\n+ if (info == XLOG_DBASEDIR_CREATE)\n {\n xl_dbase_create_rec *xlrec = (xl_dbase_create_rec *) XLogRecGetData(record);\n\nSeems odd to rename the record but not change the name of the struct.\nI think I would be inclined to keep the existing record name, but if\nwe're going to change it we should be more thorough.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Sep 2021 11:22:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Jun 18, 2021 at 12:18 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2021-06-17 14:22:52 -0400, Robert Haas wrote:\n> > On Thu, Jun 17, 2021 at 2:17 PM Andres Freund <andres@anarazel.de>\n> wrote:\n> > > Adding a hacky special case implementation for cross-database relation\n> > > accesses that violates all kinds of assumptions (like holding a lock on\n> > > a relation when accessing it / pinning pages, processing relcache\n> > > invals, ...) doesn't seem like a good plan.\n> >\n> > I agree that we don't want hacky code that violates assumptions, but\n> > bypassing shared_buffers is a bit hacky, too. Can't we lock the\n> > relations as we're copying them? 
We know pg_class's OID a fortiori,\n> > and we can find out all the other OIDs as we go.\n\n\n> We possibly can - but I'm not sure that won't end up violating some\n> other assumptions.\n>\n\nYeah, we can surely lock the relation as described by Robert, but IMHO,\nwhile creating the database we are already holding the exclusive lock on\nthe database and there is no one else allowed to be connected to the\ndatabase, so do we actually need to bother about the lock for the\ncorrectness?\n\n\n> > I'm just thinking that the hackiness of going around shared_buffers\n> > feels irreducible, but maybe the hackiness in the patch is something\n> > that can be solved with more engineering.\n>\n> Which bypassing of shared buffers are you talking about here? We'd still\n> have to solve a subset of the issues around locking (at least on the\n> source side), but I don't think we need to read pg_class contents to be\n> able to go through shared_buffers? As I suggested, we can use the init\n> fork presence to infer relpersistence?\n>\n\nI believe we want to avoid scanning pg_class for identifying the relation\nlist so that we can avoid this special-purpose code? IMHO, scanning the\ndisk, such as going through all the tablespaces and then finding the source\ndatabase directory and identifying each relfilenode, also appears to be a\nspecial-purpose code, unless I am missing what you mean by special-purpose\ncode.\n\nOr do you mean that looking at the filesystem at all is bypassing shared\n> buffers?\n>\n\nI think we already have such a code in multiple places where we bypass the\nshared buffers for copying the relation\ne.g. index_copy_data(), heapam_relation_copy_data(). So the current system\nas it stands, we can not claim that we are designing something for the\nfirst time where we are bypassing the shared buffers. 
So if we are\nthinking that bypassing the shared buffers is hackish and we don't want to\nuse it for the new patches then lets avoid it completely even while\nidentifying the relfilenodes to be copied.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Fri, Jun 18, 2021 at 12:18 AM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2021-06-17 14:22:52 -0400, Robert Haas wrote:\n> On Thu, Jun 17, 2021 at 2:17 PM Andres Freund <andres@anarazel.de> wrote:\n> > Adding a hacky special case implementation for cross-database relation\n> > accesses that violates all kinds of assumptions (like holding a lock on\n> > a relation when accessing it / pinning pages, processing relcache\n> > invals, ...) doesn't seem like a good plan.\n> \n> I agree that we don't want hacky code that violates assumptions, but\n> bypassing shared_buffers is a bit hacky, too. Can't we lock the\n> relations as we're copying them? We know pg_class's OID a fortiori,\n> and we can find out all the other OIDs as we go. \n\nWe possibly can - but I'm not sure that won't end up violating some\nother assumptions.Yeah, we can surely lock the relation as described by Robert, but IMHO, while creating the database we are already holding the exclusive lock on the database and there is no one else allowed to be connected to the database, so do we actually need to bother about the lock for the correctness? \n> I'm just thinking that the hackiness of going around shared_buffers\n> feels irreducible, but maybe the hackiness in the patch is something\n> that can be solved with more engineering.\n\nWhich bypassing of shared buffers are you talking about here? We'd still\nhave to solve a subset of the issues around locking (at least on the\nsource side), but I don't think we need to read pg_class contents to be\nable to go through shared_buffers? 
As I suggested, we can use the init\nfork presence to infer relpersistence?I believe we want to avoid scanning pg_class for identifying the relation list so that we can avoid this special-purpose code?  IMHO, scanning the disk, such as going through all the tablespaces and then finding the source database directory and identifying each relfilenode, also appears to be a special-purpose code, unless I am missing what you mean by special-purpose code.\nOr do you mean that looking at the filesystem at all is bypassing shared\nbuffers? I think we already have such a code in multiple places where we bypass the shared buffers for copying the relation e.g. index_copy_data(), heapam_relation_copy_data().  So the current system as it stands, we can not claim that we are designing something for the first time where we are bypassing the shared buffers.   So if we are thinking that bypassing the shared buffers is hackish and we don't want to use it for the new patches then lets avoid it completely even while identifying the relfilenodes to be copied.-- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 3 Sep 2021 14:25:10 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Sep 2, 2021 at 8:52 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Sep 2, 2021 at 2:06 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > 0003- The main patch for WAL logging the created database operation.\n>\n> Andres pointed out that this approach ends up accessing relations\n> without taking a lock on them. 
It doesn't look like you did anything\n> about that.\n>\n\nI missed that, I have shared my opinion about this in my last email [1]\n\n\n>\n> + /* Built-in oids are mapped directly */\n> + if (classForm->oid < FirstGenbkiObjectId)\n> + relfilenode = classForm->oid;\n> + else if (OidIsValid(classForm->relfilenode))\n> + relfilenode = classForm->relfilenode;\n> + else\n> + continue;\n>\n> Am I missing something, or is this totally busted?\n>\n\nOops, I think the condition should be like below, but I will think\ncarefully before posting the next version if there is something else I am\nmissing.\n\nif (OidIsValid(classForm->relfilenode))\n relfilenode = classForm->relfilenode;\nelse if if (classForm->oid < FirstGenbkiObjectId)\n relfilenode = classForm->oid;\nelse\n continue\n\n\n> /*\n> + * Now drop all buffers holding data of the target database; they should\n> + * no longer be dirty so DropDatabaseBuffers is safe.\n>\n> The way things worked before, this was true, but now AFAICS it's\n> false. I'm not sure whether that means that DropDatabaseBuffers() here\n> is actually unsafe or whether it just means that you haven't updated\n> the comment to explain the reason.\n>\n\nI think DropDatabaseBuffers(), itself is unsafe, we just copied pages using\nbufmgr and dirtied the buffers so dropping buffers is definitely unsafe\nhere.\n\n\n> + * Since we copy the file directly without looking at the shared buffers,\n> + * we'd better first flush out any pages of the source relation that are\n> + * in shared buffers. We assume no new changes will be made while we are\n> + * holding exclusive lock on the rel.\n>\n> Ditto.\n>\n\nYeah this comment no longer makes sense now.\n\n\n>\n> + /* As always, WAL must hit the disk before the data update does. */\n>\n> Actually, the way it's coded now, part of the on-disk changes are done\n> before WAL is issued, and part are done after. I doubt that's the\n> right idea.\n\nThere's nothing special about writing the actual payload\n> bytes vs. 
the other on-disk changes (creating directories and files).\n> In any case the ordering deserves a better-considered comment than\n> this one.\n>\n\nAgreed to all, but In general, I think WAL hitting the disk before data is\nmore applicable for the shared buffers, where we want to flush the WAL\nfirst before writing the shared buffer so that in case of torn page we have\nan option to recover the page from previous FPI. But in such cases where we\nare creating a directory or file there is no such requirement. Anyways, I\nagreed with the comments that it should be more uniform and the comment\nshould be correct.\n\n+ XLogRegisterData((char *) PG_MAJORVERSION, nbytes);\n>\n> Surely this is utterly pointless.\n>\n\nYeah it is. During the WAL replay also we know the PG_MAJORVERSION :)\n\n\n> + CopyDatabase(src_dboid, dboid, src_deftablespace, dst_deftablespace);\n> PG_END_ENSURE_ERROR_CLEANUP(createdb_failure_callback,\n> PointerGetDatum(&fparms));\n>\n> I'd leave braces around the code for which we're ensuring error\n> cleanup even if it's just one line.\n>\n\nOkay\n\n\n> + if (info == XLOG_DBASEDIR_CREATE)\n> {\n> xl_dbase_create_rec *xlrec = (xl_dbase_create_rec *)\n> XLogRecGetData(record);\n>\n> Seems odd to rename the record but not change the name of the struct.\n> I think I would be inclined to keep the existing record name, but if\n> we're going to change it we should be more thorough.\n>\n\nRight, I think we can leave the record name as it is.\n\n[1]\nhttps://www.postgresql.org/message-id/CAFiTN-sP_6hWv5AxcwnWCgg%3D4hyEeeZcCgFucZsYWpr3XQbP1g%40mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n", "msg_date": "Fri, 3 Sep 2021 15:53:17 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Sep 3, 2021 at 6:23 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>> + /* Built-in oids are mapped directly */\n>> + if (classForm->oid < FirstGenbkiObjectId)\n>> + relfilenode = classForm->oid;\n>> + else if (OidIsValid(classForm->relfilenode))\n>> + relfilenode = classForm->relfilenode;\n>> + else\n>> + continue;\n>>\n>> Am I missing something, or is this totally busted?\n>\n> Oops, I think the condition should be like below, but I will think carefully before posting the next version if there is something else I am missing.\n>\n> if (OidIsValid(classForm->relfilenode))\n> relfilenode = classForm->relfilenode;\n> else if if (classForm->oid < FirstGenbkiObjectId)\n> relfilenode = classForm->oid;\n> else\n> continue\n\nWhat about mapped rels that have been rewritten at some point?\n\n> Agreed to all, but In general, I think WAL hitting the disk before data is more applicable for the shared buffers, where we want to flush the WAL first before writing the shared buffer so that in case of torn page we have an option to recover the page from previous FPI. But in such cases where we are creating a directory or file there is no such requirement. Anyways, I agreed with the comments that it should be more uniform and the comment should be correct.\n\nThere have been previous debates about whether WAL records for\nfilesystem operations should be issued before or after those\noperations are performed. I'm not sure how easy those discussion are\nto find in the archives, but it's very relevant here. 
I think the\nshort version is - if we write a WAL record first and then the\noperation fails afterward, we have to PANIC. But if we perform the\noperation first and then write the WAL record if it succeeds, then we\ncould crash before writing WAL and end up out of sync with our\nstandbys. If we then later do any WAL-logged operation locally that\ndepends on that operation having been performed, replay will fail on\nthe standby. There used to be, or maybe still are, comments in the\ncode defending the latter approach, but more recently it's been\nstrongly criticized. The thinking, AIUI, is basically that filesystem\noperations really ought not to fail, because nobody should be doing\nweird things to the data directory, and if they do, panicking is OK.\nBut having replay fail in strange ways on the standby later is not OK.\n\nI'm not sure if everyone agrees with that logic; it seems somewhat\ndebatable. I *think* I personally agree with it but ... I'm not even\n100% sure about that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Sep 2021 10:37:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2021-09-03 14:25:10 +0530, Dilip Kumar wrote:\n> Yeah, we can surely lock the relation as described by Robert, but IMHO,\n> while creating the database we are already holding the exclusive lock on\n> the database and there is no one else allowed to be connected to the\n> database, so do we actually need to bother about the lock for the\n> correctness?\n\nThe problem is that checkpointer, bgwriter, buffer reclaim don't care about\nthe database of the buffer they're working on... The exclusive lock on the\ndatabase doesn't change anything about that. 
Perhaps you can justify it's safe\nbecause there can't be any dirty buffers or such though.\n\n\n> I think we already have such a code in multiple places where we bypass the\n> shared buffers for copying the relation\n> e.g. index_copy_data(), heapam_relation_copy_data().\n\nThat's not at all comparable. We hold an exclusive lock on the relation at\nthat point, and we don't have a separate implementation of reading tuples from\nthe table or something like that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 3 Sep 2021 14:54:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Sat, Sep 4, 2021 at 3:24 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2021-09-03 14:25:10 +0530, Dilip Kumar wrote:\n> > Yeah, we can surely lock the relation as described by Robert, but IMHO,\n> > while creating the database we are already holding the exclusive lock on\n> > the database and there is no one else allowed to be connected to the\n> > database, so do we actually need to bother about the lock for the\n> > correctness?\n>\n> The problem is that checkpointer, bgwriter, buffer reclaim don't care about\n> the database of the buffer they're working on... The exclusive lock on the\n> database doesn't change anything about that.\n\n\nBut these directly operate on the buffers and In my patch, whether we are\nreading the pg_class for identifying the relfilenode or we are copying the\nrelation block by block we are always holding the lock on the buffer.\n\n\n> Perhaps you can justify it's safe\n> because there can't be any dirty buffers or such though.\n>\n>\n> > I think we already have such a code in multiple places where we bypass\n> the\n> > shared buffers for copying the relation\n> > e.g. index_copy_data(), heapam_relation_copy_data().\n>\n> That's not at all comparable. 
We hold an exclusive lock on the relation at\n> that point, and we don't have a separate implementation of reading tuples\n> from\n> the table or something like that.\n>\n\nOkay, but my example was against the point Robert raised that he feels that\nbypassing the shared buffer anywhere is hackish. But yeah, I agree his\npoint might be that even if we are using it in existing code we can not\njustify it.\n\nFor moving forward I think the main open concerns we have as of now are\n\n1. Special purpose code of scanning pg_class, so that we can solve it by\nscanning the source database directory, I think Robert doesn't like this\napproach because we are directly scanning to directory and bypassing the\nshared buffers? But this is not any worse than what we have now right? I\nmean now also we are scanning the directory directly, so only change will\nbe instead of copying files directly we will read file and copy block by\nblock.\n\n2. Another problem is, while copying the relation we are accessing the\nrelation buffers but we are not holding the relation lock, but we are\nalready holding the buffer so I am not sure do we really have a problem\nhere w.r.t checkpointer, bgwriter? But if we have the problem then also we\ncan create the lock tag and acquire the relation lock.\n\n3. While copying the relation whether to use the bufmgr or directly use the\nsmgr?\n\nIf we use the bufmgr then maybe we can avoid flushing some of the buffers\nto the disk and save some I/O but in general we copy from the template\ndatabase so there might not be a lot of dirty buffers and we might not save\nanything, OTOH, if we directly use the smgr for copying the relation data\nwe can reuse some existing code RelationCopyStorage() and the patch will be\nsimpler. 
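To make #1 a bit more concrete: picking the relfilenodes out of a database directory is mostly filename parsing. A toy sketch in plain C (hypothetical helper name, not code from any patch version) could treat names such as "16384", "16384.1" and "16384_init" the way the on-disk layout names them, and skip everything else (pg_filenode.map, PG_VERSION, ...):

```c
#include <assert.h>
#include <ctype.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/*
 * Toy parser for file names in a database directory:
 * "<relfilenode>", "<relfilenode>.<segno>", or "<relfilenode>_<fork>"
 * where fork is "fsm", "vm" or "init".  Hypothetical helper, only to
 * illustrate the directory-scan idea in #1.
 */
static bool
parse_relfilenode(const char *name, unsigned long *relfilenode,
				  const char **fork)
{
	char	   *end;

	if (!isdigit((unsigned char) name[0]))
		return false;			/* pg_filenode.map, PG_VERSION, etc. */

	*relfilenode = strtoul(name, &end, 10);
	*fork = NULL;

	if (*end == '\0')
		return true;			/* main fork, first segment */
	if (*end == '.')
		return true;			/* an additional 1GB segment */
	if (*end == '_')
	{
		*fork = end + 1;		/* "fsm", "vm" or "init" */
		return true;
	}
	return false;
}
```

Noticing an "init" fork for a relfilenode is also what would let the scan remember the persistence level, since only unlogged relations have one.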
Other than just code simplicity or IO there is also a concern by\nRobert that he doesn't like to bypass the bufmgr, and that will be\napplicable to the point #1 as well as #3.\n\nThoughts?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 5 Sep 2021 14:22:51 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On 2021-09-05 14:22:51 +0530, Dilip Kumar wrote:\n> On Sat, Sep 4, 2021 at 3:24 AM Andres Freund <andres@anarazel.de> wrote:\n> \n> > Hi,\n> >\n> > On 2021-09-03 14:25:10 +0530, Dilip Kumar wrote:\n> > > Yeah, we can surely lock the relation as described by Robert, but IMHO,\n> > > while creating the database we are already holding the exclusive lock on\n> > > the database and there is no one else allowed to be connected to the\n> > > database, so do we actually need to bother about the lock for the\n> > > correctness?\n> >\n> > The problem is that checkpointer, bgwriter, buffer reclaim don't care about\n> > the database of the buffer they're working on... The exclusive lock on the\n> > database doesn't change anything about that.\n> \n> \n> But these directly operate on the buffers and In my patch, whether we are\n> reading the pg_class for identifying the relfilenode or we are copying the\n> relation block by block we are always holding the lock on the buffer.\n\nI don't think a buffer lock is really sufficient. See e.g. code like:\n\nstatic void\nInvalidateBuffer(BufferDesc *buf)\n{\n...\n\t/*\n\t * We assume the only reason for it to be pinned is that someone else is\n\t * flushing the page out. Wait for them to finish. (This could be an\n\t * infinite loop if the refcount is messed up... it would be nice to time\n\t * out after awhile, but there seems no way to be sure how many loops may\n\t * be needed. 
Note that if the other guy has pinned the buffer but not\n\t * yet done StartBufferIO, WaitIO will fall through and we'll effectively\n\t * be busy-looping here.)\n\t */\n\tif (BUF_STATE_GET_REFCOUNT(buf_state) != 0)\n\t{\n\t\tUnlockBufHdr(buf, buf_state);\n\t\tLWLockRelease(oldPartitionLock);\n\t\t/* safety check: should definitely not be our *own* pin */\n\t\tif (GetPrivateRefCount(BufferDescriptorGetBuffer(buf)) > 0)\n\t\t\telog(ERROR, \"buffer is pinned in InvalidateBuffer\");\n\t\tWaitIO(buf);\n\t\tgoto retry;\n\t}\n\nIOW, currently we assume that you're only allowed to pin a block in a relation\nwhile you hold a lock on the relation. It might be a good idea to change that,\nbut it's not as trivial as one might think - consider e.g. dropping a\nrelation's buffers while holding an exclusive lock: If there's potential\nconcurrent reads of that buffer we'd be in trouble.\n\n\n> 3. While copying the relation whether to use the bufmgr or directly use the\n> smgr?\n> \n> If we use the bufmgr then maybe we can avoid flushing some of the buffers\n> to the disk and save some I/O but in general we copy from the template\n> database so there might not be a lot of dirty buffers and we might not save\n> anything\n\nI would assume the big benefit would be that the *target* database does not\nhave to be written out / shared buffer is immediately populated.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 5 Sep 2021 13:28:00 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Sep 6, 2021 at 1:58 AM Andres Freund <andres@anarazel.de> wrote:\n\n> On 2021-09-05 14:22:51 +0530, Dilip Kumar wrote:\n\n> But these directly operate on the buffers and In my patch, whether we are\n> > reading the pg_class for identifying the relfilenode or we are copying\n> the\n> > relation block by block we are always holding the lock on the buffer.\n>\n> 
I don't think a buffer lock is really sufficient. See e.g. code like:\n>\n\nI agree that the only buffer lock is not sufficient, but here we are\ntalking about the case where we are already holding the exclusive lock on\nthe database + the buffer lock. So the cases like below which should be\ncalled only from the drop relation must be protected by the database\nexclusive lock and the other example like buffer reclaim/checkpointer\nshould be protected by the buffer pin + lock. Having said that, I am not\nagainst the point that we should not acquire the relation lock in our\ncase. I agree that if there is an assumption that for holding the buffer\npin we must be holding the relation lock then better not to break that.\n\n\nstatic void\n> InvalidateBuffer(BufferDesc *buf)\n> {\n> ...\n> /*\n> * We assume the only reason for it to be pinned is that someone\n> else is\n> * flushing the page out. Wait for them to finish. (This could\n> be an\n> * infinite loop if the refcount is messed up... it would be nice\n> to time\n> * out after awhile, but there seems no way to be sure how many\n> loops may\n> * be needed. Note that if the other guy has pinned the buffer\n> but not\n> * yet done StartBufferIO, WaitIO will fall through and we'll\n> effectively\n> * be busy-looping here.)\n> */\n> if (BUF_STATE_GET_REFCOUNT(buf_state) != 0)\n> {\n> UnlockBufHdr(buf, buf_state);\n> LWLockRelease(oldPartitionLock);\n> /* safety check: should definitely not be our *own* pin */\n> if (GetPrivateRefCount(BufferDescriptorGetBuffer(buf)) > 0)\n> elog(ERROR, \"buffer is pinned in\n> InvalidateBuffer\");\n> WaitIO(buf);\n> goto retry;\n> }\n>\n> IOW, currently we assume that you're only allowed to pin a block in a\n> relation\n> while you hold a lock on the relation. It might be a good idea to change\n> that,\n> but it's not as trivial as one might think - consider e.g. 
dropping a\n> relation's buffers while holding an exclusive lock: If there's potential\n> concurrent reads of that buffer we'd be in trouble.\n>\n\n> > 3. While copying the relation whether to use the bufmgr or directly use\n> the\n> > smgr?\n> >\n> > If we use the bufmgr then maybe we can avoid flushing some of the buffers\n> > to the disk and save some I/O but in general we copy from the template\n> > database so there might not be a lot of dirty buffers and we might not\n> save\n> > anything\n>\n> I would assume the big benefit would be that the *target* database does not\n> have to be written out / shared buffer is immediately populated.\n>\n\nOkay, that makes sense. Infact for using the shared buffers for the\ndestination database's relation we don't even have the locking issue,\nbecause that database is not yet accessible to anyone right?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 6 Sep 2021 11:17:09 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Sep 6, 2021 at 11:17 AM Dilip Kumar <dilipbalaut@gmail.com> wrote\n\n>\n> Okay, that makes sense. Infact for using the shared buffers for the\n> destination database's relation we don't even have the locking issue,\n> because that database is not yet accessible to anyone right?\n>\n\nBased on all these discussions I am planning to change the design as below,\n\n- FlushDatabaseBuffers().\n\n- Scan the database directory under each tablespace and prepare a\ntablespace-wise relfilenode list, along with this we will remember the\npersistent level as well based on the presence of INITFORK.\n\n- Next, copy each relfilenode to the destination, while copying for the\nsource relation directly use the smgrread whereas for the destination\nrelation use bufmgr. 
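As a rough sketch of that per-relfilenode copy loop, with plain stdio standing in for an smgrread-style source read and for the destination write path (hypothetical names only, not the actual patch; the real destination write would go through the bufmgr):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define BLCKSZ 8192

/*
 * Toy block-by-block copy: read the source relation file directly,
 * one BLCKSZ page at a time, and hand each block to the destination.
 * Returns the number of blocks copied, or -1 on a write failure.
 * Only a sketch of the control flow, not real smgr/bufmgr code.
 */
static long
copy_relation_blocks(FILE *src, FILE *dst)
{
	char		buf[BLCKSZ];
	size_t		nread;
	long		nblocks = 0;

	while ((nread = fread(buf, 1, BLCKSZ, src)) > 0)
	{
		/* a short final read would mean a truncated file; the toy
		 * version just forwards whatever it got */
		if (fwrite(buf, 1, nread, dst) != nread)
			return -1;
		nblocks++;
	}
	return nblocks;
}
```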
The main reasons for not using the bufmgr for the\nsource relations are a) We can avoid acquiring a special-purpose lock on\nthe relation b) We are copying from the template database so in most of the\ncases there might not be many dirty buffers for that database so there is\nno real need for using the shared buffers.\n\nAny objections to the above design?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 6 Sep 2021 14:29:38 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Sep 3, 2021 at 5:54 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think we already have such a code in multiple places where we bypass the\n> > shared buffers for copying the relation\n> > e.g. index_copy_data(), heapam_relation_copy_data().\n>\n> That's not at all comparable. We hold an exclusive lock on the relation at\n> that point, and we don't have a separate implementation of reading tuples from\n> the table or something like that.\n\nI don't think there's a way to do this that is perfectly clean, so the\ndiscussion here is really about finding the least unpleasant\nalternative. I *really* like the idea of using pg_class to figure out\nwhat relations to copy. As far as I'm concerned, pg_class is the\ncanonical list of what's in the database, and to the extent that the\nfilesystem happens to agree, that's good luck. From that perspective,\nusing the filesystem to figure out what to copy is by definition a\nhack.\n\nNow, having to use dedicated tuple-reading code is also a hack, but to\nme that's largely an accident of questionable design decisions\nelsewhere. You can't read a buffer with just the minimal amount of\ninformation that you need to read a buffer; you have to have a\nrelcache entry, so we have things like ReadBufferWithoutRelcache and\nCreateFakeRelcacheEntry. 
It's a little crazy to me that someone saw\nthat ReadBuffer() needed a thing which some callers might not have and\ninstead of saying \"hmm, maybe we ought to change the arguments so that\nanyone with enough information to call this function can do so,\" they\nsaid \"hmm, let's create a fake object that is not really the same as a\nreal one but good enough to fool the function into doing the right\nthing, probably.\" I think the code layering here is just flat-out\nbroken and ought to be fixed. A layer whose job it is to read and\nwrite blocks should not know that relations are even a thing. (The\nwidespread use of global variables in the relcache code, the catcache\ncode, and many other places in lieu of explicit parameter-passing just\nmakes everything a lot worse.)\n\nSo I think if we commit to the hackiness of the sort that this patch\nintroduces, there is some hope of things getting better in the future.\nI don't think it's a real easy path forward, but maybe it's possible.\nIf on the other hand we commit to using the filesystem, I don't see\nhow it ever gets any better. Unlogged tables are a great example of a\nfeature that depended on the filesystem and it now seems to me to be -\nby far - the worst thing about that feature. I have no idea how to get\nrid of that dependency or all of the associated problems without\nreverting the feature. But in this case, we seem to have another\noption, and so I think we should take it.\n\nYour (or other people's mileage) may vary ... 
this is just my view of it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Sep 2021 12:24:29 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Sep 8, 2021 at 9:54 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Sep 3, 2021 at 5:54 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I think we already have such a code in multiple places where we bypass\n> the\n> > > shared buffers for copying the relation\n> > > e.g. index_copy_data(), heapam_relation_copy_data().\n> >\n> > That's not at all comparable. We hold an exclusive lock on the relation\n> at\n> > that point, and we don't have a separate implementation of reading\n> tuples from\n> > the table or something like that.\n>\n> I don't think there's a way to do this that is perfectly clean, so the\n> discussion here is really about finding the least unpleasant\n> alternative. I *really* like the idea of using pg_class to figure out\n> what relations to copy. As far as I'm concerned, pg_class is the\n> canonical list of what's in the database, and to the extent that the\n> filesystem happens to agree, that's good luck. 
From that perspective,\n> using the filesystem to figure out what to copy is by definition a\n> hack.\n>\n\nI agree with you, even though I think that scanning pg_class for\nidentifying the relfilenode looks like a more sensible thing to do than\ndirectly scanning the file system, we need to consider one point that, now\nalso in current system (in create database) we are scanning the directory\nfor copying the file so instead of copying them directly we need to\nlogically identify the relfilenode and then copy it block by block, so\nmaybe this approach will not make anyone unhappy because it is not any\nworse than the current system.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 11 Sep 2021 09:47:00 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Sat, Sep 11, 2021 at 12:17 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I agree with you, even though I think that scanning pg_class for identifying the relfilenode looks like a more sensible thing to do than directly scanning the file system, we need to consider one point that, now also in current system (in create database) we are scanning the directory for copying the file so instead of copying them directly we need to logically identify the relfilenode and then copy it block by block, so maybe this approach will not make anyone unhappy because it is not any worse than the current system.\n\nSo, I agree. If we can't get agreement on this approach, then we can\ndo that, and as you say, it's no worse than what we are doing now. 
But\nI am just trying to lay out my view of why I think that's not as good\nas this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Sep 2021 12:14:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Sep 2, 2021 at 8:52 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\nPFA, updated version of the patch, where I have fixed the issues\nreported by you and also done some more refactoring and patch split,\nnext I am planning to post the patch with another approach where we\nscan the directory instead of scanning the pg_class for identifying\nthe relfilenodes. For specific comments please find my response\ninline,\n\n\n> Andres pointed out that this approach ends up accessing relations\n> without taking a lock on them. It doesn't look like you did anything\n> about that.\n\nNow I have acquired a lock before scanning the pg_class as well as\nother relfilenode.\n\n>\n> + /* Built-in oids are mapped directly */\n> + if (classForm->oid < FirstGenbkiObjectId)\n> + relfilenode = classForm->oid;\n> + else if (OidIsValid(classForm->relfilenode))\n> + relfilenode = classForm->relfilenode;\n> + else\n> + continue;\n>\n> Am I missing something, or is this totally busted?\n\nHandled the mapped relation using relmapper.\n\n> /*\n> + * Now drop all buffers holding data of the target database; they should\n> + * no longer be dirty so DropDatabaseBuffers is safe.\n>\n> The way things worked before, this was true, but now AFAICS it's\n> false. 
I'm not sure whether that means that DropDatabaseBuffers() here\n> is actually unsafe or whether it just means that you haven't updated\n> the comment to explain the reason.\n\nNow we can only drop the buffer specific to old tablespace not the new\ntablespace so can not directly use the dboid, so extended the\nDropDatabaseBuffers interface to take tablespace oid as and input and\nupdated the comments accordingly.\n\n> + * Since we copy the file directly without looking at the shared buffers,\n> + * we'd better first flush out any pages of the source relation that are\n> + * in shared buffers. We assume no new changes will be made while we are\n> + * holding exclusive lock on the rel.\n>\n> Ditto.\n\nI think these comments is related to index_copy_data() and this is\nstill valid, it is showing in the patch due to some refactoring so I\nhave separated out this refactoring patch as 0003 to avoid confusion.\n\n>\n> + /* As always, WAL must hit the disk before the data update does. */\n>\n> Actually, the way it's coded now, part of the on-disk changes are done\n> before WAL is issued, and part are done after. I doubt that's the\n> right idea. There's nothing special about writing the actual payload\n> bytes vs. 
the other on-disk changes (creating directories and files).\n> In any case the ordering deserves a better-considered comment than\n> this one.\n\nChanged, now WAL first and then disk change.\n\n\nOpen question:\n- Scan pg_class vs scan directories\n- Whether to retain the old created database mechanism as option or not.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 27 Sep 2021 12:23:23 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Sep 27, 2021 at 12:23 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n>\n> Open question:\n> - Scan pg_class vs scan directories\n> - Whether to retain the old created database mechanism as option or not.\n\nI have done some code improvement in 0001 and 0002.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 4 Oct 2021 14:51:22 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Oct 4, 2021 at 2:51 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\nI have implemented the patch with approach2 as well, i.e. instead of\nscanning the pg-class, we scan the directory.\n\nIMHO, we have already discussed most of the advantages and\ndisadvantages of both approaches so I don't want to mention those\nagain. 
But I have noticed one more issue with the approach2,\nbasically, if we scan the directory then we don't have any way to\nidentify the relation-OID and that is required in order to acquire the\nrelation lock before copying it, right?\n\nPatch details:\n0001 to 0006 implements an approach1\n0007 removes the code of pg_class scanning and adds the directory scan.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 5 Oct 2021 13:36:56 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nI've looked over this patch set and email thread a couple times, and I\ndon't see anything amiss, but I'm also not terribly familiar with the\nsubsystems this part of the code relies on. I haven't yet tried to stress\ntest with a large database, but it seems like a good idea to do so.\n\nI have a couple comments and questions:\n\n0006:\n\n+ * XXX We can optimize RelationMapOidToFileenodeForDatabase API\n+ * so that instead of reading the relmap file every time, it can\n+ * save it in a temporary variable and use it for subsequent\n+ * calls. Then later reset it once we're done or at the\n+ * transaction end.\n\nDo we really need to consider optimizing here? Only a handful of relations\nwill be found in the relmap, right?\n\n+ * Once we start copying files from the source database, we need to be able\n+ * to clean 'em up if we fail. Use an ENSURE block to make sure this\n+ * happens. (This is not a 100% solution, because of the possibility of\n+ * failure during transaction commit after we leave this routine, but it\n+ * should handle most scenarios.)\n\nThis comment in master started with\n\n- * Once we start copying subdirectories, we need to be able to clean 'em\n\nIs the distinction important enough to change this comment? Also, is \"most\nscenarios\" still true with the patch? 
I haven't read into how ENSURE works.\n\nSame with this comment change, seems fine the way it was:\n\n- * Use an ENSURE block to make sure we remove the debris if the copy fails\n- * (eg, due to out-of-disk-space).  This is not a 100% solution, because\n- * of the possibility of failure during transaction commit, but it should\n- * handle most scenarios.\n+ * Use an ENSURE block to make sure we remove the debris if the copy fails.\n+ * This is not a 100% solution, because of the possibility of failure\n+ * during transaction commit, but it should handle most scenarios.\n\nAnd do we need additional tests? Maybe we don't, but it seems good to make\nsure.\n\nI haven't looked at 0007, and I have no opinion on which approach is better.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 23 Nov 2021 12:59:00 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Nov 23, 2021 at 10:29 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> I've looked over this patch set and email thread a couple times, and I don't see anything amiss, but I'm also not terribly familiar with the subsystems this part of the code relies on. I haven't yet tried to stress test with a large database, but it seems like a good idea to do so.\n\nThanks, John for looking into the patches.
Yeah, that makes sense,\nnext week I will try to test with a large database and maybe with\nmultiple tablespaces as well to see how this behaves.\n\n> I have a couple comments and questions:\n>\n> 0006:\n>\n> + * XXX We can optimize RelationMapOidToFileenodeForDatabase API\n> + * so that instead of reading the relmap file every time, it can\n> + * save it in a temporary variable and use it for subsequent\n> + * calls. Then later reset it once we're done or at the\n> + * transaction end.\n>\n> Do we really need to consider optimizing here? Only a handful of relations will be found in the relmap, right?\n\nYou are right, it is actually not required I will remove this comment.\n\n>\n> + * Once we start copying files from the source database, we need to be able\n> + * to clean 'em up if we fail. Use an ENSURE block to make sure this\n> + * happens. (This is not a 100% solution, because of the possibility of\n> + * failure during transaction commit after we leave this routine, but it\n> + * should handle most scenarios.)\n>\n> This comment in master started with\n>\n> - * Once we start copying subdirectories, we need to be able to clean 'em\n>\n> Is the distinction important enough to change this comment? Also, is \"most scenarios\" still true with the patch? I haven't read into how ENSURE works.\n\nActually, it is like PG_TRY(), CATCH() block with extra assurance to\ncleanup on shm_exit as well. And in the cleanup function, we go\nthrough all the tablespaces and remove the new DB-related directory\nwhich we are trying to create. And you are right, we actually don't\nneed to change the comments.\n\n> Same with this comment change, seems fine the way it was:\n\nCorrect.\n\n> - * Use an ENSURE block to make sure we remove the debris if the copy fails\n> - * (eg, due to out-of-disk-space). 
This is not a 100% solution, because\n> - * of the possibility of failure during transaction commit, but it should\n> - * handle most scenarios.\n> + * Use an ENSURE block to make sure we remove the debris if the copy fails.\n> + * This is not a 100% solution, because of the possibility of failure\n> + * during transaction commit, but it should handle most scenarios.\n>\n> And do we need additional tests? Maybe we don't, but it seems good to make sure.\n>\n> I haven't looked at 0007, and I have no opinion on which approach is better.\n\nOkay, I like approach 6 because of mainly two reasons, 1) it is not\ndirectly scanning the raw file to identify which files to copy so\nseems cleaner to me 2) with 0007 if we directly scan directory we\ndon't know the relation oid, so before acquiring the buffer lock there\nis no way to acquire the relation lock.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Nov 2021 09:49:32 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Oct 5, 2021 at 7:07 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> Patch details:\n> 0001 to 0006 implements an approach1\n> 0007 removes the code of pg_class scanning and adds the directory scan.\n>\n\nI had a scan through the patches, though have not yet actually run any\ntests to try to better gauge their benefit.\nI do have some initial review comments though:\n\n0003\n\nsrc/backend/commands/tablecmds.c\n(1) RelationCopyAllFork()\nIn the following comment:\n\n+/*\n+ * Copy source smgr all fork's data to the destination smgr.\n+ */\n\nShouldn't it say \"smgr relation\"?\nAlso, you could additionally say \", using a specified fork data\ncopying function.\" or something like that, to account for the\nadditional argument.\n\n\n0006\n\nsrc/backend/commands/dbcommands.c\n(1) function prototype location\n\nThe 
following prototype is currently located in the \"non-export\nfunction prototypes\" section of the source file, but it's not static -\nshouldn't it be in dbcommands.h?\n\n+void RelationCopyStorageUsingBuffer(SMgrRelation src, SMgrRelation dst,\n+ ForkNumber forkNum, char relpersistence);\n\n(2) CreateDirAndVersionFile()\nShouldn't the following code:\n\n+ fd = OpenTransientFile(versionfile, O_RDWR | O_CREAT | O_EXCL | PG_BINARY);\n+ if (fd < 0 && errno == EEXIST && isRedo)\n+ fd = OpenTransientFile(versionfile, O_RDWR | PG_BINARY);\n\nactually be:\n\n+ fd = OpenTransientFile(versionfile, O_WRONLY | O_CREAT | O_EXCL | PG_BINARY);\n+ if (fd < 0 && errno == EEXIST && isRedo)\n+ fd = OpenTransientFile(versionfile, O_WRONLY | O_TRUNC | PG_BINARY);\n\nsince we're only writing to that file descriptor and we want to\ntruncate the file if it already exists.\n\nThe current comment says \"... open it in the write mode.\", but should\nsay \"... open it in write mode.\"\n\nAlso, shouldn't you be writing a newline (\\n) after the\nPG_MAJORVERSION ? 
(compare with code in initdb.c)\n\n(3) GetDatabaseRelationList()\nShouldn't:\n\n+ if (PageIsNew(page) || PageIsEmpty(page))\n+ continue;\n\nbe:\n\n+ if (PageIsNew(page) || PageIsEmpty(page))\n+ {\n+ UnlockReleaseBuffer(buf);\n+ continue;\n+ }\n\n?\n\nAlso, in the following code:\n\n+ if (rnodelist == NULL)\n+ rnodelist = list_make1(relinfo);\n+ else\n+ rnodelist = lappend(rnodelist, relinfo);\n\nit should really be \"== NIL\" rather than \"== NULL\".\nBut in any case, that code can just be:\n\n rnodelist = lappend(rnodelist, relinfo);\n\nbecause lappend() will create a list if the first arg is NIL.\n\n(4) RelationCopyStorageUsingBuffer()\n\nIn the function comments, IMO it is better to use \"APIs\" instead of \"apis\".\n\nAlso, better to use \"get\" instead of \"got\" in the following comment:\n\n+ /* If we got a cancel signal during the copy of the data, quit */\n\n\n0007\n\n(I think I prefer the first approach rather than this 2nd approach)\n\nsrc/backend/commands/dbcommands.c\n(1) createdb()\npfree(srcpath) seems to be missing, in the case that CopyDatabase() gets called.\n\n(2) GetRelfileNodeFromFileName()\n%s in sscanf() allows an unbounded read and is considered potentially\ndangerous (allows buffer overflow), especially here where\nFORKNAMECHARS is so small.\n\n+ nmatch = sscanf(filename, \"%u_%s\", &relfilenode, forkname);\n\nhow about using the following instead in this case:\n\n+ nmatch = sscanf(filename, \"%u_%4s\", &relfilenode, forkname);\n\n?\n\n(even if there were > 4 chars after the underscore, it would still\nmatch and InvalidOid would be returned because nmatch==2)\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 25 Nov 2021 18:37:48 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Nov 25, 2021 at 1:07 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Tue, Oct 5, 2021 at 7:07 
PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > Patch details:\n> > 0001 to 0006 implements an approach1\n> > 0007 removes the code of pg_class scanning and adds the directory scan.\n> >\n>\n> I had a scan through the patches, though have not yet actually run any\n> tests to try to better gauge their benefit.\n> I do have some initial review comments though:\n>\n> 0003\n>\n> src/backend/commands/tablecmds.c\n> (1) RelationCopyAllFork()\n> In the following comment:\n>\n> +/*\n> + * Copy source smgr all fork's data to the destination smgr.\n> + */\n>\n> Shouldn't it say \"smgr relation\"?\n> Also, you could additionally say \", using a specified fork data\n> copying function.\" or something like that, to account for the\n> additional argument.\n>\n>\n> 0006\n>\n> src/backend/commands/dbcommands.c\n> (1) function prototype location\n>\n> The following prototype is currently located in the \"non-export\n> function prototypes\" section of the source file, but it's not static -\n> shouldn't it be in dbcommands.h?\n>\n> +void RelationCopyStorageUsingBuffer(SMgrRelation src, SMgrRelation dst,\n> + ForkNumber forkNum, char relpersistence);\n>\n> (2) CreateDirAndVersionFile()\n> Shouldn't the following code:\n>\n> + fd = OpenTransientFile(versionfile, O_RDWR | O_CREAT | O_EXCL | PG_BINARY);\n> + if (fd < 0 && errno == EEXIST && isRedo)\n> + fd = OpenTransientFile(versionfile, O_RDWR | PG_BINARY);\n>\n> actually be:\n>\n> + fd = OpenTransientFile(versionfile, O_WRONLY | O_CREAT | O_EXCL | PG_BINARY);\n> + if (fd < 0 && errno == EEXIST && isRedo)\n> + fd = OpenTransientFile(versionfile, O_WRONLY | O_TRUNC | PG_BINARY);\n>\n> since we're only writing to that file descriptor and we want to\n> truncate the file if it already exists.\n>\n> The current comment says \"... open it in the write mode.\", but should\n> say \"... open it in write mode.\"\n>\n> Also, shouldn't you be writing a newline (\\n) after the\n> PG_MAJORVERSION ? 
(compare with code in initdb.c)\n>\n> (3) GetDatabaseRelationList()\n> Shouldn't:\n>\n> + if (PageIsNew(page) || PageIsEmpty(page))\n> + continue;\n>\n> be:\n>\n> + if (PageIsNew(page) || PageIsEmpty(page))\n> + {\n> + UnlockReleaseBuffer(buf);\n> + continue;\n> + }\n>\n> ?\n>\n> Also, in the following code:\n>\n> + if (rnodelist == NULL)\n> + rnodelist = list_make1(relinfo);\n> + else\n> + rnodelist = lappend(rnodelist, relinfo);\n>\n> it should really be \"== NIL\" rather than \"== NULL\".\n> But in any case, that code can just be:\n>\n> rnodelist = lappend(rnodelist, relinfo);\n>\n> because lappend() will create a list if the first arg is NIL.\n>\n> (4) RelationCopyStorageUsingBuffer()\n>\n> In the function comments, IMO it is better to use \"APIs\" instead of \"apis\".\n>\n> Also, better to use \"get\" instead of \"got\" in the following comment:\n>\n> + /* If we got a cancel signal during the copy of the data, quit */\n\nThanks for the review and many valuable comments, I have fixed all of\nthem except this comment (/* If we got a cancel signal during the copy\nof the data, quit */) because this looks fine to me. 0007, I have\ndropped from the patchset for now. I have also included fixes for\ncomments given by John.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 25 Nov 2021 16:46:54 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Nov 25, 2021 at 10:17 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> Thanks for the review and many valuable comments, I have fixed all of\n> them except this comment (/* If we got a cancel signal during the copy\n> of the data, quit */) because this looks fine to me. 0007, I have\n> dropped from the patchset for now. 
I have also included fixes for\n> comments given by John.\n>\n\nAny progress/results yet on testing against a large database (as\nsuggested by John Naylor) and multiple tablespaces?\n\nThanks for the patch updates.\nI have some additional minor comments:\n\n0002\n\n(1) Tidy patch comment\n\nI suggest minor tidying of the patch comment, as follows:\n\nSupport new interfaces in relmapper, 1) Support copying the\nrelmap file from one database path to another database path.\n2) Like RelationMapOidToFilenode, provide another interface\nwhich does the same but, instead of getting it for the database\nwe are connected to, it will get it for the input database\npath.\n\nThese interfaces are required for the next patch, for supporting\nthe WAL-logged created database.\n\n\n0003\n\nsrc/include/commands/tablecmds.h\n(1) typedef void (*copy_relation_storage) ...\n\nThe new typename \"copy_relation_storage\" needs to be added to\nsrc/tools/pgindent/typedefs.list\n\n\n0006\n\nsrc/backend/commands/dbcommands.c\n(1) CreateDirAndVersionFile\n\nAfter writing to the file, you should probably pfree(buf.data), right?\nActually, I don't think StringInfo (dynamic string allocation) is\nneeded here, since the version string is so short, so why not just use\na local \"char buf[16]\" buffer and snprintf() the\nPG_MAJORVERSION+newline into that?\n\nAlso (as mentioned in my first review) shouldn't the \"O_TRUNC\" flag be\nadditionally specified in the case when OpenTransientFile() is tried\nfor a 2nd time because of errno==EEXIST on the 1st attempt? (otherwise\nif the existing file did contain something you'd end up writing after\nthe existing data in the file).\n\n\nsrc/backend/commands/dbcommands.c\n(2) typedef struct CreateDBRelInfo ... 
CreateDBRelInfo\n\nThe new typename \"CreateDBRelInfo\" needs to be added to\nsrc/tools/pgindent/typedefs.list\n\nsrc/bin/pg_rewind/parsexlog.c\n(3) Include additional header file\n\nIt seems that the following additional header file is not needed to\ncompile the source file:\n\n+#include \"utils/relmapper.h\"\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Wed, 1 Dec 2021 12:57:17 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Nov 25, 2021 at 10:17 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> Thanks for the review and many valuable comments, I have fixed all of\n> them except this comment (/* If we got a cancel signal during the copy\n> of the data, quit */) because this looks fine to me. 0007, I have\n> dropped from the patchset for now. I have also included fixes for\n> comments given by John.\n>\n\nI found the following issue with the patches applied:\n\nA server crash occurs after the following sequence of commands:\n\ncreate tablespace tbsp1 location '<directory>/tbsp1';\ncreate tablespace tbsp2 location '<directory>/tbsp2';\ncreate database test1 tablespace tbsp1;\ncreate database test2 template test1 tablespace tbsp2;\nalter database test2 set tablespace tbsp1;\ncheckpoint;\n\nThe following type of message is seen in the server log:\n\n2021-12-01 16:48:26.623 AEDT [67423] PANIC: could not fsync file\n\"pg_tblspc/16385/PG_15_202111301/16387/3394\": No such file or\ndirectory\n2021-12-01 16:48:27.228 AEDT [67422] LOG: checkpointer process (PID\n67423) was terminated by signal 6: Aborted\n2021-12-01 16:48:27.228 AEDT [67422] LOG: terminating any other\nactive server processes\n2021-12-01 16:48:27.233 AEDT [67422] LOG: all server processes\nterminated; reinitializing\n\nAlso (prior to running the checkpoint command above) I've seen errors\nlike the following when running 
pg_dumpall:\n\npg_dump: error: connection to server on socket \"/tmp/.s.PGSQL.5432\"\nfailed: PANIC: could not open critical system index 2662\npg_dumpall: error: pg_dump failed on database \"test2\", exiting\n\nHopefully the above example will help in tracking down the cause.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Wed, 1 Dec 2021 17:37:45 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Dec 1, 2021 at 12:07 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Thu, Nov 25, 2021 at 10:17 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > Thanks for the review and many valuable comments, I have fixed all of\n> > them except this comment (/* If we got a cancel signal during the copy\n> > of the data, quit */) because this looks fine to me. 0007, I have\n> > dropped from the patchset for now. I have also included fixes for\n> > comments given by John.\n> >\n>\n> I found the following issue with the patches applied:\n>\n> A server crash occurs after the following sequence of commands:\n>\n> create tablespace tbsp1 location '<directory>/tbsp1';\n> create tablespace tbsp2 location '<directory>/tbsp2';\n> create database test1 tablespace tbsp1;\n> create database test2 template test1 tablespace tbsp2;\n> alter database test2 set tablespace tbsp1;\n> checkpoint;\n>\n> The following type of message is seen in the server log:\n>\n> 2021-12-01 16:48:26.623 AEDT [67423] PANIC: could not fsync file\n> \"pg_tblspc/16385/PG_15_202111301/16387/3394\": No such file or\n> directory\n\nThanks a lot for testing this. From the error, it seems like some of\nthe old buffer w.r.t. the previous tablespace is not dropped after the\nmovedb. Actually, we are calling DropDatabaseBuffers() after copying\nto a new tablespace and dropping all the buffers of this database\nw.r.t the old tablespace. 
But seems something is missing, I will\nreproduce this and try to fix it by tomorrow. I will also fix the\nother review comments raised by you in the previous mail.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Dec 2021 18:04:07 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Dec 1, 2021 at 6:04 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> Thanks a lot for testing this. From the error, it seems like some of\n> the old buffer w.r.t. the previous tablespace is not dropped after the\n> movedb. Actually, we are calling DropDatabaseBuffers() after copying\n> to a new tablespace and dropping all the buffers of this database\n> w.r.t the old tablespace. But seems something is missing, I will\n> reproduce this and try to fix it by tomorrow. I will also fix the\n> other review comments raised by you in the previous mail.\n\nOkay, I got the issue, basically we are dropping the database buffers\nbut not unregistering the existing sync request for database buffers\nw.r.t old tablespace. Attached patch fixes that. 
I also had to extend\nForgetDatabaseSyncRequests so that we can delete the sync request of\nthe database for the particular tablespace so added another patch for\nthe same (0006).\n\nI will test the performance scenario next week, which is suggested by John.\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 2 Dec 2021 19:19:50 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "I see that this patch is reducing the database creation time by almost 3-4\ntimes provided that the template database has some user data in it.\nHowever, there are couple of points to be noted:\n\n1) It makes the crash recovery a bit slower than before if the crash has\noccurred after the execution of a create database statement. Moreover, if\nthe template database size is big, it might even generate a lot of WAL\nfiles which the user needs to be aware of.\n\n2) This will put a lot of load on the first checkpoint that will occur\nafter creating the database statement. I will experiment around this to see\nif this has any side effects.\n\n--\n\nFurther, the code changes in the patch looks good. 
I just have few comments:\n\n+void\n+LockRelationId(LockRelId *relid, LOCKMODE lockmode)\n+{\n+ LOCKTAG tag;\n+ LOCALLOCK *locallock;\n+ LockAcquireResult res;\n+\n+ SET_LOCKTAG_RELATION(tag, relid->dbId, relid->relId);\n\nShould there be an assertion statement here to ensure that relid->dbid\nand relid->relid is valid?\n\n--\n\n if (info == XLOG_DBASE_CREATE)\n {\n xl_dbase_create_rec *xlrec = (xl_dbase_create_rec *)\nXLogRecGetData(record);\n- char *src_path;\n- char *dst_path;\n- struct stat st;\n-\n- src_path = GetDatabasePath(xlrec->src_db_id,\nxlrec->src_tablespace_id);\n- dst_path = GetDatabasePath(xlrec->db_id, xlrec->tablespace_id);\n+ char *dbpath;\n\n- /*\n- * Our theory for replaying a CREATE is to forcibly drop the target\n- * subdirectory if present, then re-copy the source data. This may\nbe\n- * more work than needed, but it is simple to implement.\n- */\n- if (stat(dst_path, &st) == 0 && S_ISDIR(st.st_mode))\n- {\n- if (!rmtree(dst_path, true))\n- /* If this failed, copydir() below is going to error. */\n- ereport(WARNING,\n- (errmsg(\"some useless files may be left behind in\nold database directory \\\"%s\\\"\",\n- dst_path)));\n- }\n\nI think this is a significant change and probably needs some kind of\nexplanation/comments as-in why we are just creating a dir and copying the\nversion file when replaying create database operation. Earlier, this meant\nreplaying the complete create database operation, that doesn't seem to be\nthe case now.\n\n--\n\nHave you intentionally skipped pg_internal.init file from being copied to\nthe target database?\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\nOn Thu, Dec 2, 2021 at 7:20 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Wed, Dec 1, 2021 at 6:04 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > Thanks a lot for testing this. From the error, it seems like some of\n> > the old buffer w.r.t. the previous tablespace is not dropped after the\n> > movedb. 
Actually, we are calling DropDatabaseBuffers() after copying\n> > to a new tablespace and dropping all the buffers of this database\n> > w.r.t the old tablespace. But seems something is missing, I will\n> > reproduce this and try to fix it by tomorrow. I will also fix the\n> > other review comments raised by you in the previous mail.\n>\n> Okay, I got the issue, basically we are dropping the database buffers\n> but not unregistering the existing sync request for database buffers\n> w.r.t old tablespace. Attached patch fixes that. I also had to extend\n> ForgetDatabaseSyncRequests so that we can delete the sync request of\n> the database for the particular tablespace so added another patch for\n> the same (0006).\n>\n> I will test the performance scenario next week, which is suggested by John.\n>\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>
", "msg_date": "Fri, 3 Dec 2021 19:38:45 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Dec 3, 2021 at 7:38 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> I see that this patch is reducing the database creation time by almost 3-4 times provided that the template database has some user data in it. However, there are couple of points to be noted:\n\nThanks a lot for looking into the patches.\n>\n> 1) It makes the crash recovery a bit slower than before if the crash has occurred after the execution of a create database statement. 
Moreover, if the template database size is big, it might even generate a lot of WAL files which the user needs to be aware of.\n\nYes it will but actually that is the only correct way to do it, in\ncurrent we are just logging the WAL as copying the source directory to\ndestination directory without really noting down exactly what we\nwanted to copy, so we are force to do the checkpoint right after\ncreate database because in crash recovery we can not actually replay\nthat WAL. Because WAL just say copy the source to destination so it\nis very much possible that at the DO time source directory had some\ndifferent content than the REDO time so this would have created the\ninconsistencies in the crash recovery so to avoid this bug they force\nthe checkpoint so now also if you do force checkpoint then again crash\nrecovery will be equally fast. So I would not say that we have made\ncrash recovery slow but we have removed some bugs and with that now we\ndon't need to force the checkpoint. Also note that in current code\neven with force checkpoint the bug is not completely avoided in all\nthe cases, check below comments from the code[1].\n\n> 2) This will put a lot of load on the first checkpoint that will occur after creating the database statement. 
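[Editor's note] The DO-time versus REDO-time hazard described above can be made concrete with a toy replay model. This is an illustrative Python sketch only, not PostgreSQL code; the record shapes and function names are invented. It shows why a "copy the source directory" WAL record cannot be replayed safely if the template changed before crash recovery, while page-image WAL replays deterministically:

```python
# Toy model of the DO-vs-REDO hazard. A directory-copy record re-reads
# the template at REDO time, so replay diverges if the template changed
# after the CREATE DATABASE. Page-image WAL captures the contents, so
# replay is exact. Invented record formats; not PostgreSQL code.

def log_dircopy_create(template, wal):
    wal.append(("copydb",))            # old style: no page contents logged
    return dict(template)              # database as created at DO time

def log_pagewise_create(template, wal):
    for page_no, page in template.items():
        wal.append(("page", page_no, page))   # new style: full page images
    return dict(template)

def redo(wal, template_at_redo):
    db = {}
    for rec in wal:
        if rec[0] == "copydb":
            db = dict(template_at_redo)       # consults the template again!
        else:
            _, page_no, page = rec
            db[page_no] = page
    return db

template = {0: "aaa", 1: "bbb"}
wal_old, wal_new = [], []
db_old = log_dircopy_create(template, wal_old)
db_new = log_pagewise_create(template, wal_new)

# The template is modified after CREATE DATABASE but before the crash:
template[1] = "modified after CREATE DATABASE"

assert redo(wal_old, template) != db_old   # inconsistent recovery
assert redo(wal_new, template) == db_new   # exact recovery
```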
I will experiment around this to see if this has any side effects.\n\nBut now a checkpoint can happen at its own need and there is no need\nto force a checkpoint like it was before patch.\n\nSo the major goal of this patch is 1) Correctly WAL log the create\ndatabase which is hack in the current system, 2) Avoid force\ncheckpoints, 3) We copy page by page so it will support TDE because if\nthe source and destination database has different encryption then we\ncan reencrypt the page before copying to destination database, which\nis not possible in current system as we are copying directory 4) Now\nthe new database pages will get the latest LSN which is the correct\nthings earlier new database pages were getting copied directly with\nold LSN only.\n\n\n> Further, the code changes in the patch looks good. I just have few comments:\n\nI will look into the other comments and get back to you, thanks.\n\n[1]\n* In PITR replay, the first of these isn't an issue, and the second\n* is only a risk if the CREATE DATABASE and subsequent template\n* database change both occur while a base backup is being taken.\n* There doesn't seem to be much we can do about that except document\n* it as a limitation.\n*\n* Perhaps if we ever implement CREATE DATABASE in a less cheesy way,\n* we can avoid this.\n*/\nRequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_FORCE | CHECKPOINT_WAIT);\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Dec 2021 20:27:51 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Dec 3, 2021 at 8:28 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Fri, Dec 3, 2021 at 7:38 PM Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n> >\n> > I see that this patch is reducing the database creation time by almost\n> 3-4 times provided that the template database has some user data in 
it.\n> However, there are couple of points to be noted:\n>\n> Thanks a lot for looking into the patches.\n> >\n> > 1) It makes the crash recovery a bit slower than before if the crash has\n> occurred after the execution of a create database statement. Moreover, if\n> the template database size is big, it might even generate a lot of WAL\n> files which the user needs to be aware of.\n>\n> Yes it will but actually that is the only correct way to do it, in\n> current we are just logging the WAL as copying the source directory to\n> destination directory without really noting down exactly what we\n> wanted to copy, so we are force to do the checkpoint right after\n> create database because in crash recovery we can not actually replay\n> that WAL. Because WAL just say copy the source to destination so it\n> is very much possible that at the DO time source directory had some\n> different content than the REDO time so this would have created the\n> inconsistencies in the crash recovery so to avoid this bug they force\n> the checkpoint so now also if you do force checkpoint then again crash\n> recovery will be equally fast. So I would not say that we have made\n> crash recovery slow but we have removed some bugs and with that now we\n> don't need to force the checkpoint. Also note that in current code\n> even with force checkpoint the bug is not completely avoided in all\n> the cases, check below comments from the code[1].\n>\n> > 2) This will put a lot of load on the first checkpoint that will occur\n> after creating the database statement. 
I will experiment around this to see\n> if this has any side effects.\n>\n> But now a checkpoint can happen at its own need and there is no need\n> to force a checkpoint like it was before patch.\n>\n> So the major goal of this patch is 1) Correctly WAL log the create\n> database which is hack in the current system, 2) Avoid force\n> checkpoints, 3) We copy page by page so it will support TDE because if\n> the source and destination database has different encryption then we\n> can reencrypt the page before copying to destination database, which\n> is not possible in current system as we are copying directory 4) Now\n> the new database pages will get the latest LSN which is the correct\n> things earlier new database pages were getting copied directly with\n> old LSN only.\n>\n\nOK. Understood, thanks.!\n\n--\nWith Regards,\nAshutosh Sharma.
", "msg_date": "Mon, 6 Dec 2021 09:12:52 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Here are few more review comments:\n\n1) It seems that we are not freeing the memory allocated for buf.data in\nCreateDirAndVersionFile().\n\n--\n\n+ */\n+static void\n+CreateDirAndVersionFile(char *dbpath, Oid dbid, Oid tsid, bool isRedo)\n+{\n\n2) Do we need to pass dbpath here? I mean why not reconstruct it from dbid\nand tsid.\n\n--\n\n3) Not sure if this point has already been discussed, Will we be able to\nrecover the data when wal_level is set to minimal because the following\ncondition would be false with this wal level.\n\n+ use_wal = XLogIsNeeded() &&\n+ (relpersistence == RELPERSISTENCE_PERMANENT || copying_initfork);\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Mon, Dec 6, 2021 at 9:12 AM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\n> On Fri, Dec 3, 2021 at 8:28 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n>> On Fri, Dec 3, 2021 at 7:38 PM Ashutosh Sharma <ashu.coek88@gmail.com>\n>> wrote:\n>> >\n>> > I see that this patch is reducing the database creation time by almost\n>> 3-4 times provided that the template database has some user data in it.\n>> However, there are couple of points to be noted:\n>>\n>> Thanks a lot for looking into the patches.\n>> >\n>> > 1) It makes the crash recovery a bit slower than before if the crash\n>> has occurred after the execution of a create database statement. 
Moreover,\n>> if the template database size is big, it might even generate a lot of WAL\n>> files which the user needs to be aware of.\n>>\n>> Yes it will but actually that is the only correct way to do it, in\n>> current we are just logging the WAL as copying the source directory to\n>> destination directory without really noting down exactly what we\n>> wanted to copy, so we are force to do the checkpoint right after\n>> create database because in crash recovery we can not actually replay\n>> that WAL. Because WAL just say copy the source to destination so it\n>> is very much possible that at the DO time source directory had some\n>> different content than the REDO time so this would have created the\n>> inconsistencies in the crash recovery so to avoid this bug they force\n>> the checkpoint so now also if you do force checkpoint then again crash\n>> recovery will be equally fast. So I would not say that we have made\n>> crash recovery slow but we have removed some bugs and with that now we\n>> don't need to force the checkpoint. Also note that in current code\n>> even with force checkpoint the bug is not completely avoided in all\n>> the cases, check below comments from the code[1].\n>>\n>> > 2) This will put a lot of load on the first checkpoint that will occur\n>> after creating the database statement. 
I will experiment around this to see\n>> if this has any side effects.\n>>\n>> But now a checkpoint can happen at its own need and there is no need\n>> to force a checkpoint like it was before patch.\n>>\n>> So the major goal of this patch is 1) Correctly WAL log the create\n>> database which is hack in the current system, 2) Avoid force\n>> checkpoints, 3) We copy page by page so it will support TDE because if\n>> the source and destination database has different encryption then we\n>> can reencrypt the page before copying to destination database, which\n>> is not possible in current system as we are copying directory 4) Now\n>> the new database pages will get the latest LSN which is the correct\n>> things earlier new database pages were getting copied directly with\n>> old LSN only.\n>>\n>\n> OK. Understood, thanks.!\n>\n> --\n> With Regards,\n> Ashutosh Sharma.\n>
", "msg_date": "Mon, 6 Dec 2021 09:17:39 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Dec 6, 2021 at 9:17 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Here are few more review comments:\n\nThanks for reviewing it.\n\n> 1) It seems that we are not freeing the memory allocated for buf.data in CreateDirAndVersionFile().\n\nYeah this was a problem in v6 but I have fixed in v7, can you check that.\n>\n> + */\n> +static void\n> +CreateDirAndVersionFile(char *dbpath, Oid dbid, Oid tsid, bool isRedo)\n> +{\n>\n> 2) Do we need to pass dbpath here? 
I mean why not reconstruct it from dbid and tsid.\n\nYeah we can do that but I thought computing dbpath has some cost and\nsince the caller already has it why not to pass it.\n\n>\n> 3) Not sure if this point has already been discussed, Will we be able to recover the data when wal_level is set to minimal because the following condition would be false with this wal level.\n>\n> + use_wal = XLogIsNeeded() &&\n> + (relpersistence == RELPERSISTENCE_PERMANENT || copying_initfork);\n>\n\nSince we are creating new relfilenode this is fine, refer \"Skipping\nWAL for New RelFileNode\" in src/backend/access/transam/README\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Dec 2021 09:59:16 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Thank you, Dilip for the quick response. I am okay with the changes done in\nthe v7 patch.\n\nOne last point - If we try to clone a huge database, as expected CREATE\nDATABASE emits a lot of WALs, causing a lot of intermediate checkpoints\nwhich seems to be affecting the performance slightly.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Mon, Dec 6, 2021 at 9:59 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Mon, Dec 6, 2021 at 9:17 AM Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n> >\n> > Here are few more review comments:\n>\n> Thanks for reviewing it.\n>\n> > 1) It seems that we are not freeing the memory allocated for buf.data in\n> CreateDirAndVersionFile().\n>\n> Yeah this was a problem in v6 but I have fixed in v7, can you check that.\n> >\n> > + */\n> > +static void\n> > +CreateDirAndVersionFile(char *dbpath, Oid dbid, Oid tsid, bool isRedo)\n> > +{\n> >\n> > 2) Do we need to pass dbpath here? 
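[Editor's note] The dbpath trade-off in point 2 can be sketched as follows. This is a simplified Python model for illustration only; the real `GetDatabasePath()` in the PostgreSQL C sources also embeds the catalog version directory for non-default tablespaces, and the function names here mirror the patch's C identifiers without reproducing them:

```python
# Simplified model of the dbpath question: the caller has already
# computed dbpath, so passing it down avoids recomputing it from
# (dbid, tsid). Path layout is simplified relative to PostgreSQL's
# real GetDatabasePath().

DEFAULTTABLESPACE_OID = 1663

def get_database_path(dbid, tsid):
    if tsid == DEFAULTTABLESPACE_OID:
        return "base/%u" % dbid
    return "pg_tblspc/%u/%u" % (tsid, dbid)

def create_dir_and_version_file(dbpath):
    # Receives dbpath instead of (dbid, tsid): no second computation here.
    return dbpath + "/PG_VERSION"

dbpath = get_database_path(16384, DEFAULTTABLESPACE_OID)  # computed once by caller
assert create_dir_and_version_file(dbpath) == "base/16384/PG_VERSION"
assert get_database_path(16384, 16500) == "pg_tblspc/16500/16384"
```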
I mean why not reconstruct it from\n> dbid and tsid.\n>\n> Yeah we can do that but I thought computing dbpath has some cost and\n> since the caller already has it why not to pass it.\n>\n> >\n> > 3) Not sure if this point has already been discussed, Will we be able to\n> recover the data when wal_level is set to minimal because the following\n> condition would be false with this wal level.\n> >\n> > + use_wal = XLogIsNeeded() &&\n> > + (relpersistence == RELPERSISTENCE_PERMANENT || copying_initfork);\n> >\n>\n> Since we are creating new relfilenode this is fine, refer \"Skipping\n> WAL for New RelFileNode\" in src/backend/access/transam/README\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>
", "msg_date": "Mon, 6 Dec 2021 19:53:13 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Dec 6, 2021 at 9:23 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> One last point - If we try to clone a huge database, as expected CREATE DATABASE emits a lot of WALs, causing a lot of intermediate checkpoints which seems to be affecting the performance slightly.\n\nYes, I think this needs to be characterized better. If you have a big\nshared buffers setting and a lot of those buffers are dirty and the\ntemplate database is small, all of which is fairly normal, then this\nnew approach should be much quicker. On the other hand, what if the\nsituation is reversed? Perhaps you have a small shared buffers and not\nmuch of it is dirty and the template database is gigantic. Then maybe\nthis new approach will be slower. But right now I think we don't know\nwhere the crossover point is, and I think we should try to figure that\nout.\n\nSo for example, imagine tests with 1GB of shared_buffers, 8GB, and\n64GB. And template databases with sizes of whatever the default is,\n1GB, 10GB, 100GB. 
Repeatedly make 75% of the pages dirty and then\ncreate a new database from one of the templates. And then just measure\nthe performance. Maybe for large databases this approach is just\nreally the pits -- and if your max_wal_size is too small, it\ndefinitely will be. But, I don't know, maybe with reasonable settings\nit's not that bad. Writing everything to disk twice - once to WAL and\nonce to the target directory - has to be more expensive than doing it\nonce. But on the other hand, it's all sequential I/O and the data\npages don't need to be fsync'd, so perhaps the overhead is relatively\nmild. I don't know.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Dec 2021 12:45:48 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Thanks Robert for sharing your thoughts.\n\nOn Mon, Dec 6, 2021 at 11:16 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Dec 6, 2021 at 9:23 AM Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n> > One last point - If we try to clone a huge database, as expected CREATE\n> DATABASE emits a lot of WALs, causing a lot of intermediate checkpoints\n> which seems to be affecting the performance slightly.\n>\n> Yes, I think this needs to be characterized better. If you have a big\n> shared buffers setting and a lot of those buffers are dirty and the\n> template database is small, all of which is fairly normal, then this\n> new approach should be much quicker. On the other hand, what if the\n> situation is reversed? Perhaps you have a small shared buffers and not\n> much of it is dirty and the template database is gigantic. Then maybe\n> this new approach will be slower. 
But right now I think we don't know\n> where the crossover point is, and I think we should try to figure that\n> out.\n>\n\nYes I think so too.\n\n\n>\n> So for example, imagine tests with 1GB of shard_buffers, 8GB, and\n> 64GB. And template databases with sizes of whatever the default is,\n> 1GB, 10GB, 100GB. Repeatedly make 75% of the pages dirty and then\n> create a new database from one of the templates. And then just measure\n> the performance. Maybe for large databases this approach is just\n> really the pits -- and if your max_wal_size is too small, it\n> definitely will be. But, I don't know, maybe with reasonable settings\n> it's not that bad. Writing everything to disk twice - once to WAL and\n> once to the target directory - has to be more expensive than doing it\n> once. But on the other hand, it's all sequential I/O and the data\n> pages don't need to be fsync'd, so perhaps the overhead is relatively\n> mild. I don't know.\n>\n\nSo far, I haven't found much performance overhead with a few gb of data in\nthe template database. It's just a bit with the default settings, perhaps\nsetting a higher value of max_wal_size would reduce this overhead.\n\n--\nWith Regards,\nAshutosh Sharma.
", "msg_date": "Tue, 7 Dec 2021 06:22:34 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Dec 6, 2021 at 7:53 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Thank you, Dilip for the quick response. 
I am okay with the changes done in the v7 patch.\n>\n> One last point - If we try to clone a huge database, as expected CREATE DATABASE emits a lot of WALs, causing a lot of intermediate checkpoints which seems to be affecting the performance slightly.\n\nYeah, that is a valid point because instead of just one WAL for\ncreatedb we will generate WAL for each page in the database, so I\nagree that if the max_wal_size is not enough for those WALs then we\nmight have to pay the cost of multiple checkpoints. However, if we\ncompare it with the current mechanism then now it is a forced\ncheckpoint and there is no way to avoid it whereas with the new\napproach user can set enough max_wal_size and they can avoid it. So\nin other words now the checkpoint is driven by the amount of resource\nwhich is true for any other operation e.g. ALTER TABLE SET TABLESPACE\nso now it is in more sync with the rest of the system, but without the\npatch, it was a special purpose forced checkpoint only for the\ncreatedb.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Dec 2021 13:53:42 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hello Dilip,\n\nWhile testing the v7 patches, I am observing a crash with the below test\ncase.\n\nTest case:\ncreate tablespace tab location '<dir_path>/test_dir';\ncreate tablespace tab1 location '<dir_path>/test_dir1';\ncreate database test tablespace tab;\n\\c test\ncreate table t( a int PRIMARY KEY,b text);\nCREATE OR REPLACE FUNCTION large_val() RETURNS TEXT LANGUAGE SQL AS 'select\narray_agg(md5(g::text))::text from generate_series(1, 256) g';\ninsert into t values (generate_series(1,2000000), large_val());\nalter table t set tablespace tab1 ;\n\\c postgres\ncreate database test1 template test;\nalter database test set tablespace pg_default;\nalter database test set 
tablespace tab;\n\\c test1\nalter table t set tablespace tab;\n\n Logfile says:\n2021-12-08 23:31:58.855 +04 [134252] PANIC: could not fsync file\n\"base/16386/4152\": No such file or directory\n2021-12-08 23:31:59.398 +04 [134251] LOG: checkpointer process (PID\n134252) was terminated by signal 6: Aborted\n\n\nThanks.\n--\nRegards,\nNeha Sharma\n\n\nOn Tue, Dec 7, 2021 at 12:24 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Mon, Dec 6, 2021 at 7:53 PM Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n> >\n> > Thank you, Dilip for the quick response. I am okay with the changes done\n> in the v7 patch.\n> >\n> > One last point - If we try to clone a huge database, as expected CREATE\n> DATABASE emits a lot of WALs, causing a lot of intermediate checkpoints\n> which seems to be affecting the performance slightly.\n>\n> Yeah, that is a valid point because instead of just one WAL for\n> createdb we will generate WAL for each page in the database, so I\n> agree that if the max_wal_size is not enough for those WALs then we\n> might have to pay the cost of multiple checkpoints. However, if we\n> compare it with the current mechanism then now it is a forced\n> checkpoint and there is no way to avoid it whereas with the new\n> approach user can set enough max_wal_size and they can avoid it. So\n> in other words now the checkpoint is driven by the amount of resource\n> which is true for any other operation e.g. 
ALTER TABLE SET TABLESPACE\n> so now it is in more sync with the rest of the system, but without the\n> patch, it was a special purpose forced checkpoint only for the\n> createdb.\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n>\n", "msg_date": "Wed, 8 Dec 2021 23:57:31 +0400", "msg_from": "Neha Sharma <neha.sharma@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Dec 9, 2021 at 6:57 AM Neha Sharma <neha.sharma@enterprisedb.com> wrote:\n>\n> While testing the v7 patches, I am observing a crash with the below test case.\n>\n> Test case:\n> create tablespace tab location '<dir_path>/test_dir';\n> create tablespace tab1 location '<dir_path>/test_dir1';\n> create database test tablespace tab;\n> \\c test\n> create table t( a int PRIMARY KEY,b text);\n> CREATE OR REPLACE FUNCTION large_val() RETURNS TEXT LANGUAGE SQL AS 'select array_agg(md5(g::text))::text from generate_series(1, 256) g';\n> insert into t values (generate_series(1,2000000), large_val());\n> alter table t set tablespace tab1 ;\n> \\c postgres\n> create database test1 template test;\n> alter database test set tablespace pg_default;\n> alter database test set tablespace tab;\n> \\c test1\n> alter table t set tablespace tab;\n>\n>  Logfile says:\n> 2021-12-08 23:31:58.855 +04 [134252] PANIC:  could not fsync file \"base/16386/4152\": No such file or directory\n> 2021-12-08 23:31:59.398 +04 [134251] LOG:  checkpointer process (PID 134252) was terminated by signal 6: Aborted\n>\n\nI tried to reproduce the issue using your test scenario, but I needed\nto reduce the amount 
of inserted data (so reduced 2000000 to 20000)\ndue to disk space.\nI then consistently get an error like the following:\n\npostgres=# alter database test set tablespace pg_default;\nERROR: could not create file\n\"pg_tblspc/16385/PG_15_202111301/16386/36395\": File exists\n\n(this only happens when the patch is used)\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 9 Dec 2021 11:26:18 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Dec 9, 2021 at 4:26 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n\n> On Thu, Dec 9, 2021 at 6:57 AM Neha Sharma <neha.sharma@enterprisedb.com>\n> wrote:\n> >\n> > While testing the v7 patches, I am observing a crash with the below test\n> case.\n> >\n> > Test case:\n> > create tablespace tab location '<dir_path>/test_dir';\n> > create tablespace tab1 location '<dir_path>/test_dir1';\n> > create database test tablespace tab;\n> > \\c test\n> > create table t( a int PRIMARY KEY,b text);\n> > CREATE OR REPLACE FUNCTION large_val() RETURNS TEXT LANGUAGE SQL AS\n> 'select array_agg(md5(g::text))::text from generate_series(1, 256) g';\n> > insert into t values (generate_series(1,2000000), large_val());\n> > alter table t set tablespace tab1 ;\n> > \\c postgres\n> > create database test1 template test;\n> > alter database test set tablespace pg_default;\n> > alter database test set tablespace tab;\n> > \\c test1\n> > alter table t set tablespace tab;\n> >\n> > Logfile says:\n> > 2021-12-08 23:31:58.855 +04 [134252] PANIC: could not fsync file\n> \"base/16386/4152\": No such file or directory\n> > 2021-12-08 23:31:59.398 +04 [134251] LOG: checkpointer process (PID\n> 134252) was terminated by signal 6: Aborted\n> >\n>\n> I tried to reproduce the issue using your test scenario, but I needed\n> to reduce the amount of inserted data (so reduced 2000000 to 20000)\n> due to 
disk space.\n> I then consistently get an error like the following:\n>\n> postgres=# alter database test set tablespace pg_default;\n> ERROR: could not create file\n> \"pg_tblspc/16385/PG_15_202111301/16386/36395\": File exists\n>\n> (this only happens when the patch is used)\n>\n>\nYes, I was also getting this, and moving further we get a crash when we\nalter the table of database test1.\nBelow is the output of the test at my end.\n\npostgres=# create tablespace tab1 location\n'/home/edb/PGsources/postgresql/inst/bin/rep_test1';\nCREATE TABLESPACE\npostgres=# create tablespace tab location\n'/home/edb/PGsources/postgresql/inst/bin/rep_test';\nCREATE TABLESPACE\npostgres=# create database test tablespace tab;\nCREATE DATABASE\npostgres=# \\c test\nYou are now connected to database \"test\" as user \"edb\".\ntest=# create table t( a int PRIMARY KEY,b text);\nCREATE TABLE\ntest=# CREATE OR REPLACE FUNCTION large_val() RETURNS TEXT LANGUAGE SQL AS\n'select array_agg(md5(g::text))::text from generate_series(1, 256) g';\nCREATE FUNCTION\ntest=# insert into t values (generate_series(1,2000000), large_val());\nINSERT 0 2000000\ntest=# alter table t set tablespace tab1 ;\nALTER TABLE\ntest=# \\c postgres\nYou are now connected to database \"postgres\" as user \"edb\".\npostgres=# create database test1 template test;\nCREATE DATABASE\npostgres=# alter database test set tablespace pg_default;\nERROR: could not create file\n\"pg_tblspc/16384/PG_15_202111301/16386/2016395\": File exists\npostgres=# alter database test set tablespace tab;\nALTER DATABASE\npostgres=# \\c test1\nYou are now connected to database \"test1\" as user \"edb\".\ntest1=# alter table t set tablespace tab;\nWARNING: terminating connection because of crash of another server process\nDETAIL: The postmaster has commanded this server process to roll back the\ncurrent transaction and exit, because another server process exited\nabnormally and possibly corrupted shared memory.\nHINT: In a moment you should be 
able to reconnect to the database and\nrepeat your command.\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!?>\n\n>\n> Regards,\n> Greg Nancarrow\n> Fujitsu Australia\n>\n", "msg_date": "Thu, 9 Dec 2021 07:12:59 +0400", "msg_from": "Neha Sharma <neha.sharma@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nThe issue here is that we are trying to create a table that exists inside a\nnon-default tablespace when doing ALTER DATABASE. I think this should be\nskipped otherwise we will come across the error like shown below:\n\nashu@postgres=# alter database test set tablespace pg_default;\nERROR: 58P02: could not create file\n\"pg_tblspc/16385/PG_15_202111301/16386/16390\": File exists\n\nI have taken the above from Neha's test-case.\n\n--\n\nAttached patch fixes this. I am passing a new boolean flag named *movedb*\nto CopyDatabase() so that it could skip the creation of tables existing in\nnon-default tablespace when doing alter database. Alternatively, we can\nalso rename the boolean flag movedb to createdb and pass its value\naccordingly from movedb() or createdb(). 
Either way looks fine to me.\nKindly check the attached patch for the changes.\n\nDilip, Could you please check the attached patch and let me know if it\nlooks fine or not?\n\nNeha, can you please re-run the test-cases with the attached patch.\n\nThanks,\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Thu, Dec 9, 2021 at 8:43 AM Neha Sharma <neha.sharma@enterprisedb.com>\nwrote:\n\n>\n>\n>\n> On Thu, Dec 9, 2021 at 4:26 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n>> On Thu, Dec 9, 2021 at 6:57 AM Neha Sharma <neha.sharma@enterprisedb.com>\n>> wrote:\n>> >\n>> > While testing the v7 patches, I am observing a crash with the below\n>> test case.\n>> >\n>> > Test case:\n>> > create tablespace tab location '<dir_path>/test_dir';\n>> > create tablespace tab1 location '<dir_path>/test_dir1';\n>> > create database test tablespace tab;\n>> > \\c test\n>> > create table t( a int PRIMARY KEY,b text);\n>> > CREATE OR REPLACE FUNCTION large_val() RETURNS TEXT LANGUAGE SQL AS\n>> 'select array_agg(md5(g::text))::text from generate_series(1, 256) g';\n>> > insert into t values (generate_series(1,2000000), large_val());\n>> > alter table t set tablespace tab1 ;\n>> > \\c postgres\n>> > create database test1 template test;\n>> > alter database test set tablespace pg_default;\n>> > alter database test set tablespace tab;\n>> > \\c test1\n>> > alter table t set tablespace tab;\n>> >\n>> > Logfile says:\n>> > 2021-12-08 23:31:58.855 +04 [134252] PANIC: could not fsync file\n>> \"base/16386/4152\": No such file or directory\n>> > 2021-12-08 23:31:59.398 +04 [134251] LOG: checkpointer process (PID\n>> 134252) was terminated by signal 6: Aborted\n>> >\n>>\n>> I tried to reproduce the issue using your test scenario, but I needed\n>> to reduce the amount of inserted data (so reduced 2000000 to 20000)\n>> due to disk space.\n>> I then consistently get an error like the following:\n>>\n>> postgres=# alter database test set tablespace pg_default;\n>> ERROR: could not create file\n>> 
\"pg_tblspc/16385/PG_15_202111301/16386/36395\": File exists\n>>\n>> (this only happens when the patch is used)\n>>\n>>\n> Yes, I was also getting this, and moving further we get a crash when we\n> alter the table of database test1.\n> Below is the output of the test at my end.\n>\n> postgres=# create tablespace tab1 location\n> '/home/edb/PGsources/postgresql/inst/bin/rep_test1';\n> CREATE TABLESPACE\n> postgres=# create tablespace tab location\n> '/home/edb/PGsources/postgresql/inst/bin/rep_test';\n> CREATE TABLESPACE\n> postgres=# create database test tablespace tab;\n> CREATE DATABASE\n> postgres=# \\c test\n> You are now connected to database \"test\" as user \"edb\".\n> test=# create table t( a int PRIMARY KEY,b text);\n> CREATE TABLE\n> test=# CREATE OR REPLACE FUNCTION large_val() RETURNS TEXT LANGUAGE SQL AS\n> 'select array_agg(md5(g::text))::text from generate_series(1, 256) g';\n> CREATE FUNCTION\n> test=# insert into t values (generate_series(1,2000000), large_val());\n> INSERT 0 2000000\n> test=# alter table t set tablespace tab1 ;\n> ALTER TABLE\n> test=# \\c postgres\n> You are now connected to database \"postgres\" as user \"edb\".\n> postgres=# create database test1 template test;\n> CREATE DATABASE\n> postgres=# alter database test set tablespace pg_default;\n> ERROR: could not create file\n> \"pg_tblspc/16384/PG_15_202111301/16386/2016395\": File exists\n> postgres=# alter database test set tablespace tab;\n> ALTER DATABASE\n> postgres=# \\c test1\n> You are now connected to database \"test1\" as user \"edb\".\n> test1=# alter table t set tablespace tab;\n> WARNING: terminating connection because of crash of another server process\n> DETAIL: The postmaster has commanded this server process to roll back the\n> current transaction and exit, because another server process exited\n> abnormally and possibly corrupted shared memory.\n> HINT: In a moment you should be able to reconnect to the database and\n> repeat your command.\n> server closed the 
connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> !?>\n>\n>>\n>> Regards,\n>> Greg Nancarrow\n>> Fujitsu Australia\n>>\n>", "msg_date": "Thu, 9 Dec 2021 12:41:53 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Dec 9, 2021 at 12:42 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi,\n>\n> The issue here is that we are trying to create a table that exists inside a non-default tablespace when doing ALTER DATABASE. I think this should be skipped otherwise we will come across the error like shown below:\n>\n> ashu@postgres=# alter database test set tablespace pg_default;\n> ERROR: 58P02: could not create file \"pg_tblspc/16385/PG_15_202111301/16386/16390\": File exists\n>\n> I have taken the above from Neha's test-case.\n>\n> --\n>\n> Attached patch fixes this. I am passing a new boolean flag named *movedb* to CopyDatabase() so that it could skip the creation of tables existing in non-default tablespace when doing alter database. Alternatively, we can also rename the boolean flag movedb to createdb and pass its value accordingly from movedb() or createdb(). Either way looks fine to me. Kindly check the attached patch for the changes.\n>\n> Dilip, Could you please check the attached patch and let me know if it looks fine or not?\n>\n> Neha, can you please re-run the test-cases with the attached patch.\n\nThanks Ashutosh, yeah I have observed the same, earlier we were\ndirectly copying the whole directory so this was not an issue, now if\nsome tables of the database are already in the destination tablespace\nthen we should skip them while copying. 
I will review your patch and\nmerge into the main patch.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Dec 2021 12:46:24 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Dec 9, 2021 at 11:12 AM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\n> Hi,\n>\n> The issue here is that we are trying to create a table that exists inside\n> a non-default tablespace when doing ALTER DATABASE. I think this should be\n> skipped otherwise we will come across the error like shown below:\n>\n> ashu@postgres=# alter database test set tablespace pg_default;\n> ERROR: 58P02: could not create file\n> \"pg_tblspc/16385/PG_15_202111301/16386/16390\": File exists\n>\n\nThanks Ashutosh for the patch, the mentioned issue has been resolved with\nthe patch.\n\nBut I am still able to reproduce the crash consistently on top of this\npatch + v7 patches, just the test case has been modified.\n\ncreate tablespace tab1 location '<dir_path>/test1';\ncreate tablespace tab location '<dir_path>/test';\ncreate database test tablespace tab;\n\\c test\ncreate table t( a int PRIMARY KEY,b text);\nCREATE OR REPLACE FUNCTION large_val() RETURNS TEXT LANGUAGE SQL AS 'select\narray_agg(md5(g::text))::text from generate_series(1, 256) g';\ninsert into t values (generate_series(1,100000), large_val());\nalter table t set tablespace tab1 ;\n\\c postgres\ncreate database test1 template test;\n\\c test1\nalter table t set tablespace tab;\n\\c postgres\nalter database test1 set tablespace tab1;\n\n--Cancel the below command after few seconds\nalter database test1 set tablespace pg_default;\n\n\\c test1\nalter table t set tablespace tab1;\n\n\nLogfile Snippet:\n2021-12-09 17:49:18.110 +04 [18151] PANIC: could not fsync file\n\"base/116398/116400\": No such file or directory\n2021-12-09 17:49:19.105 +04 [18150] LOG: 
checkpointer process (PID 18151)\nwas terminated by signal 6: Aborted\n2021-12-09 17:49:19.105 +04 [18150] LOG: terminating any other active\nserver processes", "msg_date": "Thu, 9 Dec 2021 17:53:17 +0400", "msg_from": "Neha Sharma <neha.sharma@enterprisedb.com>", "msg_from_op": false, "msg_subject": 
"Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Dec 9, 2021 at 7:23 PM Neha Sharma <neha.sharma@enterprisedb.com> wrote:\n>\n> On Thu, Dec 9, 2021 at 11:12 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n\n> \\c postgres\n> alter database test1 set tablespace tab1;\n>\n> --Cancel the below command after few seconds\n> alter database test1 set tablespace pg_default;\n>\n> \\c test1\n> alter table t set tablespace tab1;\n>\n>\n> Logfile Snippet:\n> 2021-12-09 17:49:18.110 +04 [18151] PANIC: could not fsync file \"base/116398/116400\": No such file or directory\n> 2021-12-09 17:49:19.105 +04 [18150] LOG: checkpointer process (PID 18151) was terminated by signal 6: Aborted\n> 2021-12-09 17:49:19.105 +04 [18150] LOG: terminating any other active server processes\n\nYeah, it seems like the fsync requests produced while copying database\nobjects to the new tablespace are not unregistered. This seems like a\ndifferent issue than previously raised. I will work on this next\nweek, thanks for testing.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Dec 2021 19:34:36 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Dec 9, 2021 at 7:23 PM Neha Sharma <neha.sharma@enterprisedb.com>\nwrote:\n\n> On Thu, Dec 9, 2021 at 11:12 AM Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n>\n>> Hi,\n>>\n>> The issue here is that we are trying to create a table that exists inside\n>> a non-default tablespace when doing ALTER DATABASE. 
I think this should be\n>> skipped otherwise we will come across the error like shown below:\n>>\n>> ashu@postgres=# alter database test set tablespace pg_default;\n>> ERROR: 58P02: could not create file\n>> \"pg_tblspc/16385/PG_15_202111301/16386/16390\": File exists\n>>\n>\n> Thanks Ashutosh for the patch, the mentioned issue has been resolved with\n> the patch.\n>\n> But I am still able to reproduce the crash consistently on top of this\n> patch + v7 patches,just the test case has been modified.\n>\n> create tablespace tab1 location '<dir_path>/test1';\n> create tablespace tab location '<dir_path>/test';\n> create database test tablespace tab;\n> \\c test\n> create table t( a int PRIMARY KEY,b text);\n> CREATE OR REPLACE FUNCTION large_val() RETURNS TEXT LANGUAGE SQL AS\n> 'select array_agg(md5(g::text))::text from generate_series(1, 256) g';\n> insert into t values (generate_series(1,100000), large_val());\n> alter table t set tablespace tab1 ;\n> \\c postgres\n> create database test1 template test;\n> \\c test1\n> alter table t set tablespace tab;\n> \\c postgres\n> alter database test1 set tablespace tab1;\n>\n> --Cancel the below command after few seconds\n> alter database test1 set tablespace pg_default;\n>\n> \\c test1\n> alter table t set tablespace tab1;\n>\n>\n> Logfile Snippet:\n> 2021-12-09 17:49:18.110 +04 [18151] PANIC: could not fsync file\n> \"base/116398/116400\": No such file or directory\n> 2021-12-09 17:49:19.105 +04 [18150] LOG: checkpointer process (PID 18151)\n> was terminated by signal 6: Aborted\n> 2021-12-09 17:49:19.105 +04 [18150] LOG: terminating any other active\n> server processes\n>\n\nThis is different from the issue you raised earlier. As Dilip said, we need\nto unregister sync requests for files that got successfully copied to the\ntarget database, but the overall alter database statement failed. 
We are\ndoing this when the database is created successfully, but not when it fails.\nProbably doing the same inside the cleanup function\nmovedb_failure_callback() should fix the problem.\n\n--\nWith Regards,\nAshutosh Sharma.\n", "msg_date": "Fri, 10 Dec 2021 07:38:49 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Dec 10, 2021 at 7:39 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>>\n>> Logfile Snippet:\n>> 2021-12-09 17:49:18.110 +04 [18151] PANIC:  could not fsync file \"base/116398/116400\": No such file or directory\n>> 2021-12-09 17:49:19.105 +04 [18150] LOG:  checkpointer process (PID 18151) was terminated by signal 6: Aborted\n>> 2021-12-09 17:49:19.105 +04 [18150] LOG:  terminating any other active server processes\n>\n>\n> This is different from the issue you raised earlier. As Dilip said, we need to unregister sync requests for files that got successfully copied to the target database, but the overall alter database statement failed. We are doing this when the database is created successfully, but not when it fails.\n> Probably doing the same inside the cleanup function movedb_failure_callback() should fix the problem.\n\nCorrect, I have done this cleanup, apart from this we have dropped the\nfsyc request in create database failure case as well and also need to\ndrop buffer in error case of creatdb as well as movedb. 
I have also\nfixed the other issue for which you gave the patch (a bit differently)\nbasically, in case of movedb the source and destination dboid are same\nso we don't need an additional parameter and also readjusted the\nconditions to avoid nested if.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 12 Dec 2021 13:39:37 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "+ /*\n+ * If the relation is from the default tablespace then we need to\n+ * create it in the destinations db's default tablespace.\nOtherwise,\n+ * we need to create in the same tablespace as it is in the source\n+ * database.\n+ */\n\nThis comment looks a bit confusing to me especially because when we say\ndestination db's default tablespace people may think of pg_default\ntablespace (at least I think so). Basically what you are trying to say here\n- \"If the relation exists in the same tablespace as the src database, then\nin the destination db also it should be the same or something like that.. \"\nSo, why not put it that way instead of referring to it as the default\ntablespace. It's just my view. 
If you disagree you can ignore it.\n\n--\n\n+ else if (src_dboid == dst_dboid)\n+ continue;\n+ else\n+ dstrnode.spcNode = srcrnode.spcNode;;\n\nThere is an extra semicolon here.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\nOn Sun, Dec 12, 2021 at 1:39 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Fri, Dec 10, 2021 at 7:39 AM Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n> >>\n> >> Logfile Snippet:\n> >> 2021-12-09 17:49:18.110 +04 [18151] PANIC: could not fsync file\n> \"base/116398/116400\": No such file or directory\n> >> 2021-12-09 17:49:19.105 +04 [18150] LOG: checkpointer process (PID\n> 18151) was terminated by signal 6: Aborted\n> >> 2021-12-09 17:49:19.105 +04 [18150] LOG: terminating any other active\n> server processes\n> >\n> >\n> > This is different from the issue you raised earlier. As Dilip said, we\n> need to unregister sync requests for files that got successfully copied to\n> the target database, but the overall alter database statement failed. We\n> are doing this when the database is created successfully, but not when it\n> fails.\n> > Probably doing the same inside the cleanup function\n> movedb_failure_callback() should fix the problem.\n>\n> Correct, I have done this cleanup, apart from this we have dropped the\n> fsyc request in create database failure case as well and also need to\n> drop buffer in error case of creatdb as well as movedb. I have also\n> fixed the other issue for which you gave the patch (a bit differently)\n> basically, in case of movedb the source and destination dboid are same\n> so we don't need an additional parameter and also readjusted the\n> conditions to avoid nested if.\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>\n", "msg_date": "Mon, 13 Dec 2021 08:34:30 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Dec 13, 2021 at 8:34 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> + /*\n> + * If the relation is from the default tablespace then we need to\n> + * create it in the destinations db's default tablespace. Otherwise,\n> + * we need to create in the same tablespace as it is in the source\n> + * database.\n> + */\n>\n> This comment looks a bit confusing to me especially because when we say destination db's default tablespace people may think of pg_default tablespace (at least I think so). Basically what you are trying to say here - \"If the relation exists in the same tablespace as the src database, then in the destination db also it should be the same or something like that.. \" So, why not put it that way instead of referring to it as the default tablespace. It's just my view. If you disagree you can ignore it.\n>\n> --\n>\n> + else if (src_dboid == dst_dboid)\n> + continue;\n> + else\n> + dstrnode.spcNode = srcrnode.spcNode;;\n>\n> There is an extra semicolon here.\n\n\nNoted. 
I will fix them in the next version.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Dec 2021 19:59:39 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Dec 2, 2021 at 07:19:50PM +0530, Dilip Kumar wrote:\n From the patch:\n\n> Currently, CREATE DATABASE forces a checkpoint, then copies all the files,\n> then forces another checkpoint. The comments in the createdb() function\n> explain the reasons for this. The attached patch fixes this problem by making\n> create database completely WAL logged so that we can avoid the checkpoints.\n> \n> This can also be useful for supporting the TDE. For example, if we need different\n> encryption for the source and the target database then we can not re-encrypt the\n> page data if we copy the whole directory. But with this patch, we are copying\n> page by page so we have an opportunity to re-encrypt the page before copying that\n> to the target database.\n\nUh, why is this true? Why can't we just copy the heap/index files 8k at\na time and reencrypt them during the file copy, rather than using shared\nbuffers?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 15 Dec 2021 13:45:44 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Dec 16, 2021 at 12:15 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, Dec 2, 2021 at 07:19:50PM +0530, Dilip Kumar wrote:\n> From the patch:\n>\n> > Currently, CREATE DATABASE forces a checkpoint, then copies all the files,\n> > then forces another checkpoint. The comments in the createdb() function\n> > explain the reasons for this. 
The attached patch fixes this problem by making\n> > create database completely WAL logged so that we can avoid the checkpoints.\n> >\n> > This can also be useful for supporting the TDE. For example, if we need different\n> > encryption for the source and the target database then we can not re-encrypt the\n> > page data if we copy the whole directory. But with this patch, we are copying\n> > page by page so we have an opportunity to re-encrypt the page before copying that\n> > to the target database.\n>\n> Uh, why is this true? Why can't we just copy the heap/index files 8k at\n> a time and reencrypt them during the file copy, rather than using shared\n> buffers?\n\nHi Bruce,\n\nYeah, you are right that if we copy in 8k block then we can re-encrypt\nthe page, but in the current system, we are not copying block by\nblock. So the main effort for this patch is not only for TDE but to\nget rid of the checkpoint we are forced to do before and after create\ndatabase. So my point is that in this patch since we are copying page\nby page we get an opportunity to re-encrypt the page. 
I agree that if\nthe re-encryption would have been the main goal of this patch then\ntrue we can copy files in 8k blocks and re-encrypt those blocks, that\ntime even if we have to access some page data for re-encryption (like\nnonce) then also we can do it, but that is not the main objective.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 Dec 2021 17:47:03 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nWhile testing the v8 patches in a hot-standby setup, it was observed the\nmaster is crashing with the below error;\n\n2021-12-16 19:32:47.757 +04 [101483] PANIC: could not fsync file\n\"pg_tblspc/16385/PG_15_202112111/16386/16391\": No such file or directory\n2021-12-16 19:32:48.917 +04 [101482] LOG: checkpointer process (PID\n101483) was terminated by signal 6: Aborted\n\nParameters configured at master:\nwal_level = hot_standby\nmax_wal_senders = 3\nhot_standby = on\nmax_standby_streaming_delay= -1\nwal_consistency_checking='all'\nmax_wal_size= 10GB\ncheckpoint_timeout= 1d\nlog_min_messages=debug1\n\nTest Case:\ncreate tablespace tab1 location\n'/home/edb/PGsources/postgresql/inst/bin/test1';\ncreate tablespace tab location\n'/home/edb/PGsources/postgresql/inst/bin/test';\ncreate database test tablespace tab;\n\\c test\ncreate table t( a int PRIMARY KEY,b text);\nCREATE OR REPLACE FUNCTION large_val() RETURNS TEXT LANGUAGE SQL AS 'select\narray_agg(md5(g::text))::text from generate_series(1, 256) g';\ninsert into t values (generate_series(1,100000), large_val());\nalter table t set tablespace tab1 ;\n\\c postgres\ncreate database test1 template test;\n\\c test1\nalter table t set tablespace tab;\n\\c postgres\nalter database test1 set tablespace tab1;\n\n--cancel the below command\nalter database test1 set tablespace pg_default; --press ctrl+c\n\\c test1\nalter table t 
set tablespace tab1;\n\n\nLog file attached for reference.\n\nThanks.\n--\nRegards,\nNeha Sharma\n\n\nOn Thu, Dec 16, 2021 at 4:17 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Thu, Dec 16, 2021 at 12:15 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Thu, Dec 2, 2021 at 07:19:50PM +0530, Dilip Kumar wrote:\n> > From the patch:\n> >\n> > > Currently, CREATE DATABASE forces a checkpoint, then copies all the\n> files,\n> > > then forces another checkpoint. The comments in the createdb() function\n> > > explain the reasons for this. The attached patch fixes this problem by\n> making\n> > > create database completely WAL logged so that we can avoid the\n> checkpoints.\n> > >\n> > > This can also be useful for supporting the TDE. For example, if we\n> need different\n> > > encryption for the source and the target database then we can not\n> re-encrypt the\n> > > page data if we copy the whole directory. But with this patch, we are\n> copying\n> > > page by page so we have an opportunity to re-encrypt the page before\n> copying that\n> > > to the target database.\n> >\n> > Uh, why is this true? Why can't we just copy the heap/index files 8k at\n> > a time and reencrypt them during the file copy, rather than using shared\n> > buffers?\n>\n> Hi Bruce,\n>\n> Yeah, you are right that if we copy in 8k block then we can re-encrypt\n> the page, but in the current system, we are not copying block by\n> block. So the main effort for this patch is not only for TDE but to\n> get rid of the checkpoint we are forced to do before and after create\n> database. So my point is that in this patch since we are copying page\n> by page we get an opportunity to re-encrypt the page. 
I agree that if\n> the re-encryption would have been the main goal of this patch then\n> true we can copy files in 8k blocks and re-encrypt those blocks, that\n> time even if we have to access some page data for re-encryption (like\n> nonce) then also we can do it, but that is not the main objective.\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n>", "msg_date": "Thu, 16 Dec 2021 19:56:09 +0400", "msg_from": "Neha Sharma <neha.sharma@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "I am getting the below error when running the same test-case that Neha\nshared in her previous email.\n\nERROR: 55000: some relations of database \"test1\" are already in tablespace\n\"tab1\"\nHINT: You must move them back to the database's default tablespace before\nusing this command.\nLOCATION: movedb, dbcommands.c:1555\n\ntest-case:\n========\ncreate tablespace tab1 location '/home/ashu/test1';\ncreate tablespace tab location '/home/ashu/test';\n\ncreate database test tablespace tab;\n\\c test\n\ncreate table t(a int primary key, b text);\n\ncreate or replace function large_val() returns text language sql as 'select\narray_agg(md5(g::text))::text from generate_series(1, 256) g';\n\ninsert into t values (generate_series(1,100000), large_val());\n\nalter table t set tablespace tab1 ;\n\n\\c postgres\ncreate database test1 template test;\n\n\\c test1\nalter table t set tablespace tab;\n\n\\c postgres\nalter database test1 set tablespace tab1; -- this fails with the given\nerror.\n\nObservations:\n===========\nPlease note that before running above alter database statement, the table\n't' is moved to tablespace 'tab' from 'tab1' so not sure why ReadDir() is\nreturning true when searching for table 't' in tablespace 'tab1'. 
It should\nhave returned NULL here:\n\n while ((xlde = ReadDir(dstdir, dst_dbpath)) != NULL)\n {\n if (strcmp(xlde->d_name, \".\") == 0 ||\n strcmp(xlde->d_name, \"..\") == 0)\n continue;\n\n ereport(ERROR,\n (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n errmsg(\"some relations of database \\\"%s\\\" are already\nin tablespace \\\"%s\\\"\",\n dbname, tblspcname),\n errhint(\"You must move them back to the database's\ndefault tablespace before using this command.\")));\n }\n\nAlso, if I run the checkpoint explicitly before executing the above alter\ndatabase statement, this error doesn't appear which means it only happens\nwith the new changes because earlier we were doing the force checkpoint at\nthe end of createdb statement.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Thu, Dec 16, 2021 at 9:26 PM Neha Sharma <neha.sharma@enterprisedb.com>\nwrote:\n\n> Hi,\n>\n> While testing the v8 patches in a hot-standby setup, it was observed the\n> master is crashing with the below error;\n>\n> 2021-12-16 19:32:47.757 +04 [101483] PANIC: could not fsync file\n> \"pg_tblspc/16385/PG_15_202112111/16386/16391\": No such file or directory\n> 2021-12-16 19:32:48.917 +04 [101482] LOG: checkpointer process (PID\n> 101483) was terminated by signal 6: Aborted\n>\n> Parameters configured at master:\n> wal_level = hot_standby\n> max_wal_senders = 3\n> hot_standby = on\n> max_standby_streaming_delay= -1\n> wal_consistency_checking='all'\n> max_wal_size= 10GB\n> checkpoint_timeout= 1d\n> log_min_messages=debug1\n>\n> Test Case:\n> create tablespace tab1 location\n> '/home/edb/PGsources/postgresql/inst/bin/test1';\n> create tablespace tab location\n> '/home/edb/PGsources/postgresql/inst/bin/test';\n> create database test tablespace tab;\n> \\c test\n> create table t( a int PRIMARY KEY,b text);\n> CREATE OR REPLACE FUNCTION large_val() RETURNS TEXT LANGUAGE SQL AS\n> 'select array_agg(md5(g::text))::text from generate_series(1, 256) g';\n> insert into t values 
(generate_series(1,100000), large_val());\n> alter table t set tablespace tab1 ;\n> \\c postgres\n> create database test1 template test;\n> \\c test1\n> alter table t set tablespace tab;\n> \\c postgres\n> alter database test1 set tablespace tab1;\n>\n> --cancel the below command\n> alter database test1 set tablespace pg_default; --press ctrl+c\n> \\c test1\n> alter table t set tablespace tab1;\n>\n>\n> Log file attached for reference.\n>\n> Thanks.\n> --\n> Regards,\n> Neha Sharma\n>\n>\n> On Thu, Dec 16, 2021 at 4:17 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n>> On Thu, Dec 16, 2021 at 12:15 AM Bruce Momjian <bruce@momjian.us> wrote:\n>> >\n>> > On Thu, Dec 2, 2021 at 07:19:50PM +0530, Dilip Kumar wrote:\n>> > From the patch:\n>> >\n>> > > Currently, CREATE DATABASE forces a checkpoint, then copies all the\n>> files,\n>> > > then forces another checkpoint. The comments in the createdb()\n>> function\n>> > > explain the reasons for this. The attached patch fixes this problem\n>> by making\n>> > > create database completely WAL logged so that we can avoid the\n>> checkpoints.\n>> > >\n>> > > This can also be useful for supporting the TDE. For example, if we\n>> need different\n>> > > encryption for the source and the target database then we can not\n>> re-encrypt the\n>> > > page data if we copy the whole directory. But with this patch, we\n>> are copying\n>> > > page by page so we have an opportunity to re-encrypt the page before\n>> copying that\n>> > > to the target database.\n>> >\n>> > Uh, why is this true? Why can't we just copy the heap/index files 8k at\n>> > a time and reencrypt them during the file copy, rather than using shared\n>> > buffers?\n>>\n>> Hi Bruce,\n>>\n>> Yeah, you are right that if we copy in 8k block then we can re-encrypt\n>> the page, but in the current system, we are not copying block by\n>> block. 
So the main effort for this patch is not only for TDE but to\n>> get rid of the checkpoint we are forced to do before and after create\n>> database. So my point is that in this patch since we are copying page\n>> by page we get an opportunity to re-encrypt the page. I agree that if\n>> the re-encryption would have been the main goal of this patch then\n>> true we can copy files in 8k blocks and re-encrypt those blocks, that\n>> time even if we have to access some page data for re-encryption (like\n>> nonce) then also we can do it, but that is not the main objective.\n>>\n>> --\n>> Regards,\n>> Dilip Kumar\n>> EnterpriseDB: http://www.enterprisedb.com\n>>\n>>\n>>\n\nI am getting the below error when running the same test-case that Neha shared in her previous email.ERROR:  55000: some relations of database \"test1\" are already in tablespace \"tab1\"HINT:  You must move them back to the database's default tablespace before using this command.LOCATION:  movedb, dbcommands.c:1555test-case:========create tablespace tab1 location '/home/ashu/test1';create tablespace tab location '/home/ashu/test';create database test tablespace tab;\\c testcreate table t(a int primary key, b text);create or replace function large_val() returns text language sql as 'select array_agg(md5(g::text))::text from generate_series(1, 256) g';insert into t values (generate_series(1,100000), large_val());alter table t set tablespace tab1 ;\\c postgrescreate database test1 template test;\\c test1alter table t set tablespace tab;\\c postgresalter database test1 set tablespace tab1; -- this fails with  the given error.Observations:===========Please note that before running above alter database statement, the table 't'  is moved to tablespace 'tab' from 'tab1' so not sure why ReadDir() is returning true when searching for table 't' in tablespace 'tab1'. 
It should have returned NULL here: while ((xlde = ReadDir(dstdir, dst_dbpath)) != NULL)        {            if (strcmp(xlde->d_name, \".\") == 0 ||                strcmp(xlde->d_name, \"..\") == 0)                continue;            ereport(ERROR,                    (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),                     errmsg(\"some relations of database \\\"%s\\\" are already in tablespace \\\"%s\\\"\",                            dbname, tblspcname),                     errhint(\"You must move them back to the database's default tablespace before using this command.\")));        }Also, if I run the checkpoint explicitly before executing the above alter database statement, this error doesn't appear which means it only happens with the new changes because earlier we were doing the force checkpoint at the end of createdb statement.--With Regards,Ashutosh Sharma.On Thu, Dec 16, 2021 at 9:26 PM Neha Sharma <neha.sharma@enterprisedb.com> wrote:Hi,While testing the v8 patches in a hot-standby setup, it was observed the master is crashing with the below error;2021-12-16 19:32:47.757 +04 [101483] PANIC:  could not fsync file \"pg_tblspc/16385/PG_15_202112111/16386/16391\": No such file or directory2021-12-16 19:32:48.917 +04 [101482] LOG:  checkpointer process (PID 101483) was terminated by signal 6: AbortedParameters configured at master:wal_level = hot_standbymax_wal_senders = 3hot_standby = onmax_standby_streaming_delay= -1wal_consistency_checking='all'max_wal_size= 10GBcheckpoint_timeout= 1dlog_min_messages=debug1Test Case:create tablespace tab1 location '/home/edb/PGsources/postgresql/inst/bin/test1';create tablespace tab location '/home/edb/PGsources/postgresql/inst/bin/test';create database test tablespace tab;\\c testcreate table t( a int PRIMARY KEY,b text);CREATE OR REPLACE FUNCTION large_val() RETURNS TEXT LANGUAGE SQL AS 'select array_agg(md5(g::text))::text from generate_series(1, 256) g';insert into t values (generate_series(1,100000), 
large_val());alter table t set tablespace tab1 ;\\c postgrescreate database test1 template test;\\c test1alter table t set tablespace tab;\\c postgresalter database test1 set tablespace tab1;--cancel the below commandalter database test1 set tablespace pg_default; --press ctrl+c\\c test1alter table t set tablespace tab1;Log file attached for reference.Thanks.--Regards,Neha SharmaOn Thu, Dec 16, 2021 at 4:17 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:On Thu, Dec 16, 2021 at 12:15 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, Dec  2, 2021 at 07:19:50PM +0530, Dilip Kumar wrote:\n> From the patch:\n>\n> > Currently, CREATE DATABASE forces a checkpoint, then copies all the files,\n> > then forces another checkpoint. The comments in the createdb() function\n> > explain the reasons for this. The attached patch fixes this problem by making\n> > create database completely WAL logged so that we can avoid the checkpoints.\n> >\n> > This can also be useful for supporting the TDE. For example, if we need different\n> > encryption for the source and the target database then we can not re-encrypt the\n> > page data if we copy the whole directory.  But with this patch, we are copying\n> > page by page so we have an opportunity to re-encrypt the page before copying that\n> > to the target database.\n>\n> Uh, why is this true?  Why can't we just copy the heap/index files 8k at\n> a time and reencrypt them during the file copy, rather than using shared\n> buffers?\n\nHi Bruce,\n\nYeah, you are right that if we copy in 8k block then we can re-encrypt\nthe page, but in the current system, we are not copying block by\nblock.  So the main effort for this patch is not only for TDE but to\nget rid of the checkpoint we are forced to do before and after create\ndatabase.  So my point is that in this patch since we are copying page\nby page we get an opportunity to re-encrypt the page.  
I agree that if\nthe re-encryption would have been the main goal of this patch then\ntrue we can copy files in 8k blocks and re-encrypt those blocks, that\ntime even if we have to access some page data for re-encryption (like\nnonce) then also we can do it, but that is not the main objective.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 21 Dec 2021 11:10:17 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Dec 16, 2021 at 9:26 PM Neha Sharma\n<neha.sharma@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> While testing the v8 patches in a hot-standby setup, it was observed the master is crashing with the below error;\n>\n> 2021-12-16 19:32:47.757 +04 [101483] PANIC: could not fsync file \"pg_tblspc/16385/PG_15_202112111/16386/16391\": No such file or directory\n> 2021-12-16 19:32:48.917 +04 [101482] LOG: checkpointer process (PID 101483) was terminated by signal 6: Aborted\n>\n> Parameters configured at master:\n> wal_level = hot_standby\n> max_wal_senders = 3\n> hot_standby = on\n> max_standby_streaming_delay= -1\n> wal_consistency_checking='all'\n> max_wal_size= 10GB\n> checkpoint_timeout= 1d\n> log_min_messages=debug1\n>\n> Test Case:\n> create tablespace tab1 location '/home/edb/PGsources/postgresql/inst/bin/test1';\n> create tablespace tab location '/home/edb/PGsources/postgresql/inst/bin/test';\n> create database test tablespace tab;\n> \\c test\n> create table t( a int PRIMARY KEY,b text);\n> CREATE OR REPLACE FUNCTION large_val() RETURNS TEXT LANGUAGE SQL AS 'select array_agg(md5(g::text))::text from generate_series(1, 256) g';\n> insert into t values (generate_series(1,100000), large_val());\n> alter table t set tablespace tab1 ;\n> \\c postgres\n> create database test1 template test;\n> \\c test1\n> alter table t set tablespace tab;\n> \\c postgres\n> alter database test1 
set tablespace tab1;\n>\n> --cancel the below command\n> alter database test1 set tablespace pg_default; --press ctrl+c\n> \\c test1\n> alter table t set tablespace tab1;\n>\n>\n> Log file attached for reference.\n\nSeems like this is an existing issue and I am able to reproduce on the\nPostgreSQL head as well [1]\n\n[1] https://www.postgresql.org/message-id/CAFiTN-szX%3DayO80EnSWonBu1YMZrpOr9V0R3BzHBSjMrMPAeMg%40mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Dec 2021 17:34:09 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi Dilip,\n\nOn Tue, Dec 21, 2021 at 11:10 AM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\n> I am getting the below error when running the same test-case that Neha\n> shared in her previous email.\n>\n> ERROR: 55000: some relations of database \"test1\" are already in\n> tablespace \"tab1\"\n> HINT: You must move them back to the database's default tablespace before\n> using this command.\n> LOCATION: movedb, dbcommands.c:1555\n>\n> test-case:\n> ========\n> create tablespace tab1 location '/home/ashu/test1';\n> create tablespace tab location '/home/ashu/test';\n>\n> create database test tablespace tab;\n> \\c test\n>\n> create table t(a int primary key, b text);\n>\n> create or replace function large_val() returns text language sql as\n> 'select array_agg(md5(g::text))::text from generate_series(1, 256) g';\n>\n> insert into t values (generate_series(1,100000), large_val());\n>\n> alter table t set tablespace tab1 ;\n>\n> \\c postgres\n> create database test1 template test;\n>\n> \\c test1\n> alter table t set tablespace tab;\n>\n> \\c postgres\n> alter database test1 set tablespace tab1; -- this fails with the given\n> error.\n>\n> Observations:\n> ===========\n> Please note that before running above alter database statement, the 
table\n> 't' is moved to tablespace 'tab' from 'tab1' so not sure why ReadDir() is\n> returning true when searching for table 't' in tablespace 'tab1'. It should\n> have returned NULL here:\n>\n> while ((xlde = ReadDir(dstdir, dst_dbpath)) != NULL)\n> {\n> if (strcmp(xlde->d_name, \".\") == 0 ||\n> strcmp(xlde->d_name, \"..\") == 0)\n> continue;\n>\n> ereport(ERROR,\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> errmsg(\"some relations of database \\\"%s\\\" are already\n> in tablespace \\\"%s\\\"\",\n> dbname, tblspcname),\n> errhint(\"You must move them back to the database's\n> default tablespace before using this command.\")));\n> }\n>\n> Also, if I run the checkpoint explicitly before executing the above alter\n> database statement, this error doesn't appear which means it only happens\n> with the new changes because earlier we were doing the force checkpoint at\n> the end of createdb statement.\n>\n\nIs this expected? I think it is not.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nHi Dilip,On Tue, Dec 21, 2021 at 11:10 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:I am getting the below error when running the same test-case that Neha shared in her previous email.ERROR:  55000: some relations of database \"test1\" are already in tablespace \"tab1\"HINT:  You must move them back to the database's default tablespace before using this command.LOCATION:  movedb, dbcommands.c:1555test-case:========create tablespace tab1 location '/home/ashu/test1';create tablespace tab location '/home/ashu/test';create database test tablespace tab;\\c testcreate table t(a int primary key, b text);create or replace function large_val() returns text language sql as 'select array_agg(md5(g::text))::text from generate_series(1, 256) g';insert into t values (generate_series(1,100000), large_val());alter table t set tablespace tab1 ;\\c postgrescreate database test1 template test;\\c test1alter table t set tablespace tab;\\c postgresalter database test1 set tablespace tab1; -- 
this fails with  the given error.Observations:===========Please note that before running above alter database statement, the table 't'  is moved to tablespace 'tab' from 'tab1' so not sure why ReadDir() is returning true when searching for table 't' in tablespace 'tab1'. It should have returned NULL here: while ((xlde = ReadDir(dstdir, dst_dbpath)) != NULL)        {            if (strcmp(xlde->d_name, \".\") == 0 ||                strcmp(xlde->d_name, \"..\") == 0)                continue;            ereport(ERROR,                    (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),                     errmsg(\"some relations of database \\\"%s\\\" are already in tablespace \\\"%s\\\"\",                            dbname, tblspcname),                     errhint(\"You must move them back to the database's default tablespace before using this command.\")));        }Also, if I run the checkpoint explicitly before executing the above alter database statement, this error doesn't appear which means it only happens with the new changes because earlier we were doing the force checkpoint at the end of createdb statement.Is this expected? 
I think it is not.--With Regards,Ashutosh Sharma.", "msg_date": "Wed, 22 Dec 2021 13:48:54 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Dec 21, 2021 at 11:10 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> I am getting the below error when running the same test-case that Neha shared in her previous email.\n>\n> ERROR: 55000: some relations of database \"test1\" are already in tablespace \"tab1\"\n> HINT: You must move them back to the database's default tablespace before using this command.\n> LOCATION: movedb, dbcommands.c:1555\n>\n> test-case:\n> ========\n> create tablespace tab1 location '/home/ashu/test1';\n> create tablespace tab location '/home/ashu/test';\n>\n> create database test tablespace tab;\n> \\c test\n>\n> create table t(a int primary key, b text);\n>\n> create or replace function large_val() returns text language sql as 'select array_agg(md5(g::text))::text from generate_series(1, 256) g';\n>\n> insert into t values (generate_series(1,100000), large_val());\n>\n> alter table t set tablespace tab1 ;\n>\n> \\c postgres\n> create database test1 template test;\n>\n> \\c test1\n> alter table t set tablespace tab;\n>\n> \\c postgres\n> alter database test1 set tablespace tab1; -- this fails with the given error.\n>\n> Observations:\n> ===========\n> Please note that before running above alter database statement, the table 't' is moved to tablespace 'tab' from 'tab1' so not sure why ReadDir() is returning true when searching for table 't' in tablespace 'tab1'. 
It should have returned NULL here:\n>\n> while ((xlde = ReadDir(dstdir, dst_dbpath)) != NULL)\n> {\n> if (strcmp(xlde->d_name, \".\") == 0 ||\n> strcmp(xlde->d_name, \"..\") == 0)\n> continue;\n>\n> ereport(ERROR,\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> errmsg(\"some relations of database \\\"%s\\\" are already in tablespace \\\"%s\\\"\",\n> dbname, tblspcname),\n> errhint(\"You must move them back to the database's default tablespace before using this command.\")));\n> }\n>\n> Also, if I run the checkpoint explicitly before executing the above alter database statement, this error doesn't appear which means it only happens with the new changes because earlier we were doing the force checkpoint at the end of createdb statement.\n>\n\nBasically, ALTER TABLE SET TABLESPACE, will register the\nSYNC_UNLINK_REQUEST for the table files w.r.t the old tablespace, but\nthose will get unlinked during the next checkpoint. Although the\nfiles must be truncated during commit itself but unlink might not have\nbeen processed until the next checkpoint. 
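To make the timing concrete, here is a simplified model of that deferred unlink in Python (an illustration only — the queue, function names, and file names are invented and do not correspond to the real PostgreSQL sync machinery):

```python
import os
import tempfile

# Simplified model: moving a relation to a new tablespace copies the file
# and *queues* the unlink of the old file; the queue is only drained at the
# next checkpoint, so until then both files exist on disk.
pending_unlinks = []

def set_tablespace(old_path, new_path):
    with open(old_path, "rb") as src, open(new_path, "wb") as dst:
        dst.write(src.read())
    pending_unlinks.append(old_path)   # unlink is deferred, not immediate

def checkpoint():
    # drain the queue, as the checkpointer would
    while pending_unlinks:
        os.unlink(pending_unlinks.pop())

tmp = tempfile.mkdtemp()
old = os.path.join(tmp, "tab1_16384")  # invented relfilenode in old tablespace
new = os.path.join(tmp, "tab_16384")   # invented relfilenode in new tablespace
with open(old, "wb") as f:
    f.write(b"relation data")

set_tablespace(old, new)
print(os.path.exists(old))   # True: unlink still pending
checkpoint()
print(os.path.exists(old))   # False: removed by the checkpoint
```

In this model, a directory scan of the old tablespace between the move and the checkpoint still sees the old relation file — exactly the window in which a ReadDir() check of the old directory can trip.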
This is the explanation for\nthe behavior you found during your investigation, but I haven't looked\ninto the issue so I will do it latest by tomorrow and send my\nanalysis.\n\nThanks for working on this.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 22 Dec 2021 14:44:37 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Dec 22, 2021 at 2:44 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Tue, Dec 21, 2021 at 11:10 AM Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n> >\n> > I am getting the below error when running the same test-case that Neha\n> shared in her previous email.\n> >\n> > ERROR: 55000: some relations of database \"test1\" are already in\n> tablespace \"tab1\"\n> > HINT: You must move them back to the database's default tablespace\n> before using this command.\n> > LOCATION: movedb, dbcommands.c:1555\n> >\n> > test-case:\n> > ========\n> > create tablespace tab1 location '/home/ashu/test1';\n> > create tablespace tab location '/home/ashu/test';\n> >\n> > create database test tablespace tab;\n> > \\c test\n> >\n> > create table t(a int primary key, b text);\n> >\n> > create or replace function large_val() returns text language sql as\n> 'select array_agg(md5(g::text))::text from generate_series(1, 256) g';\n> >\n> > insert into t values (generate_series(1,100000), large_val());\n> >\n> > alter table t set tablespace tab1 ;\n> >\n> > \\c postgres\n> > create database test1 template test;\n> >\n> > \\c test1\n> > alter table t set tablespace tab;\n> >\n> > \\c postgres\n> > alter database test1 set tablespace tab1; -- this fails with the given\n> error.\n> >\n> > Observations:\n> > ===========\n> > Please note that before running above alter database statement, the\n> table 't' is moved to tablespace 'tab' from 'tab1' so not sure why\n> ReadDir() is 
returning true when searching for table 't' in tablespace\n> 'tab1'. It should have returned NULL here:\n> >\n> > while ((xlde = ReadDir(dstdir, dst_dbpath)) != NULL)\n> > {\n> > if (strcmp(xlde->d_name, \".\") == 0 ||\n> > strcmp(xlde->d_name, \"..\") == 0)\n> > continue;\n> >\n> > ereport(ERROR,\n> > (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > errmsg(\"some relations of database \\\"%s\\\" are\n> already in tablespace \\\"%s\\\"\",\n> > dbname, tblspcname),\n> > errhint(\"You must move them back to the database's\n> default tablespace before using this command.\")));\n> > }\n> >\n> > Also, if I run the checkpoint explicitly before executing the above\n> alter database statement, this error doesn't appear which means it only\n> happens with the new changes because earlier we were doing the force\n> checkpoint at the end of createdb statement.\n> >\n>\n> Basically, ALTER TABLE SET TABLESPACE, will register the\n> SYNC_UNLINK_REQUEST for the table files w.r.t the old tablespace, but\n> those will get unlinked during the next checkpoint. Although the\n> files must be truncated during commit itself but unlink might not have\n> been processed until the next checkpoint. This is the explanation for\n> the behavior you found during your investigation, but I haven't looked\n> into the issue so I will do it latest by tomorrow and send my\n> analysis.\n>\n> Thanks for working on this.\n>\n\nYeah the problem here is that the old rel file that needs to be unlinked\nstill exists in the old tablespace. Earlier, without your changes we were\ndoing force checkpoint before starting with the actual work for the alter\ndatabase which unlinked/deleted the rel file from the old tablespace, but\nthat is not the case here. 
Now we have removed the force checkpoint from\nmovedb() which means until the auto checkpoint happens the old rel file\nwill remain in the old tablespace thereby creating this problem.\n\n--\nWith Regards,\nAshutosh Sharma.\n", "msg_date": "Wed, 22 Dec 2021 16:26:18 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Dec 22, 2021 at 4:26 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n\n>> Basically, ALTER TABLE SET TABLESPACE, will register the\n>> SYNC_UNLINK_REQUEST for the table files w.r.t the old tablespace, but\n>> those will get unlinked during the next checkpoint.  Although the\n>> files must be truncated during commit itself but unlink might not have\n>> been processed until the next checkpoint.  This is the explanation for\n>> the behavior you found during your investigation, but I haven't looked\n>> into the issue so I will do it latest by tomorrow and send my\n>> analysis.\n>>\n>> Thanks for working on this.\n>\n>\n> Yeah the problem here is that the old rel file that needs to be unlinked still exists in the old tablespace. Earlier, without your changes we were doing force checkpoint before starting with the actual work for the alter database which unlinked/deleted the rel file from the old tablespace, but that is not the case here.  Now we have removed the force checkpoint from movedb() which means until the auto checkpoint happens the old rel file will remain in the old tablespace thereby creating this problem.\n\nOne solution to this problem could be that, similar to mdpostckpt(),\nwe invent one more function which takes dboid and dsttblspc oid as\ninput and it will unlink all the requests which are w.r.t.
the dboid\nand tablespaceoid, and before doing it we should also do\nForgetDatabaseSyncRequests(), so that next checkpoint does not flush\nsome old request.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 22 Dec 2021 17:06:40 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Dec 22, 2021 at 5:06 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Dec 22, 2021 at 4:26 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> >> Basically, ALTER TABLE SET TABLESPACE, will register the\n> >> SYNC_UNLINK_REQUEST for the table files w.r.t the old tablespace, but\n> >> those will get unlinked during the next checkpoint. Although the\n> >> files must be truncated during commit itself but unlink might not have\n> >> been processed until the next checkpoint. This is the explanation for\n> >> the behavior you found during your investigation, but I haven't looked\n> >> into the issue so I will do it latest by tomorrow and send my\n> >> analysis.\n> >>\n> >> Thanks for working on this.\n> >\n> >\n> > Yeah the problem here is that the old rel file that needs to be unlinked still exists in the old tablespace. Earlier, without your changes we were doing force checkpoint before starting with the actual work for the alter database which unlinked/deleted the rel file from the old tablespace, but that is not the case here. Now we have removed the force checkpoint from movedb() which means until the auto checkpoint happens the old rel file will remain in the old tablespace thereby creating this problem.\n>\n> One solution to this problem could be that, similar to mdpostckpt(),\n> we invent one more function which takes dboid and dsttblspc oid as\n> input and it will unlink all the requests which are w.r.t. 
the dboid\n> and tablespaceoid, and before doing it we should also do\n> ForgetDatabaseSyncRequests(), so that next checkpoint does not flush\n> some old request.\n\nI couldn't find the mdpostchkpt() function. Are you talking about\nSyncPostCheckpoint() ? Anyway, as you have rightly said, we need to\nunlink all the files available inside the dst_tablespaceoid/dst_dboid/\ndirectory by scanning the pendingUnlinks list. And finally we don't\nwant the next checkpoint to unlink this file again and PANIC so for\nthat we have to update the entry for this unlinked rel file in the\nhash table i.e. cancel the sync request for it.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Wed, 22 Dec 2021 20:02:38 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Dec 6, 2021 at 12:45 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> So for example, imagine tests with 1GB of shard_buffers, 8GB, and\n> 64GB. And template databases with sizes of whatever the default is,\n> 1GB, 10GB, 100GB. Repeatedly make 75% of the pages dirty and then\n> create a new database from one of the templates. And then just measure\n> the performance. Maybe for large databases this approach is just\n> really the pits -- and if your max_wal_size is too small, it\n> definitely will be. But, I don't know, maybe with reasonable settings\n> it's not that bad. Writing everything to disk twice - once to WAL and\n> once to the target directory - has to be more expensive than doing it\n> once. But on the other hand, it's all sequential I/O and the data\n> pages don't need to be fsync'd, so perhaps the overhead is relatively\n> mild. I don't know.\n\nI have been tied up with other things for a bit now and have not had\ntime to look at this thread; sorry about that. 
I have a little more\ntime available now so I thought I would take a look at this again and\nsee where things stand.\n\nSadly, it doesn't appear to me that anyone has done any performance\ntesting of this patch, along the lines suggested above or otherwise,\nand I think it's a crucial question for the patch. My reading of this\nthread is that nobody really likes the idea of maintaining two methods\nfor performing CREATE DATABASE, but nobody wants to hose people who\nare using it to clone large databases, either. To some extent those\nthings are inexorably in conflict. If we postulate that the 10TB\ntemplate database is on a local RAID array with 40 spindles, while\npg_wal is on an iSCSI volume that we access via a 128kB ISDN link,\nthen the new system is going to be infinitely worse. But real\nsituations aren't likely to be that bad, and it would be useful in my\nopinion to have an idea how bad they actually are.\n\nI'm somewhat inclined to propose that we keep the existing method\naround along with the new method. Even though nobody really likes\nthat, we don't necessarily have to maintain both methods forever. If,\nsay, we use the new method by default in all cases, but add an option\nto get the old method back if you need it, we could leave it that way\nfor a few years and then propose removing the old method (and the\nswitch to activate it) and see if anyone complains. That way, if the\nnew method turns out to suck in certain cases, users have a way out.\nHowever, I still think doing some performance testing would be a\nreally good idea. 
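One way to drive the kind of test matrix described in the quoted message is a small script that enumerates the combinations; this is a dry-run sketch that only prints the commands it would execute (cluster settings, database names, and pgbench parameters are placeholders, not a tested harness):

```python
# Dry-run generator for the suggested benchmark matrix: vary shared_buffers
# and template-database size, dirty most of shared buffers with a write
# workload, then time CREATE DATABASE from the template. Commands are only
# printed; actually running them requires a real PostgreSQL installation.
shared_buffers = ["1GB", "8GB", "64GB"]
template_sizes = ["default", "1GB", "10GB", "100GB"]

commands = []
for sb in shared_buffers:
    for tsize in template_sizes:
        # restart the cluster with the new shared_buffers setting
        commands.append(f"pg_ctl restart -o '-c shared_buffers={sb}'")
        # dirty a large fraction of shared buffers with a write-heavy run
        commands.append("pgbench -c 16 -j 4 -T 300 bench_db")
        # time creating a database from a template of the given size
        commands.append(
            f'psql -c "CREATE DATABASE copy_{sb}_{tsize} TEMPLATE tmpl_{tsize}"')

for c in commands:
    print(c)
```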
It's not a great plan to make decisions about this\nkind of thing in an information vacuum.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Feb 2022 12:09:08 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Dec 22, 2021 at 9:32 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> I couldn't find the mdpostchkpt() function. Are you talking about\n> SyncPostCheckpoint() ? Anyway, as you have rightly said, we need to\n> unlink all the files available inside the dst_tablespaceoid/dst_dboid/\n> directory by scanning the pendingUnlinks list. And finally we don't\n> want the next checkpoint to unlink this file again and PANIC so for\n> that we have to update the entry for this unlinked rel file in the\n> hash table i.e. cancel the sync request for it.\n\nUntil commit 3eb77eba5a51780d5cf52cd66a9844cd4d26feb0 in April 2019,\nthere was an mdpostckpt function, which is probably what was meant\nhere.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Feb 2022 12:11:49 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Sun, Dec 12, 2021 at 3:09 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Correct, I have done this cleanup, apart from this we have dropped the\n> fsyc request in create database failure case as well and also need to\n> drop buffer in error case of creatdb as well as movedb. 
I have also\n> fixed the other issue for which you gave the patch (a bit differently)\n> basically, in case of movedb the source and destination dboid are same\n> so we don't need an additional parameter and also readjusted the\n> conditions to avoid nested if.\n\nAmazingly to me given how much time has passed, these patches still\napply, although I think there are a few outstanding issues that you\npromised to fix in the next version and haven't yet addressed.\n\nIn 0007, I think you will need to work a bit harder. I don't think\nthat you can just add a second argument to\nForgetDatabaseSyncRequests() that makes it do something other than\nwhat the name of the function suggests but without renaming the\nfunction or updating any comments. Elsewhere we have things like\nTablespaceCreateDbspace and ResetUnloggedRelationsInDbspaceDir so\nperhaps we ought to just add a new function with a name inspired by\nthose precedents alongside the existing one, rather than doing it this\nway.\n\nIn 0008, this is a bit confusing:\n\n+ PageInit(dstPage, BufferGetPageSize(dstBuf), 0);\n+ memcpy(dstPage, srcPage, BLCKSZ);\n\nAfter a minute, I figured out that the point here was to force\nlog_newpage() to actually set the LSN, but how about a comment?\n\nI kind of wonder whether GetDatabaseRelationList should be broken into\ntwo functions so that don't have quite such deep nesting. And I wonder\nif maybe the return value of GetActiveSnapshot() should be cached in a\nlocal variable.\n\nOn the whole I think there aren't huge code-level issues here, even if\nthings need to be tweaked here and there and bugs fixed. 
The real key\nis arriving at a set of design trade-offs that doesn't make anyone too\nupset.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Feb 2022 15:30:00 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Feb 8, 2022 at 10:39 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> I have been tied up with other things for a bit now and have not had\n> time to look at this thread; sorry about that. I have a little more\n> time available now so I thought I would take a look at this again and\n> see where things stand.\n\nThanks for looking into this.\n\n>\n> Sadly, it doesn't appear to me that anyone has done any performance\n> testing of this patch, along the lines suggested above or otherwise,\n> and I think it's a crucial question for the patch.\n\nYeah, actually some performance testing started as shared by Ahustosh\n[1] and soon after that we got side tracked by another issue[2] which\nwe thought had to be fixed before we proceed with this feature.\n\nMy reading of this\n> thread is that nobody really likes the idea of maintaining two methods\n> for performing CREATE DATABASE, but nobody wants to hose people who\n> are using it to clone large databases, either. To some extent those\n> things are inexorably in conflict. If we postulate that the 10TB\n> template database is on a local RAID array with 40 spindles, while\n> pg_wal is on an iSCSI volume that we access via a 128kB ISDN link,\n> then the new system is going to be infinitely worse. 
But real\n> situations aren't likely to be that bad, and it would be useful in my\n> opinion to have an idea how bad they actually are.\n\nYeah that makes sense, I will work on performance testing in this line\nand also on previous ideas you suggested.\n\n> I'm somewhat inclined to propose that we keep the existing method\n> around along with the new method. Even though nobody really likes\n> that, we don't necessarily have to maintain both methods forever. If,\n> say, we use the new method by default in all cases, but add an option\n> to get the old method back if you need it, we could leave it that way\n> for a few years and then propose removing the old method (and the\n> switch to activate it) and see if anyone complains. That way, if the\n> new method turns out to suck in certain cases, users have a way out.\n> However, I still think doing some performance testing would be a\n> really good idea. It's not a great plan to make decisions about this\n> kind of thing in an information vacuum.\n\nYeah that makes sense to me.\n\nNow, one bigger question is can we proceed with this patch without\nfixing [2], IMHO, if we are deciding to keep the old method also\nintact then one option could be that for now only change CREATE\nDATABASE to support both old and new way of creating database and for\ntime being leave the ALTER DATABASE SET TABLESPACE alone and let it\nwork only with the old method? 
And another option is that we first\nfix the issue related to the tombstone file and then come back to\nthis?\n\nIMHO, the first option could be better in a way that we have already\nmade better progress in this patch and this is in better shape than\nthe other patch we are trying to make for removing the tombstone\nfiles.\n\n\n[1]https://www.postgresql.org/message-id/CAE9k0Pkg20tHq8oiJ%2BxXa9%3Daf3QZCSYTw99aBaPthA1UMKhnTg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CA%2BTgmobM5FN5x0u3tSpoNvk_TZPFCdbcHxsXCoY1ytn1dXROvg%40mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Feb 2022 10:16:52 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Feb 8, 2022 at 12:09:08PM -0500, Robert Haas wrote:\n> Sadly, it doesn't appear to me that anyone has done any performance\n> testing of this patch, along the lines suggested above or otherwise,\n> and I think it's a crucial question for the patch. My reading of this\n> thread is that nobody really likes the idea of maintaining two methods\n> for performing CREATE DATABASE, but nobody wants to hose people who\n> are using it to clone large databases, either. To some extent those\n> things are inexorably in conflict. If we postulate that the 10TB\n> template database is on a local RAID array with 40 spindles, while\n> pg_wal is on an iSCSI volume that we access via a 128kB ISDN link,\n> then the new system is going to be infinitely worse. But real\n> situations aren't likely to be that bad, and it would be useful in my\n> opinion to have an idea how bad they actually are.\n\nHonestly, I never understood why the checkpoint during CREATE DATABASE\nwas as problem --- we checkpoint by default every five minutes anyway,\nso why is an additional two a problem --- it just means the next\ncheckpoint will do less work. 
It is hard to see how avoiding\ncheckpoints to add WAL writes, fscyncs, and replication traffic could be\na win.\n\nI see the patch justification outlined here:\n\n\thttps://www.postgresql.org/message-id/CAFiTN-sP6yLVTfjR42mEfvFwJ-SZ2iEtG1t0j=QX09X=BM+KWQ@mail.gmail.com\n\nTDE is mentioned as a value for this patch, but I don't see why it is\nneeded --- TDE can easily decrypt/encrypt the pages while they are\ncopied.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 9 Feb 2022 09:18:57 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "\nOn 6/16/21 03:52, Dilip Kumar wrote:\n> On Tue, Jun 15, 2021 at 7:01 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> Rather than use size, I'd be inclined to say use this if the source\n>> database is marked as a template, and use the copydir approach for\n>> anything that isn't.\n> Yeah, that is possible, on the other thought wouldn't it be good to\n> provide control to the user by providing two different commands, e.g.\n> COPY DATABASE for the existing method (copydir) and CREATE DATABASE\n> for the new method (fully wal logged)?\n>\n\n\nThis proposal seems to have gotten lost.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 9 Feb 2022 10:55:38 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Feb 9, 2022 at 7:49 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> Honestly, I never understood why the checkpoint during CREATE DATABASE\n> was as problem --- we checkpoint by default every five minutes anyway,\n> so why is an additional two a problem --- it just means the 
next\n> checkpoint will do less work. It is hard to see how avoiding\n> checkpoints to add WAL writes, fscyncs, and replication traffic could be\n> a win.\n\nBut don't you think that the current way of WAL logging the CREATE\nDATABASE is a bit hacky? I mean we are just logically WAL logging the\nsource and destination directory paths without actually WAL logging\nwhat content we want to copy. IMHO this is against the basic\nprinciple of WAL and that's the reason we are forcefully checkpointing\nto avoid replaying that WAL during crash recovery. Even after this\nsome of the code comments say that we have limitations during PITR[1]\nand we want to avoid it sometime in the future.\n\n[1]\n* In PITR replay, the first of these isn't an issue, and the second\n* is only a risk if the CREATE DATABASE and subsequent template\n* database change both occur while a base backup is being taken.\n* There doesn't seem to be much we can do about that except document\n* it as a limitation.\n*\n* Perhaps if we ever implement CREATE DATABASE in a less cheesy way,\n* we can avoid this.\n*/\nRequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_FORCE | CHECKPOINT_WAIT);\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Feb 2022 21:26:23 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Feb 9, 2022 at 9:25 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 6/16/21 03:52, Dilip Kumar wrote:\n> > On Tue, Jun 15, 2021 at 7:01 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> >> Rather than use size, I'd be inclined to say use this if the source\n> >> database is marked as a template, and use the copydir approach for\n> >> anything that isn't.\n> > Yeah, that is possible, on the other thought wouldn't it be good to\n> > provide control to the user by providing two different commands, e.g.\n> > 
COPY DATABASE for the existing method (copydir) and CREATE DATABASE\n> > for the new method (fully wal logged)?\n> >\n>\n>\n> This proposal seems to have gotten lost.\n\nYeah, I am planning to work on this part so that we can support both methods.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Feb 2022 21:28:52 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Feb 9, 2022 at 9:19 AM Bruce Momjian <bruce@momjian.us> wrote:\n> Honestly, I never understood why the checkpoint during CREATE DATABASE\n> was as problem --- we checkpoint by default every five minutes anyway,\n> so why is an additional two a problem --- it just means the next\n> checkpoint will do less work. It is hard to see how avoiding\n> checkpoints to add WAL writes, fscyncs, and replication traffic could be\n> a win.\n\nTry running pgbench with the --progress option and enough concurrent\njobs to keep a moderately large system busy and watching what happens\nto the tps each time a checkpoint occurs. It's extremely dramatic, or\nat least it was the last time I ran such tests. I think that\nperformance will sometimes drop by a factor of five or more when the\ncheckpoint hits, and take multiple minutes to recover.\n\nI think your statement that doing an extra checkpoint \"just means the\nnext checkpoint will do less work\" is kind of misleading. That's\ncertainly true in some situations. But when the same pages are being\ndirtied over and over again, an extra checkpoint often means that the\nsystem will do MUCH MORE work, because every checkpoint triggers a new\nset of full-page writes over the actively-updated portion of the\ndatabase.\n\nI think that very few people run systems with heavy write workloads\nwith checkpoint_timeout=5m, precisely because of this issue. 
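Some back-of-the-envelope arithmetic illustrates the amplification described above (all numbers are invented for illustration, not measurements):

```python
# Rough model of full-page-write amplification: after each checkpoint, the
# first modification of a page logs the whole page (~8 kB) instead of a
# small row-level record. For a hot working set that is re-dirtied in every
# interval, more frequent checkpoints mean many more full-page images.
PAGE_BYTES = 8192     # size of one full-page image
DELTA_BYTES = 100     # assumed size of an ordinary row-level WAL record
hot_pages = 100_000   # pages dirtied in every checkpoint interval
updates_per_page = 100  # updates per hot page over a one-hour window

def wal_bytes(checkpoints_in_window):
    full_page_images = checkpoints_in_window * hot_pages
    deltas = hot_pages * updates_per_page - full_page_images
    return full_page_images * PAGE_BYTES + deltas * DELTA_BYTES

every_5min = wal_bytes(12)   # checkpoint_timeout = 5min -> 12/hour
every_30min = wal_bytes(2)   # checkpoint_timeout = 30min -> 2/hour
print(round(every_5min / every_30min, 2))  # roughly 4x more WAL
```

Under these made-up parameters the 5-minute interval writes around four times as much WAL as the 30-minute interval, which is in line with the kind of drop described above.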
Almost\nevery system I see has had that raised to at least 10m and sometimes\n30m or more. It can make a massive difference.\n\n> I see the patch justification outlined here:\n>\n> https://www.postgresql.org/message-id/CAFiTN-sP6yLVTfjR42mEfvFwJ-SZ2iEtG1t0j=QX09X=BM+KWQ@mail.gmail.com\n>\n> TDE is mentioned as a value for this patch, but I don't see why it is\n> needed --- TDE can easily decrypt/encrypt the pages while they are\n> copied.\n\nThat's true, but depending on what other design decisions we make,\nWAL-logging it might be a problem.\n\nRight now, when someone creates a new database, we log a single record\nthat basically says \"go copy the directory'\". That's very different\nthan what we normally do, which is to log changes to individual pages,\nor where required, small groups of pages (e.g. a single WAL record is\nwritten for an UPDATE even though it may touch two pages). The fact\nthat in this case we only log a single WAL record for an operation\nthat could touch an unbounded amount of data is why this needs special\nhandling around checkpoints. It also introduces a certain amount of\nfragility into the system, because if for some reason the source\ndirectory on the standby doesn't exactly match the source directory on\nthe primary, the new databases won't match either. Any errors that\ncreep into the process can be propagated around to other places by a\nsystem like this. However, ordinarily that doesn't happen, which is\nwhy we've been able to use this system successfully for so many years.\n\nThe other reason we've been able to use this successfully is that\nwe're confident that we can perform exactly the same operation on the\nstandby as we do on the primary knowing only the relevant directory\nnames. If we say \"copy this directory to there\" we believe we'll be\nable to do that exactly the same way on the standby. Is that still\ntrue with TDE? Well, it depends. 
If the encryption can be performed\nknowing only the key and the identity of the block (database OID,\ntablespace OID, relfilenode, fork, block number) then it's true. But\nif the encryption needs to, for example, generate a random nonce for\neach block, then it's false. If you want the standby to be an exact\ncopy of the master in a system where new blocks get random nonces,\nthen you need to replicate the copy block-by-block, not as one\ngigantic operation, so that you can log the nonce you picked for each\nblock. On the other hand, maybe you DON'T want the standby to be an\nexact copy of the master. If, for example, you imagine a system where\nthe master and standby aren't even using the same key, then this is a\nlot less relevant.\n\nI can't predict whether PostgreSQL will get TDE in the future, and if\nit does, I can't predict what form it will take. Therefore any strong\nstatement about whether this will benefit TDE or not seems to me to be\npretty questionable - we don't know that it will be useful, and we\ndon't know that it won't. 
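A toy sketch of that distinction follows (a SHA-256 keystream stands in for real encryption — this is not an actual or proposed TDE design, and every identifier is invented):

```python
import hashlib
import os

def keystream(seed: bytes, n: int) -> bytes:
    # expand a seed into n pseudo-random bytes (toy construction)
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"cluster-key"
block_id = b"db=5 ts=1663 rel=16384 fork=0 blk=7"  # invented block identity
page = b"page contents " * 4

# Deterministic scheme: ciphertext depends only on key + block identity,
# so primary and standby derive identical bytes independently.
ct_primary = xor(page, keystream(key + block_id, len(page)))
ct_standby = xor(page, keystream(key + block_id, len(page)))
print(ct_primary == ct_standby)   # True

# Random-nonce scheme: each write picks a fresh nonce, so independently
# encrypting the same page yields different bytes.
def encrypt_with_nonce(data: bytes):
    nonce = os.urandom(16)
    return nonce, xor(data, keystream(key + nonce, len(data)))

n1, c1 = encrypt_with_nonce(page)
n2, c2 = encrypt_with_nonce(page)
print(c1 == c2)
```

With the deterministic scheme both sides produce identical ciphertext on their own, so a directory-level copy replays correctly; with per-write nonces the standby can only match the primary if each block's nonce is replicated block by block.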
But, like Dilip, I think the way we're\nWAL-logging CREATE DATABASE right now is a hack, and I *know* it can\ncause massive performance drops on busy systems.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Feb 2022 11:00:06 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Feb 9, 2022 at 10:59 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> On Wed, Feb 9, 2022 at 9:25 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> > On 6/16/21 03:52, Dilip Kumar wrote:\n> > > On Tue, Jun 15, 2021 at 7:01 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> > >> Rather than use size, I'd be inclined to say use this if the source\n> > >> database is marked as a template, and use the copydir approach for\n> > >> anything that isn't.\n> > > Yeah, that is possible, on the other thought wouldn't it be good to\n> > > provide control to the user by providing two different commands, e.g.\n> > > COPY DATABASE for the existing method (copydir) and CREATE DATABASE\n> > > for the new method (fully wal logged)?\n> >\n> > This proposal seems to have gotten lost.\n>\n> Yeah, I am planning to work on this part so that we can support both methods.\n\nBut can we pick a different syntax? 
In my view this should be an\noption to CREATE DATABASE rather than a whole new command.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Feb 2022 11:01:40 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "\nOn 2/9/22 10:58, Dilip Kumar wrote:\n> On Wed, Feb 9, 2022 at 9:25 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> On 6/16/21 03:52, Dilip Kumar wrote:\n>>> On Tue, Jun 15, 2021 at 7:01 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>>> Rather than use size, I'd be inclined to say use this if the source\n>>>> database is marked as a template, and use the copydir approach for\n>>>> anything that isn't.\n>>> Yeah, that is possible, on the other thought wouldn't it be good to\n>>> provide control to the user by providing two different commands, e.g.\n>>> COPY DATABASE for the existing method (copydir) and CREATE DATABASE\n>>> for the new method (fully wal logged)?\n>>>\n>>\n>> This proposal seems to have gotten lost.\n> Yeah, I am planning to work on this part so that we can support both methods.\n>\n\nOK, many thanks.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 9 Feb 2022 11:15:11 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Feb 8, 2022 at 11:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Now, one bigger question is can we proceed with this patch without\n> fixing [2], IMHO, if we are deciding to keep the old method also\n> intact then one option could be that for now only change CREATE\n> DATABASE to support both old and new way of creating database and for\n> time being leave the ALTER DATABASE SET TABLESPACE alone and let it\n> work only with the old method? 
And another option is that we first\n> fix the issue related to the tombstone file and then come back to\n> this?\n>\n> IMHO, the first option could be better in a way that we have already\n> made better progress in this patch and this is in better shape than\n> the other patch we are trying to make for removing the tombstone\n> files.\n\nYeah, it's getting quite close to the end of this release cycle. I'm\nnot sure whether we can get anything committed here at all in the time\nwe have remaining, but I agree with you that this patch seems like a\nbetter prospect than that one.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Feb 2022 11:21:55 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Feb 9, 2022 at 11:00:06AM -0500, Robert Haas wrote:\n> Try running pgbench with the --progress option and enough concurrent\n> jobs to keep a moderately large system busy and watching what happens\n> to the tps each time a checkpoint occurs. It's extremely dramatic, or\n> at least it was the last time I ran such tests. I think that\n> performance will sometimes drop by a factor of five or more when the\n> checkpoint hits, and take multiple minutes to recover.\n> \n> I think your statement that doing an extra checkpoint \"just means the\n> next checkpoint will do less work\" is kind of misleading. That's\n> certainly true in some situations. But when the same pages are being\n> dirtied over and over again, an extra checkpoint often means that the\n> system will do MUCH MORE work, because every checkpoint triggers a new\n> set of full-page writes over the actively-updated portion of the\n> database.\n> \n> I think that very few people run systems with heavy write workloads\n> with checkpoint_timeout=5m, precisely because of this issue. 
Almost\n> every system I see has had that raised to at least 10m and sometimes\n> 30m or more. It can make a massive difference.\n\nWell, I think the worst case is that the checkpoint happens exactly\nbetween two checkpoints, so you are checkpointing twice as often, but if\nit happens just before or after a checkpoint, I assume the effect would\nbe minimal.\n\nSo, it seems we are weighing having a checkpoint happen in the middle of\na checkpoint interval vs writing more WAL. If the WAL traffic, without\nCREATE DATABASE, is high, and the template database is small, writing\nmore WAL and skipping the checkpoint will be win, but if the WAL traffic\nis small and the template database is big, the extra WAL will be a loss.\nIs this accurate?\n\n> I can't predict whether PostgreSQL will get TDE in the future, and if\n> it does, I can't predict what form it will take. Therefore any strong\n> statement about whether this will benefit TDE or not seems to me to be\n> pretty questionable - we don't know that it will be useful, and we\n\nAgreed. We would want to have a different heap/index key on the standby\nso we can rotate the heap/index key.\n\n> don't know that it won't. 
But, like Dilip, I think the way we're\n> WAL-logging CREATE DATABASE right now is a hack, and I *know* it can\n\nYes, it is a hack, but it seems to be a clever one that we might have\nchosen if it had not been part of the original system.\n\n> cause massive performance drops on busy systems.\n\nSee above.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 9 Feb 2022 13:34:21 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Feb 9, 2022 at 1:34 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Well, I think the worst case is that the checkpoint happens exactly\n> between two checkpoints, so you are checkpointing twice as often, but if\n> it happens just before or after a checkpoint, I assume the effect would\n> be minimal.\n\nI agree for the most part. I think that if checkpoints happen every 8\nminutes normally and the extra checkpoint happens 2 minutes after the\nprevious checkpoint, the impact may be almost as bad as if it had\nhappened right in the middle. If it happens 5 seconds after the\nprevious checkpoint, it should be low impact.\n\n> So, it seems we are weighing having a checkpoint happen in the middle of\n> a checkpoint interval vs writing more WAL. If the WAL traffic, without\n> CREATE DATABASE, is high, and the template database is small, writing\n> more WAL and skipping the checkpoint will be win, but if the WAL traffic\n> is small and the template database is big, the extra WAL will be a loss.\n> Is this accurate?\n\nI think that's basically correct. I would expect that the worry about\nbig template database is mostly about template databases that are\nREALLY big. I think if your template database is 10GB you probably\nshouldn't be worried about this feature. 
10GB of extra WAL isn't\nnothing, but if you've got reasonably capable hardware, it's not\noverloaded, and max_wal_size is big enough, it's probably not going to\nhave a huge impact. Also, most of the impact will probably be on the\nCREATE DATABASE command itself, and other things running on the system\nat the same time will be impacted to a lesser degree. I think it's\neven possible that you will be happier with this feature than without,\nbecause you may like the idea that CREATE DATABASE itself is slow more\nthan you like the idea of it making everything else on the system\nslow. On the other hand, if your template database is 1TB, the extra\nWAL is probably going to be a fairly big problem.\n\nBasically I think for most people this should be neutral or a win. For\npeople with really large template databases, it's a loss. Hence the\ndiscussion about having a way for people who prefer the current\nbehavior to keep it.\n\n> Agreed. We would want to have a different heap/index key on the standby\n> so we can rotate the heap/index key.\n\nI don't like that design, and I don't think that's what we should do,\nbut I understand that you feel differently. IMHO, this thread is not\nthe place to hash that out.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Feb 2022 14:30:08 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Feb 09, 2022 at 02:30:08PM -0500, Robert Haas wrote:\n> On Wed, Feb 9, 2022 at 1:34 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Well, I think the worst case is that the checkpoint happens exactly\n> > between two checkpoints, so you are checkpointing twice as often, but if\n> > it happens just before or after a checkpoint, I assume the effect would\n> > be minimal.\n> \n> I agree for the most part. 
I think that if checkpoints happen every 8\n> minutes normally and the extra checkpoint happens 2 minutes after the\n> previous checkpoint, the impact may be almost as bad as if it had\n> happened right in the middle. If it happens 5 seconds after the\n> previous checkpoint, it should be low impact.\n\nBut the extra checkpoints will be immediate, while on a properly configured\nsystem it should be spread checkpoint. That will add some more overhead.\n\n> > So, it seems we are weighing having a checkpoint happen in the middle of\n> > a checkpoint interval vs writing more WAL. If the WAL traffic, without\n> > CREATE DATABASE, is high, and the template database is small, writing\n> > more WAL and skipping the checkpoint will be win, but if the WAL traffic\n> > is small and the template database is big, the extra WAL will be a loss.\n> > Is this accurate?\n> \n> I think that's basically correct. I would expect that the worry about\n> big template database is mostly about template databases that are\n> REALLY big. I think if your template database is 10GB you probably\n> shouldn't be worried about this feature. 10GB of extra WAL isn't\n> nothing, but if you've got reasonably capable hardware, it's not\n> overloaded, and max_wal_size is big enough, it's probably not going to\n> have a huge impact. Also, most of the impact will probably be on the\n> CREATE DATABASE command itself, and other things running on the system\n> at the same time will be impacted to a lesser degree. I think it's\n> even possible that you will be happier with this feature than without,\n> because you may like the idea that CREATE DATABASE itself is slow more\n> than you like the idea of it making everything else on the system\n> slow. On the other hand, if your template database is 1TB, the extra\n> WAL is probably going to be a fairly big problem.\n> \n> Basically I think for most people this should be neutral or a win. For\n> people with really large template databases, it's a loss. 
Hence the\n> discussion about having a way for people who prefer the current\n> behavior to keep it.\n\nThose extra WALs will also impact backups and replication. You could have\nfancy hardware, a read-mostly workload and the need to replicate over a slow\nWAN, and in that case the 10GB could be much more problematic.\n\n\n", "msg_date": "Thu, 10 Feb 2022 15:52:28 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Feb 9, 2022 at 9:31 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Feb 9, 2022 at 10:59 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > On Wed, Feb 9, 2022 at 9:25 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> > > On 6/16/21 03:52, Dilip Kumar wrote:\n> > > > On Tue, Jun 15, 2021 at 7:01 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> > > >> Rather than use size, I'd be inclined to say use this if the source\n> > > >> database is marked as a template, and use the copydir approach for\n> > > >> anything that isn't.\n> > > > Yeah, that is possible, on the other thought wouldn't it be good to\n> > > > provide control to the user by providing two different commands, e.g.\n> > > > COPY DATABASE for the existing method (copydir) and CREATE DATABASE\n> > > > for the new method (fully wal logged)?\n> > >\n> > > This proposal seems to have gotten lost.\n> >\n> > Yeah, I am planning to work on this part so that we can support both methods.\n>\n> But can we pick a different syntax? In my view this should be an\n> option to CREATE DATABASE rather than a whole new command.\n\nMaybe we can provide something like\n\nCREATE DATABASE..WITH WAL_LOG=true/false ? OR\nCREATE DATABASE..WITH WAL_LOG_DATA_PAGE=true/false ? OR\nCREATE DATABASE..WITH CHECKPOINT=true/false ? OR\n\nAnd then we can explain in documentation about these options? 
I think\ndefault should be new method?\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Feb 2022 18:02:31 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Feb 10, 2022 at 2:52 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> Those extra WALs will also impact backups and replication. You could have\n> fancy hardware, a read-mostly workload and the need to replicate over a slow\n> WAN, and in that case the 10GB could be much more problematic.\n\nTrue, I guess, but how bad does your WAN have to be for that to be an\nissue? On a 1 gigabit/second link, that's a little over 2 minutes of\ntransfer time. That's not nothing, but it's not extreme, either,\nespecially because there's no sense in querying an empty database.\nYou're going to have to put some stuff in that database before you can\ndo anything meaningful with it, and that's going to have to be\nreplicated with or without this feature.\n\nI am not saying it couldn't be a problem, and that's why I'm endorsing\nmaking the behavior optional. But I think that it's a niche scenario.\nYou need a bigger-than-normal template database, a slow WAN link, AND\nyou need the amount of data loaded into the databases you create from\nthe template to be small enough to make the cost of logging the\ntemplate pages material. If you create a 10GB database from a template\nand then load 200GB of data into it, the WAL-logging overhead of\ncreating the template is only 5%.\n\nI won't really be surprised if we hear that someone has a 10GB\ntemplate database and likes to make a ton of copies and only change\n500 rows in each one while replicating the whole thing over a slow\nWAN. That can definitely happen, and I'm sure whoever is doing that\nhas reasons for it which they consider good and sufficient. 
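As a back-of-envelope check of that 10GB example (the sizes and link speed are just the assumed figures from above, not measurements):

```python
# Rough arithmetic for the example: 10GB template, 200GB loaded afterwards,
# replicated over a 1 gigabit/second link.  All figures are assumptions.
template_gb = 10.0
loaded_gb = 200.0
link_gbit_s = 1.0

# Extra WAL from WAL-logging the copy is roughly the template size, so as a
# share of everything replicated for this database:
wal_overhead = template_gb / (template_gb + loaded_gb)   # ~5%

# Raw transfer time for the extra 10GB over the link, in seconds.  Record
# headers and real-world link throughput push the actual figure higher.
raw_transfer_s = template_gb * 8 / link_gbit_s           # 80 seconds raw
```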
However, I\ndon't think there are likely to be a ton of people doing stuff like\nthat - just a few.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Feb 2022 10:32:42 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-02-10 10:32:42 -0500, Robert Haas wrote:\n> I won't really be surprised if we hear that someone has a 10GB\n> template database and likes to make a ton of copies and only change\n> 500 rows in each one while replicating the whole thing over a slow\n> WAN. That can definitely happen, and I'm sure whoever is doing that\n> has reasons for it which they consider good and sufficient. However, I\n> don't think there are likely to be a ton of people doing stuff like\n> that - just a few.\n\nYea. I would be a bit more concerned if we made creating template databases\nvery cheap, e.g. by using file copy-on-write functionality like we have for\npg_upgrade. 
But right now it's a fairly hefty operation anyway.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 10 Feb 2022 15:10:48 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "\nOn 2/10/22 07:32, Dilip Kumar wrote:\n> On Wed, Feb 9, 2022 at 9:31 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> On Wed, Feb 9, 2022 at 10:59 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>> On Wed, Feb 9, 2022 at 9:25 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>>> On 6/16/21 03:52, Dilip Kumar wrote:\n>>>>> On Tue, Jun 15, 2021 at 7:01 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>>>>> Rather than use size, I'd be inclined to say use this if the source\n>>>>>> database is marked as a template, and use the copydir approach for\n>>>>>> anything that isn't.\n>>>>> Yeah, that is possible, on the other thought wouldn't it be good to\n>>>>> provide control to the user by providing two different commands, e.g.\n>>>>> COPY DATABASE for the existing method (copydir) and CREATE DATABASE\n>>>>> for the new method (fully wal logged)?\n>>>> This proposal seems to have gotten lost.\n>>> Yeah, I am planning to work on this part so that we can support both methods.\n>> But can we pick a different syntax? In my view this should be an\n>> option to CREATE DATABASE rather than a whole new command.\n> Maybe we can provide something like\n>\n> CREATE DATABASE..WITH WAL_LOG=true/false ? OR\n> CREATE DATABASE..WITH WAL_LOG_DATA_PAGE=true/false ? OR\n> CREATE DATABASE..WITH CHECKPOINT=true/false ? OR\n>\n> And then we can explain in documentation about these options? I think\n> default should be new method?\n>\n>\n\nThe last one at least has the advantage that it doesn't invent yet\nanother keyword.\n\nI can live with the new method being the default. 
I'm sure it would be\nhighlighted in the release notes too.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 11 Feb 2022 12:11:40 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Feb 11, 2022 at 12:11 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> The last one at least has the advantage that it doesn't invent yet\n> another keyword.\n\nWe don't need a new keyword for this as long as it lexes as one token,\nbecause createdb_opt_name accepts IDENT. So I think we should focus on\ntrying to come up with something that is as clear as we know how to\nmake it.\n\nWhat I find difficult about doing that is that this is all a bunch of\ntechnical details that users may have difficulty understanding. If we\nsay WAL_LOG or WAL_LOG_DATA, a reasonably but not incredibly\nwell-informed user will assume that skipping WAL is not really an\noption. If we say CHECKPOINT, a reasonably but not incredibly\nwell-informed user will presume they don't want one (I think).\nCHECKPOINT also seems like it's naming the switch by the unwanted side\neffect, which doesn't seem too flattering to the existing method.\n\nHow about something like LOG_AS_CLONE? That makes it clear, I hope,\nthat we're logging it a different way, but that method of logging it\nis different in each case. You'd still have to read the documentation\nto find out what it really means, but at least it seems like it points\nyou more in the right direction. To me, anyway.\n\n> I can live with the new method being the default. 
I'm sure it would be\n> highlighted in the release notes too.\n\nThat would make sense.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Feb 2022 12:35:50 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Feb 11, 2022 at 12:35:50PM -0500, Robert Haas wrote:\n> How about something like LOG_AS_CLONE? That makes it clear, I hope,\n> that we're logging it a different way, but that method of logging it\n> is different in each case. You'd still have to read the documentation\n> to find out what it really means, but at least it seems like it points\n> you more in the right direction. To me, anyway.\n\nI think CLONE would be confusing since we don't use that term often,\nmaybe LOG_DB_COPY or LOG_FILE_COPY?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 11 Feb 2022 12:50:49 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Feb 11, 2022 at 12:50 PM Bruce Momjian <bruce@momjian.us> wrote:\n> On Fri, Feb 11, 2022 at 12:35:50PM -0500, Robert Haas wrote:\n> > How about something like LOG_AS_CLONE? That makes it clear, I hope,\n> > that we're logging it a different way, but that method of logging it\n> > is different in each case. You'd still have to read the documentation\n> > to find out what it really means, but at least it seems like it points\n> > you more in the right direction. To me, anyway.\n>\n> I think CLONE would be confusing since we don't use that term often,\n> maybe LOG_DB_COPY or LOG_FILE_COPY?\n\nYeah, maybe. But it's not clear to me with that kind of naming whether\nTRUE or FALSE would be the existing behavior? 
One version logs a\nsingle record for the whole database, and the other logs a record per\ndatabase block. Neither version logs per file. LOG_COPIED_BLOCKS,\nmaybe?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Feb 2022 13:18:58 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Feb 11, 2022 at 01:18:58PM -0500, Robert Haas wrote:\n> On Fri, Feb 11, 2022 at 12:50 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > On Fri, Feb 11, 2022 at 12:35:50PM -0500, Robert Haas wrote:\n> > > How about something like LOG_AS_CLONE? That makes it clear, I hope,\n> > > that we're logging it a different way, but that method of logging it\n> > > is different in each case. You'd still have to read the documentation\n> > > to find out what it really means, but at least it seems like it points\n> > > you more in the right direction. To me, anyway.\n> >\n> > I think CLONE would be confusing since we don't use that term often,\n> > maybe LOG_DB_COPY or LOG_FILE_COPY?\n> \n> Yeah, maybe. But it's not clear to me with that kind of naming whether\n> TRUE or FALSE would be the existing behavior? One version logs a\n> single record for the whole database, and the other logs a record per\n> database block. Neither version logs per file. 
LOG_COPIED_BLOCKS,\n> maybe?\n\nYes, I like BLOCKS more than FILE.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 11 Feb 2022 13:32:46 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "\nOn 2/11/22 13:32, Bruce Momjian wrote:\n> On Fri, Feb 11, 2022 at 01:18:58PM -0500, Robert Haas wrote:\n>> On Fri, Feb 11, 2022 at 12:50 PM Bruce Momjian <bruce@momjian.us> wrote:\n>>> On Fri, Feb 11, 2022 at 12:35:50PM -0500, Robert Haas wrote:\n>>>> How about something like LOG_AS_CLONE? That makes it clear, I hope,\n>>>> that we're logging it a different way, but that method of logging it\n>>>> is different in each case. You'd still have to read the documentation\n>>>> to find out what it really means, but at least it seems like it points\n>>>> you more in the right direction. To me, anyway.\n>>> I think CLONE would be confusing since we don't use that term often,\n>>> maybe LOG_DB_COPY or LOG_FILE_COPY?\n>> Yeah, maybe. But it's not clear to me with that kind of naming whether\n>> TRUE or FALSE would be the existing behavior? One version logs a\n>> single record for the whole database, and the other logs a record per\n>> database block. Neither version logs per file. 
LOG_COPIED_BLOCKS,\n>> maybe?\n> Yes, I like BLOCKS more than FILE.\n\n\nI'm not really sure any single parameter name is going to capture the\nsubtlety involved here.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 11 Feb 2022 15:40:15 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Feb 11, 2022 at 3:40 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> I'm not really sure any single parameter name is going to capture the\n> subtlety involved here.\n\nI mean to some extent that's inevitable, but it's not a reason not to\ndo the best we can.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Feb 2022 15:47:44 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Feb 11, 2022 at 1:32 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Yeah, maybe. But it's not clear to me with that kind of naming whether\n> > TRUE or FALSE would be the existing behavior? One version logs a\n> > single record for the whole database, and the other logs a record per\n> > database block. Neither version logs per file. LOG_COPIED_BLOCKS,\n> > maybe?\n>\n> Yes, I like BLOCKS more than FILE.\n\nCool.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Feb 2022 15:48:01 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On 2022-Feb-11, Robert Haas wrote:\n\n> What I find difficult about doing that is that this is all a bunch of\n> technical details that users may have difficulty understanding. 
If we\n> say WAL_LOG or WAL_LOG_DATA, a reasonably but not incredibly\n> well-informed user will assume that skipping WAL is not really an\n> option. If we say CHECKPOINT, a reasonably but not incredibly\n> well-informed user will presume they don't want one (I think).\n> CHECKPOINT also seems like it's naming the switch by the unwanted side\n> effect, which doesn't seem too flattering to the existing method.\n\nIt seems you're thinking deciding what to do based on an option that\ngets a boolean argument. But what about making the argument be an enum?\nFor example\n\nCREATE DATABASE ... WITH (STRATEGY = LOG);\t-- default if option is omitted\nCREATE DATABASE ... WITH (STRATEGY = CHECKPOINT);\n\nSo the user has to think about it in terms of some strategy to choose,\nrather than enabling or disabling some flag with nontrivial\nimplications.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"[PostgreSQL] is a great group; in my opinion it is THE best open source\ndevelopment communities in existence anywhere.\" (Lamar Owen)\n\n\n", "msg_date": "Fri, 11 Feb 2022 18:08:31 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "\nOn 2/11/22 15:47, Robert Haas wrote:\n> On Fri, Feb 11, 2022 at 3:40 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> I'm not really sure any single parameter name is going to capture the\n>> subtlety involved here.\n> I mean to some extent that's inevitable, but it's not a reason not to\n> do the best we can.\n\n\nTrue.\n\nI do think we should be wary of any name starting with \"LOG\", though.\nLong experience tells us that's something that confuses users when it\nrefers to the WAL.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 11 Feb 2022 16:11:18 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", 
"msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Feb 11, 2022 at 4:08 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> It seems you're thinking deciding what to do based on an option that\n> gets a boolean argument. But what about making the argument be an enum?\n> For example\n>\n> CREATE DATABASE ... WITH (STRATEGY = LOG); -- default if option is omitted\n> CREATE DATABASE ... WITH (STRATEGY = CHECKPOINT);\n>\n> So the user has to think about it in terms of some strategy to choose,\n> rather than enabling or disabling some flag with nontrivial\n> implications.\n\nI don't like those particular strategy names very much, but in general\nI think that could be a way to go, too. I somewhat hope we never end\nup with THREE strategies for creating a new database, but now that I\nthink about it, we might. Somebody might want to use a fancy FS\nprimitive that clones a directory at the FS level, or something.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Feb 2022 16:19:12 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-02-11 16:19:12 -0500, Robert Haas wrote:\n> I somewhat hope we never end up with THREE strategies for creating a new\n> database, but now that I think about it, we might. Somebody might want to\n> use a fancy FS primitive that clones a directory at the FS level, or\n> something.\n\nI think that'd be a great, and pretty easy to implement, feature. But it seems\nlike it'd be mostly orthogonal to the \"WAL log data\" vs \"checkpoint data\"\nquestion? 
On the primary / single node system using \"WAL log data\" with \"COW\nfile copy\" would work well.\n\nI bet using COW file copies would speed up our own regression tests noticeably\n- on slower systems we spend a fair bit of time and space creating template0\nand postgres, with the bulk of the data never changing.\n\nTemplate databases are also fairly commonly used by application developers to\navoid the cost of rerunning all the setup DDL & initial data loading for\ndifferent tests. Making that measurably cheaper would be a significant win.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 12 Feb 2022 18:00:44 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Sat, Feb 12, 2022 at 06:00:44PM -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2022-02-11 16:19:12 -0500, Robert Haas wrote:\n> > I somewhat hope we never end up with THREE strategies for creating a new\n> > database, but now that I think about it, we might. Somebody might want to\n> > use a fancy FS primitive that clones a directory at the FS level, or\n> > something.\n> \n> I think that'd be a great, and pretty easy to implement, feature. But it seems\n> like it'd be mostly orthogonal to the \"WAL log data\" vs \"checkpoint data\"\n> question? On the primary / single node system using \"WAL log data\" with \"COW\n> file copy\" would work well.\n> \n> I bet using COW file copies would speed up our own regression tests noticeably\n> - on slower systems we spend a fair bit of time and space creating template0\n> and postgres, with the bulk of the data never changing.\n> \n> Template databases are also fairly commonly used by application developers to\n> avoid the cost of rerunning all the setup DDL & initial data loading for\n> different tests. 
Making that measurably cheaper would be a significant win.\n\n+1\n\nI ran into this last week and was still thinking about proposing it.\n\nWould this help CI or any significant fraction of buildfarm ?\nOr just tests run locally on supporting filesystems.\n\nNote that pg_upgrade already supports copy/link/clone. (Obviously, link\nwouldn't do anything desirable for CREATE DATABASE).\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 12 Feb 2022 20:17:46 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Sat, Feb 12, 2022 at 2:38 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Feb-11, Robert Haas wrote:\n>\n> > What I find difficult about doing that is that this is all a bunch of\n> > technical details that users may have difficulty understanding. If we\n> > say WAL_LOG or WAL_LOG_DATA, a reasonably but not incredibly\n> > well-informed user will assume that skipping WAL is not really an\n> > option. If we say CHECKPOINT, a reasonably but not incredibly\n> > well-informed user will presume they don't want one (I think).\n> > CHECKPOINT also seems like it's naming the switch by the unwanted side\n> > effect, which doesn't seem too flattering to the existing method.\n>\n> It seems you're thinking deciding what to do based on an option that\n> gets a boolean argument. But what about making the argument be an enum?\n> For example\n>\n> CREATE DATABASE ... WITH (STRATEGY = LOG); -- default if option is omitted\n> CREATE DATABASE ... WITH (STRATEGY = CHECKPOINT);\n>\n> So the user has to think about it in terms of some strategy to choose,\n> rather than enabling or disabling some flag with nontrivial\n> implications.\n\n\nYeah I think being explicit about giving the strategy to the user\nlooks like a better option. Now they can choose whether they want it\nto create using WAL log or using CHECKPOINT. 
Otherwise, if we give a\nflag then we will have to give an explanation that if they choose not\nto WAL log then we will have to do a checkpoint internally. So I\nthink giving LOG vs CHECKPOINT as an explicit option looks better to\nme.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 13 Feb 2022 10:12:38 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Sun, Feb 13, 2022 at 10:12 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n\nI have done performance testing with different template DB sizes and\ndifferent amounts of dirty shared buffers and I think as expected the\nbigger the dirty shared buffer the checkpoint approach becomes costly\nand OTOH the larger the template DB size the WAL log approach takes\nmore time.\n\nI think it is very common to have larger shared buffers and of course,\nif somebody has configured such a large shared buffer then a good % of\nit will be dirty most of the time. So IMHO in the future, the WAL log\napproach is going to be more usable in general. However, this is just\nmy opinion, and others may have completely different thoughts and\nanyhow we are keeping options for both the approaches so no worry.\n\nNext, I am planning to do some more tests, where we are having pgbench\nrunning and concurrently we do CREATEDB maybe every 1 minute and see\nwhat is the CREATEDB time as well as what is the impact on pgbench\nperformance. 
Because currently I have only measured CREATEDB time but\nwe must be knowing the impact of createdb on the other system as well.\n\nTest setup:\nmax_wal_size=64GB\ncheckpoint_timeout=15min\n- CREATE base TABLE of size of Shared Buffers\n- CREATE template database and table in it of varying sizes (as per test)\n- CHECKPOINT (write out dirty buffers)\n- UPDATE 70% of tuple in base table (dirty 70% of shared buffers)\n- CREATE database using template db. (Actual test target)\n\ntest1:\n1 GB shared buffers, template DB size = 6MB, dirty shared buffer=70%\nHead: 2341.665 ms\nPatch: 85.229 ms\n\ntest2:\n1 GB shared buffers, template DB size = 1GB, dirty shared buffer=70%\nHead: 4044 ms\nPatch: 8376 ms\n\ntest3:\n8 GB shared buffers, template DB size = 1GB, dirty shared buffer=70%\nHead: 21398 ms\nPatch: 9834 ms\n\ntest4:\n8 GB shared buffers, template DB size = 10GB, dirty shared buffer=95%\nHead: 38574 ms\nPatch: 77160 ms\n\ntest4:\n32 GB shared buffers, template DB size = 10GB, dirty shared buffer=70%\nHead: 47656 ms\nPatch: 79767 ms\n\ntest5:\n64 GB shared buffers, template DB size = 1GB, dirty shared buffer=70%\nHead: 59151 ms\nPatch: 8742 ms\n\ntest6:\n64 GB shared buffers, template DB size = 50GB, dirty shared buffer=50%\nHead: 171614 ms\nPatch: 406040 ms\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 13 Feb 2022 12:04:16 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Sun, Feb 13, 2022 at 1:34 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> test4:\n> 32 GB shared buffers, template DB size = 10GB, dirty shared buffer=70%\n> Head: 47656 ms\n> Patch: 79767 ms\n\nThis seems like the most surprising result of the bunch. 
Here, the\ntemplate DB is both small enough to fit in shared_buffers and small\nenough not to trigger a checkpoint all by itself, and yet the patch\nloses.\n\nDid you checkpoint between one test and the next, or might this test\nhave been done after a bunch of WAL had already been written since the\nlast checkpoint so that the 10GB pushed it over the edge?\n\nBTW, you have test4 twice in your list of results.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 13 Feb 2022 11:26:07 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Sun, Feb 13, 2022 at 9:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sun, Feb 13, 2022 at 1:34 AM Dilip Kumar <dilipbalaut@gmail.com> wrot>\n> > test4:\n> > 32 GB shared buffers, template DB size = 10GB, dirty shared buffer=70%\n> > Head: 47656 ms\n> > Patch: 79767 ms\n>\n> This seems like the most surprising result of the bunch. Here, the\n> template DB is both small enough to fit in shared_buffers and small\n> enough not to trigger a checkpoint all by itself, and yet the patch\n> loses.\n\nWell this is not really surprising to me because what I have noticed\nis that with the new approach the createdb time is completely\ndependent upon the template db size. So if the source db size is 10GB\nit is taking around 80sec and the shared buffers size does not have a\nmajor impact. 
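(To put rough numbers on that, here is a straight-line fit over the timings I posted upthread; plain arithmetic, only approximate, not a model of the actual code:)

```python
# createdb times (ms) from the runs posted upthread (70% dirty buffers).
# WAL_LOG (patch) cost grows with template size:
#   1GB template -> 8376 ms, 10GB template -> 79767 ms
patch_ms_per_template_gb = (79767 - 8376) / (10 - 1)

# FILE_COPY (head) cost grows with the amount of dirty shared buffers:
#   1GB shared buffers -> 4044 ms, 64GB shared buffers -> 59151 ms
head_ms_per_dirty_gb = (59151 - 4044) / (0.7 * (64 - 1))

print(f"patch: ~{patch_ms_per_template_gb:.0f} ms per GB of template")
print(f"head:  ~{head_ms_per_dirty_gb:.0f} ms per dirty GB of shared buffers")
```

(Extrapolating from those few points, with a mostly-dirty 32GB buffer pool the two strategies would cross over somewhere around a 5-6GB template, but that is eyeballing, nothing more.)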
Maybe a very small shared buffer can have more impact\nso I will test that as well.\n\n>\n> Did you checkpoint between one test and the next, or might this test\n> have been done after a bunch of WAL had already been written since the\n> last checkpoint so that the 10GB pushed it over the edge?\n\nNot really, I am running each test with a new initdb so that could\nnot be an issue.\n\n> BTW, you have test4 twice in your list of results.\n\nMy bad, those are different tests.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Feb 2022 10:31:17 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Sun, Feb 13, 2022 at 12:04 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sun, Feb 13, 2022 at 10:12 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> Next, I am planning to do some more tests, where we are having pgbench\n> running and concurrently we do CREATEDB maybe every 1 minute and see\n> what is the CREATEDB time as well as what is the impact on pgbench\n> performance. Because currently I have only measured CREATEDB time but\n> we must be knowing the impact of createdb on the other system as well.\n\nI have done tests with the pgbench as well. So basically I did not\nnotice any significant difference in the TPS, I was expecting there\nshould be some difference due to the checkpoint on the head so maybe I\nneed to test with more backend maybe. And createdb time there is a\nhuge difference. 
I think this is because template1 db is very small so\npatch is getting completed in no time whereas head is taking huge time\nbecause of high dirty shared buffers (due to concurrent pgbench).\n\nconfig:\necho \"logging_collector=on\" >> data/postgresql.conf\necho \"port = 5432\" >> data/postgresql.conf\necho \"max_wal_size=64GB\" >> data/postgresql.conf\necho \"checkpoint_timeout=15min\" >> data/postgresql.conf\necho \"shared_buffers=32GB\" >> data/postgresql.conf\n\nTest:\n./pgbench -i -s 1000 postgres\n./pgbench -c 32 -j 32 -T 1200 -M prepared postgres >> result.txt\n-- Concurrently run below script every 1 mins\nCREATE DATABASE mydb log_copied_blocks=true/false;\n\nResults:\n- Pgbench TPS: Did not observe any difference head vs patch\n- Create db time(very small template):\nhead: 21000 ms to 42000 ms (at different time)\npatch: 80 ms\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Feb 2022 10:43:43 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Feb 14, 2022 at 10:31 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sun, Feb 13, 2022 at 9:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Sun, Feb 13, 2022 at 1:34 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > test4:\n> > > 32 GB shared buffers, template DB size = 10GB, dirty shared buffer=70%\n> > > Head: 47656 ms\n> > > Patch: 79767 ms\n> >\n> > This seems like the most surprising result of the bunch. Here, the\n> > template DB is both small enough to fit in shared_buffers and small\n> > enough not to trigger a checkpoint all by itself, and yet the patch\n> > loses.\n>\n> Well this is not really surprising to me because what I have noticed\n> is that with the new approach the createdb time is completely\n> dependent upon the template db size. 
So if the source db size is 10GB\n> it is taking around 80sec and the shared buffers size does not have a\n> major impact. Maybe a very small shared buffer can have more impact\n> so I will test that as well.\n\nI have done some more experiments just to understand where we are\nspending most of the time. First I have tried with synchronous commit\nand fsync off and the creation time dropped from 80s to 70s then I\njust removed the log_newpage then time further dropped to 50s. I have\nalso tried with different shared buffer sizes and observed that\nreducing or increasing the shared buffer size does not have much\nimpact on the created db with the new approach.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Feb 2022 15:49:55 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Feb 14, 2022 at 12:01 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Well this is not really surprising to me because what I have noticed\n> is that with the new approach the createdb time is completely\n> dependent upon the template db size. So if the source db size is 10GB\n> it is taking around 80sec and the shared buffers size does not have a\n> major impact. Maybe a very small shared buffer can have more impact\n> so I will test that as well.\n\nOK. Well, then this approach is somewhat worse than I expected for\nmoderately large template databases. 
But it seems very good for small\ntemplate databases, especially when there is other work in progress on\nthe system.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Feb 2022 08:53:32 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi Dilip,\n\nOn Sun, Feb 13, 2022 at 12:04 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sun, Feb 13, 2022 at 10:12 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n>\n> I have done performance testing with different template DB sizes and\n> different amounts of dirty shared buffers and I think as expected the\n> bigger the dirty shared buffer the checkpoint approach becomes costly\n> and OTOH the larger the template DB size the WAL log approach takes\n> more time.\n>\n> I think it is very common to have larger shared buffers and of course,\n> if somebody has configured such a large shared buffer then a good % of\n> it will be dirty most of the time. So IMHO in the future, the WAL log\n> approach is going to be more usable in general. However, this is just\n> my opinion, and others may have completely different thoughts and\n> anyhow we are keeping options for both the approaches so no worry.\n>\n> Next, I am planning to do some more tests, where we are having pgbench\n> running and concurrently we do CREATEDB maybe every 1 minute and see\n> what is the CREATEDB time as well as what is the impact on pgbench\n> performance. 
Because currently I have only measured CREATEDB time but\n> we must be knowing the impact of createdb on the other system as well.\n>\n> Test setup:\n> max_wal_size=64GB\n> checkpoint_timeout=15min\n> - CREATE base TABLE of size of Shared Buffers\n> - CREATE template database and table in it of varying sizes (as per test)\n> - CHECKPOINT (write out dirty buffers)\n> - UPDATE 70% of tuple in base table (dirty 70% of shared buffers)\n> - CREATE database using template db. (Actual test target)\n>\n> test1:\n> 1 GB shared buffers, template DB size = 6MB, dirty shared buffer=70%\n> Head: 2341.665 ms\n> Patch: 85.229 ms\n>\n> test2:\n> 1 GB shared buffers, template DB size = 1GB, dirty shared buffer=70%\n> Head: 4044 ms\n> Patch: 8376 ms\n>\n> test3:\n> 8 GB shared buffers, template DB size = 1GB, dirty shared buffer=70%\n> Head: 21398 ms\n> Patch: 9834 ms\n>\n> test4:\n> 8 GB shared buffers, template DB size = 10GB, dirty shared buffer=95%\n> Head: 38574 ms\n> Patch: 77160 ms\n>\n> test4:\n> 32 GB shared buffers, template DB size = 10GB, dirty shared buffer=70%\n> Head: 47656 ms\n> Patch: 79767 ms\n>\n\nIs it possible to see the WAL size generated by these two statements:\nUPDATE 70% of the tuple in the base table (dirty 70% of the shared\nbuffers) && CREATE database using template DB (Actual test target).\nJust wanted to know if it can exceed the max_wal_size of 64GB. Also,\nis it possible to try with minimal wal_level? 
Sorry for asking you\nthis, I could try it myself but I don't have any high level system to\ntry it.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Mon, 14 Feb 2022 21:17:47 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Feb 14, 2022 at 9:17 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n>\n> Is it possible to see the WAL size generated by these two statements:\n> UPDATE 70% of the tuple in the base table (dirty 70% of the shared\n> buffers) && CREATE database using template DB (Actual test target).\n> Just wanted to know if it can exceed the max_wal_size of 64GB.\n\nI think we already know the wal size generated by creating a db with\nan old and new approach. With the old approach it is just one WAL log\nand with the new approach it is going to log every single block of the\ndatabase. Yeah the updating 70% of the database could have some\nimpact but for verification purposes I tested without the update and\nstill the create db with WAL log is taking almost the same time. But\nanyway when I test next time I will verify again that no force\ncheckpoint is triggered.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Feb 2022 21:30:43 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Sun, Feb 13, 2022 at 10:12 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sat, Feb 12, 2022 at 2:38 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > It seems you're thinking deciding what to do based on an option that\n> > gets a boolean argument. But what about making the argument be an enum?\n> > For example\n> >\n> > CREATE DATABASE ... WITH (STRATEGY = LOG); -- default if option is omitted\n> > CREATE DATABASE ... 
WITH (STRATEGY = CHECKPOINT);\n> >\n> > So the user has to think about it in terms of some strategy to choose,\n> > rather than enabling or disabling some flag with nontrivial\n> > implications.\n>\n>\n> Yeah I think being explicit about giving the strategy to the user\n> looks like a better option. Now they can choose whether they want it\n> to create using WAL log or using CHECKPOINT. Otherwise, if we give a\n> flag then we will have to give an explanation that if they choose not\n> to WAL log then we will have to do a checkpoint internally. So I\n> think giving LOG vs CHECKPOINT as an explicit option looks better to\n> me.\n\nSo do we have consensus to use (STRATEGY = LOG/CHECKPOINT or do we\nthink that keeping it bool i.e. Is LOG_COPIED_BLOCKS a better option?\nOnce we have consensus on this I will make this change and\ndocumentation as well along with the other changes suggested by\nRobert.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Feb 2022 21:55:51 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Feb 14, 2022 at 11:26 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> So do we have consensus to use (STRATEGY = LOG/CHECKPOINT or do we\n> think that keeping it bool i.e. Is LOG_COPIED_BLOCKS a better option?\n> Once we have consensus on this I will make this change and\n> documentation as well along with the other changes suggested by\n> Robert.\n\nI think we have consensus on STRATEGY. I'm not sure if we have\nconsensus on what the option values should be. 
If we had an option to\nuse fs-based cloning, that would also need to issue a checkpoint,\nwhich makes me think that CHECKPOINT is not the best name.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Feb 2022 12:27:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Feb 14, 2022 at 12:27:10PM -0500, Robert Haas wrote:\n> On Mon, Feb 14, 2022 at 11:26 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > So do we have consensus to use (STRATEGY = LOG/CHECKPOINT or do we\n> > think that keeping it bool i.e. Is LOG_COPIED_BLOCKS a better option?\n> > Once we have consensus on this I will make this change and\n> > documentation as well along with the other changes suggested by\n> > Robert.\n> \n> I think we have consensus on STRATEGY. I'm not sure if we have\n> consensus on what the option values should be. If we had an option to\n> use fs-based cloning, that would also need to issue a checkpoint,\n> which makes me think that CHECKPOINT is not the best name.\n\nI think if we want LOG, it has to be WAL_LOG instead of just LOG. Was\nthere discussion that the user _has_ to specify an option instead of\nusing a default? That doesn't seem good.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 14 Feb 2022 13:58:39 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Feb 14, 2022 at 1:58 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > I think we have consensus on STRATEGY. I'm not sure if we have\n> > consensus on what the option values should be. 
If we had an option to\n> > use fs-based cloning, that would also need to issue a checkpoint,\n> > which makes me think that CHECKPOINT is not the best name.\n>\n> I think if we want LOG, it has to be WAL_LOG instead of just LOG. Was\n> there discussion that the user _has_ to specify an option instead of\n> using a default? That doesn't seem good.\n\nI agree. I think we can set a default, which can be either whatever we\nthink will be best on average, or maybe it can be conditional based on\nthe database size or something.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Feb 2022 15:05:18 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Andrew made a good case above for avoiding LOG:\n\n>I do think we should be wary of any name starting with \"LOG\", though.\n>Long experience tells us that's something that confuses users when it\nrefers to the WAL.", "msg_date": "Mon, 14 Feb 2022 12:31:16 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Feb 15, 2022 at 2:01 AM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n>\nHere is the updated version of the patch, the changes are 1) Fixed\nreview comments given by Robert and one open comment from Ashutosh.\n2) Preserved the old create db method. 3) As agreed upthread for now\nwe are using the new strategy only for createdb not for movedb so I\nhave removed the changes in ForgetDatabaseSyncRequests() and\nDropDatabaseBuffers(). 
4) Provided a database creation strategy\noption as of now I have kept it as below.\n\nCREATE DATABASE ... WITH (STRATEGY = WAL_LOG); -- default if\noption is omitted\nCREATE DATABASE ... WITH (STRATEGY = FILE_COPY);\n\nI have updated the document but I was not sure how much internal\ninformation to be exposed to the user so I will work on that based on\nfeedback from others.\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 15 Feb 2022 17:18:47 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Feb 15, 2022 at 6:49 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Here is the updated version of the patch, the changes are 1) Fixed\n> review comments given by Robert and one open comment from Ashutosh.\n> 2) Preserved the old create db method. 3) As agreed upthread for now\n> we are using the new strategy only for createdb not for movedb so I\n> have removed the changes in ForgetDatabaseSyncRequests() and\n> DropDatabaseBuffers(). 4) Provided a database creation strategy\n> option as of now I have kept it as below.\n>\n> CREATE DATABASE ... WITH (STRATEGY = WAL_LOG); -- default if\n> option is omitted\n> CREATE DATABASE ... WITH (STRATEGY = FILE_COPY);\n\nAll right. I think there have been two design-level objections to this\npatch, and this resolves one of them. The other one is trickier,\nbecause AFAICT it's basically an opinion question: is accessing\npg_class in the template database from some backend that is connected\nto another database too ugly to be acceptable? 
Several people have\nexpressed concerns about that, but it's not clear to me whether they\nare essentially saying \"that is not what I would do if I were doing\nthis project\" or more like \"if you commit something that does it that\nway I will be enraged and demand an immediate revert and the removal\nof your commit bit.\" If it's the former, I think it's possible to\nclean up various details of these patches to make them look nicer than\nthey do at present and get something committed for PostgreSQL 15. But\nif it is the latter then there's really no point to that kind of\ncleanup work and we should probably just give up now. So, Andres,\nHeikki, and anybody else with a strong opinion, can you clarify how\nvigorously you hate this design, or don't?\n\nMy own opinion is that this is actually rather elegant. It just makes\nsense to me that the right way to figure out what relations are in a\ndatabase is to get that list from the database rather than from the\nfilesystem. Nobody would accept the idea of making \\d work by listing\nout the directory contents rather than by walking pg_class, and so the\nonly reason we ought to accept that in the case of cloning a database\nis if the code is too ugly any other way. But is it really? It's true\nthat this patch set does some refactoring of interfaces in order to\nmake that work, and there's a few things about how it does that that I\nthink could be improved, but on the whole, it seems like a\nremarkably small amount of code to do something that we've long\nconsidered absolutely taboo. Now, it's nowhere close to being\nsomething that could be used to allow fully general cross-database\naccess, and there are severe problems with the idea of allowing any\nsuch thing. In particular, there are various places that test for\nconnections to a database, and aren't going to be expecting processes\nnot connected to the database to be touching it. 
My belief is that a\nheavyweight lock on the database is a suitable surrogate, because we\nactually take such a lock when connecting to a database, and that\nforms part of the regular interlock. Taking such locks routinely for\nshort periods would be expensive and might create other problems, but\ndoing it for a maintenance operation seems OK. Also, if we wanted to\nactually support full cross-database access, locking wouldn't be the\nonly problem by far. We'd have to deal with things like the relcache\nand the catcache, which would be hard, and might increase the cost of\nvery common things that we need to be cheap. But none of that is\nimplicated in this patch, which only generalizes code paths that are\nnot so commonly taken as to pose a problem, and manages to reuse quite\na bit of code rather than introducing entirely new code to do the same\nthings.\n\nIt does introduce some new code here and there, though: there isn't\nzero duplication. The biggest chunk of that FWICS is in 0006, in\nGetDatabaseRelationList and GetRelListFromPage. I just can't get\nexcited about that. It's literally about two screens worth of code.\nWe're not talking about duplicating src/backend/access/heapam or\nsomething like that. I do think it would be a good idea to split it up\njust a bit more: I think the code inside GetRelListFromPage that is\nguarded by HeapTupleSatisfiesVisibility() could be moved into a\nseparate subroutine, and I think that would likely look a bit nicer.\nBut fundamentally I just don't see a huge issue here. 
That is not to\nsay there isn't a huge issue here: just that I don't see it.\n\nComments?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Feb 2022 14:27:09 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-02-17 14:27:09 -0500, Robert Haas wrote:\n> The other one is trickier, because AFAICT it's basically an opinion\n> question: is accessing pg_class in the template database from some backend\n> that is connected to another database too ugly to be acceptable? Several\n> people have expressed concerns about that, but it's not clear to me whether\n> they are essentially saying \"that is not what I would do if I were doing\n> this project\" or more like \"if you commit something that does it that way I\n> will be enraged and demand an immediate revert and the removal of your\n> commit bit.\" If it's the former, I think it's possible to clean up various\n> details of these patches to make them look nicer than they do at present and\n> get something committed for PostgreSQL 15.\n\nCould you or Dilip outline how it now works, and what exactly makes it safe\netc (e.g. around locking, invalidation processing, snapshots, xid horizons)?\n\nI just scrolled through the patchset without finding such an explanation, so\nit's a bit hard to judge.\n\n\n> But if it is the latter then there's really no point to that kind of cleanup\n> work and we should probably just give up now.\n\nThis thread is long. Could you summarize what led you to consider other\napproaches (e.g. 
looking in the filesystem for relfilenodes) as not feasible /\ntoo ugly / ...?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Feb 2022 13:13:25 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Feb 17, 2022 at 4:13 PM Andres Freund <andres@anarazel.de> wrote:\n> Could you or Dilip outline how it now works, and what exactly makes it safe\n> etc (e.g. around locking, invalidation processing, snapshots, xid horizons)?\n>\n> I just scrolled through the patchset without finding such an explanation, so\n> it's a bit hard to judge.\n\nThat's a good question and it's making me think about a few things I\nhadn't considered before.\n\nDilip can add more here, but my impression is that most problems are\nprevented by the fact that CREATE DATABASE, with or without this patch,\nstarts by acquiring a ShareLock on the database, preventing new connections,\nand then making sure there are no existing connections. That means nothing\nin the target database can be changing, which I think makes a lot of\nthe stuff on your list a non-issue. Any problems that remain have to\nbe the result of something that CREATE DATABASE does having a bad\ninteraction either with something that is completed beforehand or\nsomething that begins afterward. There just can't be overlap, and I\nthink that rules out most problems.\n\nNow you pointed out earlier one problem that it doesn't fix: unlike\nthe current method, this method involves reading buffers from the\ntemplate database into shared_buffers, and those buffers, once read,\nstick around even after the operation finishes. That's not an\nintrinsic problem, though, because a connection to the database could\ndo the same thing. However, again as you pointed out, it is a problem,\nthough, if we do it with less locking than a real database connection\nwould have done. 
It seems to me that if there are other problems here,\nthey have to be basically of the same sort: they have to leave the\nsystem in some state which is otherwise impossible. Do you see some\nother kind of hazard, or more examples of how that could happen? I\nthink the leftover buffers in shared_buffers have to be basically the\nonly thing, because apart from that, how is this any different than a\nfile copy?\n\nThe only other kind of hazard I can think of is: could it be unsafe to\ntry to interpret the contents of a database to which no one else is\nconnected at present due to any of the issues you mention? But what's\nthe hazard exactly? It can't be a problem if we've failed to process\nsinval messages for the target database, because we're not using any\ncaches anyway. We can't. It can't be unsafe to test visibility of XIDs\nfor that database, because in an alternate universe some backend could\nhave connected to that database and seen the same XIDs. One thing we\nCOULD be doing wrong is using the wrong snapshot to test the\nvisibility of XIDs. The patch uses GetActiveSnapshot(), and I'm\nthinking that is probably wrong. Shouldn't it be GetLatestSnapshot()?\nAnd do we need to worry about snapshots being database-specific? Maybe\nso.\n\n> > But if it is the latter then there's really no point to that kind of cleanup\n> > work and we should probably just give up now.\n>\n> This thread is long. Could you summarize what led you to consider other\n> approaches (e.g. looking in the filesystem for relfilenodes) as not feasible /\n> too ugly / ...?\n\nI don't think it's infeasible to look at the filesystem for files and\njust copy whatever files we find. It's a plausible alternate design. I\njust don't like it as well. I think that relying on the filesystem\ncontents to tell us what's going on is kind of hacky. The only\ntechnical issue I see there is that the WAL logging might require more\nkludgery, since that mechanism is kind of intertwined with\nshared_buffers. 
You'd have to get the right block references into the\nWAL record, and you have to make sure that checkpoints don't move the\nredo pointer at an inopportune moment.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Feb 2022 18:00:19 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-02-17 18:00:19 -0500, Robert Haas wrote:\n> Now you pointed out earlier one problem that it doesn't fix: unlike\n> the current method, this method involves reading buffers from the\n> template database into shared_buffers, and those buffers, once read,\n> stick around even after the operation finishes.\n\nYea, I don't see a problem with that. A concurrent DROP DATABASE or such would\nbe problematic, but the locking prevents that.\n\n\n> The only other kind of hazard I can think of is: could it be unsafe to\n> try to interpret the contents of a database to which no one else is\n> connected at present due to any of the issues you mention? But what's\n> the hazard exactly?\n\nI don't quite know. But I don't think it's impossible to run into trouble in\nthis area. E.g. xid horizons are computed in a database specific way. If the\nroutine reading pg_class did hot pruning, you could end up removing data\nthat's actually needed for a logical slot in the other database because the\nbackend local horizon state was computed for the \"local\" database?\n\nCould there be problems because other backends wouldn't see the backend\naccessing the other database as being connected to that database\n(PGPROC->databaseId)?\n\nOr what if somebody optimized snapshots to disregard readonly transactions in\nother databases?\n\n\n> It can't be a problem if we've failed to process sinval messages for the\n> target database, because we're not using any caches anyway.\n\nCould you end up with an outdated relmap entry? 
Probably not.\n\n\n> We can't. It can't be unsafe to test visibility of XIDs for that database,\n> because in an alternate universe some backend could have connected to that\n> database and seen the same XIDs.\n\nThat's a weak argument, because in that alternative universe a PGPROC entry\nwith the PGPROC->databaseId = template_databases_oid would exist.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Feb 2022 15:39:52 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Feb 18, 2022 at 4:30 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>\n> > This thread is long. Could you summarize what lead you to consider other\n> > approaches (e.g. looking in the filesystem for relfilenodes) as not feasible /\n> > too ugly / ...?\n>\n> I don't think it's infeasible to look at the filesystem for files and\n> just copy whatever files we find. It's a plausible alternate design. I\n> just don't like it as well. I think that relying on the filesystem\n> contents to tell us what's going on is kind of hacky. The only\n> technical issue I see there is that the WAL logging might require more\n> kludgery, since that mechanism is kind of intertwined with\n> shared_buffers. You'd have to get the right block references into the\n> WAL record, and you have to make sure that checkpoints don't move the\n> redo pointer at an inopportune moment.\n\n\nActually, based on the previous discussion, I also tried to write a\nPOC with the file system scanning approach to identify the relations to\nbe copied; see patch 0007 in this thread [1]. And later we identified\none issue [2], i.e., while scanning the disk files directly we only\nknow the relfilenode but we cannot identify the relation OID, which\nmeans we cannot lock the relation. 
Now, I am not saying that there\nis no way to work around that issue but that was also one of the\nreasons for not pursuing that approach.\n\n[1] https://www.postgresql.org/message-id/CAFiTN-v1KYsVAhq_fOWFa27LZiw9uK4n4cz5XmQJxJpsVcfq1w%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAFiTN-v%3DU58by_BeiZruNhykxk1q9XUxF%2BqLzD2LZAsEn2EBkg%40mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 18 Feb 2022 09:57:09 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Feb 18, 2022 at 5:09 AM Andres Freund <andres@anarazel.de> wrote:\n\nThanks a lot Andres for taking time to read the thread and patch.\n\n> > The only other kind of hazard I can think of is: could it be unsafe to\n> > try to interpret the contents of a database to which no one else is\n> > connected at present due to any of the issues you mention? But what's\n> > the hazard exactly?\n>\n> I don't quite know. But I don't think it's impossible to run into trouble in\n> this area. E.g. xid horizons are computed in a database specific way. If the\n> routine reading pg_class did hot pruning, you could end up removing data\n> that's actually needed for a logical slot in the other database because the\n> backend local horizon state was computed for the \"local\" database?\n\nI agree that while computing the xid horizon (ComputeXidHorizons()),\nwe only consider the backend which are connected to the same database\nto which we are connected. 
But we don't need to worry here because we\nknow the fact that there could be absolutely no backend connected to\nthe database we are trying to copy so we don't need to worry about\npruning the tuples which are visible to other backends.\n\nNow if we are worried about the replication slot then for that we also\nconsider the xmin horizon from the replication slots so I don't think\nthat we have any problem here as well. And we also consider the\nwalsender as well for computing the xid horizon.\n\n> Could there be problems because other backends wouldn't see the backend\n> accessing the other database as being connected to that database\n> (PGPROC->databaseId)?\n\nYou mean that other backend will not consider this backend (which is\ncopying database) as connected to source database? Yeah that is\ncorrect but what is the problem in that, other backends can not\nconnect to the source database so what problem can they create to the\nbackend which is copying the database.\n\n> Or what if somebody optimized snapshots to disregard readonly transactions in\n> other databases?\n\nCan you elaborate on this point?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 18 Feb 2022 11:33:09 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Feb 17, 2022 at 6:39 PM Andres Freund <andres@anarazel.de> wrote:\n> > The only other kind of hazard I can think of is: could it be unsafe to\n> > try to interpret the contents of a database to which no one else is\n> > connected at present due to any of the issues you mention? But what's\n> > the hazard exactly?\n>\n> I don't quite know. But I don't think it's impossible to run into trouble in\n> this area. E.g. xid horizons are computed in a database specific way. 
If the\n> routine reading pg_class did hot pruning, you could end up removing data\n> that's actually needed for a logical slot in the other database because the\n> backend local horizon state was computed for the \"local\" database?\n\nYeah, but it doesn't -- and shouldn't. There's no HeapScanDesc here,\nso we can't accidentally wander into heap_page_prune_opt(). We should\ndocument the things we're thinking about here in the comments to\nprevent future mistakes, but I think for the moment we are OK.\n\n> Could there be problems because other backends wouldn't see the backend\n> accessing the other database as being connected to that database\n> (PGPROC->databaseId)?\n\nI think that if there's any hazard here, it must be related to\nsnapshots, which brings us to your next point:\n\n> Or what if somebody optimized snapshots to disregard readonly transactions in\n> other databases?\n\nSo there are two related questions here. One is whether the snapshot\nthat we're using to access the template database can be unsafe, and\nthe other is whether the read-only access that we're performing inside\nthe template database could mess up somebody else's snapshot. Let's\ndeal with the second point first: nobody else knows that we're reading\nfrom the template database, and nobody else is reading from the\ntemplate database except possibly for someone who is doing exactly\nwhat we're doing. Therefore, I think this hazard can be ruled out.\n\nOn the first point, a key point in my opinion is that there can be no\nin-flight transactions in the template database, because nobody is\nconnected to it, and prepared transactions in a template database are\nverboten. It therefore can't matter if we include too few XIDs in our\nsnapshot, or if our xmin is too new. The reverse case can matter,\nthough: if the xmin of our snapshot were too old, or if we had extra\nXIDs in our snapshot, then we might think that a transaction is still\nin progress when it isn't. 
Therefore, I think the patch is wrong to\nuse GetActiveSnapshot() and must use GetLatestSnapshot() *after* it's\nfinished making sure that nobody is using the template database. I\ndon't think there's a hazard beyond that, though. Let's consider the\ntwo ways in which things could go wrong:\n\n1. Extra XIDs in the snapshot. Any current or future optimization of\nsnapshots would presumably be trying to make them smaller by removing\nXIDs from the snapshot, not making them bigger by adding XIDs to the\nsnapshot. I guess in theory you can imagine an optimization that tests\nfor the presence of XIDs by some method other than scanning through an\narray, and which feels free to add XIDs to the snapshot if they \"can't\nmatter,\" but I think it's up to the author of that hypothetical future\npatch to make sure they don't break anything in so doing -- especially\nbecause it's entirely possible for our session to see XIDs used by a\nbackend in some other database, because they could show up in shared\ncatalogs. I think that's why, as far as I can tell, we only use the\ndatabase ID when setting pruning thresholds, and not for snapshots.\n\n2. xmin of snapshot too new. There are no in-progress transactions in\nthe template database, so how can this even happen? If we set the xmin\n\"in the future,\" then we could get confused about what's visible due\nto wraparound, but that seems crazy. I don't see how there can be a\nproblem here.\n\n> > It can't be a problem if we've failed to process sinval messages for the\n> > target database, because we're not using any caches anyway.\n>\n> Could you end up with an outdated relmap entry? Probably not.\n\nAgain, we're not relying on caching -- we read the file.\n\n> > We can't. 
It can't be unsafe to test visibility of XIDs for that database,\n> > because in an alternate universe some backend could have connected to that\n> > database and seen the same XIDs.\n>\n> That's a weak argument, because in that alternative universe a PGPROC entry\n> with the PGPROC->databaseId = template_databases_oid would exist.\n\nSo what? As I also argue above, I don't think that affects snapshot\ngeneration, and if it did it wouldn't matter anyway, because it could\nonly remove in-progress transactions from the snapshot, and there\naren't any in the template database anyhow.\n\nAnother way of looking at this is: we could just as well use\nSnapshotSelf or (if it still existed) SnapshotNow to test visibility.\nIn a world where there are no transactions in flight, it's the same\nthing. In fact, maybe we should do it that way, just to make it\nclearer what's happening.\n\nI think these are really good questions you are raising, so I'm not\ntrying to be dismissive. But after some thought I'm not yet seeing any\nproblems (other than the use of GetActiveSnapshot).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 18 Feb 2022 10:44:17 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "I'm not sure about the current status, but found it while playing\naround with the latest changes a bit, so thought of sharing it here.\n\n+ <varlistentry>\n+ <term><replaceable class=\"parameter\">strategy</replaceable></term>\n+ <listitem>\n+ <para>\n+ This is used for copying the database directory. Currently, we have\n+ two strategies the <literal>WAL_LOG_BLOCK</literal> and the\n\nIsn't it wal_log instead of wal_log_block?\n\nI think when users input wrong strategy with createdb command, we\nshould provide a hint message showing allowed values for strategy\ntypes along with an error message. 
This will be helpful for the users.\n\nOn Tue, Feb 15, 2022 at 5:19 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Feb 15, 2022 at 2:01 AM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n> >\n> Here is the updated version of the patch, the changes are 1) Fixed\n> review comments given by Robert and one open comment from Ashutosh.\n> 2) Preserved the old create db method. 3) As agreed upthread for now\n> we are using the new strategy only for createdb not for movedb so I\n> have removed the changes in ForgetDatabaseSyncRequests() and\n> DropDatabaseBuffers(). 3) Provided a database creation strategy\n> option as of now I have kept it as below.\n>\n> CREATE DATABASE ... WITH (STRATEGY = WAL_LOG); -- default if\n> option is omitted\n> CREATE DATABASE ... WITH (STRATEGY = FILE_COPY);\n>\n> I have updated the document but I was not sure how much internal\n> information to be exposed to the user so I will work on that based on\n> feedback from others.\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Feb 2022 20:27:12 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Feb 22, 2022 at 8:27 PM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\n> I'm not sure about the current status, but found it while playing\n> around with the latest changes a bit, so thought of sharing it here.\n>\n> + <varlistentry>\n> + <term><replaceable class=\"parameter\">strategy</replaceable></term>\n> + <listitem>\n> + <para>\n> + This is used for copying the database directory. 
Currently, we\n> have\n> +         two strategies the <literal>WAL_LOG_BLOCK</literal> and the\n>\n> Isn't it wal_log instead of wal_log_block?\n>\n> I think when users input wrong strategy with createdb command, we\n> should provide a hint message showing allowed values for strategy\n> types along with an error message. This will be helpful for the users.\n>\n\nI will fix these two comments while posting the next version.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 1 Mar 2022 17:15:46 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Mar 1, 2022 at 5:15 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Tue, Feb 22, 2022 at 8:27 PM Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n>\n>> I'm not sure about the current status, but found it while playing\n>> around with the latest changes a bit, so thought of sharing it here.\n>>\n>> +      <varlistentry>\n>> +       <term><replaceable class=\"parameter\">strategy</replaceable></term>\n>> +       <listitem>\n>> +        <para>\n>> +         This is used for copying the database directory.  Currently, we\n>> have\n>> +         two strategies the <literal>WAL_LOG_BLOCK</literal> and the\n>>\n>> Isn't it wal_log instead of wal_log_block?\n>>\n>> I think when users input wrong strategy with createdb command, we\n>> should provide a hint message showing allowed values for strategy\n>> types along with an error message. This will be helpful for the users.\n>>\n>\n> I will fix these two comments while posting the next version.\n>\n>\n\nThe new version of the patch fixes these 2 comments pointed by Ashutosh and\nalso splits the GetRelListFromPage() function as suggested by Robert and\nuses the latest snapshot for scanning the pg_class instead of active\nsnapshot as pointed out by Robert. I haven't yet added the test case to\ncreate a database using this new strategy option. 
So if we are okay with\nthese two options FILE_COPY and WAL_LOG then I will add test cases for the\nsame.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 3 Mar 2022 21:52:37 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Mar 3, 2022 at 11:22 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> The new version of the patch fixes these 2 comments pointed by Ashutosh and also splits the GetRelListFromPage() function as suggested by Robert and uses the latest snapshot for scanning the pg_class instead of active snapshot as pointed out by Robert. I haven't yet added the test case to create a database using this new strategy option. So if we are okay with these two options FILE_COPY and WAL_LOG then I will add test cases for the same.\n\nReviewing 0001, the boundaries of the critical section move slightly,\nbut only over a memcpy, which can't fail, so that seems fine. But this\ncomment looks ominous:\n\n * Note: we're cheating a little bit here by assuming that mapped files\n * are either in pg_global or the database's default tablespace.\n\nIt's not clear to me how the code that follows relies on this\nassumption, but the overall patch set would make that not true any\nmore, so there's some kind of an issue to think about there.\n\nIt's a little asymmetric that load_relmap_file() gets a subroutine\nread_relmap_file() while write_relmap_file() gets a subroutine\nwrite_relmap_file_internal(). Perhaps we could call the functions\n{load,write}_named_relmap_file() or something of that sort.\n\nReviewing 0002, your comment updates in relmap_redo() are not\ncomplete. Note that there's an unmodified comment that says \"Write out\nthe new map and send sinval\" just above where you modify the code to\nonly conditionally send sinval. I'm somewhat uncomfortable with the\nshape of this logic, too. 
It looks weird to be sometimes calling\nwrite_relmap_file and sometimes write_relmap_file_internal. You'd\nexpect functions with those names to be called at different\nabstraction levels, rather than at parallel call sites. The renaming I\nproposed would help with this but it's not just a cosmetic issue: the\nlogic to construct mapfilename is in this function in one case, but in\nthe called function in the other case. I can't help but think that the\nwrite_relmap_file()/write_relmap_file_internal() split isn't entirely\nthe right thing.\n\nI think part of the confusion here is that, pre-patch,\nwrite_relmap_file() gets called during both recovery and normal\nrunning, and the updates to shared_map or local_map are actually\nnonsense during recovery, because the local map at least is local to\nwhatever our database is, and we don't have a database connection if\nwe're the startup process. After your patch, we're still going through\nwrite_relmap_file in recovery in some cases, but really those map\nupdates don't seem like things that should be happening at all. And on\nthe other hand it's not clear to me why the CRC stuff isn't needed in\nall cases, but that's only going to happen when we go through the\nnon-internal version of the function. You've probably spent more time\nlooking at this code than I have, but I'm wondering if the division\nshould be like this: we have one function that does the actual update,\nand another function that does the update plus sets global variables.\nRecovery always uses the first one, and outside of recovery we use the\nfirst one for the create-database case and the second one otherwise.\nThoughts?\n\nRegarding 0003, my initial thought was to like the fact that you'd\navoided duplicating code by using a function parameter, but as I look\nat it a bit more, it's not clear to me that it's enough code that we\nreally care about not duplicating it. I would not expect to find a\nfunction called RelationCopyAllFork() in tablecmds.c. 
I'd expect to\nfind it in storage.c, I think. And I think I'd be surprised to find\nout that it doesn't actually know anything about copying; it's\nbasically just a loop over the forks to which you can supply your own\ncopy-function. And the fact that it's got an argument of type\ncopy_relation_storage and the argument name is copy_storage and the\nvalue is sometimes RelationCopyStorage is a terminological muddle, too.\nSo I feel like perhaps this needs more thought.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Mar 2022 16:42:08 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Mar 9, 2022 at 3:12 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\nThanks for the review and the valuable feedback.\n\n> Reviewing 0001, the boundaries of the critical section move slightly,\n> but only over a memcpy, which can't fail, so that seems fine. But this\n> comment looks ominous:\n>\n> * Note: we're cheating a little bit here by assuming that mapped files\n> * are either in pg_global or the database's default tablespace.\n>\n> It's not clear to me how the code that follows relies on this\n> assumption, but the overall patch set would make that not true any\n> more, so there's some kind of an issue to think about there.\n\nI think the comments are w.r.t. choosing the file path, because here we\nassume either it is in the global tablespace or the default tablespace of\nthe database. Here also the comment is only partially true, because we also\nassume that it will be in the default tablespace of the database\n(because we do not need to worry about the shared relations). 
But I\nthink these comments can move to the caller function where we are\ncreating the file path.\n\nif (shared)\n{\n    snprintf(mapfilename, sizeof(mapfilename), \"global/%s\",\n             RELMAPPER_FILENAME);\n}\nelse\n{\n    snprintf(mapfilename, sizeof(mapfilename), \"%s/%s\",\n             dbpath, RELMAPPER_FILENAME);\n}\n\n> It's a little asymmetric that load_relmap_file() gets a subroutine\n> read_relmap_file() while write_relmap_file() gets a subroutine\n> write_relmap_file_internal(). Perhaps we could call the functions\n> {load,write}_named_relmap_file() or something of that sort.\n\nYeah, this should be changed.\n\n> Reviewing 0002, your comment updates in relmap_redo() are not\n> complete. Note that there's an unmodified comment that says \"Write out\n> the new map and send sinval\" just above where you modify the code to\n> only conditionally send sinval. I'm somewhat uncomfortable with the\n> shape of this logic, too. It looks weird to be sometimes calling\n> write_relmap_file and sometimes write_relmap_file_internal. You'd\n> expect functions with those names to be called at different\n> abstraction levels, rather than at parallel call sites. The renaming I\n> proposed would help with this but it's not just a cosmetic issue: the\n> logic to construct mapfilename is in this function in one case, but in\n> the called function in the other case. 
I can't help but think that the\n> write_relmap_file()/write_relmap_file_internal() split isn't entirely\n> the right thing.\n>\n> I think part of the confusion here is that, pre-patch,\n> write_relmap_file() gets called during both recovery and normal\n> running, and the updates to shared_map or local_map are actually\n> nonsense during recovery, because the local map at least is local to\n> whatever our database is, and we don't have a database connection if\n> we're the startup process.\n\nYeah, you are correct about the local map, but I am not sure whether we\ncan rely on not updating the shared map in the startup process.\nBecause how can we guarantee that, now or in the future, the startup\nprocess will never look into the map? I agree that it is not connected\nto the database so it doesn't make sense to look into the local map,\nbut how are we going to ensure that for the shared map? That said, I\nthink there are only 3 functions which look at these maps directly,\nRelationMapOidToFilenode(), RelationMapFilenodeToOid() and\nRelationMapUpdateMap(), and these functions are called from a very few\nplaces and I don't think these should be called during recovery. So\nprobably we can put an elog saying they should never be called during\nrecovery?\n\n> After your patch, we're still going through\n> write_relmap_file in recovery in some cases, but really those map\n> updates don't seem like things that should be happening at all. And on\n> the other hand it's not clear to me why the CRC stuff isn't needed in\n> all cases, but that's only going to happen when we go through the\n> non-internal version of the function. You've probably spent more time\n> looking at this code than I have, but I'm wondering if the division\n> 
You've probably spent more time\n> looking at this code than I have, but I'm wondering if the division\n> should be like this: we have one function that does the actual update,\n> and another function that does the update plus sets global variables.\n> Recovery always uses the first one, and outside of recovery we use the\n> first one for the create-database case and the second one otherwise.\n> Thoughts?\n\nRight, infact now also if you see the logic, the\nwrite_relmap_file_internal() is taking care of the actual update and\nthe write_relmap_file() is doing update + setting the global\nvariables. So yeah we can rename as you suggested in 0001 and here\nalso we can change the logic as you suggested that the recovery and\ncreatedb will only call the first function which is just doing the\nupdate.\n\n\n> Regarding 0003, my initial thought was to like the fact that you'd\n> avoided duplicating code by using a function parameter, but as I look\n> at it a bit more, it's not clear to me that it's enough code that we\n> really care about not duplicating it. I would not expect to find a\n> function called RelationCopyAllFork() in tablecmds.c.\n\nOkay, actually I see this logic of copying the fork at a few different\nplaces like\nindex_copy_data() in tablecmds.c. and then in\nheapam_relation_copy_data() in heapam_handler.c. So I was not sure\nwhat could be right place for this function so I kept it in the same\nfile (tablecmds.c) because I splitted it from the function in this\nfile.\n\nI'd expect to\n> find it in storage.c, I think. 
And I think I'd be surprised to find\n> out that it doesn't actually know anything about copying; it's\n> basically just a loop over the forks to which you can supply your own\n> copy-function.\n\nYeah, but it eventually expects a function pointer to copy storage, so\nwe cannot completely say that it knows nothing about the copy?\n\n> And the fact that it's got an argument of type\n> copy_relation_storage and the argument name is copy_storage and the\n> value is sometimes RelationCopyStorage is a terminological muddle, too.\n> So I feel like perhaps this needs more thought.\n\nOne option is that we can duplicate this loop in dbcommand.c as well,\nlike we already have it in tablecmds.c and heapam_handler.c?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Mar 2022 16:37:01 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Mar 9, 2022 at 6:07 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Yeah, you are correct about the local map, but I am not sure whether we\n> can rely on not updating the shared map in the startup process.\n> Because how can we guarantee that, now or in the future, the startup\n> process will never look into the map? I agree that it is not connected\n> to the database so it doesn't make sense to look into the local map,\n> but how are we going to ensure that for the shared map? That said, I\n> think there are only 3 functions which look at these maps directly,\n> RelationMapOidToFilenode(), RelationMapFilenodeToOid() and\n> RelationMapUpdateMap(), and these functions are called from a very few\n> places and I don't think these should be called during recovery. So\n> probably we can put an elog saying they should never be called during\n> recovery?\n\n
So\n> probably we can put a elog saying they should never be called during\n> recovery?\n\nYeah, that seems reasonable.\n\n> Right, infact now also if you see the logic, the\n> write_relmap_file_internal() is taking care of the actual update and\n> the write_relmap_file() is doing update + setting the global\n> variables. So yeah we can rename as you suggested in 0001 and here\n> also we can change the logic as you suggested that the recovery and\n> createdb will only call the first function which is just doing the\n> update.\n\nBut I think we want the path construction to be managed by the\nfunction rather than the caller, too.\n\n> I'd expect to\n> > find it in storage.c, I think. And I think I'd be surprised to find\n> > out that it doesn't actually know anything about copying; it's\n> > basically just a loop over the forks to which you can supply your own\n> > copy-function.\n>\n> Yeah but it eventually expects a function pointer to copy storage so\n> we can not completely deny that it knows nothing about the copy?\n\nSure, I guess. It's just not obvious why the argument would actually\nneed to be a function that copies storage, or why there's more than\none way to copy storage. I'd rather keep all the code paths unified,\nif we can, and set behavior via flags or something, maybe. 
I'm not\nsure whether that's realistic, though.\n\n> And the fact that it's got an argument of type\n> > copy_relation_storage and the argument name is copy_storage and the\n> > value is sometimes RelationCopyStorageis a terminological muddle, too.\n> > So I feel like perhaps this needs more thought.\n>\n> One option is that we can duplicate this loop in dbcommand.c as well\n> like we are having it already in tablecmds.c and heapam_handler.c?\n\nYeah, I think this is also worth considering.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Mar 2022 08:14:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Mar 9, 2022 at 6:44 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> > Right, infact now also if you see the logic, the\n> > write_relmap_file_internal() is taking care of the actual update and\n> > the write_relmap_file() is doing update + setting the global\n> > variables. So yeah we can rename as you suggested in 0001 and here\n> > also we can change the logic as you suggested that the recovery and\n> > createdb will only call the first function which is just doing the\n> > update.\n>\n> But I think we want the path construction to be managed by the\n> function rather than the caller, too.\n\nI have completely changed the logic for this refactoring. Basically,\nwrite_relmap_file(), is already having parameters to control whether\nto write wal, send inval and we are already passing the dbpath.\nInstead of making a new function I just pass one additional parameter\nto this function itself about whether we are creating a new map or not\nand I think with that changes are very less and this looks cleaner to\nme. Similarly for load_relmap_file() also I just had to pass the\ndbpath and memory for destination map. Please have a look and let me\nknow your thoughts.\n\n> Sure, I guess. 
It's just not obvious why the argument would actually\n> need to be a function that copies storage, or why there's more than\n> one way to copy storage. I'd rather keep all the code paths unified,\n> if we can, and set behavior via flags or something, maybe. I'm not\n> sure whether that's realistic, though.\n\nI tried considering that, and I think it doesn't look good to make it flag\nbased. One of the main problems I noticed is that now for copying\neither we need to call RelationCopyStorage() or\nRelationCopyStorageUsingBuffer() based on the input flag. But if we\nmove the main copy function to the storage.c then the storage.c will\nhave a dependency on bufmgr functions because I don't think we can keep\nRelationCopyStorageUsingBuffer() inside storage.c. So for now, I have\nduplicated the loop which is already there in index_copy_data() and\nheapam_relation_copy_data() and kept that in bufmgr.c and also moved\nRelationCopyStorageUsingBuffer() into the bufmgr.c. I think bufmgr.c\nis already having functions which are dealing with smgr things so I\nfeel this is the right place for the function.\n\nOther changes:\n1. 0001 and 0002 are merged because now we are not really refactoring\nthese functions and just passing the additional arguments to them, so it\nmakes sense to combine the changes.\n2. Same with 0003, that now we are not refactoring existing functions\nbut providing new interfaces so merged it with the 0004 (which was\n0006 previously)\n\nI think we should also write the test cases for create database\nstrategy. But I do not see any test case for create database for\ntesting the existing options. 
So I am wondering whether we should add\nthe test case only for the new option we are providing or we should\ncreate a separate patch which tests the new option as well as the\nexisting options.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 10 Mar 2022 16:32:22 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Here are some review comments on the latest patch\n(v11-0004-WAL-logged-CREATE-DATABASE.patch). I actually did the review\nof the v10 patch but that applies to this latest version as well.\n\n+ /* Now errors are fatal ... */\n+ START_CRIT_SECTION();\n\nDid you mean PANIC instead of FATAL?\n\n==\n\n+\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"invalid create\nstrategy %s\", strategy),\n+ errhint(\"Valid strategies are\n\\\"wal_log\\\", and \\\"file_copy\\\".\")));\n+ }\n\n\nShould this be - \"invalid createdb strategy\" instead of \"invalid\ncreate strategy\"?\n\n==\n\n+ /*\n+ * In case of ALTER DATABASE SET TABLESPACE we don't need to do\n+ * anything for the object which are not in the source\ndb's default\n+ * tablespace. The source and destination dboid will be same in\n+ * case of ALTER DATABASE SET TABLESPACE.\n+ */\n+ else if (src_dboid == dst_dboid)\n+ continue;\n+ else\n+ dstrnode.spcNode = srcrnode.spcNode;\n\n\nIs this change still required? Do we support the WAL_COPY strategy for\nALTER DATABASE?\n\n==\n\n+ /* Open the source and the destination relation at\nsmgr level. */\n+ src_smgr = smgropen(srcrnode, InvalidBackendId);\n+ dst_smgr = smgropen(dstrnode, InvalidBackendId);\n+\n+ /* Copy relation storage from source to the destination. */\n+ CreateAndCopyRelationData(src_smgr, dst_smgr,\nrelinfo->relpersistence);\n\nDo we need to do smgropen for destination relfilenode here? 
Aren't we\nalready doing that inside RelationCreateStorage?\n\n==\n\n+ use_wal = XLogIsNeeded() &&\n+ (relpersistence == RELPERSISTENCE_PERMANENT ||\ncopying_initfork);\n+\n+ /* Get number of blocks in the source relation. */\n+ nblocks = smgrnblocks(src, forkNum);\n\nWhat if number of blocks in a source relation is ZERO? Should we check\nfor that and return immediately. We have already done smgrcreate.\n\n==\n\n+ /* We don't need to copy the shared objects to the target. */\n+ if (classForm->reltablespace == GLOBALTABLESPACE_OID)\n+ return NULL;\n+\n+ /*\n+ * If the object doesn't have the storage then nothing to be\n+ * done for that object so just ignore it.\n+ */\n+ if (!RELKIND_HAS_STORAGE(classForm->relkind))\n+ return NULL;\n\nWe can probably club together above two if-checks.\n\n==\n\n+ <varlistentry>\n+ <term><replaceable class=\"parameter\">strategy</replaceable></term>\n+ <listitem>\n+ <para>\n+ This is used for copying the database directory. Currently, we have\n+ two strategies the <literal>WAL_LOG</literal> and the\n+ <literal>FILE_COPY</literal>. If <literal>WAL_LOG</literal> strategy\n+ is used then the database will be copied block by block and it will\n+ also WAL log each copied block. Otherwise, if <literal>FILE_COPY\n\nI think we need to mention the default strategy in the documentation page.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Thu, Mar 10, 2022 at 4:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Mar 9, 2022 at 6:44 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > > Right, infact now also if you see the logic, the\n> > > write_relmap_file_internal() is taking care of the actual update and\n> > > the write_relmap_file() is doing update + setting the global\n> > > variables. 
So yeah we can rename as you suggested in 0001 and here\n> > > also we can change the logic as you suggested that the recovery and\n> > > createdb will only call the first function which is just doing the\n> > > update.\n> >\n> > But I think we want the path construction to be managed by the\n> > function rather than the caller, too.\n>\n> I have completely changed the logic for this refactoring. Basically,\n> write_relmap_file(), is already having parameters to control whether\n> to write wal, send inval and we are already passing the dbpath.\n> Instead of making a new function I just pass one additional parameter\n> to this function itself about whether we are creating a new map or not\n> and I think with that changes are very less and this looks cleaner to\n> me. Similarly for load_relmap_file() also I just had to pass the\n> dbpath and memory for destination map. Please have a look and let me\n> know your thoughts.\n>\n> > Sure, I guess. It's just not obvious why the argument would actually\n> > need to be a function that copies storage, or why there's more than\n> > one way to copy storage. I'd rather keep all the code paths unified,\n> > if we can, and set behavior via flags or something, maybe. I'm not\n> > sure whether that's realistic, though.\n>\n> I try considering that, I think it doesn't look good to make it flag\n> based, One of the main problem I noticed is that now for copying\n> either we need to call RelationCopyStorageis() or\n> RelationCopyStorageUsingBuffer() based on the input flag. But if we\n> move the main copy function to the storage.c then the storage.c will\n> have depedency on bufmgr functions because I don't think we can keep\n> RelationCopyStorageUsingBuffer() inside storage.c. So for now, I have\n> duplicated the loop which is already there in index_copy_data() and\n> heapam_relation_copy_data() and kept that in bufmgr.c and also moved\n> RelationCopyStorageUsingBuffer() into the bufmgr.c. 
I think bufmgr.c\n> is already having function which are dealing with smgr things so I\n> feel this is the right place for the function.\n>\n> Other changes:\n> 1. 0001 and 0002 are merged because now we are not really refactoring\n> these function and just passing the additioanl arguments to it make\n> sense to combine the changes.\n> 2. Same with 0003, that now we are not refactoring existing functions\n> but providing new interfaces so merged it with the 0004 (which was\n> 0006 previously)\n>\n> I think we should also write the test cases for create database\n> strategy. But I do not see any test case for create database for\n> testing the existing options. So I am wondering whether we should add\n> the test case only for the new option we are providing or we should\n> create a separate path which tests the new option as well as the\n> existing options.\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Mar 2022 19:21:54 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Mar 10, 2022 at 7:22 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Here are some review comments on the latest patch\n> (v11-0004-WAL-logged-CREATE-DATABASE.patch). I actually did the review\n> of the v10 patch but that applies for this latest version as well.\n>\n> + /* Now errors are fatal ... */\n> + START_CRIT_SECTION();\n>\n> Did you mean PANIC instead of FATAL?\n\nI think here fatal didn't really mean the error level FATAL, that\nmeans critical and I have seen it is used in other places also. 
But I\nreally don't think we need this comment to be removed to avoid any\nconfusion.\n\n> ==\n>\n> +\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"invalid create\n> strategy %s\", strategy),\n> + errhint(\"Valid strategies are\n> \\\"wal_log\\\", and \\\"file_copy\\\".\")));\n> + }\n>\n>\n> Should this be - \"invalid createdb strategy\" instead of \"invalid\n> create strategy\"?\n\nChanged\n\n> ==\n>\n> + /*\n> + * In case of ALTER DATABASE SET TABLESPACE we don't need to do\n> + * anything for the object which are not in the source\n> db's default\n> + * tablespace. The source and destination dboid will be same in\n> + * case of ALTER DATABASE SET TABLESPACE.\n> + */\n> + else if (src_dboid == dst_dboid)\n> + continue;\n> + else\n> + dstrnode.spcNode = srcrnode.spcNode;\n>\n>\n> Is this change still required? Do we support the WAL_COPY strategy for\n> ALTER DATABASE?\n\nYeah now it is unreachable code so removed.\n\n> ==\n>\n> + /* Open the source and the destination relation at\n> smgr level. */\n> + src_smgr = smgropen(srcrnode, InvalidBackendId);\n> + dst_smgr = smgropen(dstrnode, InvalidBackendId);\n> +\n> + /* Copy relation storage from source to the destination. */\n> + CreateAndCopyRelationData(src_smgr, dst_smgr,\n> relinfo->relpersistence);\n>\n> Do we need to do smgropen for destination relfilenode here? Aren't we\n> already doing that inside RelationCreateStorage?\n\nYeah I have changed the complete logic and removed the smgr_open for\nboth source and destination and moved it inside\nCreateAndCopyRelationData, please check the updated code.\n\n> ==\n>\n> + use_wal = XLogIsNeeded() &&\n> + (relpersistence == RELPERSISTENCE_PERMANENT ||\n> copying_initfork);\n> +\n> + /* Get number of blocks in the source relation. */\n> + nblocks = smgrnblocks(src, forkNum);\n>\n> What if number of blocks in a source relation is ZERO? Should we check\n> for that and return immediately. 
We have already done smgrcreate.\n\nYeah make sense to optimize, with that we will not have to get the\nbuffer strategy so done.\n\n> ==\n>\n> + /* We don't need to copy the shared objects to the target. */\n> + if (classForm->reltablespace == GLOBALTABLESPACE_OID)\n> + return NULL;\n> +\n> + /*\n> + * If the object doesn't have the storage then nothing to be\n> + * done for that object so just ignore it.\n> + */\n> + if (!RELKIND_HAS_STORAGE(classForm->relkind))\n> + return NULL;\n>\n> We can probably club together above two if-checks.\n\nDone\n\n> ==\n>\n> + <varlistentry>\n> + <term><replaceable class=\"parameter\">strategy</replaceable></term>\n> + <listitem>\n> + <para>\n> + This is used for copying the database directory. Currently, we have\n> + two strategies the <literal>WAL_LOG</literal> and the\n> + <literal>FILE_COPY</literal>. If <literal>WAL_LOG</literal> strategy\n> + is used then the database will be copied block by block and it will\n> + also WAL log each copied block. Otherwise, if <literal>FILE_COPY\n>\n> I think we need to mention the default strategy in the documentation page.\n\nDone\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 10 Mar 2022 20:37:57 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Mar 10, 2022 at 6:02 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I have completely changed the logic for this refactoring. Basically,\n> write_relmap_file(), is already having parameters to control whether\n> to write wal, send inval and we are already passing the dbpath.\n> Instead of making a new function I just pass one additional parameter\n> to this function itself about whether we are creating a new map or not\n> and I think with that changes are very less and this looks cleaner to\n> me. 
Similarly for load_relmap_file() also I just had to pass the\n> dbpath and memory for destination map. Please have a look and let me\n> know your thoughts.\n\nIt's not terrible, but how about something like the attached instead?\nI think this has the effect of reducing the number of cases that the\nlow-level code needs to know about from 2 to 1, instead of making it\ngo up from 2 to 3.\n\n> I think we should also write the test cases for create database\n> strategy. But I do not see any test case for create database for\n> testing the existing options. So I am wondering whether we should add\n> the test case only for the new option we are providing or we should\n> create a separate path which tests the new option as well as the\n> existing options.\n\nFWIW, src/bin/scripts/t/020_createdb.pl does a little bit of testing\nof this kind.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 10 Mar 2022 11:48:16 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Thanks Dilip for working on the review comments. I'll take a look at\nthe new version of patch and let you know my comments, if any.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Thu, Mar 10, 2022 at 8:38 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Mar 10, 2022 at 7:22 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > Here are some review comments on the latest patch\n> > (v11-0004-WAL-logged-CREATE-DATABASE.patch). I actually did the review\n> > of the v10 patch but that applies for this latest version as well.\n> >\n> > + /* Now errors are fatal ... */\n> > + START_CRIT_SECTION();\n> >\n> > Did you mean PANIC instead of FATAL?\n>\n> I think here fatal didn't really mean the error level FATAL, that\n> means critical and I have seen it is used in other places also. 
But I\n> really don't think we need this comments to removed to avoid any\n> confusion.\n>\n> > ==\n> >\n> > +\n> > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > + errmsg(\"invalid create\n> > strategy %s\", strategy),\n> > + errhint(\"Valid strategies are\n> > \\\"wal_log\\\", and \\\"file_copy\\\".\")));\n> > + }\n> >\n> >\n> > Should this be - \"invalid createdb strategy\" instead of \"invalid\n> > create strategy\"?\n>\n> Changed\n>\n> > ==\n> >\n> > + /*\n> > + * In case of ALTER DATABASE SET TABLESPACE we don't need to do\n> > + * anything for the object which are not in the source\n> > db's default\n> > + * tablespace. The source and destination dboid will be same in\n> > + * case of ALTER DATABASE SET TABLESPACE.\n> > + */\n> > + else if (src_dboid == dst_dboid)\n> > + continue;\n> > + else\n> > + dstrnode.spcNode = srcrnode.spcNode;\n> >\n> >\n> > Is this change still required? Do we support the WAL_COPY strategy for\n> > ALTER DATABASE?\n>\n> Yeah now it is unreachable code so removed.\n>\n> > ==\n> >\n> > + /* Open the source and the destination relation at\n> > smgr level. */\n> > + src_smgr = smgropen(srcrnode, InvalidBackendId);\n> > + dst_smgr = smgropen(dstrnode, InvalidBackendId);\n> > +\n> > + /* Copy relation storage from source to the destination. */\n> > + CreateAndCopyRelationData(src_smgr, dst_smgr,\n> > relinfo->relpersistence);\n> >\n> > Do we need to do smgropen for destination relfilenode here? Aren't we\n> > already doing that inside RelationCreateStorage?\n>\n> Yeah I have changed the complete logic and removed the smgr_open for\n> both source and destination and moved inside\n> CreateAndCopyRelationData, please check the updated code.\n>\n> > ==\n> >\n> > + use_wal = XLogIsNeeded() &&\n> > + (relpersistence == RELPERSISTENCE_PERMANENT ||\n> > copying_initfork);\n> > +\n> > + /* Get number of blocks in the source relation. */\n> > + nblocks = smgrnblocks(src, forkNum);\n> >\n> > What if number of blocks in a source relation is ZERO? 
Should we check\n> > for that and return immediately. We have already done smgrcreate.\n>\n> Yeah make sense to optimize, with that we will not have to get the\n> buffer strategy so done.\n>\n> > ==\n> >\n> > + /* We don't need to copy the shared objects to the target. */\n> > + if (classForm->reltablespace == GLOBALTABLESPACE_OID)\n> > + return NULL;\n> > +\n> > + /*\n> > + * If the object doesn't have the storage then nothing to be\n> > + * done for that object so just ignore it.\n> > + */\n> > + if (!RELKIND_HAS_STORAGE(classForm->relkind))\n> > + return NULL;\n> >\n> > We can probably club together above two if-checks.\n>\n> Done\n>\n> > ==\n> >\n> > + <varlistentry>\n> > + <term><replaceable class=\"parameter\">strategy</replaceable></term>\n> > + <listitem>\n> > + <para>\n> > + This is used for copying the database directory. Currently, we have\n> > + two strategies the <literal>WAL_LOG</literal> and the\n> > + <literal>FILE_COPY</literal>. If <literal>WAL_LOG</literal> strategy\n> > + is used then the database will be copied block by block and it will\n> > + also WAL log each copied block. Otherwise, if <literal>FILE_COPY\n> >\n> > I think we need to mention the default strategy in the documentation page.\n>\n> Done\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Mar 2022 10:35:58 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Mar 10, 2022 at 10:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Mar 10, 2022 at 6:02 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I have completely changed the logic for this refactoring. 
Basically,\n> > write_relmap_file(), is already having parameters to control whether\n> > to write wal, send inval and we are already passing the dbpath.\n> > Instead of making a new function I just pass one additional parameter\n> > to this function itself about whether we are creating a new map or not\n> > and I think with that changes are very less and this looks cleaner to\n> > me. Similarly for load_relmap_file() also I just had to pass the\n> > dbpath and memory for destination map. Please have a look and let me\n> > know your thoughts.\n>\n> It's not terrible, but how about something like the attached instead?\n> I think this has the effect of reducing the number of cases that the\n> low-level code needs to know about from 2 to 1, instead of making it\n> go up from 2 to 3.\n>\n\nLooks better, but why do you want to pass elevel to the\nload_relmap_file()? Are we calling this function from somewhere other\nthan read_relmap_file()? If not, do we have any plans to call this\nfunction directly bypassing read_relmap_file for any upcoming patch?\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Fri, 11 Mar 2022 10:45:25 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Mar 10, 2022 at 10:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Mar 10, 2022 at 6:02 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I have completely changed the logic for this refactoring. Basically,\n> > write_relmap_file(), is already having parameters to control whether\n> > to write wal, send inval and we are already passing the dbpath.\n> > Instead of making a new function I just pass one additional parameter\n> > to this function itself about whether we are creating a new map or not\n> > and I think with that changes are very less and this looks cleaner to\n> > me. 
Similarly for load_relmap_file() also I just had to pass the\n> > dbpath and memory for destination map. Please have a look and let me\n> > know your thoughts.\n>\n> It's not terrible, but how about something like the attached instead?\n> I think this has the effect of reducing the number of cases that the\n> low-level code needs to know about from 2 to 1, instead of making it\n> go up from 2 to 3.\n\nYeah this looks cleaner, I will rebase the remaining patch.\n\n> > I think we should also write the test cases for create database\n> > strategy. But I do not see any test case for create database for\n> > testing the existing options. So I am wondering whether we should add\n> > the test case only for the new option we are providing or we should\n> > create a separate path which tests the new option as well as the\n> > existing options.\n>\n> FWIW, src/bin/scripts/t/020_createdb.pl does a little bit of testing\n> of this kind.\n\nOkay, I think we need to support the strategy in createdb bin as well.\nI will do that.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Mar 2022 11:52:44 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Mar 11, 2022 at 11:52 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Mar 10, 2022 at 10:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Thu, Mar 10, 2022 at 6:02 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > I have completely changed the logic for this refactoring. 
Basically,\n> > > write_relmap_file(), is already having parameters to control whether\n> > > to write wal, send inval and we are already passing the dbpath.\n> > > Instead of making a new function I just pass one additional parameter\n> > > to this function itself about whether we are creating a new map or not\n> > > and I think with that changes are very less and this looks cleaner to\n> > > me. Similarly for load_relmap_file() also I just had to pass the\n> > > dbpath and memory for destination map. Please have a look and let me\n> > > know your thoughts.\n> >\n> > It's not terrible, but how about something like the attached instead?\n> > I think this has the effect of reducing the number of cases that the\n> > low-level code needs to know about from 2 to 1, instead of making it\n> > go up from 2 to 3.\n>\n> Yeah this looks cleaner, I will rebase the remaining patch.\n\nHere is the updated version of the patch set.\n\nChanges, 1) it take Robert's patch as first refactoring patch 2)\nRebase other new relmapper apis on top of that in 0002 3) Some code\nrefactoring in main patch 0005 and also one problem fix, earlier in\nwal log method I have removed ForceSyncCommit(), but IMHO that is\nequally valid whether we use file_copy or wal_log because in both\ncases we are creating the disk files. 4) Support strategy in createdb\ntool and add test case as part of 0006.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 11 Mar 2022 15:50:56 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "You may also need to add documentation to app-createdb.sgml. Currently\nyou have just added to create_database.sgml. 
Also, I had a quick look\nat the new changes done in v13-0005-WAL-logged-CREATE-DATABASE.patch\nand they seemed fine to me although I haven't put much emphasis on the\ncomments and other cosmetic stuff.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Fri, Mar 11, 2022 at 3:51 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Mar 11, 2022 at 11:52 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Thu, Mar 10, 2022 at 10:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > On Thu, Mar 10, 2022 at 6:02 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > I have completely changed the logic for this refactoring. Basically,\n> > > > write_relmap_file(), is already having parameters to control whether\n> > > > to write wal, send inval and we are already passing the dbpath.\n> > > > Instead of making a new function I just pass one additional parameter\n> > > > to this function itself about whether we are creating a new map or not\n> > > > and I think with that changes are very less and this looks cleaner to\n> > > > me. Similarly for load_relmap_file() also I just had to pass the\n> > > > dbpath and memory for destination map. 
Please have a look and let me\n> > > > know your thoughts.\n> > >\n> > > It's not terrible, but how about something like the attached instead?\n> > > I think this has the effect of reducing the number of cases that the\n> > > low-level code needs to know about from 2 to 1, instead of making it\n> > > go up from 2 to 3.\n> >\n> > Yeah this looks cleaner, I will rebase the remaining patch.\n>\n> Here is the updated version of the patch set.\n>\n> Changes, 1) it take Robert's patch as first refactoring patch 2)\n> Rebase other new relmapper apis on top of that in 0002 3) Some code\n> refactoring in main patch 0005 and also one problem fix, earlier in\n> wal log method I have removed ForceSyncCommit(), but IMHO that is\n> equally valid whether we use file_copy or wal_log because in both\n> cases we are creating the disk files. 4) Support strategy in createdb\n> tool and add test case as part of 0006.\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Mar 2022 19:32:41 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Mar 11, 2022 at 12:15 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> Looks better, but why do you want to pass elevel to the\n> load_relmap_file()? Are we calling this function from somewhere other\n> than read_relmap_file()? If not, do we have any plans to call this\n> function directly bypassing read_relmap_file for any upcoming patch?\n\nIf it fails during CREATE DATABASE, it should be ERROR, not FATAL. In\nthat case, we only need to stop trying to create a database; we don't\nneed to terminate the session. 
On the other hand if we can't read our\nown database's relmap files, that's an unrecoverable error, because we\nwill not be able to run any queries at all, so FATAL is appropriate.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Mar 2022 09:50:51 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Mar 11, 2022 at 8:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Mar 11, 2022 at 12:15 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > Looks better, but why do you want to pass elevel to the\n> > load_relmap_file()? Are we calling this function from somewhere other\n> > than read_relmap_file()? If not, do we have any plans to call this\n> > function directly bypassing read_relmap_file for any upcoming patch?\n>\n> If it fails during CREATE DATABASE, it should be ERROR, not FATAL. In\n> that case, we only need to stop trying to create a database; we don't\n> need to terminate the session. On the other hand if we can't read our\n> own database's relmap files, that's an unrecoverable error, because we\n> will not be able to run any queries at all, so FATAL is appropriate.\n>\n\nOK. I can see it being used in the v13 patch. In the previous patches\nit was hard-coded with FATAL. Also, we simply error out when doing\nfile copy as I can see in the copy_file function. So yes FATAL is not\nthe right option to use when creating a database. 
Thanks.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Fri, 11 Mar 2022 20:42:26 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Mar 11, 2022 at 5:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Changes, 1) it take Robert's patch as first refactoring patch 2)\n> Rebase other new relmapper apis on top of that in 0002 3) Some code\n> refactoring in main patch 0005 and also one problem fix, earlier in\n> wal log method I have removed ForceSyncCommit(), but IMHO that is\n> equally valid whether we use file_copy or wal_log because in both\n> cases we are creating the disk files. 4) Support strategy in createdb\n> tool and add test case as part of 0006.\n\nI don't think you've adequately considered temporary relations here.\nIt seems to me that ReadBufferWithoutRelcache() could not be safe on a\ntemprel, because we'd need a BackendId to access the underlying\nstorage. 
So I think that ReadBufferWithoutRelcache can only accept\n> unlogged or permanent, and maybe the argument ought to be a Boolean\n> instead of a relpersistence value. I thought that this problem might\n> be only cosmetic, but I checked the code that actually does the copy,\n> and there's no filter there on relpersistence either. And I think\n> there should be.\n\nI hit \"send\" too quickly there:\n\nrhaas=# create database fudge;\nCREATE DATABASE\nrhaas=# \\c fudge\nYou are now connected to database \"fudge\" as user \"rhaas\".\nfudge=# create temp table q ();\nCREATE TABLE\nfudge=# ^Z\n[2]+ Stopped psql\n[rhaas Downloads]$ pg_ctl stop -mi\nwaiting for server to shut down.... done\nserver stopped\n[rhaas Downloads]$ %%\npsql\n\\c\nYou are now connected to database \"fudge\" as user \"rhaas\".\nfudge=# select * from pg_class where relpersistence='t';\n oid | relname | relnamespace | reltype | reloftype | relowner |\nrelam | relfilenode | reltablespace | relpages | reltuples |\nrelallvisible | reltoastrelid | relhasindex | relisshared |\nrelpersistence | relkind | relnatts | relchecks | relhasrules |\nrelhastriggers | relhassubclass | relrowsecurity | relforcerowsecurity\n| relispopulated | relreplident | relispartition | relrewrite |\nrelfrozenxid | relminmxid | relacl | reloptions | relpartbound\n-------+---------+--------------+---------+-----------+----------+-------+-------------+---------------+----------+-----------+---------------+---------------+-------------+-------------+----------------+---------+----------+-----------+-------------+----------------+----------------+----------------+---------------------+----------------+--------------+----------------+------------+--------------+------------+--------+------------+--------------\n 16388 | q | 16386 | 16390 | 0 | 10 |\n2 | 16388 | 0 | 0 | -1 | 0\n| 0 | f | f | t | r\n| 0 | 0 | f | f | f\n| f | f | t | d\n| f | 0 | 721 | 1 | |\n |\n(1 row)\n\nfudge=# \\c rhaas\nYou are now connected to database \"rhaas\" as 
user \"rhaas\".\nrhaas=# alter database fudge is_template true;\nALTER DATABASE\nrhaas=# create database cookies template fudge;\nCREATE DATABASE\nrhaas=# \\c cookies\nYou are now connected to database \"cookies\" as user \"rhaas\".\ncookies=# select count(*) from pg_class where relpersistence='t';\n count\n-------\n 1\n(1 row)\n\nYou have to be quick, because autovacuum will drop the orphaned temp\ntable when it notices it, but it is possible.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Mar 2022 13:21:38 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Mar 11, 2022 at 5:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Changes, 1) it take Robert's patch as first refactoring patch 2)\n> Rebase other new relmapper apis on top of that in 0002 3) Some code\n> refactoring in main patch 0005 and also one problem fix, earlier in\n> wal log method I have removed ForceSyncCommit(), but IMHO that is\n> equally valid whether we use file_copy or wal_log because in both\n> cases we are creating the disk files. 4) Support strategy in createdb\n> tool and add test case as part of 0006.\n\nI think there's something wrong with what this patch is doing with the\nXLOG records. It adds XLOG_DBASE_CREATEDIR, but the only new\nXLogInsert() calls in the patch are passing XLOG_DBASE_CREATE, and no\nexisting references are adjusted. Similarly with xl_dbase_create_rec\nand xl_dbase_createdir_rec. Why would we introduce a new record type\nand not use it?\n\nLet's not call the functions for the different strategies\nCopyDatabase() and CopyDatabaseWithWal() but rather something that\nmatches up to the strategy names e.g. create_database_using_wal_log()\nand create_database_using_file_copy(). There's something a little\nfunny about the names wal_log and file_copy ... they're not quite\nparallel gramatically. 
But it's probably OK.\n\nThe changes to createdb_failure_params make me a little nervous. I\nthink we'd be in real trouble if we failed before completing both\nDropDatabaseBuffers() and ForgetDatabaseSyncRequests(). However, it\nlooks to me like those are both intended to be no-fail operations, so\nI don't see an actual hazard here. But, hmm, what about on the\nrecovery side? Suppose that we start copying the database block by\nblock and then either (a) the standby is promoted before the copy is\nfinished or (b) the copy fails. Now the standby has data in\nshared_buffers for a database that does not exist. If that's not bad,\nthen why does createdb_failure_params need to DropDatabaseBuffers()?\nBut I bet it is bad. I wonder if we should be using\nRelationCopyStorage() rather than this new function\nRelationCopyStorageUsingBuffer(). That would avoid having the buffers\nin shared_buffers, dodging the problem. But then we have a problem\nwith checkpoint interlocking: we could begin replay from a checkpoint\nand find that the pages that were supposed to get copied prior to the\ncheckpoint were actually not copied, because the checkpoint record\ncould be written after we've logged a page being copied and before we\nactually write the page. Or, we could crash after writing a whole lot\nof pages and a checkpoint record, but before RelationCopyStorage()\nfsyncs the destination fork. It doesn't seem advisable to hold off\ncheckpoints for the time it takes to copy an entire relation fork, so\nthe solution is apparently to keep the data in shared buffers after\nall. But that brings us right back to square one. Have you thought\nthrough this whole problem carefully? 
It seems like a total mess to me\nat the moment, but maybe I'm missing something.\n\nThere seems to be no reason to specify specific values for the members\nof enum CreateDBStrategy.\n\nI think the naming of some of the new functions might need work, in\nparticular GetRelInfoFromTuple, GetRelListFromPage, and\nGetDatabaseRelationList. The names seem kind of generic for what\nthey're doing. Maybe ScanSourceDatabasePgClass,\nScanSourceDatabasePgClassPage, ScanSourceDatabasePgClassTuple?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Mar 2022 15:25:25 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Sat, Mar 12, 2022 at 1:55 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\nResponding to this specific issue..\n\n> The changes to createdb_failure_params make me a little nervous. I\n> think we'd be in real trouble if we failed before completing both\n> DropDatabaseBuffers() and ForgetDatabaseSyncRequests(). However, it\n> looks to me like those are both intended to be no-fail operations, so\n> I don't see an actual hazard here.\n\nI might be missing something but even without that I do not see a real\nproblem here. The reason we are dropping the database buffers and\npending sync request is because right after this we are removing the\nunderlying files and if we just remove the files without dropping the\nbuffer from the buffer cache then the checkpointer will fail while\ntrying to flush the buffer.\n\nBut, hmm, what about on the\n> recovery side? 
Suppose that we start copying the database block by\n> block and then either (a) the standby is promoted before the copy is\n> finished or (b) the copy fails.\n\nI think the above logic will be valid in case of standby as well\nbecause we are not really deleting the underlying files.\n\nNow the standby has data in\n> shared_buffers for a database that does not exist. If that's not bad,\n> then why does createdb_failure_params need to DropDatabaseBuffers()?\n> But I bet it is bad. I wonder if we should be using\n> RelationCopyStorage() rather than this new function\n> RelationCopyStorageUsingBuffer().\n\nI am not sure how RelationCopyStorage() will help in the standby side,\nbecause then also we will log the same WAL (XLOG_FPI) for each page\nand standby side we will use buffer to apply this FPI so if you think\nthat there is a problem then it will be same with\nRelationCopyStorage() at least on the standby side.\n\nIn fact while we are rewriting the relation during vacuum full that\ntime also we are calling log_newpage() under RelationCopyStorage() and\nduring standby if it gets promoted we will be having some buffers in\nthe buffer pool with the new relfilenode. 
So I think our case is also\nthe same.\n\nSo here my stand is that we need to drop database buffers and remove\npending sync requests because we are deleting underlying files and if\nwe do not do that in some extreme cases then there is no need to drop\nthe buffers or remove the pending sync request and the worst\nconsequences would be waste of disk space.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 12 Mar 2022 11:06:17 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Mar 11, 2022 at 11:51 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Mar 11, 2022 at 1:10 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I don't think you've adequately considered temporary relations here.\n> > It seems to be that ReadBufferWithoutRelcache() could not be safe on a\n> > temprel, because we'd need a BackendId to access the underlying\n> > storage. So I think that ReadBufferWithoutRelcache can only accept\n> > unlogged or permanent, and maybe the argument ought to be a Boolean\n> > instead of a relpersistence value. I thought that this problem might\n> > be only cosmetic, but I checked the code that actually does the copy,\n> > and there's no filter there on relpersistence either. And I think\n> > there should be.\n\nYeah right for now, this api can only support unlogged or permanent.\nI will fix this\n\n> I hit \"send\" too quickly there:\n>\n> rhaas=# create database fudge;\n> CREATE DATABASE\n> rhaas=# \\c fudge\n> You are now connected to database \"fudge\" as user \"rhaas\".\n> fudge=# create temp table q ();\n> CREATE TABLE\n> fudge=# ^Z\n> [2]+ Stopped psql\n> [rhaas Downloads]$ pg_ctl stop -mi\n> waiting for server to shut down.... 
done\n> server stopped\n> [rhaas Downloads]$ %%\n> psql\n> \\c\n> You are now connected to database \"fudge\" as user \"rhaas\".\n> fudge=# select * from pg_class where relpersistence='t';\n> oid | relname | relnamespace | reltype | reloftype | relowner |\n> relam | relfilenode | reltablespace | relpages | reltuples |\n> relallvisible | reltoastrelid | relhasindex | relisshared |\n> relpersistence | relkind | relnatts | relchecks | relhasrules |\n> relhastriggers | relhassubclass | relrowsecurity | relforcerowsecurity\n> | relispopulated | relreplident | relispartition | relrewrite |\n> relfrozenxid | relminmxid | relacl | reloptions | relpartbound\n> -------+---------+--------------+---------+-----------+----------+-------+-------------+---------------+----------+-----------+---------------+---------------+-------------+-------------+----------------+---------+----------+-----------+-------------+----------------+----------------+----------------+---------------------+----------------+--------------+----------------+------------+--------------+------------+--------+------------+--------------\n> 16388 | q | 16386 | 16390 | 0 | 10 |\n> 2 | 16388 | 0 | 0 | -1 | 0\n> | 0 | f | f | t | r\n> | 0 | 0 | f | f | f\n> | f | f | t | d\n> | f | 0 | 721 | 1 | |\n> |\n> (1 row)\n>\n> fudge=# \\c rhaas\n> You are now connected to database \"rhaas\" as user \"rhaas\".\n> rhaas=# alter database fudge is_template true;\n> ALTER DATABASE\n> rhaas=# create database cookies template fudge;\n> CREATE DATABASE\n> rhaas=# \\c cookies\n> You are now connected to database \"cookies\" as user \"rhaas\".\n> cookies=# select count(*) from pg_class where relpersistence='t';\n> count\n> -------\n> 1\n> (1 row)\n\nI think this is not a right example to show the problem, no? Because\nyou are showing the pg_class entry and the pg_class is not a temp\nrelation so even if we avoid copying the temp relation pg_class will\nbe copied right? so you will still see this uncleaned temp relation\nentry. 
I could reproduce exactly the same issue without my patch as\nwell.\n\nSo I agree we need to avoid copying temp relations but with that the\nabove behavior will not change. Am I missing something?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 12 Mar 2022 15:19:32 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Sat, Mar 12, 2022 at 11:06 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> In fact while we are rewriting the relation during vacuum full that\n> time also we are calling log_newpage() under RelationCopyStorage() and\n> during standby if it gets promoted we will be having some buffers in\n> the buffer pool with the new relfilenode. So I think our case is also\n> the same.\n>\n> So here my stand is that we need to drop database buffers and remove\n> pending sync requests because we are deleting underlying files and if\n> we do not do that in some extreme cases then there is no need to drop\n> the buffers or remove the pending sync request and the worst\n> consequences would be waste of disk space.\n\nSo other than this open point I have fixed other comments given by you\nwhich includes,\n\n- Avoid copying temp relfilenode\n- Rename of functions CopyDatabase* -> CreateDatabaseUsing*\n- GetDatabaseRelationList and friends to ScanSourceDatabasePgClass*\n- Removed unused structure and macro because we are using the same WAL\nfor copying the database using the old method or creating the\ndirectory and version files for the new method. 
Do you think we\nshould introduce a new WAL for that instead of using the same?\n\nOther than that, I have fixed some mistakes in comments and supported\ntab completion for the new options.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 14 Mar 2022 17:21:32 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Sat, Mar 12, 2022 at 12:36 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> So here my stand is that we need to drop database buffers and remove\n> pending sync requests because we are deleting underlying files and if\n> we do not do that in some extreme cases then there is no need to drop\n> the buffers or remove the pending sync request and the worst\n> consequences would be waste of disk space.\n\nHmm, I guess you're right.\n\nOn Mon, Mar 14, 2022 at 7:51 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> - Removed unused structure and macro because we are using the same WAL\n> for copying the database using the old method or creating the\n> directory and version files for the new method. Do you think we\n> should introduce a new WAL for that instead of using the same?\n\nI think it would make sense to have two different WAL records e.g.\nXLOG_DBASE_CREATE_WAL_LOG and XLOG_DBASE_CREATE_FILE_COPY. 
Then it's\neasy to see how this could be generalized to other strategies in the\nfuture.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Mar 2022 11:33:25 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Mar 14, 2022 at 7:51 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Other than that, I have fixed some mistakes in comments and supported\n> tab completion for the new options.\n\nI was looking at 0001 and 0002 again and realized that I swapped the\nnames load_relmap_file() and read_relmap_file() from what I should\nhave done. Here's a revised version. With this, read_relmap_file() and\nwrite_relmap_file() become functions that just read and write the file\nwithout touching any global variables, and load_relmap_file() is the\nfunction that reads data from the file and puts it into a global\nvariable, which seems more sensible than the way I had it before.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 14 Mar 2022 12:04:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Mar 14, 2022 at 12:04 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Mar 14, 2022 at 7:51 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Other than that, I have fixed some mistakes in comments and supported\n> > tab completion for the new options.\n>\n> I was looking at 0001 and 0002 again and realized that I swapped the\n> names load_relmap_file() and read_relmap_file() from what I should\n> have done. Here's a revised version. 
With this, read_relmap_file() and\n> write_relmap_file() become functions that just read and write the file\n> without touching any global variables, and load_relmap_file() is the\n> function that reads data from the file and puts it into a global\n> variable, which seems more sensible than the way I had it before.\n\nRegarding 0003 and 0005, I'm not a fan of 'bool isunlogged'. I think\n'bool permanent' would be better (note BM_PERMANENT). This would\ninvolve reversing true and false.\n\nRegarding 0004, I can't really see a reason for this function to take\na LockRelId as a parameter rather than two separate OIDs. I also can't\nentirely see why it should be called LockRelationId. Maybe\nLockRelationInDatabaseById(Oid dbid, Oid relid, LOCKMODE lockmode)?\nNote that neither caller actually has a LockRelId available; both have\nto construct one.\n\nRegarding 0005:\n\n+ CREATEDB_WAL_LOG = 0,\n+ CREATEDB_FILE_COPY = 1\n\nI still think you don't need = 0 and = 1 here.\n\nI'll probably go through and do a pass over the comments once you post\nthe next version of this. There seems to be work needed in a bunch of\nplaces, but it probably makes more sense for me to go through and\nadjust the things that seem to need it rather than listing a bunch of\nchanges for you to make.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Mar 2022 12:34:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Mar 14, 2022 at 10:04 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Regarding 0004, I can't really see a reason for this function to take\n> a LockRelId as a parameter rather than two separate OIDs. I also can't\n> entirely see why it should be called LockRelationId. 
Maybe\n> LockRelationInDatabaseById(Oid dbid, Oid relid, LOCKMODE lockmode)?\n> Note that neither caller actually has a LockRelId available; both have\n> to construct one.\n\nActually we already have an existing function\nUnlockRelationId(LockRelId *relid, LOCKMODE lockmode) so it makes more\nsense to have a parallel lock function. Do you still think we should\nchange?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Mar 2022 22:14:27 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Mar 14, 2022 at 12:44 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> On Mon, Mar 14, 2022 at 10:04 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Regarding 0004, I can't really see a reason for this function to take\n> > a LockRelId as a parameter rather than two separate OIDs. I also can't\n> > entirely see why it should be called LockRelationId. Maybe\n> > LockRelationInDatabaseById(Oid dbid, Oid relid, LOCKMODE lockmode)?\n> > Note that neither caller actually has a LockRelId available; both have\n> > to construct one.\n>\n> Actually we already have an existing function\n> UnlockRelationId(LockRelId *relid, LOCKMODE lockmode) so it makes more\n> sense to have a parallel lock function. Do you still think we should\n> change?\n\nOh! OK, well, then what you did makes sense, for consistency. 
Didn't\nrealize that.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Mar 2022 12:55:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Mar 14, 2022 at 10:04 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> I think it would make sense to have two different WAL records e.g.\n> XLOG_DBASE_CREATE_WAL_LOG and XLOG_DBASE_CREATE_FILE_COPY. Then it's\n> easy to see how this could be generalized to other strategies in the\n> future.\n\nDone that way. In dbase_desc(), for XLOG_DBASE_CREATE_FILE_COPY I\nhave kept the older description i.e. \"copy dir\" and for\nXLOG_DBASE_CREATE_WAL_LOG it is \"create dir\", because logically the\nfirst one is actually copying and the second one is just creating the\ndirectory. Do you think we should be using \"copy dir file_copy\" and\n\"copy dir wal_log\" in the description as well?\n\n> On Mon, Mar 14, 2022 at 12:04 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I was looking at 0001 and 0002 again and realized that I swapped the\n> > names load_relmap_file() and read_relmap_file() from what I should\n> > have done. Here's a revised version. With this, read_relmap_file() and\n> > write_relmap_file() become functions that just read and write the file\n> > without touching any global variables, and load_relmap_file() is the\n> > function that reads data from the file and puts it into a global\n> > variable, which seems more sensible than the way I had it before.\n\nOkay, I have included this patch and rebased other patches on top of that.\n\n> Regarding 0003 and 0005, I'm not a fan of 'bool isunlogged'. I think\n> 'bool permanent' would be better (note BM_PERMANENT). 
This would\n> involve reversing true and false.\n\nOkay changed.\n\n> Regarding 0005:\n>\n> + CREATEDB_WAL_LOG = 0,\n> + CREATEDB_FILE_COPY = 1\n>\n> I still think you don't need = 0 and = 1 here.\n\nDone\n\n> I'll probably go through and do a pass over the comments once you post\n> the next version of this. There seems to be work needed in a bunch of\n> places, but it probably makes more sense for me to go through and\n> adjust the things that seem to need it rather than listing a bunch of\n> changes for you to make.\n\nSure, thanks.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 15 Mar 2022 15:23:59 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Few comments on the latest patch:\n\n- /* We need to construct the pathname for this database */\n- dbpath = GetDatabasePath(xlrec->dbid, xlrec->tsid);\n+ if (xlrec->dbid != InvalidOid)\n+ dbpath = GetDatabasePath(xlrec->dbid, xlrec->tsid);\n+ else\n+ dbpath = pstrdup(\"global\");\n\nDo we really need this change? 
Is GetDatabasePath() alone not capable\nof handling it?\n\n==\n\n+static CreateDBRelInfo *ScanSourceDatabasePgClassTuple(HeapTupleData *tuple,\n+\n Oid tbid, Oid dbid,\n+\n char *srcpath);\n+static List *ScanSourceDatabasePgClassPage(Page page, Buffer buf, Oid tbid,\n+\n Oid dbid, char *srcpath,\n+\n List *rnodelist, Snapshot snapshot);\n+static List *ScanSourceDatabasePgClass(Oid srctbid, Oid srcdbid, char\n*srcpath);\n\nI think we can shorten these function names to probably\nScanSourceDBPgClassRel(), ScanSourceDBPgClassTuple() and likewise?\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Tue, Mar 15, 2022 at 3:24 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Mar 14, 2022 at 10:04 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> > I think it would make sense to have two different WAL records e.g.\n> > XLOG_DBASE_CREATE_WAL_LOG and XLOG_DBASE_CREATE_FILE_COPY. Then it's\n> > easy to see how this could be generalized to other strategies in the\n> > future.\n>\n> Done that way. In dbase_desc(), for XLOG_DBASE_CREATE_FILE_COPY I\n> have kept the older description i.e. \"copy dir\" and for\n> XLOG_DBASE_CREATE_WAL_LOG it is \"create dir\", because logically the\n> first one is actually copying and the second one is just creating the\n> directory. Do you think we should be using \"copy dir file_copy\" and\n> \"copy dir wal_log\" in the description as well?\n>\n> > On Mon, Mar 14, 2022 at 12:04 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > I was looking at 0001 and 0002 again and realized that I swapped the\n> > > names load_relmap_file() and read_relmap_file() from what I should\n> > > have done. Here's a revised version. 
With this, read_relmap_file() and\n> > > write_relmap_file() become functions that just read and write the file\n> > > without touching any global variables, and load_relmap_file() is the\n> > > function that reads data from the file and puts it into a global\n> > > variable, which seems more sensible than the way I had it before.\n>\n> Okay, I have included this patch and rebased other patches on top of that.\n>\n> > Regarding 0003 and 0005, I'm not a fan of 'bool isunlogged'. I think\n> > 'bool permanent' would be better (note BM_PERMANENT). This would\n> > involve reversing true and false.\n>\n> Okay changed.\n>\n> > Regarding 0005:\n> >\n> > + CREATEDB_WAL_LOG = 0,\n> > + CREATEDB_FILE_COPY = 1\n> >\n> > I still think you don't need = 0 and = 1 here.\n>\n> Done\n>\n> > I'll probably go through and do a pass over the comments once you post\n> > the next version of this. There seems to be work needed in a bunch of\n> > places, but it probably makes more sense for me to go through and\n> > adjust the things that seem to need it rather than listing a bunch of\n> > changes for you to make.\n>\n> Sure, thanks.\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Mar 2022 22:00:33 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Mar 15, 2022 at 12:30 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Few comments on the latest patch:\n>\n> - /* We need to construct the pathname for this database */\n> - dbpath = GetDatabasePath(xlrec->dbid, xlrec->tsid);\n> + if (xlrec->dbid != InvalidOid)\n> + dbpath = GetDatabasePath(xlrec->dbid, xlrec->tsid);\n> + else\n> + dbpath = pstrdup(\"global\");\n>\n> Do we really need this change? 
Is GetDatabasePath() alone not capable\n> of handling it?\n\nWell, I mean, that function has a special case for\nGLOBALTABLESPACE_OID, but GLOBALTABLESPACE_OID is 1664, and InvalidOid\nis 0.\n\n> I think we can shorten these function names to probably\n> ScanSourceDBPgClassRel(), ScanSourceDBPgClassTuple() and likewise?\n\nWe could, but I don't think it's an improvement.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Mar 2022 12:47:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Mar 15, 2022 at 10:17 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Mar 15, 2022 at 12:30 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > Few comments on the latest patch:\n> >\n> > - /* We need to construct the pathname for this database */\n> > - dbpath = GetDatabasePath(xlrec->dbid, xlrec->tsid);\n> > + if (xlrec->dbid != InvalidOid)\n> > + dbpath = GetDatabasePath(xlrec->dbid, xlrec->tsid);\n> > + else\n> > + dbpath = pstrdup(\"global\");\n> >\n> > Do we really need this change? 
Is GetDatabasePath() alone not capable\n> > of handling it?\n>\n> Well, I mean, that function has a special case for\n> GLOBALTABLESPACE_OID, but GLOBALTABLESPACE_OID is 1664, and InvalidOid\n> is 0.\n>\n\nWouldn't this be true only in case of a shared map file (when dbOid is\nInvalid and tblspcOid is globaltablespace_oid) or am I missing\nsomething?\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Tue, 15 Mar 2022 22:56:31 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Mar 15, 2022 at 1:26 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > On Tue, Mar 15, 2022 at 12:30 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > Few comments on the latest patch:\n> > >\n> > > - /* We need to construct the pathname for this database */\n> > > - dbpath = GetDatabasePath(xlrec->dbid, xlrec->tsid);\n> > > + if (xlrec->dbid != InvalidOid)\n> > > + dbpath = GetDatabasePath(xlrec->dbid, xlrec->tsid);\n> > > + else\n> > > + dbpath = pstrdup(\"global\");\n> > >\n> > > Do we really need this change? 
Is GetDatabasePath() alone not capable\n> > > of handling it?\n> >\n> > Well, I mean, that function has a special case for\n> > GLOBALTABLESPACE_OID, but GLOBALTABLESPACE_OID is 1664, and InvalidOid\n> > is 0.\n> >\n>\n> Wouldn't this be true only in case of a shared map file (when dbOid is\n> Invalid and tblspcOid is globaltablespace_oid) or am I missing\n> something?\n\n*facepalm*\n\nGood catch, sorry that I'm slow on the uptake today.\n\nv3 attached.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 15 Mar 2022 13:39:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Mar 15, 2022 at 11:09 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Mar 15, 2022 at 1:26 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > On Tue, Mar 15, 2022 at 12:30 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > > Few comments on the latest patch:\n> > > >\n> > > > - /* We need to construct the pathname for this database */\n> > > > - dbpath = GetDatabasePath(xlrec->dbid, xlrec->tsid);\n> > > > + if (xlrec->dbid != InvalidOid)\n> > > > + dbpath = GetDatabasePath(xlrec->dbid, xlrec->tsid);\n> > > > + else\n> > > > + dbpath = pstrdup(\"global\");\n> > > >\n> > > > Do we really need this change? Is GetDatabasePath() alone not capable\n> > > > of handling it?\n> > >\n> > > Well, I mean, that function has a special case for\n> > > GLOBALTABLESPACE_OID, but GLOBALTABLESPACE_OID is 1664, and InvalidOid\n> > > is 0.\n> > >\n> >\n> > Wouldn't this be true only in case of a shared map file (when dbOid is\n> > Invalid and tblspcOid is globaltablespace_oid) or am I missing\n> > something?\n>\n> *facepalm*\n>\n> Good catch, sorry that I'm slow on the uptake today.\n>\n> v3 attached.\n\nThanks Ashutosh and Robert. 
Other pacthes cleanly applied on this\npatch still generated a new version so that we can find all patches\ntogether. There are no other changes.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 16 Mar 2022 10:23:33 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Mar 16, 2022 at 12:53 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Thanks Ashutosh and Robert. Other pacthes cleanly applied on this\n> patch still generated a new version so that we can find all patches\n> together. There are no other changes.\n\nI committed my v3 of my refactoring patch, here 0001.\n\nI'm working over the comments in the rest of the patch series and will\npost an updated version when I get done. I think I will likely merge\nall the remaining patches together just to make it simpler to manage;\nwe can split things out again if we need to do that.\n\nOne question that occurred to me when looking this over is whether, or\nwhy, it's safe against concurrent smgr invalidations. It seems to me\nthat every loop in the new CREATE DATABASE code needs to\nCHECK_FOR_INTERRUPTS() -- some do already -- and when they do that, I\nthink we might receive an invalidation message that causes us to\nsmgrclose() some or all of the things where we previously did\nsmgropen(). I don't quite see why that can't cause problems here. I\ntried running the src/bin/scripts regression tests with\ndebug_discard_caches=1 and none of the tests failed, so there may very\nwell be a reason why this is actually totally fine, but I don't know\nwhat it is. On the other hand, it may be that things went horribly\nwrong and the tests are just smart enough to catch it, or maybe\nthere's a problematic scenario which those tests just don't hit. I\ndon't know. 
Thoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Mar 2022 16:13:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Mar 18, 2022 at 1:44 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Mar 16, 2022 at 12:53 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Thanks Ashutosh and Robert. Other pacthes cleanly applied on this\n> > patch still generated a new version so that we can find all patches\n> > together. There are no other changes.\n>\n> I committed my v3 of my refactoring patch, here 0001.\n>\n> I'm working over the comments in the rest of the patch series and will\n> post an updated version when I get done. I think I will likely merge\n> all the remaining patches together just to make it simpler to manage;\n> we can split things out again if we need to do that.\n\nThanks for the effort.\n\n> One question that occurred to me when looking this over is whether, or\n> why, it's safe against concurrent smgr invalidations.\n\nWe are only accessing the smgr of the source database and the\ndestination database. And there is no one else that can be connected\nto the source db and the destination db is not visible to anyone. So\ndo we really need to worry about the concurrent smgr invalidation?\nWhat am I missing?\n\nIt seems to me\n> that every loop in the new CREATE DATABASE code needs to\n> CHECK_FOR_INTERRUPTS() -- some do already -- and when they do that,\n\nYes, the pg_class reading code is missing this check so we need to put\nit. But copying code like\nCreateDatabaseUsingWalLog() have it inside the deepest loop in\nRelationCopyStorageUsingBuffer() and similarly\nCreateDatabaseUsingFileCopy() have it in copydir(). 
Maybe we should\nput it in all loops so that we do not skip checking due to some\ncondition.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 18 Mar 2022 10:09:39 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Mar 18, 2022 at 12:39 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > One question that occurred to me when looking this over is whether, or\n> > why, it's safe against concurrent smgr invalidations.\n>\n> We are only accessing the smgr of the source database and the\n> destination database. And there is no one else that can be connected\n> to the source db and the destination db is not visible to anyone. So\n> do we really need to worry about the concurrent smgr invalidation?\n> What am I missing?\n\nA sinval reset can occur at any moment due to an overflow of the\nqueue. That acts as a universal reset of everything. So you can't\nreason on the basis of what somebody might be sending.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 19 Mar 2022 14:33:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Sun, Mar 20, 2022 at 12:03 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Mar 18, 2022 at 12:39 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > One question that occurred to me when looking this over is whether, or\n> > > why, it's safe against concurrent smgr invalidations.\n> >\n> > We are only accessing the smgr of the source database and the\n> > destination database. And there is no one else that can be connected\n> > to the source db and the destination db is not visible to anyone. 
So\n> > do we really need to worry about the concurrent smgr invalidation?\n> > What am I missing?\n>\n> A sinval reset can occur at any moment due to an overflow of the\n> queue. That acts as a universal reset of everything. So you can't\n> reason on the basis of what somebody might be sending.\n\nI thought that way because IIUC, when we are locking the database\ntuple we are ensuring that we are calling\nReceiveSharedInvalidMessages() right? And IIUC\nReceiveSharedInvalidMessages(), is designed such a way that it will\nconsume all the outstanding messages and that's the reason it loops\nmultiple times if it identifies that the queue is full. And if my\nassumption here is correct then I think it is also correct that now we\nonly need to worry about anyone generating new invalidations and that\nis not possible in this case.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 20 Mar 2022 11:04:39 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Sun, Mar 20, 2022 at 1:34 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I thought that way because IIUC, when we are locking the database\n> tuple we are ensuring that we are calling\n> ReceiveSharedInvalidMessages() right? And IIUC\n> ReceiveSharedInvalidMessages(), is designed such a way that it will\n> consume all the outstanding messages and that's the reason it loops\n> multiple times if it identifies that the queue is full. And if my\n> assumption here is correct then I think it is also correct that now we\n> only need to worry about anyone generating new invalidations and that\n> is not possible in this case.\n\nWell, I don't see how that chain of logic addresses my concern about\nsinval reset.\n\nMind you, I'm not sure there's an actual problem here, because I tried\ntesting the patch with debug_discard_caches=1 and nothing failed. 
But\nI still don't understand WHY nothing failed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Mar 2022 09:36:52 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Mar 21, 2022 at 7:07 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sun, Mar 20, 2022 at 1:34 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I thought that way because IIUC, when we are locking the database\n> > tuple we are ensuring that we are calling\n> > ReceiveSharedInvalidMessages() right? And IIUC\n> > ReceiveSharedInvalidMessages(), is designed such a way that it will\n> > consume all the outstanding messages and that's the reason it loops\n> > multiple times if it identifies that the queue is full. And if my\n> > assumption here is correct then I think it is also correct that now we\n> > only need to worry about anyone generating new invalidations and that\n> > is not possible in this case.\n>\n> Well, I don't see how that chain of logic addresses my concern about\n> sinval reset.\n>\n> Mind you, I'm not sure there's an actual problem here, because I tried\n> testing the patch with debug_discard_caches=1 and nothing failed. But\n> I still don't understand WHY nothing failed.\n\nOkay, I see what you are saying. Yeah this looks like a problem to me\nas well. 
I will try to reproduce this issue.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Mar 2022 20:29:26 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Mar 21, 2022 at 8:29 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Mar 21, 2022 at 7:07 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Sun, Mar 20, 2022 at 1:34 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > I thought that way because IIUC, when we are locking the database\n> > > tuple we are ensuring that we are calling\n> > > ReceiveSharedInvalidMessages() right? And IIUC\n> > > ReceiveSharedInvalidMessages(), is designed such a way that it will\n> > > consume all the outstanding messages and that's the reason it loops\n> > > multiple times if it identifies that the queue is full. And if my\n> > > assumption here is correct then I think it is also correct that now we\n> > > only need to worry about anyone generating new invalidations and that\n> > > is not possible in this case.\n> >\n> > Well, I don't see how that chain of logic addresses my concern about\n> > sinval reset.\n> >\n> > Mind you, I'm not sure there's an actual problem here, because I tried\n> > testing the patch with debug_discard_caches=1 and nothing failed. But\n> > I still don't understand WHY nothing failed.\n>\n> Okay, I see what you are saying. Yeah this looks like a problem to me\n> as well. I will try to reproduce this issue.\n\nI tried to debug the case but I realized that somehow\nCHECK_FOR_INTERRUPTS() is not calling the\nAcceptInvalidationMessages() and I could not find the same while\nlooking into the code as well. 
While debugging I noticed that\nAcceptInvalidationMessages() is called multiple times but that is only\nthrough LockRelationId() but while locking the relation we had already\nclosed the previous smgr because at a time we keep only one smgr open.\nAnd that's the reason it is not hitting the issue which we think it\ncould. Is there any condition under which it will call\nAcceptInvalidationMessages() through CHECK_FOR_INTERRUPTS() ? because\nI could not see while debugging as well as in code.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Mar 2022 20:51:12 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Mar 21, 2022 at 11:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I tried to debug the case but I realized that somehow\n> CHECK_FOR_INTERRUPTS() is not calling the\n> AcceptInvalidationMessages() and I could not find the same while\n> looking into the code as well. While debugging I noticed that\n> AcceptInvalidationMessages() is called multiple times but that is only\n> through LockRelationId() but while locking the relation we had already\n> closed the previous smgr because at a time we keep only one smgr open.\n> And that's the reason it is not hitting the issue which we think it\n> could. Is there any condition under which it will call\n> AcceptInvalidationMessages() through CHECK_FOR_INTERRUPTS() ? because\n> I could not see while debugging as well as in code.\n\nYeah, I think the reason you can't find it is that it's not there. I\nwas confused in what I wrote earlier. I think we only process sinval\ncatchups when we're idle, not at every CHECK_FOR_INTERRUPTS(). And I\nthink the reason for that is precisely that it would be hard to write\ncorrect code otherwise, since invalidations might then get processed\nin a lot more places. So ... 
I guess all we really need to do here is\navoid assuming that the results of smgropen() are valid across any\ncode that might acquire relation locks. Which I think the code already\ndoes.\n\nBut on a related note, why doesn't CreateDatabaseUsingWalLog() acquire\nlocks on both the source and destination relations? It looks like\nyou're only taking locks for the source template database ... but I\nthought the intention here was to make sure that we didn't pull pages\ninto shared_buffers without holding a lock on the relation and/or the\ndatabase? I suppose the point is that while the template database\nmight be concurrently dropped, nobody can be doing anything\nconcurrently to the target database because nobody knows that it\nexists yet. Still, I think that this would be the only case where we\nlet pages into shared_buffers without a relation or database lock,\nthough maybe I'm confused about this point, too. If not, perhaps we\nshould consider locking the target database OID and each relation OID\nas we are copying it?\n\nI guess I'm imagining that there might be more code pathways in the\nfuture that want to ensure that there are no remaining buffers for\nsome particular database or relation OID. It seems natural to want to\nbe able to take some lock that prevents buffers from being added, and\nthen go and get rid of all the ones that are there already. But I\nadmit I can't quite think of a concrete case where we'd want to do\nsomething like this where the patch as coded would be a problem. 
I'm\njust thinking perhaps taking locks is fairly harmless and might avoid\nsome hypothetical problem later.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Mar 2022 14:23:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Mar 21, 2022 at 11:53 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Mar 21, 2022 at 11:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I tried to debug the case but I realized that somehow\n> > CHECK_FOR_INTERRUPTS() is not calling the\n> > AcceptInvalidationMessages() and I could not find the same while\n> > looking into the code as well. While debugging I noticed that\n> > AcceptInvalidationMessages() is called multiple times but that is only\n> > through LockRelationId() but while locking the relation we had already\n> > closed the previous smgr because at a time we keep only one smgr open.\n> > And that's the reason it is not hitting the issue which we think it\n> > could. Is there any condition under which it will call\n> > AcceptInvalidationMessages() through CHECK_FOR_INTERRUPTS() ? because\n> > I could not see while debugging as well as in code.\n>\n> Yeah, I think the reason you can't find it is that it's not there. I\n> was confused in what I wrote earlier. I think we only process sinval\n> catchups when we're idle, not at every CHECK_FOR_INTERRUPTS(). And I\n> think the reason for that is precisely that it would be hard to write\n> correct code otherwise, since invalidations might then get processed\n> in a lot more places. So ... I guess all we really need to do here is\n> avoid assuming that the results of smgropen() are valid across any\n> code that might acquire relation locks. 
Which I think the code already\n> does.\n>\n> But on a related note, why doesn't CreateDatabaseUsingWalLog() acquire\n> locks on both the source and destination relations? It looks like\n> you're only taking locks for the source template database ... but I\n> thought the intention here was to make sure that we didn't pull pages\n> into shared_buffers without holding a lock on the relation and/or the\n> database? I suppose the point is that while the template database\n> might be concurrently dropped, nobody can be doing anything\n> concurrently to the target database because nobody knows that it\n> exists yet. Still, I think that this would be the only case where we\n> let pages into shared_buffers without a relation or database lock,\n> though maybe I'm confused about this point, too. If not, perhaps we\n> should consider locking the target database OID and each relation OID\n> as we are copying it?\n>\n> I guess I'm imagining that there might be more code pathways in the\n> future that want to ensure that there are no remaining buffers for\n> some particular database or relation OID. It seems natural to want to\n> be able to take some lock that prevents buffers from being added, and\n> then go and get rid of all the ones that are there already. But I\n> admit I can't quite think of a concrete case where we'd want to do\n> something like this where the patch as coded would be a problem. I'm\n> just thinking perhaps taking locks is fairly harmless and might avoid\n> some hypothetical problem later.\n>\n> Thoughts?\n\nI think this makes sense. 
I haven't changed the original patch since you\nsaid you were improving on some comments, so in order to avoid\nconflict I have created this add-on patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 22 Mar 2022 10:28:38 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Mar 22, 2022 at 10:28 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n>\n> I think this makes sense. I haven't changed the original patch since you\n> said you were improving on some comments, so in order to avoid\n> conflict I have created this add-on patch.\n>\n\nIn my previous patch I mistakenly used src_dboid instead of\ndest_dboid. Fixed in this version. For the destination db I have used\nlock mode AccessShareLock. Logically, if we look at it access-wise, we\ndon't want anyone else to be accessing that db, but that is anyway\nprotected because it is not visible to anyone else. So I think\nAccessShareLock should be correct here because we are just taking\nthis lock because we are accessing pages in shared buffers from this\ndatabase's relations.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 22 Mar 2022 14:30:05 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Mar 22, 2022 at 5:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> In my previous patch I mistakenly used src_dboid instead of\n> dest_dboid. Fixed in this version. For the destination db I have used\n> lock mode AccessShareLock. Logically, if we look at it access-wise, we\n> don't want anyone else to be accessing that db, but that is anyway\n> protected because it is not visible to anyone else. 
So I think\n> AccessShareLock should be correct here because we are just taking\n> this lock because we are accessing pages in shared buffers from this\n> database's relations.\n\nHere's my worked-over version of your previous patch. I haven't tried\nto incorporate your incremental patch that you just posted.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 22 Mar 2022 11:23:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Mar 22, 2022 at 11:23 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Here's my worked-over version of your previous patch. I haven't tried\n> to incorporate your incremental patch that you just posted.\n\nAlso, please have a look at the XXX comments that I added in a few\nplaces where I think you need to make further changes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Mar 2022 11:24:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-03-22 11:23:16 -0400, Robert Haas wrote:\n> From 116bcdb6174a750b7ef7ae05ef6f39cebaf9bcf5 Mon Sep 17 00:00:00 2001\n> From: Robert Haas <rhaas@postgresql.org>\n> Date: Tue, 22 Mar 2022 11:22:26 -0400\n> Subject: [PATCH v1] Add new block-by-block strategy for CREATE DATABASE.\n\nI might have missed it because I just skimmed the patch. But I still think it\nshould contain a comment detailing why accessing catalogs from another\ndatabase is safe in this instance, and perhaps a comment or three in places\nthat could break it (e.g. 
snapshot computation, horizon stuff).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 22 Mar 2022 08:42:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Mar 22, 2022 at 11:42 AM Andres Freund <andres@anarazel.de> wrote:\n> I might have missed it because I just skimmed the patch. But I still think it\n> should contain a comment detailing why accessing catalogs from another\n> database is safe in this instance, and perhaps a comment or three in places\n> that could break it (e.g. snapshot computation, horizon stuff).\n\nPlease see the function header comment for ScanSourceDatabasePgClass.\nI don't quite see how changes in those places would break this, but if\nyou want to be more specific perhaps I will see the light?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Mar 2022 11:55:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Mar 21, 2022 at 2:23 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Mar 21, 2022 at 11:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I tried to debug the case but I realized that somehow\n> > CHECK_FOR_INTERRUPTS() is not calling the\n> > AcceptInvalidationMessages() and I could not find the same while\n> > looking into the code as well. While debugging I noticed that\n> > AcceptInvalidationMessages() is called multiple times but that is only\n> > through LockRelationId() but while locking the relation we had already\n> > closed the previous smgr because at a time we keep only one smgr open.\n> > And that's the reason it is not hitting the issue which we think it\n> > could. Is there any condition under which it will call\n> > AcceptInvalidationMessages() through CHECK_FOR_INTERRUPTS() ? 
because\n> > I could not see while debugging as well as in code.\n>\n> Yeah, I think the reason you can't find it is that it's not there. I\n> was confused in what I wrote earlier. I think we only process sinval\n> catchups when we're idle, not at every CHECK_FOR_INTERRUPTS(). And I\n> think the reason for that is precisely that it would be hard to write\n> correct code otherwise, since invalidations might then get processed\n> in a lot more places. So ... I guess all we really need to do here is\n> avoid assuming that the results of smgropen() are valid across any\n> code that might acquire relation locks. Which I think the code already\n> does.\n\nSo I talked to Andres and Thomas about this and they told me that I\nwas right to worry about this problem. Over on the thread about \"wrong\nfds used for refilenodes after pg_upgrade relfilenode changes\nReply-To:\" there is a plan to make use ProcSignalBarrier to make smgr\nobjects disappear, and ProcSignalBarrier can be processed at any\nCHECK_FOR_INTERRUPTS(), so then we'd have a problem here. Commit\nf10f0ae420ee62400876ab34dca2c09c20dcd030 established a policy that you\nshould always re-fetch the smgr object instead of reusing one you've\nalready got, and even before that it was known to be unsafe to keep\nthem around for any period of time, because anything that opened a\nrelation, including a syscache lookup, could potentially accept\ninvalidations. So most of our code is already hardened against the\npossibility of smgr objects disappearing. I have a feeling there may\nbe some that isn't, but it would be good if this patch didn't\nintroduce more such code at the same time that patch is trying to\nintroduce more ways to get rid of smgr objects. It was suggested to me\nthat what this patch ought to be doing is calling\nCreateFakeRelcacheEntry() and then using RelationGetSmgr(fakerel)\nevery time we need the SmgrRelation, without ever keeping it around\nfor any amount of code. 
That way, if the smgr relation gets closed out\nfrom under us at a CHECK_FOR_INTERRUPTS(), we'll just recreate it at\nthe next RelationGetSmgr() call.\n\nAndres also noted that he thinks the patch performs redundant cleanup,\nbecause of the fact that it uses RelationCreateStorage. That will\narrange to remove files on abort, but createdb() also has its own\nmechanism for that. It doesn't seem like a thing to do twice in two\ndifferent ways.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Mar 2022 16:44:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Mar 22, 2022 at 8:53 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Mar 22, 2022 at 5:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > In my previous patch I mistakenly used src_dboid instead of\n> > dest_dboid. Fixed in this version. For the destination db I have used\n> > lock mode AccessShareLock. Logically, if we look at it access-wise, we\n> > don't want anyone else to be accessing that db, but that is anyway\n> > protected because it is not visible to anyone else. So I think\n> > AccessShareLock should be correct here because we are just taking\n> > this lock because we are accessing pages in shared buffers from this\n> > database's relations.\n>\n> Here's my worked-over version of your previous patch. I haven't tried\n> to incorporate your incremental patch that you just posted.\n\nThanks for working on the comments. 
Please find the updated version,\nwhich includes the below changes:\n- Worked on the XXX comments added by you.\n- Added a database-level lock for the target database as well.\n- Used a fake relcache entry and removed direct access to the smgr. I think it\nwas not really necessary in\nScanSourceDatabasePgClass() because we are using it for a very short\nperiod of time, but still I have changed it; let me know if you think\nthat it is unnecessary to create the fake relcache here.\n- Removed extra space in createdb.c and fixed the test case.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 23 Mar 2022 14:06:41 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Mar 23, 2022 at 2:14 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> So I talked to Andres and Thomas about this and they told me that I\n> was right to worry about this problem. Over on the thread about \"wrong\n> fds used for refilenodes after pg_upgrade relfilenode changes\n> Reply-To:\" there is a plan to make use ProcSignalBarrier to make smgr\n> objects disappear, and ProcSignalBarrier can be processed at any\n> CHECK_FOR_INTERRUPTS(), so then we'd have a problem here. Commit\n> f10f0ae420ee62400876ab34dca2c09c20dcd030 established a policy that you\n> should always re-fetch the smgr object instead of reusing one you've\n> already got, and even before that it was known to be unsafe to keep\n> them around for any period of time, because anything that opened a\n> relation, including a syscache lookup, could potentially accept\n> invalidations. So most of our code is already hardened against the\n> possibility of smgr objects disappearing. I have a feeling there may
I have a feeling there may\n> be some that isn't, but it would be good if this patch didn't\n> introduce more such code at the same time that patch is trying to\n> introduce more ways to get rid of smgr objects. It was suggested to me\n> that what this patch ought to be doing is calling\n> CreateFakeRelcacheEntry() and then using RelationGetSmgr(fakerel)\n> every time we need the SmgrRelation, without ever keeping it around\n> for any amount of code. That way, if the smgr relation gets closed out\n> from under us at a CHECK_FOR_INTERRUPTS(), we'll just recreate it at\n> the next RelationGetSmgr() call.\n\nOkay, I have changed this in my latest version of the patch.\n\n\n> Andres also noted that he thinks the patch performs redundant cleanup,\n> because of the fact that it uses RelationCreateStorage. That will\n> arrange to remove files on abort, but createdb() also has its own\n> mechanism for that. It doesn't seem like a thing to do twice in two\n> different ways.\n\nOkay this is an interesting point. So one option is that in case of\nfailure while using the wal log strategy we do not remove the database\ndirectory, because an abort transaction will take care of removing the\nrelation file. But then in failure case we will leave the orphaned\ndatabase directory with version file and the relmap file. Another\noption is to do the redundant cleanup as we are doing now. Any other\noptions?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Mar 2022 14:12:23 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Mar 23, 2022 at 4:42 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Okay this is an interesting point. 
So one option is that in case of\n> failure while using the wal log strategy we do not remove the database\n> directory, because an abort transaction will take care of removing the\n> relation file. But then in failure case we will leave the orphaned\n> database directory with version file and the relmap file. Another\n> option is to do the redundant cleanup as we are doing now. Any other\n> options?\n\nI think our overriding goal should be to get everything using one\nmechanism. It doesn't look straightforward to get everything to go\nthrough the PendingRelDelete mechanism, because as you say, it can't\nhandle non-relation files or directories. However, what if we opt out\nof that mechanism? We could do that either by not using\nRelationCreateStorage() in the first place and directly calling\nsmgrcreate(), or by using RelationPreserveStorage() afterwards to yank\nthe file back out of the list.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Mar 2022 08:24:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Mar 23, 2022 at 5:54 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Mar 23, 2022 at 4:42 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Okay this is an interesting point. So one option is that in case of\n> > failure while using the wal log strategy we do not remove the database\n> > directory, because an abort transaction will take care of removing the\n> > relation file. But then in failure case we will leave the orphaned\n> > database directory with version file and the relmap file. Another\n> > option is to do the redundant cleanup as we are doing now. Any other\n> > options?\n>\n> I think our overriding goal should be to get everything using one\n> mechanism. 
It doesn't look straightforward to get everything to go\n> through the PendingRelDelete mechanism, because as you say, it can't\n> handle non-relation files or directories. However, what if we opt out\n> of that mechanism? We could do that either by not using\n> RelationCreateStorage() in the first place and directly calling\n> smgrcreate(), or by using RelationPreserveStorage() afterwards to yank\n> the file back out of the list.\n\nI think directly using smgrcreate() is a better idea instead of first\nregistering and then unregistering it. I have made that change in\nthe attached patch. After this change now we can merge creating the\nMAIN_FORKNUM also in the loop below where we are creating other\nfork[1] with one extra condition but I think current code is in more\nsync with the other code where we are doing the similar things so I\nhave not merged it in the loop. Please let me know if you think\notherwise.\n\n[1]\n+ /*\n+ * Create and copy all forks of the relation. We are not using\n+ * RelationCreateStorage() as it is registering the cleanup for the\n+ * underlying relation storage on the transaction abort. But during create\n+ * database failure, we have a separate cleanup mechanism for the whole\n+ * database directory. Therefore, we don't need to register cleanup for\n+ * each individual relation storage.\n+ */\n+ smgrcreate(RelationGetSmgr(dst_rel), MAIN_FORKNUM, false);\n+ if (permanent)\n+ log_smgrcreate(&dst_rnode, MAIN_FORKNUM);\n+\n+ /* copy main fork. 
*/\n+ RelationCopyStorageUsingBuffer(src_rel, dst_rel, MAIN_FORKNUM, permanent);\n+\n+ /* copy those extra forks that exist */\n+ for (ForkNumber forkNum = MAIN_FORKNUM + 1;\n+ forkNum <= MAX_FORKNUM; forkNum++)\n+ {\n+ if (smgrexists(RelationGetSmgr(src_rel), forkNum))\n+ {\n+ smgrcreate(RelationGetSmgr(dst_rel), forkNum, false);\n+\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 23 Mar 2022 18:49:11 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Mar 23, 2022 at 9:19 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I think directly using smgrcreate() is a better idea instead of first\n> registering and then unregistering it. I have made that change in\n> the attached patch. After this change now we can merge creating the\n> MAIN_FORKNUM also in the loop below where we are creating other\n> fork[1] with one extra condition but I think current code is in more\n> sync with the other code where we are doing the similar things so I\n> have not merged it in the loop. Please let me know if you think\n> otherwise.\n\nGenerally I think our practice is that we do the main fork\nunconditionally (because it should always be there) and the others\nonly if they exist. I suggest that you make this consistent with that,\nbut you could do it like if (forkNum != MAIN_FORKNUM &&\n!smgrexists(...)) continue if that seems nicer.\n\nDo you think that this version handles pending syncs correctly? 
I\nthink perhaps that is overlooked.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Mar 2022 09:33:34 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Mar 23, 2022 at 7:03 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Mar 23, 2022 at 9:19 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I think directly using smgrcreate() is a better idea instead of first\n> > registering and then unregistering it. I have made that change in\n> > the attached patch. After this change now we can merge creating the\n> > MAIN_FORKNUM also in the loop below where we are creating other\n> > fork[1] with one extra condition but I think current code is in more\n> > sync with the other code where we are doing the similar things so I\n> > have not merged it in the loop. Please let me know if you think\n> > otherwise.\n>\n> Generally I think our practice is that we do the main fork\n> unconditionally (because it should always be there) and the others\n> only if they exist. I suggest that you make this consistent with that,\n> but you could do it like if (forkNum != MAIN_FORKNUM &&\n> !smgrexists(...)) continue if that seems nicer.\n\nMaybe we can do that.\n\n> Do you think that this version handles pending syncs correctly? I\n> think perhaps that is overlooked.\n\nYeah I missed that. So options are either we go to the other approach\nand call RelationPreserveStorage() after\nRelationCreateStorage(), or we expose the AddPendingSync() function\nfrom the storage layer and then conditionally use it. I think if we\nare planning to expose this api then we better rename it to\nRelationAddPendingSync(). Honestly, I do not have any specific\npreference here. 
I can try both the approaches and send both if you\nor anyone else do not have any preference here?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Mar 2022 21:05:09 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-03-23 18:49:11 +0530, Dilip Kumar wrote:\n> I think directly using smgrcreate() is a better idea instead of first\n> registering and then unregistering it. I have made that change in\n> the attached patch. After this change now we can merge creating the\n> MAIN_FORKNUM also in the loop below where we are creating other\n> fork[1] with one extra condition but I think current code is in more\n> sync with the other code where we are doing the similar things so I\n> have not merged it in the loop. Please let me know if you think\n> otherwise.\n\nFWIW, this fails tests: https://cirrus-ci.com/build/4929662173315072\nhttps://cirrus-ci.com/task/6651773434724352?logs=test_bin#L121\nhttps://cirrus-ci.com/task/6088823481303040?logs=test_world#L2377\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 23 Mar 2022 08:43:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Mar 23, 2022 at 9:13 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-03-23 18:49:11 +0530, Dilip Kumar wrote:\n> > I think directly using smgrcreate() is a better idea instead of first\n> > registering and then unregistering it. I have made that change in\n> > the attached patch. 
After this change now we can merge creating the\n> > MAIN_FORKNUM also in the loop below where we are creating other\n> > fork[1] with one extra condition but I think current code is in more\n> > sync with the other code where we are doing the similar things so I\n> > have not merged it in the loop. Please let me know if you think\n> > otherwise.\n>\n> FWIW, this fails tests: https://cirrus-ci.com/build/4929662173315072\n> https://cirrus-ci.com/task/6651773434724352?logs=test_bin#L121\n> https://cirrus-ci.com/task/6088823481303040?logs=test_world#L2377\n\nStrange to see that these changes are making a failure in the\nfile_copy strategy[1] because we made changes only related to the\nwal_log strategy. However I will look into this. Thanks.\n[1]\nFailed test 'createdb -T foobar2 foobar5 -S file_copy exit code 0'\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Mar 2022 21:25:10 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Mar 23, 2022 at 9:05 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Mar 23, 2022 at 7:03 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Wed, Mar 23, 2022 at 9:19 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > I think directly using smgrcreate() is a better idea instead of first\n> > > registering and then unregistering it. I have made that change in\n> > > the attached patch. After this change now we can merge creating the\n> > > MAIN_FORKNUM also in the loop below where we are creating other\n> > > fork[1] with one extra condition but I think current code is in more\n> > > sync with the other code where we are doing the similar things so I\n> > > have not merged it in the loop. 
Please let me know if you think\n> > > otherwise.\n> >\n> > Generally I think our practice is that we do the main fork\n> > unconditionally (because it should always be there) and the others\n> > only if they exist. I suggest that you make this consistent with that,\n> > but you could do it like if (forkNum != MAIN_FORKNUM &&\n> > !smgrexists(...)) continue if that seems nicer.\n>\n> Maybe we can do that.\n>\n> > Do you think that this version handles pending syncs correctly? I\n> > think perhaps that is overlooked.\n>\n> Yeah I missed that. So options are either we go to the other approach\n> and call RelationPreserveStorage() after\n> RelationCreateStorage(),\n\nHere is the patch with this approach, I am not sending both patches\nwith different approaches in the same mail otherwise cfbot might\ngenerate conflict while applying the patch I think, so I will send it\nin a seperate mail.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 23 Mar 2022 21:49:38 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Mar 23, 2022 at 9:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Mar 23, 2022 at 9:13 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2022-03-23 18:49:11 +0530, Dilip Kumar wrote:\n> > > I think directly using smgrcreate() is a better idea instead of first\n> > > registering and then unregistering it. I have made that change in\n> > > the attached patch. After this change now we can merge creating the\n> > > MAIN_FORKNUM also in the loop below where we are creating other\n> > > fork[1] with one extra condition but I think current code is in more\n> > > sync with the other code where we are doing the similar things so I\n> > > have not merged it in the loop. 
Please let me know if you think\n> > > otherwise.\n> >\n> > FWIW, this fails tests: https://cirrus-ci.com/build/4929662173315072\n> > https://cirrus-ci.com/task/6651773434724352?logs=test_bin#L121\n> > https://cirrus-ci.com/task/6088823481303040?logs=test_world#L2377\n>\n> Strange to see that these changes are making a failure in the\n> file_copy strategy[1] because we made changes only related to the\n> wal_log strategy. However I will look into this. Thanks.\n> [1]\n> Failed test 'createdb -T foobar2 foobar5 -S file_copy exit code 0'\n\nI could not see any reason for it to fail, and I could not reproduce\nit either. Is it possible to access the server log for this cfbot\nfailure?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Mar 2022 22:29:40 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-03-23 22:29:40 +0530, Dilip Kumar wrote:\n> I could not see any reason for it to fail, and I could not reproduce\n> it either. Is it possible to access the server log for this cfbot\n> failure?\n\nYes, near the top, below the cpu / memory graphs, there's a file\nnavigator. Should have all files ending with *.log or starting with\nregress_log_*.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 23 Mar 2022 10:07:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Mar 23, 2022 at 10:37 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-03-23 22:29:40 +0530, Dilip Kumar wrote:\n> > I could not see any reason for it to fail, and I could not reproduce\n> > it either. 
Is it possible to access the server log for this cfbot\n> > failure?\n>\n> Yes, near the top, below the cpu / memory graphs, there's a file\n> navigator. Should have all files ending with *.log or starting with\n> regress_log_*.\n\nOkay, I think I have found the reasoning for this failure, basically,\nif we see the below logs then the second statement is failing with\nfoobar5 already exists and that is because some of the above test case\nis conditionally generating the same name. So the fix is to use a\ndifferent name.\n\n2022-03-23 13:53:54.554 UTC [32647][client backend]\n[020_createdb.pl][3/12:0] LOG: statement: CREATE DATABASE foobar5\nTEMPLATE template0 LOCALE_PROVIDER icu ICU_LOCALE 'en';\n......\n2022-03-23 13:53:55.374 UTC [32717][client backend]\n[020_createdb.pl][3/46:0] LOG: statement: CREATE DATABASE foobar5\nSTRATEGY file_copy TEMPLATE foobar2;\n2022-03-23 13:53:55.390 UTC [32717][client backend]\n[020_createdb.pl][3/46:0] ERROR: database \"foobar5\" already exists\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Mar 2022 22:50:17 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Mar 23, 2022 at 10:50 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Mar 23, 2022 at 10:37 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2022-03-23 22:29:40 +0530, Dilip Kumar wrote:\n> > > I could not see any reason for it to fail, and I could not reproduce\n> > > it either. Is it possible to access the server log for this cfbot\n> > > failure?\n> >\n> > Yes, near the top, below the cpu / memory graphs, there's a file\n> > navigator. 
Should have all files ending with *.log or starting with\n> > regress_log_*.\n>\n> Okay, I think I have found the reasoning for this failure, basically,\n> if we see the below logs then the second statement is failing with\n> foobar5 already exists and that is because some of the above test case\n> is conditionally generating the same name. So the fix is to use a\n> different name.\n\nIn the latest version I have fixed this issue by using a non\nconflicting name, because when it was compiled with-icu the foobar5\nwas already used and we were seeing failure. Apart from this I have\nfixed the duplicate cleanup problem by passing an extra parameter to\nRelationCreateStorage, which decides whether to register for on-abort\ndelete or not and added the comments for the same. IMHO this looks\nthe most cleaner way to do it, please check the patch and let me know\nyour thoughts.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 24 Mar 2022 10:58:47 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Mar 24, 2022 at 1:29 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> In the latest version I have fixed this issue by using a non\n> conflicting name, because when it was compiled with-icu the foobar5\n> was already used and we were seeing failure. Apart from this I have\n> fixed the duplicate cleanup problem by passing an extra parameter to\n> RelationCreateStorage, which decides whether to register for on-abort\n> delete or not and added the comments for the same. IMHO this looks\n> the most cleaner way to do it, please check the patch and let me know\n> your thoughts.\n\nI think that might be an OK way to do it. 
I think if we were starting\nfrom scratch we'd probably want to come up with some better system,\nbut that's true of a lot of things.\n\nI went over your version and changed some comments. I also added\ndocumentation for the new wait event. Here's a new version.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 24 Mar 2022 11:59:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Mar 24, 2022 at 9:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Mar 24, 2022 at 1:29 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > In the latest version I have fixed this issue by using a non\n> > conflicting name, because when it was compiled with-icu the foobar5\n> > was already used and we were seeing failure. Apart from this I have\n> > fixed the duplicate cleanup problem by passing an extra parameter to\n> > RelationCreateStorage, which decides whether to register for on-abort\n> > delete or not and added the comments for the same. IMHO this looks\n> > the most cleaner way to do it, please check the patch and let me know\n> > your thoughts.\n>\n> I think that might be an OK way to do it. I think if we were starting\n> from scratch we'd probably want to come up with some better system,\n> but that's true of a lot of things.\n\nRight.\n\n> I went over your version and changed some comments. I also added\n> documentation for the new wait event. 
Here's a new version.\n>\n\nThanks, I have gone through your changes in comments and docs and those LGTM.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 24 Mar 2022 21:42:39 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Mar 24, 2022 at 12:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Thanks, I have gone through your changes in comments and docs and those LGTM.\n\nIt looks like this patch will need to be updated for Alvaro's commit\n49d9cfc68bf4e0d32a948fe72d5a0ef7f464944e. The newly added test\n029_replay_tsp_drops.pl fails with this patch applied. The standby log\nshows:\n\n2022-03-25 10:00:10.022 EDT [38209] LOG: entering standby mode\n2022-03-25 10:00:10.024 EDT [38209] LOG: redo starts at 0/3000028\n2022-03-25 10:00:10.062 EDT [38209] FATAL: could not create directory\n\"pg_tblspc/16385/PG_15_202203241/16390\": No such file or directory\n2022-03-25 10:00:10.062 EDT [38209] CONTEXT: WAL redo at 0/43EBD88\nfor Database/CREATE_WAL_LOG: create dir 16385/16390\n\nOn a quick look, I'm guessing that XLOG_DBASE_CREATE_WAL_LOG will need\nto mirror some of the logic that was added to the replay code for the\nexisting strategy, but I haven't figured out the details.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 25 Mar 2022 10:11:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Mar 25, 2022 at 7:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Mar 24, 2022 at 12:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Thanks, I have gone through your changes in comments and docs and those LGTM.\n>\n> It looks like this patch will need to be updated for Alvaro's commit\n> 
49d9cfc68bf4e0d32a948fe72d5a0ef7f464944e. The newly added test\n> 029_replay_tsp_drops.pl fails with this patch applied. The standby log\n> shows:\n>\n> 2022-03-25 10:00:10.022 EDT [38209] LOG: entering standby mode\n> 2022-03-25 10:00:10.024 EDT [38209] LOG: redo starts at 0/3000028\n> 2022-03-25 10:00:10.062 EDT [38209] FATAL: could not create directory\n> \"pg_tblspc/16385/PG_15_202203241/16390\": No such file or directory\n> 2022-03-25 10:00:10.062 EDT [38209] CONTEXT: WAL redo at 0/43EBD88\n> for Database/CREATE_WAL_LOG: create dir 16385/16390\n>\n> On a quick look, I'm guessing that XLOG_DBASE_CREATE_WAL_LOG will need\n> to mirror some of the logic that was added to the replay code for the\n> existing strategy, but I haven't figured out the details.\n>\n\nYeah, I think I got it, for XLOG_DBASE_CREATE_WAL_LOG now we will have\nto handle the missing parent directory case, like Alvaro handled for\nthe XLOG_DBASE_CREATE(_FILE_COPY) case.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 25 Mar 2022 20:16:16 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Mar 25, 2022 at 8:16 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > On a quick look, I'm guessing that XLOG_DBASE_CREATE_WAL_LOG will need\n> > to mirror some of the logic that was added to the replay code for the\n> > existing strategy, but I haven't figured out the details.\n> >\n>\n> Yeah, I think I got it, for XLOG_DBASE_CREATE_WAL_LOG now we will have\n> to handle the missing parent directory case, like Alvaro handled for\n> the XLOG_DBASE_CREATE(_FILE_COPY) case.\n\nI have updated the patch so now we skip the XLOG_DBASE_CREATE_WAL_LOG\nas well if the tablespace directory is missing. 
But with our new\nwal_log method there will be other follow up wal logs like,\nXLOG_RELMAP_UPDATE, XLOG_SMGR_CREATE and XLOG_FPI.\n\nI have put the similar logic for relmap_update WAL replay as well, but\nwe don't need this for smgr_create or fpi, because mdcreate() takes\ncare of creating the missing directory in TablespaceCreateDbspace(),\nand fpi is only logged after we create the new smgr, at least in the\ncase of create database.\n\nNow, is it possible to get the FPI without smgr_create wal in other\ncases? If it is then that problem is orthogonal to this path, but\nanyway I could not find any such scenario.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 26 Mar 2022 17:55:20 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Sat, Mar 26, 2022 at 5:55 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Mar 25, 2022 at 8:16 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > > On a quick look, I'm guessing that XLOG_DBASE_CREATE_WAL_LOG will need\n> > > to mirror some of the logic that was added to the replay code for the\n> > > existing strategy, but I haven't figured out the details.\n> > >\n> >\n> > Yeah, I think I got it, for XLOG_DBASE_CREATE_WAL_LOG now we will have\n> > to handle the missing parent directory case, like Alvaro handled for\n> > the XLOG_DBASE_CREATE(_FILE_COPY) case.\n>\n> I have updated the patch so now we skip the XLOG_DBASE_CREATE_WAL_LOG\n> as well if the tablespace directory is missing. 
But with our new\n> wal_log method there will be other follow up wal logs like,\n> XLOG_RELMAP_UPDATE, XLOG_SMGR_CREATE and XLOG_FPI.\n>\n> I have put the similar logic for relmap_update WAL replay as well,\n\nThere was some mistake in the last patch, basically, for relmap update\nalso I have checked the missing tablespace directory but I should have\nchecked the missing database directory so I have fixed that.\n\n> Now, is it possible to get the FPI without smgr_create wal in other\n> cases? If it is then that problem is orthogonal to this path, but\n> anyway I could not find any such scenario.\n\nI have digged further into it, tried manually removing the directory\nbefore XLOG_FPI, but I noticed that during FPI also\nXLogReadBufferExtended() take cares of creating the missing files\nusing smgrcreate() and that intern take care of missing directory\ncreation so I don't think we have any problem here.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 28 Mar 2022 11:48:22 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Mar 28, 2022 at 2:18 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I have put the similar logic for relmap_update WAL replay as well,\n>\n> There was some mistake in the last patch, basically, for relmap update\n> also I have checked the missing tablespace directory but I should have\n> checked the missing database directory so I have fixed that.\n>\n> > Now, is it possible to get the FPI without smgr_create wal in other\n> > cases? 
If it is then that problem is orthogonal to this path, but\n> anyway I could not find any such scenario.\n\nI have dug further into it, tried manually removing the directory\nbefore XLOG_FPI, but I noticed that during FPI also\nXLogReadBufferExtended() takes care of creating the missing files\nusing smgrcreate() and that in turn takes care of missing directory\ncreation so I don't think we have any problem here.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 28 Mar 2022 11:48:22 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Mar 28, 2022 at 2:18 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I have put the similar logic for relmap_update WAL replay as well,\n>\n> There was some mistake in the last patch, basically, for relmap update\n> also I have checked the missing tablespace directory but I should have\n> checked the missing database directory so I have fixed that.\n>\n> > Now, is it possible to get the FPI without smgr_create wal in other\n> > cases? 
If it is then that problem is orthogonal to this path, but\n> > anyway I could not find any such scenario.\n> >\n> > I have dug further into it, tried manually removing the directory\n> > before XLOG_FPI, but I noticed that during FPI also\n> > XLogReadBufferExtended() takes care of creating the missing files\n> > using smgrcreate() and that in turn takes care of missing directory\n> > creation so I don't think we have any problem here.\n>\n> I don't understand whether XLOG_RELMAP_UPDATE should be just doing\n> smgrcreate()\n\nXLOG_RELMAP_UPDATE is for the complete database so for which relnode\nit will create smgr? I think you probably meant\nTablespaceCreateDbspace()?\n\n> as we would for most WAL records or whether it should be\n> adopting the new system introduced by\n> 49d9cfc68bf4e0d32a948fe72d5a0ef7f464944e. I wrote about this concern\n> over here:\n\nokay, thanks.\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Mar 2022 09:32:08 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Mon, Mar 28, 2022 at 3:08 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> smgrcreate() as we would for most WAL records or whether it should be\n> adopting the new system introduced by\n> 49d9cfc68bf4e0d32a948fe72d5a0ef7f464944e. I wrote about this concern\n> over here:\n>\n> http://postgr.es/m/CA+TgmoYcUPL+WOJL2ZzhH=zmrhj0iOQ=iCFM0SuYqBbqZEamEg@mail.gmail.com\n>\n> But apart from that question your adaptations here look reasonable to me.\n\nThat commit having been reverted, I committed v6 instead. 
Let's see\nwhat breaks...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Mar 2022 11:55:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On 2022-03-29 11:55:05 -0400, Robert Haas wrote:\n> That commit having been reverted, I committed v6 instead. Let's see\n> what breaks...\n\nIt fails in CI (for the mirror of the postgres repo on github):\nhttps://cirrus-ci.com/task/6279465603956736?logs=test_bin#L121\ntap test log: https://api.cirrus-ci.com/v1/artifact/task/6279465603956736/log/src/bin/scripts/tmp_check/log/regress_log_020_createdb\npostmaster log: https://api.cirrus-ci.com/v1/artifact/task/6279465603956736/log/src/bin/scripts/tmp_check/log/020_createdb_main.log\n\nrecent versions failed similarly on cfbot:\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/37/3192\nhttps://cirrus-ci.com/task/5217140407009280?logs=test_bin#L121\n\n# Running: createdb -T foobar2 foobar6 -S wal_log\ncreatedb: error: too many command-line arguments (first is \"wal_log\")\nTry \"createdb --help\" for more information.\nnot ok 31 - createdb -T foobar2 foobar6 -S wal_log exit code 0\n\n# Failed test 'createdb -T foobar2 foobar6 -S wal_log exit code 0'\n# at t/020_createdb.pl line 117.\nnot ok 32 - create database with WAL_LOG strategy: SQL found in server log\n\n# Failed test 'create database with WAL_LOG strategy: SQL found in server log'\n# at t/020_createdb.pl line 117.\n# ''\n# doesn't match '(?^:statement: CREATE DATABASE foobar6 STRATEGY wal_log TEMPLATE foobar2)'\n# Running: createdb -T foobar2 foobar7 -S file_copy\ncreatedb: error: too many command-line arguments (first is \"file_copy\")\nTry \"createdb --help\" for more information.\nnot ok 33 - createdb -T foobar2 foobar7 -S file_copy exit code 0\n\n# Failed test 'createdb -T foobar2 foobar7 -S file_copy exit code 0'\n# at 
t/020_createdb.pl line 122.\nnot ok 34 - create database with FILE_COPY strategy: SQL found in server log\n\n# Failed test 'create database with FILE_COPY strategy: SQL found in server log'\n# at t/020_createdb.pl line 122.\n# ''\n# doesn't match '(?^:statement: CREATE DATABASE foobar7 STRATEGY file_copy TEMPLATE foobar2)'\n\nLooks like there's some problem with commandline parsing?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 29 Mar 2022 10:35:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Mar 29, 2022 at 1:35 PM Andres Freund <andres@anarazel.de> wrote:\n> # Running: createdb -T foobar2 foobar6 -S wal_log\n> createdb: error: too many command-line arguments (first is \"wal_log\")\n> Try \"createdb --help\" for more information.\n> not ok 31 - createdb -T foobar2 foobar6 -S wal_log exit code 0\n>\n> Looks like there's some problem with commandline parsing?\n\nApparently getopt_long() is fussier on Windows. I have committed a fix.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Mar 2022 13:52:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Looks like there's some problem with commandline parsing?\n\nThat test script is expecting glibc-like laxness of switch\nparsing. 
Put the switches before the non-switch arguments.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 29 Mar 2022 13:53:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Mar 29, 2022 at 1:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Looks like there's some problem with commandline parsing?\n>\n> That test script is expecting glibc-like laxness of switch\n> parsing. Put the switches before the non-switch arguments.\n\nI just did that. :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Mar 2022 13:57:52 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Mar 29, 2022 at 1:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> That test script is expecting glibc-like laxness of switch\n>> parsing. Put the switches before the non-switch arguments.\n\n> I just did that. :-)\n\nYup, you pushed while I was typing.\n\nFWIW, I don't think it's \"Windows\" enforcing this, it's our own\nsrc/port/getopt[_long].c. If there were a well-defined spec\nfor what glibc does with such cases, it might be interesting to\ntry to make our version bug-compatible with theirs. 
But AFAIK\nit's some random algorithm that they probably feel at liberty\nto change.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 29 Mar 2022 14:17:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Mar 29, 2022 at 2:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Tue, Mar 29, 2022 at 1:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> That test script is expecting glibc-like laxness of switch\n> >> parsing. Put the switches before the non-switch arguments.\n>\n> > I just did that. :-)\n>\n> Yup, you pushed while I was typing.\n>\n> FWIW, I don't think it's \"Windows\" enforcing this, it's our own\n> src/port/getopt[_long].c. If there were a well-defined spec\n> for what glibc does with such cases, it might be interesting to\n> try to make our version bug-compatible with theirs. But AFAIK\n> it's some random algorithm that they probably feel at liberty\n> to change.\n\nI guess that characterization surprises me. 
The man page for\ngetopt_long() says this, and has for a long time at least on systems\nI've used:\n\nENVIRONMENT\n POSIXLY_CORRECT If set, option processing stops when the first non-\n option is found and a leading `-' or `+' in the\n optstring is ignored.\n\nAnd also this:\n\nBUGS\n The argv argument is not really const as its elements may be permuted\n (unless POSIXLY_CORRECT is set).\n\nDoesn't that make it pretty clear what the GNU version is doing?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Mar 2022 14:24:40 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Mar 29, 2022 at 2:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> it's some random algorithm that they probably feel at liberty\n>> to change.\n\n> I guess that characterization surprises me. The man page for\n> getopt_long() says this, and has for a long time at least on systems\n> I've used:\n\nYeah, they say they follow the POSIX spec when you set POSIXLY_CORRECT.\nWhat they don't spell out in any detail is what they do when you don't.\nWe know that it involves rearranging the argv[] array behind the\napplication's back, but not what the rules are for doing that. In\nparticular, they must have some undocumented and probably not very safe\nmethod for deciding which arguments are neither switches nor switch\narguments.\n\n(Actually, if I recall previous discussions properly, another stumbling\nblock to doing anything here is that we'd also have to change all our\ndocumentation to explain it. 
Fixing the command line synopses would\nbe a mess already, and explaining the rules would be worse.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 29 Mar 2022 14:37:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Mar 29, 2022 at 2:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Tue, Mar 29, 2022 at 2:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> it's some random algorithm that they probably feel at liberty\n> >> to change.\n>\n> > I guess that characterization surprises me. The man page for\n> > getopt_long() says this, and has for a long time at least on systems\n> > I've used:\n>\n> Yeah, they say they follow the POSIX spec when you set POSIXLY_CORRECT.\n> What they don't spell out in any detail is what they do when you don't.\n> We know that it involves rearranging the argv[] array behind the\n> application's back, but not what the rules are for doing that. In\n> particular, they must have some undocumented and probably not very safe\n> method for deciding which arguments are neither switches nor switch\n> arguments.\n\nI mean, I think of an option as something that starts with '-'. The\ndocumentation contains a caveat that says: \"The special argument ‘--’\nforces in all cases the end of option scanning.\" So I think I would\nexpect it just looks for arguments starting with '-' that do not\nfollow an argument that is exactly \"--\".\n\n<looks around for the source code>\n\nhttps://github.com/gcc-mirror/gcc/blob/master/libiberty/getopt.c\n\n If an element of ARGV starts with '-', and is not exactly \"-\" or \"--\",\n then it is an option element. The characters of this element\n (aside from the initial '-') are option characters. 
If `getopt'\n is called repeatedly, it returns successively each of the option characters\n from each of the option elements.\n\nOK - so I was off slightly. Either \"-\" or \"--\" terminates the options\nlist. Apart from that anything starting with \"-\" is an option.\n\nI think you're overestimating the level of mystery that's present\nhere, as well as the likelihood that the rules could ever be changed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Mar 2022 15:20:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-03-29 11:55:05 -0400, Robert Haas wrote:\n> I committed v6 instead.\n\nJust noticed that it makes initdb a bit slower / the cluster a bit bigger,\nbecause now there's WAL traffic from creating the databases. There's an\noptimization (albeit insufficient) to reduce WAL traffic in bootstrap mode,\nbut not for single user mode when the CREATE DATABASEs happen.\n\nIn an optimized build, with wal-segsize 1 (the most extreme case) using\nFILE_COPY vs WAL_LOG:\n\nperf stat ~/build/postgres/dev-optimize/install/bin/initdb /tmp/initdb/ --wal-segsize=1\nWAL_LOG:\n\n 487.58 msec task-clock # 0.848 CPUs utilized\n 2,874 context-switches # 5.894 K/sec\n 0 cpu-migrations # 0.000 /sec\n 10,209 page-faults # 20.938 K/sec\n 1,550,483,095 cycles # 3.180 GHz\n 2,537,618,094 instructions # 1.64 insn per cycle\n 492,780,121 branches # 1.011 G/sec\n 7,384,884 branch-misses # 1.50% of all branches\n\n 0.575213800 seconds time elapsed\n\n 0.349812000 seconds user\n 0.133225000 seconds sys\n\nFILE_COPY:\n\n 476.54 msec task-clock # 0.854 CPUs utilized\n 3,005 context-switches # 6.306 K/sec\n 0 cpu-migrations # 0.000 /sec\n 10,050 page-faults # 21.090 K/sec\n 1,516,058,200 cycles # 3.181 GHz\n 2,504,126,907 instructions # 1.65 insn per cycle\n 488,042,856 branches # 1.024 G/sec\n 7,327,364 
branch-misses # 1.50% of all branches\n\n 0.557934976 seconds time elapsed\n\n 0.360473000 seconds user\n 0.112109000 seconds sys\n\n\nthe numbers are similar if repeated.\n\ndu -s /tmp/initdb/\nWAL_LOG: 35112\nFILE_COPY: 29288\n\nSo it seems we should specify a strategy in initdb? It kind of makes sense -\nwe're not going to read anything from those databases. And because of the\nringbuffer of 256kB, we'll not even reduce IO meaningfully.\n\n- Andres\n\n\n", "msg_date": "Tue, 29 Mar 2022 18:17:57 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Mar 30, 2022 at 6:47 AM Andres Freund <andres@anarazel.de> wrote:\n>\n>\n> du -s /tmp/initdb/\n> WAL_LOG: 35112\n> FILE_COPY: 29288\n>\n> So it seems we should specify a strategy in initdb? It kind of makes sense -\n> we're not going to read anything from those databases. And because of the\n> ringbuffer of 256kB, we'll not even reduce IO meaningfully.\n\nI think this makes sense. So do you mean that with initdb we will always use\nfile_copy, or do we want to give a command-line option for initdb?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Mar 2022 09:28:58 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Mar 29, 2022 at 9:25 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Mar 28, 2022 at 3:08 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > smgrcreate() as we would for most WAL records or whether it should be\n> > adopting the new system introduced by\n> > 49d9cfc68bf4e0d32a948fe72d5a0ef7f464944e. 
I wrote about this concern\n> > over here:\n> >\n> > http://postgr.es/m/CA+TgmoYcUPL+WOJL2ZzhH=zmrhj0iOQ=iCFM0SuYqBbqZEamEg@mail.gmail.com\n> >\n> > But apart from that question your adaptations here look reasonable to me.\n>\n> That commit having been reverted, I committed v6 instead. Let's see\n> what breaks...\n>\n\nThere was a duplicate error check for the invalid createdb strategy\noption in the test case; although it would not cause any issue, it is\nredundant, so I have fixed it in the attached patch.\n\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 30 Mar 2022 17:17:41 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-03-30 09:28:58 +0530, Dilip Kumar wrote:\n> On Wed, Mar 30, 2022 at 6:47 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> >\n> > du -s /tmp/initdb/\n> > WAL_LOG: 35112\n> > FILE_COPY: 29288\n> >\n> > So it seems we should specify a strategy in initdb? It kind of makes sense -\n> > we're not going to read anything from those databases. And because of the\n> > ringbuffer of 256kB, we'll not even reduce IO meaningfully.\n> \n> I think this makes sense. So do you mean that with initdb we will always use\n> file_copy, or do we want to give a command-line option for initdb?\n\nDon't see a need for a commandline option / a situation where using WAL_LOG\nwould be preferable for initdb.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 30 Mar 2022 09:31:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-03-29 11:55:05 -0400, Robert Haas wrote:\n> I committed v6 instead.\n\nI was just discussing the WAL prefetching patch with Thomas. 
A question in\nthat discussion made me look at the coverage of REDO for CREATE DATABASE:\nhttps://coverage.postgresql.org/src/backend/commands/dbcommands.c.gcov.html\n\nSeems there's currently nothing hitting the REDO for\nXLOG_DBASE_CREATE_FILE_COPY (currently line 3019). I think it'd be good to\nkeep coverage for that. How about adding a\n CREATE DATABASE ... STRATEGY file_copy\nto 001_stream_rep.pl?\n\n\nMight be worth adding a test for ALTER DATABASE ... SET TABLESPACE at the same\ntime, this patch did affect that path in some minor ways. And, somewhat\nshockingly, we don't have a single test for it.\n\n- Andres\n\n\n", "msg_date": "Wed, 30 Mar 2022 16:36:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Mar 31, 2022 at 5:07 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-03-29 11:55:05 -0400, Robert Haas wrote:\n> > I committed v6 instead.\n>\n> I was just discussing the WAL prefetching patch with Thomas. A question in\n> that discussion made me look at the coverage of REDO for CREATE DATABASE:\n> https://coverage.postgresql.org/src/backend/commands/dbcommands.c.gcov.html\n>\n> Seems there's currently nothing hitting the REDO for\n> XLOG_DBASE_CREATE_FILE_COPY (currently line 3019). I think it'd be good to\n> keep coverage for that. How about adding a\n> CREATE DATABASE ... STRATEGY file_copy\n> to 001_stream_rep.pl?\n>\n>\n> Might be worth adding a test for ALTER DATABASE ... SET TABLESPACE at the same\n> time, this patch did affect that path in some minor ways. 
And, somewhat\n> shockingly, we don't have a single test for it.\n\nI will add tests for both of these cases and send the patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Mar 2022 09:46:16 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Mar 31, 2022 at 9:46 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Mar 31, 2022 at 5:07 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2022-03-29 11:55:05 -0400, Robert Haas wrote:\n> > > I committed v6 instead.\n> >\n> > I was just discussing the WAL prefetching patch with Thomas. A question in\n> > that discussion made me look at the coverage of REDO for CREATE DATABASE:\n> > https://coverage.postgresql.org/src/backend/commands/dbcommands.c.gcov.html\n> >\n> > Seems there's currently nothing hitting the REDO for\n> > XLOG_DBASE_CREATE_FILE_COPY (currently line 3019). I think it'd be good to\n> > keep coverage for that. How about adding a\n> > CREATE DATABASE ... STRATEGY file_copy\n> > to 001_stream_rep.pl?\n> >\n> >\n> > Might be worth adding a test for ALTER DATABASE ... SET TABLESPACE at the same\n> > time, this patch did affect that path in some minor ways. 
And, somewhat\n> > shockingly, we don't have a single test for it.\n>\n> I will add tests for both of these cases and send the patch.\n\n\n0001 is changing the strategy to file copy during initdb and 0002\npatch adds the test cases for both these cases.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 31 Mar 2022 13:22:24 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Mar 31, 2022 at 3:52 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> 0001 is changing the strategy to file copy during initdb and 0002\n> patch adds the test cases for both these cases.\n\nIMHO, 0001 looks fine, except for needing some adjustments to the wording.\n\nI'm less sure about 0002. It's testing the stuff Andres mentioned, but\nI'm not sure how good the tests are.\n\nAndres, thoughts? Do you want me to polish and commit 0001?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Mar 2022 10:05:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-03-31 13:22:24 +0530, Dilip Kumar wrote:\n> 0001 is changing the strategy to file copy during initdb and 0002\n> patch adds the test cases for both these cases.\n\nThanks!\n\n> From 4a997e2a95074a520777cd2b369f9c728b360969 Mon Sep 17 00:00:00 2001\n> From: Dilip Kumar <dilipkumar@localhost.localdomain>\n> Date: Thu, 31 Mar 2022 10:43:16 +0530\n> Subject: [PATCH 1/2] Use file_copy strategy during initdb\n> \n> Because skipping the checkpoint during initdb will not result\n> in significant savings, so there is no point in using wal_log\n> as that will simply increase the cluster size by generating\n> extra wal.\n> ---\n> src/bin/initdb/initdb.c | 14 +++++++++++---\n> 1 file changed, 11 
insertions(+), 3 deletions(-)\n> \n> diff --git a/src/bin/initdb/initdb.c b/src/bin/initdb/initdb.c\n> index 5e36943..1256082 100644\n> --- a/src/bin/initdb/initdb.c\n> +++ b/src/bin/initdb/initdb.c\n> @@ -1856,6 +1856,11 @@ make_template0(FILE *cmdfd)\n> \t * it would fail. To avoid that, assign a fixed OID to template0 rather\n> \t * than letting the server choose one.\n> \t *\n> +\t * Using file_copy strategy is preferable over wal_log here because\n> +\t * skipping the checkpoint during initdb will not result in significant\n> +\t * savings, so there is no point in using wal_log as that will simply\n> +\t * increase the cluster size by generating extra wal.\n\nIt's not just the increase in size, it's also the increase in time due to WAL logging.\n\n\n> \t * (Note that, while the user could have dropped and recreated these\n> \t * objects in the old cluster, the problem scenario only exists if the OID\n> \t * that is in use in the old cluster is also used in the new cluster - and\n> @@ -1863,7 +1868,7 @@ make_template0(FILE *cmdfd)\n> \t */\n> \tstatic const char *const template0_setup[] = {\n> \t\t\"CREATE DATABASE template0 IS_TEMPLATE = true ALLOW_CONNECTIONS = false OID = \"\n> -\t\tCppAsString2(Template0ObjectId) \";\\n\\n\",\n> +\t\tCppAsString2(Template0ObjectId) \" STRATEGY = file_copy;\\n\\n\",\n\nI'd perhaps break this into a separate line, but...\n\n\n> From d0759bcfc4fed674e938e4a03159f5953ca9718d Mon Sep 17 00:00:00 2001\n> From: Dilip Kumar <dilipkumar@localhost.localdomain>\n> Date: Thu, 31 Mar 2022 12:07:19 +0530\n> Subject: [PATCH 2/2] Create database test coverage\n> \n> Test create database strategy wal replay and alter database\n> set tablespace.\n> ---\n> src/test/modules/test_misc/t/002_tablespace.pl | 12 ++++++++++++\n> src/test/recovery/t/001_stream_rep.pl | 24 ++++++++++++++++++++++++\n> 2 files changed, 36 insertions(+)\n> \n> diff --git a/src/test/modules/test_misc/t/002_tablespace.pl b/src/test/modules/test_misc/t/002_tablespace.pl\n> 
index 04e5439..f3bbddc 100644\n> --- a/src/test/modules/test_misc/t/002_tablespace.pl\n> +++ b/src/test/modules/test_misc/t/002_tablespace.pl\n> @@ -83,7 +83,19 @@ $result = $node->psql('postgres',\n> \t\"ALTER TABLE t SET tablespace regress_ts1\");\n> ok($result == 0, 'move table in-place->abs');\n> \n> +# Test ALTER DATABASE SET TABLESPACE\n> +$result = $node->psql('postgres',\n> +\t\"CREATE DATABASE testdb TABLESPACE regress_ts1\");\n> +ok($result == 0, 'create database in tablespace 1');\n> +$result = $node->psql('testdb',\n> +\t\"CREATE TABLE t ()\");\n> +ok($result == 0, 'create table in testdb database');\n> +$result = $node->psql('postgres',\n> +\t\"ALTER DATABASE testdb SET TABLESPACE regress_ts2\");\n> +ok($result == 0, 'move database to tablespace 2');\n\nThis just tests the command doesn't fail, but not whether it actually did\nsomething useful. Seems we should at least insert a row or two into the\ntable, and verify they can be accessed?\n\n\n> +# Create database with different strategies and check its presence in standby\n> +$node_primary->safe_psql('postgres',\n> +\t\"CREATE DATABASE testdb1 STRATEGY = FILE_COPY; \");\n> +$node_primary->safe_psql('testdb1',\n> +\t\"CREATE TABLE tab_int AS SELECT generate_series(1,10) AS a\");\n> +$node_primary->safe_psql('postgres',\n> +\t\"CREATE DATABASE testdb2 STRATEGY = WAL_LOG; \");\n> +$node_primary->safe_psql('testdb2',\n> +\t\"CREATE TABLE tab_int AS SELECT generate_series(1,10) AS a\");\n> +\n> +# Wait for standbys to catch up\n> +$primary_lsn = $node_primary->lsn('write');\n> +$node_primary->wait_for_catchup($node_standby_1, 'replay', $primary_lsn);\n> +\n> +$result =\n> + $node_standby_1->safe_psql('testdb1', \"SELECT count(*) FROM tab_int\");\n> +print \"standby 1: $result\\n\";\n> +is($result, qq(10), 'check streamed content on standby 1');\n> +\n> +$result =\n> + $node_standby_1->safe_psql('testdb2', \"SELECT count(*) FROM tab_int\");\n> +print \"standby 1: $result\\n\";\n> +is($result, qq(10), 
'check streamed content on standby 1');\n> +\n> # Check that only READ-only queries can run on standbys\n> is($node_standby_1->psql('postgres', 'INSERT INTO tab_int VALUES (1)'),\n> \t3, 'read-only queries on standby 1');\n\nI'd probably add a function for creating database / table and then testing it,\nwith a strategy parameter. That way we can afterwards add more tests verifying\nthat everything worked.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 31 Mar 2022 09:21:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-03-31 10:05:10 -0400, Robert Haas wrote:\n> On Thu, Mar 31, 2022 at 3:52 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > 0001 is changing the strategy to file copy during initdb and 0002\n> > patch adds the test cases for both these cases.\n> \n> IMHO, 0001 looks fine, except for needing some adjustments to the wording.\n\nAgreed.\n\n\n> I'm less sure about 0002. It's testing the stuff Andres mentioned, but\n> I'm not sure how good the tests are.\n\nI came to a similar conclusion. It's still better than nothing, but it's just\na small bit of additional effort to do some basic testing that e.g. the move\nactually worked...\n\n\n> Andres, thoughts? Do you want me to polish and commit 0001?\n\nYes please!\n\n\nFWIW, once the freeze is done I'm planning to set up scripting to see which\nparts of the code we whacked around don't have test coverage...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 31 Mar 2022 09:25:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Mar 31, 2022 at 12:25 PM Andres Freund <andres@anarazel.de> wrote:\n> > Andres, thoughts? Do you want me to polish and commit 0001?\n>\n> Yes please!\n\nHere is a polished version. 
Comments?\n\n> FWIW, once the freeze is done I'm planning to set up scripting to see which\n> parts of the code we whacked around don't have test coverage...\n\nSounds terrifying.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 31 Mar 2022 14:31:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On 2022-03-31 14:31:43 -0400, Robert Haas wrote:\n> On Thu, Mar 31, 2022 at 12:25 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Andres, thoughts? Do you want me to polish and commit 0001?\n> >\n> > Yes please!\n> \n> Here is a polished version. Comments?\n\nLGTM.\n\n\n", "msg_date": "Thu, 31 Mar 2022 11:44:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Mar 31, 2022 at 2:44 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-03-31 14:31:43 -0400, Robert Haas wrote:\n> > On Thu, Mar 31, 2022 at 12:25 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > Andres, thoughts? Do you want me to polish and commit 0001?\n> > >\n> > > Yes please!\n> >\n> > Here is a polished version. Comments?\n>\n> LGTM.\n\nCommitted.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Mar 2022 15:20:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Mar 31, 2022 at 9:52 PM Andres Freund <andres@anarazel.de> wrote:\n\n> > + \"ALTER DATABASE testdb SET TABLESPACE regress_ts2\");\n> > +ok($result == 0, 'move database to tablespace 2');\n>\n> This just tests the command doesn't fail, but not whether it actually did\n> something useful. 
Seems we should at least insert a row or two into the\n> table, and verify they can be accessed?\n\nNow, added some tuples and verified them.\n\n\n> > # Check that only READ-only queries can run on standbys\n> > is($node_standby_1->psql('postgres', 'INSERT INTO tab_int VALUES (1)'),\n> > 3, 'read-only queries on standby 1');\n>\n> I'd probably add a function for creating database / table and then testing it,\n> with a strategy parameter. That way we can afterwards add more tests verifying\n> that everything worked.\n\nI have created a function to create a database and table and verify\nthe content in it. Another option is to keep the database and table\ncreation inside the function and the verification outside it, so that\na future test case that wants to create some extra content and verify\nit can do so. But with the current tests in mind, the way I have it in\nthe attached patch has less duplicate code, so I preferred it this way.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 1 Apr 2022 13:51:25 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-03-29 11:55:05 -0400, Robert Haas wrote:\n> I committed v6 instead.\n\nCoverity complains that this patch added GetDatabasePath() calls without\nfreeing its return value. Normally that'd be easy to dismiss, due to memory\ncontexts, but there's no granular resets in CreateDatabaseUsingFileCopy(). And\nobviously there can be a lot of relations in one database - we shouldn't hold\nonto the same path over and over again.\n\nThe case in recovery is worse, because there we don't have a memory context to\nreset afaics. 
Oddly enough, it sure looks like we have an existing version of\nthis bug in the file-copy path?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 3 Apr 2022 09:21:58 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Sun, Apr 3, 2022 at 9:52 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-03-29 11:55:05 -0400, Robert Haas wrote:\n> > I committed v6 instead.\n>\n> Coverity complains that this patch added GetDatabasePath() calls without\n> freeing its return value. Normally that'd be easy to dismiss, due to memory\n> contexts, but there's no granular resets in CreateDatabaseUsingFileCopy(). And\n> obviously there can be a lot of relations in one database - we shouldn't hold\n> onto the same path over and over again.\n\n> The case in recovery is worse, because there we don't have a memory context to\n> reset afaics. Oddly enough, it sure looks like we have an existing version of\n> this bug in the file-copy path?\n\nYeah, I see that createdb() and dbase_redo() had this existing\nproblem, and with this patch we have created a few more such\noccurrences.\nThe attached patch fixes it.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 4 Apr 2022 15:54:56 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Mar 29, 2022 at 11:55:05AM -0400, Robert Haas wrote:\n> On Mon, Mar 28, 2022 at 3:08 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > smgrcreate() as we would for most WAL records or whether it should be\n> > adopting the new system introduced by\n> > 49d9cfc68bf4e0d32a948fe72d5a0ef7f464944e. 
I wrote about this concern\n> > over here:\n> >\n> > http://postgr.es/m/CA+TgmoYcUPL+WOJL2ZzhH=zmrhj0iOQ=iCFM0SuYqBbqZEamEg@mail.gmail.com\n> >\n> > But apart from that question your adaptations here look reasonable to me.\n> \n> That commit having been reverted, I committed v6 instead. Let's see\n> what breaks...\n\nThere's a crash\n\n2022-07-31 01:22:51.437 CDT client backend[13362] [unknown] PANIC: could not open critical system index 2662\n\n(gdb) bt\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51\n#1 0x00007efe27999801 in __GI_abort () at abort.c:79\n#2 0x00005583891941dc in errfinish (filename=<optimized out>, filename@entry=0x558389420437 \"relcache.c\", lineno=lineno@entry=4328,\n funcname=funcname@entry=0x558389421680 <__func__.33178> \"load_critical_index\") at elog.c:675\n#3 0x00005583891713ef in load_critical_index (indexoid=indexoid@entry=2662, heapoid=heapoid@entry=1259) at relcache.c:4328\n#4 0x0000558389172667 in RelationCacheInitializePhase3 () at relcache.c:4103\n#5 0x00005583891b93a4 in InitPostgres (in_dbname=in_dbname@entry=0x55838a50d468 \"a\", dboid=dboid@entry=0, username=username@entry=0x55838a50d448 \"pryzbyj\", useroid=useroid@entry=0,\n load_session_libraries=<optimized out>, override_allow_connections=override_allow_connections@entry=false, out_dbname=0x0) at postinit.c:1087\n#6 0x0000558388daa7bb in PostgresMain (dbname=0x55838a50d468 \"a\", username=username@entry=0x55838a50d448 \"pryzbyj\") at postgres.c:4081\n#7 0x0000558388b9f423 in BackendRun (port=port@entry=0x55838a505dd0) at postmaster.c:4490\n#8 0x0000558388ba6e07 in BackendStartup (port=port@entry=0x55838a505dd0) at postmaster.c:4218\n#9 0x0000558388ba747f in ServerLoop () at postmaster.c:1808\n#10 0x0000558388ba8f93 in PostmasterMain (argc=7, argv=<optimized out>) at postmaster.c:1480\n#11 0x0000558388840e1f in main (argc=7, argv=0x55838a4dc000) at main.c:197\n\nwhile :; do psql -qh /tmp postgres -c \"DROP DATABASE a\" -c \"CREATE DATABASE a 
TEMPLATE postgres STRATEGY wal_log\"; done\n# Run this for a few loops and then ^C or hold down ^C until it stops,\n# and then connect to postgres and try to connect to 'a':\npostgres=# \\c a\n2022-07-31 01:22:51.437 CDT client backend[13362] [unknown] PANIC: could not open critical system index 2662\n\nUnfortunately, that isn't very consistent, and you have to run it a bunch\nof times...\n\nI don't know if it's an issue of any significance that CREATE DATABASE / ^C\nleaves behind a broken database, but it is an issue that the cluster crashes.\n\nWhile struggling to reproduce that problem, I also hit this warning, which may\nor may not be the same. I added an abort() after WARNING in aset.c to get a\nbacktrace.\n\nWARNING: problem in alloc set PortalContext: bogus aset link in block 0x55a63f2f9d60, chunk 0x55a63f2fb138\n\nProgram terminated with signal SIGABRT, Aborted.\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51\n51 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n(gdb) bt\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51\n#1 0x00007f81144f1801 in __GI_abort () at abort.c:79\n#2 0x000055a63c834c5d in AllocSetCheck (context=context@entry=0x55a63f26fea0) at aset.c:1491\n#3 0x000055a63c835b09 in AllocSetDelete (context=0x55a63f26fea0) at aset.c:638\n#4 0x000055a63c854322 in MemoryContextDelete (context=0x55a63f26fea0) at mcxt.c:252\n#5 0x000055a63c8591d5 in PortalDrop (portal=portal@entry=0x55a63f2bb7a0, isTopCommit=isTopCommit@entry=false) at portalmem.c:596\n#6 0x000055a63c3e4a7b in exec_simple_query (query_string=query_string@entry=0x55a63f24db90 \"CREATE DATABASE a TEMPLATE postgres STRATEGY wal_log ;\") at postgres.c:1253\n#7 0x000055a63c3e7fc1 in PostgresMain (dbname=<optimized out>, username=username@entry=0x55a63f279448 \"pryzbyj\") at postgres.c:4505\n#8 0x000055a63c1dc423 in BackendRun (port=port@entry=0x55a63f271dd0) at postmaster.c:4490\n#9 0x000055a63c1e3e07 in 
BackendStartup (port=port@entry=0x55a63f271dd0) at postmaster.c:4218\n#10 0x000055a63c1e447f in ServerLoop () at postmaster.c:1808\n#11 0x000055a63c1e5f93 in PostmasterMain (argc=7, argv=<optimized out>) at postmaster.c:1480\n#12 0x000055a63be7de1f in main (argc=7, argv=0x55a63f248000) at main.c:197\n\nI reproduced that by running this a couple dozen times in an interactive psql.\nIt doesn't seem to affect STRATEGY=file_copy.\n\nSET statement_timeout=0; DROP DATABASE a; SET statement_timeout='60ms'; CREATE DATABASE a TEMPLATE postgres STRATEGY wal_log ; \\c a \\c postgres\n\nAlso, if I understand correctly, this patch seems to assume that nobody is\nconnected to the source database. But what's actually enforced is just that\nnobody *else* is connected. Is it any issue that the current DB can be used as\na source? Anyway, both of the above problems are reproducible using a\ndifferent database.\n\n|postgres=# CREATE DATABASE new TEMPLATE postgres STRATEGY wal_log;\n|CREATE DATABASE\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 2 Aug 2022 12:50:43 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Aug 2, 2022 at 1:50 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Unfortunately, that isn't very consistent, and you have to run it a bunch\n> of times...\n\nI was eventually able to reproduce this in part by using the\ninteractive psql method you describe. 
It didn't crash, but it did spit\nout a bunch of funny error messages:\n\npostgres=# SET statement_timeout=0; DROP DATABASE a; SET\nstatement_timeout='60ms'; CREATE DATABASE a TEMPLATE postgres STRATEGY\nwal_log ; \\c a \\c postgres\nSET\nERROR: database \"a\" does not exist\nSET\nERROR: canceling statement due to statement timeout\nWARNING: problem in alloc set PortalContext: req size > alloc size\nfor chunk 0x7f99508911f0 in block 0x7f9950890800\nWARNING: problem in alloc set PortalContext: bad size 0 for chunk\n0x7f99508911f0 in block 0x7f9950890800\nWARNING: problem in alloc set PortalContext: bad single-chunk\n0x7f9950891208 in block 0x7f9950890800\nWARNING: problem in alloc set PortalContext: found inconsistent\nmemory block 0x7f9950890800\nWARNING: problem in alloc set PortalContext: req size > alloc size\nfor chunk 0x7f99508911f0 in block 0x7f9950890800\nWARNING: problem in alloc set PortalContext: bad size 0 for chunk\n0x7f99508911f0 in block 0x7f9950890800\nWARNING: problem in alloc set PortalContext: bad single-chunk\n0x7f9950891208 in block 0x7f9950890800\nWARNING: problem in alloc set PortalContext: found inconsistent\nmemory block 0x7f9950890800\nconnection to server on socket \"/tmp/.s.PGSQL.5432\" failed: FATAL:\ndatabase \"a\" does not exist\nPrevious connection kept\npostgres=# select * from pg_database;\n oid | datname | datdba | encoding | datlocprovider | datistemplate\n| datallowconn | datconnlimit | datfrozenxid | datminmxid |\ndattablespace | datcollate | datctype | daticulocale |\ndatcollversion | datacl\n-----+-----------+--------+----------+----------------+---------------+--------------+--------------+--------------+------------+---------------+-------------+-------------+--------------+----------------+----------------------------\n 5 | postgres | 10 | 6 | c | f\n| t | -1 | 718 | 1 |\n1663 | en_US.UTF-8 | en_US.UTF-8 | | |\n 1 | template1 | 10 | 6 | c | t\n| t | -1 | 718 | 1 |\n1663 | en_US.UTF-8 | en_US.UTF-8 | | 
|\n{=c/rhaas,rhaas=CTc/rhaas}\n 4 | template0 | 10 | 6 | c | t\n| f | -1 | 718 | 1 |\n1663 | en_US.UTF-8 | en_US.UTF-8 | | |\n{=c/rhaas,rhaas=CTc/rhaas}\n(3 rows)\n\nI then set backtrace_functions='AllocSetCheck' and reproduced it\nagain, which led to stack traces like this:\n\n2022-08-02 16:50:32.490 EDT [98814] WARNING: problem in alloc set\nPortalContext: bad single-chunk 0x7f9950886608 in block 0x7f9950885c00\n2022-08-02 16:50:32.490 EDT [98814] BACKTRACE:\n2 postgres 0x000000010cd37ef5 AllocSetCheck + 549\n3 postgres 0x000000010cd37730 AllocSetReset + 48\n4 postgres 0x000000010cd3f6f1\nMemoryContextResetOnly + 81\n5 postgres 0x000000010cd378b9 AllocSetDelete + 73\n6 postgres 0x000000010cd41e09 PortalDrop + 425\n7 postgres 0x000000010cd427bb\nAtCleanup_Portals + 203\n8 postgres 0x000000010c86476d\nCleanupTransaction + 29\n9 postgres 0x000000010c865d4f\nAbortCurrentTransaction + 63\n10 postgres 0x000000010cba1395 PostgresMain + 885\n11 postgres 0x000000010caf5472 PostmasterMain + 7586\n12 postgres 0x000000010ca31e3d main + 2205\n13 libdyld.dylib 0x00007fff699afcc9 start + 1\n14 ??? 0x0000000000000001 0x0 + 1\n\nI recompiled with -O0 and hacked the code that emits the BACKTRACE:\nbit to go into an infinite loop if it's hit, which enabled me to hook\nup a debugger at the point of the failure. 
The debugger says:\n\n(lldb) bt\n* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP\n frame #0: 0x000000010e98a157\npostgres`send_message_to_server_log(edata=0x000000010ec0f658) at\nelog.c:2916:4\n frame #1: 0x000000010e9866d6 postgres`EmitErrorReport at elog.c:1537:3\n frame #2: 0x000000010e986016 postgres`errfinish(filename=\"aset.c\",\nlineno=1470, funcname=\"AllocSetCheck\") at elog.c:592:2\n frame #3: 0x000000010e9c8465\npostgres`AllocSetCheck(context=0x00007ff77c80d200) at aset.c:1469:5\n frame #4: 0x000000010e9c7c05\npostgres`AllocSetDelete(context=0x00007ff77c80d200) at aset.c:638:2\n frame #5: 0x000000010e9d368b\npostgres`MemoryContextDelete(context=0x00007ff77c80d200) at\nmcxt.c:252:2\n * frame #6: 0x000000010e9d705b\npostgres`PortalDrop(portal=0x00007ff77e028920, isTopCommit=false) at\nportalmem.c:596:2\n frame #7: 0x000000010e9d7e0e postgres`AtCleanup_Portals at portalmem.c:907:3\n frame #8: 0x000000010e22030d postgres`CleanupTransaction at xact.c:2890:2\n frame #9: 0x000000010e2219da postgres`AbortCurrentTransaction at\nxact.c:3328:4\n frame #10: 0x000000010e763237\npostgres`PostgresMain(dbname=\"postgres\", username=\"rhaas\") at\npostgres.c:4232:3\n frame #11: 0x000000010e6625aa\npostgres`BackendRun(port=0x00007ff77c1042c0) at postmaster.c:4490:2\n frame #12: 0x000000010e661b18\npostgres`BackendStartup(port=0x00007ff77c1042c0) at\npostmaster.c:4218:3\n frame #13: 0x000000010e66088a postgres`ServerLoop at postmaster.c:1808:7\n frame #14: 0x000000010e65def2 postgres`PostmasterMain(argc=1,\nargv=0x00007ff77ae05cf0) at postmaster.c:1480:11\n frame #15: 0x000000010e50521f postgres`main(argc=1,\nargv=0x00007ff77ae05cf0) at main.c:197:3\n frame #16: 0x00007fff699afcc9 libdyld.dylib`start + 1\n(lldb) fr sel 6\nframe #6: 0x000000010e9d705b\npostgres`PortalDrop(portal=0x00007ff77e028920, isTopCommit=false) at\nportalmem.c:596:2\n 593 MemoryContextDelete(portal->holdContext);\n 594\n 595 /* release subsidiary storage */\n-> 596 
MemoryContextDelete(portal->portalContext);\n 597\n 598 /* release portal struct (it's in TopPortalContext) */\n 599 pfree(portal);\n(lldb) fr sel 3\nframe #3: 0x000000010e9c8465\npostgres`AllocSetCheck(context=0x00007ff77c80d200) at aset.c:1469:5\n 1466 * Check chunk size\n 1467 */\n 1468 if (dsize > chsize)\n-> 1469 elog(WARNING, \"problem in alloc set %s: req size > alloc size\nfor chunk %p in block %p\",\n 1470 name, chunk, block);\n 1471 if (chsize < (1 << ALLOC_MINBITS))\n 1472 elog(WARNING, \"problem in alloc set %s: bad size %zu for chunk\n%p in block %p\",\n(lldb) p dsize\n(Size) $3 = 20\n(lldb) p chsize\n(Size) $4 = 0\n\nIt seems like CreateDatabaseUsingWalLog() must be doing something that\ncorrupts PortalContext, but at the moment I'm not sure what that thing\ncould be.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Aug 2022 17:18:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> WARNING: problem in alloc set PortalContext: req size > alloc size\n> for chunk 0x7f99508911f0 in block 0x7f9950890800\n> WARNING: problem in alloc set PortalContext: bad size 0 for chunk\n> 0x7f99508911f0 in block 0x7f9950890800\n> WARNING: problem in alloc set PortalContext: bad single-chunk\n> 0x7f9950891208 in block 0x7f9950890800\n> WARNING: problem in alloc set PortalContext: found inconsistent\n> memory block 0x7f9950890800\n> WARNING: problem in alloc set PortalContext: req size > alloc size\n> for chunk 0x7f99508911f0 in block 0x7f9950890800\n> WARNING: problem in alloc set PortalContext: bad size 0 for chunk\n> 0x7f99508911f0 in block 0x7f9950890800\n> WARNING: problem in alloc set PortalContext: bad single-chunk\n> 0x7f9950891208 in block 0x7f9950890800\n> WARNING: problem in alloc set PortalContext: found inconsistent\n> memory block 
0x7f9950890800\n\nThis looks like nothing so much as the fallout from something scribbling\npast the end of an allocated palloc chunk, or perhaps writing on\nalready-freed space. Perhaps running the test case under valgrind\nwould help to finger the culprit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Aug 2022 17:46:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Aug 02, 2022 at 05:46:34PM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > WARNING: problem in alloc set PortalContext: req size > alloc size for chunk 0x7f99508911f0 in block 0x7f9950890800\n> \n> This looks like nothing so much as the fallout from something scribbling\n> past the end of an allocated palloc chunk, or perhaps writing on\n> already-freed space. Perhaps running the test case under valgrind\n> would help to finger the culprit.\n\nYeah but my test case is so poor that it's a chore ...\n\n(Sorry for that, but it took me 2 days to be able to reproduce the problem so I\nsent it sooner rather than looking for a better way ... 
)\n\nI got this interesting looking thing.\n\n==11628== Invalid write of size 8\n==11628== at 0x1D12B3A: smgrsetowner (smgr.c:213)\n==11628== by 0x1C7C224: RelationGetSmgr (rel.h:572)\n==11628== by 0x1C7C224: RelationCopyStorageUsingBuffer (bufmgr.c:3725)\n==11628== by 0x1C7C7A6: CreateAndCopyRelationData (bufmgr.c:3817)\n==11628== by 0x14A4518: CreateDatabaseUsingWalLog (dbcommands.c:221)\n==11628== by 0x14AB009: createdb (dbcommands.c:1393)\n==11628== by 0x1D2B9AF: standard_ProcessUtility (utility.c:776)\n==11628== by 0x1D2C46A: ProcessUtility (utility.c:530)\n==11628== by 0x1D265F5: PortalRunUtility (pquery.c:1158)\n==11628== by 0x1D27089: PortalRunMulti (pquery.c:1315)\n==11628== by 0x1D27A7C: PortalRun (pquery.c:791)\n==11628== by 0x1D1E33D: exec_simple_query (postgres.c:1243)\n==11628== by 0x1D218BC: PostgresMain (postgres.c:4505)\n==11628== Address 0x1025bc18 is 2,712 bytes inside a block of size 8,192 free'd\n==11628== at 0x4033A3F: free (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)\n==11628== by 0x217D7C2: AllocSetReset (aset.c:608)\n==11628== by 0x219B57A: MemoryContextResetOnly (mcxt.c:181)\n==11628== by 0x217DBD5: AllocSetDelete (aset.c:654)\n==11628== by 0x219C1EC: MemoryContextDelete (mcxt.c:252)\n==11628== by 0x21A109F: PortalDrop (portalmem.c:596)\n==11628== by 0x21A269C: AtCleanup_Portals (portalmem.c:907)\n==11628== by 0x11FEAB1: CleanupTransaction (xact.c:2890)\n==11628== by 0x120A74C: AbortCurrentTransaction (xact.c:3328)\n==11628== by 0x1D2158C: PostgresMain (postgres.c:4232)\n==11628== by 0x1B15DB5: BackendRun (postmaster.c:4490)\n==11628== by 0x1B1D799: BackendStartup (postmaster.c:4218)\n==11628== Block was alloc'd at\n==11628== at 0x40327F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)\n==11628== by 0x217F0DC: AllocSetAlloc (aset.c:920)\n==11628== by 0x219E4D2: palloc (mcxt.c:1082)\n==11628== by 0x14A14BE: ScanSourceDatabasePgClassTuple (dbcommands.c:444)\n==11628== by 
0x14A1CD8: ScanSourceDatabasePgClassPage (dbcommands.c:384)\n==11628== by 0x14A20BF: ScanSourceDatabasePgClass (dbcommands.c:322)\n==11628== by 0x14A4348: CreateDatabaseUsingWalLog (dbcommands.c:177)\n==11628== by 0x14AB009: createdb (dbcommands.c:1393)\n==11628== by 0x1D2B9AF: standard_ProcessUtility (utility.c:776)\n==11628== by 0x1D2C46A: ProcessUtility (utility.c:530)\n==11628== by 0x1D265F5: PortalRunUtility (pquery.c:1158)\n==11628== by 0x1D27089: PortalRunMulti (pquery.c:1315)\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 2 Aug 2022 17:04:16 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On 2022-08-02 17:04:16 -0500, Justin Pryzby wrote:\n> I got this interesting looking thing.\n> \n> ==11628== Invalid write of size 8\n> ==11628== at 0x1D12B3A: smgrsetowner (smgr.c:213)\n> ==11628== by 0x1C7C224: RelationGetSmgr (rel.h:572)\n> ==11628== by 0x1C7C224: RelationCopyStorageUsingBuffer (bufmgr.c:3725)\n> ==11628== by 0x1C7C7A6: CreateAndCopyRelationData (bufmgr.c:3817)\n> ==11628== by 0x14A4518: CreateDatabaseUsingWalLog (dbcommands.c:221)\n> ==11628== by 0x14AB009: createdb (dbcommands.c:1393)\n> ==11628== by 0x1D2B9AF: standard_ProcessUtility (utility.c:776)\n> ==11628== by 0x1D2C46A: ProcessUtility (utility.c:530)\n> ==11628== by 0x1D265F5: PortalRunUtility (pquery.c:1158)\n> ==11628== by 0x1D27089: PortalRunMulti (pquery.c:1315)\n> ==11628== by 0x1D27A7C: PortalRun (pquery.c:791)\n> ==11628== by 0x1D1E33D: exec_simple_query (postgres.c:1243)\n> ==11628== by 0x1D218BC: PostgresMain (postgres.c:4505)\n> ==11628== Address 0x1025bc18 is 2,712 bytes inside a block of size 8,192 free'd\n> ==11628== at 0x4033A3F: free (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)\n> ==11628== by 0x217D7C2: AllocSetReset (aset.c:608)\n> ==11628== by 0x219B57A: MemoryContextResetOnly (mcxt.c:181)\n> ==11628== by 0x217DBD5: 
AllocSetDelete (aset.c:654)\n> ==11628== by 0x219C1EC: MemoryContextDelete (mcxt.c:252)\n> ==11628== by 0x21A109F: PortalDrop (portalmem.c:596)\n> ==11628== by 0x21A269C: AtCleanup_Portals (portalmem.c:907)\n> ==11628== by 0x11FEAB1: CleanupTransaction (xact.c:2890)\n> ==11628== by 0x120A74C: AbortCurrentTransaction (xact.c:3328)\n> ==11628== by 0x1D2158C: PostgresMain (postgres.c:4232)\n> ==11628== by 0x1B15DB5: BackendRun (postmaster.c:4490)\n> ==11628== by 0x1B1D799: BackendStartup (postmaster.c:4218)\n> ==11628== Block was alloc'd at\n> ==11628== at 0x40327F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)\n> ==11628== by 0x217F0DC: AllocSetAlloc (aset.c:920)\n> ==11628== by 0x219E4D2: palloc (mcxt.c:1082)\n> ==11628== by 0x14A14BE: ScanSourceDatabasePgClassTuple (dbcommands.c:444)\n> ==11628== by 0x14A1CD8: ScanSourceDatabasePgClassPage (dbcommands.c:384)\n> ==11628== by 0x14A20BF: ScanSourceDatabasePgClass (dbcommands.c:322)\n> ==11628== by 0x14A4348: CreateDatabaseUsingWalLog (dbcommands.c:177)\n> ==11628== by 0x14AB009: createdb (dbcommands.c:1393)\n> ==11628== by 0x1D2B9AF: standard_ProcessUtility (utility.c:776)\n> ==11628== by 0x1D2C46A: ProcessUtility (utility.c:530)\n> ==11628== by 0x1D265F5: PortalRunUtility (pquery.c:1158)\n> ==11628== by 0x1D27089: PortalRunMulti (pquery.c:1315)\n\nIck. That looks like somehow we end up with smgr entries still pointing to\nfake relcache entries, created in a prior attempt at create database.\n\nLooks like you'd need error trapping to call FreeFakeRelcacheEntry() (or just\nsmgrclearowner()) in case of error.\n\nOr perhaps we can instead prevent the fake relcache entry being set as the\nowner in the first place?\n\nWhy do we even need fake relcache entries here? Looks like all that they're\nused for is a bunch of RelationGetSmgr() calls? Can't we instead just pass the\nrnode to smgropen()? 
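To make the hazard concrete, here's a toy model of the owner back-pointer pattern — invented names, plain standalone C, not PostgreSQL code; the real code keeps a pointer into the owning (possibly fake) relcache entry's memory:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model (invented names): a long-lived cache entry that keeps a
 * back-pointer into its owner's memory.  If the owner is freed without
 * clearing the link, a later set_owner() writes through a dangling
 * pointer -- the same shape as the invalid write valgrind reported. */
typedef struct EntryLike
{
	struct EntryLike **owner;	/* points at the owner's slot, or NULL */
} EntryLike;

void
set_owner(EntryLike *entry, EntryLike **owner_slot)
{
	/* unhook from any previous owner first */
	if (entry->owner && entry->owner != owner_slot)
		*entry->owner = NULL;
	entry->owner = owner_slot;
	*owner_slot = entry;		/* this store is what lands in freed memory */
}

void
clear_owner(EntryLike *entry)
{
	/* what freeing the owner would need to do first */
	if (entry->owner)
		*entry->owner = NULL;
	entry->owner = NULL;
}
```

Either the owner clears the link before its memory goes away, or the link is never set in the first place.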
Given that we're doing that once for every buffer in the\nbody of RelationCopyStorageUsingBuffer(), doing it in a bunch of other\nless-frequent places can't be a problem.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 2 Aug 2022 15:23:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Aug 3, 2022 at 3:53 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-08-02 17:04:16 -0500, Justin Pryzby wrote:\n> > I got this interesting looking thing.\n> >\n> > ==11628== Invalid write of size 8\n> > ==11628== at 0x1D12B3A: smgrsetowner (smgr.c:213)\n> > ==11628== by 0x1C7C224: RelationGetSmgr (rel.h:572)\n> > ==11628== by 0x1C7C224: RelationCopyStorageUsingBuffer (bufmgr.c:3725)\n> > ==11628== by 0x1C7C7A6: CreateAndCopyRelationData (bufmgr.c:3817)\n> > ==11628== by 0x14A4518: CreateDatabaseUsingWalLog (dbcommands.c:221)\n> > ==11628== by 0x14AB009: createdb (dbcommands.c:1393)\n> > ==11628== by 0x1D2B9AF: standard_ProcessUtility (utility.c:776)\n> > ==11628== by 0x1D2C46A: ProcessUtility (utility.c:530)\n> > ==11628== by 0x1D265F5: PortalRunUtility (pquery.c:1158)\n> > ==11628== by 0x1D27089: PortalRunMulti (pquery.c:1315)\n> > ==11628== by 0x1D27A7C: PortalRun (pquery.c:791)\n> > ==11628== by 0x1D1E33D: exec_simple_query (postgres.c:1243)\n> > ==11628== by 0x1D218BC: PostgresMain (postgres.c:4505)\n> > ==11628== Address 0x1025bc18 is 2,712 bytes inside a block of size 8,192 free'd\n> > ==11628== at 0x4033A3F: free (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)\n> > ==11628== by 0x217D7C2: AllocSetReset (aset.c:608)\n> > ==11628== by 0x219B57A: MemoryContextResetOnly (mcxt.c:181)\n> > ==11628== by 0x217DBD5: AllocSetDelete (aset.c:654)\n> > ==11628== by 0x219C1EC: MemoryContextDelete (mcxt.c:252)\n> > ==11628== by 0x21A109F: PortalDrop (portalmem.c:596)\n> > ==11628== by 0x21A269C: 
AtCleanup_Portals (portalmem.c:907)\n> > ==11628== by 0x11FEAB1: CleanupTransaction (xact.c:2890)\n> > ==11628== by 0x120A74C: AbortCurrentTransaction (xact.c:3328)\n> > ==11628== by 0x1D2158C: PostgresMain (postgres.c:4232)\n> > ==11628== by 0x1B15DB5: BackendRun (postmaster.c:4490)\n> > ==11628== by 0x1B1D799: BackendStartup (postmaster.c:4218)\n> > ==11628== Block was alloc'd at\n> > ==11628== at 0x40327F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)\n> > ==11628== by 0x217F0DC: AllocSetAlloc (aset.c:920)\n> > ==11628== by 0x219E4D2: palloc (mcxt.c:1082)\n> > ==11628== by 0x14A14BE: ScanSourceDatabasePgClassTuple (dbcommands.c:444)\n> > ==11628== by 0x14A1CD8: ScanSourceDatabasePgClassPage (dbcommands.c:384)\n> > ==11628== by 0x14A20BF: ScanSourceDatabasePgClass (dbcommands.c:322)\n> > ==11628== by 0x14A4348: CreateDatabaseUsingWalLog (dbcommands.c:177)\n> > ==11628== by 0x14AB009: createdb (dbcommands.c:1393)\n> > ==11628== by 0x1D2B9AF: standard_ProcessUtility (utility.c:776)\n> > ==11628== by 0x1D2C46A: ProcessUtility (utility.c:530)\n> > ==11628== by 0x1D265F5: PortalRunUtility (pquery.c:1158)\n> > ==11628== by 0x1D27089: PortalRunMulti (pquery.c:1315)\n>\n> Ick. That looks like somehow we end up with smgr entries still pointing to\n> fake relcache entries, created in a prior attempt at create database.\n\nThe surprising thing is how the smgr entry survived the transaction\nabort; I mean, AtEOXact_SMgr should have closed the smgr and should\nhave removed it from the smgr cache.\n\n> Looks like you'd need error trapping to call FreeFakeRelcacheEntry() (or just\n> smgrclearowner()) in case of error.\n>\n> Or perhaps we can instead prevent the fake relcache entry being set as the\n> owner in the first place?\n>\n> Why do we even need fake relcache entries here? Looks like all that they're\n> used for is a bunch of RelationGetSmgr() calls? Can't we instead just pass the\n> rnode to smgropen()? 
Given that we're doing that once for every buffer in the\n> body of RelationCopyStorageUsingBuffer(), doing it in a bunch of other\n> less-frequent places can't be a problem.\n> can't\n\nI think in some of the previous versions of the patch we were using\nsmgropen() but changed it so that we do not reuse the smgr after it\ngets removed during interrupt processing, see discussion here[1]\n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BTgmoYKovODW2Y7rQmmRFaKu445p9uAahjpgfbY8eyeL07BXA%40mail.gmail.com\n\n From the Valgrind report, it is clear that we are getting the smgr\nentry whose smgr->smgr_owner is pointing into the fake relcache entry.\nSo I am investigating further how it is possible for the smgr created\nduring a previous create database attempt to survive beyond abort\ntransaction.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Aug 2022 11:28:03 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Aug 3, 2022 at 11:28 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Aug 3, 2022 at 3:53 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2022-08-02 17:04:16 -0500, Justin Pryzby wrote:\n> > > I got this interesting looking thing.\n> > >\n> > > ==11628== Invalid write of size 8\n> > > ==11628== at 0x1D12B3A: smgrsetowner (smgr.c:213)\n> > > ==11628== by 0x1C7C224: RelationGetSmgr (rel.h:572)\n> > > ==11628== by 0x1C7C224: RelationCopyStorageUsingBuffer (bufmgr.c:3725)\n> > > ==11628== by 0x1C7C7A6: CreateAndCopyRelationData (bufmgr.c:3817)\n> > > ==11628== by 0x14A4518: CreateDatabaseUsingWalLog (dbcommands.c:221)\n> > > ==11628== by 0x14AB009: createdb (dbcommands.c:1393)\n> > > ==11628== by 0x1D2B9AF: standard_ProcessUtility (utility.c:776)\n> > > ==11628== by 0x1D2C46A: ProcessUtility (utility.c:530)\n> > > ==11628== by 0x1D265F5: PortalRunUtility 
(pquery.c:1158)\n> > > ==11628== by 0x1D27089: PortalRunMulti (pquery.c:1315)\n> > > ==11628== by 0x1D27A7C: PortalRun (pquery.c:791)\n> > > ==11628== by 0x1D1E33D: exec_simple_query (postgres.c:1243)\n> > > ==11628== by 0x1D218BC: PostgresMain (postgres.c:4505)\n> > > ==11628== Address 0x1025bc18 is 2,712 bytes inside a block of size 8,192 free'd\n> > > ==11628== at 0x4033A3F: free (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)\n> > > ==11628== by 0x217D7C2: AllocSetReset (aset.c:608)\n> > > ==11628== by 0x219B57A: MemoryContextResetOnly (mcxt.c:181)\n> > > ==11628== by 0x217DBD5: AllocSetDelete (aset.c:654)\n> > > ==11628== by 0x219C1EC: MemoryContextDelete (mcxt.c:252)\n> > > ==11628== by 0x21A109F: PortalDrop (portalmem.c:596)\n> > > ==11628== by 0x21A269C: AtCleanup_Portals (portalmem.c:907)\n> > > ==11628== by 0x11FEAB1: CleanupTransaction (xact.c:2890)\n> > > ==11628== by 0x120A74C: AbortCurrentTransaction (xact.c:3328)\n> > > ==11628== by 0x1D2158C: PostgresMain (postgres.c:4232)\n> > > ==11628== by 0x1B15DB5: BackendRun (postmaster.c:4490)\n> > > ==11628== by 0x1B1D799: BackendStartup (postmaster.c:4218)\n> > > ==11628== Block was alloc'd at\n> > > ==11628== at 0x40327F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)\n> > > ==11628== by 0x217F0DC: AllocSetAlloc (aset.c:920)\n> > > ==11628== by 0x219E4D2: palloc (mcxt.c:1082)\n> > > ==11628== by 0x14A14BE: ScanSourceDatabasePgClassTuple (dbcommands.c:444)\n> > > ==11628== by 0x14A1CD8: ScanSourceDatabasePgClassPage (dbcommands.c:384)\n> > > ==11628== by 0x14A20BF: ScanSourceDatabasePgClass (dbcommands.c:322)\n> > > ==11628== by 0x14A4348: CreateDatabaseUsingWalLog (dbcommands.c:177)\n> > > ==11628== by 0x14AB009: createdb (dbcommands.c:1393)\n> > > ==11628== by 0x1D2B9AF: standard_ProcessUtility (utility.c:776)\n> > > ==11628== by 0x1D2C46A: ProcessUtility (utility.c:530)\n> > > ==11628== by 0x1D265F5: PortalRunUtility (pquery.c:1158)\n> > > 
==11628== by 0x1D27089: PortalRunMulti (pquery.c:1315)\n> >\n> > Ick. That looks like somehow we end up with smgr entries still pointing to\n> > fake relcache entries, created in a prior attempt at create database.\n>\n> The surprising thing is how the smgr entry survived the transaction\n> abort; I mean, AtEOXact_SMgr should have closed the smgr and should\n> have removed it from the smgr cache.\n>\n\nOkay, so AtEOXact_SMgr will only get rid of unowned smgrs, and ours\nare owned by a fake relcache entry whose lifetime is just the portal\nmemory context, which will go away at the transaction end. So as\nAndres suggested, the options could be that a) we catch the error and\ndo FreeFakeRelcacheEntry, or b) directly use smgropen instead of\nRelationGetSmgr, because we actually do not want the owner to be set\nfor these smgrs.\n\nI think option b) looks better to me, I will prepare a patch and test\nwhether the error goes away with that or not.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Aug 2022 12:00:16 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Aug 3, 2022 at 12:00 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n\n> Okay, so AtEOXact_SMgr will only get rid of unowned smgrs, and ours\n> are owned by a fake relcache entry whose lifetime is just the portal\n> memory context, which will go away at the transaction end. So as\n> Andres suggested, the options could be that a) we catch the error and\n> do FreeFakeRelcacheEntry, or b) directly use smgropen instead of\n> RelationGetSmgr, because we actually do not want the owner to be set\n> for these smgrs.\n>\n> I think option b) looks better to me, I will prepare a patch and test\n> whether the error goes away with that or not.\n>\n\nHere is the patch which directly uses smgropen instead of using a fake\nrelcache entry. 
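(Roughly, the shape of the change, as a simplified standalone sketch — invented names, not the actual patch: since an smgropen()-style lookup is cheap and idempotent, we can re-derive the entry from its key at each use instead of caching the pointer:)

```c
#include <assert.h>

/* Simplified standalone sketch (invented names, not the actual patch):
 * an smgropen()-style lookup is cheap and idempotent, so the copy loop
 * can re-derive the entry per block instead of trusting a pointer saved
 * earlier, which interrupt processing may have invalidated. */
#define NENTRIES 16

typedef struct
{
	int			id;
	int			opens;
} EntryLike;

static EntryLike cache[NENTRIES];

/* analogous to smgropen(): look up or create by key, always valid to call */
EntryLike *
open_entry(int id)
{
	EntryLike  *e = &cache[id % NENTRIES];

	e->id = id;
	e->opens++;
	return e;
}

/* copy loop in the style of RelationCopyStorageUsingBuffer() */
int
copy_blocks(int src_id, int dst_id, int nblocks)
{
	int			copied;

	for (copied = 0; copied < nblocks; copied++)
	{
		EntryLike  *src = open_entry(src_id);
		EntryLike  *dst = open_entry(dst_id);

		(void) src;				/* read one block from src ... */
		(void) dst;				/* ... and write it to dst */
	}
	return copied;
}
```

The repeated lookup costs almost nothing compared to the per-block I/O, so nothing ever needs to hold the pointer across an interruptible operation.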
We don't preserve the smgr pointer; whenever it is\nrequired we call smgropen() again.\n\nWith this patch the problem is resolved for me, at least the case I was\nable to reproduce. I was able to reproduce the WARNING messages that\nRobert got as well as the valgrind error that Justin got, and with this\npatch both are resolved.\n\n@Justin can you help in verifying the original issue?\n\nAnother alternative could be to continue using the fake relcache entry\nbut, instead of RelationGetSmgr(), create some new function which\ndoesn't set the owner in the smgr.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 3 Aug 2022 13:41:30 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Aug 3, 2022 at 1:41 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Aug 3, 2022 at 12:00 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n>\n> > Okay, so AtEOXact_SMgr will only get rid of unowned smgrs, and ours\n> > are owned by a fake relcache entry whose lifetime is just the portal\n> > memory context, which will go away at the transaction end. So as\n> > Andres suggested, the options could be that a) we catch the error and\n> > do FreeFakeRelcacheEntry, or b) directly use smgropen instead of\n> > RelationGetSmgr, because we actually do not want the owner to be set\n> > for these smgrs.\n> >\n> > I think option b) looks better to me, I will prepare a patch and test\n> > whether the error goes away with that or not.\n> >\n>\n> Here is the patch which directly uses smgropen instead of using a fake\n> relcache entry. We don't preserve the smgr pointer; whenever it is\n> required we call smgropen() again.\n>\n> With this patch the problem is resolved for me, at least the case I was\n> able to reproduce. 
I was able to reproduce the WARNING messages that\n> Robert got as well as the valgrind error that Justin got and with this\n> patch both are resolved.\n\nAnother version of the patch which closes the smgr at the end using\nsmgrcloserellocator() and I have also added a commit message.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 3 Aug 2022 16:45:23 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Aug 3, 2022 at 7:15 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Another version of the patch which closes the smgr at the end using\n> smgrcloserellocator() and I have also added a commit message.\n\nHmm, but didn't we decide against doing it that way intentionally? The\ncomment you're deleting says \"If we didn't do this and used the smgr\nlayer directly, we would have to worry about invalidations.\"\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Aug 2022 11:51:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, 3 Aug 2022 at 9:22 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Aug 3, 2022 at 7:15 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Another version of the patch which closes the smgr at the end using\n> > smgrcloserellocator() and I have also added a commit message.\n>\n> Hmm, but didn't we decide against doing it that way intentionally? The\n> comment you're deleting says \"If we didn't do this and used the smgr\n> layer directly, we would have to worry about invalidations.\"\n\n\nI think we only need to worry if we keep the smgr reference around and try\nto reuse it. 
But in this patch I am not keeping the reference to the smgr.\n\n—\nDilip\n\n> --\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 3 Aug 2022 21:25:10 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Aug 03, 2022 at 04:45:23PM +0530, Dilip Kumar wrote:\n> Another version of the patch which closes the smgr at the end using\n> smgrcloserellocator() and I have also added a commit message.\n\nThanks for providing a patch.\nThis seems to fix the second problem with accessing freed memory.\n\nBut I reproduced the first problem with a handful of tries interrupting the\nwhile loop:\n\n2022-08-03 10:39:50.129 CDT client backend[5530] [unknown] PANIC: could not open critical system index 2662\n\nIn the failure, when trying to connect to the new \"a\" DB, it does this:\n\n[pid 10700] openat(AT_FDCWD, \"base/17003/pg_filenode.map\", O_RDONLY) = 11\n[pid 10700] read(11, 
\"\\27'Y\\0\\21\\0\\0\\0\\353\\4\\0\\0\\353\\4\\0\\0\\341\\4\\0\\0\\341\\4\\0\\0\\347\\4\\0\\0\\347\\4\\0\\0\\337\\4\\0\\0\\337\\4\\0\\0\\24\\v\\0\\0\\24\\v\\0\\0\\25\\v\\0\\0\\25\\v\\0\\0K\\20\\0\\0K\\20\\0\\0L\\20\\0\\0L\\20\\0\\0\\202\\n\\0\\0\\202\\n\\0\\0\\203\\n\\0\\0\\203\\n\\0\\0\\217\\n\\0\\0\\217\\n\\0\\0\\220\\n\\0\\0\\220\\n\\0\\0b\\n\\0\\0b\\n\\0\\0c\\n\\0\\0c\\n\\0\\0f\\n\\0\\0f\\n\\0\\0g\\n\\0\\0g\\n\\0\\0\\177\\r\\0\\0\\177\\r\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\362\\366\\252\\337\", 524) = 524\n[pid 10700] close(11) = 0\n[pid 10700] openat(AT_FDCWD, \"base/17003/pg_internal.init\", O_RDONLY) = -1 ENOENT (No such file or directory)\n[pid 10700] openat(AT_FDCWD, \"base/17003/1259\", O_RDWR) = 11\n[pid 10700] lseek(11, 0, SEEK_END) = 106496\n[pid 10700] lseek(11, 0, SEEK_END) = 106496\n\nAnd then reads nothing but zero bytes from FD 11 (rel 1259/pg_class)\n\nSo far, I 
haven't succeeded in eliciting anything useful from valgrind.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 3 Aug 2022 11:02:00 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Aug 3, 2022 at 9:32 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Aug 03, 2022 at 04:45:23PM +0530, Dilip Kumar wrote:\n> > Another version of the patch which closes the smgr at the end using\n> > smgrcloserellocator() and I have also added a commit message.\n>\n> Thanks for providing a patch.\n> This seems to fix the second problem with accessing freed memory.\n\nThanks for the confirmation.\n\n> But I reproduced the first problem with a handful of tries interrupting the\n> while loop:\n>\n> 2022-08-03 10:39:50.129 CDT client backend[5530] [unknown] PANIC: could not open critical system index 2662\n>\n> In the failure, when trying to connect to the new \"a\" DB, it does this:\n>\n> [pid 10700] openat(AT_FDCWD, \"base/17003/pg_filenode.map\", O_RDONLY) = 11\n> [pid 10700] read(11, 
\"\\27'Y\\0\\21\\0\\0\\0\\353\\4\\0\\0\\353\\4\\0\\0\\341\\4\\0\\0\\341\\4\\0\\0\\347\\4\\0\\0\\347\\4\\0\\0\\337\\4\\0\\0\\337\\4\\0\\0\\24\\v\\0\\0\\24\\v\\0\\0\\25\\v\\0\\0\\25\\v\\0\\0K\\20\\0\\0K\\20\\0\\0L\\20\\0\\0L\\20\\0\\0\\202\\n\\0\\0\\202\\n\\0\\0\\203\\n\\0\\0\\203\\n\\0\\0\\217\\n\\0\\0\\217\\n\\0\\0\\220\\n\\0\\0\\220\\n\\0\\0b\\n\\0\\0b\\n\\0\\0c\\n\\0\\0c\\n\\0\\0f\\n\\0\\0f\\n\\0\\0g\\n\\0\\0g\\n\\0\\0\\177\\r\\0\\0\\177\\r\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\362\\366\\252\\337\", 524) = 524\n> [pid 10700] close(11) = 0\n> [pid 10700] openat(AT_FDCWD, \"base/17003/pg_internal.init\", O_RDONLY) = -1 ENOENT (No such file or directory)\n> [pid 10700] openat(AT_FDCWD, \"base/17003/1259\", O_RDWR) = 11\n> [pid 10700] lseek(11, 0, SEEK_END) = 106496\n> [pid 10700] lseek(11, 0, SEEK_END) = 106496\n>\n> And then reads nothing but zero bytes from FD 11 (rel 1259/pg_class)\n>\n> 
So far, I haven't succeeded in eliciting anything useful from valgrind.\n\nNow, I've reproduced the problem under valgrind, but it doesn't show anything\nuseful:\n\npryzbyj@pryzbyj:~$ while :; do psql -h /tmp template1 -c \"DROP DATABASE a\" -c \"CREATE DATABASE a TEMPLATE postgres STRATEGY wal_log\"; done\nERROR: database \"a\" does not exist\nCREATE DATABASE\n^CCancel request sent\nERROR: canceling statement due to user request\nERROR: database \"a\" already exists\n^C\npryzbyj@pryzbyj:~$ ^C\npryzbyj@pryzbyj:~$ ^C\npryzbyj@pryzbyj:~$ ^C\npryzbyj@pryzbyj:~$ psql -h /tmp a -c ''\n2022-08-03 11:57:39.178 CDT client backend[31321] [unknown] PANIC: could not open critical system index 2662\npsql: error: falló la conexión al servidor en el socket «/tmp/.s.PGSQL.5432»: 
PANIC: could not open critical system index 2662\n\n\nOn the server process, nothing interesting but the backtrace (the error was\nbefore this, while writing the relation file, but there's nothing suspicious).\n\n2022-08-03 11:08:06.628 CDT client backend[2841] [unknown] PANIC: could not open critical system index 2662\n==2841==\n==2841== Process terminating with default action of signal 6 (SIGABRT)\n==2841== at 0x5FBBE97: raise (raise.c:51)\n==2841== by 0x5FBD800: abort (abort.c:79)\n==2841== by 0x2118DEF: errfinish (elog.c:675)\n==2841== by 0x20F6002: load_critical_index (relcache.c:4328)\n==2841== by 0x20F727A: RelationCacheInitializePhase3 (relcache.c:4103)\n==2841== by 0x213DFA5: InitPostgres (postinit.c:1087)\n==2841== by 0x1D20D72: PostgresMain (postgres.c:4081)\n==2841== by 0x1B15CFD: BackendRun (postmaster.c:4490)\n==2841== by 0x1B1D6E1: BackendStartup (postmaster.c:4218)\n==2841== by 0x1B1DD59: ServerLoop (postmaster.c:1808)\n==2841== by 0x1B1F86D: PostmasterMain (postmaster.c:1480)\n==2841== by 0x17B7150: main (main.c:197)\n\nBelow, I reproduced it without valgrind (and without LANG):\n\npryzbyj@pryzbyj:~/src/postgres$ while :; do psql -qh /tmp template1 -c \"DROP DATABASE a\" -c \"CREATE DATABASE a TEMPLATE postgres STRATEGY wal_log\"; done\n2022-08-03 11:59:52.675 CDT checkpointer[1881] LOG: checkpoint starting: immediate force wait\n2022-08-03 11:59:52.862 CDT checkpointer[1881] LOG: checkpoint complete: wrote 4 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.045 s, sync=0.038 s, total=0.188 s; sync files=3, longest=0.019 s, average=0.013 s; distance=3 kB, estimate=3 kB; lsn=0/24862508, redo lsn=0/248624D0\n2022-08-03 11:59:53.213 CDT checkpointer[1881] LOG: checkpoint starting: immediate force wait\n2022-08-03 11:59:53.409 CDT checkpointer[1881] LOG: checkpoint complete: wrote 4 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.030 s, sync=0.054 s, total=0.196 s; sync files=4, longest=0.029 s, average=0.014 s; 
distance=4042 kB, estimate=4042 kB; lsn=0/24C54D88, redo lsn=0/24C54D50\n^CCancel request sent\n2022-08-03 11:59:53.750 CDT checkpointer[1881] LOG: checkpoint starting: immediate force wait\n2022-08-03 11:59:53.930 CDT checkpointer[1881] LOG: checkpoint complete: wrote 4 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.029 s, sync=0.027 s, total=0.181 s; sync files=4, longest=0.022 s, average=0.007 s; distance=4042 kB, estimate=4042 kB; lsn=0/250476D0, redo lsn=0/25047698\n2022-08-03 11:59:54.270 CDT checkpointer[1881] LOG: checkpoint starting: immediate force wait\n^C2022-08-03 11:59:54.294 CDT client backend[1903] psql ERROR: canceling statement due to user request\n2022-08-03 11:59:54.294 CDT client backend[1903] psql STATEMENT: DROP DATABASE a\nCancel request sent\nERROR: canceling statement due to user request\n2022-08-03 11:59:54.296 CDT client backend[1903] psql ERROR: database \"a\" already exists\n2022-08-03 11:59:54.296 CDT client backend[1903] psql STATEMENT: CREATE DATABASE a TEMPLATE postgres STRATEGY wal_log\nERROR: database \"a\" already exists\n^C\npryzbyj@pryzbyj:~/src/postgres$ ^C\npryzbyj@pryzbyj:~/src/postgres$ ^C\npryzbyj@pryzbyj:~/src/postgres$ 2022-08-03 11:59:54.427 CDT checkpointer[1881] LOG: checkpoint complete: wrote 4 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.024 s, sync=0.036 s, total=0.158 s; sync files=4, longest=0.027 s, average=0.009 s; distance=4042 kB, estimate=4042 kB; lsn=0/2543A108, redo lsn=0/2543A0A8\n^C\npryzbyj@pryzbyj:~/src/postgres$ ^C\npryzbyj@pryzbyj:~/src/postgres$ ^C\npryzbyj@pryzbyj:~/src/postgres$ psql -h /tmp a -c '' 2022-08-03 11:59:56.617 CDT client backend[1914] [unknown] PANIC: could not open critical system index 2662\n\n\n", "msg_date": "Wed, 3 Aug 2022 12:01:18 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-08-03 
12:01:18 -0500, Justin Pryzby wrote:\n> Now, I've reproduced the problem under valgrind, but it doesn't show anything\n> useful\n\nYea, that looks like an issue on a different level.\n\n> \n> pryzbyj@pryzbyj:~$ while :; do psql -h /tmp template1 -c \"DROP DATABASE a\" -c \"CREATE DATABASE a TEMPLATE postgres STRATEGY wal_log\"; done\n> ERROR: database \"a\" does not exist\n> CREATE DATABASE\n> ^CCancel request sent\n> ERROR: canceling statement due to user request\n> ERROR: database \"a\" already exists\n> ^C\n\nHm. This looks more like an issue of DROP DATABASE not being interruptible. I\nsuspect this isn't actually related to STRATEGY wal_log and could likely be\nreproduced in older versions too.\n\nIt's pretty obvious that dropdb() isn't safe against being interrupted. We\ndelete the data before we have committed the deletion of the pg_database\nentry.\n\nSeems like we should hold interrupts across the remove_dbtablespaces() until\n*after* we've committed the transaction?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 3 Aug 2022 11:26:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Aug 03, 2022 at 11:26:43AM -0700, Andres Freund wrote:\n> Hm. This looks more like an issue of DROP DATABASE not being interruptible. 
I\n> suspect this isn't actually related to STRATEGY wal_log and could likely be\n> reproduced in older versions too.\n\nI couldn't reproduce it with file_copy, but my recipe isn't exactly reliable.\nThat may just mean that it's easier to hit now.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 3 Aug 2022 13:48:57 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Aug 4, 2022 at 12:18 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Aug 03, 2022 at 11:26:43AM -0700, Andres Freund wrote:\n> > Hm. This looks more like an issue of DROP DATABASE not being interruptible. I\n> > suspect this isn't actually related to STRATEGY wal_log and could likely be\n> > reproduced in older versions too.\n>\n> I couldn't reproduce it with file_copy, but my recipe isn't exactly reliable.\n> That may just mean that it's easier to hit now.\n\nI think this looks like a problem with DROP DATABASE, but IMHO you are\nseeing this behavior only when the database is created using WAL_LOG,\nbecause in this strategy we use shared buffers to write the destination\ndatabase pages, and some of the dirty buffers and sync requests might\nstill be pending. Now when we try to drop the database, it drops all\nthe dirty buffers and all pending sync requests, and then, before it\nactually removes the directory, it gets interrupted; now you see a\ndatabase directory on disk which is partially corrupted. 
See the below\nsequence of drop database:\n\n\ndropdb()\n{\n...\nDropDatabaseBuffers(db_id);\n...\nForgetDatabaseSyncRequests(db_id);\n...\nRequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_FORCE | CHECKPOINT_WAIT);\n\nWaitForProcSignalBarrier(EmitProcSignalBarrier(PROCSIGNAL_BARRIER_SMGRRELEASE));\n -- Inside this it can process the cancel query and get interrupted\nremove_dbtablespaces(db_id);\n..\n}\n\nI reproduced the same error by inducing an error just before\nWaitForProcSignalBarrier.\n\npostgres[14968]=# CREATE DATABASE a STRATEGY WAL_LOG ; drop database a;\nCREATE DATABASE\nERROR: XX000: test error\nLOCATION: dropdb, dbcommands.c:1684\npostgres[14968]=# \\c a\nconnection to server on socket \"/tmp/.s.PGSQL.5432\" failed: PANIC:\ncould not open critical system index 2662\nPrevious connection kept\npostgres[14968]=#\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Aug 2022 09:41:14 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Aug 4, 2022 at 9:41 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Aug 4, 2022 at 12:18 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Wed, Aug 03, 2022 at 11:26:43AM -0700, Andres Freund wrote:\n> > > Hm. This looks more like an issue of DROP DATABASE not being interruptible. 
I\n> > > suspect this isn't actually related to STRATEGY wal_log and could likely be\n> > > reproduced in older versions too.\n> >\n> > I couldn't reproduce it with file_copy, but my recipe isn't exactly reliable.\n> > That may just mean that it's easier to hit now.\n>\n> I think this looks like a problem with drop db but IMHO you are seeing\n> this behavior only when a database is created using WAL LOG because in\n> this strategy we are using buffers to write the destination database\n> pages and some of the dirty buffers and sync requests might still be\n> pending. And now when we try to drop the database it drops all the\n> dirty buffers and all pending sync requests and then before it\n> actually removes the directory it gets interrupted and now you see the\n> database directory on disk which is partially corrupted. See below\n> sequence of drop database\n>\n>\n> dropdb()\n> {\n> ...\n> DropDatabaseBuffers(db_id);\n> ...\n> ForgetDatabaseSyncRequests(db_id);\n> ...\n> RequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_FORCE | CHECKPOINT_WAIT);\n>\n> WaitForProcSignalBarrier(EmitProcSignalBarrier(PROCSIGNAL_BARRIER_SMGRRELEASE));\n> -- Inside this it can process the cancel query and get interrupted\n> remove_dbtablespaces(db_id);\n> ..\n> }\n>\n> I reproduced the same error by inducing error just before\n> WaitForProcSignalBarrier.\n>\n> postgres[14968]=# CREATE DATABASE a STRATEGY WAL_LOG ; drop database a;\n> CREATE DATABASE\n> ERROR: XX000: test error\n> LOCATION: dropdb, dbcommands.c:1684\n> postgres[14968]=# \\c a\n> connection to server on socket \"/tmp/.s.PGSQL.5432\" failed: PANIC:\n> could not open critical system index 2662\n> Previous connection kept\n> postgres[14968]=#\n\nSo basically, from this we can say it is completely a problem with\ndrop databases, I mean I can produce any behavior by interrupting drop\ndatabase\n1. 
If we created some tables/inserted data and the drop database got\ncancelled, it might still leave behind a database directory, but those\nobjects are lost.\n2. Or you can even drop the database directory and then get cancelled\nbefore deleting the pg_database entry; then too you will end up with a\ncorrupted database, no matter whether you created it with WAL_LOG\nor FILE_COPY.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Aug 2022 16:38:35 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Aug 3, 2022 at 7:15 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Another version of the patch which closes the smgr at the end using\n> smgrcloserellocator() and I have also added a commit message.\n\nI have reviewed this patch and I don't see a problem with it. However,\nit would be nice if Andres or someone else who understands this area\nwell (Tom? Thomas?) 
would also review it, because I also reviewed\nwhat's in the tree now and that turns out to be buggy, which leads me\nto conclude that I don't understand this area as well as would be\ndesirable.\n\nI'm inclined to hold off on committing this until next week, not only\nfor that reason, but also because there's a wrap planned on Monday,\nand committing anything now seems like it might have too much of a\nrisk of upsetting that plan.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Aug 2022 16:07:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Aug 04, 2022 at 04:07:01PM -0400, Robert Haas wrote:\n> I'm inclined to hold off on committing this until next week, not only\n\n+1\n\nI don't see any reason to hurry to fix problems that occur when DROP DATABASE\nis interrupted.\n\nSorry to beat up your patches so much and for such crappy test cases^C\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 4 Aug 2022 15:47:07 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Thu, Aug 04, 2022 at 04:07:01PM -0400, Robert Haas wrote:\n>> I'm inclined to hold off on committing this until next week, not only\n\n> +1\n\n+1 ... 
there are some other v15 open items that I don't think we'll\nsee fixed for beta3, either.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Aug 2022 17:26:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-08-03 16:45:23 +0530, Dilip Kumar wrote:\n> Another version of the patch which closes the smgr at the end using\n> smgrcloserellocator() and I have also added a commit message.\n\nWhat's the motivation behind the explicit close?\n\n\n> @@ -258,8 +258,8 @@ ScanSourceDatabasePgClass(Oid tbid, Oid dbid, char *srcpath)\n> \tPage\t\tpage;\n> \tList\t *rlocatorlist = NIL;\n> \tLockRelId\trelid;\n> -\tRelation\trel;\n> \tSnapshot\tsnapshot;\n> +\tSMgrRelation\tsmgr;\n> \tBufferAccessStrategy bstrategy;\n> \n> \t/* Get pg_class relfilenumber. */\n> @@ -276,16 +276,9 @@ ScanSourceDatabasePgClass(Oid tbid, Oid dbid, char *srcpath)\n> \trlocator.dbOid = dbid;\n> \trlocator.relNumber = relfilenumber;\n> \n> -\t/*\n> -\t * We can't use a real relcache entry for a relation in some other\n> -\t * database, but since we're only going to access the fields related to\n> -\t * physical storage, a fake one is good enough. If we didn't do this and\n> -\t * used the smgr layer directly, we would have to worry about\n> -\t * invalidations.\n> -\t */\n> -\trel = CreateFakeRelcacheEntry(rlocator);\n> -\tnblocks = smgrnblocks(RelationGetSmgr(rel), MAIN_FORKNUM);\n> -\tFreeFakeRelcacheEntry(rel);\n> +\tsmgr = smgropen(rlocator, InvalidBackendId);\n> +\tnblocks = smgrnblocks(smgr, MAIN_FORKNUM);\n> +\tsmgrclose(smgr);\n\nWhy are you opening and then closing again? 
Part of the motivation for the\nquestion is that a local SMgrRelation variable may lead to it being used\nfurther, opening up interrupt processing issues.\n\n\n> +\trlocator.locator = src_rlocator;\n> +\tsmgrcloserellocator(rlocator);\n> +\n> +\trlocator.locator = dst_rlocator;\n> +\tsmgrcloserellocator(rlocator);\n\nAs mentioned above, it's not clear to me why this is now done...\n\nOtherwise looks good to me.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Aug 2022 14:29:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-08-04 16:07:01 -0400, Robert Haas wrote:\n> On Wed, Aug 3, 2022 at 7:15 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Another version of the patch which closes the smgr at the end using\n> > smgrcloserellocator() and I have also added a commit message.\n> \n> I have reviewed this patch and I don't see a problem with it. However,\n> it would be nice if Andres or someone else who understands this area\n> well (Tom? Thomas?) would also review it, because I also reviewed\n> what's in the tree now and that turns out to be buggy, which leads me\n> to conclude that I don't understand this area as well as would be\n> desirable.\n\nI don't think this issue is something I'd have caught \"originally\"\neither. It's probably more of a \"fake relcache entry\" issue (or undocumented\nlimitation) than a bug in the new code.\n\nI did a quick review upthread - some minor quibbles aside, I think it looks\ngood.\n\n\n> I'm inclined to hold off on committing this until next week, not only\n> for that reason, but also because there's a wrap planned on Monday,\n> and committing anything now seems like it might have too much of a\n> risk of upsetting that plan.\n\nMakes sense. 
Unlikely to be a blocker for anybody.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Aug 2022 14:32:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-08-04 16:38:35 +0530, Dilip Kumar wrote:\n> So basically, from this we can say it is completely a problem with\n> drop databases, I mean I can produce any behavior by interrupting drop\n> database\n> 1. If we created some tables/inserted data and the drop database got\n> cancelled, it might have a database directory and those objects are\n> lost.\n> 2. Or you can even drop the database directory and then get cancelled\n> before deleting the pg_database entry then also you will end up with a\n> corrupted database, doesn't matter whether you created it with WAL LOG\n> or FILE COPY.\n\nYea. I think at the very least we need to start holding interrupts before the\nDropDatabaseBuffers() and do so until commit. That's probably best done by\ndoing the transaction commit inside dropdb.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Aug 2022 14:46:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I have reviewed this patch and I don't see a problem with it. However,\n> it would be nice if Andres or someone else who understands this area\n> well (Tom? Thomas?) would also review it, because I also reviewed\n> what's in the tree now and that turns out to be buggy, which leads me\n> to conclude that I don't understand this area as well as would be\n> desirable.\n\nFWIW, I approve of getting rid of the use of CreateFakeRelcacheEntry\nhere, because I do not think that mechanism is meant to be used\noutside of WAL replay. 
However, this patch fails to remove it from\nCreateAndCopyRelationData, which seems likely to be just as much\nat risk.\n\nThe \"invalidation\" comment bothered me for awhile, but I think it's\nfine: we know that no other backend can connect to the source DB\nbecause we have it locked, and we know that no other backend can\nconnect to the destination DB because it doesn't exist yet according\nto the catalogs, so nothing could possibly occur to invalidate our\nidea of where the physical files are. It would be nice to document\nthese assumptions, though, rather than merely remove all the relevant\ncommentary.\n\nWhile I'm at it, I would like to strenuously object to the current\nframing of CreateAndCopyRelationData as a general-purpose copying\nmechanism. Because of the above assumptions, I think it's utterly\nunsafe to use anywhere except in CREATE DATABASE. The header comment\nfails to warn about that at all, and placing it in bufmgr.c rather\nthan static in dbcommands.c is just an invitation to future misuse.\nPerhaps I'm overly sensitive to that because I just finished cleaning\nup somebody's misuse of non-general-purpose code (1aa8dad41), but\nas this stands I think it's positively dangerous.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Aug 2022 18:02:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Yea. I think at the very least we need to start holding interrupts before the\n> DropDatabaseBuffers() and do so until commit. That's probably best done by\n> doing the transaction commit inside dropdb.\n\nWe've talked before about ignoring interrupts across commit, but\nI find the idea a bit scary. 
In any case, DROP DATABASE is far\nfrom the only place with a problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Aug 2022 18:05:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Aug 04, 2022 at 06:02:50PM -0400, Tom Lane wrote:\n> The \"invalidation\" comment bothered me for awhile, but I think it's\n> fine: we know that no other backend can connect to the source DB\n> because we have it locked,\n\nAbout that - is it any problem that the currently-connected db can be used as a\ntemplate? It's no issue for 2-phase commit, because \"create database\" cannot\nrun in an txn.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 4 Aug 2022 17:16:04 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-08-04 18:05:25 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Yea. I think at the very least we need to start holding interrupts before the\n> > DropDatabaseBuffers() and do so until commit. That's probably best done by\n> > doing the transaction commit inside dropdb.\n> \n> We've talked before about ignoring interrupts across commit, but\n> I find the idea a bit scary.\n\nI'm not actually suggesting to do so across commit, just until the\ncommit. Maintaining that seems easiest if dropdb() does the commit internally.\n\n\n> In any case, DROP DATABASE is far from the only place with a problem.\n\nWhat other place has a database corrupting potential of this magnitude just\nbecause interrupts are accepted? We throw valid s_b contents away and then\naccept interrupts before committing - with predictable results. 
We also accept\ninterrupts as part of deleting the db data dir (due to catalog access).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Aug 2022 15:51:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "I wrote:\n> While I'm at it, I would like to strenuously object to the current\n> framing of CreateAndCopyRelationData as a general-purpose copying\n> mechanism.\n\nAnd while I'm piling on, how is this bit in RelationCopyStorageUsingBuffer\nnot completely broken?\n\n /* Read block from source relation. */\n srcBuf = ReadBufferWithoutRelcache(src->rd_locator, forkNum, blkno,\n RBM_NORMAL, bstrategy_src,\n permanent);\n srcPage = BufferGetPage(srcBuf);\n if (PageIsNew(srcPage) || PageIsEmpty(srcPage))\n {\n ReleaseBuffer(srcBuf);\n continue;\n }\n\n /* Use P_NEW to extend the destination relation. */\n dstBuf = ReadBufferWithoutRelcache(dst->rd_locator, forkNum, P_NEW,\n RBM_NORMAL, bstrategy_dst,\n permanent);\n\nYou can't skip pages just because they are empty. Well, maybe you could\nif you were doing something to ensure that you zero-fill the corresponding\nblocks on the destination side. But this isn't doing that. It's using\nP_NEW for dstBuf, which will have the effect of silently collapsing out\nsuch pages. Maybe in isolation a heap could withstand that, but its\nindexes won't be happy (and I guess t_ctid chain links won't either).\n\nI think you should just lose the if() stanza. 
There's no optimization to\nbe had here that's worth any extra complication.\n\n(This seems worth fixing before beta3, as it looks like a rather\nnasty data corruption hazard.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Aug 2022 19:01:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "I wrote:\n> And while I'm piling on, how is this bit in RelationCopyStorageUsingBuffer\n> not completely broken?\n\n[pile^2] Also, what is the rationale for locking the target buffer\nbut not the source buffer? That seems pretty hard to justify from\nhere, even granting the assumption that we don't expect any other\nprocesses to be interested in these buffers (which I don't grant,\nbecause checkpointer).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Aug 2022 19:11:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-08-04 19:01:06 -0400, Tom Lane wrote:\n> And while I'm piling on, how is this bit in RelationCopyStorageUsingBuffer\n> not completely broken?\n> \n> /* Read block from source relation. */\n> srcBuf = ReadBufferWithoutRelcache(src->rd_locator, forkNum, blkno,\n> RBM_NORMAL, bstrategy_src,\n> permanent);\n> srcPage = BufferGetPage(srcBuf);\n> if (PageIsNew(srcPage) || PageIsEmpty(srcPage))\n> {\n> ReleaseBuffer(srcBuf);\n> continue;\n> }\n> \n> /* Use P_NEW to extend the destination relation. */\n> dstBuf = ReadBufferWithoutRelcache(dst->rd_locator, forkNum, P_NEW,\n> RBM_NORMAL, bstrategy_dst,\n> permanent);\n> \n> You can't skip pages just because they are empty. Well, maybe you could\n> if you were doing something to ensure that you zero-fill the corresponding\n> blocks on the destination side. But this isn't doing that. 
It's using\n> P_NEW for dstBuf, which will have the effect of silently collapsing out\n> such pages. Maybe in isolation a heap could withstand that, but its\n> indexes won't be happy (and I guess t_ctid chain links won't either).\n> \n> I think you should just lose the if() stanza. There's no optimization to\n> be had here that's worth any extra complication.\n> \n> (This seems worth fixing before beta3, as it looks like a rather\n> nasty data corruption hazard.)\n\nUgh, yes. And even with this fixed I think this should grow at least an\nassertion that the block numbers match, probably even an elog.\n\nGreetings,\n\nAndres\n\n\n", "msg_date": "Thu, 4 Aug 2022 16:12:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-04 18:05:25 -0400, Tom Lane wrote:\n>> In any case, DROP DATABASE is far from the only place with a problem.\n\n> What other place has a database corrupting potential of this magnitude just\n> because interrupts are accepted? We throw valid s_b contents away and then\n> accept interrupts before committing - with predictable results. We also accept\n> interrupts as part of deleting the db data dir (due to catalog access).\n\nThose things would be better handled by moving the data-discarding\nsteps to post-commit. 
Maybe that argues for having an internal\ncommit halfway through DROP DATABASE: remove pg_database row,\ncommit, start new transaction, clean up.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Aug 2022 19:14:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-04 19:01:06 -0400, Tom Lane wrote:\n>> (This seems worth fixing before beta3, as it looks like a rather\n>> nasty data corruption hazard.)\n\n> Ugh, yes. And even with this fixed I think this should grow at least an\n> assertion that the block numbers match, probably even an elog.\n\nYeah, the assumption that P_NEW would automatically match the source block\nwas making me itchy too. An explicit test-and-elog seems worth the\ncycles.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Aug 2022 19:20:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi, \n\nOn August 4, 2022 4:11:13 PM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>I wrote:\n>> And while I'm piling on, how is this bit in RelationCopyStorageUsingBuffer\n>> not completely broken?\n>\n>[pile^2] Also, what is the rationale for locking the target buffer\n>but not the source buffer? That seems pretty hard to justify from\n>here, even granting the assumption that we don't expect any other\n>processes to be interested in these buffers (which I don't grant,\n>because checkpointer).\n\nI'm not arguing it's good or should stay that way, but it's probably okayish that checkpointer / bgwriter have access, given that they will never modify buffers. They just take a lock to prevent concurrent modifications, which RelationCopyStorageUsingBuffer hopefully doesn't do. \n\nAndres\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Thu, 04 Aug 2022 16:59:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi, \n\nOn August 4, 2022 4:20:16 PM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Yeah, the assumption that P_NEW would automatically match the source block\n>was making me itchy too. An explicit test-and-elog seems worth the\n>cycles.\n\nIs there a good reason to rely on P_NEW at all? Both from an efficiency and robustness POV it seems like it'd be better to use smgrextend to bulk extend just after smgrcreate() and then fill s_b u using RBM_ZERO (or whatever it is called). That bulk smgrextend would later be a good point to use fallocate so the FS can immediately size the file correctly, without a lot of pointless writes of zeroes. \n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Thu, 04 Aug 2022 17:06:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On August 4, 2022 4:11:13 PM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> [pile^2] Also, what is the rationale for locking the target buffer\n>> but not the source buffer? That seems pretty hard to justify from\n>> here, even granting the assumption that we don't expect any other\n>> processes to be interested in these buffers (which I don't grant,\n>> because checkpointer).\n\n> I'm not arguing it's good or should stay that way, but it's probably okayish that checkpointer / bgwriter have access, given that they will never modify buffers. They just take a lock to prevent concurrent modifications, which RelationCopyStorageUsingBuffer hopefully doesn't do. 
\n\nI'm not arguing that it's actively broken today --- but AFAIR,\nevery other access to a shared buffer takes a buffer lock.\nIt does not seem to me to be very future-proof for this code to\ndecide it's exempt from that rule, without so much as a comment\njustifying it. Furthermore, what's the gain? We aren't expecting\ncontention here, I think. If we were, then it probably *would* be\nactively broken.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Aug 2022 20:21:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Is there a good reason to rely on P_NEW at all? Both from an efficiency\n> and robustness POV it seems like it'd be better to use smgrextend to\n> bulk extend just after smgrcreate() and then fill s_b u using RBM_ZERO\n> (or whatever it is called). That bulk smgrextend would later be a good\n> point to use fallocate so the FS can immediately size the file\n> correctly, without a lot of pointless writes of zeroes.\n\nHmm ... makes sense. We'd be using smgrextend to write just the last page\nof the file, relying on its API spec \"Note that we assume writing a block\nbeyond current EOF causes intervening file space to become filled with\nzeroes\". I don't know that we're using that assumption aggressively\ntoday, but as long as it doesn't confuse the kernel it'd probably be fine.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Aug 2022 20:32:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Aug 4, 2022 at 6:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I have reviewed this patch and I don't see a problem with it. 
However,\n> > it would be nice if Andres or someone else who understands this area\n> > well (Tom? Thomas?) would also review it, because I also reviewed\n> > what's in the tree now and that turns out to be buggy, which leads me\n> > to conclude that I don't understand this area as well as would be\n> > desirable.\n>\n> FWIW, I approve of getting rid of the use of CreateFakeRelcacheEntry\n> here, because I do not think that mechanism is meant to be used\n> outside of WAL replay. However, this patch fails to remove it from\n> CreateAndCopyRelationData, which seems likely to be just as much\n> at risk.\n\nIt looks to me like it does?\n\n> The \"invalidation\" comment bothered me for awhile, but I think it's\n> fine: we know that no other backend can connect to the source DB\n> because we have it locked, and we know that no other backend can\n> connect to the destination DB because it doesn't exist yet according\n> to the catalogs, so nothing could possibly occur to invalidate our\n> idea of where the physical files are. It would be nice to document\n> these assumptions, though, rather than merely remove all the relevant\n> commentary.\n\nI don't think that's the point. We could always suffer a sinval reset\nor a PROCSIGNAL_BARRIER_SMGRRELEASE. But since the code avoids ever\nreusing the smgr, it should be OK. I think.\n\n> While I'm at it, I would like to strenuously object to the current\n> framing of CreateAndCopyRelationData as a general-purpose copying\n> mechanism. Because of the above assumptions, I think it's utterly\n> unsafe to use anywhere except in CREATE DATABASE. The header comment\n> fails to warn about that at all, and placing it in bufmgr.c rather\n> than static in dbcommands.c is just an invitation to future misuse.\n> Perhaps I'm overly sensitive to that because I just finished cleaning\n> up somebody's misuse of non-general-purpose code (1aa8dad41), but\n> as this stands I think it's positively dangerous.\n\nOK. 
No objection to you revising the comments however you feel is appropriate.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Aug 2022 22:24:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Aug 4, 2022 at 7:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> And while I'm piling on, how is this bit in RelationCopyStorageUsingBuffer\n> not completely broken?\n\nOuch. That's pretty bad.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Aug 2022 22:26:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Aug 4, 2022 at 7:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> [pile^2] Also, what is the rationale for locking the target buffer\n> but not the source buffer? That seems pretty hard to justify from\n> here, even granting the assumption that we don't expect any other\n> processes to be interested in these buffers (which I don't grant,\n> because checkpointer).\n\nOoph. I agree.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Aug 2022 22:27:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Aug 5, 2022 at 4:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > While I'm at it, I would like to strenuously object to the current\n> > framing of CreateAndCopyRelationData as a general-purpose copying\n> > mechanism.\n>\n> And while I'm piling on, how is this bit in RelationCopyStorageUsingBuffer\n> not completely broken?\n>\n> /* Read block from source relation. 
*/\n> srcBuf = ReadBufferWithoutRelcache(src->rd_locator, forkNum, blkno,\n> RBM_NORMAL, bstrategy_src,\n> permanent);\n> srcPage = BufferGetPage(srcBuf);\n> if (PageIsNew(srcPage) || PageIsEmpty(srcPage))\n> {\n> ReleaseBuffer(srcBuf);\n> continue;\n> }\n>\n> /* Use P_NEW to extend the destination relation. */\n> dstBuf = ReadBufferWithoutRelcache(dst->rd_locator, forkNum, P_NEW,\n> RBM_NORMAL, bstrategy_dst,\n> permanent);\n>\n> You can't skip pages just because they are empty. Well, maybe you could\n> if you were doing something to ensure that you zero-fill the corresponding\n> blocks on the destination side. But this isn't doing that. It's using\n> P_NEW for dstBuf, which will have the effect of silently collapsing out\n> such pages. Maybe in isolation a heap could withstand that, but its\n> indexes won't be happy (and I guess t_ctid chain links won't either).\n>\n> I think you should just lose the if() stanza. There's no optimization to\n> be had here that's worth any extra complication.\n>\n> (This seems worth fixing before beta3, as it looks like a rather\n> nasty data corruption hazard.)\n\nYeah this is broken.\n\n--\nDilip\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Aug 2022 09:35:20 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Aug 5, 2022 at 5:36 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On August 4, 2022 4:20:16 PM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >Yeah, the assumption that P_NEW would automatically match the source block\n> >was making me itchy too. 
An explicit test-and-elog seems worth the\n> >cycles.\n>\n> Is there a good reason to rely on P_NEW at all?\n\nI think there were 2 arguments for which we used bufmgr instead of\nsmgrextend for the destination database\n\n1) (Comment from Andres) The big benefit would be that the *target*\ndatabase does not have to be written out / shared buffer is\nimmediately populated. [1]\n2) (Comment from Robert) We wanted to avoid writing new code which\nbypasses the shared buffers.\n\n[1]https://www.postgresql.org/message-id/20210905202800.ji4fnfs3xzhvo7l6%40alap3.anarazel.de\n\n Both from an efficiency and robustness POV it seems like it'd be\nbetter to use smgrextend to bulk extend just after smgrcreate() and\nthen fill s_b u using RBM_ZERO (or whatever it is called). That bulk\nsmgrextend would later be a good point to use fallocate so the FS can\nimmediately size the file correctly, without a lot of pointless writes\nof zeroes.\n\nYeah okay, so you mean since we already know the nblocks in the source\nfile so we can directly do smgrextend in bulk before the copy loop and\nthen we can just copy block by block using bufmgr with proper blkno\ninstead of P_NEW. 
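As an illustration, the "bulk extend, then copy into explicit block numbers" flow described above can be modelled with ordinary files. This is a simplified sketch, not PostgreSQL's smgr/bufmgr API; `copy_relation`, `BLCKSZ`, and the file layout are made up for the example:

```python
# Illustrative sketch only -- NOT PostgreSQL code. It models, with plain
# files, the idea discussed above: pre-extend the destination to the
# source's size by writing just the last block (POSIX fills the gap with
# zeroes, as smgrextend's API comment assumes), then copy each block to
# the SAME block number instead of appending (the P_NEW-style append
# collapsed empty blocks and shifted everything after them).
import os
import tempfile

BLCKSZ = 8192

def copy_relation(src_path, dst_path):
    nblocks = os.path.getsize(src_path) // BLCKSZ
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        if nblocks > 0:
            # "Bulk extend": write only the final block at its offset;
            # the intervening space reads back as zeroes.
            dst.seek((nblocks - 1) * BLCKSZ)
            dst.write(b"\0" * BLCKSZ)
        for blkno in range(nblocks):
            src.seek(blkno * BLCKSZ)
            page = src.read(BLCKSZ)
            if page == b"\0" * BLCKSZ:
                continue              # safe: the hole is already zero-filled
            dst.seek(blkno * BLCKSZ)  # explicit block number, not append
            dst.write(page)

# Build a 3-block "relation" whose middle block is empty.
tmp = tempfile.mkdtemp()
src_path = os.path.join(tmp, "src")
dst_path = os.path.join(tmp, "dst")
pages = [b"A" * BLCKSZ, b"\0" * BLCKSZ, b"C" * BLCKSZ]
with open(src_path, "wb") as f:
    for p in pages:
        f.write(p)

copy_relation(src_path, dst_path)

with open(dst_path, "rb") as f:
    copied = [f.read(BLCKSZ) for _ in range(3)]

assert os.path.getsize(dst_path) == 3 * BLCKSZ  # no blocks collapsed
assert copied == pages                          # block numbers preserved
```

In this model, skipping an all-zeros source page is harmless only because the destination was pre-extended to the full size; with append-style writes the same skip would silently shift every later block.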
Yeah I think this looks optimized to me and this\nwill take care of the above 2 points I mentioned that we will still\nhave the target database pages in the shared buffers and we are not\nbypassing the shared buffers also.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Aug 2022 10:22:35 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Aug 5, 2022 at 2:59 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-08-03 16:45:23 +0530, Dilip Kumar wrote:\n> > Another version of the patch which closes the smgr at the end using\n> > smgrcloserellocator() and I have also added a commit message.\n>\n> What's the motivation behind the explicit close?\n>\n>\n> > @@ -258,8 +258,8 @@ ScanSourceDatabasePgClass(Oid tbid, Oid dbid, char *srcpath)\n> > Page page;\n> > List *rlocatorlist = NIL;\n> > LockRelId relid;\n> > - Relation rel;\n> > Snapshot snapshot;\n> > + SMgrRelation smgr;\n> > BufferAccessStrategy bstrategy;\n> >\n> > /* Get pg_class relfilenumber. */\n> > @@ -276,16 +276,9 @@ ScanSourceDatabasePgClass(Oid tbid, Oid dbid, char *srcpath)\n> > rlocator.dbOid = dbid;\n> > rlocator.relNumber = relfilenumber;\n> >\n> > - /*\n> > - * We can't use a real relcache entry for a relation in some other\n> > - * database, but since we're only going to access the fields related to\n> > - * physical storage, a fake one is good enough. If we didn't do this and\n> > - * used the smgr layer directly, we would have to worry about\n> > - * invalidations.\n> > - */\n> > - rel = CreateFakeRelcacheEntry(rlocator);\n> > - nblocks = smgrnblocks(RelationGetSmgr(rel), MAIN_FORKNUM);\n> > - FreeFakeRelcacheEntry(rel);\n> > + smgr = smgropen(rlocator, InvalidBackendId);\n> > + nblocks = smgrnblocks(smgr, MAIN_FORKNUM);\n> > + smgrclose(smgr);\n>\n> Why are you opening and then closing again? 
Part of the motivation for the\n> question is that a local SMgrRelation variable may lead to it being used\n> further, opening up interrupt processing issues.\n\nYeah okay, I think there is no reason to close, in the previous\nversion I had like below and I think that's a better idea.\n\nnblocks = smgrnblocks(smgropen(rlocator, InvalidBackendId), MAIN_FORKNUM)\n\n>\n> > + rlocator.locator = src_rlocator;\n> > + smgrcloserellocator(rlocator);\n> > +\n> > + rlocator.locator = dst_rlocator;\n> > + smgrcloserellocator(rlocator);\n>\n> As mentioned above, it's not clear to me why this is now done...\n>\n> Otherwise looks good to me.\n\nYeah maybe it is not necessary to close as these unowned smgr will\nautomatically get closed on the transaction end. Actually the\nprevious person of the patch had both these comments fixed. The\nreason for explicitly closing it is that I have noticed that most of\nthe places we explicitly close the smgr where we do smgropen e.g.\nindex_copy_data(), heapam_relation_copy_data() OTOH some places we\ndon't close it e.g. IssuePendingWritebacks(). So now I think that in\nour case better we do not close it because I do not like this specific\ncode at the end to close the smgr.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Aug 2022 10:43:46 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Aug 5, 2022 at 10:43 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> Yeah maybe it is not necessary to close as these unowned smgr will\n> automatically get closed on the transaction end. Actually the\n> previous person of the patch had both these comments fixed. 
The\n> reason for explicitly closing it is that I have noticed that most of\n> the places we explicitly close the smgr where we do smgropen e.g.\n> index_copy_data(), heapam_relation_copy_data() OTOH some places we\n> don't close it e.g. IssuePendingWritebacks(). So now I think that in\n> our case better we do not close it because I do not like this specific\n> code at the end to close the smgr.\n\nPFA patches for different problems discussed in the thread\n\n0001 - Fix the problem of skipping the empty block and buffer lock on\nsource buffer\n0002 - Remove fake relcache entry (same as 0001-BugfixInWalLogCreateDB.patch)\n0003 - Optimization to avoid extending block by block\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 5 Aug 2022 12:32:39 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-08-04 19:14:08 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-08-04 18:05:25 -0400, Tom Lane wrote:\n> >> In any case, DROP DATABASE is far from the only place with a problem.\n> \n> > What other place has a database corrupting potential of this magnitude just\n> > because interrupts are accepted? We throw valid s_b contents away and then\n> > accept interrupts before committing - with predictable results. We also accept\n> > interrupts as part of deleting the db data dir (due to catalog access).\n> \n> Those things would be better handled by moving the data-discarding\n> steps to post-commit. Maybe that argues for having an internal\n> commit halfway through DROP DATABASE: remove pg_database row,\n> commit, start new transaction, clean up.\n\nThat'd still require holding interrupts, I think. 
We shouldn't accept\ninterrupts until the on-disk data is actually deleted.\n\n\nIn theory I think we should have a pg_database column indicating whether the\ndatabase is valid or not. For database creation, insert the pg_database row\nwith valid=false, commit, then do the filesystem operation, then mark as\nvalid, commit. For database drop, mark as invalid, commit, remove filesystem\nstuff, delete row, commit. With dropdb allowed against an invalid database,\nbut obviously nothing else. But clearly this isn't a short term /\nbackpatchable thing.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 5 Aug 2022 13:41:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Dilip Kumar <dilipbalaut@gmail.com> writes:\n> PFA patches for different problems discussed in the thread\n\n> 0001 - Fix the problem of skipping the empty block and buffer lock on\n> source buffer\n> 0002 - Remove fake relcache entry (same as 0001-BugfixInWalLogCreateDB.patch)\n> 0003 - Optimization to avoid extending block by block\n\nI pushed 0001, because it seems fairly critical to get that in before\nbeta3. The others can stand more leisurely discussion.\n\nI note from\nhttps://coverage.postgresql.org/src/backend/storage/buffer/bufmgr.c.gcov.html\nthat the block-skipping path is actually taken in our tests (this won't be\nvisible there for very much longer of course). So we actually *are*\nmaking a corrupt copy, and we haven't noticed. This is perhaps not too\nsurprising, because the only test case that I can find is in\n020_createdb.pl:\n\n$node->issues_sql_like(\n\t[ 'createdb', '-T', 'foobar2', '-S', 'wal_log', 'foobar6' ],\n\tqr/statement: CREATE DATABASE foobar6 STRATEGY wal_log TEMPLATE foobar2/,\n\t'create database with WAL_LOG strategy');\n\nwhich is, um, not exactly a robust test of whether anything happened\nat all, let alone whether it was correct. 
I'm not real sure that\nthis test would even notice if the CREATE reported failure.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Aug 2022 11:59:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Dilip Kumar <dilipbalaut@gmail.com> writes:\n> On Fri, Aug 5, 2022 at 10:43 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>> Yeah maybe it is not necessary to close as these unowned smgr will\n>> automatically get closed on the transaction end.\n\nI do not think this is a great idea for the per-relation smgrs created\nduring RelationCopyStorageUsingBuffer. Yeah, they'll be mopped up at\ntransaction end, but that doesn't mean that creating possibly tens of\nthousands of transient smgrs isn't going to cause performance issues.\n\nI think RelationCopyStorageUsingBuffer needs to open and then close\nthe smgrs it uses, which means that ReadBufferWithoutRelcache is not the\nappropriate API for it to use, either; need to go down another level.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Aug 2022 12:06:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Sat, Aug 6, 2022 at 9:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Dilip Kumar <dilipbalaut@gmail.com> writes:\n> > On Fri, Aug 5, 2022 at 10:43 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >> Yeah maybe it is not necessary to close as these unowned smgr will\n> >> automatically get closed on the transaction end.\n>\n> I do not think this is a great idea for the per-relation smgrs created\n> during RelationCopyStorageUsingBuffer. 
Yeah, they'll be mopped up at\n> transaction end, but that doesn't mean that creating possibly tens of\n> thousands of transient smgrs isn't going to cause performance issues.\n\nOkay, so for that we can simply call smgrcloserellocator(rlocator);\nbefore exiting the RelationCopyStorageUsingBuffer() right?\n\n> I think RelationCopyStorageUsingBuffer needs to open and then close\n> the smgrs it uses, which means that ReadBufferWithoutRelcache is not the\n> appropriate API for it to use, either; need to go down another level.\n\nNot sure how going down another level would help, the whole point is\nthat we don't want to keep the reference of the smgr for a long time\nespecially in the loop which is interruptible. So everytime we need\nsmgr we can call smgropen and if it is already in the smgr cache then\nwe will get it from there. So I think it makes sense that when we are\nexiting the function that time we can just call smgrcloserellocator()\nso that if it is opened it will be closed and otherwise it will do\nnothing.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 7 Aug 2022 09:24:40 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-08-07 09:24:40 +0530, Dilip Kumar wrote:\n> On Sat, Aug 6, 2022 at 9:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Dilip Kumar <dilipbalaut@gmail.com> writes:\n> > > On Fri, Aug 5, 2022 at 10:43 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >> Yeah maybe it is not necessary to close as these unowned smgr will\n> > >> automatically get closed on the transaction end.\n> >\n> > I do not think this is a great idea for the per-relation smgrs created\n> > during RelationCopyStorageUsingBuffer. 
Yeah, they'll be mopped up at\n> > transaction end, but that doesn't mean that creating possibly tens of\n> > thousands of transient smgrs isn't going to cause performance issues.\n\nI was assuming that the files would get reopened at the end of the transaction\nanyway, but it looks like that's not the case, unless wal_level=minimal.\n\nHm. CreateAndCopyRelationData() calls RelationCreateStorage() with\nregister_delete = false, which is ok because createdb_failure_callback will\nclean things up. But that's another thing that's not great for a routine with\na general name...\n\n\n> Okay, so for that we can simply call smgrcloserellocator(rlocator);\n> before exiting the RelationCopyStorageUsingBuffer() right?\n\nYea, I think so.\n\n\n> > I think RelationCopyStorageUsingBuffer needs to open and then close\n> > the smgrs it uses, which means that ReadBufferWithoutRelcache is not the\n> > appropriate API for it to use, either; need to go down another level.\n> \n> Not sure how going down another level would help, the whole point is\n> that we don't want to keep the reference of the smgr for a long time\n> especially in the loop which is interruptible.\n\nYea, I'm not following either.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 6 Aug 2022 21:17:09 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Sun, Aug 7, 2022 at 9:47 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-08-07 09:24:40 +0530, Dilip Kumar wrote:\n> > On Sat, Aug 6, 2022 at 9:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Dilip Kumar <dilipbalaut@gmail.com> writes:\n> > > > On Fri, Aug 5, 2022 at 10:43 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >> Yeah maybe it is not necessary to close as these unowned smgr will\n> > > >> automatically get closed on the transaction end.\n> > >\n> > > I do not think this is a great 
idea for the per-relation smgrs created\n> > > during RelationCopyStorageUsingBuffer. Yeah, they'll be mopped up at\n> > > transaction end, but that doesn't mean that creating possibly tens of\n> > > thousands of transient smgrs isn't going to cause performance issues.\n>\n> I was assuming that the files would get reopened at the end of the transaction\n> anyway, but it looks like that's not the case, unless wal_level=minimal.\n>\n> Hm. CreateAndCopyRelationData() calls RelationCreateStorage() with\n> register_delete = false, which is ok because createdb_failure_callback will\n> clean things up. But that's another thing that's not great for a routine with\n> a general name...\n>\n>\n> > Okay, so for that we can simply call smgrcloserellocator(rlocator);\n> > before exiting the RelationCopyStorageUsingBuffer() right?\n>\n> Yea, I think so.\n\nDone, along with that, I have also got the hunk of smgropen and\nsmgrclose in ScanSourceDatabasePgClass() which I had in v1 patch[1].\nBecause here we do not want to reuse the smgr of the pg_class again so\ninstead of closing at the end with smgrcloserellocator() we can just\nkeep the smgr reference and close immediately after getting the number\nof blocks. 
Whereas in CreateAndCopyRelationData and\nRelationCopyStorageUsingBuffer() we are using the smgr of the source\nand dest relation multiple time so it make sense to not close it\nimmediately and we can close while exiting the function with\nsmgrcloserellocator().\n\n[1]\n+ smgr = smgropen(rlocator, InvalidBackendId);\n+ nblocks = smgrnblocks(smgr, MAIN_FORKNUM);\n+ smgrclose(smgr);\n\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 10 Aug 2022 10:31:27 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Aug 10, 2022 at 1:01 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Done, along with that, I have also got the hunk of smgropen and\n> smgrclose in ScanSourceDatabasePgClass() which I had in v1 patch[1].\n> Because here we do not want to reuse the smgr of the pg_class again so\n> instead of closing at the end with smgrcloserellocator() we can just\n> keep the smgr reference and close immediately after getting the number\n> of blocks. 
Whereas in CreateAndCopyRelationData and\n> RelationCopyStorageUsingBuffer() we are using the smgr of the source\n> and dest relation multiple time so it make sense to not close it\n> immediately and we can close while exiting the function with\n> smgrcloserellocator().\n\nAs far as I know, this 0001 addresses all outstanding comments and\nfixes the reported bug.\n\nDoes anyone think otherwise?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Aug 2022 14:15:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Thu, Aug 11, 2022 at 2:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> As far as I know, this 0001 addresses all outstanding comments and\n> fixes the reported bug.\n>\n> Does anyone think otherwise?\n\nIf they do, they're keeping quiet, so I committed this and\nback-patched it to v15.\n\nRegarding 0002 -- should it, perhaps, use PGAlignedBlock?\n\nAlthough 0002 is formally a performance optimization, I'm inclined to\nthink that if we're going to commit it, it should also be back-patched\ninto v15, because letting the code diverge when we're not even out of\nbeta yet seems painful.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 12 Aug 2022 09:03:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Aug 12, 2022 at 6:33 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Aug 11, 2022 at 2:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > As far as I know, this 0001 addresses all outstanding comments and\n> > fixes the reported bug.\n> >\n> > Does anyone think otherwise?\n>\n> If they do, they're keeping quiet, so I committed this and\n> back-patched it to v15.\n>\n> Regarding 0002 -- should it, perhaps, 
use PGAlignedBlock?\n\nYes we can do that, although here we are not using this buffer\ndirectly as a \"Page\" so we do not have any real alignment issue but I\ndo not see any problem in using PGAlignedBlock so change that.\n\n> Although 0002 is formally a performance optimization, I'm inclined to\n> think that if we're going to commit it, it should also be back-patched\n> into v15, because letting the code diverge when we're not even out of\n> beta yet seems painful.\n\nYeah that makes sense to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 17 Aug 2022 09:32:07 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Wed, Aug 17, 2022 at 12:02 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Although 0002 is formally a performance optimization, I'm inclined to\n> > think that if we're going to commit it, it should also be back-patched\n> > into v15, because letting the code diverge when we're not even out of\n> > beta yet seems painful.\n>\n> Yeah that makes sense to me.\n\nOK, done.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Aug 2022 11:34:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Tue, Aug 02, 2022 at 12:50:43PM -0500, Justin Pryzby wrote:\n> Also, if I understand correctly, this patch seems to assume that nobody is\n> connected to the source database. But what's actually enforced is just that\n> nobody *else* is connected. Is it any issue that the current DB can be used as\n> a source? 
Anyway, both of the above problems are reproducible using a\n> different database.\n> \n> |postgres=# CREATE DATABASE new TEMPLATE postgres STRATEGY wal_log;\n> |CREATE DATABASE\n\nOn Thu, Aug 04, 2022 at 05:16:04PM -0500, Justin Pryzby wrote:\n> On Thu, Aug 04, 2022 at 06:02:50PM -0400, Tom Lane wrote:\n> > The \"invalidation\" comment bothered me for awhile, but I think it's\n> > fine: we know that no other backend can connect to the source DB\n> > because we have it locked,\n> \n> About that - is it any problem that the currently-connected db can be used as a\n> template? It's no issue for 2-phase commit, because \"create database\" cannot\n> run in an txn.\n\nWould anybody want to comment on this ?\nIs it okay that the *current* DB can be used as a template ?\n\n\n", "msg_date": "Fri, 2 Sep 2022 06:55:52 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "On Fri, Sep 2, 2022 at 5:25 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Aug 02, 2022 at 12:50:43PM -0500, Justin Pryzby wrote:\n> > Also, if I understand correctly, this patch seems to assume that nobody is\n> > connected to the source database. But what's actually enforced is just that\n> > nobody *else* is connected. Is it any issue that the current DB can be used as\n> > a source? 
Anyway, both of the above problems are reproducible using a\n> > different database.\n> >\n> > |postgres=# CREATE DATABASE new TEMPLATE postgres STRATEGY wal_log;\n> > |CREATE DATABASE\n>\n> On Thu, Aug 04, 2022 at 05:16:04PM -0500, Justin Pryzby wrote:\n> > On Thu, Aug 04, 2022 at 06:02:50PM -0400, Tom Lane wrote:\n> > > The \"invalidation\" comment bothered me for awhile, but I think it's\n> > > fine: we know that no other backend can connect to the source DB\n> > > because we have it locked,\n> >\n> > About that - is it any problem that the currently-connected db can be used as a\n> > template? It's no issue for 2-phase commit, because \"create database\" cannot\n> > run in an txn.\n>\n> Would anybody want to comment on this ?\n> Is it okay that the *current* DB can be used as a template ?\n\nI don't think there should be any problem with that. The main problem\ncould have been that since we are reading the pg_class tuple block by\nblock there could be an issue if someone concurrently modifies the\npg_class or there are some tuples that are inserted by the prepared\ntransaction. 
But in this case, the same backend can not have an open\nprepared transaction while creating a database and that backend of\ncourse can not perform any parallel operation as well.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 2 Sep 2022 19:51:31 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" }, { "msg_contents": "Hi.\r\n\r\nWhile investigating the codes in RelationCopyStorageUsingBuffer, I wonder that\r\nthere is any reason to use RBM_NORMAL for acquiring destination buffer.\r\nI think we can use RBM_ZERO_AND_LOCK here instead of RBM_NORMAL.\r\n\r\nWhen we use RBM_NORMAL, a destination block is read by smgrread and it's\r\ncontent is verified with PageIsVerifiedExtended in ReadBuffer_common.\r\nA page verification normally succeeds because destination blocks are\r\nzero-filled by using smgrextend, but do we trust that it is surely zero-filled?\r\n\r\nOn Fri, Aug 5, 2022 at 00:32 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:\r\n> Hmm ... makes sense. We'd be using smgrextend to write just the last page\r\n> of the file, relying on its API spec \"Note that we assume writing a block\r\n> beyond current EOF causes intervening file space to become filled with\r\n> zeroes\". 
I don't know that we're using that assumption aggressively\r\n> today, but as long as it doesn't confuse the kernel it'd probably be fine.\r\n\r\nIf there is a block which is not zero-filled, page verification would fail and\r\neventually CREATE DATABASE would fail.\r\n(I also think that originally we don't need page verification for destination blocks\r\nhere because those blocks are just overwritten by source buffer.)\r\n\r\nAlso, from performance POV, I think it is good to use RBM_ZERO_AND_LOCK.\r\nIn RBM_NORMAL, destination blocks are read from disk by smgrread each time, but\r\nin RBM_ZERO_AND_LOCK, we only set buffers zero-filled by MemSet and don't\r\naccess the disk which makes performance better.\r\nIf we expect the destination buffer is always zero-filled, we can use\r\nRBM_ZERO_AND_LOCK.\r\n\r\n\r\nApart from above, there seems to be an old comment which is for the old codes\r\nwhen acquiring destination buffer by using P_NEW.\r\n\r\n\"/* Use P_NEW to extend the destination relation. 
*/\"\r\n\r\n\r\n--\r\nYoshikazu Imai\r\n\r\n> -----Original Message-----\r\n> From: Dilip Kumar <dilipbalaut@gmail.com>\r\n> Sent: Friday, September 2, 2022 11:22 PM\r\n> To: Justin Pryzby <pryzby@telsasoft.com>\r\n> Cc: Robert Haas <robertmhaas@gmail.com>; Tom Lane <tgl@sss.pgh.pa.us>; Andres Freund <andres@anarazel.de>;\r\n> Ashutosh Sharma <ashu.coek88@gmail.com>; Maciek Sakrejda <m.sakrejda@gmail.com>; Bruce Momjian\r\n> <bruce@momjian.us>; Alvaro Herrera <alvherre@alvh.no-ip.org>; Andrew Dunstan <andrew@dunslane.net>; Heikki\r\n> Linnakangas <hlinnaka@iki.fi>; pgsql-hackers@lists.postgresql.org; Thomas Munro <thomas.munro@gmail.com>\r\n> Subject: Re: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints\r\n> \r\n> On Fri, Sep 2, 2022 at 5:25 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\r\n> >\r\n> > On Tue, Aug 02, 2022 at 12:50:43PM -0500, Justin Pryzby wrote:\r\n> > > Also, if I understand correctly, this patch seems to assume that\r\n> > > nobody is connected to the source database. But what's actually\r\n> > > enforced is just that nobody *else* is connected. Is it any issue\r\n> > > that the current DB can be used as a source? Anyway, both of the\r\n> > > above problems are reproducible using a different database.\r\n> > >\r\n> > > |postgres=# CREATE DATABASE new TEMPLATE postgres STRATEGY wal_log;\r\n> > > |CREATE DATABASE\r\n> >\r\n> > On Thu, Aug 04, 2022 at 05:16:04PM -0500, Justin Pryzby wrote:\r\n> > > On Thu, Aug 04, 2022 at 06:02:50PM -0400, Tom Lane wrote:\r\n> > > > The \"invalidation\" comment bothered me for awhile, but I think\r\n> > > > it's\r\n> > > > fine: we know that no other backend can connect to the source DB\r\n> > > > because we have it locked,\r\n> > >\r\n> > > About that - is it any problem that the currently-connected db can\r\n> > > be used as a template? 
It's no issue for 2-phase commit, because\r\n> > > \"create database\" cannot run in an txn.\r\n> >\r\n> > Would anybody want to comment on this ?\r\n> > Is it okay that the *current* DB can be used as a template ?\r\n> \r\n> I don't think there should be any problem with that. The main problem could have been that since we are reading the\r\n> pg_class tuple block by block there could be an issue if someone concurrently modifies the pg_class or there are some\r\n> tuples that are inserted by the prepared transaction. But in this case, the same backend can not have an open prepared\r\n> transaction while creating a database and that backend of course can not perform any parallel operation as well.\r\n> \r\n> --\r\n> Regards,\r\n> Dilip Kumar\r\n> EnterpriseDB: http://www.enterprisedb.com\r\n> \r\n\r\n", "msg_date": "Thu, 12 Jan 2023 02:15:56 +0000", "msg_from": "\"Yoshikazu Imai (Fujitsu)\" <imai.yoshikazu@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Proposal] Fully WAL logged CREATE DATABASE - No Checkpoints" } ]
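As a side note to the crash-safety discussion in the thread above, Andres's "valid flag" idea (insert the catalog row as invalid, commit, do the filesystem work, then mark it valid in a second commit) can be sketched as a toy model with simulated crash points. This is illustrative Python, not PostgreSQL code; the catalog and filesystem are just dictionaries:

```python
# Toy model of the two-commit "valid flag" protocol sketched in the
# thread: catalog state and the filesystem are updated in separate
# steps, so a crash in between must never leave a database that the
# catalog calls valid but whose files are missing.
catalog = {}   # dbname -> {"valid": bool}
files = set()  # dbnames whose data directory exists

def create_db(name, crash_after=None):
    steps = [
        lambda: catalog.update({name: {"valid": False}}),  # commit 1
        lambda: files.add(name),                           # filesystem copy
        lambda: catalog[name].update(valid=True),          # commit 2
    ]
    for i, step in enumerate(steps):
        step()
        if crash_after == i:
            return  # simulated crash

def invariant_holds():
    # Every database the catalog claims is valid must have files on disk.
    return all(name in files for name, row in catalog.items() if row["valid"])

# The invariant survives a crash at every point in the sequence.
for crash_point in (None, 0, 1, 2):
    catalog.clear(); files.clear()
    create_db("demo", crash_after=crash_point)
    assert invariant_holds(), crash_point

# Crashing right after the first commit leaves an invalid stub that only
# DROP DATABASE may touch -- the cleanup rule proposed in the thread.
catalog.clear(); files.clear()
create_db("demo", crash_after=0)
assert catalog["demo"]["valid"] is False and "demo" not in files
```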
[ { "msg_contents": "I am just quoting the whole file here for simplicity, as it's small:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/port/pg_crc32c_sse42_choose.c;h=0608e02011f7f5d8dbba7673a5ab4ba071d017a0;hb=e4f9737fac77a5cb03a84d1f4038d300ffd28afd\n\nIn line #43 the compiler errors out, if there is no cpuid.h or intrin.h available. As the code is supposed to fall back on a CRC software implementation if SSE isn't available, returning false here instead would be better.\n\nCiao\n   Nat!\n\n```\n   1 /*-------------------------------------------------------------------------\n   2  *\n   3  * pg_crc32c_sse42_choose.c\n   4  *    Choose between Intel SSE 4.2 and software CRC-32C implementation.\n   5  *\n   6  * On first call, checks if the CPU we're running on supports Intel SSE\n   7  * 4.2. If it does, use the special SSE instructions for CRC-32C\n   8  * computation. Otherwise, fall back to the pure software implementation\n   9  * (slicing-by-8).\n  10  *\n  11  * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group\n  12  * Portions Copyright (c) 1994, Regents of the University of California\n  13  *\n  14  *\n  15  * IDENTIFICATION\n  16  *    src/port/pg_crc32c_sse42_choose.c\n  17  *\n  18  *-------------------------------------------------------------------------\n  19  */\n  20 \n  21 #include \"c.h\"\n  22 \n  23 #ifdef HAVE__GET_CPUID\n  24 #include <cpuid.h>\n  25 #endif\n  26 \n  27 #ifdef HAVE__CPUID\n  28 #include <intrin.h>\n  29 #endif\n  30 \n  31 #include \"port/pg_crc32c.h\"\n  32 \n  33 static bool\n  34 pg_crc32c_sse42_available(void)\n  35 {\n  36     unsigned int exx[4] = {0, 0, 0, 0};\n  37 \n  38 #if defined(HAVE__GET_CPUID)\n  39     __get_cpuid(1, &exx[0], &exx[1], &exx[2], &exx[3]);\n  40 #elif defined(HAVE__CPUID)\n  41     __cpuid(exx, 1);\n  42 #else\n  43 #error cpuid instruction not available\n  44 #endif\n  45 \n  46     return (exx[2] & (1 << 20)) != 0;   /* SSE 4.2 */\n  47 }\n  48 \n  49 /*\n  50  * This gets called on the first call. 
It replaces the function pointer  51  * so that subsequent calls are routed directly to the chosen implementation.  52  */  53 static pg_crc32c  54 pg_comp_crc32c_choose(pg_crc32c crc, const void *data, size_t len)  55 {  56     if (pg_crc32c_sse42_available())  57         pg_comp_crc32c = pg_comp_crc32c_sse42;  58     else  59         pg_comp_crc32c = pg_comp_crc32c_sb8;  60   61     return pg_comp_crc32c(crc, data, len);  62 }  63   64 pg_crc32c   (*pg_comp_crc32c) (pg_crc32c crc, const void *data, size_t len) = pg_comp_crc32c_choose;```\n", "msg_date": "Tue, 15 Jun 2021 15:04:11 +0200", "msg_from": "Nat! <nat@mulle-kybernetik.com>", "msg_from_op": true, "msg_subject": "Less compiler errors in pg_crc32c_sse42_choose.c" }, { "msg_contents": "On Tue, Jun 15, 2021 at 03:04:11PM +0200, Nat! wrote:\n\nPlease don't send emails in html only.\n\nThose includes are protected by some #ifdef which shouldn't be set unless\nconfigure detects that that they're usable. Do you have a different behavior?\n\n\n", "msg_date": "Thu, 17 Jun 2021 17:43:30 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Less compiler errors in pg_crc32c_sse42_choose.c" } ]
[ { "msg_contents": "In the NSS thread it was discussed (20210603210642.GF22012@momjian.us etc) that\nwe use SSL rather than TLS in the documentation, which is technically somewhat\nincorrect. Consensus came to using SSL/TLS instead for referring to encrypted\nconnections. Since this isn't really limited to the NSS work, I'm breaking\nthis out into a new thread.\n\nLooking at the docs it turns out that we have a mix of SSL (with one ssl),\nSSL/TLS and TLS for referring to the same thing. The attached changes the\ndocumentation to consistently use SSL/TLS when referring to an encrypted\nconnection using a TLS protocol, leaving bare SSL and TLS only for referring to\nthe actual protocols. I *think* I found all instances, there are many so I\nmight have missed some, but this version seemed like a good place to continue\nthe discussion from the previous thread.\n\nAdmittedly it gets pretty unwieldy with the <acronym /> markup on SSL and TLS\nbut I opted for being consistent, since I don't know of any rules for when it\ncan/should be omitted (and it seems quite arbitrary right now). Mentions in\ntitles were previously not marked up so I've left those as is. 
I've also left\nline breaks as an exercise for later to make the diff more readable.\n\nWhile in there I added IMO missing items to the glossary and acronyms sections\nas well as fixed up markup around OpenSSL.\n\nThis only deals with docs, but if this is deemed interesting then userfacing\nmessages in the code should use SSL/TLS as well of course.\n\nThoughts?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Tue, 15 Jun 2021 15:59:18 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "SSL/TLS instead of SSL in docs" }, { "msg_contents": "On Tue, Jun 15, 2021 at 03:59:18PM +0200, Daniel Gustafsson wrote:\n> While in there I added IMO missing items to the glossary and acronyms sections\n> as well as fixed up markup around OpenSSL.\n> \n> This only deals with docs, but if this is deemed interesting then userfacing\n> messages in the code should use SSL/TLS as well of course.\n\n+ <term><acronym>SNI</acronym></term>\n+ <listitem>\n+ <para>\n+ <link linkend=\"libpq-connect-sslsni\">Server Name Indication</link>\n+ </para>\n+ </listitem>\nIt looks inconsistent to me to point to the libpq documentation to get\nthe details about SNI. Wouldn't it be better to have an item in the\nglossary that refers to the bits of RFC 6066, and remove the reference\nto the RFC from the libpq page?\n\n- to present a valid (trusted) SSL certificate, while\n+ to present a valid (trusted) <acronym>SSL</acronym>/<acronym>TLS</acronym> certificate, while\nThis style with two acronyms for what we want to be one thing is\nheavy. 
Could it be better to just have one single acronym called\nSSL/TLS that references both parts?\n\nPatch 0003, for the <productname> markups with OpenSSL, included one\nSSL/TLS entry.\n--\nMichael", "msg_date": "Fri, 18 Jun 2021 14:37:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SSL/TLS instead of SSL in docs" }, { "msg_contents": "> On 18 Jun 2021, at 07:37, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Jun 15, 2021 at 03:59:18PM +0200, Daniel Gustafsson wrote:\n>> While in there I added IMO missing items to the glossary and acronyms sections\n>> as well as fixed up markup around OpenSSL.\n>> \n>> This only deals with docs, but if this is deemed interesting then userfacing\n>> messages in the code should use SSL/TLS as well of course.\n> \n> + <term><acronym>SNI</acronym></term>\n> + <listitem>\n> + <para>\n> + <link linkend=\"libpq-connect-sslsni\">Server Name Indication</link>\n> + </para>\n> + </listitem>\n> It looks inconsistent to me to point to the libpq documentation to get\n> the details about SNI. Wouldn't is be better to have an item in the\n> glossary that refers to the bits of RFC 6066, and remove the reference\n> of the RPC from the libpq page?\n\nI opted for a version with SNI in the glossary but without a link to the RFC\nsince no definitions in the glossary has ulinks.\n\n> - to present a valid (trusted) SSL certificate, while\n> + to present a valid (trusted) <acronym>SSL</acronym>/<acronym>TLS</acronym> certificate, while\n> This style with two acronyms for what we want to be one thing is\n> heavy. Could it be better to just have one single acronym called\n> SSL/TLS that references both parts?\n\nMaybe, I don't know. I certainly don't prefer the way which is in the patch\nbut I also think it's the most \"correct\" way so I opted for that to start the\ndiscussion. 
If we're fine with one acronym tag for the combination then I'm\nhappy to change to that.\n\nA slightly more invasive idea would be to not have acronyms at all and instead\nmove the ones that do benefit from clarification to the glossary? ISTM that\nhaving a brief description of terms and not just the expansion is beneficial to\nthe users. That would however be for another thread, but perhaps that's worth\ndiscussing?\n\n> Patch 0003, for the <productname> markups with OpenSSL, included one\n> SSL/TLS entry.\n\nUgh, sorry, that must've been a git add -p fat-finger.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Mon, 21 Jun 2021 13:23:56 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: SSL/TLS instead of SSL in docs" }, { "msg_contents": "On Mon, Jun 21, 2021 at 01:23:56PM +0200, Daniel Gustafsson wrote:\n> On 18 Jun 2021, at 07:37, Michael Paquier <michael@paquier.xyz> wrote:\n>> It looks inconsistent to me to point to the libpq documentation to get\n>> the details about SNI. Wouldn't is be better to have an item in the\n>> glossary that refers to the bits of RFC 6066, and remove the reference\n>> of the RPC from the libpq page?\n> \n> I opted for a version with SNI in the glossary but without a link to the RFC\n> since no definitions in the glossary has ulinks.\n\nOkay, but why making all this complicated if it can be simple? If I\nread the top of the page, the description of the glossary refers more\nto terms that apply to PostgreSQL and RDBMs in general. I think that\nsomething like the attached is actually more adapted, where there are\nacronyms for SNI and MITM, and where the references we have in the\nlibpq docs are moved to the page for acronyms. 
That's consistent with\nwhat we do now for the existing acronyms SSL and TLS, and there does\nnot seem to need any references to all those terms in the glossary.\n\n>> - to present a valid (trusted) SSL certificate, while\n>> + to present a valid (trusted) <acronym>SSL</acronym>/<acronym>TLS</acronym> certificate, while\n>> This style with two acronyms for what we want to be one thing is\n>> heavy. Could it be better to just have one single acronym called\n>> SSL/TLS that references both parts?\n> \n> Maybe, I don't know. I certainly don't prefer the way which is in the patch\n> but I also think it's the most \"correct\" way so I opted for that to start the\n> discussion. If we're fine with one acronym tag for the combination then I'm\n> happy to change to that.\n\nThat feels a bit more natural to me to have SSL/TLS in the contexts\nwhere they apply as a single keyword. Do we actually have sections in\nthe docs where only one of them apply, like some protocol references?\n\n> A slightly more invasive idea would be to not have acronyms at all and instead\n> move the ones that do benefit from clarification to the glossary? ISTM that\n> having a brief description of terms and not just the expansion is beneficial to\n> the users. That would however be for another thread, but perhaps thats worth\n> discussing?\n\nNot sure about this bit. That's a more general topic for sure, but I\nalso like the separation we have now between the glossary and the\nacronyms. 
Having an entry in one does not make necessary the same\nentry in the other, and vice-versa.\n\n>> Patch 0003, for the <productname> markups with OpenSSL, included one\n>> SSL/TLS entry.\n> \n> Ugh, sorry, that must've been a git add -p fat-finger.\n\nThe extra SSL/TLS entry was on one of the files changed f80979f, so\ngit add has been working just fine :)\n--\nMichael", "msg_date": "Tue, 22 Jun 2021 13:37:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SSL/TLS instead of SSL in docs" }, { "msg_contents": "> On 22 Jun 2021, at 06:37, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Mon, Jun 21, 2021 at 01:23:56PM +0200, Daniel Gustafsson wrote:\n>> On 18 Jun 2021, at 07:37, Michael Paquier <michael@paquier.xyz> wrote:\n>>> It looks inconsistent to me to point to the libpq documentation to get\n>>> the details about SNI. Wouldn't is be better to have an item in the\n>>> glossary that refers to the bits of RFC 6066, and remove the reference\n>>> of the RPC from the libpq page?\n>> \n>> I opted for a version with SNI in the glossary but without a link to the RFC\n>> since no definitions in the glossary has ulinks.\n> \n> Okay, but why making all this complicated if it can be simple? If I\n> read the top of the page, the description of the glossary refers more\n> to terms that apply to PostgreSQL and RDBMs in general. I think that\n> something like the attached is actually more adapted, where there are\n> acronyms for SNI and MITM, and where the references we have in the\n> libpq docs are moved to the page for acronyms. 
That's consistent with\n> what we do now for the existing acronyms SSL and TLS, and there does\n> not seem to need any references to all those terms in the glossary.\n\nThe attached v3 does it this way.\n\n>>> - to present a valid (trusted) SSL certificate, while\n>>> + to present a valid (trusted) <acronym>SSL</acronym>/<acronym>TLS</acronym> certificate, while\n>>> This style with two acronyms for what we want to be one thing is\n>>> heavy. Could it be better to just have one single acronym called\n>>> SSL/TLS that references both parts?\n>> \n>> Maybe, I don't know. I certainly don't prefer the way which is in the patch\n>> but I also think it's the most \"correct\" way so I opted for that to start the\n>> discussion. If we're fine with one acronym tag for the combination then I'm\n>> happy to change to that.\n> \n> That feels a bit more natural to me to have SSL/TLS in the contexts\n> where they apply as a single keyword. Do we have actually sections in\n> the docs where only one of them apply, like some protocol references?\n\nYes, there are a few but not too many. Whenever the protocol is refererred to\nand not the concept of an encrypted connection, just the applicable term is\nused.\n\nThe attached v3 wraps SSL/TLS in a single acronym block, which for sure is more\npleasing to the eye when working with the docs, but I still have no idea which\nversion technically is the most correct.\n\n>> A slightly more invasive idea would be to not have acronyms at all and instead\n>> move the ones that do benefit from clarification to the glossary? ISTM that\n>> having a brief description of terms and not just the expansion is beneficial to\n>> the users. That would however be for another thread, but perhaps thats worth\n>> discussing?\n> \n> Not sure about this bit. That's a more general topic for sure, but I\n> also like the separation we have not between the glossary and the\n> acronyms. 
Having an entry in one does not make necessary the same\n> entry in the other, and vice-versa.\n\nIt doesn't, I'm just not convinced that the acronyms page is consulted all too\nfrequently anymore to provide much value. I might be totally wrong though.\nEither way, thats (potentially) for a separate discussion.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Thu, 24 Jun 2021 13:53:47 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: SSL/TLS instead of SSL in docs" }, { "msg_contents": "On Thu, Jun 24, 2021 at 01:53:47PM +0200, Daniel Gustafsson wrote:\n> The attached v3 does it this way.\n\nThanks. Mostly what was on message upthread. Applied this one.\n\n> Yes, there are a few but not too many. Whenever the protocol is refererred to\n> and not the concept of an encrypted connection, just the applicable term is\n> used.\n\nMakes sense.\n\n> The attached v3 wraps SSL/TLS in a single acronym block, which for sure is more\n> pleasing to the eye when working with the docs, but I still have no idea which\n> version technically is the most correct.\n\nI am not sure 100% sure, but I would still vote in favor of this\nchange, perhaps with a small addition of one extra entry for SSL/TLS\ndirectly on the acronym's page for consistency. What you have here\nsounds rather fine to me.\n\n> It doesn't, I'm just not convinced that the acronyms page is consulted all too\n> frequently anymore to provide much value. I might be totally wrong though.\n> Either way, thats (potentially) for a separate discussion.\n\nNo idea about that.\n--\nMichael", "msg_date": "Fri, 25 Jun 2021 11:45:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SSL/TLS instead of SSL in docs" }, { "msg_contents": "On 15.06.21 15:59, Daniel Gustafsson wrote:\n> Looking at the docs it turns out that we have a mix of SSL (with one ssl),\n> SSL/TLS and TLS for referring to the same thing. 
The attached changes the\n> documentation to consistently use SSL/TLS when referring to an encrypted\n> connection using a TLS protocol, leaving bare SSL and TLS only for referring to\n> the actual protocols. I *think* I found all instances, there are many so I\n> might have missed some, but this version seemed like a good place to continue\n> the discussion from the previous thread.\n\nI am not in favor of this direction. I think it just adds tediousness \nand doesn't really help anyone. If we are worried about correct \nterminology, then we should just change everything to TLS. If we are \nnot, then saying SSL is enough.\n\nI note that popular places such as the Apache and nginx SSL/TLS modules \njust use SSL in their documentation, and they are probably under much \nmore scrutiny in this area. curl is a bit more inconsistent but also \ngenerally just uses SSL. So it seems not a lot of people are really \nbothered by this.\n\n\n", "msg_date": "Wed, 30 Jun 2021 20:20:50 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: SSL/TLS instead of SSL in docs" }, { "msg_contents": "On Wed, 2021-06-30 at 20:20 +0200, Peter Eisentraut wrote:\r\n> I note that popular places such as the Apache and nginx SSL/TLS modules \r\n> just use SSL in their documentation, and they are probably under much \r\n> more scrutiny in this area.\r\n\r\nFor Apache, that's not strictly true [1, 2]. 
mod_ssl and its directive\r\nnames are probably a lost cause due to inertia, but the page titles\r\nthemselves have mostly changed to SSL/TLS.\r\n\r\nhttpd documentation is also less centrally directed than this project\r\nis, in my experience -- if someone has the motivation to change things,\r\nthey'll be changed; otherwise, the status quo rules.\r\n\r\n--Jacob\r\n\r\n[1] https://httpd.apache.org/docs/2.4/ssl/\r\n[2] https://httpd.apache.org/docs/2.4/ssl/ssl_intro.html\r\n", "msg_date": "Wed, 30 Jun 2021 18:43:09 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: SSL/TLS instead of SSL in docs" }, { "msg_contents": "> On 30 Jun 2021, at 20:20, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> I am not in favor of this direction. I think it just adds tediousness and doesn't really help anyone. If we are worried about correct terminology, then we should just change everything to TLS.\n\nI actually think SSL/TLS has won the debate of \"correct terminology\" for\ndescribing a secure connection encrypted by a TLS protocol.\n\n> If we are not, then saying SSL is enough.\n\nI think consistency is the interesting aspect here. We already have a mix of\nSSL, TLS and SSL/TLS (although heavily skewed towards SSL) so we should settle\non one and stick to it. The arguments in the NSS thread which led to this\npointed to SSL/TLS. If we feel that the churn isn't worth it, then we should\nchange all to SSL and perhaps instead just add TLS as indexterms to those\nsections.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 30 Jun 2021 22:46:37 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: SSL/TLS instead of SSL in docs" }, { "msg_contents": "On Wed, Jun 30, 2021, at 5:46 PM, Daniel Gustafsson wrote:\n> > On 30 Jun 2021, at 20:20, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> > I am not in favor of this direction. 
I think it just adds tediousness and doesn't really help anyone. If we are worried about correct terminology, then we should just change everything to TLS.\n> \n> I actually think SSL/TLS has won the debate of \"correct terminology\" for\n> describing a secure connection encrypted by a TLS protocol.\n> \nTLS is described as a successor of SSL. However, the terminology SSL is still\npopular when you are talking about secure connection over a computer network.\nIt seems that's one of the main reasons for articles/documentation use SSL/TLS.\n\nThe primary use of SSL/TLS is to secure WWW connections over HTTP protocol. A\nrecent survey reveals that SSL is supported by less than 4% of the websites in\nthe world [1]. SSL 3.0 (the latest published protocol version) is deprecated\nsince 2015 (6 years ago) [2]. There is no web browser that has SSL enabled by\ndefault (indeed, most of them don't support SSL anymore).\n\nI tend to agree with Peter that the correct terminology is TLS. However, SSL is\nstill popular (probably because popular SSL/TLS libraries contain SSL in its\nname). If we change to SSL/TLS, I'm afraid we have this discussion again for\n(a) remove SSL or (b) add another popular secure protocol and we end up with\nSSL/TLS/FOO terminology.\n\nCommit fe61df7f introduces a new configure option that is --with-ssl. Such\noption is also used in other softwares too. All configuration parameters\nrelated to SSL/TLS starts with ssl. It is hard to decide among popular (SSL),\ncorrect (TLS), and mix (SSL/TLS).\n\nIf I have to pick one, it would be SSL/TLS. It mentions both acronyms that is\neasier to correlate with configuration parameters, secure connections (via\n--with-ssl) and current protocol (TLS).\n\nYour patch doesn't apply anymore and requires a rebase. I'm attaching a new\nversion. It looks good to me. I noticed that you are using\n<acronym>SSL/TLS</acronym>, however, the acronyms are declared separated. 
It\ndoesn't seem to be a presentation issue per se but I'm asking just in case.\n\n\n[1] https://en.wikipedia.org/wiki/Transport_Layer_Security#Websites\n[1] https://datatracker.ietf.org/doc/html/rfc7568\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Thu, 01 Jul 2021 13:01:52 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: SSL/TLS instead of SSL in docs" }, { "msg_contents": "On 30.06.21 20:43, Jacob Champion wrote:\n> On Wed, 2021-06-30 at 20:20 +0200, Peter Eisentraut wrote:\n>> I note that popular places such as the Apache and nginx SSL/TLS modules\n>> just use SSL in their documentation, and they are probably under much\n>> more scrutiny in this area.\n> \n> For Apache, that's not strictly true [1, 2]. mod_ssl and its directive\n> names are probably a lost cause due to inertia, but the page titles\n> themselves have mostly changed to SSL/TLS.\n\n> [1] https://httpd.apache.org/docs/2.4/ssl/\n> [2] https://httpd.apache.org/docs/2.4/ssl/ssl_intro.html\n\nThat page entirely supports my point: It uses \"SSL\" throughout, except \nin the title and where it talks about the specific protocol names.\n\n\n", "msg_date": "Thu, 1 Jul 2021 22:26:55 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: SSL/TLS instead of SSL in docs" }, { "msg_contents": "On 30.06.21 22:46, Daniel Gustafsson wrote:\n> I think consistency is the interesting aspect here. We already have a mix of\n> SSL, TLS and SSL/TLS (although heavily skewed towards SSL) so we should settle\n> on one and stick to it. The arguments in the NSS thread which led to this\n> pointed to SSL/TLS. If we feel that the churn isn't worth it, then we should\n> change all to SSL and perhaps instead just add TLS as indexterms to those\n> sections.\n\nI think it is already consistent in that it uses \"SSL\". 
Is that not the \ncase?\n\nI notice that the NSS documentation also uses \"SSL\" almost exclusively \nwhen referring to the SSL and TLS protocols and related APIs.\n\n\n\n", "msg_date": "Thu, 1 Jul 2021 22:40:25 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: SSL/TLS instead of SSL in docs" }, { "msg_contents": "> On 1 Jul 2021, at 22:40, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 30.06.21 22:46, Daniel Gustafsson wrote:\n>> I think consistency is the interesting aspect here. We already have a mix of\n>> SSL, TLS and SSL/TLS (although heavily skewed towards SSL) so we should settle\n>> on one and stick to it. The arguments in the NSS thread which led to this\n>> pointed to SSL/TLS. If we feel that the churn isn't worth it, then we should\n>> change all to SSL and perhaps instead just add TLS as indexterms to those\n>> sections.\n> \n> I think it is already consistent in that it uses \"SSL\". 
Is that not the \ncase?\n\nI notice that the NSS documentation also uses \"SSL\" almost exclusively \nwhen referring to the SSL and TLS protocols and related APIs.\n\n\n\n", "msg_date": "Thu, 1 Jul 2021 22:40:25 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: SSL/TLS instead of SSL in docs" }, { "msg_contents": "> On 1 Jul 2021, at 22:40, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 30.06.21 22:46, Daniel Gustafsson wrote:\n>> I think consistency is the interesting aspect here. We already have a mix of\n>> SSL, TLS and SSL/TLS (although heavily skewed towards SSL) so we should settle\n>> on one and stick to it. The arguments in the NSS thread which led to this\n>> pointed to SSL/TLS. If we feel that the churn isn't worth it, then we should\n>> change all to SSL and perhaps instead just add TLS as indexterms to those\n>> sections.\n> \n> I think it is already consistent in that it uses \"SSL\". Is that not the case?\n\nAlmost, but not entirely, and if we want to settle on a single term now is a\ngood time before it diverges too far.\n\n> I notice that the NSS documentation also uses \"SSL\" almost exclusively when referring to the SSL and TLS protocols and related APIs.\n\nTo be fair, the NSS documentation has more or less not seen updates at all in\nyears, large parts of the API are completely missing.\n\nThe best maintained TLS library documentation today is, I would argue, OpenSSL\nand grepping around there (unscientifically) looks a bit different:\n\nSSL: 177 (corrected for not counting the SSL struct)\nSSL/TLS (or TLS/SSL): 154\nTLS: 252\n\nThis patch came about since there was an ask over in the NSS thread to stop\nusing SSL as a term, but if there isn't enough support to warrant the churn\nthen we should standardize on SSL and just include a paragraph explaining what\nwe mean by that.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 1 Jul 2021 23:40:27 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: SSL/TLS instead of SSL in docs" }, { "msg_contents": "Since the approach taken wasn't to anyone's liking, attached is a v4 (partly\nextracted from the previous patch) which only adds notes that SSL is used\ninterchangeably with TLS in our documentation and configuration.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 15 Sep 2021 14:47:11 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: SSL/TLS instead of SSL in docs" }, { "msg_contents": "On Wed, Sep 15, 2021 at 8:47 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> Since the approach taken wasn't to anyone's liking, attached is a v4 (partly\n> extracted from the previous patch) which only adds notes that SSL is used\n> interchangeably with TLS in our documentation and configuration.\n\nI have actually been wondering why we have been insisting on calling\nit 
SSL when it clearly is not. However, if we're not ready/willing to\nmake a bigger change, then doing as you have proposed here seems fine\nto me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 25 Mar 2022 15:58:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SSL/TLS instead of SSL in docs" }, { "msg_contents": "> On 25 Mar 2022, at 20:58, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Wed, Sep 15, 2021 at 8:47 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> Since the approach taken wasn't to anyones liking, attached is a v4 (partly\n>> extracted from the previous patch) which only adds notes that SSL is used\n>> interchangeably with TLS in our documentation and configuration.\n> \n> I have actually been wondering why we have been insisting on calling\n> it SSL when it clearly is not.\n\nSSL has become the de facto term for a network channel encryption regardless of\nactual protocol used. Few use TLS, with most SSL/TLS is\n\n> However, if we're not ready/willing to make a bigger change, then doing as you\n> have proposed here seems fine to me.\n\nThanks for review! Trying out again just now the patch still applies (with\nsome offsets) and builds.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 25 Mar 2022 22:01:01 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: SSL/TLS instead of SSL in docs" }, { "msg_contents": "> On 25 Mar 2022, at 22:01, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 25 Mar 2022, at 20:58, Robert Haas <robertmhaas@gmail.com> wrote:\n\n>> However, if we're not ready/willing to make a bigger change, then doing as you\n>> have proposed here seems fine to me.\n> \n> Thanks for review! Trying out again just now the patch still applies (with\n> some offsets) and builds.\n\nBarring objections I will go ahead and push this for 15. 
It's the minimal\nchange but it might still help someone new to PostgreSQL who gets confused on\nthe choice of naming/wording.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 28 Mar 2022 23:51:30 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: SSL/TLS instead of SSL in docs" }, { "msg_contents": "> On 28 Mar 2022, at 23:51, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> Barring objections I will go ahead and push this for 15. It's the minimal\n> change but it might still help someone new to PostgreSQL who gets confused on\n> the choice of naming/wording.\n\nHearing no objections I went ahead with this now.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 30 Mar 2022 13:43:54 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: SSL/TLS instead of SSL in docs" } ]
[ { "msg_contents": "Hi,\n\nMy use case is to create an isolated interface schema consisting of only\nviews and functions (possibly many schemas, for multi-tenancy or\nmulti-version), which has the minimal access exposure. To reduce the mental\nand maintenance burden, I am inclined to create one role per interface\nschema, instead of creating separate roles for the owner and the user. As a\nconsequence, the default privileges must be revoked from the owner.\nExplicit revocation works just fine, except that it requires repetitive and\nforgettable statements for each object in the schema.\n\nThe default privileges come to rescue. It mostly works, despite a bit of\nconfusion to me.\n\nThe ending contents are some experiments and demonstrations. To sum up, I\nhave to either leave some non-critical privileges (e.g., trigger,\nreferences) by the default privilege mechanism or manually revoke all\nprivileges, to stop the owner having all the default privileges. Plus, the\nfirst alternative is not applicable to functions because there is only one\nprivilege for functions (execute).\n\nTo me, it is confusing and less intuitive. Or is there something I miss?\n\nTL;DR\nRevoking all default privileges is effectively equivalent to revoking\nnothing, because an empty string of access privileges is handled as\n'default'.\n\nMaybe 'NULL' for 'default', and '' (empty string) means nothing?\n\nRegards.\n\n\n------------------------------------------------------------------------------------------\n\ndrop owned by owner;\ndrop role if exists owner, guest;\n\ncreate role owner;\ncreate role guest;\n\ndrop schema if exists s;\ncreate schema if not exists s authorization owner;\n\nDROP OWNED DROP ROLE CREATE ROLE CREATE ROLE DROP SCHEMA CREATE SCHEMA\n1. tables\n1.1. 
no-op\n\nset role to owner;\ncreate or replace view s.v1 as select 1;\n\n\\dp+ s.v1\n\n Schema | Name | Type | Access privileges | Column privileges | Policies\n--------+------+------+-------------------+-------------------+----------\n s      | v1   | view |                   |                   |\n\nselect * from information_schema.role_table_grants where table_name='v1';\n\n grantor | grantee | table_catalog | table_schema | table_name | privilege_type | is_grantable | with_hierarchy\n---------+---------+---------------+--------------+------------+----------------+--------------+----------------\n owner   | owner   | postgres      | s            | v1         | INSERT         | YES          | NO\n owner   | owner   | postgres      | s            | v1         | SELECT         | YES          | YES\n owner   | owner   | postgres      | s            | v1         | UPDATE         | YES          | NO\n owner   | owner   | postgres      | s            | v1         | DELETE         | YES          | NO\n owner   | owner   | postgres      | s            | v1         | TRUNCATE       | YES          | NO\n owner   | owner   | postgres      | s            | v1         | REFERENCES     | YES          | NO\n owner   | owner   | postgres      | s            | v1         | TRIGGER        | YES          | NO\n\nset role to owner;\nselect * from s.v1;\n\n ?column?\n----------\n        1\n\n1.2. default privilege: revoke all from owner\n\nalter default privileges for user owner revoke all on tables from owner;\n\\ddp+\n\n Owner | Schema | Type  | Access privileges\n-------+--------+-------+-------------------\n owner |        | table |\n\nset role to owner;\ncreate or replace view s.v2 as select 1;\n\n\\dp+ s.v2\n\n Schema | Name | Type | Access privileges | Column privileges | Policies\n--------+------+------+-------------------+-------------------+----------\n s      | v2   | view |                   |                   |\n\nselect * from information_schema.role_table_grants where table_name='v2';\n\n grantor | grantee | table_catalog | table_schema | table_name | privilege_type | is_grantable | with_hierarchy\n---------+---------+---------------+--------------+------------+----------------+--------------+----------------\n owner   | owner   | postgres      | s            | v2         | INSERT         | YES          | NO\n owner   | owner   | postgres      | s            | v2         | SELECT         | YES          | YES\n owner   | owner   | postgres      | s            | v2         | UPDATE         | YES          | NO\n owner   | owner   | postgres      | s            | v2         | DELETE         | YES          | NO\n owner   | owner   | postgres      | s            | v2         | TRUNCATE       | YES          | NO\n owner   | owner   | postgres      | s            | v2         | REFERENCES     | YES          | NO\n owner   | owner   | postgres      | s            | v2         | TRIGGER        | YES          | NO\n\nset role to owner;\nselect * from s.v2;\n\n ?column?\n----------\n        1\n\n1.3. default privilege: revoke all but one from owner\n\nalter default privileges for user owner revoke all on tables from owner;\nalter default privileges for user owner grant trigger on tables to owner;\n\\ddp+\n\n Owner | Schema | Type  | Access privileges\n-------+--------+-------+-------------------\n owner |        | table | owner=t/owner\n\nset role to owner;\ncreate or replace view s.v3 as select 1;\n\n\\dp+ s.v3\n\n Schema | Name | Type | Access privileges | Column privileges | Policies\n--------+------+------+-------------------+-------------------+----------\n s      | v3   | view | owner=t/owner     |                   |\n\nselect * from information_schema.role_table_grants where table_name='v3';\n\n grantor | grantee | table_catalog | table_schema | table_name | privilege_type | is_grantable | with_hierarchy\n---------+---------+---------------+--------------+------------+----------------+--------------+----------------\n owner   | owner   | postgres      | s            | v3         | TRIGGER        | YES          | NO\n\nset role to owner;\nselect * from s.v3;\n\nERROR: 42501: permission denied for view v3\nLOCATION: aclcheck_error, aclchk.c:3461\n\n1.4. manual revoke all from owner\n\nalter default privileges for user owner revoke all on tables from owner;\n\\ddp+\n\n Owner | Schema | Type  | Access privileges\n-------+--------+-------+-------------------\n owner |        | table |\n\nset role to owner;\ncreate or replace view s.v4 as select 1;\n\n\\dp+ s.v4\n\n Schema | Name | Type | Access privileges | Column privileges | Policies\n--------+------+------+-------------------+-------------------+----------\n s      | v4   | view |                   |                   |\n\nselect * from information_schema.role_table_grants where table_name='v4';\n\n grantor | grantee | table_catalog | table_schema | table_name | privilege_type | is_grantable | with_hierarchy\n---------+---------+---------------+--------------+------------+----------------+--------------+----------------\n owner   | owner   | postgres      | s            | v4         | INSERT         | YES          | NO\n owner   | owner   | postgres      | s            | v4         | SELECT         | YES          | YES\n owner   | owner   | postgres      | s            | v4         | UPDATE         | YES          | NO\n owner   | owner   | postgres      | s            | v4         | DELETE         | YES          | NO\n owner   | owner   | postgres      | s            | v4         | TRUNCATE       | YES          | NO\n owner   | owner   | postgres      | s            | v4         | REFERENCES     | YES          | NO\n owner   | owner   | postgres      | s            | v4         | TRIGGER        | YES          | NO\n\nset role to owner;\nselect * from s.v4;\n\n ?column?\n----------\n        1\n\nSo far, the situation is identical to s.v2.\n\nset role to owner;\nrevoke all on table s.v4 from owner;\n\n\\dp+ s.v4\n\n Schema | Name | Type | Access privileges | Column privileges | Policies\n--------+------+------+-------------------+-------------------+----------\n s      | v4   | view |                   |                   |\n\nselect * from information_schema.role_table_grants where table_name='v4';\n\n grantor | grantee | table_catalog | table_schema | table_name | privilege_type | is_grantable | with_hierarchy\n---------+---------+---------------+--------------+------------+----------------+--------------+----------------\n\nset role to owner;\nselect * from s.v4;\n\nERROR: 42501: permission denied for view v4\nLOCATION: aclcheck_error, aclchk.c:3461\n\nHi,\n\nMy use case is to create an isolated interface schema consisting of only\nviews and functions (possibly many schemas, for multi-tenancy or\nmulti-version use), with minimal access exposure. To reduce the mental\nand maintenance burden, I am inclined to create one role per interface\nschema, instead of creating separate roles for the owner and the user.\nAs a consequence, the default privileges must be revoked from the owner.\nExplicit revocation works just fine, except that it requires repetitive,\neasily forgotten statements for each object in the schema.\n\nDefault privileges come to the rescue. They mostly work, despite some\nconfusion on my part. The contents above are some experiments and\ndemonstrations. To sum up, I have to either leave some non-critical\nprivileges (e.g., trigger, references) in place via the default\nprivilege mechanism, or manually revoke all privileges on each object,\nto stop the owner from having all the default privileges. Moreover, the\nfirst alternative is not applicable to functions, because functions have\nonly one privilege (execute).\n\nTo me, this is confusing and unintuitive. Or is there something I am\nmissing?\n\nTL;DR: Revoking all default privileges is effectively equivalent to\nrevoking nothing, because an empty set of access privileges is handled\nas 'default'. Maybe NULL should mean 'default', and '' (empty string)\nshould mean nothing?\n\nRegards.\n", "msg_date": "Tue, 15 Jun 2021 22:13:54 +0800", "msg_from": "=?UTF-8?B?5a2Z5Yaw?= <subi.the.dream.walker@gmail.com>", "msg_from_op": true, "msg_subject": "Confused by the default privilege" }, { "msg_contents": "Gee, I pasted the ending demonstration as html.\r\n\r\nRe-pasting a text 
version.\r\n\r\n----------------------------------------------------------------------------------\r\n\r\n\r\n┌────\r\n│ drop owned by owner;\r\n│ drop role if exists owner, guest;\r\n│\r\n│ create role owner;\r\n│ create role guest;\r\n│\r\n│ drop schema if exists s;\r\n│ create schema if not exists s authorization owner;\r\n└────\r\n\r\nDROP OWNED DROP ROLE CREATE ROLE CREATE ROLE DROP SCHEMA CREATE SCHEMA\r\n\r\n\r\n1 tables\r\n════════\r\n\r\n1.1 no-op\r\n────\r\n\r\n ┌────\r\n │ set role to owner;\r\n │ create or replace view s.v1 as select 1;\r\n └────\r\n\r\n ┌────\r\n │ \\dp+ s.v1\r\n └────\r\n\r\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n Schema Name Type Access privileges Column privileges Policies\r\n ────────────────────────────────────────────────────────────────────\r\n s v1 view\r\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n\r\n ┌────\r\n │ select * from information_schema.role_table_grants where\r\ntable_name='v1';\r\n └────\r\n\r\n\r\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n grantor grantee table_catalog table_schema table_name\r\n privilege_type is_grantable with_hierarchy\r\n\r\n─────────────────────────────────────────────────────────────────────────────────────────────────────────\r\n owner owner postgres s v1 INSERT\r\n YES NO\r\n owner owner postgres s v1 SELECT\r\n YES YES\r\n owner owner postgres s v1 UPDATE\r\n YES NO\r\n owner owner postgres s v1 DELETE\r\n YES NO\r\n owner owner postgres s v1 TRUNCATE\r\n YES NO\r\n owner owner postgres s v1 REFERENCES\r\n YES NO\r\n owner owner postgres s v1 TRIGGER\r\n YES NO\r\n\r\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n\r\n ┌────\r\n │ set role to owner;\r\n │ select * from s.v1;\r\n └────\r\n\r\n ━━━━━━━━━━\r\n ?column?\r\n ──────────\r\n 1\r\n ━━━━━━━━━━\r\n\r\n\r\n1.2 default privilege: `revoke all 
from owner'\r\n───────────────────────\r\n\r\n ┌────\r\n │ alter default privileges for user owner revoke all on tables from owner;\r\n │ \\ddp+\r\n └────\r\n\r\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n Owner Schema Type Access privileges\r\n ─────────────────────────────────────────\r\n owner table\r\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n\r\n ┌────\r\n │ set role to owner;\r\n │ create or replace view s.v2 as select 1;\r\n └────\r\n\r\n ┌────\r\n │ \\dp+ s.v2\r\n └────\r\n\r\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n Schema Name Type Access privileges Column privileges Policies\r\n ────────────────────────────────────────────────────────────────────\r\n s v2 view\r\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n\r\n ┌────\r\n │ select * from information_schema.role_table_grants where\r\ntable_name='v2';\r\n └────\r\n\r\n\r\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n grantor grantee table_catalog table_schema table_name\r\n privilege_type is_grantable with_hierarchy\r\n\r\n─────────────────────────────────────────────────────────────────────────────────────────────────────────\r\n owner owner postgres s v2 INSERT\r\n YES NO\r\n owner owner postgres s v2 SELECT\r\n YES YES\r\n owner owner postgres s v2 UPDATE\r\n YES NO\r\n owner owner postgres s v2 DELETE\r\n YES NO\r\n owner owner postgres s v2 TRUNCATE\r\n YES NO\r\n owner owner postgres s v2 REFERENCES\r\n YES NO\r\n owner owner postgres s v2 TRIGGER\r\n YES NO\r\n\r\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n\r\n ┌────\r\n │ set role to owner;\r\n │ select * from s.v2;\r\n └────\r\n\r\n ━━━━━━━━━━\r\n ?column?\r\n ──────────\r\n 1\r\n ━━━━━━━━━━\r\n\r\n\r\n1.3 default privilege: `revoke all but one from owner'\r\n───────────────────────────\r\n\r\n ┌────\r\n │ alter default privileges for user owner revoke 
all on tables from owner;\r\n │ alter default privileges for user owner grant trigger on tables to\r\nowner;\r\n │ \\ddp+\r\n └────\r\n\r\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n Owner Schema Type Access privileges\r\n ─────────────────────────────────────────\r\n owner table owner=t/owner\r\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n\r\n ┌────\r\n │ set role to owner;\r\n │ create or replace view s.v3 as select 1;\r\n └────\r\n\r\n ┌────\r\n │ \\dp+ s.v3\r\n └────\r\n\r\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n Schema Name Type Access privileges Column privileges Policies\r\n ────────────────────────────────────────────────────────────────────\r\n s v3 view owner=t/owner\r\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n\r\n ┌────\r\n │ select * from information_schema.role_table_grants where\r\ntable_name='v3';\r\n └────\r\n\r\n\r\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n grantor grantee table_catalog table_schema table_name\r\n privilege_type is_grantable with_hierarchy\r\n\r\n─────────────────────────────────────────────────────────────────────────────────────────────────────────\r\n owner owner postgres s v3 TRIGGER\r\n YES NO\r\n\r\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n\r\n ┌────\r\n │ set role to owner;\r\n │ select * from s.v3;\r\n └────\r\n\r\n ┌────\r\n │ ERROR: 42501: permission denied for view v3\r\n │ LOCATION: aclcheck_error, aclchk.c:3461\r\n └────\r\n\r\n\r\n1.4 manual `revoke all from owner'\r\n─────────────────\r\n\r\n ┌────\r\n │ alter default privileges for user owner revoke all on tables from owner;\r\n │ \\ddp+\r\n └────\r\n\r\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n Owner Schema Type Access privileges\r\n ─────────────────────────────────────────\r\n owner table\r\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n\r\n ┌────\r\n │ 
set role to owner;\r\n │ create or replace view s.v4 as select 1;\r\n └────\r\n\r\n ┌────\r\n │ \\dp+ s.v4\r\n └────\r\n\r\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n Schema Name Type Access privileges Column privileges Policies\r\n ────────────────────────────────────────────────────────────────────\r\n s v4 view\r\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n\r\n ┌────\r\n │ select * from information_schema.role_table_grants where\r\ntable_name='v4';\r\n └────\r\n\r\n\r\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n grantor grantee table_catalog table_schema table_name\r\n privilege_type is_grantable with_hierarchy\r\n\r\n─────────────────────────────────────────────────────────────────────────────────────────────────────────\r\n owner owner postgres s v4 INSERT\r\n YES NO\r\n owner owner postgres s v4 SELECT\r\n YES YES\r\n owner owner postgres s v4 UPDATE\r\n YES NO\r\n owner owner postgres s v4 DELETE\r\n YES NO\r\n owner owner postgres s v4 TRUNCATE\r\n YES NO\r\n owner owner postgres s v4 REFERENCES\r\n YES NO\r\n owner owner postgres s v4 TRIGGER\r\n YES NO\r\n\r\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n\r\n ┌────\r\n │ set role to owner;\r\n │ select * from s.v4;\r\n └────\r\n\r\n ━━━━━━━━━━\r\n ?column?\r\n ──────────\r\n 1\r\n ━━━━━━━━━━\r\n\r\n So far, the situation is identical to s.v2.\r\n\r\n ┌────\r\n │ set role to owner;\r\n │ revoke all on table s.v4 from owner;\r\n └────\r\n\r\n ┌────\r\n │ \\dp+ s.v4\r\n └────\r\n\r\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n Schema Name Type Access privileges Column privileges Policies\r\n ────────────────────────────────────────────────────────────────────\r\n s v4 view\r\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n\r\n ┌────\r\n │ select * from 
information_schema.role_table_grants where\r\ntable_name='v4';\r\n └────\r\n\r\n\r\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n grantor grantee table_catalog table_schema table_name\r\n privilege_type is_grantable with_hierarchy\r\n\r\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\r\n\r\n ┌────\r\n │ set role to owner;\r\n │ select * from s.v4;\r\n └────\r\n\r\n ┌────\r\n │ ERROR: 42501: permission denied for view v4\r\n │ LOCATION: aclcheck_error, aclchk.c:3461\r\n └────\r\n", "msg_date": "Tue, 15 Jun 2021 22:19:46 +0800", "msg_from": "=?UTF-8?B?5a2Z5Yaw?= <subi.the.dream.walker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Confused by the default privilege" } ]
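The TL;DR of the thread above can be made concrete with a small toy model. This is an editorial sketch, not PostgreSQL source: the names `ALL_TABLE_PRIVS`, `default_acl_after`, and `effective_privileges` are invented, and the rule that an empty default ACL is stored as "no entry" is an assumption inferred from the behavior the poster observed (a NULL ACL means "use the built-in default", under which the owner holds every privilege, while an explicitly stored empty ACL means no privileges at all).

```python
# Toy model of the confusion: revoking everything via default
# privileges collapses to "nothing stored", which is indistinguishable
# from "use the built-in default" -- so the owner keeps all privileges.

ALL_TABLE_PRIVS = frozenset(
    {"INSERT", "SELECT", "UPDATE", "DELETE", "TRUNCATE", "REFERENCES", "TRIGGER"}
)

def default_acl_after(grants):
    """ACL produced by ALTER DEFAULT PRIVILEGES: an empty result is
    stored as None ("no entry"), not as an empty privilege set."""
    return grants if grants else None

def effective_privileges(stored_acl):
    """NULL ACL -> built-in default (owner gets everything);
    an explicitly stored ACL is taken literally."""
    if stored_acl is None:
        return ALL_TABLE_PRIVS
    return frozenset(stored_acl)

# "alter default privileges ... revoke all": no effect (like s.v2).
assert effective_privileges(default_acl_after(set())) == ALL_TABLE_PRIVS

# "revoke all but trigger": the non-empty entry survives (like s.v3).
assert effective_privileges(default_acl_after({"TRIGGER"})) == {"TRIGGER"}

# A manual REVOKE ALL on the object stores a real, empty ACL (like s.v4).
assert effective_privileges(frozenset()) == frozenset()
```

Under this model, psql's blank "Access privileges" column is ambiguous: it renders both the NULL case (full default privileges) and the empty-ACL case (no privileges) identically, which is exactly the confusion reported.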
[ { "msg_contents": "While working on some related issues I found that the wal receiver\nruns the walrcv_receive() loop before replying with the write/flush/apply\nLSN to wal senders in XLogWalRcvSendReply(). It is possible that the\nwalrcv_receive() loop receives and writes a lot of xlog, so it does not\nreply with that LSN information in time, which finally slows down those\ntransactions due to the syncrep wait (assuming the default\nsynchronous_commit).\n\nIn my TPCB testing, I found the worst case was that 10,466,469 bytes\nwere consumed in the walrcv_receive() loop.\n\nMore seriously, we call XLogWalRcvSendReply(false, false) after handling\nthose bytes. The first argument false means no force, i.e. it notifies\nonly when the max time of the GUC wal_receiver_status_interval (10s by\ndefault) is reached, so we may have to wait for other calls of\nXLogWalRcvSendReply() to notify the wal sender.\n\nI thought about and tried enhancing this by force-replying to the wal\nsender each time a maximum number of bytes (e.g. 128K) is received, but\nseveral things confused me:\n\n- What's the purpose of the GUC wal_receiver_status_interval? The OS\n  kernel is usually not efficient when handling small packets, but we\n  are not replying that aggressively, so why is this GUC there?\n\n- I ran simple TPCB (1000 scaling, 200 connections, shared_buffers and\n  max_connections tuned) but found no obvious performance difference\n  with and without the code change. I did not see an obvious system\n  (IO/CPU/network) bottleneck - probably the bottleneck is in PG itself.\n  I did not investigate further at this moment, but the change should\n  in theory help, no?\n\nAnother thing that came to my mind is the wal receiver logic:\nCurrently the wal receiver process does network IO, wal write and wal\nflush in one process.\nNetwork IO is async, blocking at epoll/poll; wal write is mostly\nnon-blocking; but for wal flush,\nprobably we could decouple it to a dedicated process. 
And maybe we could use sync_file_range instead of the wal file fsync in\nissue_xlog_fsync()? We should sync the wal contents with lower LSNs\nfirst and reply to the wal sender in time, right?\n\nBelow is the related code:\n\n    /* See if we can read data immediately */\n    len = walrcv_receive(wrconn, &buf, &wait_fd);\n    if (len != 0)\n    {\n        /*\n         * Process the received data, and any subsequent data we\n         * can read without blocking.\n         */\n        for (;;)\n        {\n            if (len > 0)\n            {\n                /*\n                 * Something was received from primary, so reset\n                 * timeout\n                 */\n                last_recv_timestamp = GetCurrentTimestamp();\n                ping_sent = false;\n                XLogWalRcvProcessMsg(buf[0], &buf[1], len - 1);\n            }\n            else if (len == 0)\n                break;\n            else if (len < 0)\n            {\n                ereport(LOG,\n                        (errmsg(\"replication terminated by primary server\"),\n                         errdetail(\"End of WAL reached on timeline %u at %X/%X.\",\n                                   startpointTLI,\n                                   LSN_FORMAT_ARGS(LogstreamResult.Write))));\n                endofwal = true;\n                break;\n            }\n            len = walrcv_receive(wrconn, &buf, &wait_fd);\n        }\n\n        /* Let the primary know that we received some data. */\n        XLogWalRcvSendReply(false, false);\n\n        /*\n         * If we've written some records, flush them to disk and\n         * let the startup process and primary server know about\n         * them.\n         */\n        XLogWalRcvFlush(false);\n\n-- \nPaul Guo (Vmware)\n\n\n", "msg_date": "Tue, 15 Jun 2021 23:39:59 +0800", "msg_from": "Paul Guo <paulguo@gmail.com>", "msg_from_op": true, "msg_subject": "Should wal receiver reply to wal sender more aggressively?" }, { "msg_contents": "[ Resending the mail since I found my previous email had a very\n  bad format that is hard to read. ]\n\nWhile working on some related issues I found that the wal receiver\nruns the walrcv_receive() loop before replying with the write/flush/apply\nLSN to wal senders in XLogWalRcvSendReply(). 
It is possible that the\nwalrcv_receive() loop receives and writes a lot of xlog, so it does\nnot reply with that LSN information in time, which finally slows down\nthe transactions due to the syncrep wait (assuming the default\nsynchronous_commit).\n\nDuring TPCB testing, I found the worst case was that 10,466,469 bytes\nwere consumed in the walrcv_receive() loop.\n\nMore seriously, we call XLogWalRcvSendReply(false, false) after\nhandling those bytes. The first argument false means no force,\ni.e. it notifies only when the max time of the GUC\nwal_receiver_status_interval (10s by default) is reached, so we may\nhave to wait for other calls of XLogWalRcvSendReply() to notify the\nwal sender.\n\nI thought about and tried enhancing this by force-flushing-and-replying\neach time a maximum number of bytes (e.g. 128K) is received, but\nseveral things confused me:\n\n- What's the purpose of the GUC wal_receiver_status_interval? The OS\n  kernel is usually not efficient when handling small packets, but we\n  are not replying that aggressively, so why is this GUC there?\n\n- I ran simple TPCB (1000 scaling, 200 connections, shared_buffers and\n  max_connections tuned) but found no obvious performance difference\n  with and without the code change. I did not see an obvious system\n  (IO/CPU/network) bottleneck - probably the bottleneck is in PG\n  itself? I did not investigate further at this moment, but the change\n  should in theory help, right? I may continue investigating but\n  probably won't do this unless I have some clear answers to the\n  confusions.\n\nAnother thing that came to my mind is the wal receiver logic:\nCurrently the wal receiver process does network IO, wal write and wal\nflush in one process. Network IO is async, blocking at epoll/poll,\netc.; wal write is mostly non-blocking; but for wal flush, probably we\ncould decouple it to a dedicated process? And maybe use sync_file_range\ninstead of the wal file fsync in issue_xlog_fsync()? 
We should sync the\nwal contents with lower LSNs first and reply to the wal sender in\ntime, right?\n\nBelow is the related code:\n\n    /* See if we can read data immediately */\n    len = walrcv_receive(wrconn, &buf, &wait_fd);\n    if (len != 0)\n    {\n        /*\n         * Process the received data, and any subsequent data we\n         * can read without blocking.\n         */\n        for (;;)\n        {\n            if (len > 0)\n            {\n                /*\n                 * Something was received from primary, so reset\n                 * timeout\n                 */\n                last_recv_timestamp = GetCurrentTimestamp();\n                ping_sent = false;\n                XLogWalRcvProcessMsg(buf[0], &buf[1], len - 1);\n            }\n            else if (len == 0)\n                break;\n            else if (len < 0)\n            {\n                ereport(LOG,\n                        (errmsg(\"replication terminated by primary server\"),\n                         errdetail(\"End of WAL reached on timeline %u at %X/%X.\",\n                                   startpointTLI,\n                                   LSN_FORMAT_ARGS(LogstreamResult.Write))));\n                endofwal = true;\n                break;\n            }\n            len = walrcv_receive(wrconn, &buf, &wait_fd);\n        }\n\n        /* Let the primary know that we received some data. */\n        XLogWalRcvSendReply(false, false);\n\n        /*\n         * If we've written some records, flush them to disk and\n         * let the startup process and primary server know about\n         * them.\n         */\n        XLogWalRcvFlush(false);\n\n\n", "msg_date": "Wed, 16 Jun 2021 22:23:53 +0800", "msg_from": "Paul Guo <paulguo@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Should wal receiver reply to wal sender more aggressively?" } ]
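The enhancement discussed in this thread (force-replying once a bounded number of bytes has been processed, rather than only after the receive loop drains) can be sketched as a small simulation. This is an editorial sketch, not PostgreSQL code: `run_receive_loop`, `REPLY_THRESHOLD`, and the reply bookkeeping are hypothetical stand-ins for the walreceiver internals.

```python
# Sketch of the proposal: inside the receive loop, count the bytes
# processed since the last status reply and force a reply once a
# threshold (e.g. 128 KiB) is crossed, so syncrep waiters on the
# primary see write/flush progress without waiting for the whole loop
# to drain (worst case observed in the thread: ~10 MB in one loop).

REPLY_THRESHOLD = 128 * 1024  # bytes between forced replies (assumed)

def run_receive_loop(chunks, replies):
    """Process incoming WAL chunks; record the byte position of every
    status reply in `replies`. Returns total bytes received."""
    received = 0
    since_reply = 0
    for chunk_len in chunks:          # each walrcv_receive() result
        received += chunk_len         # XLogWalRcvProcessMsg() work
        since_reply += chunk_len
        if since_reply >= REPLY_THRESHOLD:
            replies.append(received)  # forced XLogWalRcvSendReply()
            since_reply = 0
    replies.append(received)          # the existing post-loop reply
    return received

replies = []
total = run_receive_loop([32 * 1024] * 20, replies)  # 640 KiB, 32 KiB chunks
# Forced replies at each 128 KiB boundary, plus the final post-loop one:
assert total == 640 * 1024
assert replies == [128 * 1024, 256 * 1024, 384 * 1024,
                   512 * 1024, 640 * 1024, 640 * 1024]
```

The simulation shows the point of the change: without the in-loop replies the primary would learn nothing until the single reply at 640 KiB, while with them the reply latency is bounded by the threshold regardless of how long the loop runs.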
[ { "msg_contents": "I propose to change some defaults:\n\nlog_autovacuum_min_duration = 0\nlog_checkpoints = on\nlog_lock_waits = on (and log_recovery_conflict_waits too?)\nlog_temp_files = 128kB\n\nNote that pg_regress does this:\n| fputs(\"\\n# Configuration added by pg_regress\\n\\n\", pg_conf);\n| fputs(\"log_autovacuum_min_duration = 0\\n\", pg_conf);\n| fputs(\"log_checkpoints = on\\n\", pg_conf);\n| fputs(\"log_line_prefix = '%m %b[%p] %q%a '\\n\", pg_conf);\n| fputs(\"log_lock_waits = on\\n\", pg_conf);\n| fputs(\"log_temp_files = 128kB\\n\", pg_conf);\n| fputs(\"max_prepared_transactions = 2\\n\", pg_conf);\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 15 Jun 2021 11:18:31 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "change logging defaults" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I propose to change some defaults:\n> log_autovacuum_min_duration = 0\n> log_checkpoints = on\n> log_lock_waits = on (and log_recovery_conflict_waits too?)\n> log_temp_files = 128kB\n\nWhy?\n\nBased on reports that I see, some quite large percentage of Postgres\nDBAs never look at the postmaster log at all. So making the log\nbulkier isn't something that will be useful to them. People who do\nwant these reports are certainly capable of turning them on.\n\n> Note that pg_regress does this:\n\nWhat we find useful for testing seems to me to be nearly\nunrelated to production needs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Jun 2021 13:03:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: change logging defaults" } ]
[ { "msg_contents": " From time to time, someone tells me that they've configured\nenable_nestloop=false on postgresql.conf, which is a pretty bad idea\nsince there are a significant number of cases where such plans are the\nonly reasonable way of executing some query. However, it's no great\nsecret that PostgreSQL's optimizer sometimes produces nested loops\nthat are very, very, very slow, generally because it has grossly\nunderestimated the cardinality of the inner side of the nested loop.\nThe frustration which users experience as a result of such events is\nunderstandable.\n\nI read https://15721.courses.cs.cmu.edu/spring2020/papers/22-costmodels/p204-leis.pdf\ntoday and found out that the authors of that paper did something a bit\nmore nuanced which, in their experiments, was very successful. It\nsounds like what they basically did is disabled unparameterized nested\nloops. They argue that such nested loops figure to gain very little as\ncompared with a hash join, but that they might turn out to lose a lot\nif the cardinality estimation is inaccurate, and they present\nexperimental results to back up those claims. One observation that the\npaper makes along the way is that every system they tested is more\nlikely to underestimate the cardinality of joins than to overestimate\nit, and that this tendency becomes more pronounced as the size of the\njoin planning problem increases. On reflection, I believe this matches\nmy experience, and it makes sense that it should be so, since it\noccasionally happens that the join selectivity estimate is essentially\nzero, and a bigger join problem is more likely to have at least one\nsuch case. On the other hand, the join selectivity estimate can never\nbe +infinity. Hence, it's more important in general for a database\nsystem to be resilient against underestimates than to be resilient\nagainst overestimates. 
Being less willing to choose unparameterized\nnested loops is one way to move in that direction.\n\nHow to do that is not very clear. One very simple thing we could do\nwould be to introduce enable_nestloop=parameterized or\nenable_parameterized_nestloop=false. That is a pretty blunt instrument\nbut the authors of the paper seem to have done that with positive\nresults, so maybe it's actually not a dumb idea. Another approach\nwould be to introduce a large fuzz factor for such nested loops e.g.\nkeep them only if the cost estimate is better than the comparable hash\njoin plan by at least a factor of N (or if no such plan is being\ngenerated for some reason). I'm not very confident that this would\nactually do what we want, though. In the problematic cases, a nested\nloop is likely to look extremely cheap, so just imagining that the\ncost might be higher is not very protective. Perhaps a better approach\nwould be something like: if the estimated number of inner rows is less\nthan 100, then re-estimate the cost of this approach and of the best\navailable hash join on the assumption that there are 100 inner rows.\nIf the hash join still wins, keep it; if it loses under that\nassumption, throw it out. I think it's likely that this approach would\neliminate a large number of highly risky nested loop plans, probably\neven with s/100/10/g, without causing many other problems (except\nperhaps increased planner CPU consumption ... but maybe that's not too\nbad).\n\nJust to be clear, I do understand that there are cases where no Hash\nJoin is possible, but anything we do here could be made to apply only\nwhen a hash join is in fact possible. We could also think about making\nthe point of comparison the best other plans of any sort rather than a\nhash join specifically, which feels a little more principled but might\nactually be worse. 
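As a toy sketch of the "assume at least 100 inner rows" re-costing described above (the cost functions and every constant here are invented for illustration; the real comparison would happen on planner Paths):

```python
# Toy sketch of the pessimistic re-costing idea: keep an unparameterized
# nested loop only if it still beats the hash join when we pretend the
# inner side has at least 100 rows. All numbers are invented.

PESSIMISTIC_INNER_ROWS = 100
OUTER_ROWS = 20.0

def nl_cost(inner_rows):
    # unparameterized nested loop: rescan the inner side once per outer row
    return OUTER_ROWS * (0.05 + 0.01 * inner_rows)

def hj_cost(inner_rows):
    # hash join: build the hash table once, then probe it per outer row
    return 2.0 + 0.01 * inner_rows + 0.005 * OUTER_ROWS

def keep_nestloop(est_inner_rows):
    if nl_cost(est_inner_rows) >= hj_cost(est_inner_rows):
        return False                      # loses even at face value
    if est_inner_rows < PESSIMISTIC_INNER_ROWS:
        rows = PESSIMISTIC_INNER_ROWS     # distrust the low estimate
        return nl_cost(rows) < hj_cost(rows)
    return True
```

Under plain costing the nested loop wins at the 1-row estimate, but it is discarded once the estimate is distrusted, which is the intended effect.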
When a Nested Loop is a stupid idea, it's stupid\nprecisely because the inner side is big and we could've avoided\nrecomputing it over and over by using a Hash Join instead, not because\nsome Merge Join based plan turns out to be better. I mean, it is\npossible that some Merge Join plan does turn out to be better, but\nthat's not rage-inducing in the same way. Nobody looks at a\ncomplicated join plan that happened to use a Nested Loop and says\n\"obviously, this is inferior to a merge join,\" or if they do, they're\nprobably full of hot air. But people look at complicated join plans\nthat happen to use a Nested Loop and say \"obviously, this is inferior\nto a hash join\" *all the time* and assuming the inner path is\nunparameterized, they are very often correct.\n\nThoughts? I'd be particularly curious to hear about any cases anyone\nknows about where an unparameterized nested loop and a hash join with\nbatches=1 are both possible and where the unparameterized nested loop\nis *much* cheaper than the hash join.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Jun 2021 13:09:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "disfavoring unparameterized nested loops" }, { "msg_contents": "On Tue, Jun 15, 2021 at 10:09 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> How to do that is not very clear. One very simple thing we could do\n> would be to introduce enable_nestloop=parameterized or\n> enable_parameterized_nestloop=false. That is a pretty blunt instrument\n> but the authors of the paper seem to have done that with positive\n> results, so maybe it's actually not a dumb idea.\n\nI think that it's probably a good idea as-is.\n\nSimple heuristics that are very frequently wrong when considered in a\nnaive way can work very well in practice. 
This seems to happen when\nthey capture some kind of extreme naturally occurring cost/benefit\nasymmetry -- especially one with fixed well understood costs and\nunlimited benefits (this business with unparameterized nestloop joins\nis about *avoiding* the inverse asymmetry, but that seems very\nsimilar). My go-to example of such an asymmetry is the rightmost page\nsplit heuristic of applying leaf fillfactor regardless of any of the\nother specifics; we effectively assume that all indexes are on columns\nwith ever-increasing values. Which is obviously wrong.\n\nWe're choosing between two alternatives (unparameterized nested loop\nvs hash join) that are really very similar when things go as expected,\nbut diverge sharply when there is a misestimation -- who wouldn't take\nthe \"conservative\" choice here?\n\nI guess that there is a hesitation to not introduce heuristics like\nthis because it doesn't fit into some larger framework that captures\nrisk -- it might be seen as an ugly special case. But isn't this\nalready actually kind of special, whether or not we officially think\nso?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Jun 2021 11:04:10 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Tue, Jun 15, 2021 at 2:04 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I guess that there is a hesitation to not introduce heuristics like\n> this because it doesn't fit into some larger framework that captures\n> risk -- it might be seen as an ugly special case. But isn't this\n> already actually kind of special, whether or not we officially think\n> so?\n\nYes, I think it is.
Reading the paper really helped me crystallize my\nthoughts about this, because when I've studied the problems myself, I\ncame, as you postulate here, to the conclusion that there's a lot of\nstuff the planner does where there is risk and uncertainty, and thus\nthat a general framework would be necessary to deal with it. But the\nfact that an academic researcher called this problem out as the only\none worth treating specially makes me think that perhaps it deserves\nspecial handling. In defense of that approach, note that this is a\ncase where we know both that the Nested Loop is risky and that Hash\nJoin is a similar alternative with probably similar cost. I am not\nsure there are any other cases where we can say quite so generally\nboth that a certain thing is risky and what we could do instead.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Jun 2021 15:31:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Tue, Jun 15, 2021 at 12:31 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Yes, I think it is. Reading the paper really helped me crystallize my\n> thoughts about this, because when I've studied the problems myself, I\n> came, as you postulate here, to the conclusion that there's a lot of\n> stuff the planner does where there is risk and uncertainty, and thus\n> that a general framework would be necessary to deal with it.\n\nIt is an example (perhaps the only example in the optimizer) of an\noasis of certainty in an ocean of uncertainty. As uncertain as\neverything is, we seemingly can make strong robust statements about\nthe relative merits of each strategy *in general*, just in this\nparticular instance. 
It's just not reasonable to make such a reckless\nchoice, no matter what your general risk tolerance is.\n\nGoetz Graefe is interviewed here, and goes into his philosophy on\nrobustness -- it seems really interesting to me:\n\nhttps://sigmodrecord.org/publications/sigmodRecord/2009/pdfs/05_Profiles_Graefe.pdf\n\n> In defense of that approach, note that this is a\n> case where we know both that the Nested Loop is risky and that Hash\n> Join is a similar alternative with probably similar cost. I am not\n> sure there are any other cases where we can say quite so generally\n> both that a certain thing is risky and what we could do instead.\n\nI tend to think of a hash join as like a nested loop join with an\ninner index scan where you build the index yourself, dynamically. That\nmight be why I find it easy to make this mental leap. In theory you\ncould do this by giving the nestloop join runtime smarts -- make it\nturn into a hash join adaptively. Like Graefe's G-Join design. That\nway you could do this in a theoretically pure way.\n\nI don't think that that's actually necessary just to deal with this\ncase -- it probably really is as simple as it seems. I point this out\nbecause perhaps it's useful to have that theoretical anchoring.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Jun 2021 13:00:08 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Wed, 16 Jun 2021 at 05:09, Robert Haas <robertmhaas@gmail.com> wrote:\n> How to do that is not very clear. One very simple thing we could do\n> would be to introduce enable_nestloop=parameterized or\n> enable_parameterized_nestloop=false. That is a pretty blunt instrument\n> but the authors of the paper seem to have done that with positive\n> results, so maybe it's actually not a dumb idea.\n\nIt's not great that people are having to use such blunt instruments to\nget the planner to behave. 
It might not be a terrible idea to provide\nthem with something a bit sharper as you suggest. The GUC idea is\ncertainly something that could be done without too much effort.\n\nThere was some talk of doing that in [1].\n\n> Another approach\n> would be to introduce a large fuzz factor for such nested loops e.g.\n> keep them only if the cost estimate is better than the comparable hash\n> join plan by at least a factor of N (or if no such plan is being\n> generated for some reason).\n\nIn my experience, the most common reason that the planner chooses\nnon-parameterized nested loops wrongly is when there's row\nunderestimation that says there's just going to be 1 row returned by\nsome set of joins. The problem often comes when some subsequent join\nis planned and the planner sees the given join rel only produces one\nrow. The cheapest join method we have to join 1 row is Nested Loop.\nSo the planner just sticks the 1-row join rel on the outer side\nthinking the executor will only need to scan the inner side of the\njoin once. When the outer row count blows up, then we end up scanning\nthat inner side many more times. The problem is compounded when you\nnest it a few joins deep\n\nMost of the time when I see that happen it's down to either the\nselectivity of some correlated base-quals being multiplied down to a\nnumber low enough that we clamp the estimate to be 1 row. The other\ncase is similar, but with join quals.\n\nIt seems to me that each time we multiply 2 selectivities together\nthat the risk of the resulting selectivity being out increases. The\nrisk is likely lower when we have some extended statistics which\nallows us to do fewer selectivity multiplications.\n\nFor that 1-row case, doing an unparameterized nested loop is only the\ncheapest join method by a tiny amount. It really wouldn't be much\nmore expensive to just put that single row into a hash table. 
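The 1-row clamp described above is easy to reproduce with toy numbers; independence of the quals is exactly the assumption that fails when they are correlated:

```python
# Multiplying per-qual selectivities as if they were independent can drive
# a row estimate to the 1-row floor, which is what licenses the risky
# nested loop. Numbers below are invented for illustration.

def estimate_rows(table_rows, selectivities):
    est = float(table_rows)
    for sel in selectivities:        # treats every qual as independent
        est *= sel
    return max(1, int(round(est)))   # planner-style clamp to >= 1 row
```

Three perfectly correlated quals of selectivity 0.001 on a million-row table multiply out to 0.001 rows and clamp to 1, while the true count in the fully correlated case would be the 1000 rows that a single qual predicts.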
If that\n1 estimated row turns out to be 10 actual rows then it's likely not\ntoo big a deal for the hash join code to accommodate the 9 additional\nrows.\n\nThis makes me think that it's insanely risky for the planner to be\npicking Nested Loop joins in this case. And that puts me down the path\nof thinking that this problem should be solved by having the planner\ntake into account risk as well as costs when generating plans.\n\nI don't really have any concrete ideas on that, but a basic idea that\nI have been considering is that a Path has a risk_factor field that is\ndecided somewhere like clauselist_selectivity(). Maybe the risk can go\nup by 1 each time we multiply an individual selectivity. (As of\nmaster, estimate_num_groups() allows the estimation to pass back some\nfurther information to the caller. I added that for Result Cache so I\ncould allow the caller to get visibility about when the estimate fell\nback on DEFAULT_NUM_DISTINCT. clauselist_selectivity() maybe could get\nsimilar treatment to allow the risk_factor or number of nstats_used to\nbe passed back). We'd then add a GUC, something like\nplanner_risk_adversion which could be set to anything from 0.0 to some\npositive number. During add_path() we could do the cost comparison\nlike: path1.cost * path1.risk_factor * (1.0 + planner_risk_adversion)\n< path2.cost * path2.risk_factor * (1.0 + planner_risk_adversion).\nThat way, if you set planner_risk_adversion to 0.0, then the planner\ndoes as it does today, i.e takes big risks.\n\nThe problem I have with this idea is that I really don't know how to\nproperly calculate what the risk_factor should be set to. It seems\neasy at first to set it to something that has the planner avoid these\nsilly 1-row estimate nested loop mistakes, but I think what we'd set\nthe risk_factor to would become much more important when more and more\nPath types start using it. 
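One way to read the add_path() comparison above, folded so that setting the proposed planner_risk_adversion GUC to 0.0 really does reduce to a plain cost comparison, is sketched below; the tuples and numbers are invented, not real planner fields:

```python
# Hypothetical risk-weighted path comparison. With aversion = 0.0 this is
# today's plain cost comparison; higher aversion penalizes paths whose
# estimates came from many multiplied selectivities (higher risk_factor).

def effective_cost(cost, risk_factor, aversion):
    return cost * (1.0 + risk_factor * aversion)

def path1_beats_path2(p1, p2, aversion=0.0):
    """Each path is an invented (cost, risk_factor) pair."""
    c1, r1 = p1
    c2, r2 = p2
    return effective_cost(c1, r1, aversion) < effective_cost(c2, r2, aversion)
```

For example, a nested loop at (cost 10, risk 4) beats a hash join at (cost 12, risk 1) when risk is ignored, but loses once a modest aversion is applied.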
So if we did this and just guessed the\nrisk_factor, that might be fine when only 1 of the paths being\ncompared had a non-zero risk_factor, but as soon as both paths have\none set, unless they're set to something sensible, then we just end up\ncomparing garbage costs to garbage costs.\n\nAnother common mistake the planner makes is around WHERE a = <value>\nORDER BY b LIMIT n; where there are separate indexes on (a) and (b).\nScanning the (b) index is pretty risky. All the \"a\" values you need\nmight be right at the end of the index. It seems safer to scan the (a)\nindex as we'd likely have statistics that tell us how many rows exist\nwith <value>. We don't have any stats that tell us where in the (b)\nindex are all the rows with a = <value>.\n\nI don't really think we should solve this by having nodeNestloop.c\nfall back on hashing when the going gets tough. Overloading our nodes\nthat way is not a sustainable thing to do. I'd rather see the\nexecutor throw the plan back at the planner along with some hints\nabout what was wrong with it. We could do that providing we've not\nsent anything back to the client yet.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAKJS1f8nsm-T0KMvGJz_bskUjQ%3DyGmGUUtUdAcFoEaZ_tuTXjA%40mail.gmail.com\n\n\n", "msg_date": "Wed, 16 Jun 2021 11:59:55 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Tue, Jun 15, 2021 at 5:00 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I don't really think we should solve this by having nodeNestloop.c\n> fall back on hashing when the going gets tough. Overloading our nodes\n> that way is not a sustainable thing to do. I'd rather see the\n> executor throw the plan back at the planner along with some hints\n> about what was wrong with it. 
We could do that providing we've not\n> sent anything back to the client yet.\n\nIt wasn't a serious suggestion -- it was just a way of framing the\nissue at hand that I thought might be useful.\n\nIf we did have something like that (which FWIW I think makes sense but\nis hard to do right in a general way) then it might be expected to\npreemptively refuse to even start down the road of using an\nunparameterized nestloop join very early, or even before execution\ntime. Such an adaptive join algorithm/node might be expected to have a\nhuge bias against this particular plan shape, that can be reduced to a\nsimple heuristic. But you can have the simple heuristic without\nneeding to build everything else.\n\nWhether or not we throw the plan back at the planner or \"really change\nour minds at execution time\" seems like a distinction without a\ndifference. Either way we're changing our minds about the plan based\non information that is fundamentally execution time information, not\nplan time information. Have I missed something?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Jun 2021 17:11:39 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Wed, 16 Jun 2021 at 12:11, Peter Geoghegan <pg@bowt.ie> wrote:\n> Whether or not we throw the plan back at the planner or \"really change\n> our minds at execution time\" seems like a distinction without a\n> difference.\n\nWhat is \"really change our minds at execution time\"? Is that switch\nto another plan without consulting the planner? If so what decides\nwhat that new plan should be? The planner is meant to be the expert in\nthat department. The new information might cause the join order to\ncompletely change. 
It might not be as simple as swapping a Nested Loop\nfor a Hash Join.\n\n> Either way we're changing our minds about the plan based\n> on information that is fundamentally execution time information, not\n> plan time information. Have I missed something?\n\nI don't really see why you think the number of rows that a given join\nmight produce is execution information. It's exactly the sort of\ninformation the planner needs to make a good plan. It's just that\ntoday we get that information from statistics. Plenty of other DBMSs\nmake decisions from sampling. FWIW, we already have a minimalist form\nof sampling in get_actual_variable_range().\n\nI'm just trying to highlight that I don't think overloading nodes is a\ngood path to go down. It's not a sustainable practice. It's a path\ntowards just having a single node that does everything.
It seems obvious to me that cardinality\nestimation is the main problem, and that the most promising solutions\nare all fundamentally about using execution time information to change\ncourse. Some problems with planning just can't be solved at plan time\n-- no model can ever be smart enough. Better to focus on making query\nexecution more robust, perhaps by totally changing the plan when it is\nclearly wrong. But also by using more techniques that we've\ntraditionally thought of as execution time techniques (e.g. role\nreversal in hash join). The distinction is blurry to me.\n\nThere are no doubt practical software engineering issues with this --\nseparation of concerns and whatnot. But it seems premature to go into\nthat now.\n\n> The new information might cause the join order to\n> completely change. It might not be as simple as swapping a Nested Loop\n> for a Hash Join.\n\nI agree that it might not be that simple at all. I think that Robert\nis saying that this is one case where it really does appear to be that\nsimple, and so we really can expect to benefit from a simple plan-time\nheuristic that works within the confines of the current model. Why\nwouldn't we just take that easy win, once the general idea has been\nvalidated some more? Why let the perfect be the enemy of the good?\n\nI have perhaps muddied the waters by wading into the more general\nquestion of robust execution, the inherent uncertainty with\ncardinality estimation, and so on. Robert really didn't seem to be\ntalking about that at all (though it is clearly related).\n\n> > Either way we're changing our minds about the plan based\n> > on information that is fundamentally execution time information, not\n> > plan time information. 
Have I missed something?\n>\n> I don't really see why you think the number of rows that a given join\n> might produce is execution information.\n\nIf we're 100% sure a join will produce at least n rows because we\nexecuted it (with the express intention of actually doing real query\nprocessing that returns rows to the client), and it already produced n\nrows, then what else could it be called? Why isn't it that simple?\n\n> It's exactly the sort of\n> information the planner needs to make a good plan. It's just that\n> today we get that information from statistics. Plenty of other DBMSs\n> make decisions from sampling.\n\n> FWIW, we do already have a minimalist\n> sampling already in get_actual_variable_range().\n\nI know, but that doesn't seem all that related -- it almost seems like\nthe opposite idea. It isn't the executor balking when it notices that\nthe plan is visibly wrong during execution, in some important way.\nIt's more like the planner using the executor to get information about\nan index that is well within the scope of what we think of as plan\ntime.\n\nTo some degree the distinction gets really blurred due to nodes like\nhash join, where some important individual decisions are delayed until\nexecution time already. It's really unclear when precisely it stops\nbeing that, and starts being more of a case of either partially or\nwholly replanning. I don't know how to talk about it without it being\nconfusing.\n\n> I'm just trying to highlight that I don't think overloading nodes is a\n> good path to go down. It's not a sustainable practice. It's a path\n> towards just having a single node that does everything. 
If your\n> suggestion was not serious then there's no point in discussing it\n> further.\n\nAs I said, it was a way of framing one particular issue that Robert is\nconcerned about.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Jun 2021 18:48:27 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Tue, Jun 15, 2021 at 5:00 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> Most of the time when I see that happen it's down to either the\n> selectivity of some correlated base-quals being multiplied down to a\n> number low enough that we clamp the estimate to be 1 row. The other\n> case is similar, but with join quals.\n>\n> It seems to me that each time we multiply 2 selectivities together\n> that the risk of the resulting selectivity being out increases. The\n> risk is likely lower when we have some extended statistics which\n> allows us to do fewer selectivity multiplications.\n\nIt seems important to distinguish between risk and uncertainty --\nthey're rather different things. The short version is that you can\nmodel risk but you cannot model uncertainty. This seems like a problem\nof uncertainty to me.\n\nThe example from the paper that Robert cited isn't interesting to me\nbecause it hints at a way of managing the uncertainty, exactly. It's\ninteresting because it seems to emphasize the user's exposure to the\nproblem, which is what really matters. Even if it was extremely\nunlikely that the user would have a problem, the downside of being\nwrong is still absurdly high, and the upside of being right is low\n(possibly even indistinguishable from zero). It's just not worth\nthinking about. Besides, we all know that selectivity estimates are\nvery often quite wrong without the user ever noticing. 
It's amazing\nthat database optimizers work as well as they do, really.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Jun 2021 20:08:00 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": ">\n> The problem I have with this idea is that I really don't know how to\n> properly calculate what the risk_factor should be set to. It seems\n> easy at first to set it to something that has the planner avoid these\n> silly 1-row estimate nested loop mistakes, but I think what we'd set\n> the risk_factor to would become much more important when more and more\n> Path types start using it. So if we did this and just guessed the\n> risk_factor, that might be fine when only 1 of the paths being\n> compared had a non-zero risk_factor, but as soon as both paths have\n> one set, unless they're set to something sensible, then we just end up\n> comparing garbage costs to garbage costs.\n\nRisk factor is the inverse of confidence on estimate, lesser\nconfidence, higher risk. If we associate confidence with the\nselectivity estimate, or computer confidence interval of the estimate\ninstead of a single number, we can associate risk factor with each\nestimate. When we combine estimates to calculate new estimates, we\nalso combine their confidences/confidence intervals. If my memory\nserves well, confidence intervals/confidences are calculated based on\nthe sample size and method used for estimation, so we should be able\nto compute those during ANALYZE.\n\nI have not come across many papers which leverage this idea. Googling\n\"selectivity estimation confidence interval\", does not yield many\npapers. Although I found [1] to be using a similar idea. 
So may be\nthere's not merit in this idea, thought theoretically it sounds fine\nto me.\n\n\n[1] https://pi3.informatik.uni-mannheim.de/~moer/Publications/vldb18_smpl_synop.pdf\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 18 Jun 2021 15:50:22 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Fri, Jun 18, 2021 at 6:20 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> I have not come across many papers which leverage this idea. Googling\n> \"selectivity estimation confidence interval\", does not yield many\n> papers. Although I found [1] to be using a similar idea. So may be\n> there's not merit in this idea, thought theoretically it sounds fine\n> to me.\n>\n>\n> [1]\nhttps://pi3.informatik.uni-mannheim.de/~moer/Publications/vldb18_smpl_synop.pdf\n\nWell, that paper's title shows it's a bit too far forward for us, since we\ndon't use samples during plan time (although that's a separate topic worth\nconsidering). From the references, however, this one gives some\nmathematical framing of the problem that lead to the thread subject,\nalthough I haven't read enough to see if we can get practical advice from\nit:\n\nY. E. Ioannidis and S. Christodoulakis. On the propagation of errors in the\nsize of join results.\nhttps://www.csd.uoc.gr/~hy460/pdf/p268-ioannidis.pdf\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Fri, Jun 18, 2021 at 6:20 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:> I have not come across many papers which leverage this idea. Googling> \"selectivity estimation confidence interval\", does not yield many> papers. Although I found [1] to be using a similar idea. 
So may be\n> there's not merit in this idea, thought theoretically it sounds fine\n> to me.\n>\n>\n> [1]\nhttps://pi3.informatik.uni-mannheim.de/~moer/Publications/vldb18_smpl_synop.pdf\n\nWell, that paper's title shows it's a bit too far forward for us, since we\ndon't use samples during plan time (although that's a separate topic worth\nconsidering). From the references, however, this one gives some\nmathematical framing of the problem that led to the thread subject,\nalthough I haven't read enough to see if we can get practical advice from\nit:\n\nY. E. Ioannidis and S. Christodoulakis. On the propagation of errors in the\nsize of join results.\nhttps://www.csd.uoc.gr/~hy460/pdf/p268-ioannidis.pdf\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n", "msg_date": "Fri, 18 Jun 2021 12:32:29 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Wed, 16 Jun 2021 at 15:08, Peter Geoghegan <pg@bowt.ie> wrote:\n> It seems important to distinguish between risk and uncertainty --\n> they're rather different things. The short version is that you can\n> model risk but you cannot model uncertainty. This seems like a problem\n> of uncertainty to me.\n\nYou might be right there. \"Uncertainty\" or \"Certainty\" seems more\nlike a value that clauselist_selectivity() would be able to calculate\nitself. It would just be up to the planner to determine what to do\nwith it.\n\nOne thing I thought about is that if the costing model was able to\nseparate out a cost of additional (unexpected) rows then it would be\neasier for add_path() to take into account how bad things might go if\nwe underestimate.\n\nFor example, an unparameterized Nested Loop that estimates the\nouter Path to have 1 row will incur an entire additional inner scan if\nthere are 2 rows. With Hash Join the cost is just an additional\nhashtable lookup, which is dead cheap.
I'm attracted to the idea of dealing with it as an estimation
problem and not needing to know about join types. Might have unintended
consequences, though.

Long term, it would be great to calculate something about the distribution
of cardinality estimates, so we can model risk in the estimates.

--
John Naylor
EDB: http://www.enterprisedb.com", "msg_date": "Mon, 21 Jun 2021 07:27:10 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "> >
> > Most of the time when I see that happen it's down to either the
> > selectivity of some correlated base-quals being multiplied down to a
> > number low enough that we clamp the estimate to be 1 row.   The other
> > case is similar, but with join quals.
> 
> If an estimate is lower than 1, that should be a red flag that Something Is
> Wrong. This is kind of a crazy idea, but what if we threw it back the other
> way by computing 1 / est , and clamping that result to 2 <= res < 10 (or
> 100 or something)? The theory is, the more impossibly low it is, the more
> wrong it is. I'm attracted to the idea of dealing with it as an estimation
> problem and not needing to know about join types. Might have unintended
> consequences, though.
> 
> Long term, it would be great to calculate something about the distribution
> of cardinality estimates, so we can model risk in the estimates.
> 

Hi,

Laurenz suggested clamping to 2 in this thread in 2017:

https://www.postgresql.org/message-id/1509611428.3268.5.camel%40cybertec.at

Having been the victim of this problem in the past, I like the risk
based approach to this. If the cost of being wrong in the estimate is
high, use a merge join instead. 
In every case that I have encountered,\nthat heuristic would have given the correct performant plan.\n\nRegards,\nKen\n\n\n", "msg_date": "Mon, 21 Jun 2021 09:15:41 -0500", "msg_from": "Kenneth Marshall <ktm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Mon, Jun 21, 2021 at 6:41 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> For example, in an unparameterized Nested Loop that estimates the\n> outer Path to have 1 row will cost an entire additional inner cost if\n> there are 2 rows. With Hash Join the cost is just an additional\n> hashtable lookup, which is dead cheap. I don't know exactly how\n> add_path() would weigh all that up, but it seems to me that I wouldn't\n> take the risk unless I was 100% certain that the Nested Loop's outer\n> Path would only return 1 row exactly, if there was any chance at all\n> it could return more, I'd be picking some other join method.\n\nIt seems like everyone agrees that it would be good to do something\nabout this problem, but the question is whether it's best to do\nsomething that tries to be general, or whether we should instead do\nsomething about this specific case. I favor the latter approach. Risk\nand uncertainty exist all over the place, but dealing with that in a\ngeneral way seems difficult, and maybe unnecessary. Addressing the\ncase of unparameterized nest loops specifically seems simpler, because\nit's easier to reason about what the alternatives are. Your last\nsentence here seems right on point to me.\n\nBasically, what you argue for there is disabling unparameterized\nnested loops entirely except when we can prove that the inner side\nwill never generate more than one row. But, that's almost never going\nto be something that we can prove. If the inner side is coming from a\ntable or sub-join, it can turn out to be big. 
As far as I can see, the\nonly way that this doesn't happen is if it's something like a subquery\nthat aggregates everything down to one row, or has LIMIT 1, but those\nare such special cases that I don't even think we should be worrying\nabout them.\n\nSo my counter-proposal is: how about if we split\nmerge_unsorted_outer() into two functions, one of which generates\nnested loops only based on parameterized paths and the other of which\ngenerates nested loops based only on unparameterized paths, and then\nrejigger add_paths_to_joinrel() so that we do the latter between the\nsteps that are currently number 5 and 6 and only if we haven't got any\nother paths yet? If somebody later comes up with logic for proving\nthat the inner side can never have more than 1 row, we can let this be\nrun in those cases as well. In the meantime, if somebody does\nsomething like a JOIN b ON a.x < b.x, we will still generate these\npaths because there's no other approach, or similarly if it's a.x =\nb.x but for some strange type that doesn't have a hash-joinable or\nmerge-joinable equality operator. But otherwise we omit those paths\nentirely.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Jun 2021 10:45:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Mon, Jun 21, 2021 at 7:45 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Jun 21, 2021 at 6:41 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > For example, in an unparameterized Nested Loop that estimates the\n> > outer Path to have 1 row will cost an entire additional inner cost if\n> > there are 2 rows. With Hash Join the cost is just an additional\n> > hashtable lookup, which is dead cheap. 
I don't know exactly how
> > add_path() would weigh all that up, but it seems to me that I wouldn't
> > take the risk unless I was 100% certain that the Nested Loop's outer
> > Path would only return 1 row exactly, if there was any chance at all
> > it could return more, I'd be picking some other join method.
>
> It seems like everyone agrees that it would be good to do something
> about this problem, but the question is whether it's best to do
> something that tries to be general, or whether we should instead do
> something about this specific case. I favor the latter approach.

I agree with your conclusion, but FWIW I am sympathetic to David's
view too. I certainly understand why he'd naturally want to define the
class of problems that are like this one, to understand what the
boundaries are.

The heuristic that has the optimizer flat out avoid an
unparameterized nested loop join is justified by the belief that
that's fundamentally reckless. Even though we all agree on that much,
I don't know when it stops being reckless and starts being \"too risky
for me, but not fundamentally reckless\". I think that that's worth
living with, but it isn't very satisfying.

> Risk
> and uncertainty exist all over the place, but dealing with that in a
> general way seems difficult, and maybe unnecessary. Addressing the
> case of unparameterized nest loops specifically seems simpler, because
> it's easier to reason about what the alternatives are. Your last
> sentence here seems right on point to me.

Right. Part of why this is a good idea is that the user is exposed to
so many individual risks and uncertainties. We cannot see any one risk
as existing in a vacuum. It is not the only risk the user will ever
take in the planner -- if it was then it might actually be okay to
allow unparameterized nested loop joins.

A bad unparameterized nested loop join plan has, in a sense, unknown
and unbounded cost/downside. 
But it is only very slightly faster than\na hash join, by a fixed well understood amount. With enough \"trials\"\nand on a long enough timeline, it will inevitably blow up and cause\nthe application to grind to a halt. It seems like no amount of fixed,\nbounded benefit from \"fast unparameterized nested loop joins\" could\npossibly make up for that. The life of Postgres users would be a lot\nbetter if bad plans were at least \"survivable events\".\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 21 Jun 2021 08:39:24 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> The heuristic that has the optimizer flat out avoids an\n> unparameterized nested loop join is justified by the belief that\n> that's fundamentally reckless. Even though we all agree on that much,\n> I don't know when it stops being reckless and starts being \"too risky\n> for me, but not fundamentally reckless\". I think that that's worth\n> living with, but it isn't very satisfying.\n\nThere are certainly cases where the optimizer can prove (in principle;\nit doesn't do so today) that a plan node will produce at most one row.\nThey're hardly uncommon either: an equality comparison on a unique\nkey, or a subquery with a simple aggregate function, come to mind.\n \nIn such cases, not only is this choice not reckless, but it's provably\nsuperior to a hash join. So in the end this gets back to the planning\nrisk factor that we keep circling around but nobody quite wants to\ntackle.\n\nI'd be a lot happier if this proposal were couched around some sort\nof estimate of the risk of the outer side producing more than the\nexpected number of rows. 
The arguments so far seem like fairly lame\nrationalizations for not putting forth the effort to do that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Jun 2021 11:55:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Mon, Jun 21, 2021 at 8:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> There are certainly cases where the optimizer can prove (in principle;\n> it doesn't do so today) that a plan node will produce at most one row.\n> They're hardly uncommon either: an equality comparison on a unique\n> key, or a subquery with a simple aggregate function, come to mind.\n\nThat sounds like it might be useful in general.\n\n> In such cases, not only is this choice not reckless, but it's provably\n> superior to a hash join. So in the end this gets back to the planning\n> risk factor that we keep circling around but nobody quite wants to\n> tackle.\n\nLet's assume for the sake of argument that we really have to have that\nadditional infrastructure to move forward with the idea. (I'm not sure\nif it's possible in principle to use infrastructure like that for some\nof the cases that Robert has in mind, but for now I'll assume that it\nis both possible and a practical necessity.)\n\nEven when I make this working assumption I don't see what it changes\nat a fundamental level. You've merely come up with a slightly more\nspecific definition of the class of plans that are \"reckless\". You've\nonly refined the original provisional definition of \"reckless\" to\nexclude specific \"clearly not reckless\" cases (I think). But the\ndefinition of \"reckless\" is no less squishy than what we started out\nwith.\n\n> I'd be a lot happier if this proposal were couched around some sort\n> of estimate of the risk of the outer side producing more than the\n> expected number of rows. 
The arguments so far seem like fairly lame\n> rationalizations for not putting forth the effort to do that.\n\nI'm not so sure that it is. The point isn't the risk, even if it could\nbe calculated. The point is the downsides of being wrong are huge and\npretty much unbounded, whereas the benefits of being right are tiny\nand bounded. It almost doesn't matter what the underlying\nprobabilities are.\n\nTo be clear I'm not arguing against modelling risk. I'm just not sure\nthat it's useful to think of this problem as truly a problem of risk.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 21 Jun 2021 09:31:14 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "\n\nOn 6/21/21 5:55 PM, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n>> The heuristic that has the optimizer flat out avoids an\n>> unparameterized nested loop join is justified by the belief that\n>> that's fundamentally reckless. Even though we all agree on that much,\n>> I don't know when it stops being reckless and starts being \"too risky\n>> for me, but not fundamentally reckless\". I think that that's worth\n>> living with, but it isn't very satisfying.\n> \n> There are certainly cases where the optimizer can prove (in principle;\n> it doesn't do so today) that a plan node will produce at most one row.\n> They're hardly uncommon either: an equality comparison on a unique\n> key, or a subquery with a simple aggregate function, come to mind.\n> \n> In such cases, not only is this choice not reckless, but it's provably\n> superior to a hash join. So in the end this gets back to the planning\n> risk factor that we keep circling around but nobody quite wants to\n> tackle.\n> \n\nAgreed.\n\n> I'd be a lot happier if this proposal were couched around some sort\n> of estimate of the risk of the outer side producing more than the\n> expected number of rows. 
The arguments so far seem like fairly lame
> rationalizations for not putting forth the effort to do that.

I'm not so sure that it is. The point isn't the risk, even if it could
be calculated. The point is the downsides of being wrong are huge and
pretty much unbounded, whereas the benefits of being right are tiny
and bounded. It almost doesn't matter what the underlying
probabilities are.

To be clear I'm not arguing against modelling risk. I'm just not sure
that it's useful to think of this problem as truly a problem of risk.

-- 
Peter Geoghegan


", "msg_date": "Mon, 21 Jun 2021 09:31:14 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "

On 6/21/21 5:55 PM, Tom Lane wrote:
> Peter Geoghegan <pg@bowt.ie> writes:
>> The heuristic that has the optimizer flat out avoids an
>> unparameterized nested loop join is justified by the belief that
>> that's fundamentally reckless. Even though we all agree on that much,
>> I don't know when it stops being reckless and starts being \"too risky
>> for me, but not fundamentally reckless\". I think that that's worth
>> living with, but it isn't very satisfying.
> 
> There are certainly cases where the optimizer can prove (in principle;
> it doesn't do so today) that a plan node will produce at most one row.
> They're hardly uncommon either: an equality comparison on a unique
> key, or a subquery with a simple aggregate function, come to mind.
> 
> In such cases, not only is this choice not reckless, but it's provably
> superior to a hash join. So in the end this gets back to the planning
> risk factor that we keep circling around but nobody quite wants to
> tackle.
> 

Agreed.

> I'd be a lot happier if this proposal were couched around some sort
> of estimate of the risk of the outer side producing more than the
> expected number of rows. The arguments so far seem like fairly lame
> rationalizations for not putting forth the effort to do that.
> 
I agree having such a measure would be helpful, but do you have an idea 
how it could be implemented?

I've been thinking about this a bit recently and searching for papers 
talking about this, but it's not clear to me how to calculate the 
risk (and propagate it through the plan) without making the whole cost 
evaluation way more complicated / expensive :-(

The \"traditional approach\" to quantifying risk would be confidence 
intervals, i.e. for each cardinality estimate \"e\" we'd determine a range 
[a,b] so that P(a <= e <= b) = p. So we could pick \"p\" depending on how 
\"robust\" the plan choice should be (say 0.9 for \"risky\" and 0.999 for 
\"safe\" plans) and calculate a,b. Or maybe we could calculate where the 
plan changes, and then we'd see if those \"break points\" are within the 
confidence interval. If not, great - we can assume the plan is stable, 
otherwise we'd need to consider the other plans too, somehow.

But what I'm not sure about is:

1) Now we're dealing with three cardinality estimates (the original \"e\" 
and the boundaries \"a\", \"b\"). So which one do we use to calculate cost 
and pass to upper parts of the plan?

2) The outer relation may be a complex join, so we'd need to combine the 
confidence intervals for the two input relations, somehow.

3) We'd need to know how to calculate the confidence intervals for most 
plan nodes, which I'm not sure we know how to do. 
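As a toy illustration of the break-point check sketched above — hypothetical cost curves and intervals, not existing planner code:

```python
def nested_loop_cost(outer_rows):
    # Illustrative: rescan a 1000-unit inner side once per outer row.
    return outer_rows * 1000.0

def hash_join_cost(outer_rows):
    # Illustrative: one-time 1200-unit build plus cheap per-row probes.
    return 1200.0 + outer_rows * 0.1

def plan_is_stable(a, b):
    """True if the same plan wins at both ends of the confidence interval
    [a, b] for the outer cardinality, i.e. no break point falls inside."""
    nl_wins_low = nested_loop_cost(a) <= hash_join_cost(a)
    nl_wins_high = nested_loop_cost(b) <= hash_join_cost(b)
    return nl_wins_low == nl_wins_high

print(plan_is_stable(1, 1))   # True: nested loop wins across the interval
print(plan_is_stable(1, 5))   # False: the break point (~1.2 rows) is inside
print(plan_is_stable(3, 50))  # True: hash join wins across the interval
```

Checking only the interval endpoints is enough here because these toy cost curves are monotone; real plan costs need not be that well-behaved, which is part of the difficulty being described.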
So it's not clear to \nme how to do this, which seems rather problematic because we need to \npropagate and combine those confidence intervals through the plan.\n\n\nBut maybe you have thought about some much simpler approach, addressing \nthis sufficiently well?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 21 Jun 2021 18:42:20 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Mon, Jun 21, 2021 at 8:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'd be a lot happier if this proposal were couched around some sort\n>> of estimate of the risk of the outer side producing more than the\n>> expected number of rows. The arguments so far seem like fairly lame\n>> rationalizations for not putting forth the effort to do that.\n\n> I'm not so sure that it is. The point isn't the risk, even if it could\n> be calculated. The point is the downsides of being wrong are huge and\n> pretty much unbounded, whereas the benefits of being right are tiny\n> and bounded. It almost doesn't matter what the underlying\n> probabilities are.\n\nYou're throwing around a lot of pejorative adjectives there without\nhaving bothered to quantify any of them. This sounds less like a sound\nargument to me than like a witch trial.\n\nReflecting for a bit on the ancient principle that \"the only good numbers\nin computer science are 0, 1, and N\", it seems to me that we could make\nan effort to classify RelOptInfos as provably empty, provably at most one\nrow, and others. (This would be a property of relations, not individual\npaths, so it needn't bloat struct Path.) 
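A sketch of what that 0/1/N classification could look like, using an invented Python stand-in for RelOptInfo (the real thing would be C code inside the planner, and the proof conditions here are only the examples mentioned above):

```python
from enum import Enum

class RowClass(Enum):
    EMPTY = 0        # provably returns no rows
    AT_MOST_ONE = 1  # provably returns at most one row
    MANY = 2         # the "N" category: no provable bound

def classify_rel(rel):
    """Classify a relation per the 0/1/N idea. 'rel' is a dict standing in
    for RelOptInfo; the keys are invented for this sketch."""
    if rel.get("provably_empty"):
        return RowClass.EMPTY
    # Equality on a unique key, or a subquery with a simple aggregate,
    # can return at most one row.
    if rel.get("unique_key_equality") or rel.get("simple_aggregate"):
        return RowClass.AT_MOST_ONE
    return RowClass.MANY

def allow_unparameterized_nestloop(outer_rel, have_alternative):
    # Hard rule: no unparameterized nested loop when the outer side falls
    # in the "N" category, unless there is no other join path at all.
    return (classify_rel(outer_rel) != RowClass.MANY) or not have_alternative

print(allow_unparameterized_nestloop({"unique_key_equality": True}, True))  # True
print(allow_unparameterized_nestloop({}, True))                             # False
print(allow_unparameterized_nestloop({}, False))                            # True
```

The last call shows the escape hatch: when no alternative join path exists, the nested loop is still admitted.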
We already understand about\nprovably-empty rels, so this is just generalizing that idea a little.\nOnce we know about that, at least for the really-common cases like unique\nkeys, I'd be okay with a hard rule that we don't consider unparameterized\nnestloop joins with an outer side that falls in the \"N\" category.\nUnless there's no alternative, of course.\n\nAnother thought that occurs to me here is that maybe we could get rid of\nthe enable_material knob in favor of forcing (or at least encouraging)\nmaterialization when the outer side isn't provably one row.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Jun 2021 12:52:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> I've been thinking about this a bit recently and searching for papers \n> talking about this, and but it's not clear to me how to calculate the \n> risk (and propagate it through the plan) without making the whole cost \n> evaluation way more complicated / expensive :-(\n\nYeah, a truly complete approach using confidence intervals or the\nlike seems frighteningly complicated.\n\n> But maybe you have thought about some much simpler approach, addressing \n> this sufficiently well?\n\nSee my nearby response to Peter. The main case that's distressing me\nis the possibility of forcing a hash join even when the outer side\nis obviously only one row. If we could avoid that, at least for\nlarge values of \"obvious\", I'd be far more comfortable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Jun 2021 13:01:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Mon, Jun 21, 2021 at 9:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I'm not so sure that it is. 
The point isn't the risk, even if it could
> > be calculated. The point is the downsides of being wrong are huge and
> > pretty much unbounded, whereas the benefits of being right are tiny
> > and bounded. It almost doesn't matter what the underlying
> > probabilities are.
>
> You're throwing around a lot of pejorative adjectives there without
> having bothered to quantify any of them. This sounds less like a sound
> argument to me than like a witch trial.

I'm not sure what you mean by pejorative. Isn't what I said about
downsides and upsides pretty accurate?

> Reflecting for a bit on the ancient principle that \"the only good numbers
> in computer science are 0, 1, and N\", it seems to me that we could make
> an effort to classify RelOptInfos as provably empty, provably at most one
> row, and others. (This would be a property of relations, not individual
> paths, so it needn't bloat struct Path.) We already understand about
> provably-empty rels, so this is just generalizing that idea a little.

It sounds like you're concerned about properly distinguishing between:

1. Cases where the only non-reckless choice is a hash join over an
unparameterized nested loop join

2. Cases that look like that at first, but don't really have that
quality on closer examination.

This seems like a reasonable concern.

> Once we know about that, at least for the really-common cases like unique
> keys, I'd be okay with a hard rule that we don't consider unparameterized
> nestloop joins with an outer side that falls in the \"N\" category.
> Unless there's no alternative, of course.

I thought that you were arguing against the premise itself. It's now
clear that you weren't, though.

I don't have an opinion for or against bringing the provably-empty
rels stuff into the picture. 
At least not right now.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 21 Jun 2021 10:11:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Mon, Jun 21, 2021 at 11:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> There are certainly cases where the optimizer can prove (in principle;\n> it doesn't do so today) that a plan node will produce at most one row.\n> They're hardly uncommon either: an equality comparison on a unique\n> key, or a subquery with a simple aggregate function, come to mind.\n\nHmm, maybe I need to see an example of the sort of plan shape that you\nhave in mind. To me it feels like a comparison on a unique key ought\nto use a *parameterized* nested loop. And it's also not clear to me\nwhy a nested loop is actually better in a case like this. If the\nnested loop iterates more than once because there are more rows on the\nouter side, then you don't want to have something on the inner side\nthat might be expensive, and either an aggregate or an unparameterized\nsearch for a unique value are potentially quite expensive. Now if you\nput a materialize node on top of the inner side, then you don't have\nto worry about that problem, but how much are you saving at that point\nvs. just doing a hash join?\n\n> I'd be a lot happier if this proposal were couched around some sort\n> of estimate of the risk of the outer side producing more than the\n> expected number of rows. The arguments so far seem like fairly lame\n> rationalizations for not putting forth the effort to do that.\n\nI don't understand how to generate a risk assessment or what we ought\nto do with it if we had one. I don't even understand what units we\nwould use. We measure costs using abstract cost units, but those\nabstract cost units are intended to be a proxy for runtime. 
If it's\nnot the case that a plan that runs for longer has a higher cost, then\nsomething's wrong with the costing model or the settings. In the case\nof risk, the whole thing seems totally subjective. We're talking about\nthe risk that our estimate is bogus, but how do we estimate the risk\nthat we don't know how to estimate? Given quals (x + 0) = x, x = some\nMCV, and x = some non-MCV, we can say that we're most likely to be\nwrong about the first one and least likely to be wrong about the\nsecond one, but how much more likely? I don't know how you can decide\nthat, even in principle. We can also say that an unparameterized\nnested loop is more risky than some other plan because it could turn\nout to be crazy expensive, but is that more or less risky than\nscanning the index on b as a way to implement SELECT * FROM foo WHERE\na = 1 ORDER BY b LIMIT 1? How much more risky, and why?\n\nAnd then, even supposing we had a risk metric for every path, what\nexactly would we do with it? Suppose path A is cheaper than path B,\nbut also more risky. Which should we keep? We could keep both, but\nthat seems to be just kicking the can down the road. If plan B is\nlikely to blow up in our face, we should probably just get rid of it,\nor not even generate it in the first place. Even if we do keep both,\nat some point we're going to have to make a cost-vs-risk tradeoff, and\nI don't see how to do that intelligently, because the point is\nprecisely that if the risk is high, the cost number might be totally\nwrong. If we know that plan A is cheaper than plan B, we should choose\nplan A. But if all we know is that plan A would be cheaper than plan B\nif our estimate of the cost were correct, but also that it probably\nisn't, then what we actually know is nothing. We have no principled\nbasis for deciding anything based on cost unless we're reasonably\nconfident that the cost estimate is pretty good. 
So AFAICT the only\nprincipled strategy here is to throw away high risk paths as early as\nwe possibly can. What am I missing?\n\nThe other thing is - the risk of a particular path doesn't matter in\nan absolute sense, only a relative one. In the particular case I'm on\nabout here, we *know* there's a less-risky alternative. We don't need\nto quantify the risk to know which of the two options has more. In\nmany other cases, the risk is irreducible e.g. a default estimate\ncould be totally bogus, but switching paths is of no help in getting\nrid of it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Jun 2021 13:14:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Mon, Jun 21, 2021 at 1:11 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, Jun 21, 2021 at 9:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > I'm not so sure that it is. The point isn't the risk, even if it could\n> > > be calculated. The point is the downsides of being wrong are huge and\n> > > pretty much unbounded, whereas the benefits of being right are tiny\n> > > and bounded. It almost doesn't matter what the underlying\n> > > probabilities are.\n> >\n> > You're throwing around a lot of pejorative adjectives there without\n> > having bothered to quantify any of them. This sounds less like a sound\n> > argument to me than like a witch trial.\n>\n> I'm not sure what you mean by pejorative. Isn't what I said about\n> downsides and upsides pretty accurate?\n\nYeah, I don't see why Peter's characterization deserves to be labelled\nas pejorative here. A hash join or merge join or parameterized nested\nloop can turn out to be slower than some other algorithm, but all of\nthose approaches have some component that tends to make the asymptotic\ncost less than the product of the sizes of the inputs. 
I don't think\nthat's true in absolutely every case; for example, if a merge join has\nevery row duplicated on both sides, it will have to scan every inner\ntuple once per outer tuple, just like a nested loop, and the other\nalgorithms also are going to degrade toward O(NM) performance in the\nface of many duplicates. Also, a hash join can be pretty close to that\nif it needs a shazillion batches. But in normal cases, any algorithm\nother than an unparameterized nested loop tends to read each input\ntuple on each side ONCE, so the cost is more like the sum of the input\nsizes than the product. And there's nothing pejorative in saying that\nN + M can be less than N * M by an unbounded amount. That's just the\nfacts.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Jun 2021 13:35:42 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jun 21, 2021 at 11:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> There are certainly cases where the optimizer can prove (in principle;\n>> it doesn't do so today) that a plan node will produce at most one row.\n>> They're hardly uncommon either: an equality comparison on a unique\n>> key, or a subquery with a simple aggregate function, come to mind.\n\n> Hmm, maybe I need to see an example of the sort of plan shape that you\n> have in mind. To me it feels like a comparison on a unique key ought\n> to use a *parameterized* nested loop.\n\nThe unique-key comparison would be involved in the outer scan in\nthe cases I'm thinking of. 
As an example,\n\n\tselect * from t1, t2 where t1.id = constant and t1.x op t2.y;\n\nwhere I'm not assuming much about the properties of \"op\".\nThis could be amenable to a plan like\n\n\tNestLoop Join\n\t Join Filter: t1.x op t2.y\n\t -> Index Scan on t1_pkey\n\t Index Cond: t1.id = constant\n\t -> Seq Scan on t2\n\nand if we can detect that the pkey indexscan produces just one row,\nthis is very possibly the best available plan. Nor do I think this\nis an unusual situation that we can just ignore.\n\nBTW, it strikes me that there might be an additional consideration\nhere: did parameterization actually help anything? That is, the\nproposed rule wants to reject the above but allow\n\n\tNestLoop Join\n\t -> Index Scan on t1_pkey\n\t Index Cond: t1.id = constant\n\t -> Seq Scan on t2\n\t Filter: t1.x op t2.y\n\neven though the latter isn't meaningfully better. It's possible\nthis won't arise because we don't consider parameterized paths\nexcept where the parameter is used in an indexqual or the like,\nbut I'm not confident of that. See in particular reparameterize_path\nand friends before you assert there's no such issue. So we might\nneed to distinguish essential from incidental parameterization,\nor something like that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Jun 2021 13:38:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "I wrote:\n> ... As an example,\n> \tselect * from t1, t2 where t1.id = constant and t1.x op t2.y;\n> where I'm not assuming much about the properties of \"op\".\n> This could be amenable to a plan like\n> \tNestLoop Join\n> \t Join Filter: t1.x op t2.y\n> \t -> Index Scan on t1_pkey\n> \t Index Cond: t1.id = constant\n> \t -> Seq Scan on t2\n\nAlso, to enlarge on that example: if \"op\" isn't hashable then the\noriginal argument is moot. 
However, it'd still be useful to know\nif the outer scan is sure to return no more than one row, as that\ncould inform the choice whether to plaster a Materialize node on\nthe inner scan.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Jun 2021 14:26:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Mon, Jun 21, 2021 at 1:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Hmm, maybe I need to see an example of the sort of plan shape that you\n> > have in mind. To me it feels like a comparison on a unique key ought\n> > to use a *parameterized* nested loop.\n>\n> The unique-key comparison would be involved in the outer scan in\n> the cases I'm thinking of. As an example,\n>\n> select * from t1, t2 where t1.id = constant and t1.x op t2.y;\n>\n> where I'm not assuming much about the properties of \"op\".\n> This could be amenable to a plan like\n>\n> NestLoop Join\n> Join Filter: t1.x op t2.y\n> -> Index Scan on t1_pkey\n> Index Cond: t1.id = constant\n> -> Seq Scan on t2\n>\n> and if we can detect that the pkey indexscan produces just one row,\n> this is very possibly the best available plan.\n\nHmm, yeah, I guess that's possible. How much do you think this loses\nas compared with:\n\nHash Join\nHash Cond: t1.x op t2.y\n-> Seq Scan on t2\n-> Hash\n -> Index Scan on t1_pkey\n\n(If the operator is not hashable then this plan is impractical, but in\nsuch a case the question of preferring the hash join over the nested\nloop doesn't arise anyway.)\n\n> BTW, it strikes me that there might be an additional consideration\n> here: did parameterization actually help anything? That is, the\n> proposed rule wants to reject the above but allow\n>\n> NestLoop Join\n> -> Index Scan on t1_pkey\n> Index Cond: t1.id = constant\n> -> Seq Scan on t2\n> Filter: t1.x op t2.y\n>\n> even though the latter isn't meaningfully better. 
It's possible\n> this won't arise because we don't consider parameterized paths\n> except where the parameter is used in an indexqual or the like,\n> but I'm not confident of that. See in particular reparameterize_path\n> and friends before you assert there's no such issue. So we might\n> need to distinguish essential from incidental parameterization,\n> or something like that.\n\nHmm, perhaps. I think it won't happen in the normal cases, but I can't\ncompletely rule out the possibility that there are corner cases where\nit does.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Jun 2021 14:49:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jun 21, 2021 at 1:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> NestLoop Join\n>> Join Filter: t1.x op t2.y\n>> -> Index Scan on t1_pkey\n>> Index Cond: t1.id = constant\n>> -> Seq Scan on t2\n\n> Hmm, yeah, I guess that's possible. How much do you think this loses\n> as compared with:\n\n> Hash Join\n> Hash Cond: t1.x op t2.y\n> -> Seq Scan on t2\n> -> Hash\n> -> Index Scan on t1_pkey\n\nHard to say. The hash overhead might or might not pay for itself.\nIf the equality operator proper is expensive and we get to avoid\napplying it at most t2 rows, then this might be an OK alternative;\notherwise not so much.\n\nIn any case, the former looks like plans that we generate now,\nthe second not. 
Do you really want to field a lot of questions\nabout why we suddenly changed to a not-clearly-superior plan?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Jun 2021 15:03:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Mon, Jun 21, 2021 at 10:14 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Hmm, maybe I need to see an example of the sort of plan shape that you\n> have in mind. To me it feels like a comparison on a unique key ought\n> to use a *parameterized* nested loop. And it's also not clear to me\n> why a nested loop is actually better in a case like this. If the\n> nested loop iterates more than once because there are more rows on the\n> outer side, then you don't want to have something on the inner side\n> that might be expensive, and either an aggregate or an unparameterized\n> search for a unique value are potentially quite expensive. Now if you\n> put a materialize node on top of the inner side, then you don't have\n> to worry about that problem, but how much are you saving at that point\n> vs. just doing a hash join?\n\nI suspected that that was true, but even that doesn't seem like the\nreally important thing. While it may be true that the simple heuristic\ncan't be quite as simple as we'd hoped at first, ISTM that this is\nultimately not much of a problem. The basic fact remains: some more or\nless simple heuristic makes perfect sense, and should be adapted.\n\nThis conclusion is counterintuitive because it's addressing a very\ncomplicated problem with a very simple solution. However, if we lived\nin a world where things that sound too good to be true always turned\nout to be false, we'd also live in a world where optimizers were\ncompletely impractical and useless. 
Optimizers have that quality\nalready, whether or not we officially acknowledge it.\n\n> I don't understand how to generate a risk assessment or what we ought\n> to do with it if we had one. I don't even understand what units we\n> would use. We measure costs using abstract cost units, but those\n> abstract cost units are intended to be a proxy for runtime. If it's\n> not the case that a plan that runs for longer has a higher cost, then\n> something's wrong with the costing model or the settings. In the case\n> of risk, the whole thing seems totally subjective. We're talking about\n> the risk that our estimate is bogus, but how do we estimate the risk\n> that we don't know how to estimate?\n\nClearly we need a risk estimate for our risk estimate!\n\n> The other thing is - the risk of a particular path doesn't matter in\n> an absolute sense, only a relative one. In the particular case I'm on\n> about here, we *know* there's a less-risky alternative.\n\nExactly! This, a thousand times.\n\nThis reminds me of how people behave in the real world. In the real\nworld people deal with this without too much difficulty. Everything is\nsituational and based on immediate trade-offs, with plenty of\nuncertainty at every step. For example, if you think that there is\neven a tiny chance of a piece of fruit being poisonous, you don't eat\nthe piece of fruit -- better to wait until lunchtime. This is one of\nthe *easiest* decisions I can think of, despite the uncertainty.\n(Except perhaps if you happen to be in danger of dying of starvation,\nin which case it might be a very different story. 
And so on.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 21 Jun 2021 13:42:12 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Mon, Jun 21, 2021 at 10:14 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> The other thing is - the risk of a particular path doesn't matter in\n>> an absolute sense, only a relative one. In the particular case I'm on\n>> about here, we *know* there's a less-risky alternative.\n\n> Exactly! This, a thousand times.\n\nThis is a striking oversimplification.\n\nYou're ignoring the fact that the plan shape we generate now is in fact\n*optimal*, and easily proven to be so, in some very common cases. I don't\nthink the people whose use-cases get worse are going to be mollified by\nthe argument that you reduced their risk, when there is provably no risk.\nObviously the people whose use-cases are currently hitting the wrong end\nof the risk will be happy with any change whatever, but those may not be\nthe same people.\n\nI'm willing to take some flak if there's not an easy proof that the outer\nscan is single-row, but I don't think we should just up and change it\nfor cases where there is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Jun 2021 16:52:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Mon, Jun 21, 2021 at 1:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> You're ignoring the fact that the plan shape we generate now is in fact\n> *optimal*, and easily proven to be so, in some very common cases.\n\nAs I've said I don't reject the idea that there is room for\ndisagreement on the specifics. For example perhaps it'll turn out that\nonly a restricted subset of the cases that Robert originally had in\nmind will truly turn out to work as well as hoped. 
But that just seems\nlike a case of Robert refining a very preliminary proposal. I\nabsolutely expect there to be some need to iron out the wrinkles.\n\n> I don't\n> think the people whose use-cases get worse are going to be mollified by\n> the argument that you reduced their risk, when there is provably no risk.\n\nBy definition what we're doing here is throwing away slightly cheaper\nplans when the potential downside is much higher than the potential\nupside of choosing a reasonable alternative. I don't think that the\ndownside is particularly likely. In fact I believe that it's fairly\nunlikely in general. This is an imperfect trade-off, at least in\ntheory. I fully own that.\n\n> I'm willing to take some flak if there's not an easy proof that the outer\n> scan is single-row, but I don't think we should just up and change it\n> for cases where there is.\n\nSeems reasonable to me.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 21 Jun 2021 14:26:33 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Mon, Jun 21, 2021 at 12:52:39PM -0400, Tom Lane wrote:\n> You're throwing around a lot of pejorative adjectives there without\n> having bothered to quantify any of them. This sounds less like a sound\n> argument to me than like a witch trial.\n>\n> Reflecting for a bit on the ancient principle that \"the only good numbers\n> in computer science are 0, 1, and N\", it seems to me that we could make\n> an effort to classify RelOptInfos as provably empty, provably at most one\n> row, and others. (This would be a property of relations, not individual\n> paths, so it needn't bloat struct Path.) 
We already understand about\n> provably-empty rels, so this is just generalizing that idea a little.\n> Once we know about that, at least for the really-common cases like unique\n> keys, I'd be okay with a hard rule that we don't consider unparameterized\n> nestloop joins with an outer side that falls in the \"N\" category.\n> Unless there's no alternative, of course.\n> \n> Another thought that occurs to me here is that maybe we could get rid of\n> the enable_material knob in favor of forcing (or at least encouraging)\n> materialization when the outer side isn't provably one row.\n\nThere were a lot of interesting ideas in this thread and I want to\nanalyze some of them. First, there is the common assumption (not\nstated) that over-estimating by 5% and underestimating by 5% cause the\nsame harm, which is clearly false. If I go to a restaurant and estimate\nthe bill to be 5% higher or 5% lower, assuming I have sufficient funds,\nunder or over estimating is probably fine. If I am driving and\nunder-estimate the traction of my tires, I am probably fine, but if I\nover-estimate their traction by 5%, I might crash.\n\nCloser to home, Postgres is more tolerant of memory usage\nunder-estimation than over-estimation:\n\n\thttps://momjian.us/main/blogs/pgblog/2018.html#December_7_2018\n\nWhat I hear Robert saying is that unparameterized nested loops are much\nmore sensitive to misestimation than hash joins, and only slightly\nfaster than hash joins when they use accurate row counts, so we might\nwant to avoid them by default. Tom is saying that if we know the outer\nside will have zero or one row, we should still do unparameterized\nnested loops because those are not more sensitive to misestimation than\nhash joins, and slightly faster.\n\nIf that is accurate, I think the big question is how common are cases\nwhere the outer side can't be proven to have zero or one row and nested\nloops are enough of a win to risk its greater sensitivity to\nmisestimation. 
If it is uncommon, seems we could just code the\noptimizer to use hash joins in those cases without a user-visible knob,\nbeyond the knob that already turns off nested loop joins.\n\nPeter's comment about having nodes in the executor that adjust to the\nrow counts it finds is interesting, and eventually might be necessary\nonce we are sure we have gone as far as we can without it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 21 Jun 2021 19:51:19 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Mon, Jun 21, 2021 at 4:51 PM Bruce Momjian <bruce@momjian.us> wrote:\n> There were a lot of interesting ideas in this thread and I want to\n> analyze some of them. First, there is the common assumption (not\n> stated) that over-estimating by 5% and underestimating by 5% cause the\n> same harm, which is clearly false. If I go to a restaurant and estimate\n> the bill to be 5% higher or %5 lower, assuming I have sufficient funds,\n> under or over estimating is probably fine. If I am driving and\n> under-estimate the traction of my tires, I am probably fine, but if I\n> over-estimate their traction by 5%, I might crash.\n\nMy favorite analogy is the home insurance one:\n\nIt might make sense to buy home insurance because losing one's home\n(say through fire) is a loss that usually just cannot be tolerated --\nyou are literally ruined. We can debate how likely it is to happen,\nbut in the end it's not so unlikely that it can't be ruled out. At the\nsame time I may be completely unwilling to buy insurance for personal\nelectronic devices. I can afford to replace all of them if I truly\nhave to. And the chances of all of them breaking or being stolen on\nthe same day is remote (unless my home burns down!). 
If I drop my cell\nphone and crack the screen, I'll be annoyed, but it's certainly not\nthe end of the world.\n\nThis behavior will make perfect sense to most people. But it doesn't\nscale up or down. I have quite a few electronic devices, but only a\nsingle home, so technically I'm taking risks way more often than I am\nplaying it safe here. Am I risk tolerant when it comes to insurance?\nConservative?\n\nI myself don't think that it is sensible to apply either term here.\nIt's easier to just look at the specifics. A home is a pretty\nimportant thing to almost everybody; we can afford to treat it as a\nspecial case.\n\n> If that is accurate, I think the big question is how common are cases\n> where the outer side can't be proven to have zero or one row and nested\n> loops are enough of a win to risk its greater sensitivity to\n> misestimation. If it is uncommon, seems we could just code the\n> optimizer to use hash joins in those cases without a user-visible knob,\n> beyond the knob that already turns off nested loop joins.\n\nI think it's possible that Robert's proposal will lead to very\nslightly slower plans in the vast majority of cases that are affected,\nwhile still being a very good idea. Why should insurance be 100% free,\nthough? Maybe it can be in some cases where we get lucky, but why\nshould that be the starting point? It just has to be very cheap\nrelative to what we do today for us to come out ahead, certainly, but\nthat seems quite possible in at least this case.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 21 Jun 2021 17:25:31 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "\n\nOn 6/22/21 2:25 AM, Peter Geoghegan wrote:\n> On Mon, Jun 21, 2021 at 4:51 PM Bruce Momjian <bruce@momjian.us> wrote:\n>> There were a lot of interesting ideas in this thread and I want to\n>> analyze some of them. 
First, there is the common assumption (not\n>> stated) that over-estimating by 5% and underestimating by 5% cause the\n>> same harm, which is clearly false. If I go to a restaurant and estimate\n>> the bill to be 5% higher or %5 lower, assuming I have sufficient funds,\n>> under or over estimating is probably fine. If I am driving and\n>> under-estimate the traction of my tires, I am probably fine, but if I\n>> over-estimate their traction by 5%, I might crash.\n> \n> My favorite analogy is the home insurance one:\n> \n> It might make sense to buy home insurance because losing one's home\n> (say through fire) is a loss that usually just cannot be tolerated --\n> you are literally ruined. We can debate how likely it is to happen,\n> but in the end it's not so unlikely that it can't be ruled out. At the\n> same time I may be completely unwilling to buy insurance for personal\n> electronic devices. I can afford to replace all of them if I truly\n> have to. And the chances of all of them breaking or being stolen on\n> the same day is remote (unless my home burns down!). If I drop my cell\n> phone and crack the screen, I'll be annoyed, but it's certainly not\n> the end of the world.\n> \n> This behavior will make perfect sense to most people. But it doesn't\n> scale up or down. I have quite a few electronic devices, but only a\n> single home, so technically I'm taking risks way more often than I am\n> playing it safe here. Am I risk tolerant when it comes to insurance?\n> Conservative?\n> \n> I myself don't think that it is sensible to apply either term here.\n> It's easier to just look at the specifics. A home is a pretty\n> important thing to almost everybody; we can afford to treat it as a\n> special case.\n> \n>> If that is accurate, I think the big question is how common are cases\n>> where the outer side can't be proven to have zero or one row and nested\n>> loops are enough of a win to risk its greater sensitivity to\n>> misestimation. 
If it is uncommon, seems we could just code the\n>> optimizer to use hash joins in those cases without a user-visible knob,\n>> beyond the knob that already turns off nested loop joins.\n> \n> I think it's possible that Robert's proposal will lead to very\n> slightly slower plans in the vast majority of cases that are affected,\n> while still being a very good idea. Why should insurance be 100% free,\n> though? Maybe it can be in some cases where we get lucky, but why\n> should that be the starting point? It just has to be very cheap\n> relative to what we do today for us to come out ahead, certainly, but\n> that seems quite possible in at least this case.\n> \n\nYeah, I like the insurance analogy - it gets to the crux of the problem,\nbecause insurance is pretty much exactly about managing risk. But making\neverything slower will be a hard sell, because the vast majority of\nworkloads already running on Postgres don't have this issue at all, so\nfor them it's not worth the expense. Following the insurance analogy,\nselling tornado insurance in Europe is mostly pointless.\n\nInsurance is also about personal preference / risk tolerance. Maybe I'm\nfine with accepting the risk that my house burns down, or whatever ...\n\nAnd the lack of data also plays a role - the insurance company will ask\nfor higher rates when it does not have enough accurate data about the\nphenomenon, or when there's a lot of unknowns. Maybe this would allow\nsome basic measure of uncertainty, based on the number and type of\nrestrictions, joins, etc. The more restrictions we have, the less\ncertain the estimates are. 
Some conditions are estimated less\naccurately, and using default estimates makes it much less accurate.\n\nSo maybe some fairly rough measure of uncertainty might work, and the\nuser might specify how much risk it's willing to tolerate.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 22 Jun 2021 11:53:26 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Mon, Jun 21, 2021 at 4:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm willing to take some flak if there's not an easy proof that the outer\n> scan is single-row, but I don't think we should just up and change it\n> for cases where there is.\n\nI think that's a reasonable request. I'm not sure that I believe it's\n100% necessary, but it's certainly an improvement on a technical\nlevel, and given that the proposed change could impact quite a lot of\nplans, it's fair to want to see some effort being put into mitigating\nthe possible downsides. Now, I'm not sure when I might have time to\nactually try to do the work, which kind of sucks, but that's how it\ngoes sometimes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Jun 2021 07:13:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Tue, Jun 22, 2021 at 2:53 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Yeah, I like the insurance analogy - it gets to the crux of the problem,\n> because insurance is pretty much exactly about managing risk.\n\nThe user's exposure to harm is what truly matters. I admit that that's\nvery hard to quantify, but we should at least try to do so.\n\nWe sometimes think about a plan that is 10x slower as if it's\ninfinitely slow, or might as well be. 
But it's usually not like that\n-- it is generally meaningfully much better than the plan being 100x\nslower, which is itself sometimes appreciably better than 1000x\nslower. And besides, users often don't get anything like the optimal\nplan, even on what they would consider to be a good day (which is most\ndays). So maybe 10x slower is actually the baseline good case already,\nwithout anybody knowing it. Most individual queries are not executed\nvery often, even on the busiest databases. The extremes really do\nmatter a lot.\n\nIf a web app or OLTP query is ~10x slower than optimal then it might\nbe the practical equivalent of an outage that affects the query alone\n(i.e. \"infinitely slow\") -- but probably not. I think that it is worth\npaying more than nothing to avoid plans that are so far from optimal\nthat they might as well take forever to execute. This is not\nmeaningfully different from a database outage affecting one particular\nquery. It kind of is in a category of its own that surpasses \"slow\nplan\", albeit one that is very difficult or impossible to define\nformally.\n\nThere may be a huge amount of variation in risk tolerance among\nbasically reasonable people. For example, if somebody chooses to\nengage in some kind of extreme sport, to me it seems understandable.\nIt's just not my cup of tea. Whereas if somebody chooses to never wear\na seatbelt while driving, then to me they're simply behaving\nfoolishly. They're not willing to incur the tiniest inconvenience in\norder to get a huge reduction in potential harm -- including a very\nreal risk of approximately the worst thing that can happen to you.\nSure, refusing to wear a seatbelt can theoretically be classified as\njust another point on the risk tolerance spectrum, but that seems\nutterly contrived to me. 
Some things (maybe not that many) really are\nlike that, or can at least be assumed to work that way as a practical\nmatter.\n\n> But making\n> everything slower will be a hard sell, because wast majority of\n> workloads already running on Postgres don't have this issue at all, so\n> for them it's not worth the expense.\n\nI think that we're accepting too much risk here. But I bet it's also\ntrue that we're not taking enough risk in other areas. That was really\nmy point with the insurance analogy -- we can afford to take lots of\nindividual risks as long as they don't increase our exposure to truly\ndisastrous outcomes -- by which I mean queries that might as well take\nforever to execute as far as the user is concerned. (Easier said than\ndone, of course.)\n\nA simple trade-off between fast and robust doesn't seem like a\nuniversally helpful thing. Sometimes it's a very unhelpful way of\nlooking at the situation. If you make something more robust to extreme\nbad outcomes, then you may have simultaneously made it *faster* (not\nslower) for all practical purposes. This can happen when the increase\nin robustness allows the user to tune the system aggressively, and\nonly take on new risks that they can truly live with (which wouldn't\nhave been possible without the increase in robustness). For example, I\nimagine that the failsafe mechanism added to VACUUM will actually make\nit possible to tune VACUUM much more aggressively -- it might actually\nend up significantly improving performance for all practical purposes,\neven though technically it has nothing to do with performance.\n\nHaving your indexes a little more bloated because the failsafe\nkicked-in is a survivable event -- the DBA lives to fight another day,\nand *learns* to tune vacuum/the app so it doesn't happen again and\nagain. An anti-wraparound failure is perhaps not a survivable event --\nthe DBA gets fired. 
This really does seem like a fundamental\ndifference to me.\n\n> Following the insurance analogy,\n> selling tornado insurance in Europe is mostly pointless.\n\nPrincipled skepticism of this kind of thing is of course necessary and\nwelcome. It *could* be taken too far.\n\n> And the lack of data also plays role - the insurance company will ask\n> for higher rates when it does not have enough accurate data about the\n> phenomenon, or when there's a lot of unknowns. Maybe this would allow\n> some basic measure of uncertainty, based on the number and type of\n> restrictions, joins, etc.\n\nI don't think that you can really model uncertainty. But you can have\ntrue certainty (or close to it) about a trade-off that makes the\nsystem fundamentally more robust over time. You can largely be certain\nabout both the cost of the insurance, as well as how it ameliorates\nthe problem in at least some cases.\n\n> So maybe some fairly rough measure of uncertainty might work, and the\n> user might specify how much risk it's willing to tolerate.\n\nI think that most or all of the interesting stuff is where you have\nthis extreme asymmetry -- places where it's much more likely to be\ntrue that basically everybody wants that. Kind of like wearing\nseatbelts -- things that we really can claim are a universal good\nwithout too much controversy. There might be as few as one or two\nthings in the optimizer that this could be said of. 
But they matter.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 22 Jun 2021 15:37:06 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "I think that it is worth paying more than nothing to avoid plans that are\nso far from optimal that they might as well take forever to execute\n\n\nI recently came across this article from 2016 that expounded upon a bad\nplan of the sort discussed in this thread:\nhttps://heap.io/blog/when-to-avoid-jsonb-in-a-postgresql-schema\n\n(The proximate cause in this case was Postgresql not collecting statistics\nfor fields in a JSONB column, estimating rowcount of 1, and thus creating a\npathological slowdown.)\n\n–Mike\n\n\nOn Tue, Jun 22, 2021 at 7:37 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Tue, Jun 22, 2021 at 2:53 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>\n> Yeah, I like the insurance analogy - it gets to the crux of the problem,\n> because insurance is pretty much exactly about managing risk.\n>\n> The user's exposure to harm is what truly matters. I admit that that's\n> very hard to quantify, but we should at least try to do so.\n>\n> We sometimes think about a plan that is 10x slower as if it's infinitely\n> slow, or might as well be. But it's usually not like that\n> -- it is generally meaningfully much better than the plan being 100x\n> slower, which is itself sometimes appreciably better than 1000x slower. And\n> besides, users often don't get anything like the optimal plan, even on what\n> they would consider to be a good day (which is most days). So maybe 10x\n> slower is actually the baseline good case already, without anybody knowing\n> it. Most individual queries are not executed very often, even on the\n> busiest databases. 
The extremes really do matter a lot.\n>\n> If a web app or OLTP query is ~10x slower than optimal then it might be\n> the practical equivalent of an outage that affects the query alone\n> (i.e. \"infinitely slow\") -- but probably not. I think that it is worth\n> paying more than nothing to avoid plans that are so far from optimal that\n> they might as well take forever to execute. This is not meaningfully\n> different from a database outage affecting one particular query. It kind of\n> is in a category of its own that surpasses \"slow plan\", albeit one that is\n> very difficult or impossible to define formally.\n>\n> There may be a huge amount of variation in risk tolerance among basically\n> reasonable people. For example, if somebody chooses to engage in some kind\n> of extreme sport, to me it seems understandable. It's just not my cup of\n> tea. Whereas if somebody chooses to never wear a seatbelt while driving,\n> then to me they're simply behaving foolishly. They're not willing to incur\n> the tiniest inconvenience in order to get a huge reduction in potential\n> harm -- including a very real risk of approximately the worst thing that\n> can happen to you. Sure, refusing to wear a seatbelt can theoretically be\n> classified as just another point on the risk tolerance spectrum, but that\n> seems utterly contrived to me. Some things (maybe not that many) really are\n> like that, or can at least be assumed to work that way as a practical\n> matter.\n>\n> But making\n> everything slower will be a hard sell, because wast majority of workloads\n> already running on Postgres don't have this issue at all, so for them it's\n> not worth the expense.\n>\n> I think that we're accepting too much risk here. But I bet it's also true\n> that we're not taking enough risk in other areas. 
That was really my point\n> with the insurance analogy -- we can afford to take lots of individual\n> risks as long as they don't increase our exposure to truly disastrous\n> outcomes -- by which I mean queries that might as well take forever to\n> execute as far as the user is concerned. (Easier said than done, of\n> course.)\n>\n> A simple trade-off between fast and robust doesn't seem like a universally\n> helpful thing. Sometimes it's a very unhelpful way of looking at the\n> situation. If you make something more robust to extreme bad outcomes, then\n> you may have simultaneously made it *faster* (not slower) for all practical\n> purposes. This can happen when the increase in robustness allows the user\n> to tune the system aggressively, and only take on new risks that they can\n> truly live with (which wouldn't have been possible without the increase in\n> robustness). For example, I imagine that the failsafe mechanism added to\n> VACUUM will actually make it possible to tune VACUUM much more aggressively\n> -- it might actually end up significantly improving performance for all\n> practical purposes, even though technically it has nothing to do with\n> performance.\n>\n> Having your indexes a little more bloated because the failsafe kicked-in\n> is a survivable event -- the DBA lives to fight another day, and *learns*\n> to tune vacuum/the app so it doesn't happen again and again. An\n> anti-wraparound failure is perhaps not a survivable event -- the DBA gets\n> fired. This really does seem like a fundamental difference to me.\n>\n> Following the insurance analogy,\n> selling tornado insurance in Europe is mostly pointless.\n>\n> Principled skepticism of this kind of thing is of course necessary and\n> welcome. It *could* be taken too far.\n>\n> And the lack of data also plays role - the insurance company will ask for\n> higher rates when it does not have enough accurate data about the\n> phenomenon, or when there's a lot of unknowns. 
Maybe this would allow some\n> basic measure of uncertainty, based on the number and type of restrictions,\n> joins, etc.\n>\n> I don't think that you can really model uncertainty. But you can have true\n> certainty (or close to it) about a trade-off that makes the system\n> fundamentally more robust over time. You can largely be certain about both\n> the cost of the insurance, as well as how it ameliorates the problem in at\n> least some cases.\n>\n> So maybe some fairly rough measure of uncertainty might work, and the user\n> might specify how much risk it's willing to tolerate.\n>\n> I think that most or all of the interesting stuff is where you have this\n> extreme asymmetry -- places where it's much more likely to be true that\n> basically everybody wants that. Kind of like wearing seatbelts -- things\n> that we really can claim are a universal good without too much controversy.\n> There might be as few as one or two things in the optimizer that this could\n> be said of. But they matter.\n>\n> --\n> Peter Geoghegan\n>\n\nI think that it is worth\npaying more than nothing to avoid plans that are so far from optimal\nthat they might as well take forever to executeI recently came across this article from 2016 that expounded upon a bad plan of the sort discussed in this thread: https://heap.io/blog/when-to-avoid-jsonb-in-a-postgresql-schema(The proximate cause in this case was Postgresql not collecting statistics for fields in a JSONB column, estimating rowcount of 1, and thus creating a pathological slowdown.)–MikeOn Tue, Jun 22, 2021 at 7:37 PM, Peter Geoghegan <pg@bowt.ie> wrote:On Tue, Jun 22, 2021 at 2:53 AM Tomas Vondra\n\n<tomas.vondra@enterprisedb.com> wrote:\n\nYeah, I like the insurance analogy - it gets to the crux of the problem,\nbecause insurance is pretty much exactly about managing risk.\n\nThe user's exposure to harm is what truly matters. 
I admit that that's\nvery hard to quantify, but we should at least try to do so.\n\nWe sometimes think about a plan that is 10x slower as if it's\ninfinitely slow, or might as well be. But it's usually not like that\n\n-- it is generally meaningfully much better than the plan being 100x\nslower, which is itself sometimes appreciably better than 1000x\nslower. And besides, users often don't get anything like the optimal\nplan, even on what they would consider to be a good day (which is most\ndays). So maybe 10x slower is actually the baseline good case already,\nwithout anybody knowing it. Most individual queries are not executed\nvery often, even on the busiest databases. The extremes really do\nmatter a lot.\n\nIf a web app or OLTP query is ~10x slower than optimal then it might\nbe the practical equivalent of an outage that affects the query alone\n\n(i.e. \"infinitely slow\") -- but probably not. I think that it is worth\npaying more than nothing to avoid plans that are so far from optimal\nthat they might as well take forever to execute. This is not\nmeaningfully different from a database outage affecting one particular\nquery. It kind of is in a category of its own that surpasses \"slow\nplan\", albeit one that is very difficult or impossible to define\nformally.\n\nThere may be a huge amount of variation in risk tolerance among\nbasically reasonable people. For example, if somebody chooses to\nengage in some kind of extreme sport, to me it seems understandable.\nIt's just not my cup of tea. Whereas if somebody chooses to never wear\na seatbelt while driving, then to me they're simply behaving\nfoolishly. They're not willing to incur the tiniest inconvenience in\norder to get a huge reduction in potential harm -- including a very\nreal risk of approximately the worst thing that can happen to you.\nSure, refusing to wear a seatbelt can theoretically be classified as\njust another point on the risk tolerance spectrum, but that seems\nutterly contrived to me. 
Some things (maybe not that many) really are\nlike that, or can at least be assumed to work that way as a practical\nmatter.\n\nBut making\neverything slower will be a hard sell, because the vast majority of\nworkloads already running on Postgres don't have this issue at all, so\nfor them it's not worth the expense.\n\nI think that we're accepting too much risk here. But I bet it's also\ntrue that we're not taking enough risk in other areas. That was really\nmy point with the insurance analogy -- we can afford to take lots of\nindividual risks as long as they don't increase our exposure to truly\ndisastrous outcomes -- by which I mean queries that might as well take\nforever to execute as far as the user is concerned. (Easier said than\ndone, of course.)\n\nA simple trade-off between fast and robust doesn't seem like a\nuniversally helpful thing. Sometimes it's a very unhelpful way of\nlooking at the situation. If you make something more robust to extreme\nbad outcomes, then you may have simultaneously made it *faster* (not\nslower) for all practical purposes. This can happen when the increase\nin robustness allows the user to tune the system aggressively, and\nonly take on new risks that they can truly live with (which wouldn't\nhave been possible without the increase in robustness). For example, I\nimagine that the failsafe mechanism added to VACUUM will actually make\nit possible to tune VACUUM much more aggressively -- it might actually\nend up significantly improving performance for all practical purposes,\neven though technically it has nothing to do with performance.\n\nHaving your indexes a little more bloated because the failsafe\nkicked-in is a survivable event -- the DBA lives to fight another day,\nand *learns* to tune vacuum/the app so it doesn't happen again and\nagain. An anti-wraparound failure is perhaps not a survivable event --\nthe DBA gets fired.
This really does seem like a fundamental\ndifference to me.\n\nFollowing the insurance analogy,\n\nselling tornado insurance in Europe is mostly pointless.\n\nPrincipled skepticism of this kind of thing is of course necessary and\nwelcome. It *could* be taken too far.\n\nAnd the lack of data also plays role - the insurance company will ask\nfor higher rates when it does not have enough accurate data about the\nphenomenon, or when there's a lot of unknowns. Maybe this would allow\nsome basic measure of uncertainty, based on the number and type of\nrestrictions, joins, etc.\n\nI don't think that you can really model uncertainty. But you can have\ntrue certainty (or close to it) about a trade-off that makes the\nsystem fundamentally more robust over time. You can largely be certain\nabout both the cost of the insurance, as well as how it ameliorates\nthe problem in at least some cases.\n\nSo maybe some fairly rough measure of uncertainty might work, and the\nuser might specify how much risk it's willing to tolerate.\n\nI think that most or all of the interesting stuff is where you have\nthis extreme asymmetry -- places where it's much more likely to be\ntrue that basically everybody wants that. Kind of like wearing\nseatbelts -- things that we really can claim are a universal good\nwithout too much controversy. There might be as few as one or two\nthings in the optimizer that this could be said of. But they matter.\n\n-- \n\nPeter Geoghegan", "msg_date": "Mon, 2 Aug 2021 16:14:00 -0700", "msg_from": "Mike Klaas <mike@superhuman.com>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Tue, Jun 22, 2021 at 4:13 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think that's a reasonable request. 
I'm not sure that I believe it's\n> 100% necessary, but it's certainly an improvement on a technical\n> level, and given that the proposed change could impact quite a lot of\n> plans, it's fair to want to see some effort being put into mitigating\n> the possible downsides. Now, I'm not sure when I might have time to\n> actually try to do the work, which kind of sucks, but that's how it\n> goes sometimes.\n\nShould I take it that you've dropped this project? I was rather hoping\nthat you'd get back to it, FWIW.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 12 Jan 2022 16:19:45 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Wed, Jan 12, 2022 at 7:20 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Should I take it that you've dropped this project? I was rather hoping\n> that you'd get back to it, FWIW.\n\nHonestly, I'd forgotten all about it. In theory I'd like to do\nsomething about this, but I just have too many other things going on.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jan 2022 12:51:53 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: disfavoring unparameterized nested loops" } ]
[ { "msg_contents": "I've been spending a lot of time looking at isolationtester results\nover the past couple of days, and gotten really annoyed at how poorly\nit formats query results. In particular, any column heading or value\nthat is 15 characters or longer is not separated from the next column,\nrendering the output quite confusing.\n\nAttached is a little hack that tries to improve that case while making\nminimal changes to the output files otherwise.\n\nThere's still a good deal to be desired here: notably, the code still\ndoes nothing to ensure vertical alignment of successive lines when\nthere are wide headings or values. But doing anything about that\nwould involve much-more-invasive changes of the output files.\nIf we wanted to buy into that, I'd think about discarding this\nad-hoc code altogether in favor of using one of libpq's fe-print.c\nroutines. But I'm not really sure that the small legibility gains\nthat would result are worth massive changes in the output files.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 15 Jun 2021 19:03:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Improving isolationtester's data output" }, { "msg_contents": "On 2021-Jun-15, Tom Lane wrote:\n\n> I've been spending a lot of time looking at isolationtester results\n> over the past couple of days, and gotten really annoyed at how poorly\n> it formats query results. In particular, any column heading or value\n> that is 15 characters or longer is not separated from the next column,\n> rendering the output quite confusing.\n\nYeah, I noticed this too.\n\n> Attached is a little hack that tries to improve that case while making\n> minimal changes to the output files otherwise.\n\nSeems pretty reasonable.\n\n> There's still a good deal to be desired here: notably, the code still\n> does nothing to ensure vertical alignment of successive lines when\n> there are wide headings or values. 
But doing anything about that\n> would involve much-more-invasive changes of the output files.\n> If we wanted to buy into that, I'd think about discarding this\n> ad-hoc code altogether in favor of using one of libpq's fe-print.c\n> routines. But I'm not really sure that the small legibility gains\n> that would result are worth massive changes in the output files.\n\nShrug -- it's a one time change. It wouldn't bother me, for one.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"Hay que recordar que la existencia en el cosmos, y particularmente la\nelaboración de civilizaciones dentro de él no son, por desgracia,\nnada idílicas\" (Ijon Tichy)\n\n\n", "msg_date": "Tue, 15 Jun 2021 19:20:11 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Improving isolationtester's data output" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Jun-15, Tom Lane wrote:\n>> If we wanted to buy into that, I'd think about discarding this\n>> ad-hoc code altogether in favor of using one of libpq's fe-print.c\n>> routines. But I'm not really sure that the small legibility gains\n>> that would result are worth massive changes in the output files.\n\n> Shrug -- it's a one time change. It wouldn't bother me, for one.\n\nGoing forward it wouldn't be a problem, but back-patching isolation\ntest cases might find it annoying. On the other hand, my nearby\npatch to improve isolation test stability is already going to create\nissues of that sort. (Unless, dare I say it, we back-patch that.)\n\nI do find it a bit attractive to create some regression-testing\ncoverage of fe-print.c.
We are never going to remove that code,\nAFAICS, so getting some benefit from it would be nice.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Jun 2021 19:26:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Improving isolationtester's data output" }, { "msg_contents": "Hi,\n\nOn 2021-06-15 19:26:25 -0400, Tom Lane wrote:\n> Going forward it wouldn't be a problem, but back-patching isolation\n> test cases might find it annoying. On the other hand, my nearby\n> patch to improve isolation test stability is already going to create\n> issues of that sort. (Unless, dare I say it, we back-patch that.)\n\nIt might be worth to back-patch - aren't there some back branch cases of\ntest instability? And perhaps more importantly, I'm sure we'll encounter\ncases of writing new isolation tests in the course of fixing bugs that\nwe'd want to backpatch that are hard to make reliable without the new\nfeatures?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Jun 2021 18:31:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Improving isolationtester's data output" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-06-15 19:26:25 -0400, Tom Lane wrote:\n>> Going forward it wouldn't be a problem, but back-patching isolation\n>> test cases might find it annoying. On the other hand, my nearby\n>> patch to improve isolation test stability is already going to create\n>> issues of that sort. (Unless, dare I say it, we back-patch that.)\n\n> It might be worth to back-patch - aren't there some back branch cases of\n> test instability? And perhaps more importantly, I'm sure we'll encounter\n> cases of writing new isolation tests in the course of fixing bugs that\n> we'd want to backpatch that are hard to make reliable without the new\n> features?\n\nYeah, there absolutely is a case to back-patch things like this. 
Whether\nit's a strong enough case, I dunno. I'm probably too close to the patch\nto have an unbiased opinion about that.\n\nHowever, a quick look through the commit history finds several places\nwhere we complained about not being able to back-patch isolation tests to\nbefore 9.6 because we hadn't back-patched that version's isolationtester\nimprovements. I found 6b802cfc7, 790026972, c88411995, 8b21b416e without\nlooking too hard. So that history certainly suggests that not\nback-patching such test infrastructure is the Wrong Thing.\n\n(And yeah, the failures we complained of in the other thread are\ncertainly there in the back branches. I think the only reason there\nseem to be fewer is that the back branches see fewer test runs.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Jun 2021 21:43:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Improving isolationtester's data output" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Jun-15, Tom Lane wrote:\n>> If we wanted to buy into that, I'd think about discarding this\n>> ad-hoc code altogether in favor of using one of libpq's fe-print.c\n>> routines. But I'm not really sure that the small legibility gains\n>> that would result are worth massive changes in the output files.\n\n> Shrug -- it's a one time change. It wouldn't bother me, for one.\n\nHere's a really quick-and-dirty patch to see what that would look\nlike. 
I haven't bothered here to update the expected-files outside\nthe main src/test/isolation directory, nor to fix the variant files.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 15 Jun 2021 22:44:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Improving isolationtester's data output" }, { "msg_contents": "On Tue, Jun 15, 2021 at 09:43:31PM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-06-15 19:26:25 -0400, Tom Lane wrote:\n> >> Going forward it wouldn't be a problem, but back-patching isolation\n> >> test cases might find it annoying. On the other hand, my nearby\n> >> patch to improve isolation test stability is already going to create\n> >> issues of that sort. (Unless, dare I say it, we back-patch that.)\n> \n> > It might be worth to back-patch - aren't there some back branch cases of\n> > test instability? And perhaps more importantly, I'm sure we'll encounter\n> > cases of writing new isolation tests in the course of fixing bugs that\n> > we'd want to backpatch that are hard to make reliable without the new\n> > features?\n> \n> Yeah, there absolutely is a case to back-patch things like this. Whether\n> it's a strong enough case, I dunno. I'm probably too close to the patch\n> to have an unbiased opinion about that.\n> \n> However, a quick look through the commit history finds several places\n> where we complained about not being able to back-patch isolation tests to\n> before 9.6 because we hadn't back-patched that version's isolationtester\n> improvements. I found 6b802cfc7, 790026972, c88411995, 8b21b416e without\n> looking too hard. So that history certainly suggests that not\n> back-patching such test infrastructure is the Wrong Thing.\n\nI'm +1 for back-patching this class of change. I've wasted time adapting a\nback-patch's test case to account for non-back-patched test infrastructure\nchanges. 
Every back-patch of test infrastructure has been a strict win from\nmy perspective.\n\n\n", "msg_date": "Tue, 15 Jun 2021 20:43:23 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Improving isolationtester's data output" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> I'm +1 for back-patching this class of change. I've wasted time adapting a\n> back-patch's test case to account for non-back-patched test infrastructure\n> changes. Every back-patch of test infrastructure has been a strict win from\n> my perspective.\n\nHearing few objections, I'll plan on back-patching. I'm thinking that the\nbest thing to do is apply these changes after beta2 wraps, but before we\nbranch v14. Waiting till after the branch would just create duplicate\nwork.\n\nBTW, as long as we're thinking of back-patching nontrivial specfile\nchanges, I have another modest proposal. What do people think of\nremoving the requirement for step/session names to be double-quoted,\nand instead letting them work like SQL identifiers? A quick grep\nshows that practically all the existing names are plain identifiers,\nso we could just drop their quotes for a useful notational savings.\nWhile I haven't actually tried yet, I doubt it'd be hard to adopt\nscan.l's identifier rules into specscanner.l. (Probably wouldn't\nbother with auto case-folding, though.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Jun 2021 10:03:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Improving isolationtester's data output" }, { "msg_contents": "On 2021-Jun-16, Tom Lane wrote:\n\n> Noah Misch <noah@leadboat.com> writes:\n> > I'm +1 for back-patching this class of change. I've wasted time adapting a\n> > back-patch's test case to account for non-back-patched test infrastructure\n> > changes. 
Every back-patch of test infrastructure has been a strict win from\n> > my perspective.\n> \n> Hearing few objections, I'll plan on back-patching. I'm thinking that the\n> best thing to do is apply these changes after beta2 wraps, but before we\n> branch v14.\n\nGreat.\n\n> BTW, as long as we're thinking of back-patching nontrivial specfile\n> changes, I have another modest proposal. What do people think of\n> removing the requirement for step/session names to be double-quoted,\n> and instead letting them work like SQL identifiers?\n\nYes *please*.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"[PostgreSQL] is a great group; in my opinion it is THE best open source\ndevelopment communities in existence anywhere.\" (Lamar Owen)\n\n\n", "msg_date": "Wed, 16 Jun 2021 11:14:55 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Improving isolationtester's data output" }, { "msg_contents": "Hi,\n\nOn 2021-06-15 22:44:29 -0400, Tom Lane wrote:\n> Here's a really quick-and-dirty patch to see what that would look\n> like. I haven't bothered here to update the expected-files outside\n> the main src/test/isolation directory, nor to fix the variant files.\n\nNeat.\n\n\n> +\tmemset(&popt, 0, sizeof(popt));\n> +\tpopt.header = true;\n> +\tpopt.align = true;\n> +\tpopt.fieldSep = \"|\";\n> +\tPQprint(stdout, res, &popt);\n> }\n\nIs there an argument for not aligning because that can make diffs larger\nthan the actual data changes? E.g. one row being longer will cause all\nrows in the result set to be shown as differing because of the added\npadding? This has been a problem in the normal regression tests, where\nwe solved it by locally disabling alignment.
It might be unproblematic\nfor isolationtester, because we don't often have large result sets...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Jun 2021 12:30:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Improving isolationtester's data output" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Jun-16, Tom Lane wrote:\n>> Hearing few objections, I'll plan on back-patching. I'm thinking that the\n>> best thing to do is apply these changes after beta2 wraps, but before we\n>> branch v14.\n\n> Great.\n\nAfter checking cross-version diffs to see how painful that is likely\nto be, I'm inclined to also back-patch Michael's v13 commits\n\n989d23b04beac0c26f44c379b04ac781eaa4265e\n Detect unused steps in isolation specs and do some cleanup\n\n9903338b5ea59093d77cfe50ec0b1c22d4a7d843\n Remove dry-run mode from isolationtester\n\nas those touched some of the same code areas, and it doesn't seem like\nthere'd be any harm in making these aspects uniform across all the\nbranches. If Michael wants to do that back-patching himself, that's\nfine with me, otherwise I'll do it.\n\nAlso, having slept on it, I'm leaning towards the approach of\nusing PQprint() instead of just tweaking the existing code.
At first\nI thought that was too much churn in the output files, but it really\ndoes seem to make them significantly more readable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Jun 2021 15:33:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Improving isolationtester's data output" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-06-15 22:44:29 -0400, Tom Lane wrote:\n>> +\tmemset(&popt, 0, sizeof(popt));\n>> +\tpopt.header = true;\n>> +\tpopt.align = true;\n>> +\tpopt.fieldSep = \"|\";\n>> +\tPQprint(stdout, res, &popt);\n\n> Is there an argument for not aligning because that can make diffs larger\n> than the actual data changes? E.g. one row being longer will cause all\n> rows in the result set to be shown as differing because of the added\n> padding? This has been a problem in the normal regression tests, where\n> we solved it by locally disabling alignment. It might be unproblematic\n> for isolationtester, because we don't often have large result sets...\n\nI tried it that way first, and didn't much like the look of it.\n\nI think the result sets in the isolation tests don't have a big\nproblem here: as you say, they aren't very large, and in most of them\nthe column widths are fairly uniform anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Jun 2021 15:37:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Improving isolationtester's data output" }, { "msg_contents": "Hi,\n\nOn Wed, Jun 16, 2021, at 12:37, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-06-15 22:44:29 -0400, Tom Lane wrote:\n> >> +\tmemset(&popt, 0, sizeof(popt));\n> >> +\tpopt.header = true;\n> >> +\tpopt.align = true;\n> >> +\tpopt.fieldSep = \"|\";\n> >> +\tPQprint(stdout, res, &popt);\n> \n> > Is there an argument for not aligning because that can make diffs larger\n> > than the actual data changes? E.g. 
one row being longer will cause all\n> > rows in the result set to be shown as differing because of the added\n> > padding? This has been a problem in the normal regression tests, where\n> > we solved it by locally disabling alignment. It might be unproblematic\n> > for isolationtester, because we don't often have large result sets...\n> \n> I tried it that way first, and didn't much like the look of it.\n> \n> I think the result sets in the isolation tests don't have a big\n> problem here: as you say, they aren't very large, and in most of them\n> the column widths are fairly uniform anyway.\n\nCool. Just wanted to be sure we considered it.\n\nAndres\n\n\n", "msg_date": "Wed, 16 Jun 2021 12:42:57 -0700", "msg_from": "\"Andres Freund\" <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Improving isolationtester's data output" }, { "msg_contents": "On Wed, Jun 16, 2021 at 03:33:29PM -0400, Tom Lane wrote:\n> After checking cross-version diffs to see how painful that is likely\n> to be, I'm inclined to also back-patch Michael's v13 commits\n> \n> 989d23b04beac0c26f44c379b04ac781eaa4265e\n> Detect unused steps in isolation specs and do some cleanup\n> \n> 9903338b5ea59093d77cfe50ec0b1c22d4a7d843\n> Remove dry-run mode from isolationtester\n> \n> as those touched some of the same code areas, and it doesn't seem like\n> there'd be any harm in making these aspects uniform across all the\n> branches. If Michael wants to do that back-patching himself, that's\n> fine with me, otherwise I'll do it.\n\nThere may be tests in stable branches that define steps remaining\nunused, but that's a minimal risk. Down to which version do you need\nthese? 
All the way down to 9.6?\n--\nMichael", "msg_date": "Thu, 17 Jun 2021 09:16:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improving isolationtester's data output" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Jun 16, 2021 at 03:33:29PM -0400, Tom Lane wrote:\n>> After checking cross-version diffs to see how painful that is likely\n>> to be, I'm inclined to also back-patch Michael's v13 commits\n>> 989d23b04beac0c26f44c379b04ac781eaa4265e\n>> Detect unused steps in isolation specs and do some cleanup\n>> 9903338b5ea59093d77cfe50ec0b1c22d4a7d843\n>> Remove dry-run mode from isolationtester\n\n> There may be tests in stable branches that define steps remaining\n> unused, but that's a minimal risk.\n\nYeah, it only results in a message in the output file anyway.\n\n> Down to which version do you need\n> these? All the way down to 9.6?\n\nYes please.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Jun 2021 21:10:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Improving isolationtester's data output" }, { "msg_contents": "On Wed, Jun 16, 2021 at 09:10:25PM -0400, Tom Lane wrote:\n> Yeah, it only results in a message in the output file anyway.\n\nThat itself would blow up the buildfarm, as 06fdc4e has proved.\n\n> Yes please.\n\nNobody has complained about the removal of --dry-run with 13~. The\nsecond one would cause tests to fail after a minor upgrade for\nextensions using isolationtester, but it seems like a good thing to\ninform people about anyway. So, okay, both parts are done.\n--\nMichael", "msg_date": "Thu, 17 Jun 2021 12:01:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improving isolationtester's data output" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Nobody has complained about the removal of --dry-run with 13~. 
The\n> second one would cause tests to fail after a minor upgrade for\n> extensions using isolationtester, but it seems like a good thing to\n> inform people about anyway. So, okay, both parts are done.\n\nThanks!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Jun 2021 23:34:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Improving isolationtester's data output" } ]
[ { "msg_contents": "I am trying to add bulk operation support to ODB (a C++ ORM) using\nthe new pipeline mode added to libpq in PostgreSQL 14. However, things\ndon't seem to be working according to the documentation (or perhaps I\nam misunderstanding something). Specifically, the documentation[1]\nmakes it sound like the use of PQpipelineSync() is optional (34.5.1.1\n\"Issuing Queries\"):\n\n\"After entering pipeline mode, the application dispatches requests using\nPQsendQuery, PQsendQueryParams, or its prepared-query sibling\nPQsendQueryPrepared. These requests are queued on the client-side until\nflushed to the server; this occurs when PQpipelineSync is used to establish a\nsynchronization point in the pipeline, or when PQflush is called. [...]\n\nThe server executes statements, and returns results, in the order the client\nsends them. The server will begin executing the commands in the pipeline\nimmediately, not waiting for the end of the pipeline. [...]\"\n\nBased on this I expect to be able to queue a single prepared INSERT\nstatement with PQsendQueryPrepared() and then call PQflush() and\nPQconsumeInput() to send/receive the data. This, however, does not\nwork: the client gets blocked because there is no data to read. 
Here\nis the call sequence:\n\nselect() # socket is writable\nPQsendQueryPrepared() # success\nPQflush() # returns 0 (queue is now empty)\nselect() # blocked here indefinitely\n\nIn contrast, if I add the PQpipelineSync() call after PQsendQueryPrepared(),\nthen everything starts functioning as expected:\n\nselect() # socket is writable\nPQsendQueryPrepared() # success\nPQpipelineSync() # success\nPQflush() # returns 0 (queue is now empty)\nselect() # socket is readable\nPQconsumeInput() # success\nPQgetResult() # INSERT result\nPQgetResult() # NULL\nPQgetResult() # PGRES_PIPELINE_SYNC\n\nSo to me it looks like, contrary to the documentation, the server does\nnot start executing the statements immediately, instead waiting for the\nsynchronization point. Or am I missing something here?\n\nThe above tests were performed using libpq from 14beta1 running against\nPostgreSQL server version 9.5. If you would like to take a look at the\nactual code, you can find it here[2] (the PIPELINE_SYNC macro controls\nwhether PQpipelineSync() is used).\n\nOn a related note, I've been using libpq_pipeline.c[3] as a reference\nand I believe it has a busy loop calling PQflush() repeatedly on line\n721 since once everything has been sent and we are waiting for the\nresult, select() will keep returning with an indication that the socket\nis writable (you can find one way to fix this in [2]).\n\n[1] https://www.postgresql.org/docs/14/libpq-pipeline-mode.html\n[2] https://git.codesynthesis.com/cgit/odb/libodb-pgsql/tree/odb/pgsql/statement.cxx?h=bulk#n771\n[3] https://doxygen.postgresql.org/libpq__pipeline_8c_source.html\n\n\n", "msg_date": "Wed, 16 Jun 2021 11:48:28 +0200", "msg_from": "Boris Kolpackov <boris@codesynthesis.com>", "msg_from_op": true, "msg_subject": "Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jun-16, Boris Kolpackov wrote:\n\n> Specifically, the documentation[1]\n> makes it sound like the use of PQpipelineSync() is optional (34.5.1.1\n> \"Issuing 
Queries\"):\n\nHmm. My intention here was to indicate that you should have\nPQpipelineSync *somewhere*, but that the server was free to start\nexecuting some commands even before that, if the buffered commands\nhappened to reach the server somehow -- but not necessarily that the\nresults from those commands would reach the client immediately.\n\nI'll experiment a bit more to be sure that what I'm saying is correct.\nBut if it is, then I think the documentation you quote is misleading:\n\n> \"After entering pipeline mode, the application dispatches requests using\n> PQsendQuery, PQsendQueryParams, or its prepared-query sibling\n> PQsendQueryPrepared. These requests are queued on the client-side until\n> flushed to the server; this occurs when PQpipelineSync is used to establish a\n> synchronization point in the pipeline, or when PQflush is called. [...]\n> \n> The server executes statements, and returns results, in the order the client\n> sends them. The server will begin executing the commands in the pipeline\n> immediately, not waiting for the end of the pipeline. [...]\"\n\n... because it'll lead people to do what you've done, only to discover\nthat it doesn't really work.\n\nI think I should rephrase this to say that PQpipelineSync() is needed\nwhere the user needs the server to start executing commands; and\nseparately indicate that it is possible (but not promised) that the\nserver would start executing commands ahead of time because $reasons.\n\nDo I have it right that other than this documentation problem, you've\nbeen able to use pipeline mode successfully?\n\n> So to me it looks like, contrary to the documentation, the server does\n> not start executing the statements immediately, instead waiting for the\n> synchronization point. Or am I missing something here?\n\nI don't think you are.\n\n> The above tests were performed using libpq from 14beta1 running against\n> PostgreSQL server version 9.5. 
If you would like to take a look at the\n> actual code, you can find it here[2] (the PIPELINE_SYNC macro controls\n> whether PQpipelineSync() is used).\n\nThanks.\n\n> On a related note, I've been using libpq_pipeline.c[3] as a reference\n> and I believe it has a busy loop calling PQflush() repeatedly on line\n> 721 since once everything has been sent and we are waiting for the\n> result, select() will keep returning with an indication that the socket\n> is writable\n\nOops, thanks, will look at fixing this too.\n\n> (you can find one way to fix this in [2]).\n> [2] https://git.codesynthesis.com/cgit/odb/libodb-pgsql/tree/odb/pgsql/statement.cxx?h=bulk#n771\n\nNeat, can do likewise I suppose.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n", "msg_date": "Fri, 18 Jun 2021 13:39:52 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n\n> I think I should rephrase this to say that PQpipelineSync() is needed\n> where the user needs the server to start executing commands; and\n> separately indicate that it is possible (but not promised) that the\n> server would start executing commands ahead of time because $reasons.\n\nI think always requiring PQpipelineSync() is fine since it also serves\nas an error recovery boundary. But the fact that the server waits until\nthe sync message to start executing the pipeline is surprising. To me\nthis seems to go contrary to the idea of a \"pipeline\".\n\nIn fact, I see the following ways the server could behave:\n\n1. The server starts executing queries and sending their results before\n receiving the sync message.\n\n2. The server starts executing queries before receiving the sync message\n but buffers the results until it receives the sync message.\n\n3. 
The server buffers the queries and only starts executing them and\n sending the results after receiving the sync message.\n\nMy observations suggest that the server behaves as (3) but it could\nalso be (2).\n\nWhile it can be tempting to say that this is an implementation detail,\nthis affects the way one writes a client. For example, I currently have\nthe following comment in my code:\n\n // Send queries until we get blocked. This feels like a better\n // overall strategy to keep the server busy compared to sending one\n // query at a time and then re-checking if there is anything to read\n // because the results of INSERT/UPDATE/DELETE are presumably small\n // and quite a few of them can get buffered before the server gets\n // blocked.\n\nThis would be a good strategy for behavior (1) but not (3) (where it\nwould make more sense to queue the queries on the client side). So I\nthink it would be useful to clarify the server behavior and specify\nit in the documentation.\n\n\n> Do I have it right that other than this documentation problem, you've\n> been able to use pipeline mode successfully?\n\nSo far I've only tried it in a simple prototype (single INSERT statement).\nBut I am busy plugging it into ODB's bulk operation support (that we\nalready have for Oracle and MSSQL) and once that's done I should be\nable to exercise things in more meaningful ways.\n\n\n", "msg_date": "Mon, 21 Jun 2021 10:38:20 +0200", "msg_from": "Boris Kolpackov <boris@codesynthesis.com>", "msg_from_op": true, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jun-21, Boris Kolpackov wrote:\n\n> Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n> \n> > I think I should rephrase this to say that PQpipelineSync() is needed\n> > where the user needs the server to start executing commands; and\n> > separately indicate that it is possible (but not promised) that the\n> > server would start executing commands ahead of time because $reasons.\n> \n> I 
think always requiring PQpipelineSync() is fine since it also serves\n> as an error recovery boundary. But the fact that the server waits until\n> the sync message to start executing the pipeline is surprising. To me\n> this seems to go contrary to the idea of a \"pipeline\".\n\nBut does that actually happen? There's a very easy test we can do by\nsending queries that sleep. If my libpq program sends a \"SELECT\npg_sleep(2)\", then PQflush(), then sleep in the client program two more\nseconds without sending the sync; and *then* send the sync, I find that\nthe program takes 2 seconds, not four. This shows that both client and\nserver slept in parallel, even though I didn't send the Sync until after\nthe client was done sleeping.\n\nIn order to see this, I patched libpq_pipeline.c with the attached, and\nran it under time:\n\ntime ./libpq_pipeline simple_pipeline -t simple.trace\nsimple pipeline... sent and flushed the sleep. Sleeping 2s here:\nclient sleep done\nok\n\nreal\t0m2,008s\nuser\t0m0,000s\nsys\t0m0,003s\n\n\nSo I see things happening as you describe in (1):\n\n> In fact, I see the following ways the server could behave:\n> \n> 1. The server starts executing queries and sending their results before\n> receiving the sync message.\n\nI am completely at a loss on how to explain a server that behaves in any\nother way, given how the protocol is designed. There is no buffering on\nthe server side.\n\n> While it can be tempting to say that this is an implementation detail,\n> this affects the way one writes a client. For example, I currently have\n> the following comment in my code:\n> \n> // Send queries until we get blocked. 
This feels like a better\n> // overall strategy to keep the server busy compared to sending one\n> // query at a time and then re-checking if there is anything to read\n> // because the results of INSERT/UPDATE/DELETE are presumably small\n> // and quite a few of them can get buffered before the server gets\n> // blocked.\n> \n> This would be a good strategy for behavior (1) but not (3) (where it\n> would make more sense to queue the queries on the client side).\n\nAgreed, that's the kind of strategy I would have thought was the most\nreasonable, given my understanding of how the protocol works.\n\nI wonder if your program is being affected by something else. Maybe the\nsocket is nonblocking (though I don't quite understand how that would\naffect the client behavior in just this way), or your program is\nbuffering elsewhere. I don't do C++ much so I can't help you with that.\n\n> So I think it would be useful to clarify the server behavior and\n> specify it in the documentation.\n\nI'll see about improving the docs on these points.\n\n> > Do I have it right that other than this documentation problem, you've\n> > been able to use pipeline mode successfully?\n> \n> So far I've only tried it in a simple prototype (single INSERT statement).\n> But I am busy plugging it into ODB's bulk operation support (that we\n> already have for Oracle and MSSQL) and once that's done I should be\n> able to exercise things in more meaningful ways.\n\nFair enough.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n", "msg_date": "Tue, 22 Jun 2021 18:14:52 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jun-22, Alvaro Herrera wrote:\n\n> > So I think it would be useful to clarify the server behavior and\n> > specify it in the documentation.\n> \n> I'll see about improving the docs on these points.\n\nSo I started to modify the second paragraph to indicate that 
the client\nwould send data on PQflush/buffer full/PQpipelineSync, only to realize\nthat the first paragraph already explains this. So I'm not sure if any\nchanges are needed.\n\nMaybe your complaint is only based on disagreement about what does libpq\ndo regarding queueing commands; and as far as I can tell in quick\nexperimentation with libpq, it works as the docs state already.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Tue, 22 Jun 2021 20:34:32 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n\n> > I think always requiring PQpipelineSync() is fine since it also serves\n> > as an error recovery boundary. But the fact that the server waits until\n> > the sync message to start executing the pipeline is surprising. To me\n> > this seems to go contrary to the idea of a \"pipeline\".\n> \n> But does that actually happen? There's a very easy test we can do by\n> sending queries that sleep. If my libpq program sends a \"SELECT\n> pg_sleep(2)\", then PQflush(), then sleep in the client program two more\n> seconds without sending the sync; and *then* send the sync, I find that\n> the program takes 2 seconds, not four. This shows that both client and\n> server slept in parallel, even though I didn't send the Sync until after\n> the client was done sleeping.\n\nThanks for looking into it. My experiments were with INSERT and I now\nwas able to try things with larger pipelines. I can now see the server\nstarts sending results after ~400 statements. So I think you are right,\nthe server does start executing the pipeline before receiving the sync\nmessage, though there is still something strange going on (but probably\non the client side):\n\nI have a pipeline of say 500 INSERTs. 
If I \"execute\" this pipeline by first\nsending all the statements and then reading the results, then everything\nworks as expected. This is the call sequence I am talking about:\n\nPQsendQueryPrepared() # INSERT #1\nPQflush()\nPQsendQueryPrepared() # INSERT #2\nPQflush()\n...\nPQsendQueryPrepared() # INSERT #500\nPQpipelineSync()\nPQflush()\nPQconsumeInput()\nPQgetResult() # INSERT #1\nPQgetResult() # NULL\nPQgetResult() # INSERT #2\nPQgetResult() # NULL\n...\nPQgetResult() # INSERT #500\nPQgetResult() # NULL\nPQgetResult() # PGRES_PIPELINE_SYNC\n\nIf, however, I execute it by checking for results before sending the\nnext INSERT, I get the following call sequence:\n\nPQsendQueryPrepared() # INSERT #1\nPQflush()\nPQsendQueryPrepared() # INSERT #2\nPQflush()\n...\nPQsendQueryPrepared() # INSERT #~400\nPQflush()\nPQconsumeInput() # At this point select() indicates we can read.\nPQgetResult() # NULL (???)\nPQgetResult() # INSERT #1\nPQgetResult() # NULL\nPQgetResult() # INSERT #2\nPQgetResult() # NULL\n...\n\n\nWhat's strange here is that the first PQgetResult() call (marked with ???)\nreturns NULL instead of result for INSERT #1 as in the first call sequence.\nInterestingly, if I skip it, the rest seems to progress as expected.\n\nAny idea what might be going on here? My hunch is that there is an issue\nwith libpq's state machine. In particular, in the second case, PQgetResult()\nis called before the sync message is sent. Did you have a chance to test\nsuch a scenario (i.e., a large pipeline where the first result is processed\nbefore the PQpipelineSync() call)? 
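In loop form, the read side of the second (interleaved) variant boils down to
roughly the following sketch (simplified from my actual code; the send half,
the select() handling, and most error checking are omitted, and report_error()
is a hypothetical placeholder):

```c
/* Sketch only: assumes conn is nonblocking and already in pipeline mode. */
if (PQconsumeInput (conn) == 0)
  report_error (conn);             /* hypothetical error handler */

while (PQisBusy (conn) == 0)
{
  /* In the failing case, the very first call here returns NULL
   * instead of the result of INSERT #1. */
  PGresult* res = PQgetResult (conn);

  if (res != NULL && PQresultStatus (res) == PGRES_PIPELINE_SYNC)
  {
    PQclear (res);
    break;                         /* end of the pipeline */
  }

  /* ... examine the result for the current query ... */
  PQclear (res);

  /* Each query's result is supposed to be terminated by a NULL return. */
  res = PQgetResult (conn);
  assert (res == NULL);
}
```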
Of course, this could very well be a bug\non my side or me misunderstanding something.\n\n\n", "msg_date": "Wed, 23 Jun 2021 10:37:22 +0200", "msg_from": "Boris Kolpackov <boris@codesynthesis.com>", "msg_from_op": true, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n\n> On 2021-Jun-22, Alvaro Herrera wrote:\n> \n> > > So I think it would be useful to clarify the server behavior and\n> > > specify it in the documentation.\n> > \n> > I'll see about improving the docs on these points.\n> \n> So I started to modify the second paragraph to indicate that the client\n> would send data on PQflush/buffer full/PQpipelineSync, only to realize\n> that the first paragraph already explains this. So I'm not sure if any\n> changes are needed.\n> \n> Maybe your complaint is only based on disagreement about what does libpq\n> do regarding queueing commands; and as far as I can tell in quick\n> experimentation with libpq, it works as the docs state already.\n\nI think one change that is definitely needed is to make it clear that\nthe PQpipelineSync() call is not optional.\n\nI would also add a note saying that while the server starts processing\nthe pipeline immediately, it may buffer the results and the only way\nto flush them out is to call PQpipelineSync().\n\n\n", "msg_date": "Wed, 23 Jun 2021 13:03:52 +0200", "msg_from": "Boris Kolpackov <boris@codesynthesis.com>", "msg_from_op": true, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jun-23, Boris Kolpackov wrote:\n\n> I think one change that is definitely needed is to make it clear that\n> the PQpipelineSync() call is not optional.\n> \n> I would also add a note saying that while the server starts processing\n> the pipeline immediately, it may buffer the results and the only way\n> to flush them out is to call PQpipelineSync().\n\nCurious -- I just noticed that the server understands a message 'H' 
that\nrequests a flush of the server buffer. However, libpq has no way to\ngenerate that message as far as I can see. I think you could use that\nto request results from the pipeline, without the sync point.\n\nI wonder if it's worth adding an entry point to libpq to allow access to\nthis. PQrequestFlush() or something like that ... Prior to pipeline\nmode this has no use (since everything ends with ReadyForQuery which\ninvolves a flush) but it does seem to have use in pipeline mode.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n", "msg_date": "Wed, 23 Jun 2021 12:22:46 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jun-23, Boris Kolpackov wrote:\n\n> I think one change that is definitely needed is to make it clear that\n> the PQpipelineSync() call is not optional.\n> \n> I would also add a note saying that while the server starts processing\n> the pipeline immediately, it may buffer the results and the only way\n> to flush them out is to call PQpipelineSync().\n\nAren't those two things one and the same? I propose the attached.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"El hombre nunca sabe de lo que es capaz hasta que lo intenta\" (C. Dickens)", "msg_date": "Wed, 23 Jun 2021 12:55:40 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "Boris Kolpackov <boris@codesynthesis.com> writes:\n\n> What's strange here is that the first PQgetResult() call (marked with ???)\n> returns NULL instead of result for INSERT #1 as in the first call sequence.\n\nI've hit another similar case except now an unexpected NULL result is\nreturned in the middle of PGRES_PIPELINE_ABORTED result sequence. 
The\ncall sequence is as follows:\n\nPQsendQueryPrepared() # INSERT #1\nPQflush()\nPQsendQueryPrepared() # INSERT #2\nPQflush()\n...\nPQsendQueryPrepared() # INSERT #251 -- insert duplicate PK\nPQflush()\n...\nPQsendQueryPrepared() # INSERT #343\nPQflush()\nPQconsumeInput() # At this point select() indicates we can read.\nPQgetResult() # NULL -- unexpected but skipped (see prev. email)\nPQgetResult() # INSERT #1\nPQgetResult() # NULL\nPQgetResult() # INSERT #2\nPQgetResult() # NULL\n...\nPQgetResult() # INSERT #251 error result, SQLSTATE 23505\nPQgetResult() # NULL\nPQgetResult() # INSERT #252 PGRES_PIPELINE_ABORTED\nPQgetResult() # NULL\nPQgetResult() # INSERT #253 PGRES_PIPELINE_ABORTED\nPQgetResult() # NULL\n...\nPQgetResult() # INSERT #343 NULL (???)\n\nNotice that result #343 corresponds to the last PQsendQueryPrepared()\ncall made before the socket became readable (it's not always 343 but\naround there).\n\nFor completeness, the statement in question is:\n\nINSERT INTO pgsql_bulk_object (id, idata, sdata) VALUES ($1, $2, $3)\n\nThe table:\n\nCREATE TABLE pgsql_bulk_object (\n id BIGINT NOT NULL PRIMARY KEY,\n idata BIGINT NOT NULL,\n sdata TEXT NOT NULL);\n\nAnd the data inserted is in the form:\n\n1, 1, \"1\"\n2, 2, \"2\"\n...\n\n\n", "msg_date": "Thu, 24 Jun 2021 11:00:20 +0200", "msg_from": "Boris Kolpackov <boris@codesynthesis.com>", "msg_from_op": true, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n\n> Curious -- I just noticed that the server understands a message 'H' that\n> requests a flush of the server buffer. However, libpq has no way to\n> generate that message as far as I can see. I think you could use that\n> to request results from the pipeline, without the sync point.\n> \n> I wonder if it's worth adding an entry point to libpq to allow access to\n> this. \n\nYes, I think this can be useful. 
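Concretely, I imagine using it along these lines (a hypothetical sketch, not
working code, using the PQrequestFlush() name you proposed and the
values/lengths/formats arrays from my test program):

```c
/* Hypothetical sketch: request intermediate results every BATCH
 * statements without establishing a synchronization point. */
for (size_t i = 0; i != n; ++i)
{
  PQsendQueryPrepared (conn, "ins", 3, values, lengths, formats, 1);

  if ((i + 1) % BATCH == 0)
    PQrequestFlush (conn);  /* proposed call: queues the 'H' message */
}

PQpipelineSync (conn);      /* still needed as the error recovery boundary */
PQflush (conn);             /* push the client-side buffer to the server */
```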
For example, an application may wish\nto receive the result as soon as possible in case it's used as input\nto some further computation.\n\n\n> PQrequestFlush() or something like that ...\n\nI think I would prefer PQflushResult() or something along these\nlines (\"request\" is easy to misinterpret as \"client request\").\n\n\n", "msg_date": "Thu, 24 Jun 2021 11:06:42 +0200", "msg_from": "Boris Kolpackov <boris@codesynthesis.com>", "msg_from_op": true, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n\n> Subject: [PATCH] Clarify that pipeline sync is mandatory\n> \n> ---\n> doc/src/sgml/libpq.sgml | 6 ++++--\n> 1 file changed, 4 insertions(+), 2 deletions(-)\n> \n> diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml\n> index 441cc0da3a..0217f8d8c7 100644\n> --- a/doc/src/sgml/libpq.sgml\n> +++ b/doc/src/sgml/libpq.sgml\n> @@ -5103,10 +5103,12 @@ int PQflush(PGconn *conn);\n> The server executes statements, and returns results, in the order the\n> client sends them. 
The server will begin executing the commands in the\n> pipeline immediately, not waiting for the end of the pipeline.\n> + Do note that results are buffered on the server side; a synchronization\n> + point, establshied with <function>PQpipelineSync</function>, is necessary\n> + in order for all results to be flushed to the client.\n\ns/establshied/established/\n\nOtherwise, LGTM, thanks!\n\n\n", "msg_date": "Thu, 24 Jun 2021 11:10:41 +0200", "msg_from": "Boris Kolpackov <boris@codesynthesis.com>", "msg_from_op": true, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jun-23, Boris Kolpackov wrote:\n\n> If, however, I execute it by checking for results before sending the\n> next INSERT, I get the following call sequence:\n> \n> PQsendQueryPrepared() # INSERT #1\n> PQflush()\n> PQsendQueryPrepared() # INSERT #2\n> PQflush()\n> ...\n> PQsendQueryPrepared() # INSERT #~400\n> PQflush()\n> PQconsumeInput() # At this point select() indicates we can read.\n> PQgetResult() # NULL (???)\n> PQgetResult() # INSERT #1\n> PQgetResult() # NULL\n> PQgetResult() # INSERT #2\n> PQgetResult() # NULL\n> ...\n> \n> \n> What's strange here is that the first PQgetResult() call (marked with ???)\n> returns NULL instead of result for INSERT #1 as in the first call sequence.\n> Interestingly, if I skip it, the rest seems to progress as expected.\n\nYeah, I agree that there's a problem in the libpq state machine. 
I'm\nlooking into it now.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n", "msg_date": "Thu, 24 Jun 2021 11:54:50 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jun-23, Boris Kolpackov wrote:\n\n> If, however, I execute it by checking for results before sending the\n> next INSERT, I get the following call sequence:\n> \n> PQsendQueryPrepared() # INSERT #1\n> PQflush()\n> PQsendQueryPrepared() # INSERT #2\n> PQflush()\n> ...\n> PQsendQueryPrepared() # INSERT #~400\n> PQflush()\n> PQconsumeInput() # At this point select() indicates we can read.\n> PQgetResult() # NULL (???)\n> PQgetResult() # INSERT #1\n> PQgetResult() # NULL\n> PQgetResult() # INSERT #2\n> PQgetResult() # NULL\n\nIIUC the problem is that PQgetResult is indeed not prepared to deal with\na result the first time until after the queue has been \"prepared\", and\nthis happens on calling PQpipelineSync. But I think the formulation in\nthe attached patch works too, and the resulting code is less surprising.\n\nI wrote a test case that works as you describe, and indeed with the\noriginal code it gets a NULL initially; that disappears with the\nattached patch. Can you give it a try?\n\nThanks\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"Escucha y olvidar�s; ve y recordar�s; haz y entender�s\" (Confucio)", "msg_date": "Thu, 24 Jun 2021 18:32:20 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n\n> IIUC the problem is that PQgetResult is indeed not prepared to deal with\n> a result the first time until after the queue has been \"prepared\", and\n> this happens on calling PQpipelineSync. 
But I think the formulation in\n> the attached patch works too, and the resulting code is less surprising.\n> \n> I wrote a test case that works as you describe, and indeed with the\n> original code it gets a NULL initially; that disappears with the\n> attached patch. Can you give it a try?\n\nYes, I can confirm this appears to have addressed the first issue,\nthanks! The second issue [1], however, is still there even with\nthis patch.\n\n[1] https://www.postgresql.org/message-id/boris.20210624103805%40codesynthesis.com\n\n\n", "msg_date": "Fri, 25 Jun 2021 09:13:53 +0200", "msg_from": "Boris Kolpackov <boris@codesynthesis.com>", "msg_from_op": true, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jun-24, Boris Kolpackov wrote:\n\n> Boris Kolpackov <boris@codesynthesis.com> writes:\n> \n> > What's strange here is that the first PQgetResult() call (marked with ???)\n> > returns NULL instead of result for INSERT #1 as in the first call sequence.\n> \n> I've hit another similar case except now an unexpected NULL result is\n> returned in the middle of PGRES_PIPELINE_ABORTED result sequence. The\n> call sequence is as follows:\n\nI haven't been able to get this to break for me yet, and I probably\nwon't today. In the meantime, here's patches for the first one. 
The\ntest added by 0003 fails, and then 0004 fixes it.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W", "msg_date": "Fri, 25 Jun 2021 19:50:10 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jun-25, Alvaro Herrera wrote:\n\n> From 071757645ee0f9f15f57e43447d7c234deb062c0 Mon Sep 17 00:00:00 2001\n> From: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> Date: Fri, 25 Jun 2021 16:02:00 -0400\n> Subject: [PATCH v2 2/4] Add PQrequestFlush()\n\nI forgot to mention:\n\n> +/*\n> + * Send request for server to flush its buffer\n> + */\n> +int\n> +PQrequestFlush(PGconn *conn)\n> +{\n> +\tif (!conn)\n> +\t\treturn 0;\n> +\n> +\t/* Don't try to send if we know there's no live connection. */\n> +\tif (conn->status != CONNECTION_OK)\n> +\t{\n> +\t\tappendPQExpBufferStr(&conn->errorMessage,\n> +\t\t\t\t\t\t\t libpq_gettext(\"no connection to the server\\n\"));\n> +\t\treturn 0;\n> +\t}\n> +\n> +\t/* Can't send while already busy, either, unless enqueuing for later */\n> +\tif (conn->asyncStatus != PGASYNC_IDLE &&\n> +\t\tconn->pipelineStatus == PQ_PIPELINE_OFF)\n> +\t{\n> +\t\tappendPQExpBufferStr(&conn->errorMessage,\n> +\t\t\t\t\t\t\t libpq_gettext(\"another command is already in progress\\n\"));\n> +\t\treturn false;\n> +\t}\n> +\n> +\tif (pqPutMsgStart('H', conn) < 0 ||\n> +\t\tpqPutMsgEnd(conn) < 0)\n> +\t{\n> +\t\treturn 0;\n> +\t}\n> +\t/* XXX useless without a flush ...? */\n> +\tpqFlush(conn);\n> +\n> +\treturn 1;\n> +}\n\nI'm not sure if it's a good idea for PQrequestFlush to itself flush\nlibpq's buffer. We can just document that PQflush is required ...\nopinions?\n\n(I didn't try PQrequestFlush in any scenarios other than the test case I\nadded.)\n\n-- \n�lvaro Herrera Valdivia, Chile\nVoy a acabar con todos los humanos / con los humanos yo acabar�\nvoy a acabar con todos (bis) / con todos los humanos acabar� �acabar�! 
(Bender)\n\n\n", "msg_date": "Fri, 25 Jun 2021 19:52:41 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "It hadn't occurred to me that I should ask the release management team\nabout adding a new function to libpq this late in the cycle.\n\nPlease do note that the message type used in the new routine is currenly\nunused and uncovered -- see line 4660 here:\n\nhttps://coverage.postgresql.org/src/backend/tcop/postgres.c.gcov.html\n\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"I'm impressed how quickly you are fixing this obscure issue. I came from \nMS SQL and it would be hard for me to put into words how much of a better job\nyou all are doing on [PostgreSQL].\"\n Steve Midgley, http://archives.postgresql.org/pgsql-sql/2008-08/msg00000.php\n\n\n", "msg_date": "Sat, 26 Jun 2021 17:40:15 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On Sat, Jun 26, 2021 at 05:40:15PM -0400, Alvaro Herrera wrote:\n> It hadn't occurred to me that I should ask the release management team\n> about adding a new function to libpq this late in the cycle.\n\nI have not followed the thread in details, but if you think that this\nimproves the feature in the long term even for 14, I have no\npersonally no objections to the addition of a new function, or even a \nchange of behavior in one of the existing functions. The beta cycle\nis here for such adjustments.\n--\nMichael", "msg_date": "Sun, 27 Jun 2021 10:30:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n\n> I forgot to mention:\n> \n> > +\t/* XXX useless without a flush ...? 
*/\n> > +\tpqFlush(conn);\n> > +\n> > +\treturn 1;\n> > +}\n> \n> I'm not sure if it's a good idea for PQrequestFlush to itself flush\n> libpq's buffer. We can just document that PQflush is required ...\n> opinions?\n\nYes, I think not calling PQflush() gives more flexibility. For example,\nan application may \"insert\" them periodically after a certain number of\nqueries but call PQflush() at different intervals.\n\n\n", "msg_date": "Mon, 28 Jun 2021 14:56:43 +0200", "msg_from": "Boris Kolpackov <boris@codesynthesis.com>", "msg_from_op": true, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jun-24, Boris Kolpackov wrote:\n\n> I've hit another similar case except now an unexpected NULL result is\n> returned in the middle of PGRES_PIPELINE_ABORTED result sequence. The\n> call sequence is as follows:\n> \n> PQsendQueryPrepared() # INSERT #1\n> PQflush()\n> PQsendQueryPrepared() # INSERT #2\n> PQflush()\n> ...\n> PQsendQueryPrepared() # INSERT #251 -- insert duplicate PK\n> PQflush()\n> ...\n> PQsendQueryPrepared() # INSERT #343\n> PQflush()\n> PQconsumeInput() # At this point select() indicates we can read.\n> PQgetResult() # NULL -- unexpected but skipped (see prev. 
email)\n> PQgetResult() # INSERT #1\n> PQgetResult() # NULL\n> PQgetResult() # INSERT #2\n> PQgetResult() # NULL\n> ...\n> PQgetResult() # INSERT #251 error result, SQLSTATE 23505\n> PQgetResult() # NULL\n> PQgetResult() # INSERT #252 PGRES_PIPELINE_ABORTED\n> PQgetResult() # NULL\n> PQgetResult() # INSERT #253 PGRES_PIPELINE_ABORTED\n> PQgetResult() # NULL\n> ...\n> PQgetResult() # INSERT #343 NULL (???)\n> \n> Notice that result #343 corresponds to the last PQsendQueryPrepared()\n> call made before the socket became readable (it's not always 343 but\n> around there).\n\nNo luck reproducing any problems with this sequence as yet.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n", "msg_date": "Tue, 29 Jun 2021 09:54:49 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n\n> No luck reproducing any problems with this sequence as yet.\n\nCan you try to recreate the call flow as implemented here (it's\npretty much plain old C if you ignore error handling):\n\nhttps://git.codesynthesis.com/cgit/odb/libodb-pgsql/tree/odb/pgsql/statement.cxx?h=bulk#n789\n\nExcept replacing `continue` on line 966 with `break` (that will\nmake the code read-biased which I find triggers the error more\nreadily, though I was able to trigger it both ways).\n\nThen in an explicit transaction send 500 prepared insert statements\n(see previous email for details) with 250'th having a duplicate\nprimary key.\n\n\n", "msg_date": "Tue, 29 Jun 2021 16:14:46 +0200", "msg_from": "Boris Kolpackov <boris@codesynthesis.com>", "msg_from_op": true, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jun-29, Boris Kolpackov wrote:\n\n> Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n> \n> > No luck reproducing any problems with this sequence as yet.\n> \n> Can you try to recreate the call flow 
as implemented here (it's\n> pretty much plain old C if you ignore error handling):\n\n> https://git.codesynthesis.com/cgit/odb/libodb-pgsql/tree/odb/pgsql/statement.cxx?h=bulk#n789\n\nHmm, I can't see what's different there than what I get on my test\nprogram. Can you please do PQtrace() on the connection and send the\nresulting trace file? Maybe I can compare the traffic to understand\nwhat's the difference.\n\n(I do see that you're doing PQisBusy that I'm not. Going to try adding\nit next.)\n\nThanks\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n", "msg_date": "Tue, 29 Jun 2021 11:03:57 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jun-29, Alvaro Herrera wrote:\n\n> (I do see that you're doing PQisBusy that I'm not. Going to try adding\n> it next.)\n\nAh, yes it does. I can reproduce this now. I thought PQconsumeInput\nwas sufficient, but it's not: you have to have the PQgetResult in there\ntoo. Looking ...\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n", "msg_date": "Tue, 29 Jun 2021 12:50:56 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n\n> Ah, yes it does. I can reproduce this now. I thought PQconsumeInput\n> was sufficient, but it's not: you have to have the PQgetResult in there\n> too. Looking ...\n\nAny progress on fixing this?\n\n\n", "msg_date": "Tue, 6 Jul 2021 07:48:31 +0200", "msg_from": "Boris Kolpackov <boris@codesynthesis.com>", "msg_from_op": true, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jul-06, Boris Kolpackov wrote:\n\n> Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n> \n> > Ah, yes it does. I can reproduce this now. 
I thought PQconsumeInput\n> > was sufficient, but it's not: you have to have the PQgetResult in there\n> > too. Looking ...\n> \n> Any progress on fixing this?\n\nYeah ... the problem as I understand it is that the state transition in\nlibpq when the connection is in pipeline aborted state is bogus. I'll\npost a patch in a bit.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n#error \"Operator lives in the wrong universe\"\n (\"Use of cookies in real-time system development\", M. Gleixner, M. Mc Guire)\n\n\n", "msg_date": "Tue, 6 Jul 2021 10:58:42 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jul-06, Boris Kolpackov wrote:\n\n> Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n> \n> > Ah, yes it does. I can reproduce this now. I thought PQconsumeInput\n> > was sufficient, but it's not: you have to have the PQgetResult in there\n> > too. Looking ...\n> \n> Any progress on fixing this?\n\nCan you please try with this patch?\n\nThanks\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/", "msg_date": "Tue, 6 Jul 2021 13:47:36 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n\n> Can you please try with this patch?\n\nI don't get any difference in behavior with this patch. That is, I\nstill get the unexpected NULL result. Does it make a difference for\nyour reproducer?\n\n\n", "msg_date": "Wed, 7 Jul 2021 11:38:33 +0200", "msg_from": "Boris Kolpackov <boris@codesynthesis.com>", "msg_from_op": true, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jul-07, Boris Kolpackov wrote:\n\n> I don't get any difference in behavior with this patch. 
That is, I\n> still get the unexpected NULL result. Does it make a difference for\n> your reproducer?\n\nYes, the behavior changes for my repro. Is it possible for you to\nshare a full program I can compile and run, please? Thanks\n\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Most hackers will be perfectly comfortable conceptualizing users as entropy\n sources, so let's move on.\" (Nathaniel Smith)\n\n\n", "msg_date": "Wed, 7 Jul 2021 07:04:27 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n\n> On 2021-Jul-07, Boris Kolpackov wrote:\n> \n> > I don't get any difference in behavior with this patch. That is, I\n> > still get the unexpected NULL result. Does it make a difference for\n> > your reproducer?\n> \n> Yes, the behavior changes for my repro. Is it possible for you to\n> share a full program I can compile and run, please?\n\nHere is the test sans the connection setup:\n\n-----------------------------------------------------------------------\n\n#include <libpq-fe.h>\n\n#include <errno.h>\n#include <stdio.h>\n#include <string.h>\n#include <stddef.h>\n#include <assert.h>\n#include <sys/select.h>\n\n// Note: hack.\n//\n#include <arpa/inet.h>\n#define htonll(x) ((((long long)htonl(x)) << 32) + htonl((x) >> 32))\n\nstatic const size_t columns = 3;\n\nstruct data\n{\n long long id;\n long long idata;\n const char* sdata;\n};\n\nstatic char* values[columns];\nstatic int lengths[columns];\nstatic int formats[columns] = {1, 1, 1};\n\nstatic const unsigned int types[columns] = {\n 20, // int8\n 20, // int8\n 25 // text\n};\n\nstatic void\ninit (const struct data* d)\n{\n values[0] = (char*)&d->id;\n lengths[0] = sizeof (d->id);\n\n values[1] = (char*)&d->idata;\n lengths[1] = sizeof (d->idata);\n\n values[2] = (char*)d->sdata;\n lengths[2] = strlen 
(d->sdata);\n}\n\nstatic void\nexecute (PGconn* conn, const struct data* ds, size_t n)\n{\n int sock = PQsocket (conn);\n assert (sock != -1);\n\n if (PQsetnonblocking (conn, 1) == -1 ||\n PQenterPipelineMode (conn) == 0)\n assert (false);\n\n // True if we've written and read everything, respectively.\n //\n bool wdone = false;\n bool rdone = false;\n\n size_t wn = 0;\n size_t rn = 0;\n\n while (!rdone)\n {\n fd_set wds;\n if (!wdone)\n {\n FD_ZERO (&wds);\n FD_SET (sock, &wds);\n }\n\n fd_set rds;\n FD_ZERO (&rds);\n FD_SET (sock, &rds);\n\n if (select (sock + 1, &rds, wdone ? NULL : &wds, NULL, NULL) == -1)\n {\n if (errno == EINTR)\n continue;\n\n assert (false);\n }\n\n // Try to minimize the chance of blocking the server by first processing\n // the result and then sending more queries.\n //\n if (FD_ISSET (sock, &rds))\n {\n if (PQconsumeInput (conn) == 0)\n assert (false);\n\n while (PQisBusy (conn) == 0)\n {\n //fprintf (stderr, \"PQgetResult %zu\\n\", rn);\n\n PGresult* res = PQgetResult (conn);\n assert (res != NULL);\n ExecStatusType stat = PQresultStatus (res);\n\n if (stat == PGRES_PIPELINE_SYNC)\n {\n assert (wdone && rn == n);\n PQclear (res);\n rdone = true;\n break;\n }\n\n if (stat == PGRES_FATAL_ERROR)\n {\n const char* s = PQresultErrorField (res, PG_DIAG_SQLSTATE);\n\n if (strcmp (s, \"23505\") == 0)\n fprintf (stderr, \"duplicate id at %zu\\n\", rn);\n }\n\n PQclear (res);\n assert (rn != n);\n ++rn;\n\n // We get a NULL result after each query result.\n //\n {\n PGresult* end = PQgetResult (conn);\n assert (end == NULL);\n }\n }\n }\n\n if (!wdone && FD_ISSET (sock, &wds))\n {\n // Send queries until we get blocked (write-biased). 
This feels like\n // a better overall strategy to keep the server busy compared to\n // sending one query at a time and then re-checking if there is\n // anything to read because the results of INSERT/UPDATE/DELETE are\n // presumably small and quite a few of them can get buffered before\n // the server gets blocked.\n //\n for (;;)\n {\n if (wn != n)\n {\n //fprintf (stderr, \"PQsendQueryPrepared %zu\\n\", wn);\n\n init (ds + wn);\n\n if (PQsendQueryPrepared (conn,\n \"persist_object\",\n (int)(columns),\n values,\n lengths,\n formats,\n 1) == 0)\n assert (false);\n\n if (++wn == n)\n {\n if (PQpipelineSync (conn) == 0)\n assert (false);\n }\n }\n\n // PQflush() result:\n //\n // 0 -- success (queue is now empty)\n // 1 -- blocked\n // -1 -- error\n //\n int r = PQflush (conn);\n assert (r != -1);\n\n if (r == 0)\n {\n if (wn != n)\n {\n // If we continue here, then we are write-biased. And if we\n // break, then we are read-biased.\n //\n#if 1\n break;\n#else\n continue;\n#endif\n }\n\n wdone = true;\n }\n\n break; // Blocked or done.\n }\n }\n }\n\n if (PQexitPipelineMode (conn) == 0 ||\n PQsetnonblocking (conn, 0) == -1)\n assert (false);\n}\n\nstatic void\ntest (PGconn* conn)\n{\n const size_t batch = 500;\n struct data ds[batch];\n\n for (size_t i = 0; i != batch; ++i)\n {\n ds[i].id = htonll (i == batch / 2 ? 
i - 1 : i); // Cause duplicate PK.\n ds[i].idata = htonll (i);\n ds[i].sdata = \"abc\";\n }\n\n // Prepare the statement.\n //\n {\n PGresult* res = PQprepare (\n conn,\n \"persist_object\",\n \"INSERT INTO \\\"pgsql_bulk_object\\\" \"\n \"(\\\"id\\\", \"\n \"\\\"idata\\\", \"\n \"\\\"sdata\\\") \"\n \"VALUES \"\n \"($1, $2, $3)\",\n (int)(columns),\n types);\n assert (PQresultStatus (res) == PGRES_COMMAND_OK);\n PQclear (res);\n }\n\n // Begin transaction.\n //\n {\n PGresult* res = PQexec (conn, \"begin\");\n assert (PQresultStatus (res) == PGRES_COMMAND_OK);\n PQclear (res);\n }\n\n execute (conn, ds, batch);\n\n // Commit transaction.\n //\n {\n PGresult* res = PQexec (conn, \"commit\");\n assert (PQresultStatus (res) == PGRES_COMMAND_OK);\n PQclear (res);\n }\n}\n\n-----------------------------------------------------------------------\n\nUse the following statements to (re)create the table:\n\nDROP TABLE IF EXISTS \"pgsql_bulk_object\" CASCADE;\n\nCREATE TABLE \"pgsql_bulk_object\" (\n \"id\" BIGINT NOT NULL PRIMARY KEY,\n \"idata\" BIGINT NOT NULL,\n \"sdata\" TEXT NOT NULL);\n\nIt fails consistently for me when running against the local PostgreSQL\n9.5 server (connecting via the UNIX socket):\n\nduplicate id at 250\ndriver: driver.cxx:105: void execute(PGconn*, const data*, size_t): Assertion `res != NULL' failed.\n\n\n", "msg_date": "Wed, 7 Jul 2021 17:09:41 +0200", "msg_from": "Boris Kolpackov <boris@codesynthesis.com>", "msg_from_op": true, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jul-07, Boris Kolpackov wrote:\n\n> Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n> \n> > On 2021-Jul-07, Boris Kolpackov wrote:\n> > \n> > > I don't get any difference in behavior with this patch. That is, I\n> > > still get the unexpected NULL result. Does it make a difference for\n> > > your reproducer?\n> > \n> > Yes, the behavior changes for my repro. 
Is it possible for you to\n> > share a full program I can compile and run, please?\n> \n> Here is the test sans the connection setup:\n\nThanks, looking now. (I was trying to compile libodb and everything, and\nI went down a rabbit hole of configure failing with mysterious m4 errors ...)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 7 Jul 2021 11:31:22 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jul-07, Boris Kolpackov wrote:\n\n> // Try to minimize the chance of blocking the server by first processing\n> // the result and then sending more queries.\n> //\n> if (FD_ISSET (sock, &rds))\n> {\n> if (PQconsumeInput (conn) == 0)\n> assert (false);\n> \n> while (PQisBusy (conn) == 0)\n> {\n> //fprintf (stderr, \"PQgetResult %zu\\n\", rn);\n> \n> PGresult* res = PQgetResult (conn);\n> assert (res != NULL);\n> ExecStatusType stat = PQresultStatus (res);\n\nHmm ... aren't you trying to read more results than you sent queries? 
I\nthink there should be a break out of that block when that happens (which\nmeans the read of the PGRES_PIPELINE_SYNC needs to be out of there too).\nWith this patch, the program seems to work well for me.\n\n***************\n*** 94,112 ****\n while (PQisBusy (conn) == 0)\n {\n //fprintf (stderr, \"PQgetResult %zu\\n\", rn);\n \n PGresult* res = PQgetResult (conn);\n assert (res != NULL);\n ExecStatusType stat = PQresultStatus (res);\n \n- if (stat == PGRES_PIPELINE_SYNC)\n- {\n- assert (wdone && rn == n);\n- PQclear (res);\n- rdone = true;\n- break;\n- }\n- \n if (stat == PGRES_FATAL_ERROR)\n {\n const char* s = PQresultErrorField (res, PG_DIAG_SQLSTATE);\n--- 94,110 ----\n while (PQisBusy (conn) == 0)\n {\n //fprintf (stderr, \"PQgetResult %zu\\n\", rn);\n+ if (rn >= wn)\n+ {\n+ if (wdone)\n+ rdone = true;\n+ break;\n+ }\n \n PGresult* res = PQgetResult (conn);\n assert (res != NULL);\n ExecStatusType stat = PQresultStatus (res);\n \n if (stat == PGRES_FATAL_ERROR)\n {\n const char* s = PQresultErrorField (res, PG_DIAG_SQLSTATE);\n***************\n*** 190,195 ****\n--- 188,201 ----\n break; // Blocked or done.\n }\n }\n+ \n+ if (rdone)\n+ {\n+ PGresult *res = PQgetResult(conn);\n+ assert(PQresultStatus(res) == PGRES_PIPELINE_SYNC);\n+ PQclear(res);\n+ break;\n+ }\n }\n \n if (PQexitPipelineMode (conn) == 0 ||\n***************\n*** 246,248 ****\n--- 252,269 ----\n PQclear (res);\n }\n }\n+ \n+ int main(int argc, char **argv)\n+ {\n+ PGconn *conn = PQconnectdb(\"\");\n+ if (PQstatus(conn) != CONNECTION_OK)\n+ {\n+ fprintf(stderr, \"connection failed: %s\\n\",\n+ PQerrorMessage(conn));\n+ return 1;\n+ }\n+ \n+ test(conn);\n+ }\n+ \n+ \n\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 7 Jul 2021 13:30:46 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "Alvaro Herrera 
<alvaro.herrera@2ndquadrant.com> writes:\n\n> Hmm ... aren't you trying to read more results than you sent queries?\n\nHm, but should I be able to? Or, to put another way, should PQisBusy()\nindicate there is a result available without me sending a query for it?\nThat sounds very counter-intuitive to me.\n\n\n", "msg_date": "Thu, 8 Jul 2021 08:12:25 +0200", "msg_from": "Boris Kolpackov <boris@codesynthesis.com>", "msg_from_op": true, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jul-08, Boris Kolpackov wrote:\n\n> Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n> \n> > Hmm ... aren't you trying to read more results than you sent queries?\n> \n> Hm, but should I be able to? Or, to put another way, should PQisBusy()\n> indicate there is a result available without me sending a query for it?\n> That sounds very counter-intuitive to me.\n\nThat seems a fair complaint, but I think PQisBusy is doing the right\nthing per its charter. It is documented as \"would PQgetResult block?\"\nand it is returning correctly that PQgetResult would not block in that\nsituation, because no queries are pending. I think we would regret\nchanging PQisBusy in the way you suggest.\n\nI think your expectation is that we would have an entry point for easy\niteration; a way to say \"if there's a result set to be had, can I have\nit please, otherwise I'm done iterating\". That seems a reasonable ask,\nbut PQisBusy is not that. Maybe it would be PQisResultPending() or\nsomething like that. 
I again have to ask the RMT what they think of\nadding such a thing to libpq this late in the cycle.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 8 Jul 2021 09:57:32 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n\n> On 2021-Jul-08, Boris Kolpackov wrote:\n> \n> > Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n> > \n> > > Hmm ... aren't you trying to read more results than you sent queries?\n> > \n> > Hm, but should I be able to? Or, to put another way, should PQisBusy()\n> > indicate there is a result available without me sending a query for it?\n> > That sounds very counter-intuitive to me.\n> \n> That seems a fair complaint, but I think PQisBusy is doing the right\n> thing per its charter. It is documented as \"would PQgetResult block?\"\n> and it is returning correctly that PQgetResult would not block in that\n> situation, because no queries are pending.\n\nWell, that's one way to view it. But in this case one can say that\nthe entire pipeline is still \"busy\" since we haven't seen the\nPQpipelineSync() call. So maybe we could change the charter only\nfor this special situation (that is, inside the pipeline)?\n\nBut I agree, it may not be worth the trouble and a note in the\ndocumentation may be an acceptable \"solution\".\n\nI am happy to go either way, just let me know what it will be. 
And\nalso if the latest patch to libpq that you have shared[1] is still\nnecessary.\n\n[1] https://www.postgresql.org/message-id/202107061747.tlss7f2somqf%40alvherre.pgsql\n\n\n", "msg_date": "Thu, 8 Jul 2021 17:07:44 +0200", "msg_from": "Boris Kolpackov <boris@codesynthesis.com>", "msg_from_op": true, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jul-08, Boris Kolpackov wrote:\n\n> Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n\n> > That seems a fair complaint, but I think PQisBusy is doing the right\n> > thing per its charter. It is documented as \"would PQgetResult block?\"\n> > and it is returning correctly that PQgetResult would not block in that\n> > situation, because no queries are pending.\n> \n> Well, that's one way to view it. But in this case one can say that\n> the entire pipeline is still \"busy\" since we haven't seen the\n> PQpipelineSync() call. So maybe we could change the charter only\n> for this special situation (that is, inside the pipeline)?\n\nTo be honest, I am hesitant to change the charter in that way; I fear\nit may have consequences I don't foresee. I think the workaround is not\n*that* bad. On the other hand, since we explicitly made PQpipelineSync\nnot mandatory, it would be confusing to say that PQisBusy requires\nPQpipelineSync to work properly.\n\n> But I agree, it may not be worth the trouble and a note in the\n> documentation may be an acceptable \"solution\".\n\nI'm having a bit of trouble documenting this. I modified the paragraph in the\npipeline mode docs to read:\n\n <para>\n <function>PQisBusy</function>, <function>PQconsumeInput</function>, etc\n operate as normal when processing pipeline results. Note that if no\n queries are pending receipt of the corresponding results,\n <function>PQisBusy</function> returns 0.\n </para>\n\nThis seems a bit silly/obvious to me, but it may be enlightening to\npeople writing apps to use pipeline mode. 
Do you find this sufficient?\n(I tried to add something to the PQisBusy description, but it sounded\nsillier.)\n\n> I am happy to go either way, just let me know what it will be. And\n> also if the latest patch to libpq that you have shared[1] is still\n> necessary.\n> \n> [1] https://www.postgresql.org/message-id/202107061747.tlss7f2somqf%40alvherre.pgsql\n\nYes, this patch (or some version thereof) is still needed. I didn't\ntest the modified version of your program without it, but my repro\ndefinitely misbehaved without it.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 8 Jul 2021 13:29:23 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n\n> To be honest, I am hesitant to change the charter in that way; I fear\n> it may have consequences I don't foresee. 
I think the workaround is not\n> *that* bad.\n\nOk, fair enough. I've updated my code to account for this and it seems\nto be working fine now.\n\n\n> I'm having a bit of trouble documenting this. I modified the paragraph in the\n> pipeline mode docs to read:\n> \n> <para>\n> <function>PQisBusy</function>, <function>PQconsumeInput</function>, etc\n> operate as normal when processing pipeline results. Note that if no\n> queries are pending receipt of the corresponding results,\n> <function>PQisBusy</function> returns 0.\n> </para>\n\nHow about the following for the second sentence:\n\n\"In particular, a call to <function>PQisBusy</function> in the middle\nof a pipeline returns 0 if all the results for queries issued so far\nhave been consumed.\"\n\n\n", "msg_date": "Thu, 8 Jul 2021 20:31:32 +0200", "msg_from": "Boris Kolpackov <boris@codesynthesis.com>", "msg_from_op": true, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" }, { "msg_contents": "On 2021-Jul-08, Boris Kolpackov wrote:\n\n> Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n> \n> > To be honest, I am hesitant to change the charter in that way; I fear\n> > it may have consequences I don't foresee. I think the workaround is not\n> > *that* bad.\n> \n> Ok, fair enough. I've updated my code to account for this and it seems\n> to be working fine now.\n\nGreat, thanks. 
I have pushed the fix, so beta3 (when it is released)\nshould work well for you.\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ab09679429009bfed4bd894a6187afde0b7bdfcd\n\n> How about the following for the second sentence:\n> \n> \"In particular, a call to <function>PQisBusy</function> in the middle\n> of a pipeline returns 0 if all the results for queries issued so far\n> have been consumed.\"\n\nI used this wording, thanks.\n\nOn 2021-Jul-08, Alvaro Herrera wrote:\n\n> Looking at this again, I noticed that I could probably do away with the\n> switch on pipelineStatus, and just call pqPipelineProcessQueue in all\n> cases when appending commands to the queue; I *think* that will do the\n> right thing in all cases. *Except* that I don't know what will happen\n> if the program is in the middle of processing a result in single-row\n> mode, and then sends another query: that would wipe out the pending\n> results of the query being processed ... but maybe that problem can\n> already occur in some other way.\n\nI tried this and it doesn't work. It doesn't seem interesting to\npursue anyway, so I'll just drop the idea. (I did notice that the\ncomment on single-row mode was wrong, though, since\npqPipelineProcessQueue does nothing in READY_MORE state, which is what\nit is in the middle of processing a result.)\n\nThanks for all the help in testing and reviewing,\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"El hombre nunca sabe de lo que es capaz hasta que lo intenta\" (C. Dickens)\n\n\n", "msg_date": "Sat, 10 Jul 2021 12:26:56 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Pipeline mode and PQpipelineSync()" } ]
[ { "msg_contents": "Hi,\n\nThere's a couple of calls to GetMultiXactIdMembers() in heapam.c which\nsubsequently pfree() the returned \"members\" pointer (pass-by-reference\nparameter) if it's non-NULL.\nHowever, there's an error return within GetMultiXactIdMembers() that\nreturns -1 without NULLing out \"members\", and the callers have simply\nallocated that pointer on the stack without initializing it to NULL.\nIf that error condition were to ever happen, pfree() would likely be\ncalled with a junk value.\nAlso note that there's another error return (about 15 lines further\ndown) in GetMultiXactIdMembers() that returns -1 and does NULL out\n\"members\", so the handling is inconsistent.\nThe attached patch adds the NULLing out of the \"members\" pointer in\nthe first error case, to fix that and guard against possible pfree()\non error by such callers.\n\nI also note that there are other callers which pfree() \"members\" based\non the returned \"nmembers\" value, and this is also inconsistent.\nSome pfree() \"members\" if nmembers >= 0, while others pfree() it if nmembers > 0.\nAfter looking at the code for a while, it looks like the \"nmembers ==\n0\" case can't actually happen (right?). 
I decided not to mess with any\nof the calling code.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Wed, 16 Jun 2021 20:22:46 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Issue with some calls to GetMultiXactIdMembers()" }, { "msg_contents": "On 16/06/2021 13:22, Greg Nancarrow wrote:\n> Hi,\n> \n> There's a couple of calls to GetMultiXactIdMembers() in heapam.c which\n> subsequently pfree() the returned \"members\" pointer (pass-by-reference\n> parameter) if it's non-NULL.\n> However, there's an error return within GetMultiXactIdMembers() that\n> returns -1 without NULLing out \"members\", and the callers have simply\n> allocated that pointer on the stack without initializing it to NULL.\n> If that error condition were to ever happen, pfree() would likely be\n> called with a junk value.\n> Also note that there's another error return (about 15 lines further\n> down) in GetMultiXactIdMembers() that returns -1 and does NULL out\n> \"members\", so the handling is inconsistent.\n> The attached patch adds the NULLing out of the \"members\" pointer in\n> the first error case, to fix that and guard against possible pfree()\n> on error by such callers.\n\nThanks! Committed with a few additional cleanups.\n\n> I also note that there are other callers which pfree() \"members\" based\n> on the returned \"nmembers\" value, and this is also inconsistent.\n> Some pfree() \"members\" if nmembers>= 0, while others pfree() it if nmembers>0.\n> After looking at the code for a while, it looks like the \"nmembers ==\n> 0\" case can't actually happen (right?). I decided not to mess with any\n> of the calling code.\n\nI added an assertion that it never returns nmembers==0.\n\n- Heikki\n\n\n", "msg_date": "Thu, 17 Jun 2021 15:57:41 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Issue with some calls to GetMultiXactIdMembers()" } ]
[ { "msg_contents": "Hi,\n\nxlog.c is very large. We've split off some functions from it over the \nyears, but it's still large and it keeps growing.\n\nAttached is a proposal to split functions related to WAL replay, standby \nmode, fetching files from archive, computing the recovery target and so \non, to new source file called xlogrecovery.c. That's a fairly clean \nsplit. StartupXLOG() stays in xlog.c, but much of the code from it has \nbeen moved to new functions InitWalRecovery(), PerformWalRecovery() and \nEndWalRecovery(). The general idea is that xlog.c is still responsible \nfor orchestrating the servers startup, but xlogrecovery.c is responsible \nfor figuring out whether WAL recovery is needed, performing it, and \ndeciding when it can stop.\n\nThere's surely more refactoring we could do. xlog.c has a lot of global \nvariables, with similar names but slightly different meanings for \nexample. (Quick: what's the difference between InRedo, InRecovery, \nInArchiveRecovery, and RecoveryInProgress()? I have to go check the code \nevery time to remind myself). But this patch tries to just move source \ncode around for clarity.\n\nThere are small changes in the order that some of things are done in \nStartupXLOG(), for readability. I tried to be careful and check that the \nchanges are safe, but a second pair of eyes would be appreciated on that.\n\n- Heikki", "msg_date": "Wed, 16 Jun 2021 16:30:45 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Split xlog.c" }, { "msg_contents": "Hi,\n\nOn 2021-06-16 16:30:45 +0300, Heikki Linnakangas wrote:\n> xlog.c is very large. 
We've split off some functions from it over the years,\n> but it's still large and it keeps growing.\n>\n> Attached is a proposal to split functions related to WAL replay, standby\n> mode, fetching files from archive, computing the recovery target and so on,\n> to new source file called xlogrecovery.c.\n\nWohoo!\n\nI think this is desperately needed. I personally am more concerned about\nthe size of StartupXLOG() etc than the size of xlog.c itself, but since\nboth reasonably are done at the same time...\n\n\n> That's a fairly clean split. StartupXLOG() stays in xlog.c, but much of the\n> code from it has been moved to new functions InitWalRecovery(),\n> PerformWalRecovery() and EndWalRecovery(). The general idea is that xlog.c is\n> still responsible for orchestrating the servers startup, but xlogrecovery.c\n> is responsible for figuring out whether WAL recovery is needed, performing\n> it, and deciding when it can stop.\n\nFor some reason \"recovery\" bothers me a tiny bit, even though it's obviously\nalready in use. Using \"apply\", or \"replay\" seems more descriptive to me, but\nwhatever.\n\n\n> There's surely more refactoring we could do. xlog.c has a lot of global\n> variables, with similar names but slightly different meanings for example.\n> (Quick: what's the difference between InRedo, InRecovery, InArchiveRecovery,\n> and RecoveryInProgress()? I have to go check the code every time to remind\n> myself). But this patch tries to just move source code around for clarity.\n\nAgreed, it's quite chaotic. I think a good initial step to clean up that mess\nwould be to just collect the relevant variables into one or two structs.\n\n\n> There are small changes in the order that some of things are done in\n> StartupXLOG(), for readability. 
I tried to be careful and check that the\n> changes are safe, but a second pair of eyes would be appreciated on that.\n\nI think it might be worth trying to break this into a bit more incremental\nchanges - it's a huge commit and mixing code movement with code changes makes\nit really hard to review the non-movement portion.\n\n> +void\n> +PerformWalRecovery(void)\n> +{\n\n> +\n> +\tif (record != NULL)\n> +\t{\n> +\t\tErrorContextCallback errcallback;\n> +\t\tTimestampTz xtime;\n> +\t\tPGRUsage\tru0;\n> +\t\tXLogRecPtr\tReadRecPtr;\n> +\t\tXLogRecPtr\tEndRecPtr;\n> +\n> +\t\tpg_rusage_init(&ru0);\n> +\n> +\t\tInRedo = true;\n> +\n> +\t\t/* Initialize resource managers */\n> +\t\tfor (rmid = 0; rmid <= RM_MAX_ID; rmid++)\n> +\t\t{\n> +\t\t\tif (RmgrTable[rmid].rm_startup != NULL)\n> +\t\t\t\tRmgrTable[rmid].rm_startup();\n> +\t\t}\n> +\n> +\t\tereport(LOG,\n> +\t\t\t\t(errmsg(\"redo starts at %X/%X\",\n> +\t\t\t\t\t\tLSN_FORMAT_ARGS(xlogreader->ReadRecPtr))));\n> +\n> +\t\t/*\n> +\t\t * main redo apply loop\n> +\t\t */\n> +\t\tdo\n> +\t\t{\n\nIf we're refactoring all of this, can we move the apply-one-record part into\nits own function as well? Happy to do that as a followup or precursor patch\ntoo. The per-record logic has grown complicated enough to make that quite\nworthwhile imo - and imo most of the time one either is interested in the\nper-record work, or in the rest of the StartupXLog/PerformWalRecovery logic.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Jun 2021 16:00:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "On 17/06/2021 02:00, Andres Freund wrote:\n> On 2021-06-16 16:30:45 +0300, Heikki Linnakangas wrote:\n>> That's a fairly clean split. StartupXLOG() stays in xlog.c, but much of the\n>> code from it has been moved to new functions InitWalRecovery(),\n>> PerformWalRecovery() and EndWalRecovery(). 
The general idea is that xlog.c is\n>> still responsible for orchestrating the servers startup, but xlogrecovery.c\n>> is responsible for figuring out whether WAL recovery is needed, performing\n>> it, and deciding when it can stop.\n> \n> For some reason \"recovery\" bothers me a tiny bit, even though it's obviously\n> already in use. Using \"apply\", or \"replay\" seems more descriptive to me, but\n> whatever.\n\nI think of \"recovery\" as a broader term than applying or replaying. \nReplaying the WAL records is one part of recovery. But yeah, the \ndifference is not well-defined and we tend to use those terms \ninterchangeably.\n\n>> There's surely more refactoring we could do. xlog.c has a lot of global\n>> variables, with similar names but slightly different meanings for example.\n>> (Quick: what's the difference between InRedo, InRecovery, InArchiveRecovery,\n>> and RecoveryInProgress()? I have to go check the code every time to remind\n>> myself). But this patch tries to just move source code around for clarity.\n> \n> Agreed, it's quite chaotic. I think a good initial step to clean up that mess\n> would be to just collect the relevant variables into one or two structs.\n\nNot a bad idea.\n\n>> There are small changes in the order that some of things are done in\n>> StartupXLOG(), for readability. I tried to be careful and check that the\n>> changes are safe, but a second pair of eyes would be appreciated on that.\n> \n> I think it might be worth trying to break this into a bit more incremental\n> changes - it's a huge commit and mixing code movement with code changes makes\n> it really hard to review the non-movement portion.\n\nFair. Attached is a new patch set which contains a few smaller commits \nthat reorder things in xlog.c, and then the big commit that moves things \nto xlogrecovery.c.\n\n> If we're refactoring all of this, can we move the apply-one-record part into\n> its own function as well? 
Happy to do that as a followup or precursor patch\n> too. The per-record logic has grown complicated enough to make that quite\n> worthwhile imo - and imo most of the time one either is interested in the\n> per-record work, or in the rest of the StartupXLog/PerformWalRecovery logic.\n\nAdded a commit to do that, as a follow-up. Yeah, I agree that makes sense.\n\n- Heikki", "msg_date": "Tue, 22 Jun 2021 00:06:41 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "On Tue, Jun 22, 2021 at 2:37 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 17/06/2021 02:00, Andres Freund wrote:\n> > On 2021-06-16 16:30:45 +0300, Heikki Linnakangas wrote:\n> >> That's a fairly clean split. StartupXLOG() stays in xlog.c, but much of the\n> >> code from it has been moved to new functions InitWalRecovery(),\n> >> PerformWalRecovery() and EndWalRecovery(). The general idea is that xlog.c is\n> >> still responsible for orchestrating the servers startup, but xlogrecovery.c\n> >> is responsible for figuring out whether WAL recovery is needed, performing\n> >> it, and deciding when it can stop.\n> >\n> > For some reason \"recovery\" bothers me a tiny bit, even though it's obviously\n> > already in use. Using \"apply\", or \"replay\" seems more descriptive to me, but\n> > whatever.\n>\n> I think of \"recovery\" as a broader term than applying or replaying.\n> Replaying the WAL records is one part of recovery. But yeah, the\n> difference is not well-defined and we tend to use those terms\n> interchangeably.\n>\n> >> There's surely more refactoring we could do. xlog.c has a lot of global\n> >> variables, with similar names but slightly different meanings for example.\n> >> (Quick: what's the difference between InRedo, InRecovery, InArchiveRecovery,\n> >> and RecoveryInProgress()? I have to go check the code every time to remind\n> >> myself). 
But this patch tries to just move source code around for clarity.\n> >\n> > Agreed, it's quite chaotic. I think a good initial step to clean up that mess\n> > would be to just collect the relevant variables into one or two structs.\n>\n> Not a bad idea.\n>\n> >> There are small changes in the order that some of the things are done in\n> >> StartupXLOG(), for readability. I tried to be careful and check that the\n> >> changes are safe, but a second pair of eyes would be appreciated on that.\n> >\n> > I think it might be worth trying to break this into a bit more incremental\n> > changes - it's a huge commit and mixing code movement with code changes makes\n> > it really hard to review the non-movement portion.\n>\n> Fair. Attached is a new patch set which contains a few smaller commits\n> that reorder things in xlog.c, and then the big commit that moves things\n> to xlogrecovery.c.\n>\n> > If we're refactoring all of this, can we move the apply-one-record part into\n> > its own function as well? Happy to do that as a followup or precursor patch\n> > too. The per-record logic has grown complicated enough to make that quite\n> > worthwhile imo - and imo most of the time one either is interested in the\n> > per-record work, or in the rest of the StartupXLog/PerformWalRecovery logic.\n>\n> Added a commit to do that, as a follow-up. Yeah, I agree that makes sense.\n\nThe patch does not apply on Head anymore, could you rebase and post a\npatch? 
I'm changing the status to \"Waiting for Author\".\n\nHere's a rebase.\n\n- Heikki", "msg_date": "Sat, 31 Jul 2021 00:33:34 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "Hi,\n\nI think it'd make sense to apply the first few patches now, they seem\nuncontroversial and simple enough.\n\n\nOn 2021-07-31 00:33:34 +0300, Heikki Linnakangas wrote:\n> From 0cfb852e320bd8fe83c588d25306d5b4c57b9da6 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> Date: Mon, 21 Jun 2021 22:14:58 +0300\n> Subject: [PATCH 1/7] Don't use O_SYNC or similar when opening signal file to\n> fsync it.\n\n+1\n\n> From 83f00e90bb818ed21bb14580f19f58c4ade87ef7 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> Date: Wed, 9 Jun 2021 12:05:53 +0300\n> Subject: [PATCH 2/7] Remove unnecessary 'restoredFromArchive' global variable.\n> \n> It might've been useful for debugging purposes, but meh. There's\n> 'readSource' which does almost the same thing.\n\n+1\n\n\n> From ec53470c8d271c01b8d2e12b92863501c3a9b4cf Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> Date: Mon, 21 Jun 2021 16:12:50 +0300\n> Subject: [PATCH 3/7] Extract code to get reason that recovery was stopped to a\n> function.\n\n+1\n\n\n> +/*\n> + * Create a comment for the history file to explain why and where\n> + * timeline changed.\n> + */\n> +static char *\n> +getRecoveryStopReason(void)\n> +{\n> +\tchar\t\treason[200];\n> +\n> +\tif (recoveryTarget == RECOVERY_TARGET_XID)\n> +\t\tsnprintf(reason, sizeof(reason),\n> +\t\t\t\t \"%s transaction %u\",\n> +\t\t\t\t recoveryStopAfter ? \"after\" : \"before\",\n> +\t\t\t\t recoveryStopXid);\n> +\telse if (recoveryTarget == RECOVERY_TARGET_TIME)\n> +\t\tsnprintf(reason, sizeof(reason),\n> +\t\t\t\t \"%s %s\\n\",\n> +\t\t\t\t recoveryStopAfter ? 
\"after\" : \"before\",\n> +\t\t\t\t timestamptz_to_str(recoveryStopTime));\n> +\telse if (recoveryTarget == RECOVERY_TARGET_LSN)\n> +\t\tsnprintf(reason, sizeof(reason),\n> +\t\t\t\t \"%s LSN %X/%X\\n\",\n> +\t\t\t\t recoveryStopAfter ? \"after\" : \"before\",\n> +\t\t\t\t LSN_FORMAT_ARGS(recoveryStopLSN));\n> +\telse if (recoveryTarget == RECOVERY_TARGET_NAME)\n> +\t\tsnprintf(reason, sizeof(reason),\n> +\t\t\t\t \"at restore point \\\"%s\\\"\",\n> +\t\t\t\t recoveryStopName);\n> +\telse if (recoveryTarget == RECOVERY_TARGET_IMMEDIATE)\n> +\t\tsnprintf(reason, sizeof(reason), \"reached consistency\");\n> +\telse\n> +\t\tsnprintf(reason, sizeof(reason), \"no recovery target specified\");\n> +\n> +\treturn pstrdup(reason);\n> +}\n\nI guess it would make sense to change this over to a switch at some\npoint, so we can get warnings if a new type of target is added...\n\n\n> From 70f688f9576b7939d18321444fd59c51c402ce23 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> Date: Mon, 21 Jun 2021 21:25:37 +0300\n> Subject: [PATCH 4/7] Move InRecovery and standbyState global vars to\n> xlogutils.c.\n> \n> They are used in code that is sometimes called from a redo routine,\n> so xlogutils.c seems more appropriate. That's where we have other helper\n> functions used by redo routines.\n\nFWIW, with some compilers on some linux distributions there is an efficiency\ndifference between accessing a variable (or calling a function) defined in the\ncurrent translation unit or a separate one (with the separate TU going through\nthe GOT). I don't think it's a problem here, but it's worth keeping in mind\nwhile moving things around. 
We should probably adjust our compiler settings\nto address that at some point :(\n\n\n> From da11050ca890ce0311d9e97d2832a6a61bc43e10 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> Date: Fri, 18 Jun 2021 12:15:04 +0300\n> Subject: [PATCH 5/7] Move code around in StartupXLOG().\n> \n> This is the order that things will happen with the next commit, this\n> makes it more explicit. To aid review, I added \"BEGIN/END function\"\n> comments to mark which blocks of code are moved to separate functions\n> in the next commit.\n\n> ---\n> src/backend/access/transam/xlog.c | 605 ++++++++++++++++--------------\n> 1 file changed, 315 insertions(+), 290 deletions(-)\n> \n> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> index efb3ca273ed..b9d96d6de26 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -882,7 +882,6 @@ static MemoryContext walDebugCxt = NULL;\n> \n> static void readRecoverySignalFile(void);\n> static void validateRecoveryParameters(void);\n> -static void exitArchiveRecovery(TimeLineID endTLI, XLogRecPtr endOfLog);\n> static bool recoveryStopsBefore(XLogReaderState *record);\n> static bool recoveryStopsAfter(XLogReaderState *record);\n> static char *getRecoveryStopReason(void);\n> @@ -5592,111 +5591,6 @@ validateRecoveryParameters(void)\n> \t}\n> }\n> \n> -/*\n> - * Exit archive-recovery state\n> - */\n> -static void\n> -exitArchiveRecovery(TimeLineID endTLI, XLogRecPtr endOfLog)\n> -{\n\nI don't really understand the motivation for this part of the change? This\nkind of seems to run counter to the stated goals of the patch series? 
Seems\nlike it'd need a different commit message at least?\n\n\n> +\t/*---- BEGIN FreeWalRecovery ----*/\n> +\n> \t/* Shut down xlogreader */\n> \tif (readFile >= 0)\n> \t{\n\nFWIW, FreeWalRecovery() for something that closes and unlinks files among\nother things doesn't seem like a great name.\n\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 30 Jul 2021 16:11:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "On 31/07/2021 02:11, Andres Freund wrote:\n> Hi,\n> \n> I think it'd make sense to apply the first few patches now, they seem\n> uncontroversial and simple enough.\n\nPushed those, thanks!\n\n>> From da11050ca890ce0311d9e97d2832a6a61bc43e10 Mon Sep 17 00:00:00 2001\n>> From: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n>> Date: Fri, 18 Jun 2021 12:15:04 +0300\n>> Subject: [PATCH 5/7] Move code around in StartupXLOG().\n>>\n>> This is the order that things will happen with the next commit, this\n>> makes it more explicit. 
To aid review, I added \"BEGIN/END function\"\n>> comments to mark which blocks of code are moved to separate functions\n>> in the next commit.\n> \n>> ---\n>> src/backend/access/transam/xlog.c | 605 ++++++++++++++++--------------\n>> 1 file changed, 315 insertions(+), 290 deletions(-)\n>>\n>> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n>> index efb3ca273ed..b9d96d6de26 100644\n>> --- a/src/backend/access/transam/xlog.c\n>> +++ b/src/backend/access/transam/xlog.c\n>> @@ -882,7 +882,6 @@ static MemoryContext walDebugCxt = NULL;\n>> \n>> static void readRecoverySignalFile(void);\n>> static void validateRecoveryParameters(void);\n>> -static void exitArchiveRecovery(TimeLineID endTLI, XLogRecPtr endOfLog);\n>> static bool recoveryStopsBefore(XLogReaderState *record);\n>> static bool recoveryStopsAfter(XLogReaderState *record);\n>> static char *getRecoveryStopReason(void);\n>> @@ -5592,111 +5591,6 @@ validateRecoveryParameters(void)\n>> \t}\n>> }\n>> \n>> -/*\n>> - * Exit archive-recovery state\n>> - */\n>> -static void\n>> -exitArchiveRecovery(TimeLineID endTLI, XLogRecPtr endOfLog)\n>> -{\n> \n> I don't really understand the motivation for this part of the change? This\n> kind of seems to run counter to the stated goals of the patch series? Seems\n> like it'd need a different commit message at least?\n\nHmm. Some parts of exitArchiveRecovery are being moved into \nxlogrecovery.c, so it becomes smaller than before. Maybe there's still \nenough code left there that a separate function makes sense. I'll try \nthat differently.\n\n>> +\t/*---- BEGIN FreeWalRecovery ----*/\n>> +\n>> \t/* Shut down xlogreader */\n>> \tif (readFile >= 0)\n>> \t{\n> \n> FWIW, FreeWalRecovery() for something that closes and unlinks files among\n> other things doesn't seem like a great name.\n\nRename to CloseWalRecovery(), maybe? 
I'll try that.\n\n- Heikki\n\n\n", "msg_date": "Sat, 31 Jul 2021 10:54:12 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "On 31/07/2021 10:54, Heikki Linnakangas wrote:\n> On 31/07/2021 02:11, Andres Freund wrote:\n>>> @@ -5592,111 +5591,6 @@ validateRecoveryParameters(void)\n>>> \t}\n>>> }\n>>> \n>>> -/*\n>>> - * Exit archive-recovery state\n>>> - */\n>>> -static void\n>>> -exitArchiveRecovery(TimeLineID endTLI, XLogRecPtr endOfLog)\n>>> -{\n>>\n>> I don't really understand the motivation for this part of the change? This\n>> kind of seems to run counter to the stated goals of the patch series? Seems\n>> like it'd need a different commit message at least?\n> \n> Hmm. Some parts of exitArchiveRecovery are being moved into\n> xlogrecovery.c, so it becomes smaller than before. Maybe there's still\n> enough code left there that a separate function makes sense. I'll try\n> that differently.\n\nSo, my issue with exitArchiveRecovery() was that after this refactoring, \nthe function didn't really exit archive recovery anymore. The \nInArchiveRecovery flag is already cleared earlier, in xlogrecovery.c. I \nrenamed exitArchiveRecovery() to XLogInitNewTimeline(), and moved the \nunlinking of the signal files into the caller. The function now only \ninitializes the first WAL segment on the new timeline, and the new name \nreflects that. I'm pretty happy with this now.\n\n>>> +\t/*---- BEGIN FreeWalRecovery ----*/\n>>> +\n>>> \t/* Shut down xlogreader */\n>>> \tif (readFile >= 0)\n>>> \t{\n>>\n>> FWIW, FreeWalRecovery() for something that closes and unlinks files among\n>> other things doesn't seem like a great name.\n\nRename to CloseWalRecovery(), maybe? 
New \npatch set attached.\n\n- Heikki", "msg_date": "Sat, 31 Jul 2021 15:24:23 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "After applying 0001 and 0002 I got a bunch of compile problems:\n\nIn file included from /pgsql/source/master/src/include/postgres.h:46,\n from /pgsql/source/master/src/backend/access/transam/xlog.c:39:\n/pgsql/source/master/src/backend/access/transam/xlog.c: In function 'StartupXLOG':\n/pgsql/source/master/src/backend/access/transam/xlog.c:5310:10: error: 'lastPageBeginPtr' undeclared (first use in this function)\n Assert(lastPageBeginPtr == EndOfLog);\n ^~~~~~~~~~~~~~~~\n/pgsql/source/master/src/include/c.h:848:9: note: in definition of macro 'Assert'\n if (!(condition)) \\\n ^~~~~~~~~\n/pgsql/source/master/src/backend/access/transam/xlog.c:5310:10: note: each undeclared identifier is reported only once for each function it appears in\n Assert(lastPageBeginPtr == EndOfLog);\n ^~~~~~~~~~~~~~~~\n/pgsql/source/master/src/include/c.h:848:9: note: in definition of macro 'Assert'\n if (!(condition)) \\\n ^~~~~~~~~\nmake[4]: *** [../../../../src/Makefile.global:938: xlog.o] Error 1\n/pgsql/source/master/src/backend/access/transam/xlog.c:5310:10: error: use of undeclared identifier 'lastPageBeginPtr'\n Assert(lastPageBeginPtr == EndOfLog);\n ^\n1 error generated.\nmake[4]: *** [../../../../src/Makefile.global:1070: xlog.bc] Error 1\nmake[4]: Target 'all' not remade because of errors.\nmake[3]: *** [/pgsql/source/master/src/backend/common.mk:39: transam-recursive] Error 2\nmake[3]: Target 'all' not remade because of errors.\nmake[2]: *** [/pgsql/source/master/src/backend/common.mk:39: access-recursive] Error 2\nmake[2]: Target 'install' not remade because of errors.\nmake[1]: *** [Makefile:42: install-backend-recurse] Error 2\nmake[1]: Target 'install' not remade because of errors.\nmake: *** [GNUmakefile:11: install-src-recurse] Error 2\nmake: Target 
'install' not remade because of errors.\n/pgsql/source/master/contrib/pg_prewarm/autoprewarm.c: In function 'apw_load_buffers':\n/pgsql/source/master/contrib/pg_prewarm/autoprewarm.c:301:9: warning: implicit declaration of function 'AllocateFile'; did you mean 'load_file'? [-Wimplicit-function-declaration]\n file = AllocateFile(AUTOPREWARM_FILE, \"r\");\n ^~~~~~~~~~~~\n load_file\n/pgsql/source/master/contrib/pg_prewarm/autoprewarm.c:301:7: warning: assignment to 'FILE *' {aka 'struct _IO_FILE *'} from 'int' makes pointer from integer without a cast [-Wint-conversion]\n file = AllocateFile(AUTOPREWARM_FILE, \"r\");\n ^\n/pgsql/source/master/contrib/pg_prewarm/autoprewarm.c:342:2: warning: implicit declaration of function 'FreeFile' [-Wimplicit-function-declaration]\n FreeFile(file);\n ^~~~~~~~\n/pgsql/source/master/contrib/pg_prewarm/autoprewarm.c: In function 'apw_dump_now':\n/pgsql/source/master/contrib/pg_prewarm/autoprewarm.c:630:7: warning: assignment to 'FILE *' {aka 'struct _IO_FILE *'} from 'int' makes pointer from integer without a cast [-Wint-conversion]\n file = AllocateFile(transient_dump_file_path, \"w\");\n ^\n/pgsql/source/master/contrib/pg_prewarm/autoprewarm.c:694:9: warning: implicit declaration of function 'durable_rename'; did you mean 'errtablecolname'? [-Wimplicit-function-declaration]\n (void) durable_rename(transient_dump_file_path, AUTOPREWARM_FILE, ERROR);\n ^~~~~~~~~~~~~~\n errtablecolname\n\n\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sat, 31 Jul 2021 15:33:36 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "On 31/07/2021 22:33, Alvaro Herrera wrote:\n> After applying 0001 and 0002 I got a bunch of compile problems:\n\nAh sorry, I had assertions disabled and didn't notice. 
Fixed version \nattached.\n\n- Heikki", "msg_date": "Sun, 1 Aug 2021 12:49:19 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "On 01/08/2021 12:49, Heikki Linnakangas wrote:\n> On 31/07/2021 22:33, Alvaro Herrera wrote:\n>> After applying 0001 and 0002 I got a bunch of compile problems:\n> \n> Ah sorry, I had assertions disabled and didn't notice. Fixed version\n> attached.\n\nHere is another rebase.\n\n- Heikki", "msg_date": "Thu, 16 Sep 2021 11:23:46 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "Hello.\n\nAt Thu, 16 Sep 2021 11:23:46 +0300, Heikki Linnakangas <hlinnaka@iki.fi> wrote in \n> Here is another rebase.\n\nI have several comments on this.\n\n0001:\n\n I understand this is almost simple relocation of code fragments. But\n it seems introducing some behavioral changes.\n\n PublishStartProcessInformation() was changed to be called while\n crash recovery or on standalone server. Maybe it is harmless and\n might be more consistent, so I'm fine with it.\n\n Another call to ResetUnloggedRelations is added before redo start,\n that seems fine.\n\n recoveryStopReason is always acquired but it is used only after\n archive recovery. I'm not sure about reason for the variable to\n live in that wide context. Couldn't we remove the variable then\n call getRecoveryStopReason() directly at the required place?\n\n0002:\n\n heapam.c, clog.c, twophase.c, dbcommands.c doesn't need xlogrecvoer.h.\n\n> XLogRecCtl\n\n \"Rec\" looks like Record. 
Couldn't we use \"Rcv\", \"Recov\" or just\n \"Recovery\" instead?\n\n> TimeLineID\tPrevTimeLineID;\n> TransactionId oldestActiveXID;\n> bool\t\tpromoted = false;\n> EndOfWalRecoveryInfo *endofwal;\n> bool\t\thaveTblspcMap;\n\n This is just a matter of taste but the \"endofwal\" looks somewhat\n alien among the variables.\n\n\nxlog.c:\n+void\n+SwitchIntoArchiveRecovery(XLogRecPtr EndRecPtr)\n\n Isn't this a function of xlogrecovery.c? Or rather, isn't\n minRecoveryPoint-related stuff part of xlogrecovery.c?\n\n\n0003:\n\n Just looks fine. I might want to remove the parameter xlogreader\n from ApplyWalRecord, but that seems to cause more harm than good.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 17 Sep 2021 12:10:17 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "On Fri, Sep 17, 2021 at 12:10:17PM +0900, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> At Thu, 16 Sep 2021 11:23:46 +0300, Heikki Linnakangas <hlinnaka@iki.fi> wrote in \n> > Here is another rebase.\n> \n> I have several comments on this.\n> \n\nHi Heikki,\n\nAre we waiting for a rebased version? Currently this does not apply to head.\nI'll mark this as WoA and move it to next CF.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Mon, 4 Oct 2021 20:09:56 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "On Thu, Sep 16, 2021 at 4:24 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> Here is another rebase.\n\nLike probably everyone else who has an opinion on the topic, I like\nthe idea of splitting xlog.c. I don't have a fully formed opinion on\nthe changes yet, but it seems to be a surprisingly equal split, which\nseems good. 
Since I just spent a bunch of time being frustrated by\nThisTimeLineID, I'm pleased to see that the giant amount of code that\nmoves to xlogrecovery.c apparently ends up not needing that global\nvariable, which I think is excellent. Perhaps the amount of code that\nneeds that global variable can be further reduced in the future, maybe\neven to zero.\n\nI think that the small reorderings that you mention in your original\npost are the scary part: if we do stuff in a different order, maybe\nthings won't work. In the rest of this email I'm going to try to go\nthrough and analyze that. I think it might have been a bit easier if\nyou'd outlined the things you moved and the reasons why you thought\nthat was OK; as it is, I have to reverse-engineer it. But I'd like to\nsee this go forward, either as-is or with whatever modifications seem\nto be needed, so I'm going to give it a try.\n\n- RelationCacheInitFileRemove() moves later. The code over which it\nmoves seems to include sanity checks and initializations of various\nbits of in-memory state, but nothing that touches anything on disk.\nTherefore I don't see how this can break anything. I also agree that\nthe new placement of the call is more logical than the old one, since\nin the current code it's kind of in the middle of a bunch of things\nthat, as your patch highlights, are really all about initializing WAL\nrecovery, and this is a separate kind of a thing. Post-patch, it ends\nup near where we initialize a bunch of other subsystems. Cool.\n\n- Some logic to (a) sanity-check the control file's REDO pointer, (b)\nset InRecovery = true, and (c) update various bits of control file\nstate in memory has been moved substantially earlier. The actual\nupdate of the control file on disk stays where it was before. At least\non first reading, I don't really like this. On the one hand, I don't\nsee a reason why it's a necessary prerequisite for splitting xlog.c. On\nthe other hand, it seems a bit dangerous. 
There's now ~8 calls to\nfunctions in other modules between the time you change things in\nmemory and the time that you call UpdateControlFile(). Perhaps none of\nthose functions can call anything that might in turn call\nUpdateControlFile() but I don't know why we should take the chance. Is\nthere some advantage to having the in-memory state out of sync with\nthe on-disk state across all that code?\n\n- Renaming backup_label and tablespace_map to .old is now done\nslightly earlier, just before pgstat_reset_all() and adjusting our notion\nof the minimum recovery point rather than just after. Seems OK.\n\n- The rm_startup() functions are now called later, only once we're\nsure that we have a WAL record to apply. Seems fine; slightly more\nefficient. Looks like the functions in question are just arranging to\nset up private memory contexts for the AMs that want them for WAL\nreplay, so they won't care if we skip that in some corner cases where\nthere's nothing to replay.\n\n- ResetUnloggedRelations(UNLOGGED_RELATION_INIT) is moved later. We'll\nnow do a few minor bookkeeping tasks like setting EndOfLog and\nEndOfLogTLI first, and we'll also now check whether we reached the\nminimum recovery point OK before doing this. This appears to me to be\na clear improvement, since checking whether the minimum recovery point\nhas been reached is fast, and resetting unlogged relations might be\nslow, and is pointless if we're just going to error out.\n\n- The recoveryWakeupLatch is disowned far later than before. I can't\nsee why this would hurt anything, but my first inclination was to\nprefer the existing placement of the call. We're only going to wait on\nthe latch while applying WAL, and the existing code seems to release\nit fairly promptly after it's done applying WAL, which seems to make\nsense. 
On the other hand, I can see that your intent was (I believe,\nanyway) to group it together with shutting down the xlog reader and\nremoving RECOVERYXLOG and RECOVERYHISTORY, and there doesn't seem to\nbe anything wrong with that idea.\n\n- The code to clear InArchiveRecovery and close the WAL segment we had\nopen moves earlier. I think it might be possible to fail\nAssert(InArchiveRecovery), because where you've moved this code, we\nhaven't yet verified that we reached the minimum recovery point. See\nthe comment which begins \"It's possible that archive recovery was\nrequested, but we don't know how far we need to replay the WAL before\nwe reach consistency.\" What if we reach that point, then fail the big\nhairy if-test and don't set InArchiveRecovery = true? In that case, we\ncan still do it later, in ReadRecord. But maybe that will never\nhappen. Actually it's not entirely clear to me that the assertion is\nbulletproof even where it is right now, but moving it earlier makes me\neven less confident. Possibly I just don't understand this well\nenough.\n\nIt's a little tempting, too, to see if you could somehow consolidate\nthe two places that do if (readFile >= 0) { close(readFile); readFile\n= -1 } down to one.\n\n- getRecoveryStopReason() is now called earlier than before, and is\nnow called whether or not ArchiveRecoveryRequested. This seems to just\nmove the point of initialization further from the point of use to no\nreal advantage, and I also think that the function is only designed to\ndo something useful for archive recovery, so calling it in other cases\njust seems confusing.\n\n- RECOVERYXLOG and RECOVERYHISTORY are now removed later than before.\nIt's now the last thing that happens before we enable WAL writes.\nDoesn't seem like it should hurt anything.\n\n- The \"archive recovery complete\" message is now logged after rather\nthan before writing and archiving a timeline history file. 
I think\nthat's likely an improvement.\n\nThat's all I have on 0001. Is this kind of review helpful?\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Oct 2021 15:06:41 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "> On 5 Oct 2021, at 03:09, Jaime Casanova <jcasanov@systemguards.com.ec> wrote:\n\n> Are we waiting for a rebased version? Currently this does not apply to head.\n> I'll mark this as WoA and move it to next CF.\n\nThis patch still doesn't apply, exacerbated by the recent ThisTimeLineID\nchanges in xlog.c. I'm marking this Returned with Feedback; please feel free\nto open a new entry when you have a rebase addressing Kyotaro's and Robert's\nreviews.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 15 Nov 2021 11:10:49 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "Here's a new version. It includes two new smaller commits, before the \nmain refactoring:\n\n1. Refactor setting XLP_FIRST_IS_OVERWRITE_CONTRECORD. I moved the code \nto set that flag from AdvanceXLInsertBuffer() into \nCreateOverwriteContrecordRecord(). That avoids the need for accessing \nthe global variable in AdvanceXLInsertBuffer(), which is nice with this \npatch set because I moved the global variables into xlogrecovery.c. For \ncomparison, when we are writing a continuation record, the \nXLP_FIRST_IS_CONTRECORD flag is also set by the caller, \nCopyXLogRecordToWAL(), not AdvanceXLInsertBuffer() itself. So I think \nthis is marginally more clear anyway.\n\n2. Use correct WAL position in error message on invalid XLOG page \nheader. This is the thing that Robert pointed out in the \"xlog.c: \nremoving ReadRecPtr and EndRecPtr\" thread. 
I needed to make the change \nfor the refactoring anyway, but since it's a minor bug fix, it seemed \nbetter to extract it to a separate commit, after all.\n\nResponses to Robert's comments below:\n\nOn 20/10/2021 22:06, Robert Haas wrote:\n> - Some logic to (a) sanity-check the control file's REDO pointer, (b)\n> set InRecovery = true, and (c) update various bits of control file\n> state in memory has been moved substantially earlier. The actual\n> update of the control file on disk stays where it was before. At least\n> on first reading, I don't really like this. On the one hand, I don't\n> see a reason why it's a necessary prerequisite for splitting xlog.c. On\n> the other hand, it seems a bit dangerous.\n\nThe new contents of the control file are determined by the checkpoint \nrecord, presence of backup label file, and whether we're doing archive \nrecovery. We have that information at hand in InitWalRecovery(), whereas \nthe caller doesn't know or care whether a backup label file was present, \nfor example. That's why I wanted to move that logic to InitWalRecovery().\n\nHowever, I was afraid of moving the actual call to UpdateControlFile() \nthere. That would be a bigger behavioral change. What if initializing \none of the subsystems fails? Currently, the control file is left \nunchanged, but if we called UpdateControlFile() earlier, then it would \nbe modified already.\n\n> There's now ~8 calls to functions in other modules between the time\n> you change things in memory and the time that you call\n> UpdateControlFile(). Perhaps none of those functions can call\n> anything that might in turn call UpdateControlFile() but I don't know\n> why we should take the chance. Is there some advantage to having the\n> in-memory state out of sync with the on-disk state across all that\n> code?\n\nThe functions that get called in between don't call UpdateControlFile() \nand don't affect what gets written there. It would be pretty \nquestionable if they did, even on master. 
But for the sake of the \nargument, let's see what would happen if they did:\n\nmaster: The later call to UpdateControlFile() writes out the same values \nagain. Unless the changed field was one of the following: 'state', \n'checkPoint', 'checkPointCopy', 'minRecoveryPoint', \n'minRecoveryPointTLI', 'backupStartPoint', 'backupEndRequired' or \n'time'. If it was one of those, then it may be overwritten with the \nvalues deduced from the starting checkpoint.\n\nAfter these patches: The later call to UpdateControlFile() writes out \nthe same values again, even if it was one of those fields.\n\nSeems like a wash to me. It's hard to tell which behavior would be the \ncorrect one.\n\nOn 'master', InRecovery might or might not already be set when we call \nthose functions. It is already set if there was a backup label file, but \nif we're doing recovery for any other reason, it's set only later. That's \npretty sloppy. We check InRecovery in various assertions, and it affects \nwhether UpdateMinRecoveryPoint() updates the control file or not, among \nother things. With these patches, InRecovery is always set at that point \n(or not, if recovery is not needed). That's a bit beside the point \nhere, but it highlights that the current coding isn't very robust either \nif those startup functions tried to modify the control file. I think \nthese patches make it a little better, or at least not worse.\n\n> - The code to clear InArchiveRecovery and close the WAL segment we had\n> open moves earlier. I think it might be possible to fail\n> Assert(InArchiveRecovery), because where you've moved this code, we\n> haven't yet verified that we reached the minimum recovery point. See\n> the comment which begins \"It's possible that archive recovery was\n> requested, but we don't know how far we need to replay the WAL before\n> we reach consistency.\" What if we reach that point, then fail the big\n> hairy if-test and don't set InArchiveRecovery = true? 
In that case, we\n> can still do it later, in ReadRecord. But maybe that will never\n> happen. Actually it's not entirely clear to me that the assertion is\n> bulletproof even where it is right now, but moving it earlier makes me\n> even less confident. Possibly I just don't understand this well\n> enough.\n\nHmm, yeah, this logic is hairy. I tried to find a case where that \nassertion would fail but couldn't find one. I believe it's correct, but \nwe could probably make it more clear.\n\nIn a nutshell, PerformWalRecovery() will never return if \n(ArchiveRecoveryRequested && !InArchiveRecovery). Why? There are two \nways that PerformWalRecovery() can return:\n\n1. After reaching end of WAL. ReadRecord() will always set \nInArchiveRecovery in that case, if ArchiveRecoveryRequested was set. It \nwon't return NULL without doing that.\n\n2. We reached the requested recovery target point. There's a check for \nthat case in PerformWalRecovery(); it will throw an \"ERROR: requested \nrecovery stop point is before consistent recovery point\" if that happens \nbefore InArchiveRecovery is set. 
Because reachedConsistency isn't set \nuntil crash recovery is finished.\n\nThat said, independently of this patch series, perhaps that assertion \nshould be changed into something like this:\n\n if (ArchiveRecoveryRequested)\n {\n- Assert(InArchiveRecovery);\n+ /*\n+ * If archive recovery was requested, we should not finish\n+ * recovery before starting archive recovery.\n+ *\n+ * There are other checks for this in PerformWalRecovery() so\n+ * this shouldn't happen, but let's be safe.\n+ */\n+ if (!InArchiveRecovery)\n+ elog(ERROR, \"archive recovery was requested, but recovery \nfinished before it started\");\n\n> It's a little tempting, too, to see if you could somehow consolidate\n> the two places that do if (readFile >= 0) { close(readFile); readFile\n> = -1 } down to one.\n\nYeah, I thought about that, but couldn't find a nice way to do it.\n\n> - getRecoveryStopReason() is now called earlier than before, and is \n> now called whether or not ArchiveRecoveryRequested. This seems to\n> just move the point of initialization further from the point of use\n> to no real advantage, and I also think that the function is only\n> designed to do something useful for archive recovery, so calling it\n> in other cases just seems confusing.\n\nOn the other hand, it's now closer to the actual end-of-recovery. The \nidea here is that it seems natural to return the reason that recovery \nended along with all the other end-of-recovery information, in the same \nEndOfWalRecoveryInfo struct.\n\nKyotaro commented on the same thing and suggested keeping the call \ngetRecoveryStopReason() where it was. That'd require exposing \ngetRecoveryStopReason() from xlogrecovery.c. Which isn't a big deal, we \ncould do it, but in general I tried to minimize the surface area between \nxlog.c and xlogrecovery.c. If getRecoveryStopReason() was a separate \nfunction, should standby_signal_file_found and \nrecovery_signal_file_found also be separate functions? 
I'd prefer to \ngather all the end-of-recovery information into one struct.\n\n> That's all I have on 0001. Is this kind of review helpful?\n\nYes, very helpful, thank you!\n\n- Heikki", "msg_date": "Tue, 23 Nov 2021 01:10:36 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "On 17/09/2021 06:10, Kyotaro Horiguchi wrote:\n> recoveryStopReason is always acquired but it is used only after\n> archive recovery. I'm not sure about reason for the variable to\n> live in that wide context. Couldn't we remove the variable then\n> call getRecoveryStopReason() directly at the required place?\n\nRobert commented on the same thing, see my reply there.\n\n> 0002:\n> \n> heapam.c, clog.c, twophase.c, dbcommands.c doesn't need xlogrecvoer.h.\n\nCleaned that up in v7, thanks!\n\n>> XLogRecCtl\n> \n> \"Rec\" looks like Record. Couldn't we use \"Rcv\", \"Recov\" or just\n> \"Recovery\" instead?\n\nI never made that association before, but now I cannot unsee it :-). I \nchanged it to XLogRecoveryCtl.\n\n>> TimeLineID\tPrevTimeLineID;\n>> TransactionId oldestActiveXID;\n>> bool\t\tpromoted = false;\n>> EndOfWalRecoveryInfo *endofwal;\n>> bool\t\thaveTblspcMap;\n> \n> This is just a matter of taste but the \"endofwal\" looks somewhat\n> alien in the variables.\n\nChanged to \"endOfRecoveryInfo\".\n\n> \n> xlog.c:\n> +void\n> +SwitchIntoArchiveRecovery(XLogRecPtr EndRecPtr)\n> \n> Isn't this a function of xlogrecovery.c? Or rather isn't\n> minRecoveryPoint-related stuff of xlogrecovery.c?\n\nUpdating the control file is xlog.c's responsibility. There are two \ndifferent minRecoveryPoints:\n\n1. xlogrecovery.c has a copy of the minRecoveryPoint from the control \nfile, so that it knows when we have reached consistency.\n\n2. 
xlog.c is responsible for updating the minRecoveryPoint in the \ncontrol file, after consistency has been reached.\n\nSwitchIntoArchiveRecovery() is called on the transition.\n\n- Heikki\n\n\n", "msg_date": "Tue, 23 Nov 2021 01:11:18 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "On 23/11/2021 01:10, Heikki Linnakangas wrote:\n> Here's a new version.\n\nAnd here's another rebase, now that Robert got rid of ReadRecPtr and \nEndRecPtr.\n\n- Heikki", "msg_date": "Wed, 24 Nov 2021 19:15:07 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "On Wed, Nov 24, 2021 at 12:16 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> And here's another rebase, now that Robert got rid of ReadRecPtr and\n> EndRecPtr.\n\nIn general, I think 0001 is a good idea, but the comment that says\n\"Set the XLP_FIRST_IS_OVERWRITE_CONTRECORD flag on the page header\"\nseems to me to be telling the reader about what's already obvious\ninstead of explaining to them the thing they might have missed.\nGetXLogBuffer() says that it's only safe to use if you hold a WAL\ninsertion lock and don't go backwards, and here you don't hold a WAL\ninsertion lock and I guess you're not going backwards only because\nyou're staying in exactly the same place? It seems to me that the only\nreason this is safe is because, at the time this is called, only the\nstartup process is able to write WAL, and therefore the race condition\nthat would otherwise exist does not. Even then, I wonder what keeps\nthe buffer from being flushed after we return from XLogInsert() and\nbefore we set the bit, and if the answer is that nothing prevents\nthat, whether that's OK. It might be good to talk about these issues\ntoo.\n\nJust to be clear, I'm not saying that I think the code is broken. 
But\nI am concerned about someone using this as precedent for code that\nruns in some other place, which would be highly likely to be broken,\nand the way to avoid that is for the comment to explain the tricky\npoints.\n\nAlso, you've named the parameter to this new function so that it's\nexactly the same as the global variable. I do approve of trying to\npass the value as a parameter instead of relying on a global variable,\nand I wonder if you could find a way to remove the global variable\nentirely. But if not, I think the function parameter and the global\nvariable should have different names, because otherwise it's easy for\nanyone reading the code to get confused about which one is being\nreferenced in any particular spot, and it's also hard to grep.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Nov 2021 14:44:38 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "On 24/11/2021 21:44, Robert Haas wrote:\n> On Wed, Nov 24, 2021 at 12:16 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> And here's another rebase, now that Robert got rid of ReadRecPtr and\n>> EndRecPtr.\n> \n> In general, I think 0001 is a good idea, but the comment that says\n> \"Set the XLP_FIRST_IS_OVERWRITE_CONTRECORD flag on the page header\"\n> seems to me to be telling the reader about what's already obvious\n> instead of explaining to them the thing they might have missed.\n> GetXLogBuffer() says that it's only safe to use if you hold a WAL\n> insertion lock and don't go backwards, and here you don't hold a WAL\n> insertion lock and I guess you're not going backwards only because\n> you're staying in exactly the same place? 
It seems to me that the only\n> reason this is safe is because, at the time this is called, only the\n> startup process is able to write WAL, and therefore the race condition\n> that would otherwise exist does not.\n\nYeah, its correctness depends on the fact that no other backend is \nallowed to write WAL.\n\n> Even then, I wonder what keeps\n> the buffer from being flushed after we return from XLogInsert() and\n> before we set the bit, and if the answer is that nothing prevents\n> that, whether that's OK. It might be good to talk about these issues\n> too.\n\nHmm. We don't advance LogwrtRqst.Write, so I think a concurrent \nXLogFlush() would not flush the page. But I agree, that's more \naccidental than by design and we should be more explicit about it.\n\nI changed the code so that it sets the XLP_FIRST_IS_OVERWRITE_CONTRECORD \nflag in the page header first, and inserts the record only after that. \nThat way, you don't \"go backwards\". I also added more sanity checks to \nverify that the record really is inserted where we expect.\n\n> Also, you've named the parameter to this new function so that it's\n> exactly the same as the global variable. I do approve of trying to\n> pass the value as a parameter instead of relying on a global variable,\n> and I wonder if you could find a way to remove the global variable\n> entirely. But if not, I think the function parameter and the global\n> variable should have different names, because otherwise it's easy for\n> anyone reading the code to get confused about which one is being\n> referenced in any particular spot, and it's also hard to grep.\n\nRenamed the parameter to 'pagePtr', which describes pretty well what it's \nused for in the function.\n\nAttached is a new patch set. It includes these changes to \nCreateOverwriteContrecordRecord(), and also a bunch of other small changes:\n\n- I moved the code to redo some XLOG record types from xlog_redo() to a \nnew function in xlogrecovery.c. 
This got rid of the \nHandleBackupEndRecord() callback function I had to add before. This \nchange is in a separate commit, for easier review. It might make sense \nto introduce a new rmgr for those record types, but didn't do that for now.\n\n- I reordered many of the functions in xlogrecord.c, to group together \nfunctions that are used in the initialization, and functions that are \ncalled for each WAL record.\n\n- Improved comments here and there.\n\n- I renamed checkXLogConsistency() to verifyBackupPageConsistency(). I \nthink it describes the function better. There are a bunch of other \nfunctions with check* prefix like CheckRecoveryConsistency, \nCheckTimeLineSwitch, CheckForStandbyTrigger that check for various \nconditions, so using \"check\" to mean \"verify\" here was a bit confusing.\n\nI think this is ready for commit now. I'm going to wait a day or two to \ngive everyone a chance to review these latest changes, and then push.\n\n- Heikki", "msg_date": "Fri, 17 Dec 2021 13:10:18 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "On 17/12/2021 13:10, Heikki Linnakangas wrote:\n> I think this is ready for commit now. I'm going to wait a day or two to\n> give everyone a chance to review these latest changes, and then push.\n\nIn last round of review, I spotted one bug: I had mixed up the meaning \nof EndOfLogTLI. It is the TLI in the *filename* of the WAL segment that \nwe read the last record from, which can be different from the TLI that \nthe last record is actually on. 
All existing tests were passing with \nthat bug, so I added a test case to cover that case.\n\nSo here's one more set of patches with that fixed, which I plan to \ncommit shortly.\n\n- Heikki", "msg_date": "Tue, 25 Jan 2022 12:12:40 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "On Tue, Jan 25, 2022 at 12:12:40PM +0200, Heikki Linnakangas wrote:\n> In last round of review, I spotted one bug: I had mixed up the meaning of\n> EndOfLogTLI. It is the TLI in the *filename* of the WAL segment that we read\n> the last record from, which can be different from the TLI that the last\n> record is actually on. All existing tests were passing with that bug, so I\n> added a test case to cover that case.\n\nFYI, this overlaps with a different patch sent recently, as of this\nthread:\nhttps://www.postgresql.org/message-id/CAAJ_b94Vjt5cXGza_1MkjLQWciNdEemsmiWuQj0d=M7JfjAa1g@mail.gmail.com\n--\nMichael", "msg_date": "Thu, 27 Jan 2022 15:34:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "On 27/01/2022 08:34, Michael Paquier wrote:\n> On Tue, Jan 25, 2022 at 12:12:40PM +0200, Heikki Linnakangas wrote:\n>> In last round of review, I spotted one bug: I had mixed up the meaning of\n>> EndOfLogTLI. It is the TLI in the *filename* of the WAL segment that we read\n>> the last record from, which can be different from the TLI that the last\n>> record is actually on. 
All existing tests were passing with that bug, so I\n>> added a test case to cover that case.\n> \n> FYI, this overlaps with a different patch sent recently, as of this\n> thread:\n> https://www.postgresql.org/message-id/CAAJ_b94Vjt5cXGza_1MkjLQWciNdEemsmiWuQj0d=M7JfjAa1g@mail.gmail.com\n\nThanks, I pushed this new test case now.\n\nWith the rest of the patches, I'm seeing a mysterious failure in cirrus \nCI, on macOS on the 027_stream_regress.pl test. It doesn't make much \nsense to me, but I'm investigating that now.\n\n- Heikki\n\n\n", "msg_date": "Mon, 14 Feb 2022 11:36:37 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Split xlog.c" }, { "msg_contents": "On 14/02/2022 11:36, Heikki Linnakangas wrote:\n> On 27/01/2022 08:34, Michael Paquier wrote:\n>> On Tue, Jan 25, 2022 at 12:12:40PM +0200, Heikki Linnakangas wrote:\n>>> In last round of review, I spotted one bug: I had mixed up the meaning of\n>>> EndOfLogTLI. It is the TLI in the *filename* of the WAL segment that we read\n>>> the last record from, which can be different from the TLI that the last\n>>> record is actually on. All existing tests were passing with that bug, so I\n>>> added a test case to cover that case.\n>>\n>> FYI, this overlaps with a different patch sent recently, as of this\n>> thread:\n>> https://www.postgresql.org/message-id/CAAJ_b94Vjt5cXGza_1MkjLQWciNdEemsmiWuQj0d=M7JfjAa1g@mail.gmail.com\n> \n> Thanks, I pushed this new test case now.\n> \n> With the rest of the patches, I'm seeing a mysterious failure in cirrus\n> CI, on macOS on the 027_stream_regress.pl test. It doesn't make much\n> sense to me, but I'm investigating that now.\n\nFixed that, and pushed. Thanks everyone for the reviews!\n\n- Heikki\n\n\n", "msg_date": "Wed, 16 Feb 2022 09:53:40 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Split xlog.c" } ]
[ { "msg_contents": "Back on March 10 Thomas Munro committed and wrestled multiple reworks of\nthe pgbench code from Fabien and the crew. The feature to synchronize\nstartup I'm looking forward to testing now that I have a packaged beta.\nVariations on that problem have bit me so many times I added code last year\nto my pgbench processing pipeline to just throw out the first and last 10%\nof every data set.\n\nBefore I could get to startup timing I noticed the pgbench logging output\nwas broken via commit 547f04e7 \"Improve time logic\":\nhttps://www.postgresql.org/message-id/E1lJqpF-00064e-C6%40gemulon.postgresql.org\n\nA lot of things are timed in pgbench now so I appreciate the idea. Y'all\nstarted that whole big thread about sync on my birthday though and I didn't\nfollow the details of what that was reviewed against. For the logging use\ncase I suspect it's just broken everywhere. The two platforms I tested\nwere PGDG Ubuntu beta1 apt install and Mac git build. Example:\n\n $ createdb pgbench\n $ pgbench -i -s 1 pgbench\n $ pgbench -S -T 1 -l pgbench\n $ head pgbench_log.*\n 0 1 1730 0 1537380 70911\n 0 2 541 0 1537380 71474\n\nThe epoch time is the 5th column in the output, and this week it should\nlook like this:\n\n 0 1 1411 0 1623767029 732926\n 0 2 711 0 1623767029 733660\n\nIf you're not an epoch guru who recognizes what's wrong already, you might\ngrab https://github.com/gregs1104/pgbench-tools/ and party like it's 1970\nto see it:\n\n $ ~/repos/pgbench-tools/log-to-csv 1 local < pgbench_log* | head\n 1970-01-18 14:03:00.070911,0,1.73,1,local\n 1970-01-18 14:03:00.071474,0,0.541,1,local\n\nI have a lot of community oriented work backed up behind this right now, so\nI'm gonna be really honest. This time rework commit in its current form\nmakes me uncomfortable at this point in the release schedule. The commit\nhas already fought through two rounds of platform specific bug fixes. 
But\nsince the buildfarm doesn't test the logging feature, that whole process is\nsuspect.\n\nMy take on the PostgreSQL way to proceed: this bug exposes that pgbench\nlogging is a feature we finally need to design testing for. We need a new\nbuildfarm test and then a march through a full release phase to see how it\ngoes. Only then should we start messing with the time logic. Even if we\nfixed the source today on both my test platforms, I'd still be nervous that\nbeta 2 could ship and more performance testing could fall over from this\nmodification. And that's cutting things a little close.\n\nThe fastest way to get me back to comfortable would be to unwind 547f04e7\nand its associated fixes and take it back to review. I understand the\nintent and value; I appreciate the work so far. The big industry\narchitecture shift from Intel to ARM has me worried about time overhead\nagain, the old code is wonky, and in the PG15 release cycle I already have\nresources planned around this area.\n\n# PG15 Plans\n\nI didn't intend to roll back in after time away and go right to a revert\nreview. But I also really don't want to start my public PG14 story\ndocumenting the reality that I had to use PG13's pgbench to generate my\nexamples either. I can't fight much with this logging problem while also\ndoing my planned public performance testing of PG14. I already had to push\nback a solid bit of Beta 1 PR from this week, some \"community PG is great!\"\npromotional blogging.\n\nLet me offer what I can commit to from Crunchy corporate. I'm about to\nsubmit multiple pgbench feature changes to the open CF starting July, with\nDavid Christiansen. 
We and the rest of Crunchy will happily help re-review\nthis time change idea, its logging issues, testing, rejoin the study of\nplatform time call overhead, and bash the whole mess into shape for PG15.\nI personally am looking forward to it.\n\nThe commit made a functional change to the way connection time is\ndisplayed; that I can take or leave as committed.  I'm not sure it can be\ndecoupled from the rest of the changes.  It did cause a small breaking\npgbench output parsing problem for me, just trivial regex adjustment.  That\nbreak would fit in fine with my upcoming round of submissions.\n\n--\nGreg Smith greg.smith@crunchydata.com\nDirector of Open Source Strategy, Crunchy Data", "msg_date": "Wed, 16 Jun 2021 10:49:36 -0400", "msg_from": "Gregory Smith <gregsmithpgsql@gmail.com>", "msg_from_op": true, "msg_subject": "pgbench logging broken by time logic changes" }, { "msg_contents": "On Wed, 16 Jun 2021 10:49:36 -0400\nGregory Smith <gregsmithpgsql@gmail.com> wrote:\n\n> A lot of things are timed in pgbench now so I appreciate the idea. 
Y'all\n> started that whole big thread about sync on my birthday though and I didn't\n> follow the details of what that was reviewed against. For the logging use\n> case I suspect it's just broken everywhere. The two platforms I tested\n> were PGDG Ubuntu beta1 apt install and Mac git build. Example:\n> \n> $ createdb pgbench\n> $ pgbench -i -s 1 pgbench\n> $ pgbench -S -T 1 -l pgbench\n> $ head pgbench_log.*\n> 0 1 1730 0 1537380 70911\n> 0 2 541 0 1537380 71474\n> \n> The epoch time is the 5th column in the output, and this week it should\n> look like this:\n> \n> 0 1 1411 0 1623767029 732926\n> 0 2 711 0 1623767029 733660\n> \n> If you're not an epoch guru who recognizes what's wrong already, you might\n> grab https://github.com/gregs1104/pgbench-tools/ and party like it's 1970\n> to see it:\n> \n> $ ~/repos/pgbench-tools/log-to-csv 1 local < pgbench_log* | head\n> 1970-01-18 14:03:00.070911,0,1.73,1,local\n> 1970-01-18 14:03:00.071474,0,0.541,1,local\n\nAfter the commit, pgbench tries to get the current timestamp by calling\npg_time_now(). This uses INSTR_TIME_SET_CURRENT in it, but this macro\ncan call clock_gettime(CLOCK_MONOTONIC[_RAW], ) instead of gettimeofday\nor clock_gettime(CLOCK_REALTIME, ). When CLOCK_MONOTONIC[_RAW] is used,\nclock_gettime doesn't return epoch time. Therefore, we can use\nINSTR_TIME_SET_CURRENT aiming to calculate a duration, but we should\nnot have used this to get the current timestamp.\n\nI think we can fix this issue by using gettimeofday for logging as before\nthis was changed. I attached the patch.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Thu, 17 Jun 2021 02:51:38 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "\nHello Greg,\n\n> I have a lot of community oriented work backed up behind this right now, so\n> I'm gonna be really honest. 
This time rework commit in its current form\n> makes me uncomfortable at this point in the release schedule. The commit\n> has already fought through two rounds of platform specific bug fixes. But\n> since the buildfarm doesn't test the logging feature, that whole process is\n> suspect.\n\nLogging is/should going to be fixed.\n\n> My take on the PostgreSQL way to proceed: this bug exposes that pgbench\n> logging is a feature we finally need to design testing for.\n\nSure.\n\nThe key feedback for me is the usual one: what is not tested does \nnot work. Wow:-)\n\nI'm unhappy because I already added tap tests for time-sensitive features \n(-T and others, maybe logging aggregates, cannot remember), which have \nbeen removed because they could fail under some circumstances (eg very \nvery very very slow hosts), or required some special handling (a few lines \nof code) in pgbench, and the net result of this is there is not a single \ntest in place for some features:-(\n\nThere is no problem with proposing tests, the problem is that they are \naccepted, or if they are accepted then that they are not removed at the \nfirst small issue but rather fixed, or their limitations accepted, because \ntesting time-sensitive features is not as simple as testing functional \nfeatures.\n\nNote that currently there is not a single test of psql with autocommit off \nor with \"on error rollback\". Last time I submitted tap tests to raise psql \ntest coverage from 50% to 90%, it was rejected. I'll admit that I'm tired \narguing that more testing is required, and I'm very happy if someone else \nis ready to try again. Good luck! :-)\n\n> We need a new buildfarm test and then a march through a full release \n> phase to see how it goes.\n\n> Only then should we start messing with the time logic. Even if we fixed \n> the source today on both my test platforms, I'd still be nervous that \n> beta 2 could ship and more performance testing could fall over from this \n> modification. 
And that's cutting things a little close.\n\nWell, the point beta is to discover bugs not caught by reviews and dev \ntests, fix them, and possibly add tests which would have caught them.\n\nIf you revert all features on the first issue in a corner case and put it \nback to the queue, then I do not see why the review and dev tests will be \nmuch better on the next round, so it does not really help moving things \nforward.\n\nIMHO, the pragmatic approach is to look at fixing first, and maybe revert \nif the problems are deep. I'm not sure this is obviously the case here.\n\n-- \nFabien.\n\n\n", "msg_date": "Wed, 16 Jun 2021 20:59:45 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "> pg_time_now(). This uses INSTR_TIME_SET_CURRENT in it, but this macro\n> can call clock_gettime(CLOCK_MONOTONIC[_RAW], ) instead of gettimeofday\n> or clock_gettime(CLOCK_REALTIME, ). When CLOCK_MONOTONIC[_RAW] is used,\n> clock_gettime doesn't return epoch time. Therefore, we can use\n> INSTR_TIME_SET_CURRENT aiming to calculate a duration, but we should\n> not have used this to get the current timestamp.\n>\n> I think we can fix this issue by using gettimeofday for logging as before\n> this was changed. I attached the patch.\n\nI cannot say that I'm thrilled by having multiple tv stuff back in several \nplaces. I can be okay with one, though. What about the attached? Does it \nmake sense?\n\n-- \nFabien.", "msg_date": "Wed, 16 Jun 2021 21:11:41 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "\nOn 6/16/21 2:59 PM, Fabien COELHO wrote:\n>\n> Hello Greg,\n>\n>> I have a lot of community oriented work backed up behind this right\n>> now, so\n>> I'm gonna be really honest.  
This time rework commit in its current form\n>> makes me uncomfortable at this point in the release schedule.  The\n>> commit\n>> has already fought through two rounds of platform specific bug\n>> fixes.  But\n>> since the buildfarm doesn't test the logging feature, that whole\n>> process is\n>> suspect.\n>\n> Logging is/should going to be fixed.\n>\n>> My take on the PostgreSQL way to proceed:  this bug exposes that pgbench\n>> logging is a feature we finally need to design testing for.\n>\n> Sure.\n>\n> The key feedback for me is the usual one: what is not tested does not\n> work. Wow:-)\n\n\nAgreed.\n\n\n>\n> I'm unhappy because I already added tap tests for time-sensitive\n> features (-T and others, maybe logging aggregates, cannot remember),\n> which have been removed because they could fail under some\n> circumstances (eg very very very very slow hosts), or required some\n> special handling (a few lines of code) in pgbench, and the net result\n> of this is there is not a single test in place for some features:-(\n\n\nI'm not familiar with exactly what happened in this case, but tests need\nto be resilient over a wide range of performance characteristics. One\nway around this issue might be to have a way of detecting that it's on a\nslow platform and if so either skipping tests (Test::More provides\nplenty of support for this) or expecting different results.\n\n\n>\n> There is no problem with proposing tests, the problem is that they are\n> accepted, or if they are accepted then that they are not removed at\n> the first small issue but rather fixed, or their limitations accepted,\n> because testing time-sensitive features is not as simple as testing\n> functional features.\n>\n> Note that currently there is not a single test of psql with autocommit\n> off or with \"on error rollback\". Last time I submitted tap tests to\n> raise psql test coverage from 50% to 90%, it was rejected. 
I'll admit\n> that I'm tired arguing that more testing is required, and I'm very\n> happy if someone else is ready to try again. Good luck! :-)\n\n\n:-(\n\n\n>\n>> We need a new buildfarm test and then a march through a full release\n>> phase to see how it goes.\n>\n>> Only then should we start messing with the time logic.  Even if we\n>> fixed the source today on both my test platforms, I'd still be\n>> nervous that beta 2 could ship and more performance testing could\n>> fall over from this modification.  And that's cutting things a little\n>> close.\n>\n> Well, the point beta is to discover bugs not caught by reviews and dev\n> tests, fix them, and possibly add tests which would have caught them.\n>\n> If you revert all features on the first issue in a corner case and put\n> it back to the queue, then I do not see why the review and dev tests\n> will be much better on the next round, so it does not really help\n> moving things forward.\n>\n> IMHO, the pragmatic approach is to look at fixing first, and maybe\n> revert if the problems are deep. I'm not sure this is obviously the\n> case here.\n\n\n\nIt does look like the submitted fix basically reverts the changes w.r.t.\nthis timestamp logging.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 16 Jun 2021 15:13:30 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "Hi Greg,\n\nOn Thu, Jun 17, 2021 at 2:49 AM Gregory Smith <gregsmithpgsql@gmail.com> wrote:\n> Back on March 10 Thomas Munro committed and wrestled multiple reworks of the pgbench code from Fabien and the crew. The feature to synchronize startup I'm looking forward to testing now that I have a packaged beta. 
Variations on that problem have bit me so many times I added code last year to my pgbench processing pipeline to just throw out the first and last 10% of every data set.\n\nYeah, commit aeb57af8 is a nice improvement and was the main thing I\nwanted to get into the tree for 14 in this area, because it was\nmeasuring the wrong thing.\n\n> Before I could get to startup timing I noticed the pgbench logging output was broken via commit 547f04e7 \"Improve time logic\": https://www.postgresql.org/message-id/E1lJqpF-00064e-C6%40gemulon.postgresql.org\n\nIt does suck that we broke the logging and that it took 3 months for\nanyone to notice and report it to the list. Seems like it should be\nstraightforward to fix, though, with fixes already proposed (though I\nhaven't studied them yet, will do).\n\n> I have a lot of community oriented work backed up behind this right now, so I'm gonna be really honest. This time rework commit in its current form makes me uncomfortable at this point in the release schedule. The commit has already fought through two rounds of platform specific bug fixes. But since the buildfarm doesn't test the logging feature, that whole process is suspect.\n\nIt's true that this work produced a few rounds of small portability\nfollow-ups: c427de42 (work around strange hacks elsewhere in the tree\nfor AIX), 68b34b23 (missing calling convention specifier on Windows),\nand de91c3b9 (adjust pthread missing-function code for threadless\nbuilds). These were problems that didn't show up on developer or CI\nsystems (including threadless and Windows), and IMHO are typical sorts\nof problems you expect to have to work through when stuff hits the\nbuild farm, especially when using new system interfaces. So I don't\nthink any of that, on its own, supports reverting anything here.\n\n> My take on the PostgreSQL way to proceed: this bug exposes that pgbench logging is a feature we finally need to design testing for. 
We need a new buildfarm test and then a march through a full release phase to see how it goes. Only then should we start messing with the time logic. Even if we fixed the source today on both my test platforms, I'd still be nervous that beta 2 could ship and more performance testing could fall over from this modification. And that's cutting things a little close.\n>\n> The fastest way to get me back to comfortable would be to unwind 547f04e7 and its associated fixes and take it back to review. I understand the intent and value; I appreciate the work so far. The big industry architecture shift from Intel to ARM has me worried about time overhead again, the old code is wonky, and in the PG15 release cycle I already have resources planned around this area.\n\nLet me study the proposed fixes on this and the other thread about\npgbench logging for a bit.\n\nGlad to hear that you're working on this area. I guess you might be\nresearching stuff along the same sorts of lines as in the thread\n\"Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?\" (though\nthat's about the executor). As I already expressed in that thread, if\nthe backend's instrumentation code is improved as proposed there,\nwe'll probably want to rip some of these pgbench changes out anyway\nand go back to common instrumentation code.\n\nFor that reason, I'm not super attached to that new pg_time_usec_t\nstuff at all, and wouldn't be sad if we reverted that piece. I am\nmoderately attached to the sync changes, though. pgbench 13 is\nobjectively producing incorrect results in that respect.\n\n\n", "msg_date": "Thu, 17 Jun 2021 12:36:10 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Thu, Jun 17, 2021 at 12:36:10PM +1200, Thomas Munro wrote:\n> For that reason, I'm not super attached to that new pg_time_usec_t\n> stuff at all, and wouldn't be sad if we reverted that piece. 
I am\n> moderately attached to the sync changes, though. pgbench 13 is\n> objectively producing incorrect results in that respect.\n\nThere is another item in this area where pgbench uses incorrect maths\nwhen aggregating the stats of transactions mid-run and at the end of a\nthread, issue caused by 547f04e as this code path forgot to handle the\ns <-> us conversion:\nhttps://www.postgresql.org/message-id/CAF7igB1r6wRfSCEAB-iZBKxkowWY6+dFF2jObSdd9+iVK+vHJg@mail.gmail.com\n\nWouldn't it be better to put all those fixes into the same bag? If\nyou drop the business with pg_time_usec_t, it looks like we don't\nreally need to do anything there.\n--\nMichael", "msg_date": "Thu, 17 Jun 2021 09:46:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Wed, 16 Jun 2021 21:11:41 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> \n> > pg_time_now(). This uses INSTR_TIME_SET_CURRENT in it, but this macro\n> > can call clock_gettime(CLOCK_MONOTONIC[_RAW], ) instead of gettimeofday\n> > or clock_gettime(CLOCK_REALTIME, ). When CLOCK_MONOTONIC[_RAW] is used,\n> > clock_gettime doesn't return epoch time. Therefore, we can use\n> > INSTR_TIME_SET_CURRENT aiming to calculate a duration, but we should\n> > not have used this to get the current timestamp.\n> >\n> > I think we can fix this issue by using gettimeofday for logging as before\n> > this was changed. I attached the patch.\n> \n> I cannot say that I'm thrilled by having multiple tv stuff back in several \n> place. I can be okay with one, though. What about the attached? Does it \n> make sense?\n\nAt first, I also thought of fixing pg_time_now() to use gettimeofday() instead\nof INSTR_TIME_SET_CURRENT, but I noticed that using INSTR_TIME_SET_CURRENT is\nproper to measure time interval. 
I mean, this macro uses\nclock_gettime(CLOCK_MONOTONIC, ...) if available, which gives reliable interval\ntiming even in the face of changes to the system clock due to NTP.\n\nThe commit 547f04e7 changed all of INSTR_TIME_SET_CURRENT, gettimeofday(), and\ntime() to pg_time_now(), which calls INSTR_TIME_SET_CURRENT in it. So, my patch\nintended to revert these changes only for gettimeofday() and time(), and retain\nthe changes about INSTR_TIME_SET_CURRENT.\n\nI attached the updated patch because I forgot to revert pg_time_now() to time(NULL).\n\nAnother idea to fix this is adding a 'use_epoch' flag to pg_time_now() like below:\n\n    pg_time_now(bool use_epoch)\n    {\n        if (use_epoch)\n        {\n            struct timeval tv;\n            gettimeofday(&tv, NULL);\n            return tv.tv_sec * 1000000 + tv.tv_usec;\n        }\n        else\n        {\n            instr_time now;\n            INSTR_TIME_SET_CURRENT(now);\n            return (pg_time_usec_t) INSTR_TIME_GET_MICROSEC(now);\n        }\n    }\n\nor making an additional function that returns epoch time.\n\n\nBy the way, there is another advantage of using clock_gettime(): it\nreturns precise results in nanoseconds. However, we cannot benefit from it because\npg_time_now() converts the value to uint64 by using INSTR_TIME_GET_MICROSEC. So,\nif we would like more precise statistics in pgbench, it might be better to use\nINSTR_TIME_GET_MICROSEC instead of pg_time_now in other places.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Thu, 17 Jun 2021 12:23:42 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "At Thu, 17 Jun 2021 12:23:42 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in \n> On Wed, 16 Jun 2021 21:11:41 +0200 (CEST)\n> Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > I cannot say that I'm thrilled by having multiple tv stuff back in several \n> > place. I can be okay with one, though. What about the attached? 
Does it \n> > make sense?\n\n+1 The patch rounds down sd->start_time from ms to s, but that seems to\nme a degradation.\n\n> At first, I also thought of fixing pg_time_now() to use gettimeofday() instead\n> of INSTR_TIME_SET_CURRENT, but I noticed that using INSTR_TIME_SET_CURRENT is\n> proper to measure time interval. I mean, this macro uses\n> clock_gettime(CLOCK_MONOTONIC, ...) if available, which gives reliable interval\n> timing even in the face of changes to the system clock due to NTP.\n\nIf I understand the OP correctly, the problem here is that the time values\nin the pgbench log file are based on a bogus epoch. If that's the only issue\nhere, and if we just want to show the time based on the unix epoch\ntime, just recording the difference would work, as I sketched in the\nattached. (Precisely, the epoch would move if we set the system clock,\nbut I don't think that matters:p)\n\n> By the way, there is another advantage of using clock_gettime(). That is, this\n> returns precise results in nanoseconds. However, we cannot benefit from it because\n> pg_time_now() converts the value to uint64 by using INSTR_TIME_GET_MICROSEC. 
So,\n> if we would like more precise statistics in pgbench, it might be better to use\n> INSTR_TIME_GET_MICROSEC instead of pg_time_now in other places.\n\nI'm not sure we have transactions lasting for such a short time that\nnanoseconds matter.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 17 Jun 2021 14:17:56 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Thu, 17 Jun 2021 14:17:56 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> At Thu, 17 Jun 2021 12:23:42 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in \n> > On Wed, 16 Jun 2021 21:11:41 +0200 (CEST)\n> > Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > > I cannot say that I'm thrilled by having multiple tv stuff back in several \n> > > place. I can be okay with one, though. What about the attached? Does it \n> > > make sense?\n> \n> +1 The patch rounds down sd->start_time from ms to s but it seems to\n> me a degradation.\n\nI don't think that matters, because sd->start_time is used only for\nlog aggregation, and aggregate-interval is specified in seconds.\n\n> > At first, I also thought of fixing pg_time_now() to use gettimeofday() instead\n> > of INSTR_TIME_SET_CURRENT, but I noticed that using INSTR_TIME_SET_CURRENT is\n> > proper to measure time interval. I mean, this macro uses\n> > clock_gettime(CLOCK_MONOTONIC, ...) if available, which gives reliable interval\n> > timing even in the face of changes to the system clock due to NTP.\n> \n> If I understand the OP correctly, the problem here is that the time values\n> in the pgbench log file are based on a bogus epoch. If that's the only issue\n> here, and if we just want to show the time based on the unix epoch\n> time, just recording the difference would work, as I sketched in the\n> attached. 
(Precisely, the epoch would move if we set the system clock,\n> but I don't think that matters:p)\n\nThat makes sense. If the system clock is shifted due to NTP (for example),\nit would not affect the measurement, although timestamps in logs could be shifted,\nbecause gettimeofday is called only once.\n\nIf we fix it in this way, we should also fix printProgressReport().\n\n    if (progress_timestamp)\n    {\n-        snprintf(tbuf, sizeof(tbuf), \"%.3f s\", PG_TIME_GET_DOUBLE(now));\n+        snprintf(tbuf, sizeof(tbuf), \"%.3f s\", PG_TIME_GET_DOUBLE(now + epoch_shift));\n    }\n\n> > By the way, there is another advantage of using clock_gettime(). That is, this\n> > returns precise results in nanoseconds. However, we cannot benefit from it because\n> > pg_time_now() converts the value to uint64 by using INSTR_TIME_GET_MICROSEC. So,\n> > if we would like more precise statistics in pgbench, it might be better to use\n> > INSTR_TIME_GET_MICROSEC instead of pg_time_now in other places.\n> \n> I'm not sure we have transactions lasting for such a short time that\n> nanoseconds matter.\n\nI thought it might affect the accuracy when statistics are accumulated\nthrough a huge number of transactions, but I am fine with it if no one\ncares about it.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Thu, 17 Jun 2021 15:17:40 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "Hello Yugo-san,\n\n>>> I think we can fix this issue by using gettimeofday for logging as before\n>>> this was changed. I attached the patch.\n>>\n>> I cannot say that I'm thrilled by having multiple tv stuff back in several\n>> place. I can be okay with one, though. What about the attached? 
Does it\n>> make sense?\n>\n> At first, I also thought of fixing pg_time_now() to use gettimeofday() instead\n> of INSTR_TIME_SET_CURRENT, but I noticed that using INSTR_TIME_SET_CURRENT is\n> proper to measure time interval. I mean, this macro uses\n> clock_gettime(CLOCK_MONOTONIC, ...) if available, which gives reliable interval\n> timing even in the face of changes to the system clock due to NTP.\n\nOk, I was thinking that it was possible that there was this kind of \nimplication. Now, does it matter that much if a few transactions are \nskewed by NTP from time to time? Having to deal with different time \nfunctions in the same file seems painful.\n\n> The commit 547f04e7 changed all of INSTR_TIME_SET_CURRENT, gettimeofday(), and\n> time() to pg_time_now(), which calls INSTR_TIME_SET_CURRENT in it. So, my patch\n> intended to revert these changes only for gettimeofday() and time(), and retain\n> the changes about INSTR_TIME_SET_CURRENT.\n\nHmmm.\n\n> pg_time_now(bool use_epoch)\n> {\n>     if (use_epoch)\n>     {\n>         struct timeval tv;\n>         gettimeofday(&tv, NULL);\n>         return tv.tv_sec * 1000000 + tv.tv_usec;\n>     }\n>     else\n>     {\n>         instr_time now;\n>         INSTR_TIME_SET_CURRENT(now);\n>         return (pg_time_usec_t) INSTR_TIME_GET_MICROSEC(now);\n>     }\n> }\n>\n> or making an additional function that returns epoch time.\n\nYes, but when to call which version? How to avoid confusion? After giving \nit some thought, ISTM that the best short-term decision is just to have \nepoch everywhere, i.e. having now() rely on gettimeofday, because:\n\n - at least one user is unhappy with not having epoch in the log file,\n   and indeed it makes sense to be unhappy about that if they want to\n   correlate logs. 
So I agree to undo that, or provide an option to undo\n   it.\n\n - having different times with different origins at different points in\n   the code makes it really hard to understand and maintain, and if we\n   trade maintainability for precision it should really be worth it, and\n   I'm not sure that working around NTP adjustment is worth it right now.\n\nIn the not so short term, I'd say that the best approach would be to use \nrelative time internally and just offset these with a global epoch \nstart time when displaying a timestamp.\n\n> By the way, there is another advantage of using clock_gettime(). That is, this\n> returns precise results in nanoseconds. However, we cannot benefit from it because\n> pg_time_now() converts the value to uint64 by using INSTR_TIME_GET_MICROSEC. So,\n> if we would like more precise statistics in pgbench, it might be better to use\n> INSTR_TIME_GET_MICROSEC instead of pg_time_now in other places.\n\nThe INSTR_TIME macros are pretty ugly and inefficient, especially when \ntime arithmetic is involved, because they re-implement 64-bit operations \non top of 32-bit ints. I really wanted to get rid of that as much as \npossible. From a database benchmarking perspective, ISTM that µs is the \nright smallest unit, given that a transaction implies significant delays \nsuch as network communications, parsing, and so on. So I do not think we \nshould ever need nanos.\n\nIn conclusion, ISTM that it is enough to simply change pg_time_now() to call \ngettimeofday to fix the issue raised by Greg. This is patch v1 on the \nthread.\n\n-- \nFabien.", "msg_date": "Thu, 17 Jun 2021 09:00:50 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "Hello,\n\n>>> I cannot say that I'm thrilled by having multiple tv stuff back in several\n>>> place. I can be okay with one, though. What about the attached? 
Does it\n>>> make sense?\n>\n> +1 The patch rounds down sd->start_time from ms to s but it seems to\n> me a degradation.\n\nYes, please, we should not use time.\n\n>> At first, I also thought of fixing pg_time_now() to use gettimeofday() instead\n>> of INSTR_TIME_SET_CURRENT, but I noticed that using INSTR_TIME_SET_CURRENT is\n>> proper to measure time interval. I mean, this macro uses\n>> clock_gettime(CLOCK_MONOTONIC, ...) if available, which gives reliable interval\n>> timing even in the face of changes to the system clock due to NTP.\n>\n> If I understand the OP correctly, the problem here is the time values\n> in the pgbench log file are based on a bogus epoch.\n\nIt is not \"bogus\", but it is not necessarily epoch, depending on the underlying \nfunction called behind the INSTR_TIME macros, and people are entitled to \nexpect epoch for log correlations.\n\n> If it's the only issue\n> here and if we just want to show the time based on the unix epoch\n> time, just recording the difference would work as I sketched in the\n> attached. (Precisely, the epoch would move if we set the system clock,\n> but I don't think that matters:p)\n\nI do like the approach.\n\nI'm hesitant to promote it for fixing the beta, but the code impact is \nsmall enough, so I'd say yes. Maybe there is a similar issue with progress, \nwhich should probably use the same approach. I think that aligning the \nimplementations can wait for pg15.\n\nThe patch has whitespace issues. 
Attached an updated version which fixes \nthat, adds comments, and simplifies the code a little bit.\n\n> I'm not sure we have transactions lasting for such a short time that \n> nanoseconds matter.\n\nIndeed.\n\n-- \nFabien.", "msg_date": "Thu, 17 Jun 2021 09:15:37 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "Hello Thomas,\n\n>> Before I could get to startup timing I noticed the pgbench logging \n>> output was broken via commit 547f04e7 \"Improve time logic\": \n>> https://www.postgresql.org/message-id/E1lJqpF-00064e-C6%40gemulon.postgresql.org\n>\n> It does suck that we broke the logging and that it took 3 months for\n> anyone to notice and report it to the list.\n\nIndeed. Well, it also demonstrates that betas are useful.\n\n> Seems like it should be straightforward to fix, though, with fixes \n> already proposed (though I haven't studied them yet, will do).\n\nI think that fixing logging is simple enough, thus a revert is not \nnecessary.\n\n>> I have a lot of community oriented work backed up behind this right \n>> now, so I'm gonna be really honest. This time rework commit in its \n>> current form makes me uncomfortable at this point in the release \n>> schedule. The commit has already fought through two rounds of platform \n>> specific bug fixes. But since the buildfarm doesn't test the logging \n>> feature, that whole process is suspect.\n>\n> It's true that this work produced a few rounds of small portability\n> follow-ups: c427de42 (work around strange hacks elsewhere in the tree\n> for AIX), 68b34b23 (missing calling convention specifier on Windows),\n> and de91c3b9 (adjust pthread missing-function code for threadless\n> builds). 
These were problems that didn't show up on developer or CI\n> systems (including threadless and Windows), and IMHO are typical sorts\n> of problems you expect to have to work through when stuff hits the\n> build farm, especially when using new system interfaces. So I don't\n> think any of that, on its own, supports reverting anything here.\n\nYep, the buildfarm is here to catch portability issues, and it does its \njob:-) There is no doubt that logging has been broken because of a lack \nof tests in this area, shame on us. I think it is easy to fix.\n\n> [...] For that reason, I'm not super attached to that new pg_time_usec_t \n> stuff at all, and wouldn't be sad if we reverted that piece.\n\nWell, I was sooo happy to get rid of the ugly and inefficient INSTR_TIME \nmacros in pgbench… so anything looks better to me.\n\nNote that Michaël is having a look at fixing pgbench logging issues.\n\n-- \nFabien.", "msg_date": "Thu, 17 Jun 2021 09:24:37 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "> Wouldn't it be better to put all those fixes into the same bag?\n\nAttached.\n\n-- \nFabien.", "msg_date": "Thu, 17 Jun 2021 09:34:05 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": ">> Wouldn't it be better to put all those fixes into the same bag?\n>\n> Attached.\n\nEven 
better if the patch is not empty.\n\nI found you forgot to fix printProgressReport().\n\nAlso, according to the document, interval_start in Aggregated Logging\nseems to be printed in seconds instead of ms.\n\n <para>\n Here is some example output:\n <screen>\n 1345828501 5601 1542744 483552416 61 2573\n 1345828503 7884 1979812 565806736 60 1479\n 1345828505 7208 1979422 567277552 59 1391\n 1345828507 7685 1980268 569784714 60 1398\n 1345828509 7073 1979779 573489941 236 1411\n </screen></para>\n\nIf we obey the document and keep the back-compatibility, we should fix\nlogAgg().\n\nThe attached patch includes these fixes.\n\nRegards,\nYugo Nagata \n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Thu, 17 Jun 2021 17:55:42 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "\n> I found you forgot to fix printProgressReport().\n\nIndeed.\n\n> Also, according to the document, interval_start in Aggregated Logging\n> seems to be printed in seconds instead of ms.\n\nIndeed. I'm unsure about what we should really want there, but for a beta \nbug fix I agree that it must simply comply to the old documented behavior.\n\n> The attached patch includes these fixes.\n\nThanks. Works for me.\n\n-- \nFabien.\n\n\n", "msg_date": "Thu, 17 Jun 2021 13:49:47 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Thu, Jun 17, 2021 at 7:24 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > Seems like it should be straightforward to fix, though, with fixes\n> > already proposed (though I haven't studied them yet, will do).\n>\n> I think that fixing logging is simple enough, thus a revert is not\n> necessary.\n\nI prepared a draft revert patch for discussion, just in case it comes\nin handy. 
This reverts \"pgbench: Improve time logic.\", but \"pgbench:\nSynchronize client threads.\" remains (slightly rearranged).\n\nI'm on the fence TBH, I can see that it's really small things and it\nseems we have the patches, but it's late, late enough that\nbenchmarking gurus are showing up to benchmark with it for real, and\nit's not great to be getting in the way of that with what is mostly\nrefactoring work, so I don't think it would be a bad thing if we\nagreed to try again in 15. (A number of arguments for and against\nmoving pgbench out of the postgresql source tree and release cycle\noccur to me, but I guess that's a topic for another thread.)\n\n> > [...] For that reason, I'm not super attached to that new pg_time_usec_t\n> > stuff at all, and wouldn't be sad if we reverted that piece.\n>\n> Well, I was sooo happy to get rid of INSTR_TIME ugly and inefficient\n> macros in pgbench… so anything looks better to me.\n\nI don't love it either, in this code or the executor. (I know you\nalso don't like the THREAD_CREATE etc macros. I have something to\npropose to improve that for 15....)\n\n> Note that Michaël is having a look at fixing pgbench logging issues.\n\nYeah I've been catching up with these threads.", "msg_date": "Fri, 18 Jun 2021 00:49:42 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "\nOn 6/17/21 8:49 AM, Thomas Munro wrote:\n> On Thu, Jun 17, 2021 at 7:24 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>>> Seems like it should be straightforward to fix, though, with fixes\n>>> already proposed (though I haven't studied them yet, will do).\n>> I think that fixing logging is simple enough, thus a revert is not\n>> necessary.\n> I prepared a draft revert patch for discussion, just in case it comes\n> in handy. 
This reverts \"pgbench: Improve time logic.\", but \"pgbench:\n> Synchronize client threads.\" remains (slightly rearranged).\n>\n> I'm on the fence TBH, I can see that it's really small things and it\n> seems we have the patches, but it's late, late enough that\n> benchmarking gurus are showing up to benchmark with it for real, and\n> it's not great to be getting in the way of that with what is mostly\n> refactoring work, so I don't think it would be a bad thing if we\n> agreed to try again in 15. \n\n\nIs there an identified issue beyond the concrete example Greg gave of\nthe timestamps?\n\n\nWe are still fixing a few things with potentially far more impact than\nanything in pgbench, so fixing this wouldn't bother me that much, as\nlong as we get it done for Beta2.\n\n\n> (A number of arguments for and against\n> moving pgbench out of the postgresql source tree and release cycle\n> occur to me, but I guess that's a topic for another thread.)\n>\n\nIndeed.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 17 Jun 2021 10:36:08 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Wed, Jun 16, 2021 at 2:59 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> I'm unhappy because I already added tap tests for time-sensitive features\n> (-T and others, maybe logging aggregates, cannot remember), which have\n> been removed because they could fail under some circonstances (eg very\n> very very very slow hosts), or required some special handling (a few lines\n> of code) in pgbench, and the net result of this is there is not a single\n> test in place for some features:-(\n>\n\nI understand your struggle and I hope I was clear about two things:\n\n-I am excited by all the progress made in pgbench, and this problem is an\nintegration loose end rather than a developer failure at any level.\n-Doing 
better in this messy area takes a team that goes from development to\nrelease management, and I had no right to complain unless I brought\nresources to improve in the specific areas of the process that I want to be\nbetter.\n\nI think the only thing you and I disagree on is that you see a \"first issue\nin a corner case\" where I see a process failure that is absolutely vital\nfor me to improve. Since the reality is that I might be the best\npositioned person to actually move said process forward in a meaningful\nlong-term way, I have every intention of applying pressure to the area\nyou're frustrated at. Crunchy has a whole parallel review team to the\ncommunity one now focused on what our corporate and government customers\nneed for software process control and procedure compliance. The primary\nbusiness problem I'm working on now is how to include performance review in\nthat mix.\n\nI already know I need to re-engage with you over how I need min/max numbers\nin the aggregate logging output to accomplish some valuable goals. When I\nget around to that this summer, I'd really enjoy talking with you a bit,\nvideo call or something, about really any community topic you're frustrated\nwith. I have a lot riding now on the productivity of the PostgreSQL hacker\ncommunity and I want everyone to succeed at the best goals.\n\nThere is no problem with proposing tests, the problem is that they are\n> accepted, or if they are accepted then that they are not removed at the\n> first small issue but rather fixed, or their limitations accepted, because\n> testing time-sensitive features is not as simple as testing functional\n> features.\n>\n\nFor 2020 Crunchy gave me a sort of sabbatical year to research community\noriented benchmarking topics. Having a self contained project in my home\nturned out to be the perfect way to spend *that* wreck of a year.\n\nI made significant progress toward the idea of having a performance farm\nfor PostgreSQL. 
On my laptop today is a 14GB database with 1s resolution\nlatency traces for 663 days of pgbench time running 4 workloads across a\nsmall bare metal farm of various operating systems and hardware classes. I\ncan answer questions like \"how long does a typical SSD take to execute an\nINSERT commit?\" across my farm with SQL. It's at the \"works for me!\" stage\nof development, and I thought this was the right time in the development\ncycle to start sharing improvement ideas from my work; thus the other\nsubmissions in progress I alluded to.\n\nThe logging feature is in an intermediate spot where validating it requires\nlight custom tooling that compares its output against known variables like\nthe system time. It doesn't quite have a performance component to it.\nSince this time logic detail is a well known portability minefield, I\nthought demanding that particular test was a pretty easy sell.\n\nThat you in particular are frustrated here makes perfect sense to me. I am\nfresh and ready to carry this forward some distance, and I hope the outcome\nmakes you happy\n\nOn Wed, Jun 16, 2021 at 2:59 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\nI'm unhappy because I already added tap tests for time-sensitive features \n(-T and others, maybe logging aggregates, cannot remember), which have \nbeen removed because they could fail under some circonstances (eg very \nvery very very slow hosts), or required some special handling (a few lines \nof code) in pgbench, and the net result of this is there is not a single \ntest in place for some features:-(I understand your struggle and I hope I was clear about two things:-I am excited by all the progress made in pgbench, and this problem is an integration loose end rather than a developer failure at any level.-Doing better in this messy area takes a team that goes from development to release management, and I had no right to complain unless I brought resources to improve in the specific areas of the process that I want to be 
better.I think the only thing you and I disagree on is that you see a \"first issue in a corner case\" where I see a process failure that is absolutely vital for me to improve.  Since the reality is that I might be the best positioned person to actually move said process forward in a meaningful long-term way, I have every intention of applying pressure to the area you're frustrated at.  Crunchy has a whole parallel review team to the community one now focused on what our corporate and government customers need for software process control and procedure compliance.  The primary business problem I'm working on now is how to include performance review in that mix.I already know I need to re-engage with you over how I need min/max numbers in the aggregate logging output to accomplish some valuable goals.  When I get around to that this summer, I'd really enjoy talking with you a bit, video call or something, about really any community topic you're frustrated with.  I have a lot riding now on the productivity of the PostgreSQL hacker community and I want everyone to succeed at the best goals.\nThere is no problem with proposing tests, the problem is that they are \naccepted, or if they are accepted then that they are not removed at the \nfirst small issue but rather fixed, or their limitations accepted, because \ntesting time-sensitive features is not as simple as testing functional \nfeatures.For 2020 Crunchy gave me a sort of sabbatical year to research community oriented benchmarking topics.  Having a self contained project in my home turned out to be the perfect way to spend *that* wreck of a year.  I made significant progress toward the idea of having a performance farm for PostgreSQL.  On my laptop today is a 14GB database with 1s resolution latency traces for 663 days of pgbench time running 4 workloads across a small bare metal farm of various operating systems and hardware classes.  
", "msg_date": "Thu, 17 Jun 2021 11:20:55 -0400", "msg_from": "Gregory Smith <gregsmithpgsql@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "\n> Is there an identified issue beyond the concrete example Greg gave of\n> the timestamps?\n\nAFAICS, there is a patch which fixes all known issues linked to pgbench \nlogging. Whether other issues exist is possible, but the \"broken\"\narea was quite specific. There are also some TAP tests on pgbench which do \ncatch issues.\n\n-- \nFabien.", "msg_date": "Thu, 17 Jun 2021 17:46:47 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "Hello Greg,\n\n> I think the only thing you and I disagree on is that you see a \"first \n> issue in a corner case\" where I see a process failure that is absolutely \n> vital for me to improve.\n\nHmmm. 
I agree that improvements are needed, but for me there are simply a \nfew missing (removed) tap tests which should/could have caught these \nissues, which are AFAICS limited to the untested area.\n\nGiven the speed of the process and the energy and patience needed to move \nthings forward, reverting means that the patch is probably dead for at \nleast a year, possibly an eon, and that is too bad because IMHO it was an \nimprovement (my eyes are watering when I see INSTR_TIME macros), so I'd \nprefer a fix rather than a revert if it is possible, which in this case I \nthink it could be.\n\n> Since the reality is that I might be the best positioned person\n\nGood for you:-)\n\n> to actually move said process forward in a meaningful long-term way, I \n> have every intention of applying pressure to the area you're frustrated \n> at. Crunchy has a whole parallel review team to the community one now \n> focused on what our corporate and government customers need for software \n> process control and procedure compliance. The primary business problem \n> I'm working on now is how to include performance review in that mix.\n\nThis idea has been around for some time now. It is quite a task, and a \nworking and possibly extended pgbench is just one part of the overall \nsoftware, infrastructure and procedure needed to have that.\n\n> I already know I need to re-engage with you over how I need min/max numbers\n> in the aggregate logging output to accomplish some valuable goals.\n\nI do try to review every patch submitted about pgbench. Feel free to fire!\n\n> When I get around to that this summer, I'd really enjoy talking with you \n> a bit, video call or something, about really any community topic you're \n> frustrated with.\n\n\"frustrated\" may be a strong word. 
I'm somehow annoyed, and unlikely to \never submit many test improvements in the future.\n\n>> There is no problem with proposing tests, the problem is that they are\n>> accepted, or if they are accepted then that they are not removed at the\n>> first small issue but rather fixed, or their limitations accepted, because\n>> testing time-sensitive features is not as simple as testing functional\n>> features.\n>\n> For 2020 Crunchy gave me a sort of sabbatical year to research community\n> oriented benchmarking topics. Having a self contained project in my home\n> turned out to be the perfect way to spend *that* wreck of a year.\n\nYep.\n\n> I made significant progress toward the idea of having a performance farm\n> for PostgreSQL. On my laptop today is a 14GB database with 1s resolution\n> latency traces for 663 days of pgbench time running 4 workloads across a\n> small bare metal farm of various operating systems and hardware classes.\n\nWow.\n\n> I can answer questions like \"how long does a typical SSD take to execute \n> an INSERT commit?\" across my farm with SQL.\n\nSo, what is the answer? :-)\n\n> It's at the \"works for me!\" stage of development, and I thought this was \n> the right time in the development cycle to start sharing improvement \n> ideas from my work; thus the other submissions in progress I alluded to.\n>\n> The logging feature is in an intermediate spot where validating it requires\n> light custom tooling that compares its output against known variables like\n> the system time.\n\nSure.\n\n> It doesn't quite have a performance component to it.\n\nHmmm, if you log all transactions it can become the performance \nbottleneck quite quickly:-)\n\n> Since this time logic detail is a well known portability minefield, I \n> thought demanding that particular test was a pretty easy sell.\n\nThe test I recalled was removed at ad51c6f. 
Ok, it would not have caught \nthe issue about timestamp (although it could have been improved to do so), \nbut it would have caught the trivial one about the catchup loop in \naggregate interval generating too many lines because of a forgotten \nconversion to µs.\n\n-- \nFabien.", "msg_date": "Thu, 17 Jun 2021 22:30:31 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "\nHello Thomas,\n\n> I prepared a draft revert patch for discussion, just in case it comes\n> in handy. This reverts \"pgbench: Improve time logic.\", but \"pgbench:\n> Synchronize client threads.\" remains (slightly rearranged).\n\nI had a quick look.\n\nI had forgotten that this patch also fixed the long-running brain-damaged \ntps computation that has been bothering me for years, so that one sane \nperformance figure is now reported instead of two not-clear-to-interpret \ntake-your-pick figures.\n\nIt would be a real loss if this user-facing fix is removed in the \nprocess:-(\n\n-- \nFabien.", "msg_date": "Thu, 17 Jun 2021 22:53:26 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Fri, Jun 18, 2021 at 12:49:42AM +1200, Thomas Munro wrote:\n> I'm on the fence TBH, I can see that it's really small things and it\n> seems we have the patches, but it's late, late enough that\n> benchmarking gurus are showing up to benchmark with it for real, and\n> it's not great to be getting in the way of that with what is mostly\n> refactoring work, so I don't think it would be a bad thing if we\n> agreed to try again in 15. 
(A number of arguments for and against\n> moving pgbench out of the postgresql source tree and release cycle\n> occur to me, but I guess that's a topic for another thread.)\n\nI may be missing something of course, but I don't see any strong\nreason why we need to do a revert here if we have patches to discuss\nfirst.\n\n>> Note that Michaël is having a look at fixing pgbench logging issues.\n> \n> Yeah I've been catching up with these threads.\n\nThomas, do you want me to look more at this issue? I don't feel\ncomfortable with the idea of doing anything if you are planning to\nlook at this thread and you are the owner here, so that should be your\ncall.\n\nFrom what I can see, we have the same area getting patched with\npatches across two threads, so it seems better to give up the other\nthread and just focus on the discussion here, where v7 has been sent:\nhttps://www.postgresql.org/message-id/20210617175542.ad6b9b82926d8469e8520324@sraoss.co.jp\nhttps://www.postgresql.org/message-id/CAF7igB1r6wRfSCEAB-iZBKxkowWY6%2BdFF2jObSdd9%2BiVK%2BvHJg%40mail.gmail.com\n--\nMichael", "msg_date": "Fri, 18 Jun 2021 09:30:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Wed, Jun 16, 2021 at 03:13:30PM -0400, Andrew Dunstan wrote:\n> On 6/16/21 2:59 PM, Fabien COELHO wrote:\n> > The key feedback for me is the usual one: what is not tested does not\n> > work. 
Wow:-)\n> \n> Agreed.\n> \n> > I'm unhappy because I already added tap tests for time-sensitive\n> > features (-T and others, maybe logging aggregates, cannot remember),\n> > which have been removed because they could fail under some\n> > circonstances (eg very very very very slow hosts), or required some\n> > special handling (a few lines of code) in pgbench, and the net result\n> > of this is there is not a single test in place for some features:-(\n> \n> I'm not familiar with exactly what happened in this case, but tests need\n> to be resilient over a wide range of performance characteristics. One\n> way around this issue might be to have a way of detecting that it's on a\n> slow platform and if so either skipping tests (Test::More provides\n> plenty of support for this) or expecting different results.\n\nDetection would need the host to be consistently slow, like running under\nValgrind or a 20-year-old CPU. We also test on systems having highly-variable\nperformance due to other processes competing for the same hardware. I'd\nperhaps add a \"./configure --enable-realtime-tests\" option that enables\naffected tests. Testers should use the option whenever the execution\nenvironment has sufficient reserved CPU.\n\n\n", "msg_date": "Thu, 17 Jun 2021 21:05:28 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Fri, Jun 18, 2021 at 12:31 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Fri, Jun 18, 2021 at 12:49:42AM +1200, Thomas Munro wrote:\n> > Yeah I've been catching up with these threads.\n>\n> Thomas, do you want me to look more at this issue? 
I don't feel\n> comfortable with the idea of doing anything if you are planning to\n> look at this thread and you are the owner here, so that should be your\n> call.\n>\n> From what I can see, we have the same area getting patched with\n> patches across two threads, so it seems better to give up the other\n> thread and just focus on the discussion here, where v7 has been sent:\n> https://www.postgresql.org/message-id/20210617175542.ad6b9b82926d8469e8520324@sraoss.co.jp\n> https://www.postgresql.org/message-id/CAF7igB1r6wRfSCEAB-iZBKxkowWY6%2BdFF2jObSdd9%2BiVK%2BvHJg%40mail.gmail.com\n\nThanks for looking so far. It's the weekend here and I need to\nunplug, but I'll test these changes and if all looks good push on\nMonday.\n\n\n", "msg_date": "Sat, 19 Jun 2021 11:59:16 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Sat, Jun 19, 2021 at 11:59:16AM +1200, Thomas Munro wrote:\n> Thanks for looking so far. It's the weekend here and I need to\n> unplug, but I'll test these changes and if all looks good push on\n> Monday.\n\nThanks for the update. Have a good weekend.\n--\nMichael", "msg_date": "Sat, 19 Jun 2021 09:49:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On 2021-Jun-19, Thomas Munro wrote:\n\n> Thanks for looking so far. 
It's the weekend here and I need to\n> unplug, but I'll test these changes and if all looks good push on\n> Monday.\n\nSurely not the same day as the beta stamp...\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"Always assume the user will do much worse than the stupidest thing\nyou can imagine.\" (Julien PUYDT)\n\n\n", "msg_date": "Sat, 19 Jun 2021 23:18:35 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Sun, Jun 20, 2021 at 3:18 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2021-Jun-19, Thomas Munro wrote:\n> > Thanks for looking so far. It's the weekend here and I need to\n> > unplug, but I'll test these changes and if all looks good push on\n> > Monday.\n>\n> Surely not the same day as the beta stamp...\n\nBecause of timezones, that'll be Sunday in the Americas. Still too close?", "msg_date": "Sun, 20 Jun 2021 16:52:32 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Sun, Jun 20, 2021 at 4:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sun, Jun 20, 2021 at 3:18 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > On 2021-Jun-19, Thomas Munro wrote:\n> > > Thanks for looking so far. It's the weekend here and I need to\n> > > unplug, but I'll test these changes and if all looks good push on\n> > > Monday.\n> >\n> > Surely not the same day as the beta stamp...\n>\n> Because of timezones, that'll be Sunday in the Americas. Still too close?\n\nUpon reflection, that amounts to the same thing really, so yeah,\nscratch that plan. 
I'll defer until after that (and then I'll be\nleaning more towards the revert option).", "msg_date": "Sun, 20 Jun 2021 19:38:50 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "\n> Upon reflection, that amounts to the same thing really, so yeah,\n> scratch that plan. I'll defer until after that (and then I'll be\n> leaning more towards the revert option).\n\nSigh. I do not understand anything about the decision process.\n\nIf you do revert, please consider NOT reverting the tps computation \nchanges intermixed in the patch.\n\n-- \nFabien.", "msg_date": "Sun, 20 Jun 2021 11:02:14 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "\nOn 6/20/21 5:02 AM, Fabien COELHO wrote:\n>\n>> Upon reflection, that amounts to the same thing really, so yeah,\n>> scratch that plan.  I'll defer until after that (and then I'll be\n>> leaning more towards the revert option).\n>\n> Sigh. 
I do not understand anything about the decision process.\n\n\nYes, sometimes it passeth all understanding.\n\nThere will certainly be a BETA3, and in every recent year except last\nyear there has been a BETA4.\n\nIf this were core server code threatening data integrity I would be\ninclined to be more strict, but after all pg_bench is a utility program,\nand I think we can allow a little more latitude.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 20 Jun 2021 10:15:55 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Sun, Jun 20, 2021 at 10:15:55AM -0400, Andrew Dunstan wrote:\n> If this were core server code threatening data integrity I would be\n> inclined to be more strict, but after all pg_bench is a utility program,\n> and I think we can allow a little more latitude.\n\n+1. Let's be flexible here. It looks better to not rush a fix, and\nwe still have some time ahead.\n--\nMichael", "msg_date": "Tue, 22 Jun 2021 09:25:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "Bonjour Michaël,\n\n>> If this were core server code threatening data integrity I would be\n>> inclined to be more strict, but after all pg_bench is a utility program,\n>> and I think we can allow a little more latitude.\n>\n> +1. Let's be flexible here. It looks better to not rush a fix, and\n> we still have some time ahead.\n\nAttached an updated v8 patch which adds (reinstates) an improved TAP test \nwhich would have caught the various regressions on logs.\n\nGiven that such tests were removed once before, I'm unsure whether they \nwill be acceptable, even though their usefulness has been clearly \ndemonstrated. At least it is for the record. 
Sigh:-(\n\n-- \nFabien.", "msg_date": "Tue, 22 Jun 2021 12:06:45 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Tue, Jun 22, 2021 at 12:06:45PM +0200, Fabien COELHO wrote:\n> Attached an updated v8 patch which adds (reinstate) an improved TAP test\n> which would have caught the various regressions on logs.\n\n> Given that such tests were removed once before, I'm unsure whether they will\n> be acceptable, despite that their usefulness has been clearly demonstrated.\n> At least it is for the record. Sigh:-(\n\nThanks!\n\nThis v8 is an addition of the fix for the epoch with the adjustments\nfor the aggregate reports in the logs. The maths look rather right\nafter a read and after some tests.\n\n+# note: this test is time sensitive, and may fail on a very\n+# loaded host.\n+# note: --progress-timestamp is not tested\n+my $delay = pgbench(\n+\t'-T 2 -P 1 -l --aggregate-interval=1 -S -b se@2'\n+\t. ' --rate=20 --latency-limit=1000 -j ' . $nthreads\n+\t. ' -c 3 -r',\n+\t0,\n+\t[ qr{type: multiple},\n+\t\tqr{clients: 3},\n+\t\tqr{threads: $nthreads},\n+\t\tqr{duration: 2 s},\n+\t\tqr{script 1: .* select only},\n+\t\tqr{script 2: .* select only},\n+\t\tqr{statement latencies in milliseconds},\n+\t\tqr{FROM pgbench_accounts} ],\n+\t[ qr{vacuum}, qr{progress: 1\\b} ],\n+\t'pgbench progress', undef,\n+\t\"--log-prefix=$bdir/001_pgbench_log_1\");\nCould we make that shorter at 1s? That will shorten the duration of\nthe test run. 
It is easy to miss that this test is for\n--aggregate-interval (aka the logAgg() path) without a comment.\n\n+# cool check that we are around 2 seconds\n+# The rate may results in an unlucky schedule which triggers\n+# an early exit, hence the loose bound.\n+ok(0.0 < $delay && $delay < 4.0, \"-T 2 run around 2 seconds\");\n\nThe command itself would not fail, but we would just fail on a machine\nwhere the difference in start/stop time is higher than 4 seconds,\nright? On RPI-level machines, this could fail reliably. I am not\ncompletely sure what's the additional value we can get from that extra\ntest, to be honest.\n\n+# $nthreads threads, 2 seconds, but due to timing imprecision we might get\n+# only 1 or as many as 3 progress reports per thread.\n+check_pgbench_logs($bdir, '001_pgbench_log_1', $nthreads, 1, 3,\n+\tqr{^\\d{10,} \\d{1,2} \\d+ \\d+ \\d+ \\d+ \\d+ \\d+ \\d+ \\d+ \\d+$});\n+\nNow this one is good and actually stable thanks to the fact that we'd\nget at least the final logs, and the main complaint we got to discuss\nabout on this thread was the format of the logs. I would say to give\nup on the first test, and keep the second. But, is this regex\ncorrect? Shouldn't we check for six integer fields only with the\nfirst one having a minimal number of digits for the epoch?\n--\nMichael", "msg_date": "Wed, 23 Jun 2021 12:47:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "Hello,\n\n> +# note: this test is time sensitive, and may fail on a very\n> +# loaded host.\n> +# note: --progress-timestamp is not tested\n> +my $delay = pgbench(\n> +\t'-T 2 -P 1 -l --aggregate-interval=1 -S -b se@2'\n> +\t. ' --rate=20 --latency-limit=1000 -j ' . $nthreads\n> +\t. 
' -c 3 -r',\n> +\t0,\n> +\t[ qr{type: multiple},\n> +\t\tqr{clients: 3},\n> +\t\tqr{threads: $nthreads},\n> +\t\tqr{duration: 2 s},\n> +\t\tqr{script 1: .* select only},\n> +\t\tqr{script 2: .* select only},\n> +\t\tqr{statement latencies in milliseconds},\n> +\t\tqr{FROM pgbench_accounts} ],\n> +\t[ qr{vacuum}, qr{progress: 1\\b} ],\n> +\t'pgbench progress', undef,\n> +\t\"--log-prefix=$bdir/001_pgbench_log_1\");\n\n> Could we make that shorter at 1s? That will shorten the duration of\n> the test run. It is easy to miss that this test is for\n> --aggregate-interval (aka the logAgg() path) without a comment.\n\nIt is for -T, -P and --aggregate-interval. The units of all these are \nseconds, the minimum is 1, I put 2 so that it is pretty sure to get some \noutput. We could try 1, but I'm less confident that an output is ensured \nin all cases on a very slow host which may decide to shorten the run \nbefore having shown a progress for instance.\n\n> +# cool check that we are around 2 seconds\n> +# The rate may results in an unlucky schedule which triggers\n> +# an early exit, hence the loose bound.\n> +ok(0.0 < $delay && $delay < 4.0, \"-T 2 run around 2 seconds\");\n>\n> The command itself would not fail, but we would just fail on a machine\n> where the difference in start/stop time is higher than 4 seconds,\n> right?\n\nYep. It could detect a stuck pgbench process which was one of the \nreported issues. Maybe there should be a timeout added.\n\n> On RPI-level machines, this could fail reliably.\n\nDunno, not sure what RPI means. Probably not \"Retail Price Index\"… maybe \nRaspberry Pi? In that case, the 0-4 second leeway is intended to be loose \nenough to accommodate such hosts, but I cannot test that.\n\n> I am not completely sure what's the additional value we can get from \n> that extra test, to be honest.\n\nThis would be to detect a somehow stuck process. It could be looser if \nnecessary. 
Or removed, or preferably commented out, or enabled with some \noptions (eg an environment variable? configure option?). Such control \ncould also skip all 3 calls, in which case the 2 seconds is not an issue.\n\n> +# $nthreads threads, 2 seconds, but due to timing imprecision we might get\n> +# only 1 or as many as 3 progress reports per thread.\n> +check_pgbench_logs($bdir, '001_pgbench_log_1', $nthreads, 1, 3,\n> +\tqr{^\\d{10,} \\d{1,2} \\d+ \\d+ \\d+ \\d+ \\d+ \\d+ \\d+ \\d+ \\d+$});\n> +\n> Now this one is good and actually stable thanks to the fact that we'd\n> get at least the final logs, and the main complaint we got to discuss\n> about on this thread was the format of the logs.\n\nYep, this test would have probably detected the epoch issue reported by \nGreg.\n\n> I would say to give up on the first test, and keep the second.\n\nYou mean remove the time check and keep the log check. I'd like to keep a \ntime check, or at least have it in comment so that I can enable it simply.\n\n> But, is this regex correct? Shouldn't we check for six integer fields \n> only with the first one having a minimal number of digits for the epoch?\n\nEpoch (seconds since 1970-01-01?) is currently 10 digits. Not sure how \nwell it would work if some hosts have another zero start date.\n\nGiven the options of the bench run, there are that many fields in the \nlog output, I'm not sure why we would want to check for less?\n\n-- \nFabien.", "msg_date": "Wed, 23 Jun 2021 08:26:32 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Wed, Jun 23, 2021 at 08:26:32AM +0200, Fabien COELHO wrote:\n>> Could we make that shorter at 1s? That will shorten the duration of\n>> the test run. It is easy to miss that this test is for\n>> --aggregate-interval (aka the logAgg() path) without a comment.\n> \n> It is for -T, -P and --aggregate-interval. 
The units of all these are\n> seconds, the minimum is 1, I put 2 so that it is pretty sure to get some\n> output. We could try 1, but I'm less confident that an output is ensured in\n> all cases on a very slow host which may decide to shorten the run before\n> having shown a progress for instance.\n\nCould it be possible to document the intention of the test and its\ncoverage then? With the current patch, one has to guess what's the\nintention behind this case.\n\n>> +# $nthreads threads, 2 seconds, but due to timing imprecision we might get\n>> +# only 1 or as many as 3 progress reports per thread.\n>> +check_pgbench_logs($bdir, '001_pgbench_log_1', $nthreads, 1, 3,\n>> +\tqr{^\\d{10,} \\d{1,2} \\d+ \\d+ \\d+ \\d+ \\d+ \\d+ \\d+ \\d+ \\d+$});\n>> +\n>> Now this one is good and actually stable thanks to the fact that we'd\n>> get at least the final logs, and the main complaint we got to discuss\n>> about on this thread was the format of the logs.\n> \n> Yep, this test would have probably detected the epoch issue reported by\n> Greg.\n\n(Sorry I missed the use of throttle_delay that would generate 10\nfields in the log file)\n\nHm.. Could it be possible to tighten a bit the regex used here then?\nI was playing with it and it is not really picky in terms of patterns \nallowed or rejected. The follow-up checks for check_pgbench_logs()\ncould be a bit more restrictive as well, but my regex-fu is bad.\n\n>> I would say to give up on the first test, and keep the second.\n> \n> You mean remove the time check and keep the log check. I'd like to keep a\n> time check, or at least have it in comment so that I can enable it simply.\n\nI'd rather avoid tests that tend to fail on slow machines. 
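As an aside on the pattern being debated here: the `\d{10,}` leading field works because epoch-seconds timestamps have had 10 digits since September 2001 and will keep them until the year 2286. A minimal Python sketch of the same log-line check, outside the TAP suite; the sample lines reuse the aggregate format quoted later in this thread, and everything else is illustrative:

```python
import re
from datetime import datetime, timezone

# Sketch of the aggregate-log check discussed above: eleven integer
# fields, the first being an epoch timestamp of at least 10 digits.
agg_re = re.compile(r'^\d{10,} \d{1,2} \d+ \d+ \d+ \d+ \d+ \d+ \d+ \d+ \d+$')

good = "1624498086 13 27632 60597490 1683 2853 3227 883179 120 386 123"
bad = "86 13 27632 60597490 1683 2853 3227 883179 120 386 123"  # bogus epoch

print(bool(agg_re.match(good)))  # True
print(bool(agg_re.match(bad)))   # False: first field too short to be an epoch

# The 10-digit window for epoch-seconds values:
print(datetime.fromtimestamp(10**9, tz=timezone.utc).year)       # 2001
print(datetime.fromtimestamp(10**10 - 1, tz=timezone.utc).year)  # 2286
```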
There is a\nflotilla in the buildfarm.\n--\nMichael", "msg_date": "Wed, 23 Jun 2021 17:06:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "Bonjour Michaël,\n\n> Could it be possible to document the intention of the test and its\n> coverage then? With the current patch, one has to guess what's the\n> intention behind this case.\n\nOk, see attached.\n\n>>> +check_pgbench_logs($bdir, '001_pgbench_log_1', $nthreads, 1, 3,\n>>> +\tqr{^\\d{10,} \\d{1,2} \\d+ \\d+ \\d+ \\d+ \\d+ \\d+ \\d+ \\d+ \\d+$});\n>\n> Hm.. Could it be possible to tighten a bit the regex used here then?\n\n> I was playing with it and it is not really picky in terms of patterns\n> allowed or rejected.\n\n> The follow-up checks for check_pgbench_logs() could be a bit more \n> restrictive as well, but my regex-fu is bad.\n\nGiven the probabilistic nature of a --rate run and the variability of \nhosts which may run the tests, it is hard to be more specific than \\d+ for \nactual performance data. The run may try 0 or 50 transactions within a \nsecond (both with pretty low probabilities), so the test mostly checks the \nformat and some basic sanity on the output and logs.\n\n>>> I would say to give up on the first test, and keep the second.\n>>\n>> You mean remove the time check and keep the log check. I'd like to keep a\n>> time check, or at least have it in comment so that I can enable it simply.\n>\n> I'd rather avoid tests that tend to fail on slow machines. 
There is a\n> flotilla in the buildfarm.\n\nCommented out in attached v9.\n\n-- \nFabien.", "msg_date": "Wed, 23 Jun 2021 11:37:58 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On 2021-Jun-23, Fabien COELHO wrote:\n\n> +# cool check that we are around 2 seconds\n> +# The rate may results in an unlucky schedule which triggers\n> +# an early exit, hence the loose bound.\n> +#\n> +# THIS TEST IS COMMENTED OUT BUT PLEASE LET IT THERE SO THAT\n> +# IT CAN BE ENABLED EASILY.\n> +#\n> +## ok(1.5 < $delay && $delay < 2.5, \"-T 2 run around 2 seconds\");\n\nI think you should use Test::More's \"skip\" for this, perhaps something\nlike this:\n\nSKIP: {\n skip \"This test is unreliable\";\n\n # explain why\n ok(1.5 < $delay && $delay < 2.5, \"-T 2 run around 2 seconds\");\n}\n\n... or, actually, even better would be to use a TODO block, so that the\ntest is run and reports its status, but if it happens not to succeed it\nwill not cause the whole test to fail. That way you'll accumulate some\nevidence that may serve to improve the test in the future until it\nworks fully:\n\nTODO: {\n local $TODO = \"This test is unreliable\";\n\n ok(1.5 < $delay && $delay < 2.5, \"-T 2 run around 2 seconds\");\n}\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"El Maquinismo fue proscrito so pena de cosquilleo hasta la muerte\"\n(Ijon Tichy en Viajes, Stanislaw Lem)\n\n\n", "msg_date": "Wed, 23 Jun 2021 09:08:35 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "Ola Álvaro,\n\n> ... or, actually, even better would be to use a TODO block, so that the\n> test is run and reports its status, but if it happens not to succeed it\n> will not cause the whole test to fail. 
That way you'll accumulate some\n> evidence that may serve to improve the test in the future until it\n> works fully:\n>\n> TODO: {\n> local $TODO = \"This test is unreliable\";\n>\n> ok(1.5 < $delay && $delay < 2.5, \"-T 2 run around 2 seconds\");\n> }\n\nThanks for the hint! Why not, having the ability to collect data is a good \nthing, so attached v10 does that. If something goes wrong, the TODO section \ncould be extended around all calls.\n\n-- \nFabien.", "msg_date": "Wed, 23 Jun 2021 22:01:28 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Wed, Jun 23, 2021 at 10:01:28PM +0200, Fabien COELHO wrote:\n> Thanks for the hint! Why not, having the ability to collect data is a good\n> thing, so attached v10 does that. If something goes wrong, the TODO section\n> could be extended around all calls.\n\n+check_pgbench_logs($bdir, '001_pgbench_log_1', $nthreads, 1, 3,\n+ qr{^\\d{10,} \\d{1,2} \\d+ \\d+ \\d+ \\d+ \\d+ \\d+ \\d+ \\d+ \\d+$});\nFWIW, I am still seeing problems with the regex pattern you are using\nhere, because this fails to detect the number of fields we should have\ngenerated here, for one. If you are not convinced, just run your new\ntest and extend or reduce the amount of data generated by logAgg() in\nyour patch: the tests will still pass.\n\nSo I have investigated this stuff in detail. The regular expressions\nare correctly shaped, but the use of grep() in check_pgbench_logs()\nseems to be incorrect.\n\nFor example, let's take an aggregate report generated by your new\ntest:\n\"1624498086 13 27632 60597490 1683 2853 3227 883179 120 386 123\"\nHere are some extra ones, shorter and longer:\n\"1624498086 13 27632 60597490 1683 2853 3227 8831\";\n\"1624498086 13 27632 60597490 1683 2853 3227 883179 120 386 123 123\";\n\nUsing grep() with \"$re\" results in all the fields matching. 
Using on\nthe contrary \"/$re/\" in grep(), like list_files(), would only match\nthe first one, which is correct. Please see attached a small script\nto show my point, called perl_grep.pl.\n\nWith this issue fixed, I have bumped into what looks like a different\nbug in the tests. 001_pgbench_log_2 uses pgbench with 2 clients, but\nexpects only patterns in the logs where the first column value uses\nonly 0. With two clients, those first values can be either 0 or 1 due\nto the client ID set.\n\nIt seems to me that we had better fix this issue and back-patch where\nthis has been introduced so as we have exact match checks with the log\nformats, no? Please see the attached.\n\nThoughts?\n--\nMichael", "msg_date": "Thu, 24 Jun 2021 11:21:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "Bonjour Michaël,\n\n> Using grep() with \"$re\" results in all the fields matching. Using on\n> the contrary \"/$re/\" in grep(), like list_files(), would only match\n> the first one, which is correct.\n\nOk, good catch. Perl is kind of a strange language.\n\n> With this issue fixed, I have bumped into what looks like a different \n> bug in the tests. 001_pgbench_log_2 uses pgbench with 2 clients, but \n> expects only patterns in the logs where the first column value uses only \n> 0. With two clients, those first values can be either 0 or 1 due to the \n> client ID set.\n\nIndeed. The test passes because the number of expected lines is quite\n\n> It seems to me that we had better fix this issue and back-patch where\n> this has been introduced so as we have exact match checks with the log\n> formats, no? 
Please see the attached.\n\nOk, however the regex should be \"^[01] ...\".\n\nAttached v11 with your fixes + the above regex fix.\n\n-- \nFabien.", "msg_date": "Thu, 24 Jun 2021 08:46:03 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "\nOn 6/24/21 2:46 AM, Fabien COELHO wrote:\n>\n> Bonjour Michaël,\n>\n>> Using grep() with \"$re\" results in all the fields matching. Using on\n>> the contrary \"/$re/\" in grep(), like list_files(), would only match\n>> the first one, which is correct.\n>\n> Ok, good catch. Perl is kind of a strange language.\n\n\nNot really, the explanation is fairly simple:\n\ngrep returns the values for which the test is true.\n\ngrep (\"$re\",@values) doesn't perform a regex test against the values, it\ntests the truth of \"$re\" for each value, i.e. it's more or less the same\nas grep (1, @values), which will always return the whole @values list.\n\nBy contrast grep (/$re/, @values) returns those elements of @values that\nmatch the regex.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 24 Jun 2021 08:03:27 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Thu, Jun 24, 2021 at 08:03:27AM -0400, Andrew Dunstan wrote:\n> On 6/24/21 2:46 AM, Fabien COELHO wrote:\n>>> Using grep() with \"$re\" results in all the fields matching. Using on\n>>> the contrary \"/$re/\" in grep(), like list_files(), would only match\n>>> the first one, which is correct.\n>>\n>> Ok, good catch. Perl is kind of a strange language.\n\nOkay, I have extracted this part from your patch, and back-patched\nthis fix down to 11. The comments were a good addition, so I have\nkept them. 
I have also made the second regex of check_pgbench_logs()\npickier with the client ID value expected, as it can only be 0.\n\n> By contrast grep (/$re/, @values) returns those elements of @values that\n> match the regex.\n\nThanks for the details here.\n--\nMichael", "msg_date": "Fri, 25 Jun 2021 07:25:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "Bonjour Michaël,\n\n> Okay, I have extracted this part from your patch, and back-patched\n> this fix down to 11. The comments were a good addition, so I have\n> kept them. I have also made the second regex of check_pgbench_logs()\n> pickier with the client ID value expected, as it can only be 0.\n\nAttached the remaining part of the patch to fix known issues on pgbench \nlogging.\n\nI've added an entry on the open item on the wiki. 
I'm unsure about who the\n> owner should be.\n\nThere is already an item: \"Incorrect time maths in pgbench\".\n--\nMichael", "msg_date": "Wed, 30 Jun 2021 17:05:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Wed, Jun 30, 2021 at 8:05 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Jun 30, 2021 at 09:45:47AM +0200, Fabien COELHO wrote:\n> > Attached the remaining part of the patch to fix known issues on pgbench\n> > logging.\n> >\n> > I've added an entry on the open item on the wiki. I'm unsure about who the\n> > owner should be.\n>\n> There is already an item: \"Incorrect time maths in pgbench\".\n\nFabien, thanks for the updated patch, I'm looking at it. I removed\nthe duplicate item. More soon.\n\n\n", "msg_date": "Wed, 30 Jun 2021 20:21:58 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "\nHello Thomas,\n\n>>> I've added an entry on the open item on the wiki. I'm unsure about who the\n>>> owner should be.\n>>\n>> There is already an item: \"Incorrect time maths in pgbench\".\n\nArgh *shoot*, I went over the list too quickly, looking for \"log\" as a \nkeyword.\n\n> Fabien, thanks for the updated patch, I'm looking at it. I removed\n> the duplicate item. More soon.\n\nThanks. 
Sorry for the noise.\n\n-- \nFabien.\n\n\n", "msg_date": "Wed, 30 Jun 2021 11:40:38 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": ">> Fabien, thanks for the updated patch, I'm looking at it.\n\nAfter looking at it again, here is an update which ensures 64 bits on \nepoch_shift computation.\n\n-- \nFabien.", "msg_date": "Wed, 30 Jun 2021 11:55:38 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Wed, Jun 30, 2021 at 9:55 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> >> Fabien, thanks for the updated patch, I'm looking at it.\n>\n> After looking at it again, here is an update which ensures 64 bits on\n> epoch_shift computation.\n\nHi Fabien,\n\nThe code in pgbench 13 aggregates into buckets that begin on the\nboundaries of wall clock seconds, because it is triggered by changes\nin time_t. In the current patch, we aggregate data into buckets that\nbegin on the boundaries of whole seconds since start_time. Those\nboundaries are not aligned with wall clock seconds, and yet we print\nout the times rounded to wall clock seconds.\n\nWith the following temporary hack:\n\n static void\n logAgg(FILE *logfile, StatsData *agg)\n {\n- fprintf(logfile, INT64_FORMAT \" \" INT64_FORMAT \" %.0f %.0f %.0f %.0f\",\n- (agg->start_time + epoch_shift) / 1000000,\n+ fprintf(logfile, /*INT64_FORMAT*/ \"%f \" INT64_FORMAT \" %.0f\n%.0f %.0f %.0f\",\n+ (agg->start_time + epoch_shift) / 1000000.0,\n\n... you can see what I mean:\n\n1625115080.840406 325 999256 3197232764 1450 6846\n\nPerhaps we should round the start time of the first aggregate down to\nthe nearest wall clock second? 
That would mean that the first\naggregate misses a part of a second (as it does in pgbench 13), but\nall later aggregates begin at the time we write in the log (as it does\nin pgbench 13). That is, if we log 1625115080 we mean \"all results >=\n1625115080.000000\". It's a small detail, but it could be important\nfor someone trying to correlate the log with other data. What do you\nthink?\n\n\n", "msg_date": "Thu, 1 Jul 2021 17:49:42 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "Hello Thomas,\n\n>> After looking at it again, here is an update which ensures 64 bits on\n>> epoch_shift computation.\n>\n> The code in pgbench 13 aggregates into buckets that begin on the\n> boundaries of wall clock seconds, because it is triggered by changes\n> in time_t. In the current patch, we aggregate data into buckets that\n> begin on the boundaries of whole seconds since start_time. Those\n> boundaries are not aligned with wall clock seconds, and yet we print\n> out the times rounded to wall clock seconds.\n\nYes, I noticed this small change, and did not feel it was an issue at the \ntime.\n\nI thought of doing something like the format change you are suggesting. 
It's a small \n> detail, but it could be important for someone trying to correlate the \n> log with other data. What do you think?\n\nI think that you are right. The simplest way is to align on whole seconds, \nwhich is easier than changing the format and have complaints about that, \nor not align and have complaints about the timestamp being rounded.\n\nAttached a v14 in that spirit.\n\n-- \nFabien.", "msg_date": "Thu, 1 Jul 2021 10:50:32 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Thu, Jul 1, 2021 at 8:50 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> Attached a v14 in that spirit.\n\nThanks! This doesn't seem to address the complaint, though. Don't\nyou need to do something like this? (See also attached.)\n\n+ initStats(&aggs, start - (start + epoch_shift) % 1000000);\n\nThat should reproduce what pgbench 13 does implicitly when it uses\ntime(NULL). Namely, it rewinds to the start of the current *wall\nclock* second, so that all future aggregates also start at round\nnumber wall clock times, at the cost of making the first aggregate\nmiss out on a fraction of a second.\n\nI wonder if some of the confusion on the other thread about the final\naggregate[1] was due to this difference. By rounding down, we get a\n\"head start\" (because the first aggregate is short), so we usually\nmanage to record the expected number of aggregates before time runs\nout. It's a race though. Your non-rounding version was more likely\nto lose the race and finish before the final expected aggregate was\nlogged, so you added code to force a final aggregate to be logged. Do\nI have this right? I'm not entirely sure how useful a partial final\naggregate is (it's probably one you have to throw away, like the first\none, no? Isn't it better if we only have to throw away the first\none?). 
I'm not sure, but if we keep that change, a couple of very\nminor nits: I found the \"tx\" parameter name a little confusing. Do\nyou think it's clearer if we change it to \"final\" (with inverted\nsense)? For the final aggregate, shouldn't we call doLog() only if\nagg->cnt > 0?\n\nI think I'd be inclined to take that change back out though, making\nthis patch very small and net behaviour like pgbench 13, if you agree\nwith my explanation for why you had to add it and why it's not\nactually necessary with the fixed rounding shown above. (And perhaps\nin v15 we might consider other ideas like using hi-res times in the\nlog and not rounding, etc, a topic for later.)\n\nI don't really see the value in the test that checks that $delay falls\nin the range 1.5s - 2.5s and then ignores the result. If it hangs\nforever, we'll find out about it, and otherwise no human or machine\nwill ever care about that test. I removed it from this version. Were\nyou really attached to it?\n\nI made some very minor language tweaks in comments (we don't usually\nshorten \"benchmark\" to \"bench\" in English, \"series\" keeps the -s in\nsingular (blame the Romans), etc).\n\nI think we should make it clear when we mean the *Unix* epoch (a\ncomment \"switch to epoch\" isn't meaningful on its own, to me at\nleast), so I changed that in a few places.\n\n[1] https://www.postgresql.org/message-id/alpine.DEB.2.22.394.2106102323310.3698412%40pseudo", "msg_date": "Fri, 9 Jul 2021 00:17:28 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Thu, Jun 17, 2021 at 7:18 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> I'm not sure we have transaction lasts for very short time that\n> nanoseconds matters.\n>\n\nNanoseconds may not matter yet, but they could be handy when for\nexample we want to determine the order of parallel query executions.\n\nWe are less than an order 
of magnitude away from being able to do 1M\ninserts/updates/deletes per second, so microseconds already are not\nalways 100% reliable.\n\nWe could possibly move to using LSNs fetched as part of the queries\nfor this case, but this will surely introduce more problems than it\nsolves :)\n\nCheers\n-----\nHannu Krosing\nGoogle Cloud - We have a long list of planned contributions and we are hiring.\nContact me if interested.\n\n\n", "msg_date": "Thu, 8 Jul 2021 16:46:51 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "Hello Thomas,\n\n> Thanks! This doesn't seem to address the complaint, though. Don't\n> you need to do something like this? (See also attached.)\n>\n> + initStats(&aggs, start - (start + epoch_shift) % 1000000);\n\nISTM that this is: (start + epoch_shift) / 1000000 * 1000000\n\n> That should reproduce what pgbench 13 does implicitly when it uses\n> time(NULL).\n\nI understand that you are shifting the aggregate internal start time to \nepoch, however ISTM that other points in the program are not shifted \nconsistently with this, eg the while comparison in doLog? Also if start \ntime is log shifted, then it should not be shifted again when printed (in \nlogAgg). Attached version tries to be consistent.\n\n> Namely, it rewinds to the start of the current *wall clock* second, so \n> that all future aggregates also start at round number wall clock times, \n> at the cost of making the first aggregate miss out on a fraction of a \n> second.\n\nISTM that it was already wall clock time, but not epoch wall clock.\nI'm okay with realigning aggregates on full seconds.\n\n> I wonder if some of the confusion on the other thread about the final\n> aggregate[1] was due to this difference.\n\nDunno. 
The parallel execution with thread is a pain when handling details.\n\n> By rounding down, we get a \"head start\" (because the first aggregate is \n> short), so we usually manage to record the expected number of aggregates \n> before time runs out.\n\nFine with me if everything is consistent.\n\n> It's a race though. Your non-rounding version was more likely\n> to lose the race and finish before the final expected aggregate was\n> logged, so you added code to force a final aggregate to be logged.\n\nISTM that we always want to force because some modes can have low tps, and \nthe aggregates should be \"full\".\n\n> Do I have this right? I'm not entirely sure how useful a partial final \n> aggregate is\n\nIf you ask for 10 seconds run with 1 aggregate per second, you expect to \nsee (at least, about) 10 lines, and I want to ensure that, otherwise \npeople will ask questions, tools will have to look for special cases, \nmissing rows, whatever, and it will be a pain there. We want to produce \nsomething simple, consistent, reliable, that tools can depend on.\n\n> (it's probably one you have to throw away, like the first one, no? \n> Isn't it better if we only have to throw away the first one?).\n\nThis should be the user decision to drop it or not, not the tool producing \nit, IMO.\n\n> I'm not sure, but if we keep that change, a couple of very minor nits: \n> I found the \"tx\" parameter name a little confusing. Do you think it's \n> clearer if we change it to \"final\" (with inverted sense)?\n\nI agree that tx is not a very good name, but the inversion does not look \nright to me. 
The \"normal\" behavior is\n\n> For the final aggregate, shouldn't we call doLog() only if agg->cnt > 0?\n\nNo, I think that we should want to have all aggregates, even with zeros, \nso that the user can expect a deterministic number of lines.\n\n> I think I'd be inclined to take that change back out though, making this \n> patch very small and net behaviour like pgbench 13, if you agree with my \n> explanation for why you had to add it and why it's not actually \n> necessary with the fixed rounding shown above. (And perhaps in v15 we \n> might consider other ideas like using hi-res times in the log and not \n> rounding, etc, a topic for later.)\n\nI think that I'm moslty okay.\n\n> I don't really see the value in the test that checks that $delay falls\n> in the range 1.5s - 2.5s and then ignores the result. If it hangs\n> forever, we'll find out about it, and otherwise no human or machine\n> will ever care about that test. I removed it from this version. Were\n> you really attached to it?\n\nYES, REALLY! It would just have caught quite a few of the regressions we \nare trying to address here. I want it there even if ignored because I'll \nlook for it to avoid regressions in the future. If the test is actually \nremoved, recreating it is a pain. If you really want to disactivate it, \nuse if(0) but PLEASE let it there so that it can ne reactivated for tests \nvery simply, not bad maintaining some test outside of the tree.\n\nAlso, if farm logs show that it is okay on all animals, it can be switched \non by removing the ignore trick.\n\n> I made some very minor language tweaks in comments (we don't usually\n> shorten \"benchmark\" to \"bench\" in English, \"series\" keeps the -s in\n> singular (blame the Romans), etc).\n\nThanks! 
My English is kind of fuzzy in the details:-)\n\n> I think we should make it clear when we mean the *Unix* epoch (a\n> comment \"switch to epoch\" isn't meaningful on its own, to me at\n> least), so I changed that in a few places.\n\nOk.\n\nAttached v16:\n - tries to be consistent wrt epoch & aggregates, aligning to Unix epoch\n as you suggested.\n - renames tx as accumulate, but does not invert it.\n - always shows aggregates so that the user can depend on the output,\n even if stats are zero, because ISTM that clever must be avoided.\n - put tests back, even if ignored, because I really want them available\n easily.\n\nWhen/if you get to commit this patch, eventually, do not forget that I'm \npushing forward fixes contributed by others, including Kyotaro Horiguchi \nand Yugo Nagata.\n\n-- \nFabien.", "msg_date": "Thu, 8 Jul 2021 19:15:24 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "Hello Hannu,\n\n>> I'm not sure we have transaction lasts for very short time that\n>> nanoseconds matters.\n>\n> Nanoseconds may not matter yet, but they could be handy when for\n> example we want to determine the order of parallel query executions.\n>\n> We are less than an order of magnitude away from being able to do 1M\n> inserts/updates/deletes per second, so microseconds already are not\n> always 100% reliable.\n\nISTM that 1M tps would be with really a lot of parallel clients, thus the \nlatency of each would be quite measurable, so that µs would still make \nsense for measuring their performance? 
If an actual network is involved, \nthe network latency is already 100-200 µs even before executing any code.\n\n-- \nFabien.", "msg_date": "Thu, 8 Jul 2021 19:25:38 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Fri, Jul 9, 2021 at 5:15 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > Thanks! This doesn't seem to address the complaint, though. Don't\n> > you need to do something like this? (See also attached.)\n> >\n> > + initStats(&aggs, start - (start + epoch_shift) % 1000000);\n>\n> ISTM that this is: (start + epoch_shift) / 1000000 * 1000000\n\nSure, it's equivalent if you also change doLog() to match the change\nin epoch, as your v16 did. Your version seems fine to me. The key\nthing is that we only start new aggregates when the Unix epoch time\ncrosses over a XXX.000000 boundary, since we're only logging the XXX\npart. That's much like(*) pgbench13, and I'm happy however you want\nto code that. Case closed on that point. Thanks!\n\n> > Isn't it better if we only have to throw away the first one?).\n>\n> This should be the user decision to drop it or not, not the tool producing\n> it, IMO.\n\nLet me try this complaint again. It's admittedly a minor detail, but\nmy goal in this thread is to match pgbench13's treatment of aggregate\nboundaries and partial aggregates, so that we can close the open item\nfor 14 with a minimal fix that doesn't make any unnecessary changes.\nDiscussions about improvements or pre-existing problems can wait.\n\nFirst, let me demonstrate that pgbench13 throws away final partial\naggregates. 
I hacked REL_13_STABLE like so:\n\n if (agg_interval > 0)\n {\n /* log aggregated but not yet reported transactions */\n+fprintf(thread->logfile, \"XXX log aggregated but not yet reported\ntransactions: aggs.cnt = %ld\\n\", aggs.cnt);\n doLog(thread, state, &aggs, false, 0, 0);\n }\n fclose(thread->logfile);\n\nI ran pgbench -T6 --aggregate-interval 1 -l -S postgres, and it\nproduced a log file containing:\n\n===BOF===\n1625782245 7974 428574 24998642 49 683\n1625782246 19165 998755 53217669 49 310\n1625782247 19657 998868 51577774 47 189\n1625782248 19707 998898 51660408 47 189\n1625782249 19969 998867 50454999 48 156\n1625782250 19845 998877 51071013 47 210\nXXX log aggregated but not yet reported transactions: aggs.cnt = 10988\n===EOF===\n\nWe can see three interesting things:\n\n1. The first aggregate is partial (only ~7k transactions, because it\nstarted partway through a second). Users have to throw away that\nfirst aggregate because its statistics are noise. That is the price\nto pay to have later aggregates start at the time they say.\n\n2. We get 5 more full aggregates (~19k transactions). That's a total\nof 6 aggregates, which makes intuitive sense with -T6.\n\n3. At the end, the extra call to doLog() did nothing, and yet cnt =\n10988. That's data we're throwing away, because Unix epoch time has\nnot advanced far enough to reach a new aggregate start time (it's not\nimpossible, but probability is very low). Checking the commit log, I\nsee that the code that claims to log the final aggregate came from\ncommit b6037664960 (2016); apparently it doesn't do what the comments\nseem to think it does (did that ever work? 
Should perhaps be cleaned\nup, but separately, it's not an open item for 14).\n\nNow, in your recent patches you force that final partial aggregate to\nbe logged in that case with that new flag mechanism, as we can see:\n\n===BOF===\n1625783726 11823 609143 32170321 48 549\n1625783727 19530 998995 52383115 47 210\n1625783728 19468 999026 52208898 46 181\n1625783729 19826 999001 51238427 46 185\n1625783730 19195 999110 52841674 49 172\n1625783731 18572 998992 56028876 48 318\n1625783732 7484 388620 20951100 48 316\n===EOF===\n\n1. We get a partial initial aggregate just like 13. That sacrificial\naggregate helps us synchronize the rest of the aggregates with the\nlogged timestamps. Good.\n\n2. We get 5 full aggregates (~19k transactions) just like 13. As in\n13, that's quite likely, because the first one was \"short\" so we\nalmost always reach the end of the 6th one before -T6 runs out of\nsand. Good.\n\n3. We get a new partial aggregate at the end. Users would have to\nthrow that one away too. This is not a big deal, but it's a change in\nbehaviour that should be discussed.\n\nGiven that that last one is a waste of pixels and a (so far)\nunjustified change in behaviour, I propose, this time with a little\nmore force and an updated patch, that we abandon that part of the\nchange. I submit that you only added that because your earlier\npatches didn't have the partial aggregate at the start, so then it\noften didn't produce the 6th line of output. So, you forced it to log\nwhatever it had left, even though the full time hadn't elapsed yet.\nNow we don't need that.\n\nThe patch and resulting code are simpler, and the user experience matches 13.\n\nSee attached.\n\n> > I don't really see the value in the test that checks that $delay falls\n> > in the range 1.5s - 2.5s and then ignores the result. If it hangs\n> > forever, we'll find out about it, and otherwise no human or machine\n> > will ever care about that test. I removed it from this version. 
Were\n> > you really attached to it?\n>\n> YES, REALLY! It would just have caught quite a few of the regressions we\n> are trying to address here. I want it there even if ignored because I'll\n> look for it to avoid regressions in the future. If the test is actually\n> removed, recreating it is a pain. If you really want to disactivate it,\n> use if(0) but PLEASE let it there so that it can ne reactivated for tests\n> very simply, not bad maintaining some test outside of the tree.\n\nOk, you win :-)\n\n> When/if you get to commit this patch, eventually, do not forget that I'm\n> pushing forward fixes contributed by others, including Kyotaro Horiguchi\n> and Yugo Nagata.\n\nFixed, thanks.\n\n* I say \"much like\" and not \"exactly like\"; of course there may be a\nsubtle difference if ntpd adjusts your clock while a benchmark is\nrunning. Something must give, and 14's coding prefers to keep the\nduration of aggregates stable at exactly X seconds according to the\nhigh precision timer, so that the statistics it reports are\nmeaningful, but 13 prefers to keep the timestamps it logs in sync with\nother software using gettimeofday() and will give you a weird short or\nlong aggregate to achieve that (producing bad statistics). I can see\narguments for both but I'm OK with that change and I see that it is in\nline with your general goal of switching to modern accurate time\ninterfaces.", "msg_date": "Fri, 9 Jul 2021 15:53:37 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "\nHello Thomas,\n\n>>> Isn't it better if we only have to throw away the first one?).\n>>\n>> This should be the user decision to drop it or not, not the tool \n>> producing it, IMO.\n>\n> Let me try this complaint again. [...]\n\nI understand your point.\n\nFor me removing silently the last bucket is not right because the sum of \naggregates does not match the final grand total. 
This change is \nintentional and borders on a bug fix, which is what this patch was doing, \neven if it is also a small behavioral change: We should want the detailed \nand final reports in agreement.\n\nI do agree that users should probably ignore the first and last lines.\n\n> See attached.\n\nWorks for me: patch applies, global and local check ok. I'm fine with it.\n\nIf it was me, I'd still show the last bucket, but it does not matter much.\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 9 Jul 2021 07:14:32 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "On Fri, Jul 9, 2021 at 5:14 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> Works for me: patch applies, global and local check ok. I'm fine with it.\n\nI hoped we were done here but I realised that your check for 1-3 log\nlines will not survive the harsh environment of the build farm.\nAdding sleep(2) before the final doLog() confirms that. I had two\nideas:\n\n1. Give up and expect 1-180 lines. (180s is the current timeout\ntolerance used elsewhere to deal with\nswamped/valgrind/swapping/microsd computers, after a few rounds of\ninflation, so if you need an arbitrary large number to avoid buildfarm\nmeasles that's my suggestion....)\n2. Change doLog() to give up after end_time. But then I think you'd\nneed to make it 0-3 :-(\n\nI think the logging could be tightened up to work the way you expected\nin future work, though. Perhaps we could change all logging to work\nwith transaction start time instead of log-writing time, which doLog()\nshould receive. If you never start a transaction after end_time, then\nthere can never be an aggregate that begins after that, and the whole\nthing becomes more deterministic. That kind of alignment of aggregate\ntiming with whole-run timing could also get rid of those partial\naggregates completely. 
But that's an idea for 15.\n\nSo I think we should do 1 for now. Objections or better ideas?\n\n\n", "msg_date": "Sat, 10 Jul 2021 17:54:11 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "\n>> Works for me: patch applies, global and local check ok. I'm fine with it.\n>\n> I hoped we were done here but I realised that your check for 1-3 log\n> lines will not survive the harsh environment of the build farm.\n> Adding sleep(2) before the final doLog() confirms that. I had two\n> ideas:\n>\n> 1. Give up and expect 1-180 lines. (180s is the current timeout\n> tolerance used elsewhere to deal with\n> swamped/valgrind/swapping/microsd computers, after a few rounds of\n> inflation, so if you need an arbitrary large number to avoid buildfarm\n> measles that's my suggestion....)\n> 2. Change doLog() to give up after end_time. But then I think you'd\n> need to make it 0-3 :-(\n>\n> I think the logging could be tightened up to work the way you expected\n> in future work, though. Perhaps we could change all logging to work\n> with transaction start time instead of log-writing time, which doLog()\n> should receive. If you never start a transaction after end_time, then\n> there can never be an aggregate that begins after that, and the whole\n> thing becomes more deterministic. That kind of alignment of aggregate\n> timing with whole-run timing could also get rid of those partial\n> aggregates completely. But that's an idea for 15.\n>\n> So I think we should do 1 for now. 
Objections or better ideas?\n\nAt least, we know that it is too much.\n\nWhat about moving the test as is in the TODO section with a comment, next \nto the other one, for now?\n\nI hesitated to suggest that before because of the above risks, but I was \nvery naively optimistic that it may pass because the test is not that \ndemanding.\n\nSomeone suggested to have a \"real-time\" configure switch to enable/disable \ntime-sensitive tests.\n\n-- \nFabien.", "msg_date": "Sat, 10 Jul 2021 10:25:22 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "Hello again,\n\n>> I hoped we were done here but I realised that your check for 1-3 log\n>> lines will not survive the harsh environment of the build farm.\n>> Adding sleep(2) before the final doLog() confirms that. I had two\n>> ideas:\n\n>> So I think we should do 1 for now. Objections or better ideas?\n>\n> At least, we know that it is too much.\n\nI misread your point. You think that it should fail, but it is not\ntried yet. I'm rather optimistic that it should not fail, but I'm okay \nwith averting the risk anyway.\n\n> What about moving the test as is in the TODO section with a comment, next to \n> the other one, for now?\n\nI stand by this solution, which should allow us to get some data from the \nfield, as v18 attached. If all is green then the TODO could be removed \nlater.\n\n-- \nFabien.", "msg_date": "Sat, 10 Jul 2021 11:36:13 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "Hi Fabien,\n\nI committed the code change without the new TAP tests, because I\ndidn't want to leave the open item hanging any longer. 
As for the\ntest, ...\n\nOn Sat, Jul 10, 2021 at 9:36 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> >> I hoped we were done here but I realised that your check for 1-3 log\n> >> lines will not survive the harsh environment of the build farm.\n> >> Adding sleep(2) before the final doLog() confirms that. I had two\n> >> ideas:\n\n> I misread your point. You think that it should fail, but it is not\n> tried yet. I'm rather optimistic that it should not fail, but I'm okay\n> with averting the risk anyway.\n\n... I know it can fail, and your v18 didn't fix that, because...\n\n+check_pgbench_logs($bdir, '001_pgbench_log_1', $nthreads, 1, 3,\n\n ^\n |\n\n ... this range can be exceeded.\n\nThat's because extra aggregations are created based on doLog() reading\nthe clock after a transaction is finished, entirely independently of\nthe -T mechanism deciding when to stop the benchmark, and potentially\nmany seconds later in adverse conditions. As I mentioned, you can see\nit fail with your own eyes if you hack the code like so:\n\n if (agg_interval > 0)\n {\n+ /*\n+ * XXX: simulate an overloaded raspberry pi swapping to a microsd\n+ * card or other random delays as we can expect in the build farm\n+ */\n+ sleep(3);\n /* log aggregated but not yet reported transactions */\n doLog(thread, state, &aggs, false, 0, 0);\n }\n\n> I stand by this solution which should allow to get some data from the\n> field, as v18 attached. If all is green then the TODO could be removed\n> later.\n\nI suspect the number of aggregates could be made deterministic, as I\ndescribed in an earlier message. What do you think about doing\nsomething like that first for the next release, before trying to add\nassertions about the number of aggregates? 
I'm with you on the\nimportance of testing, but it seems better to start by making the\nthing more testable.\n\n\n", "msg_date": "Sun, 11 Jul 2021 20:16:56 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "Hello Thomas,\n\n> I committed the code change without the new TAP tests, because I\n> didn't want to leave the open item hanging any longer.\n\nOk. Good.\n\n> As for the test, ... [...]\n\nArgh, so there are no tests that would have caught the regressions:-(\n\n> ... I know it can fail, and your v18 didn't fix that, because...\n>\n> +check_pgbench_logs($bdir, '001_pgbench_log_1', $nthreads, 1, 3,\n> ... this range can be exceeded.\n\nIndeed. I meant to move that one in the TODO section as well, not just the \nprevious call, so that all time-sensitive tests are fully ignored but \nreported, which would be enough for me.\n\n> I suspect the number of aggregates could be made deterministic, as I\n> described in an earlier message. What do you think about doing\n> something like that first for the next release, before trying to add\n> assertions about the number of aggregates?\n\nI think that last time I did something to get more deterministic results \nin pgbench, which involved a few lines of hocus-pocus in pgbench, the \npatch got rejected:-)\n\nAn \"ignored\" test looked like a good compromise to check how things are \ngoing in the farm and to be able to check for more non-regressions when \ndeveloping pgbench, without introducing behavioral changes.\n\n> I'm with you on the importance of testing, but it seems better to start \n> by making the thing more testable.\n\nI'm used to my test patches being rejected, including modifying pgbench \nbehavior to make it more testable. Am I mad enough to retry? 
Maybe, maybe \nnot.\n\nAttached the fully \"ignored\" version of the time features test as a patch.\n\n-- \nFabien.", "msg_date": "Sun, 11 Jul 2021 15:07:09 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "> On 11 Jul 2021, at 15:07, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> Attached the fully \"ignored\" version of the time features test as a patch.\n\nThis version of the patch is failing to apply on top of HEAD, can you please\nsubmit a rebased version?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 4 Nov 2021 13:38:03 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" }, { "msg_contents": "> On 4 Nov 2021, at 13:38, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 11 Jul 2021, at 15:07, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> \n>> Attached the fully \"ignored\" version of the time features test as a patch.\n> \n> This version of the patch is failing to apply on top of HEAD, can you please\n> submit a rebased version?\n\nI'm marking this patch Returned with Feedback, please feel free to resubmit\nthis when there is an updated version of the patch available.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 2 Dec 2021 13:40:23 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pgbench logging broken by time logic changes" } ]
[ { "msg_contents": "Hi,\n\nDuring the recent \"CMU vaccination\" talk given by Robert [1], a couple \nof the attendees (some of which were engineers working on various other \ndatabase systems) asked whether PostgreSQL optimizer uses sketches. \nWhich it does not, as far as I'm aware. Perhaps some of our statistics \ncould be considered sketches, but we've not using data structures like \nhyperloglog, count-min sketch, etc.\n\nBut it reminded me that I thought about using one of the common sketches \nin the past, namely the Count-Min sketch [2], which is often mentioned \nas useful to estimating join cardinalities. There's a couple papers \nexplaining how it works [3], [4], [5], but the general idea is that it \napproximates frequency table, i.e. a table tracking frequencies for all \nvalues. Our MCV list is one way to do that, but that only keeps a \nlimited number of common values - for the rest we approximate the \nfrequencies as uniform distribution. When the MCV covers only a tiny \nfraction of the data, or missing entirely, this may be an issue.\n\nWe can't possibly store exact frequencies all values for tables with \nmany distinct values. The Count-Min sketch works around this by tracking \nfrequencies in a limited number of counters - imagine you have 128 \ncounters. To add a value to the sketch, we hash it and the hash says \nwhich counter to increment.\n\nTo estimate a join size, we simply calculate \"dot product\" of the two \nsketches (which need to use the same number of counters):\n\n S = sum(s1(i) * s2(i) for i in 1 .. 128)\n\nThe actual sketches have multiple of those arrays (e.g. 8) using \ndifferent hash functions, and we use the minimum of the sums. That \nlimits the error, but I'll ignore it here for simplicity.\n\nThe attached patch is a very simple (and perhaps naive) implementation \nadding count-min sketch to pg_statistic for all attributes with a hash \nfunction (as a new statistics slot kind), and considering it in \nequijoinsel_inner. 
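To make the mechanics concrete, here is a toy Python version of the sketch and of the dot-product join estimate. This is only an illustration of the idea above, not the C code in the patch - in particular the salted-SHA256 hashing is an arbitrary choice:

```python
import hashlib

class CountMinSketch:
    """Toy count-min sketch: depth hash rows, each with width counters."""

    def __init__(self, depth=8, width=128):
        self.depth = depth
        self.width = width
        self.counters = [[0] * width for _ in range(depth)]

    def _bucket(self, row, value):
        # Salt the value with the row number to get depth "independent" hashes.
        digest = hashlib.sha256(("%d:%s" % (row, value)).encode()).hexdigest()
        return int(digest, 16) % self.width

    def add(self, value):
        for row in range(self.depth):
            self.counters[row][self._bucket(row, value)] += 1


def join_size_estimate(s1, s2):
    """Dot product per row, minimum across rows (an upper bound on the
    join size, within the epsilon error with high probability)."""
    return min(sum(a * b for a, b in zip(r1, r2))
               for r1, r2 in zip(s1.counters, s2.counters))
```

Building both sketches from a single repeated value gives an exact estimate; squeezing many distinct values into few counters is what makes it over-estimate.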
There's a GUC use_count_min_sketch to make it easier \nto see how it works.\n\nA simple example\n\n create table t1 (a int, b int);\n create table t2 (a int, b int);\n\n insert into t1 select pow(random(), 2) * 1000, i\n from generate_series(1,30000) s(i);\n insert into t2 select pow(random(), 2) * 1000, i\n from generate_series(1,30000) s(i);\n\n analyze t1, t2;\n\n explain analyze select * from t1 join t2 using (a);\n\n QUERY PLAN\n ------------------------------------------------------------------\n Hash Join (cost=808.00..115470.35 rows=8936685 width=12)\n (actual time=31.231..1083.330 rows=2177649 loops=1)\n\n\nSo it's about 4x over-estimated, while without the count-min sketch it's \nabout 2x under-estimated:\n\n set use_count_min_sketch = false;\n\n QUERY PLAN\n ------------------------------------------------------------------\n Merge Join (cost=5327.96..18964.16 rows=899101 width=12)\n (actual time=60.780..2896.829 rows=2177649 loops=1)\n\nMore about this a bit later.\n\n\nThe nice thing about the count-min sketch is that there are pretty clear \nboundaries for the error:\n\n size(t1,t2) <= dot_product(s1,s2) <= size(t1,t2) + epsilon * size(t1) * size(t2)\n\nwhere s1/s2 are sketches on t1/t2, and epsilon is the relative error. The user \nmay pick epsilon, and that determines the size of the necessary sketch as \n2/epsilon. So with 128 buckets, the relative error is ~1.6%.\n\nThe trouble here is that this is relative to the cartesian product of the \ntwo relations. So with two relations, each 30k rows, the error is up to \n~14.5M. Which is not great. We can pick a lower epsilon value, but that \nincreases the sketch size.\n\nWhere does the error come from? Each counter combines frequencies for \nmultiple distinct values. So for example with 128 counters and 1024 \ndistinct values, each counter represents ~8 values on average. But \nthe dot product ignores this - it behaves as if all the frequency was for \na single value. 
It calculates the worst case for the bucket, because if \nyou split the frequency e.g. in half, the estimate is always lower\n\n (f/2)^2 + (f/2)^2 < f^2\n\nSo maybe this could calculate the average number of items per counter \nand correct for this, somehow. We'd lose some of the sketch guarantees, \nbut maybe it's the right thing to do.\n\nThere's a bunch of commented-out code doing this in different ways, and \nwith the geometric mean variant the result looks like this:\n\n QUERY PLAN\n ------------------------------------------------------------------\n Merge Join (cost=5328.34..53412.58 rows=3195688 width=12)\n (actual time=64.037..2937.818 rows=2177649 loops=1)\n\nwhich is much closer, but of course that depends on how exactly is the \ndata set skewed.\n\n\nThere's a bunch of other open questions:\n\n1) The papers about count-min sketch seem to be written for streaming \nuse cases, which implies all the inserted data pass through the sketch. \nThis patch only builds the sketch on analyze sample, which makes it less \nreliable. I doubt we want to do something different (e.g. because it'd \nrequire handling deletes, etc.).\n\n\n2) The patch considers the sketch before MCVs, simply because it makes \nit much simpler to enable/disable the sketch, and compare it to MCVs. \nThat's probably not what should be done - if we have MCVs, we should \nprefer using that, simply because it determines the frequencies more \naccurately than the sketch. And only use the sketch as a fallback, when \nwe don't have MCVs on both sides of the join, instead of just assuming \nuniform distribution and relying on ndistinct.\n\nWe may have histograms, but AFAIK we don't use those when estimating \njoins (at least not equijoins). That's another thing we might maybe look \ninto, comparing the histograms to verify how much they overlap. 
But \nthat's irrelevant here.\n\nAnyway, count-min sketches would be a better way to estimate the part \nnot covered by MCVs - we might even assume the uniform distribution for \nindividual counters, because that's what we do without MCVs anyway.\n\n\n3) It's not clear to me how to extend this for multiple columns, so that \nit can be used to estimate joins on multiple correlated columns. For \nMCVs it was pretty simple, but let's say we add this as a new extended \nstatistics kind, and user does\n\n CREATE STATISTICS s (cmsketch) ON a, b, c FROM t;\n\nShould that build sketch on (a,b,c) or something else? The trouble is a \nsketch on (a,b,c) is useless for joins on (a,b).\n\nWe might do something like for ndistinct coefficients, and build a \nsketch for each combination of the columns. The sketches are much larger \nthan ndistinct coefficients, though. But maybe that's fine - with 8 \ncolumns we'd need ~56 sketches, each ~8kB. So that's not extreme.\n\n\nregards\n\n\n[1] \nhttps://db.cs.cmu.edu/events/vaccination-2021-postgresql-optimizer-methodology-robert-haas/\n\n[2] https://en.wikipedia.org/wiki/Count%E2%80%93min_sketch\n\n[3] https://dsf.berkeley.edu/cs286/papers/countmin-latin2004.pdf\n\n[4] http://dimacs.rutgers.edu/~graham/pubs/papers/cmsoft.pdf\n\n[5] http://dimacs.rutgers.edu/~graham/pubs/papers/cmz-sdm.pdf\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 16 Jun 2021 18:23:28 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "PoC: Using Count-Min Sketch for join cardinality estimation" }, { "msg_contents": "On Wed, Jun 16, 2021 at 12:23 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> The attached patch is a very simple (and perhaps naive) implementation\n> adding count-min sketch to pg_statistic for all attributes with a hash\n> function (as a new statistics slot kind), and considering it in\n> equijoinsel_inner. 
There's a GUC use_count_min_sketch to make it easier\n> to see how it works.\n\nCool! I have some high level questions below.\n\n> So it's about 4x over-estimated, while without the count-min sketch it's\n> about 2x under-estimated:\n\n> The nice thing on count-min sketch is that there are pretty clear\n> boundaries for error:\n>\n> size(t1,t2) <= dot_product(s1,2) <= epsilon * size(t1) * size(t2)\n>\n> where s1/s2 are sketches on t1/t2, and epsilon is relative error. User\n> may pick epsilon, and that determines size of the necessary sketch as\n> 2/epsilon. So with 128 buckets, the relative error is ~1.6%.\n>\n> The trouble here is that this is relative to cartesian product of the\n> two relations. So with two relations, each 30k rows, the error is up to\n> ~14.5M. Which is not great. We can pick lower epsilon value, but that\n> increases the sketch size.\n\n+ * depth 8 and width 128 is sufficient for relative error ~1.5% with a\n+ * probability of approximately 99.6%\n\nOkay, so in the example above, we have a 99.6% probability of having less\nthan 14.5M, but the actual error is much smaller. Do we know how tight the\nerror bounds are with some lower probability?\n\n> There's a bunch of other open questions:\n>\n> 1) The papers about count-min sketch seem to be written for streaming\n> use cases, which implies all the inserted data pass through the sketch.\n> This patch only builds the sketch on analyze sample, which makes it less\n> reliable. I doubt we want to do something different (e.g. because it'd\n> require handling deletes, etc.).\n\nWe currently determine the sample size from the number of histogram buckets\nrequested, which is from the guc we expose. 
If these sketches are more\ndesigned for the whole stream, do we have any idea how big a sample we need\nto be reasonably accurate with them?\n\n> 2) The patch considers the sketch before MCVs, simply because it makes\n> it much simpler to enable/disable the sketch, and compare it to MCVs.\n> That's probably not what should be done - if we have MCVs, we should\n> prefer using that, simply because it determines the frequencies more\n> accurately than the sketch. And only use the sketch as a fallback, when\n> we don't have MCVs on both sides of the join, instead of just assuming\n> uniform distribution and relying on ndistinct.\n\n> Anyway, count-min sketches would be a better way to estimate the part\n> not covered by MCVs - we might even assume the uniform distribution for\n> individual counters, because that's what we do without MCVs anyway.\n\nWhen we calculate the sketch, would it make sense to exclude the MCVs that\nwe found? And use both sources for the estimate?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n
And use both sources for the estimate?--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Wed, 16 Jun 2021 19:31:55 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: PoC: Using Count-Min Sketch for join cardinality estimation" }, { "msg_contents": "On 6/17/21 1:31 AM, John Naylor wrote:\n> On Wed, Jun 16, 2021 at 12:23 PM Tomas Vondra \n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>> \n> wrote:\n> \n> > The attached patch is a very simple (and perhaps naive) implementation\n> > adding count-min sketch to pg_statistic for all attributes with a hash\n> > function (as a new statistics slot kind), and considering it in\n> > equijoinsel_inner. There's a GUC use_count_min_sketch to make it easier\n> > to see how it works.\n> \n> Cool! I have some high level questions below.\n> \n> > So it's about 4x over-estimated, while without the count-min sketch it's\n> > about 2x under-estimated:\n> \n> > The nice thing on count-min sketch is that there are pretty clear\n> > boundaries for error:\n> >\n> >    size(t1,t2) <= dot_product(s1,2) <= epsilon * size(t1) * size(t2)\n> >\n> > where s1/s2 are sketches on t1/t2, and epsilon is relative error. User\n> > may pick epsilon, and that determines size of the necessary sketch as\n> > 2/epsilon. So with 128 buckets, the relative error is ~1.6%.\n> >\n> > The trouble here is that this is relative to cartesian product of the\n> > two relations. So with two relations, each 30k rows, the error is up to\n> > ~14.5M. Which is not great. We can pick lower epsilon value, but that\n> > increases the sketch size.\n> \n> + * depth 8 and width 128 is sufficient for relative error ~1.5% with a\n> + * probability of approximately 99.6%\n> \n> Okay, so in the example above, we have a 99.6% probability of having \n> less than 14.5M, but the actual error is much smaller. 
Do we know how \n> tight the error bounds are with some lower probability?\n> \n\nI don't recall such formula mentioned in any of the papers. The [3]\npaper has a proof in section 4.2, deriving the formula using Markov's\ninequality, but it's not obvious how to relax that (it's been ages since\nI last did things like this).\n\n> > There's a bunch of other open questions:\n> >\n> > 1) The papers about count-min sketch seem to be written for streaming\n> > use cases, which implies all the inserted data pass through the sketch.\n> > This patch only builds the sketch on analyze sample, which makes it less\n> > reliable. I doubt we want to do something different (e.g. because it'd\n> > require handling deletes, etc.).\n> \n> We currently determine the sample size from the number of histogram \n> buckets requested, which is from the guc we expose. If these sketches \n> are more designed for the whole stream, do we have any idea how big a \n> sample we need to be reasonably accurate with them?\n> \n\nNot really, but to be fair even for the histograms it's only really\nsupported by \"seems to work in practice\" :-(\n\nMy feeling is it's more about the number of distinct values rather than\nthe size of the table. If there are only a couple distinct values, small\nsample is good enough. With many distinct values, we may need a larger\nsample, but maybe not - we'll have to try, I guess.\n\nFWIW there's a lot of various assumptions in the join estimates. For\nexample we assume the domains match (i.e. domain of the smaller table is\nsubset of the larger table) etc.\n\n> > 2) The patch considers the sketch before MCVs, simply because it makes\n> > it much simpler to enable/disable the sketch, and compare it to MCVs.\n> > That's probably not what should be done - if we have MCVs, we should\n> > prefer using that, simply because it determines the frequencies more\n> > accurately than the sketch. 
And only use the sketch as a fallback, when\n> > we don't have MCVs on both sides of the join, instead of just assuming\n> > uniform distribution and relying on ndistinct.\n> \n> > Anyway, count-min sketches would be a better way to estimate the part\n> > not covered by MCVs - we might even assume the uniform distribution for\n> > individual counters, because that's what we do without MCVs anyway.\n> \n> When we calculate the sketch, would it make sense to exclude the MCVs \n> that we found? And use both sources for the estimate?\n> \n\nNot sure. I've thought about this a bit, and excluding the MCV values\nfrom the sketch would make it more like a MCV+histogram. So we'd have\nMCV and then (sketch, histogram) on the non-MCV values.\n\nI think the partial sketch is mostly useless, at least for join\nestimates. Imagine we have MCV and sketch on both sides of the join, so\nwe have (MCV1, sketch1) and (MCV2, sketch2). Naively, we could do\nestimate using (MCV1, MCV2) and then (sketch1,sketch2). But that's too\nsimplistic - there may be \"overlap\" between MCV1 and sketch2, for example?\n\nSo it seems more likely we'll just do MCV estimation if both sides have\nit, and switch to sketch-only estimation otherwise.\n\nThere's also the fact that we exclude values wider than (1kB), so that\nthe stats are not too big, and there's no reason to do that for the\nsketch (which is fixed-size thanks to hashing). It's a bit simpler to\nbuild the full sketch during the initial scan of the data.\n\nBut it's not a very important detail - it's trivial to both add and\nremove values from the sketch, if needed. 
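For example, a toy illustration of that symmetry (hypothetical Python, not the patch's code) - subtracting an MCV's frequency from a full sketch leaves exactly the counters we would have had by excluding it up front:

```python
import hashlib

def make_sketch(depth=8, width=128):
    # A sketch is just a depth x width grid of counters.
    return [[0] * width for _ in range(depth)]

def update(sketch, value, count):
    # count > 0 adds occurrences, count < 0 subtracts them - e.g. to
    # remove the MCV frequencies from a sketch built on the full sample.
    width = len(sketch[0])
    for row in range(len(sketch)):
        digest = hashlib.sha256(("%d:%s" % (row, value)).encode()).hexdigest()
        sketch[row][int(digest, 16) % width] += count
```

Because the hashing is deterministic, "build full, then subtract the MCVs" and "never add the MCVs" produce identical counters.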
So we can either exclude the\nMCV values and \"add them\" to the partial sketch later, or we can build a\nfull sketch and then subtract them later.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 17 Jun 2021 02:23:17 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PoC: Using Count-Min Sketch for join cardinality estimation" }, { "msg_contents": "On 6/17/21 2:23 AM, Tomas Vondra wrote:\n> On 6/17/21 1:31 AM, John Naylor wrote:\n>> On Wed, Jun 16, 2021 at 12:23 PM Tomas Vondra \n>> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>> \n>> wrote:\n>>\n>> ...\n>>\n>> + * depth 8 and width 128 is sufficient for relative error ~1.5% with a\n>> + * probability of approximately 99.6%\n>>\n>> Okay, so in the example above, we have a 99.6% probability of having \n>> less than 14.5M, but the actual error is much smaller. Do we know how \n>> tight the error bounds are with some lower probability?\n>>\n> \n> I don't recall such formula mentioned in any of the papers. The [3]\n> paper has a proof in section 4.2, deriving the formula using Markov's\n> inequality, but it's not obvious how to relax that (it's been ages since\n> I last did things like this).\n> \n\n\nI've been thinking about this a bit more, and while I still don't know\nabout a nice formula, I think I have a fairly good illustration that may\nprovide some intuition about the \"typical\" error. I'll talk about self\njoins, because it makes some of the formulas simpler. But in principle\nthe same thing works for a join of two relations too.\n\nImagine you have a relation with N rows and D distinct values, and let's\nbuild a count-min sketch on it, with W counters. So assuming d=1 for\nsimplicity, we have one set of counters with frequencies:\n\n [f(1), f(2), ..., f(W)]\n\nNow, the dot product effectively calculates\n\n S = sum[ f(i)^2 for i in 1 ... 
W ]\n\nwhich treats each counter as if it was just a single distinct value. But\nwe know that this is the upper boundary of the join size estimate,\nbecause if we \"split\" a group in any way, the join size will always be lower:\n\n (f(i) - X)^2 + X^2 <= f(i)^2\n\nIt's as if you have a rectangle - if you split a side in some way and\ncalculate the area of those smaller rectangles, it'll be smaller than\nthe area of the whole rectangle. To minimize the area, the parts need to\nbe of equal size, and for K parts it's\n\n K * (f(i) / K) ^ 2 = f(i)^2 / K\n\nThis is the \"minimum relative error\" case assuming uniform distribution\nof the data, I think. If there are D distinct values in the data set,\nthen for uniform distribution we can assume each counter represents\nabout D / W = K distinct values, and we can assume f(i) = N / W, so then\n\n S = W * (N/W)^2 / (D/W) = N^2 / D\n\nOf course, this is the exact cardinality of the join - the count-min\nsketch simply multiplies the f(i) values, ignoring D entirely. But I\nthink this shows that the fewer distinct values there are and/or the\nmore skewed the data set is, the closer the estimate is to the actual\nvalue. More uniform data sets with more distinct values will end up\ncloser to the (N^2 / D) size, and the sketch will significantly\nover-estimate this.\n\nSo the question is whether to attempt to do any \"custom\" correction\nbased on the number of distinct values (which I think the count-min sketch\ndoes not do, because the papers assume it's unknown).\n\nI still don't know about an analytical solution giving us a smaller\nconfidence interval (with lower probability). 
But we could perform some\nexperiments, generating data sets with various data distribution and\nthen measure how accurate the adjusted estimate is.\n\nBut I think the fact that for \"more skewed\" data sets the estimate is\ncloser to reality is very interesting, and pretty much what we want.\nIt's probably better than just assuming uniformity on both sides, which\nis what we do when we only have MCV on one side (that's a fairly common\ncase, I think).\n\nThe other interesting feature is that it *always* overestimates (at\nleast the default version, not the variant adjusted by distinct values).\nThat's probably good, because under-estimates are generally much more\ndangerous than over-estimates (the execution often degrades pretty\nquickly, not gracefully).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 17 Jun 2021 13:29:23 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PoC: Using Count-Min Sketch for join cardinality estimation" }, { "msg_contents": "On Wed, Jun 16, 2021 at 8:23 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> Not really, but to be fair even for the histograms it's only really\n> supported by \"seems to work in practice\" :-(\n\nHmm, we cite a theoretical result in analyze.c, but I don't know if there\nis something better out there:\n\n * The following choice of minrows is based on the paper\n * \"Random sampling for histogram construction: how much is enough?\"\n * by Surajit Chaudhuri, Rajeev Motwani and Vivek Narasayya, in\n\nWhat is more troubling to me is that we set the number of MCVs to the\nnumber of histograms. Since b5db1d93d2a6 we have a pretty good method of\nfinding the MCVs that are justified. When that first went in, I\nexperimented with removing the MCV limit and found it easy to create value\ndistributions that lead to thousands of MCVs. 
I guess the best\njustification now for the limit is plan time, but if we have a sketch also,\nwe can choose one or the other based on a plan-time speed vs accuracy\ntradeoff (another use for a planner effort guc). In that scenario, for\ntables with many MCVs we would only use them for restriction clauses.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n
", "msg_date": "Fri, 18 Jun 2021 13:03:05 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: PoC: Using Count-Min Sketch for join cardinality estimation" }, { "msg_contents": "On 6/18/21 7:03 PM, John Naylor wrote:\n> On Wed, Jun 16, 2021 at 8:23 PM Tomas Vondra \n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>> \n> wrote:\n> \n> > Not really, but to be fair even for the histograms it's only really\n> > supported by \"seems to work in practice\" :-(\n> \n> Hmm, we cite a theoretical result in analyze.c, but I don't know if \n> there is something better out there:\n> \n>  * The following choice of minrows is based on the paper\n>  * \"Random sampling for histogram construction: how much is enough?\"\n>  * by Surajit Chaudhuri, Rajeev Motwani and Vivek Narasayya, in\n> \n\nTrue. I read that paper (long time ago), and it certainly gives some \nvery interesting guidance and guarantees regarding relative error. And \nnow that I look at it, the theorems 5 & 6, and the corollary 1 do \nprovide a way to calculate probability of a lower error (essentially \nvary the f, get the probability).\n\nI still think there's a lot of reliance on experience from practice, \nbecause even with such strong limits delta=0.5 of a histogram with 100 \nbuckets, representing 1e9 rows, is still plenty of space for errors.\n\n> What is more troubling to me is that we set the number of MCVs to the \n> number of histograms. Since b5db1d93d2a6 we have a pretty good method of \n> finding the MCVs that are justified. When that first went in, I \n> experimented with removing the MCV limit and found it easy to create \n> value distributions that lead to thousands of MCVs. 
I guess the best \n> justification now for the limit is plan time, but if we have a sketch \n> also, we can choose one or the other based on a plan-time speed vs \n> accuracy tradeoff (another use for a planner effort guc). In that \n> scenario, for tables with many MCVs we would only use them for \n> restriction clauses.\n> \n\nSorry, I'm not sure what you mean by \"we set the number of MCVs to the \nnumber of histograms\" :-(\n\nWhen you say \"MCV limit\" you mean that we limit the number of items to \nstatistics target, right? I agree plan time is one concern - but it's \nalso about analyze, as we need larger sample to build a larger MCV or \nhistogram (as the paper you referenced shows).\n\nI think the sketch is quite interesting for skewed data sets where the \nMCV can represent only small fraction of the data, exactly because of \nthe limit. For (close to) uniform data distributions we can just use \nndistinct estimates to get estimates that are better than those from a \nsketch, I think.\n\nSo I think we should try using MCV first, and then use sketches for the \nrest of the data (or possibly all data, if one side does not have MCV).\n\nFWIW I think the sketch may be useful even for restriction clauses, \nwhich is what the paper calls \"point queries\". 
Again, maybe this should \nuse the same correction depending on ndistinct estimate.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 18 Jun 2021 21:43:24 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PoC: Using Count-Min Sketch for join cardinality estimation" }, { "msg_contents": "On Fri, Jun 18, 2021 at 3:43 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> Sorry, I'm not sure what you mean by \"we set the number of MCVs to the\n> number of histograms\" :-(\n>\n> When you say \"MCV limit\" you mean that we limit the number of items to\n> statistics target, right? I agree plan time is one concern - but it's\n> also about analyze, as we need larger sample to build a larger MCV or\n> histogram (as the paper you referenced shows).\n\nAh, I didn't realize the theoretical limit applied to the MCVs too, but\nthat makes sense since they're basically singleton histogram buckets.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 18 Jun 2021 15:54:40 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: PoC: Using Count-Min Sketch for join cardinality estimation" }, { "msg_contents": "On 6/18/21 9:54 PM, John Naylor wrote:\n> \n> On Fri, Jun 18, 2021 at 3:43 PM Tomas Vondra \n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>> \n> wrote:\n> \n> > Sorry, I'm not sure what you mean by \"we set the number of MCVs to the\n> > number of histograms\" :-(\n> >\n> > When you say \"MCV limit\" you mean that we limit the number of items to\n> > statistics target, right? I agree plan time is one concern - but it's\n> > also about analyze, as we need larger sample to build a larger MCV or\n> > histogram (as the paper you referenced shows).\n> \n> Ah, I didn't realize the theoretical limit applied to the MCVs too, but \n> that makes sense since they're basically singleton histogram buckets.\n> \n\nSomething like that, yes. Looking at MCV items as singleton histogram \nbuckets is interesting, although I'm not sure that was the reasoning \nwhen calculating the MCV size. AFAIK it was kinda the other way around, \ni.e. 
the sample size is derived from the histogram paper, and when \nbuilding the MCV we ask what's sufficiently different from the average \nfrequency, based on the sample size etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 18 Jun 2021 22:24:45 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PoC: Using Count-Min Sketch for join cardinality estimation" } ]
[ { "msg_contents": "[ new subject, new thread, new patch ]\n\nAlvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Jun-16, Tom Lane wrote:\n>> BTW, as long as we're thinking of back-patching nontrivial specfile\n>> changes, I have another modest proposal. What do people think of\n>> removing the requirement for step/session names to be double-quoted,\n>> and instead letting them work like SQL identifiers?\n\n> Yes *please*.\n\nHere's a draft patch for that. I converted one specfile just as\nproof-of-concept, but I don't want to touch the rest until the other\npatch has gone in, or I'll have merge problems. (This'll have some\nmerge problems with that anyway I fear, but they'll be minor.)\n\nI decided to follow the standard SQL rule that you can use \"foo\"\"bar\"\nto include a double-quote in a quoted identifier. This broke one\nplace in test_decoding's oldest_xmin.spec where somebody had left out\na space. So maybe there's an argument for not doing that --- but I'd\nrather not document more inconsistencies than I have to.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 16 Jun 2021 12:45:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Allowing regular identifiers in isolationtester scripts" } ]
[ { "msg_contents": "I haven't pushed my quick-hack fix for bug #17053 ([1]) because\nI wasn't really satisfied with band-aiding that problem in one\nmore place. I took a look around to see if we could protect\nagainst the whole class of scribble-on-a-utility-statement\nissues in a more centralized way.\n\nWhat I found is that there are only two places that call\nProcessUtility with a statement that might be coming from the plan\ncache. _SPI_execute_plan is always doing so, so it can just\nunconditionally copy the statement. The other one is\nPortalRunUtility, which can examine the Portal to see if the\nparsetree came out of cache or not. Having added copyObject\ncalls there, we can get rid of the retail calls that exist\nin not-quite-enough utility statement execution routines.\n\nI think this would have been more complicated before plpgsql\nstarted using the plancache; at least, some of the comments\nremoved here refer to plpgsql as being an independent hazard.\nAlso, I didn't risk removing any copyObject calls that are\nfurther down than the top level of statement execution handlers.\nSome of those are visibly still necessary, and others are hard\nto be sure about.\n\nAlthough this adds some overhead in the form of copying of\nutility node trees that won't actually mutate during execution,\nI think that won't be too bad because those trees tend to be\nsmall and hence cheap to copy. The statements that can have\na lot of substructure usually contain expression trees or the\nlike, which do have to be copied for safety. Moreover, we buy\nback a lot of cost by removing pointless copying when we're\nnot executing on a cached plan.\n\n(BTW, in case you are wondering: this hazard only exists for\nutility statements, because we long ago made the executor\nnot modify the Plan tree it's given.)\n\nThis is possibly too aggressive to consider for back-patching.\nIn the back branches, perhaps we should just use my original\nlocalized fix. 
Another conservative (but expensive) answer\nfor the back branches is to add the new copyObject calls\nbut not remove any of the old ones.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/17053-3ca3f501bbc212b4%40postgresql.org", "msg_date": "Wed, 16 Jun 2021 21:39:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Centralizing protective copying of utility statements" }, { "msg_contents": "On Wed, Jun 16, 2021 at 6:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I haven't pushed my quick-hack fix for bug #17053 ([1]) because\n> I wasn't really satisfied with band-aiding that problem in one\n> more place. I took a look around to see if we could protect\n> against the whole class of scribble-on-a-utility-statement\n> issues in a more centralized way.\n>\n> What I found is that there are only two places that call\n> ProcessUtility with a statement that might be coming from the plan\n> cache. _SPI_execute_plan is always doing so, so it can just\n> unconditionally copy the statement. The other one is\n> PortalRunUtility, which can examine the Portal to see if the\n> parsetree came out of cache or not. Having added copyObject\n> calls there, we can get rid of the retail calls that exist\n> in not-quite-enough utility statement execution routines.\n>\n> I think this would have been more complicated before plpgsql\n> started using the plancache; at least, some of the comments\n> removed here refer to plpgsql as being an independent hazard.\n> Also, I didn't risk removing any copyObject calls that are\n> further down than the top level of statement execution handlers.\n> Some of those are visibly still necessary, and others are hard\n> to be sure about.\n>\n> Although this adds some overhead in the form of copying of\n> utility node trees that won't actually mutate during execution,\n> I think that won't be too bad because those trees tend to be\n> small and hence cheap to copy. 
The statements that can have\n> a lot of substructure usually contain expression trees or the\n> like, which do have to be copied for safety. Moreover, we buy\n> back a lot of cost by removing pointless copying when we're\n> not executing on a cached plan.\n>\n> (BTW, in case you are wondering: this hazard only exists for\n> utility statements, because we long ago made the executor\n> not modify the Plan tree it's given.)\n>\n> This is possibly too aggressive to consider for back-patching.\n> In the back branches, perhaps we should just use my original\n> localized fix. Another conservative (but expensive) answer\n> for the back branches is to add the new copyObject calls\n> but not remove any of the old ones.\n>\n> Thoughts?\n>\n> regards, tom lane\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/17053-3ca3f501bbc212b4%40postgresql.org\n>\n> Hi,\nFor back-patching, if we wait for a while (a few weeks) after this patch\ngets committed to master branch (and see there is no regression),\nit seems that would give us more confidence in backporting to older\nbranches.\n\nCheers", "msg_date": "Wed, 16 Jun 2021 19:23:12 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Centralizing protective copying of utility statements" }, { "msg_contents": "I wrote:\n> What I found is that there are only two places that call\n> ProcessUtility with a statement that might be coming from the plan\n> cache. _SPI_execute_plan is always doing so, so it can just\n> unconditionally copy the statement. The other one is\n> PortalRunUtility, which can examine the Portal to see if the\n> parsetree came out of cache or not. Having added copyObject\n> calls there, we can get rid of the retail calls that exist\n> in not-quite-enough utility statement execution routines.\n\nIn the light of morning, I'm not too pleased with this patch either.\nIt's essentially creating a silent API change for ProcessUtility.\nI don't know whether there are any out-of-tree ProcessUtility\ncallers, but if there are, this could easily break them in a way\nthat basic testing might not catch.\n\nWhat seems like a much safer answer is to make the API change noisy.\nThat is, move the responsibility for actually calling copyObject\ninto ProcessUtility itself, and add a bool parameter saying whether\nthat needs to be done. That forces all callers to consider the\nissue, and gives them the tool they need to make the right thing\nhappen.\n\nHowever, this clearly is not a back-patchable approach. 
I'm\nthinking there are two plausible alternatives for the back branches:\n\n1. Narrow fix as per my original patch for #17053. Low cost,\nbut no protection against other bugs of the same ilk.\n\n2. Still move the responsibility for calling copyObject into\nProcessUtility; but lacking the bool parameter, just do it\nunconditionally. Offers solid protection at some uncertain\nperformance cost.\n\nI'm not terribly certain which way to go. Thoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Jun 2021 11:30:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Centralizing protective copying of utility statements" }, { "msg_contents": "I wrote:\n> What seems like a much safer answer is to make the API change noisy.\n> That is, move the responsibility for actually calling copyObject\n> into ProcessUtility itself, and add a bool parameter saying whether\n> that needs to be done. That forces all callers to consider the\n> issue, and gives them the tool they need to make the right thing\n> happen.\n\nHere's a v2 that does it like that. In this formulation, we're\nbasically hoisting the responsibility for doing copyObject up into\nProcessUtility from its direct children, which seems like a clearer\nway of thinking about what has to change.\n\nWe could avoid the side-effects on users of ProcessUtility_hook by\ndoing the copy step in ProcessUtility itself rather than passing the\nflag on to standard_ProcessUtility. But that sounded like a bit of a\nkluge. Also, putting the work in standard_ProcessUtility preserves\nthe option to redistribute it into the individual switch arms, in case\nanyone does find the extra copying overhead annoying for statement\ntypes that don't need it. (I don't plan to do any such thing as part\nof this bug-fix patch, though.)\n\nBarring objections, I'm going to push this into HEAD fairly soon,\nsince beta2 is hard upon us. 
Still thinking about which way to\nfix it in the back branches.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 17 Jun 2021 13:03:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Centralizing protective copying of utility statements" }, { "msg_contents": "On Thu, Jun 17, 2021 at 01:03:29PM -0400, Tom Lane wrote:\n> \n> Here's a v2 that does it like that. In this formulation, we're\n> basically hoisting the responsibility for doing copyObject up into\n> ProcessUtility from its direct children, which seems like a clearer\n> way of thinking about what has to change.\n\nI agree that forcing an API break is better. Just a nit:\n\n+ *\treadOnlyTree: treat pstmt's node tree as read-only\n\nMaybe it's because I'm not a native english speaker, or because it's quite\nlate here, but I don't find \"treat as read-only\" really clear. I don't have a\nconcise better wording to suggest.\n\n> Still thinking about which way to fix it in the back branches.\n\nI'm +0.5 for a narrow fix, due to the possibility of unspotted similar problem\nvs possibility of performance regression ratio.\n\n\n", "msg_date": "Fri, 18 Jun 2021 02:00:55 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Centralizing protective copying of utility statements" }, { "msg_contents": "Hi,\n\nOn 2021-06-16 21:39:49 -0400, Tom Lane wrote:\n> Although this adds some overhead in the form of copying of\n> utility node trees that won't actually mutate during execution,\n> I think that won't be too bad because those trees tend to be\n> small and hence cheap to copy. The statements that can have\n> a lot of substructure usually contain expression trees or the\n> like, which do have to be copied for safety. Moreover, we buy\n> back a lot of cost by removing pointless copying when we're\n> not executing on a cached plan.\n\nHave you evaluated the cost in some form? 
I don't think it's a relevant\ncost for most utility statements, but there's a few exceptions that *do*\nworry me. In particular, in some workloads transaction statements are\nvery frequent.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Jun 2021 12:23:00 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Centralizing protective copying of utility statements" }, { "msg_contents": "Hi,\n\nOn 2021-06-17 13:03:29 -0400, Tom Lane wrote:\n> Here's a v2 that does it like that. In this formulation, we're\n> basically hoisting the responsibility for doing copyObject up into\n> ProcessUtility from its direct children, which seems like a clearer\n> way of thinking about what has to change.\n> \n> We could avoid the side-effects on users of ProcessUtility_hook by\n> doing the copy step in ProcessUtility itself rather than passing the\n> flag on to standard_ProcessUtility. But that sounded like a bit of a\n> kluge. Also, putting the work in standard_ProcessUtility preserves\n> the option to redistribute it into the individual switch arms, in case\n> anyone does find the extra copying overhead annoying for statement\n> types that don't need it. (I don't plan to do any such thing as part\n> of this bug-fix patch, though.)\n> \n> Barring objections, I'm going to push this into HEAD fairly soon,\n> since beta2 is hard upon us. Still thinking about which way to\n> fix it in the back branches.\n\nPhew. Do we really want to break a quite significant number of\nextensions this long after feature freeze? 
Since we already need to find\na backpatchable way to deal with the issue it seems like deferring the\nAPI change to 15 might be prudent?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Jun 2021 12:25:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Centralizing protective copying of utility statements" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Phew. Do we really want to break a quite significant number of\n> extensions this long after feature freeze? Since we already need to find\n> a backpatchable way to deal with the issue it seems like deferring the\n> API change to 15 might be prudent?\n\nUh, nobody ever promised that server-internal APIs are frozen as of beta1;\nthat would be a horrid crimp on our ability to fix bugs during beta.\nI've generally supposed that we don't start expecting that till RC stage.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Jun 2021 15:53:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Centralizing protective copying of utility statements" }, { "msg_contents": "Hi,\n\nOn 2021-06-17 15:53:22 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Phew. Do we really want to break a quite significant number of\n> > extensions this long after feature freeze? Since we already need to find\n> > a backpatchable way to deal with the issue it seems like deferring the\n> > API change to 15 might be prudent?\n> \n> Uh, nobody ever promised that server-internal APIs are frozen as of beta1;\n> that would be a horrid crimp on our ability to fix bugs during beta.\n\nSure, there's no promise. 
But I still think it's worth taking the amount\nof breakage more into account than pre beta?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Jun 2021 13:17:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Centralizing protective copying of utility statements" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-06-16 21:39:49 -0400, Tom Lane wrote:\n>> Although this adds some overhead in the form of copying of\n>> utility node trees that won't actually mutate during execution,\n>> I think that won't be too bad because those trees tend to be\n>> small and hence cheap to copy.\n\n> Have you evaluated the cost in some form? I don't think it a relevant\n> cost for most utility statements, but there's a few exceptions that *do*\n> worry me. In particular, in some workloads transaction statements are\n> very frequent.\n\nI hadn't, but since you mention it, I tried this test case:\n\n$ cat trivial.sql \nbegin;\ncommit;\n$ pgbench -n -M prepared -f trivial.sql -T 60\n\nI got these results on HEAD:\ntps = 23853.244130 (without initial connection time)\ntps = 23810.759969 (without initial connection time)\ntps = 23167.608493 (without initial connection time)\ntps = 23784.432746 (without initial connection time)\n\nand adding the v2 patch:\ntps = 23298.081147 (without initial connection time)\ntps = 23614.466755 (without initial connection time)\ntps = 23475.297853 (without initial connection time)\ntps = 23530.826527 (without initial connection time)\n\nSo if you squint there might be a sub-one-percent difference\nthere, but it's barely distinguishable from the noise. In\nany situation where the transactions are doing actual work,\nI doubt you could measure a difference.\n\n(In any case, if someone does get excited about this, they\ncould rearrange things to push the copyObject calls into the\nindividual arms of the switch in ProcessUtility. 
Personally\nthough I doubt it could be worth the code bloat.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Jun 2021 16:36:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Centralizing protective copying of utility statements" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-06-17 15:53:22 -0400, Tom Lane wrote:\n>> Uh, nobody ever promised that server-internal APIs are frozen as of beta1;\n>> that would be a horrid crimp on our ability to fix bugs during beta.\n\n> Sure, there's no promise. But I still think it's worth taking the amount\n> of breakage more into account than pre beta?\n\nAre there really so many people using the ProcessUtility hook?\nIn a quick look on codesearch.debian.net, I found\n\nhypopg\npgaudit\npgextwlist\npglogical\n\nwhich admittedly is more than none, but it's not a huge number\neither. I have to think that fixing this bug reliably is a\nmore important consideration.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Jun 2021 16:50:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Centralizing protective copying of utility statements" }, { "msg_contents": "Hi,\n\nOn 2021-06-17 16:50:57 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-06-17 15:53:22 -0400, Tom Lane wrote:\n> >> Uh, nobody ever promised that server-internal APIs are frozen as of beta1;\n> >> that would be a horrid crimp on our ability to fix bugs during beta.\n> \n> > Sure, there's no promise. But I still think it's worth taking the amount\n> > of breakage more into account than pre beta?\n> \n> Are there really so many people using the ProcessUtility hook?\n> In a quick look on codesearch.debian.net, I found\n> \n> hypopg\n> pgaudit\n> pgextwlist\n> pglogical\n\nThere do seem to be quite a few more outside of Debian. 
E.g.\nhttps://github.com/search?p=2&q=ProcessUtility_hook&type=Code\nshows quite a few.\n\nUnfortunately github is annoying to search through - it doesn't seem to\nhave any logic to prevent duplicates across repositories :(. Which means\nthere's dozens of copies of postgres code included...\n\n\n> which admittedly is more than none, but it's not a huge number\n> either. I have to think that fixing this bug reliably is a\n> more important consideration.\n\nSure!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Jun 2021 14:08:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Centralizing protective copying of utility statements" }, { "msg_contents": "I wrote:\n> (In any case, if someone does get excited about this, they\n> could rearrange things to push the copyObject calls into the\n> individual arms of the switch in ProcessUtility. Personally\n> though I doubt it could be worth the code bloat.)\n\nIt occurred to me to try making the copying code look like\n\n if (readOnlyTree)\n {\n switch (nodeTag(parsetree))\n {\n case T_TransactionStmt:\n /* stmt is immutable anyway, no need to copy */\n break;\n default:\n pstmt = copyObject(pstmt);\n parsetree = pstmt->utilityStmt;\n break;\n }\n }\n\nThis didn't move the needle at all, in fact it seems maybe a\nshade slower:\n\ntps = 23502.288878 (without initial connection time)\ntps = 23643.821923 (without initial connection time)\ntps = 23082.976795 (without initial connection time)\ntps = 23547.527641 (without initial connection time)\n\nSo I think this confirms my gut feeling that copyObject on a\nTransactionStmt is negligible. 
To the extent that the prior\nmeasurement shows a real difference, it's probably a chance effect\nof changing code layout elsewhere.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Jun 2021 17:11:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Centralizing protective copying of utility statements" }, { "msg_contents": "On Thu, Jun 17, 2021 at 02:08:34PM -0700, Andres Freund wrote:\n> Unfortunately github is annoying to search through - it doesn't seem to\n> have any logic to prevent duplicates across repositories :(. Which means\n> there's dozens of copies of postgres code included...\n\nI agree with the position of doing something now while in beta. I\nhave not looked at the tree, but I am rather sure that we had changes \nin the hooks while in beta phase in the past.\n\n>> which admittedly is more than none, but it's not a huge number\n>> either. I have to think that fixing this bug reliably is a\n>> more important consideration.\n> \n> Sure!\n\nThe DECLARE CURSOR case in ExplainOneUtility() does a copy of a Query.\nPerhaps a comment should be added to explain why a copy is still\nrequired?\n--\nMichael", "msg_date": "Fri, 18 Jun 2021 09:57:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Centralizing protective copying of utility statements" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> The DECLARE CURSOR case in ExplainOneUtility() does a copy of a Query.\n> Perhaps a comment should be added to explain why a copy is still\n> required?\n\nI did add a comment about that in the v2 patch --- the issue is the\ncall path for EXPLAIN EXECUTE.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Jun 2021 22:26:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Centralizing protective copying of utility statements" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On 
Thu, Jun 17, 2021 at 01:03:29PM -0400, Tom Lane wrote:\n> + *\treadOnlyTree: treat pstmt's node tree as read-only\n\n> Maybe it's because I'm not a native english speaker, or because it's quite\n> late here, but I don't find \"treat as read-only\" really clear. I don't have a\n> concise better wording to suggest.\n\nMaybe \"if true, pstmt's node tree must not be modified\" ?\n\n>> Still thinking about which way to fix it in the back branches.\n\n> I'm +0.5 for a narrow fix, due to the possibility of unspotted similar problem\n> vs possibility of performance regression ratio.\n\nAfter sleeping on it another day, I'm inclined to think the same. The\nkey attraction of a centralized fix is that it prevents the possibility\nof new bugs of the same ilk in newly-added features. Given how long\nthese CREATE/ALTER DOMAIN bugs escaped detection, it's hard to have\nfull confidence that there are no others in the back branches --- but\nthey must be in really lightly-used features.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Jun 2021 10:24:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Centralizing protective copying of utility statements" }, { "msg_contents": "On Fri, Jun 18, 2021 at 10:24:20AM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Thu, Jun 17, 2021 at 01:03:29PM -0400, Tom Lane wrote:\n> > + *\treadOnlyTree: treat pstmt's node tree as read-only\n> \n> > Maybe it's because I'm not a native english speaker, or because it's quite\n> > late here, but I don't find \"treat as read-only\" really clear. 
I don't have a\n> > concise better wording to suggest.\n> \n> Maybe \"if true, pstmt's node tree must not be modified\" ?\n\nThanks, I find it way better!\n\n\n", "msg_date": "Fri, 18 Jun 2021 23:15:45 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Centralizing protective copying of utility statements" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Fri, Jun 18, 2021 at 10:24:20AM -0400, Tom Lane wrote:\n>> Maybe \"if true, pstmt's node tree must not be modified\" ?\n\n> Thanks, I find it way better!\n\nOK, pushed that way, and with a couple other comment tweaks from\nan additional re-reading.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Jun 2021 11:24:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Centralizing protective copying of utility statements" }, { "msg_contents": "On Fri, Jun 18, 2021 at 11:24:00AM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Fri, Jun 18, 2021 at 10:24:20AM -0400, Tom Lane wrote:\n> >> Maybe \"if true, pstmt's node tree must not be modified\" ?\n> \n> > Thanks, I find it way better!\n> \n> OK, pushed that way, and with a couple other comment tweaks from\n> an additional re-reading.\n\nThanks! For the record I already pushed the required compatibility changes for\nhypopg extension.\n\n\n", "msg_date": "Sat, 19 Jun 2021 12:49:07 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Centralizing protective copying of utility statements" } ]
[ { "msg_contents": "Hi Amit,\n\nIn commit e7eea52b2d, you introduced a new function, RelationGetIdentityKeyBitmap(), which uses some odd logic for determining if a relation has a replica identity index. That code segfaults under certain conditions. A test case to demonstrate that is attached. Prior to patching the code, this new test gets stuck waiting for replication to finish, which never happens. You have to break out of the test and check tmp_check/log/021_no_replica_identity_publisher.log.\n\nI believe this bit of logic in src/backend/utils/cache/relcache.c:\n\n indexDesc = RelationIdGetRelation(relation->rd_replidindex);\n for (i = 0; i < indexDesc->rd_index->indnatts; i++)\n\nis unsafe without further checks, also attached.\n\nWould you mind taking a look?\n \n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 16 Jun 2021 21:31:22 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Fix for segfault in logical replication on master" }, { "msg_contents": "On Thursday, June 17, 2021 1:31 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\r\n> In commit e7eea52b2d, you introduced a new function,\r\n> RelationGetIdentityKeyBitmap(), which uses some odd logic for determining\r\n> if a relation has a replica identity index. That code segfaults under certain\r\n> conditions. A test case to demonstrate that is attached. Prior to patching\r\n> the code, this new test gets stuck waiting for replication to finish, which never\r\n> happens. 
You have to break out of the test and check\r\n> tmp_check/log/021_no_replica_identity_publisher.log.\r\n> \r\n> I believe this bit of logic in src/backend/utils/cache/relcache.c:\r\n> \r\n> indexDesc = RelationIdGetRelation(relation->rd_replidindex);\r\n> for (i = 0; i < indexDesc->rd_index->indnatts; i++)\r\n> \r\n> is unsafe without further checks, also attached.\r\n> \r\n> Would you mind taking a look?\r\nHi, Mark\r\n\r\nThanks for your report.\r\nI started to analyze your report and\r\nwill reply after my idea to your modification is settled.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Thu, 17 Jun 2021 05:19:48 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Fix for segfault in logical replication on master" }, { "msg_contents": "\n\n> On Jun 16, 2021, at 10:19 PM, osumi.takamichi@fujitsu.com wrote:\n> \n> I started to analyze your report and\n> will reply after my idea to your modification is settled.\n\nThank you.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 16 Jun 2021 22:43:19 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Fix for segfault in logical replication on master" }, { "msg_contents": "On Thursday, June 17, 2021 2:43 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\r\n> > On Jun 16, 2021, at 10:19 PM, osumi.takamichi@fujitsu.com wrote:\r\n> >\r\n> > I started to analyze your report and\r\n> > will reply after my idea to your modification is settled.\r\n> \r\n> Thank you.\r\nI'll share my first analysis.\r\n\r\n> In commit e7eea52b2d, you introduced a new function,\r\n> RelationGetIdentityKeyBitmap(), which uses some odd logic for determining\r\n> if a relation has a replica identity index.  That code segfaults under certain\r\n> conditions.  A test case to demonstrate that is attached. 
Prior to patching\r\n> the code, this new test gets stuck waiting for replication to finish, which never\r\n> happens. You have to break out of the test and check\r\n> tmp_check/log/021_no_replica_identity_publisher.log.\r\n> \r\n> I believe this bit of logic in src/backend/utils/cache/relcache.c:\r\n> \r\n> indexDesc = RelationIdGetRelation(relation->rd_replidindex);\r\n> for (i = 0; i < indexDesc->rd_index->indnatts; i++)\r\n> \r\n> is unsafe without further checks, also attached.\r\nYou are absolutely right.\r\nI checked the crash scenario and reproduced the core,\r\nwhich has a null indexDesc. Also, rd_replidindex must be checked beforehand\r\nas you included in your patch, because having an index does not necessarily\r\nmean having a replica identity index. As proof of this, the oid of\r\nrd_replidindex in the scenario is 0. OTOH, I've confirmed your new test\r\nhas passed with your fix.\r\n\r\nAlso, your test looks essentially minimal (suitable for the problem) to me.\r\n\r\n* RelationGetIdentityKeyBitmap\r\n+ /*\r\n+ * Fall out if the description is not for an index, suggesting\r\n+ * affairs have changed since we looked. 
XXX Should we log a\r\n+ * complaint here?\r\n+ */\r\n+ if (!indexDesc)\r\n+ return NULL;\r\n+ if (!indexDesc->rd_index)\r\n+ {\r\n+ RelationClose(indexDesc);\r\n+ return NULL;\r\n+ }\r\nFor the 1st check, isn't it better to use RelationIsValid() ?\r\nI agree with having the check itself of course, though.\r\n\r\nAdditionally, in what kind of actual scenario did you think that\r\nwe would come to the part to \"log a complaint\"?\r\n\r\nI'm going to spend some time to analyze RelationGetIndexAttrBitmap next\r\nto know if similar hazards can happen, because RelationGetIdentityKeyBitmap's logic\r\nmainly comes from that function.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Thu, 17 Jun 2021 10:39:32 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Fix for segfault in logical replication on master" }, { "msg_contents": "On Thu, Jun 17, 2021 at 4:09 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, June 17, 2021 2:43 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > > On Jun 16, 2021, at 10:19 PM, osumi.takamichi@fujitsu.com wrote:\n> > >\n> > > I started to analyze your report and\n> > > will reply after my idea to your modification is settled.\n> >\n> > Thank you.\n> I'll share my first analysis.\n>\n> > In commit e7eea52b2d, you introduced a new function,\n> > RelationGetIdentityKeyBitmap(), which uses some odd logic for determining\n> > if a relation has a replica identity index.  That code segfaults under certain\n> > conditions.  A test case to demonstrate that is attached.  Prior to patching\n> > the code, this new test gets stuck waiting for replication to finish, which never\n> > happens. 
You have to break out of the test and check\n> > tmp_check/log/021_no_replica_identity_publisher.log.\n> >\n> > I believe this bit of logic in src/backend/utils/cache/relcache.c:\n> >\n> > indexDesc = RelationIdGetRelation(relation->rd_replidindex);\n> > for (i = 0; i < indexDesc->rd_index->indnatts; i++)\n> >\n> > is unsafe without further checks, also attached.\n> You are absolutely right.\n> I checked the crash scenario and reproduced the core,\n> which has a null indexDesc. Also, rd_replidindex must be checked beforehand\n> as you included in your patch, because having an index does not necessarily\n> mean to have a replica identity index. As the proof of this, the oid of\n> rd_replidindex in the scenario is 0. OTOH, I've confirmed your new test\n> has passed with your fix.\n>\n> Also, your test looks essentially minimum(suitable for the problem) to me.\n>\n> * RelationGetIdentityKeyBitmap\n> + /*\n> + * Fall out if the description is not for an index, suggesting\n> + * affairs have changed since we looked. XXX Should we log a\n> + * complaint here?\n> + */\n> + if (!indexDesc)\n> + return NULL;\n> + if (!indexDesc->rd_index)\n> + {\n> + RelationClose(indexDesc);\n> + return NULL;\n> + }\n> For the 1st check, isn't it better to use RelationIsValid() ?\n> I agree with having the check itself of course, though.\n>\n> Additionally, In what kind of actual scenario, did you think that\n> we come to the part to \"log a complaint\" ?\n>\n\nYeah, I think that part is not required unless there is some case\nwhere it can happen. 
I guess we might want to have an elog at that\nplace with a check like:\nif (!RelationIsValid(relation))\nelog(ERROR, \"could not open relation with OID %u\", relid);\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 17 Jun 2021 17:03:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix for segfault in logical replication on master" }, { "msg_contents": "\n\n> On Jun 17, 2021, at 3:39 AM, osumi.takamichi@fujitsu.com wrote:\n> \n> For the 1st check, isn't it better to use RelationIsValid() ?\n\nYes, you are right.\n\n> Additionally, In what kind of actual scenario, did you think that\n> we come to the part to \"log a complaint\" ?\n\nThe way that RelationGetIndexList assigns rd_replidindex to the Relation seems to lack sufficient locking. After scanning pg_index to find indexes associated with the relation, pg_index is closed and the access share lock released. I couldn't prove to myself that by the time we use the rd_replidindex field thus computed that it was safe to assume that the Oid stored there still refers to an index. The most likely problem would be that the index has since been dropped in a concurrent transaction, but it also seems just barely possible that the Oid has been reused and refers to something else, a table perhaps. The check that I added is not completely bulletproof, because the new object reusing that Oid could be a different index, and we'd be none the wiser. Do you think we should do something about that? I felt the checks I put in place were very cheap and would work in almost all cases. 
In any event, they seemed better than no checks, which is what we have now.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 17 Jun 2021 06:19:58 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Fix for segfault in logical replication on master" }, { "msg_contents": "On Thu, Jun 17, 2021 at 6:50 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> > On Jun 17, 2021, at 3:39 AM, osumi.takamichi@fujitsu.com wrote:\n> >\n> > For the 1st check, isn't it better to use RelationIsValid() ?\n>\n> Yes, you are right.\n>\n> > Additionally, In what kind of actual scenario, did you think that\n> > we come to the part to \"log a complaint\" ?\n>\n> The way that RelationGetIndexList assigns rd_replidindex to the Relation seems to lack sufficient locking. After scanning pg_index to find indexes associated with the relation, pg_index is closed and the access share lock released. I couldn't prove to myself that by the time we use the rd_replidindex field thus computed that it was safe to assume that the Oid stored there still refers to an index. The most likely problem would be that the index has since been dropped in a concurrent transaction, but it also seems just barely possible that the Oid has been reused and refers to something else, a table perhaps.\n>\n\nI think such a problem won't happen because we are using historic\nsnapshots in this context. 
We rely on that in a similar way in\nreorderbuffer.c, see ReorderBufferProcessTXN.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 17 Jun 2021 19:10:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix for segfault in logical replication on master" }, { "msg_contents": "> On Jun 17, 2021, at 6:40 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> I think such a problem won't happen because we are using historic\n> snapshots in this context. We rely on that in a similar way in\n> reorderbuffer.c, see ReorderBufferProcessTXN.\n\nI think you are right, but that's the part I have trouble fully convincing myself is safe. We certainly have an historic snapshot when we call RelationGetIndexList, but that has an early exit if the relation already has fields set, and we don't know if those fields were set before or after the historic snapshot was taken. Within the context of the pluggable infrastructure, I think we're safe. The only caller of RelationGetIdentityKeyBitmap() in core is logicalrep_write_attrs(), which is only called by logicalrep_write_rel(), which is only called by send_relation_and_attrs(), which is only called by maybe_send_schema(), which is called by pgoutput_change() and pgoutput_truncate(), both being callbacks in core's logical replication plugin.\n\nReorderBufferProcessTXN calls SetupHistoricSnapshot before opening the relation and then calling ReorderBufferApplyChange to invoke the plugin on that opened relation, so the relation's fields could not have been setup before the snapshot was taken. Any other plugin would similarly get invoked after that same logic, so they'd be fine, too. The problem would only be if somebody called RelationGetIdentityKeyBitmap() or one of its calling functions from outside that infrastructure. Is that worth worrying about? 
The function comments for those mention having an historic snapshot, and the Assert will catch if code doesn't have one, but I wonder how much of a trap for the unwary that is, considering that somebody might open the relation and lookup indexes for the relation before taking an historic snapshot and calling these functions.\n\nI thought it was cheap enough to check that the relation we open is an index, because if it is not, we'll segfault when accessing fields of the relation->rd_index struct. I wouldn't necessarily advocate doing any really expensive checks here, but a quick sanity check seemed worth the effort. If you don't want to commit that part, I'm not going to put up a huge fuss.\n\nSince neither of you knew why I was performing that check, it is clear that my code comment was insufficient. I have added a more detailed code comment to explain the purpose of the check. I also changed the first check to use RelationIsValid(), as suggested upthread.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 17 Jun 2021 08:56:49 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Fix for segfault in logical replication on master" }, { "msg_contents": "On Thu, Jun 17, 2021 at 9:26 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> > On Jun 17, 2021, at 6:40 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I think such a problem won't happen because we are using historic\n> > snapshots in this context. We rely on that in a similar way in\n> > reorderbuffer.c, see ReorderBufferProcessTXN.\n>\n> I think you are right, but that's the part I have trouble fully convincing myself is safe. We certainly have an historic snapshot when we call RelationGetIndexList, but that has an early exit if the relation already has fields set, and we don't know if those fields were set before or after the historic snapshot was taken. 
Within the context of the pluggable infrastructure, I think we're safe. The only caller of RelationGetIdentityKeyBitmap() in core is logicalrep_write_attrs(), which is only called by logicalrep_write_rel(), which is only called by send_relation_and_attrs(), which is only called by maybe_send_schema(), which is called by pgoutput_change() and pgoutput_truncate(), both being callbacks in core's logical replication plugin.\n>\n> ReorderBufferProcessTXN calls SetupHistoricSnapshot before opening the relation and then calling ReorderBufferApplyChange to invoke the plugin on that opened relation, so the relation's fields could not have been setup before the snapshot was taken. Any other plugin would similarly get invoked after that same logic, so they'd be fine, too. The problem would only be if somebody called RelationGetIdentityKeyBitmap() or one of its calling functions from outside that infrastructure. Is that worth worrying about? The function comments for those mention having an historic snapshot, and the Assert will catch if code doesn't have one, but I wonder how much of a trap for the unwary that is, considering that somebody might open the relation and lookup indexes for the relation before taking an historic snapshot and calling these functions.\n>\n\nI think in such a case the caller must call InvalidateSystemCaches\nbefore setting up a historic snapshot, otherwise, there could be other\nproblems as well.\n\n> I thought it was cheap enough to check that the relation we open is an index, because if it is not, we'll segfault when accessing fields of the relation->rd_index struct. I wouldn't necessarily advocate doing any really expensive checks here, but a quick sanity check seemed worth the effort.\n>\n\nI am not telling you anything about the cost of these sanity checks. 
I\nsuggest you raise elog rather than return NULL because if this happens\nthere is definitely some problem and continuing won't be a good idea.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 18 Jun 2021 09:18:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix for segfault in logical replication on master" }, { "msg_contents": "On Fri, Jun 18, 2021 at 9:18 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > I thought it was cheap enough to check that the relation we open is an index, because if it is not, we'll segfault when accessing fields of the relation->rd_index struct. I wouldn't necessarily advocate doing any really expensive checks here, but a quick sanity check seemed worth the effort.\n> >\n>\n> I am not telling you anything about the cost of these sanity checks. I\n> suggest you raise elog rather than return NULL because if this happens\n> there is definitely some problem and continuing won't be a good idea.\n>\n\nPushed, after making the above change. Additionally, I have moved the\ntest case to the existing file 001_rep_changes instead of creating a\nnew one as the test seems to fit there and I was not sure if the test\nfor just this case deserves a new file.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 19 Jun 2021 14:48:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix for segfault in logical replication on master" }, { "msg_contents": "\nOn Sat, 19 Jun 2021 at 17:18, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Fri, Jun 18, 2021 at 9:18 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> > I thought it was cheap enough to check that the relation we open is an index, because if it is not, we'll segfault when accessing fields of the relation->rd_index struct. 
I wouldn't necessarily advocate doing any really expensive checks here, but a quick sanity check seemed worth the effort.\n>> >\n>>\n>> I am not telling you anything about the cost of these sanity checks. I\n>> suggest you raise elog rather than return NULL because if this happens\n>> there is definitely some problem and continuing won't be a good idea.\n>>\n>\n> Pushed, after making the above change. Additionally, I have moved the\n> test case to the existing file 001_rep_changes instead of creating a\n> new one as the test seems to fit there and I was not sure if the test\n> for just this case deserves a new file.\n\nHi, Amit\n\nSorry for the late repay.\n\nWhen we find that the relation has no replica identity index, I think we should\nfree the memory of the indexoidlist. Since we free the memory owned by\nindexoidlist at end of RelationGetIdentityKeyBitmap().\n\n if (!OidIsValid(relation->rd_replidindex))\n {\n list_free(indexoidlist);\n return NULL;\n }\n\nOr we can free the memory owned by indexoidlist after check whether it is NIL,\nbecause we do not use it in the later.\n\nIf we do not free the memory, there might be a memory leak when\nrelation->rd_replidindex is invalid. Am I right?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Mon, 21 Jun 2021 16:00:16 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix for segfault in logical replication on master" }, { "msg_contents": "On Mon, Jun 21, 2021 at 1:30 PM Japin Li <japinli@hotmail.com> wrote:\n>\n> On Sat, 19 Jun 2021 at 17:18, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, Jun 18, 2021 at 9:18 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Or we can free the memory owned by indexoidlist after check whether it is NIL,\n> because we do not use it in the later.\n>\n\nValid point. 
But I am thinking do we really need to fetch and check\nindexoidlist here?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 21 Jun 2021 13:52:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix for segfault in logical replication on master" }, { "msg_contents": "\nOn Mon, 21 Jun 2021 at 16:22, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Mon, Jun 21, 2021 at 1:30 PM Japin Li <japinli@hotmail.com> wrote:\n>>\n>> On Sat, 19 Jun 2021 at 17:18, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> > On Fri, Jun 18, 2021 at 9:18 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> Or we can free the memory owned by indexoidlist after check whether it is NIL,\n>> because we do not use it in the later.\n>>\n>\n> Valid point. But I am thinking do we really need to fetch and check\n> indexoidlist here?\n\nIMO, we shold not fetch and check the indexoidlist here, since we do not\nuse it. However, we should use RelationGetIndexList() to update the\nreladion->rd_replidindex, so we should fetch the indexoidlist, maybe we\ncan use the following code:\n\n indexoidlist = RelationGetIndexList(relation);\n list_free(indexoidlist);\n\nOr does there any function that only update the relation->rd_replidindex\nor related fields, but do not fetch the indexoidlist?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Mon, 21 Jun 2021 16:36:12 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix for segfault in logical replication on master" }, { "msg_contents": "On Mon, Jun 21, 2021 at 2:06 PM Japin Li <japinli@hotmail.com> wrote:\n>\n> On Mon, 21 Jun 2021 at 16:22, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Mon, Jun 21, 2021 at 1:30 PM Japin Li <japinli@hotmail.com> wrote:\n> >>\n> >> On Sat, 19 Jun 2021 at 17:18, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> > On Fri, Jun 18, 2021 at 9:18 AM Amit Kapila 
<amit.kapila16@gmail.com> wrote:\n> >>\n> >> Or we can free the memory owned by indexoidlist after check whether it is NIL,\n> >> because we do not use it in the later.\n> >>\n> >\n> > Valid point. But I am thinking do we really need to fetch and check\n> > indexoidlist here?\n>\n> IMO, we shold not fetch and check the indexoidlist here, since we do not\n> use it. However, we should use RelationGetIndexList() to update the\n> reladion->rd_replidindex, so we should fetch the indexoidlist, maybe we\n> can use the following code:\n>\n> indexoidlist = RelationGetIndexList(relation);\n> list_free(indexoidlist);\n>\n> Or does there any function that only update the relation->rd_replidindex\n> or related fields, but do not fetch the indexoidlist?\n>\n\nHow about RelationGetReplicaIndex? It fetches the indexlist only when\nrequired and frees it immediately. But otherwise, currently, there\nshouldn't be any memory leak because we allocate this in \"logical\nreplication output context\" which is reset after processing each\nchange message, see pgoutput_change.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 21 Jun 2021 15:24:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix for segfault in logical replication on master" }, { "msg_contents": "\nOn Mon, 21 Jun 2021 at 17:54, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Mon, Jun 21, 2021 at 2:06 PM Japin Li <japinli@hotmail.com> wrote:\n>>\n>> On Mon, 21 Jun 2021 at 16:22, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> > On Mon, Jun 21, 2021 at 1:30 PM Japin Li <japinli@hotmail.com> wrote:\n>> >>\n>> >> On Sat, 19 Jun 2021 at 17:18, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >> > On Fri, Jun 18, 2021 at 9:18 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >>\n>> >> Or we can free the memory owned by indexoidlist after check whether it is NIL,\n>> >> because we do not use it in the later.\n>> >>\n>> >\n>> > Valid point. 
But I am thinking do we really need to fetch and check\n>> > indexoidlist here?\n>>\n>> IMO, we shold not fetch and check the indexoidlist here, since we do not\n>> use it. However, we should use RelationGetIndexList() to update the\n>> reladion->rd_replidindex, so we should fetch the indexoidlist, maybe we\n>> can use the following code:\n>>\n>> indexoidlist = RelationGetIndexList(relation);\n>> list_free(indexoidlist);\n>>\n>> Or does there any function that only update the relation->rd_replidindex\n>> or related fields, but do not fetch the indexoidlist?\n>>\n>\n> How about RelationGetReplicaIndex? It fetches the indexlist only when\n> required and frees it immediately. But otherwise, currently, there\n> shouldn't be any memory leak because we allocate this in \"logical\n> replication output context\" which is reset after processing each\n> change message, see pgoutput_change.\n\nThanks for your explanation. It might not be a memory leak, however it's\na little confuse when we free the memory of the indexoidlist in one place,\nbut not free it in another place.\n\nI attached a patch to fix this. Any thoughts?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\ndiff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c\nindex d55ae016d0..94fbf1aa19 100644\n--- a/src/backend/utils/cache/relcache.c\n+++ b/src/backend/utils/cache/relcache.c\n@@ -5244,9 +5244,9 @@ Bitmapset *\n RelationGetIdentityKeyBitmap(Relation relation)\n {\n Bitmapset *idindexattrs = NULL; /* columns in the replica identity */\n- List *indexoidlist;\n Relation indexDesc;\n int i;\n+ Oid replidindex;\n MemoryContext oldcxt;\n\n /* Quick exit if we already computed the result */\n@@ -5260,18 +5260,14 @@ RelationGetIdentityKeyBitmap(Relation relation)\n /* Historic snapshot must be set. 
*/\n Assert(HistoricSnapshotActive());\n\n- indexoidlist = RelationGetIndexList(relation);\n-\n- /* Fall out if no indexes (but relhasindex was set) */\n- if (indexoidlist == NIL)\n- return NULL;\n+ replidindex = RelationGetReplicaIndex(relation);\n\n /* Fall out if there is no replica identity index */\n- if (!OidIsValid(relation->rd_replidindex))\n+ if (!OidIsValid(replidindex))\n return NULL;\n\n /* Look up the description for the replica identity index */\n- indexDesc = RelationIdGetRelation(relation->rd_replidindex);\n+ indexDesc = RelationIdGetRelation(replidindex);\n\n if (!RelationIsValid(indexDesc))\n elog(ERROR, \"could not open relation with OID %u\",\n@@ -5295,7 +5291,6 @@ RelationGetIdentityKeyBitmap(Relation relation)\n }\n\n RelationClose(indexDesc);\n- list_free(indexoidlist);\n\n /* Don't leak the old values of these bitmaps, if any */\n bms_free(relation->rd_idattr);\n\n\n\n\n", "msg_date": "Mon, 21 Jun 2021 18:39:10 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix for segfault in logical replication on master" }, { "msg_contents": "On Mon, Jun 21, 2021 at 4:09 PM Japin Li <japinli@hotmail.com> wrote:\n>\n> On Mon, 21 Jun 2021 at 17:54, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Mon, Jun 21, 2021 at 2:06 PM Japin Li <japinli@hotmail.com> wrote:\n> >>\n> >> On Mon, 21 Jun 2021 at 16:22, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> > On Mon, Jun 21, 2021 at 1:30 PM Japin Li <japinli@hotmail.com> wrote:\n> >> >>\n> >> >> On Sat, 19 Jun 2021 at 17:18, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> >> > On Fri, Jun 18, 2021 at 9:18 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> >>\n> >> >> Or we can free the memory owned by indexoidlist after check whether it is NIL,\n> >> >> because we do not use it in the later.\n> >> >>\n> >> >\n> >> > Valid point. 
But I am thinking do we really need to fetch and check\n> >> > indexoidlist here?\n> >>\n> >> IMO, we shold not fetch and check the indexoidlist here, since we do not\n> >> use it. However, we should use RelationGetIndexList() to update the\n> >> reladion->rd_replidindex, so we should fetch the indexoidlist, maybe we\n> >> can use the following code:\n> >>\n> >> indexoidlist = RelationGetIndexList(relation);\n> >> list_free(indexoidlist);\n> >>\n> >> Or does there any function that only update the relation->rd_replidindex\n> >> or related fields, but do not fetch the indexoidlist?\n> >>\n> >\n> > How about RelationGetReplicaIndex? It fetches the indexlist only when\n> > required and frees it immediately. But otherwise, currently, there\n> > shouldn't be any memory leak because we allocate this in \"logical\n> > replication output context\" which is reset after processing each\n> > change message, see pgoutput_change.\n>\n> Thanks for your explanation. It might not be a memory leak, however it's\n> a little confuse when we free the memory of the indexoidlist in one place,\n> but not free it in another place.\n>\n> I attached a patch to fix this. 
Any thoughts?\n>\n\nYour patch appears to be on the lines we discussed but I would prefer\nto get it done after Beta2 as this is just a minor code improvement.\nCan you please send the change as a patch file instead of copy-pasting\nthe diff at the end of the email?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 21 Jun 2021 16:36:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix for segfault in logical replication on master" }, { "msg_contents": "On Mon, 21 Jun 2021 at 19:06, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Mon, Jun 21, 2021 at 4:09 PM Japin Li <japinli@hotmail.com> wrote:\n>>\n>> On Mon, 21 Jun 2021 at 17:54, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> > On Mon, Jun 21, 2021 at 2:06 PM Japin Li <japinli@hotmail.com> wrote:\n>> >>\n>> >> On Mon, 21 Jun 2021 at 16:22, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >> > On Mon, Jun 21, 2021 at 1:30 PM Japin Li <japinli@hotmail.com> wrote:\n>> >> >>\n>> >> >> On Sat, 19 Jun 2021 at 17:18, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >> >> > On Fri, Jun 18, 2021 at 9:18 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >> >>\n>> >> >> Or we can free the memory owned by indexoidlist after check whether it is NIL,\n>> >> >> because we do not use it in the later.\n>> >> >>\n>> >> >\n>> >> > Valid point. But I am thinking do we really need to fetch and check\n>> >> > indexoidlist here?\n>> >>\n>> >> IMO, we shold not fetch and check the indexoidlist here, since we do not\n>> >> use it. 
However, we should use RelationGetIndexList() to update the\n>> >> reladion->rd_replidindex, so we should fetch the indexoidlist, maybe we\n>> >> can use the following code:\n>> >>\n>> >> indexoidlist = RelationGetIndexList(relation);\n>> >> list_free(indexoidlist);\n>> >>\n>> >> Or does there any function that only update the relation->rd_replidindex\n>> >> or related fields, but do not fetch the indexoidlist?\n>> >>\n>> >\n>> > How about RelationGetReplicaIndex? It fetches the indexlist only when\n>> > required and frees it immediately. But otherwise, currently, there\n>> > shouldn't be any memory leak because we allocate this in \"logical\n>> > replication output context\" which is reset after processing each\n>> > change message, see pgoutput_change.\n>>\n>> Thanks for your explanation. It might not be a memory leak, however it's\n>> a little confuse when we free the memory of the indexoidlist in one place,\n>> but not free it in another place.\n>>\n>> I attached a patch to fix this. Any thoughts?\n>>\n>\n> Your patch appears to be on the lines we discussed but I would prefer\n> to get it done after Beta2 as this is just a minor code improvement.\n> Can you please send the change as a patch file instead of copy-pasting\n> the diff at the end of the email?\n\nThanks for your review! 
Attached v1 patch.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Tue, 22 Jun 2021 10:07:37 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix for segfault in logical replication on master" }, { "msg_contents": "On Tuesday, June 22, 2021 11:08 AM Japin Li <japinli@hotmail.com> wrote:\n> On Mon, 21 Jun 2021 at 19:06, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Mon, Jun 21, 2021 at 4:09 PM Japin Li <japinli@hotmail.com> wrote:\n> >>\n> >> On Mon, 21 Jun 2021 at 17:54, Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >> > On Mon, Jun 21, 2021 at 2:06 PM Japin Li <japinli@hotmail.com> wrote:\n> >> >>\n> >> >> On Mon, 21 Jun 2021 at 16:22, Amit Kapila\n> <amit.kapila16@gmail.com> wrote:\n> >> >> > On Mon, Jun 21, 2021 at 1:30 PM Japin Li <japinli@hotmail.com>\n> wrote:\n> >> >> >>\n> >> >> >> On Sat, 19 Jun 2021 at 17:18, Amit Kapila\n> <amit.kapila16@gmail.com> wrote:\n> >> >> >> > On Fri, Jun 18, 2021 at 9:18 AM Amit Kapila\n> <amit.kapila16@gmail.com> wrote:\n> >> >> >>\n> >> >> >> Or we can free the memory owned by indexoidlist after check\n> >> >> >> whether it is NIL, because we do not use it in the later.\n> >> >> >>\n> >> >> >\n> >> >> > Valid point. But I am thinking do we really need to fetch and\n> >> >> > check indexoidlist here?\n> >> >>\n> >> >> IMO, we shold not fetch and check the indexoidlist here, since we\n> >> >> do not use it. However, we should use RelationGetIndexList() to\n> >> >> update the\n> >> >> reladion->rd_replidindex, so we should fetch the indexoidlist,\n> >> >> reladion->maybe we\n> >> >> can use the following code:\n> >> >>\n> >> >> indexoidlist = RelationGetIndexList(relation);\n> >> >> list_free(indexoidlist);\n> >> >>\n> >> >> Or does there any function that only update the\n> >> >> relation->rd_replidindex or related fields, but do not fetch the\n> indexoidlist?\n> >> >>\n> >> >\n> >> > How about RelationGetReplicaIndex? 
It fetches the indexlist only\n> >> > when required and frees it immediately. But otherwise, currently,\n> >> > there shouldn't be any memory leak because we allocate this in\n> >> > \"logical replication output context\" which is reset after\n> >> > processing each change message, see pgoutput_change.\n> >>\n> >> Thanks for your explanation. It might not be a memory leak, however\n> >> it's a little confuse when we free the memory of the indexoidlist in\n> >> one place, but not free it in another place.\n> >>\n> >> I attached a patch to fix this. Any thoughts?\n> >>\n> >\n> > Your patch appears to be on the lines we discussed but I would prefer\n> > to get it done after Beta2 as this is just a minor code improvement.\n> > Can you please send the change as a patch file instead of copy-pasting\n> > the diff at the end of the email?\n> \n> Thanks for your review! Attached v1 patch.\nYour patch can be applied to the HEAD.\nAnd, I also reviewed your patch, which seems OK.\nMake check-world has passed with your patch in my env as well.\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n", "msg_date": "Tue, 22 Jun 2021 03:48:20 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Fix for segfault in logical replication on master" }, { "msg_contents": "On Tue, Jun 22, 2021 at 9:18 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Tuesday, June 22, 2021 11:08 AM Japin Li <japinli@hotmail.com> wrote:\n> > > Your patch appears to be on the lines we discussed but I would prefer\n> > > to get it done after Beta2 as this is just a minor code improvement.\n> > > Can you please send the change as a patch file instead of copy-pasting\n> > > the diff at the end of the email?\n> >\n> > Thanks for your review! 
Attached v1 patch.\n> Your patch can be applied to the HEAD.\n> And, I also reviewed your patch, which seems OK.\n> Make check-world has passed with your patch in my env as well.\n>\n\nPushed, thanks!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 28 Jun 2021 14:09:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix for segfault in logical replication on master" } ]
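The fix pushed above follows one ownership rule from the thread: RelationGetIndexList() returns a freshly copied list that the caller owns, so even a call made purely for its side effect of refreshing relation->rd_replidindex must be followed by list_free(). A minimal self-contained C sketch of that rule, using toy stand-ins (OidList, get_index_list, free_index_list, and refresh_and_count are illustrative names, not the real server API):

```c
#include <assert.h>
#include <stdlib.h>

/* Toy stand-in for PostgreSQL's List of index OIDs. */
typedef struct OidList {
    size_t len;
    unsigned int *oids;
} OidList;

/* Mimics RelationGetIndexList(): hands back a fresh copy that the
 * caller owns; in the real server the call also refreshes cached
 * fields such as rd_replidindex as a side effect. */
static OidList *get_index_list(void)
{
    OidList *list = malloc(sizeof *list);
    list->len = 2;
    list->oids = malloc(list->len * sizeof *list->oids);
    list->oids[0] = 16384;  /* made-up index OIDs */
    list->oids[1] = 16390;
    return list;
}

/* Mimics list_free(): the caller releases its copy. */
static void free_index_list(OidList *list)
{
    free(list->oids);
    free(list);
}

/* The corrected shape from the v1 patch: fetch the list for its side
 * effect, then free the caller's copy immediately instead of dropping it. */
static size_t refresh_and_count(void)
{
    OidList *indexoidlist = get_index_list();
    size_t n = indexoidlist->len;
    free_index_list(indexoidlist);
    return n;
}
```

In the real code this boils down to the two lines quoted in the thread -- indexoidlist = RelationGetIndexList(relation); list_free(indexoidlist); -- which, as noted above, is tidy rather than strictly required, since the copy would otherwise live only until the "logical replication output context" is reset.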
[ { "msg_contents": "I am making use of the new pipeline mode added to libpq in\nPostgreSQL 14. At the same time I would still like to support\nolder libpq versions by not providing the extended functionality\nthat depends on this mode.\n\nThe natural way to achieve this in C/C++ is to conditionally\nenable code that depends on the additional APIs based on the\npreprocessor macro. And I could easily do this if libpq-fe.h\nprovided a macro containing its version.\n\nNow, such a macro (PG_VERSION_NUM) is provided by pg_config.h\nthat normally accompanies libpq-fe.h. However, I don't believe\nthe presence of this file is guaranteed. All the documentation\nsays[1] about headers is this:\n\n\"Client programs that use libpq must include the header file \nlibpq-fe.h and must link with the libpq library.\"\n\nAnd there are good reasons why packagers of libpq may decide to\nomit this header (in a nutshell, it embeds target architecture-\nspecific information, see this discussion for background[2]). And\nI may not want to include it in my code (it defines a lot of free-\nnamed macros that may clash with my names).\n\nSo I am wondering if it would make sense to provide a better way\nto obtain the libpq version as a macro?\n\nTo me, as a user, the simplest way would be to have such a macro\ndefined by libpq-fe.h. 
This would also provide a reasonable\nfallback for previous versions: if this macro is not defined, I\nknow I am dealing with version prior to 14 and if I need to know\nwhich exactly I can try to include pg_config.h (perhaps with the\nhelp of __has_include if I am using C++).\n\nIf simply moving this macro to libpq-fe.h is not desirable (for\nexample, because it is auto-generated), then perhaps we could\nmove this (and a few other version-related macros[3]) to a\nseparate header (for example, libpq-version.h) and either include\nit from libpq-fe.h or define a macro in libpq-fe.h that signals\nits presence (e.g., PG_HAS_VERSION or some such).\n\nWhat do you think?\n\n\n[1] https://www.postgresql.org/docs/9.3/libpq.html\n\n[2] https://bugzilla.redhat.com/show_bug.cgi?id=828467\n\n[3] PG_MAJORVERSION\n PG_MAJORVERSION_NUM\n PG_MINORVERSION_NUM\n PG_VERSION\n PG_VERSION_NUM\n PG_VERSION_STR (this one includes target so maybe leave it in pg_config.h)\n\n\n", "msg_date": "Thu, 17 Jun 2021 11:04:06 +0200", "msg_from": "Boris Kolpackov <boris@codesynthesis.com>", "msg_from_op": true, "msg_subject": "Add version macro to libpq-fe.h" }, { "msg_contents": "Boris Kolpackov <boris@codesynthesis.com> writes:\n> I am making use of the new pipeline mode added to libpq in\n> PostgreSQL 14. At the same time I would still like to support\n> older libpq versions by not providing the extended functionality\n> that depends on this mode.\n\nGood point.\n\n> The natural way to achieve this in C/C++ is to conditionally\n> enable code that depends on the additional APIs based on the\n> preprocessor macro. And I could easily do this if libpq-fe.h\n> provided a macro containing its version.\n\nI think putting a version number as such in there is a truly\nhorrid idea. 
However, I could get behind adding a boolean flag\nthat says specifically whether the pipeline feature exists.\nThen you'd do something like\n\n#ifdef LIBPQ_HAS_PIPELINING\n\nrather than embedding knowledge of exactly which release\nadded that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Jun 2021 09:34:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "On Thu, Jun 17, 2021 at 9:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think putting a version number as such in there is a truly\n> horrid idea. However, I could get behind adding a boolean flag\n> that says specifically whether the pipeline feature exists.\n> Then you'd do something like\n>\n> #ifdef LIBPQ_HAS_PIPELINING\n>\n> rather than embedding knowledge of exactly which release\n> added that.\n\nI realize that this kind of feature-based testing is generally\nconsidered a best practice, but the problem is we're unlikely to do it\nconsistently. If we put a version number in there, people will be able\nto test for whatever they want.\n\nThen again, why would pg_config.h be absent?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Jun 2021 12:56:58 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Jun 17, 2021 at 9:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think putting a version number as such in there is a truly\n>> horrid idea. However, I could get behind adding a boolean flag\n>> that says specifically whether the pipeline feature exists.\n\n> I realize that this kind of feature-based testing is generally\n> considered a best practice, but the problem is we're unlikely to do it\n> consistently. 
If we put a version number in there, people will be able\n> to test for whatever they want.\n\nWe don't really add major new APIs to libpq very often, so I don't\nfind that too compelling. I do find it compelling that user code\nshouldn't embed knowledge about \"feature X arrived in version Y\".\n\n> Then again, why would pg_config.h be absent?\n\nLikely because somebody decided it was a server-side include rather\nthan an application-side include.\n\nA more critical point is that if pg_config is present, it'll likely\ncontain the server version, which might not have a lot to do with the\nlibpq version. Debian's already shipping things in a way that decouples\nthose, and I gather Red Hat is moving in that direction too.\n\nI think what people really want to know is \"if I try to call\nPQenterPipelineMode, will that compile?\". Comparing v13 and v14\nlibpq-fe.h, I see that there is a solution available now:\n\"#ifdef PQ_QUERY_PARAM_MAX_LIMIT\". But depending on that seems\nlike a bit of a hack, because I'm not sure that it's directly tied\nto the pipelining feature.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Jun 2021 13:16:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "> On 17 Jun 2021, at 19:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> A more critical point is that if pg_config is present, it'll likely\n> contain the server version, which might not have a lot to do with the\n> libpq version. Debian's already shipping things in a way that decouples\n> those, and I gather Red Hat is moving in that direction too.\n> \n> I think what people really want to know is \"if I try to call\n> PQenterPipelineMode, will that compile?\".\n\nI think this is the most compelling argument for feature-based gating rather\nthan promote version based. +1 on doing \"#ifdef LIBPQ_HAS_PIPELINING\" or along\nthose lines and try to be consistent going forward. 
If we've truly failed to\ndo so in X releases time, then we can revisit this.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 17 Jun 2021 20:03:18 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "On Thu, Jun 17, 2021 at 1:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> We don't really add major new APIs to libpq very often, so I don't\n> find that too compelling. I do find it compelling that user code\n> shouldn't embed knowledge about \"feature X arrived in version Y\".\n\nI just went and looked at how exports.txt has evolved over the years.\nSince PostgreSQL 8.1, every release except for 9.4 and 11 added at\nleast one new function to libpq. That means in 14 releases we've done\nsomething that might break someone's compile 12 times. Now maybe you\nwant to try to argue that few of those changes are \"major,\" but I\ndon't know how that could be a principled argument. Every new function\nis something someone may want to use, and thus a potential compile\nbreak.\n\nSome of those releases also changed behavior. For example, version 10\nallowed multi-host connection strings and URLs. People might want to\nknow about that sort of thing, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Jun 2021 14:15:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "Hi,\n\nOn 2021-06-17 13:16:17 -0400, Tom Lane wrote:\n> > Then again, why would pg_config.h be absent?\n> \n> Likely because somebody decided it was a server-side include rather\n> than an application-side include.\n\nWhich is the right call - pg_config.h can't easily be included in\napplications that themselves use autoconf. 
Most problematically it\ndefines all the standard autotools PACKAGE_* macros that are guaranteed\nto conflict in any autotools using project. There's obviously also a lot\nof other defines in there that quite possibly could conflict.\n\nWe probably split pg_config.h at some point. Even for extensions it can\nbe annoying because pg_config.h is always included in server code, which\nmeans that the extension can't easily include an autoheader style header\nitself.\n\n\nI'm not sure I understand why you think that exposing the version number\nfor libpq is such a bad idea?\n\n\nI think it'd be reasonable to add a few more carefully chosen macros to\npg_config_ext.h.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Jun 2021 11:34:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I just went and looked at how exports.txt has evolved over the years.\n> Since PostgreSQL 8.1, every release except for 9.4 and 11 added at\n> least one new function to libpq. That means in 14 releases we've done\n> something that might break someone's compile 12 times. Now maybe you\n> want to try to argue that few of those changes are \"major,\" but I\n> don't know how that could be a principled argument. Every new function\n> is something someone may want to use, and thus a potential compile\n> break.\n\nInteresting, but then you have to explain why this is the first time\nthat somebody has asked for a version number in libpq-fe.h. Maybe\nall those previous additions were indeed minor enough that the\nproblem didn't come up. 
(Another likely possibility, perhaps, is\nthat people have been misusing the server version for this purpose,\nand have been lucky enough to not have that approach fail for them.)\n\nAnyway, I do not see why we can't establish a principle going forward\nthat new additions to libpq's API should involve at least one macro,\nso that they can be checked for with #ifdefs. Just because the\nversion-number approach offloads work from us doesn't make it a good\nidea, because the work doesn't vanish; it will be dumped in the laps\nof packagers and end users.\n\nBTW, by that principle, we should likely be adding a symbol\nassociated with the new tracing features, as well as one for\npipelining. Or is it good enough to tell people they can\ncheck \"#ifdef PQTRACE_SUPPRESS_TIMESTAMPS\" ?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Jun 2021 14:34:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I'm not sure I understand why you think that exposing the version number\n> for libpq is such a bad idea?\n> I think it'd be reasonable to add a few more carefully chosen macros to\n> pg_config_ext.h.\n\nThe primary problem I've got with that is the risk of confusion\nbetween server and libpq version numbers. In particular, if we do\nit like that then we've just totally screwed the Debian packagers.\nThey will have to choose whether to install pg_config_ext.h from\ntheir server build or their libpq build. Both choices are wrong,\ndepending on what applications want to know.\n\nNow we could alternatively invent a libpq_version.h and hope that\npackagers remember to install the right version of that. 
But I\nthink it's a better user experience all around to do it the other\nway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Jun 2021 14:41:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "Hi,\n\nOn 2021-06-17 14:41:40 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I'm not sure I understand why you think that exposing the version number\n> > for libpq is such a bad idea?\n> > I think it'd be reasonable to add a few more carefully chosen macros to\n> > pg_config_ext.h.\n> \n> The primary problem I've got with that is the risk of confusion\n> between server and libpq version numbers. In particular, if we do\n> it like that then we've just totally screwed the Debian packagers.\n> They will have to choose whether to install pg_config_ext.h from\n> their server build or their libpq build. Both choices are wrong,\n> depending on what applications want to know.\n\nThat's a fair point.\n\nHowever, we kind of already force them to do so - libpq already depends\non pg_config_ext.h, so they need to deal with the issue in some\nform. 
It's not particularly likely to lead to a problem to have a\nmismatching pg_config_ext.h, though, so maybe that's not too bad.\n\nOur make install actually foresees the issue to some degree, and installs\npg_config_ext.h in two places, which then debian builds on:\n\n$ apt-file search pg_config_ext.h\nlibpq-dev: /usr/include/postgresql/pg_config_ext.h\npostgresql-server-dev-13: /usr/include/postgresql/13/server/pg_config_ext.h\npostgresql-server-dev-14: /usr/include/postgresql/14/server/pg_config_ext.h\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Jun 2021 12:13:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> I just went and looked at how exports.txt has evolved over the years.\n>> Since PostgreSQL 8.1, every release except for 9.4 and 11 added at\n>> least one new function to libpq. That means in 14 releases we've done\n>> something that might break someone's compile 12 times. Now maybe you\n>> want to try to argue that few of those changes are \"major,\" but I\n>> don't know how that could be a principled argument. Every new function\n>> is something someone may want to use, and thus a potential compile\n>> break.\n>\n> Interesting, but then you have to explain why this is the first time\n> that somebody has asked for a version number in libpq-fe.h. Maybe\n> all those previous additions were indeed minor enough that the\n> problem didn't come up. 
(Another likely possibility, perhaps, is\n> that people have been misusing the server version for this purpose,\n> and have been lucky enough to not have that approach fail for them.)\n\nFWIW, the perl DBD::Pg module extracts the version number from\n`pg_config --version` at build time, and uses that to define a\nPGLIBVERSION which is used to define fatal fallbacks for a few\nfunctions:\n\nhttps://metacpan.org/release/TURNSTEP/DBD-Pg-3.15.0/source/dbdimp.c#L26-55\n\nI have an unfinished branch which does similar for PQsetSingleRowMode,\n(added in 9.2).\n\n- ilmari\n\n\n", "msg_date": "Thu, 17 Jun 2021 20:15:42 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "On Thu, Jun 17, 2021 at 08:15:42PM +0100, Dagfinn Ilmari Manns�ker wrote:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> >> I just went and looked at how exports.txt has evolved over the years.\n> >> Since PostgreSQL 8.1, every release except for 9.4 and 11 added at\n> >> least one new function to libpq. That means in 14 releases we've done\n> >> something that might break someone's compile 12 times. Now maybe you\n> >> want to try to argue that few of those changes are \"major,\" but I\n> >> don't know how that could be a principled argument. Every new function\n> >> is something someone may want to use, and thus a potential compile\n> >> break.\n> >\n> > Interesting, but then you have to explain why this is the first time\n> > that somebody has asked for a version number in libpq-fe.h. Maybe\n> > all those previous additions were indeed minor enough that the\n> > problem didn't come up. 
(Another likely possibility, perhaps, is\n> > that people have been misusing the server version for this purpose,\n> > and have been lucky enough to not have that approach fail for them.)\n> \n> FWIW, the perl DBD::Pg module extracts the version number from\n> `pg_config --version` at build time, and uses that to define a\n\npygresql is also using pg_config --version:\n\nsetup.py- wanted = self.escaping_funcs\nsetup.py: supported = pg_version >= (9, 0)\n--\nsetup.py- wanted = self.pqlib_info\nsetup.py: supported = pg_version >= (9, 1)\n--\nsetup.py- wanted = self.ssl_info\nsetup.py: supported = pg_version >= (9, 5)\n--\nsetup.py- wanted = self.memory_size\nsetup.py: supported = pg_version >= (12, 0)\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 17 Jun 2021 14:47:22 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "On Thu, Jun 17, 2021 at 2:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Interesting, but then you have to explain why this is the first time\n> that somebody has asked for a version number in libpq-fe.h. Maybe\n> all those previous additions were indeed minor enough that the\n> problem didn't come up. (Another likely possibility, perhaps, is\n> that people have been misusing the server version for this purpose,\n> and have been lucky enough to not have that approach fail for them.)\n\nWell, I don't know for sure. I sometimes find it difficult to account\nfor the behavior even of the small number of people I know fairly\nwell, let alone the rather large number of people I've never even met.\nBut if I had to speculate ... I think one contributing factor is that\nthe number of people who write applications that use a C-language\nconnector to the database isn't terribly large, because most\napplication developers are going to use a higher-level language like\nJava or Python or something. 
And of those that do, I would guess most\nof them aren't trying to write applications that work across versions,\nand so the problem doesn't arise. Now I know that personally, I have\ntried to do that on a number of occasions, and I've accidentally used\nfunctions that only existed in newer versions on, err, most of those\noccasions. I chose to handle that problem by either (a) rewriting the\ncode to use only functions that appeared in all relevant versions of\nlibpq or (b) upgrading all the versions of libpq in my environment to\nsomething new enough that it would work. If I'd run into a problem\nthat couldn't be handled in either of those ways, I likely would have\nhandled it by (c) depending on some symbol that actually indicates the\nserver version number, and demanding that anybody compiling my code\nuse a packaging system where those versions were the same. But none of\nthose workarounds seem like a real argument against having a version\nindicator for libpq proper.\n\n> Anyway, I do not see why we can't establish a principle going forward\n> that new additions to libpq's API should involve at least one macro,\n> so that they can be checked for with #ifdefs. Just because the\n> version-number approach offloads work from us doesn't make it a good\n> idea, because the work doesn't vanish; it will be dumped in the laps\n> of packagers and end users.\n\nWhat work? Including an additional #define in a header file doesn't\ncreate any work for packagers or end-users that I can see. If\nanything, it seems easier for end-users. If you want a function that\nfirst appears in v16, just test whether the version number is >= 16.\nOn the other hand if we promise to add at least one #define to that\nfile for each new release, then somebody's got to be like, oh, let's\nsee, this function was added in v16, now which #define got added in\nthat release ... hmm, let me go diff the branches in git ... how is\nthat any better? 
Especially because it seems really likely that we\nwill fail to actually follow this principle consistently, in which\ncase they may find that #define that they need doesn't even exist.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Jun 2021 15:47:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Jun 17, 2021 at 2:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... Just because the\n>> version-number approach offloads work from us doesn't make it a good\n>> idea, because the work doesn't vanish; it will be dumped in the laps\n>> of packagers and end users.\n\n> What work? Including an additional #define in a header file doesn't\n> create any work for packagers or end-users that I can see. If\n> anything, it seems easier for end-users. If you want a function that\n> first appears in v16, just test whether the version number is >= 16.\n\nYou're omitting the step of \"figure out which version the feature you\nwant to use appeared in\". A few years down the road, that'll get\nharder than it might seem to be for a shiny new feature.\n\nAs for the packagers, this creates a requirement to include the right\nversion of the right file in the right sub-package. Admittedly, if\nwe hack things so that the #define appears directly in libpq-fe.h through\nsome configure magic, then there's nothing extra for packagers to get\nright; but if we put it anywhere else, we're adding ways for them to\nget it wrong.\n\n> On the other hand if we promise to add at least one #define to that\n> file for each new release,\n\nNew libpq API feature, not every new release. I don't really see\nthat that's much harder than, say, bumping catversion.\n\n> ... then somebody's got to be like, oh, let's\n> see, this function was added in v16, now which #define got added in\n> that release ... 
hmm, let me go diff the branches in git ... how is\n> that any better?\n\nI repeat that you are evaluating this through the lens of how much\nwork it is for us as opposed to other people, and I fundamentally\ndisagree with that being the primary metric.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Jun 2021 16:13:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> I think putting a version number as such in there is a truly\n> horrid idea. However, I could get behind adding a boolean flag\n> that says specifically whether the pipeline feature exists.\n> Then you'd do something like\n> \n> #ifdef LIBPQ_HAS_PIPELINING\n> \n> rather than embedding knowledge of exactly which release\n> added that.\n\nThat would be even better, but I agree with what others have\nsaid: we would have to keep adding such feature test macros\ngoing forward.\n\nI think ideally you would want to have both since the version\nmacro could still be helpful in dealing with \"features\" that you\ndid not plan to add (aka bugs).\n\n\n> Comparing v13 and v14 libpq-fe.h, I see that there is a solution\n> available now: \"#ifdef PQ_QUERY_PARAM_MAX_LIMIT\".\n\nHm, it must have been added recently since I don't see it in 14beta1.\nBut thanks for the pointer, if nothing better comes up this will\nhave to do.\n\n\n", "msg_date": "Fri, 18 Jun 2021 15:52:41 +0200", "msg_from": "Boris Kolpackov <boris@codesynthesis.com>", "msg_from_op": true, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "Boris Kolpackov <boris@codesynthesis.com> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> I think putting a version number as such in there is a truly\n>> horrid idea. 
However, I could get behind adding a boolean flag\n>> that says specifically whether the pipeline feature exists.\n\n> That would be even better, but I agree with what others have\n> said: we would have to keep adding such feature test macros\n> going forward.\n\nYes, and I think that is a superior solution. I think the\nargument that it's too much effort is basically nonsense.\n\n> I think ideally you would want to have both since the version\n> macro could still be helpful in dealing with \"features\" that you\n> did not plan to add (aka bugs).\n\nI really doubt that a version number appearing in libpq-fe.h would\nbe helpful for deciding whether you need to work around a bug.\nThe problem again is version skew: how well does the libpq.so you\nare running against today match up with the header you compiled\nagainst (possibly months ago, possibly on a different machine)?\nWhat you'd want for that sort of thing is a runtime test, i.e.\nconsult PQlibVersion().\n\nThat point, along with the previously-discussed point about confusion\nbetween server and libpq versions, nicely illustrates another reason\nwhy I'm resistant to just adding a version number to libpq-fe.h.\nIf we do that, application programmers will be presented with THREE\ndifferent Postgres version numbers, and it seems inevitable that\npeople will make mistakes and consult the wrong one for a particular\npurpose. I think we can at least reduce the confusion by handling\nthe question of which-features-are-visible-in-the-include-file in a\ndifferent style.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Jun 2021 10:12:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "On 2021-Jun-18, Boris Kolpackov wrote:\n\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> \n> > I think putting a version number as such in there is a truly\n> > horrid idea. 
However, I could get behind adding a boolean flag\n> > that says specifically whether the pipeline feature exists.\n> > Then you'd do something like\n> > \n> > #ifdef LIBPQ_HAS_PIPELINING\n> > \n> > rather than embedding knowledge of exactly which release\n> > added that.\n> \n> That would be even better, but I agree with what others have\n> said: we would have to keep adding such feature test macros\n> going forward.\n\nBut we do not add that many significant features to libpq in the first\nplace, so I'm not sure it would be too bad. As far as I am aware, this\nis the first time someone has requested a mechanism to detect feature\npresence specifically in libpq.\n\nTo put a number to it, I counted the number of commits to exports.txt\nsince Jan 2015 -- there are 17. But many of them are just intra-release\nfixups; the number of actual \"features\" is 11, an average of two per\nyear. That seems small enough to me.\n\nSo I'm +1 on adding this \"feature macro\".\n\n(The so-version major changed from 4 to 5 in commit 1e7bb2da573e, dated\nApril 2006.)\n\n> I think ideally you would want to have both since the version\n> macro could still be helpful in dealing with \"features\" that you\n> did not plan to add (aka bugs).\n> \n> \n> > Comparing v13 and v14 libpq-fe.h, I see that there is a solution\n> > available now: \"#ifdef PQ_QUERY_PARAM_MAX_LIMIT\".\n> \n> Hm, it must have been added recently since I don't see it in 14beta1.\n> But thanks for the pointer, if nothing better comes up this will\n> have to do.\n\nYeah, this one was added by commit cb92703384e2 on June 8th, three weeks\nafter beta1.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"Pido que me den el Nobel por razones humanitarias\" (Nicanor Parra)\n\n\n", "msg_date": "Fri, 18 Jun 2021 10:27:50 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com> 
writes:\n> So I'm +1 on adding this \"feature macro\".\n\nConcretely, how about the attached? (I also got rid of a recently-added\nextra comma. While the compilers we use might not warn about that,\nit seems unwise to assume that no user's compiler will.)\n\nI guess one unresolved question is whether we want to mention these in\nthe SGML docs. I vote \"no\", because it'll raise the maintenance cost\nnoticeably. But I can see an argument on the other side.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 18 Jun 2021 13:44:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "On 2021-Jun-18, Tom Lane wrote:\n\n> Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n> > So I'm +1 on adding this \"feature macro\".\n> \n> Concretely, how about the attached?\n\nSeems OK to me. We can just accumulate any similar ones in the future\nnearby.\n\n> (I also got rid of a recently-added\n> extra comma. While the compilers we use might not warn about that,\n> it seems unwise to assume that no user's compiler will.)\n\nOops.\n\n> I guess one unresolved question is whether we want to mention these in\n> the SGML docs. I vote \"no\", because it'll raise the maintenance cost\n> noticeably. But I can see an argument on the other side.\n\nWell, if we do want docs for these macros, then IMO it'd be okay to have\nthem in libpq-fe.h itself rather than SGML. 
A one-line comment for each\nwould suffice:\n\n+/*\n+ * These symbols may be used in compile-time #ifdef tests for the availability\n+ * of newer libpq features.\n+ */\n+/* Indicates presence of PQenterPipelineMode and friends */\n+#define LIBPQ_HAS_PIPELINING 1\n+\n+/* Indicates presence of PQsetTraceFlags; PQtrace changed output format */\n+#define LIBPQ_HAS_TRACE_FLAGS 1\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Fri, 18 Jun 2021 14:03:41 -0400", "msg_from": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n> On 2021-Jun-18, Tom Lane wrote:\n>> I guess one unresolved question is whether we want to mention these in\n>> the SGML docs. I vote \"no\", because it'll raise the maintenance cost\n>> noticeably. But I can see an argument on the other side.\n\n> Well, if we do want docs for these macros, then IMO it'd be okay to have\n> them in libpq-fe.h itself rather than SGML. A one-line comment for each\n> would suffice:\n\nWFM. I'd sort of supposed that the symbol names were self-documenting,\nbut you're right that a line or so of annotation improves things.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Jun 2021 14:24:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "I wrote:\n> Alvaro Herrera <alvaro.herrera@2ndquadrant.com> writes:\n>> Well, if we do want docs for these macros, then IMO it'd be okay to have\n>> them in libpq-fe.h itself rather than SGML. A one-line comment for each\n>> would suffice:\n\n> WFM. 
I'd sort of supposed that the symbol names were self-documenting,\n> but you're right that a line or so of annotation improves things.\n\nHearing no further comments, done that way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 19 Jun 2021 11:45:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "On Sat, Jun 19, 2021 at 11:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hearing no further comments, done that way.\n\nWhat will prevent us from forgetting to do something about this again,\na year from now?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Jun 2021 11:27:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> What will prevent us from forgetting to do something about this again,\n> a year from now?\n\nAs long as we notice it before 15.0, we can fix it retroactively,\nas we just did for 14. For that matter, fixing before 15.1 or\nso would likely be Good Enough.\n\nBut realistically, how is this any worse of a problem than a hundred\nother easily-forgotten coding rules we have? 
We manage to uphold\nmost of them most of the time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Jun 2021 11:39:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "> On 21 Jun 2021, at 17:27, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Sat, Jun 19, 2021 at 11:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hearing no further comments, done that way.\n> \n> What will prevent us from forgetting to do something about this again,\n> a year from now?\n\nAn entry in a release checklist could perhaps be an idea?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 21 Jun 2021 18:19:13 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 21 Jun 2021, at 17:27, Robert Haas <robertmhaas@gmail.com> wrote:\n>> What will prevent us from forgetting to do something about this again,\n>> a year from now?\n\n> An entry in a release checklist could perhaps be an idea?\n\nYeah, I was wondering if adding an entry to RELEASE_CHANGES would be\nhelpful. Again, I'm not sure that this coding rule is much more\nlikely to be violated than any other. 
On the other hand, the fact\nthat it's not critical until we approach release does suggest that\nmaybe it'd be useful to treat it as a checklist item.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Jun 2021 12:34:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" }, { "msg_contents": "\nOn 6/21/21 12:34 PM, Tom Lane wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 21 Jun 2021, at 17:27, Robert Haas <robertmhaas@gmail.com> wrote:\n>>> What will prevent us from forgetting to do something about this again,\n>>> a year from now?\n>> An entry in a release checklist could perhaps be an idea?\n> Yeah, I was wondering if adding an entry to RELEASE_CHANGES would be\n> helpful. Again, I'm not sure that this coding rule is much more\n> likely to be violated than any other. On the other hand, the fact\n> that it's not critical until we approach release does suggest that\n> maybe it'd be useful to treat it as a checklist item.\n>\n> \t\t\n\n\nMaybe for release note preparation, since that's focused on new\nfeatures, but this doesn't sound like a release prep function to me.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 21 Jun 2021 12:43:25 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Add version macro to libpq-fe.h" } ]
[ { "msg_contents": "Here's a patch I propose to apply to fix this bug (See\n<https://www.postgresql.org/message-id/flat/759e997e-e1ca-91cd-84db-f4ae963fada1%40dunslane.net#b1cf11c3eb1f450bed97c79ad473909f>)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 17 Jun 2021 11:01:58 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Patch for bug #17056 fast default on non-plain table" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Here's a patch I propose to apply to fix this bug (See\n> <https://www.postgresql.org/message-id/flat/759e997e-e1ca-91cd-84db-f4ae963fada1%40dunslane.net#b1cf11c3eb1f450bed97c79ad473909f>)\n\nIf I'm reading the code correctly, your change in RelationBuildTupleDesc\nis scribbling directly on the disk buffer, which is surely not okay.\nI don't understand why you need that at all given the other defenses\nyou added ... but if you need it, you have to modify the tuple AFTER\ncopying it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Jun 2021 11:05:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch for bug #17056 fast default on non-plain table" }, { "msg_contents": "\nOn 6/17/21 11:05 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Here's a patch I propose to apply to fix this bug (See\n>> <https://www.postgresql.org/message-id/flat/759e997e-e1ca-91cd-84db-f4ae963fada1%40dunslane.net#b1cf11c3eb1f450bed97c79ad473909f>)\n> If I'm reading the code correctly, your change in RelationBuildTupleDesc\n> is scribbling directly on the disk buffer, which is surely not okay.\n> I don't understand why you need that at all given the other defenses\n> you added ... but if you need it, you have to modify the tuple AFTER\n> copying it.\n\n\nOK, will fix. I think we do need it (See Andres' comment in the bug\nthread). 
It should be a fairly simple fix.\n\n\nThanks for looking.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 17 Jun 2021 11:13:10 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Patch for bug #17056 fast default on non-plain table" }, { "msg_contents": "On 6/17/21 11:13 AM, Andrew Dunstan wrote:\n> On 6/17/21 11:05 AM, Tom Lane wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> Here's a patch I propose to apply to fix this bug (See\n>>> <https://www.postgresql.org/message-id/flat/759e997e-e1ca-91cd-84db-f4ae963fada1%40dunslane.net#b1cf11c3eb1f450bed97c79ad473909f>)\n>> If I'm reading the code correctly, your change in RelationBuildTupleDesc\n>> is scribbling directly on the disk buffer, which is surely not okay.\n>> I don't understand why you need that at all given the other defenses\n>> you added ... but if you need it, you have to modify the tuple AFTER\n>> copying it.\n>\n> OK, will fix. I think we do need it (See Andres' comment in the bug\n> thread). It should be a fairly simple fix.\n>\n>\n\n\nrevised patch attached.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 17 Jun 2021 11:52:30 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Patch for bug #17056 fast default on non-plain table" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> revised patch attached.\n\nOK. One other point is that in HEAD, you only need the hunk that\nprevents atthasmissing from becoming incorrectly set. The hacks\nto cope with it being already wrong are only needed in the back\nbranches. 
Since we already forced initdb for beta2, there will\nnot be any v14 installations in which pg_attribute contains\na wrong value.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Jun 2021 13:45:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch for bug #17056 fast default on non-plain table" }, { "msg_contents": "\nOn 6/17/21 1:45 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> revised patch attached.\n> OK. One other point is that in HEAD, you only need the hunk that\n> prevents atthasmissing from becoming incorrectly set. The hacks\n> to cope with it being already wrong are only needed in the back\n> branches. Since we already forced initdb for beta2, there will\n> not be any v14 installations in which pg_attribute contains\n> a wrong value.\n>\n> \t\t\t\n\n\n\nGood point. Should I replace the relcache.c changes in HEAD with an\nAssert? Or just skip them altogether?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 17 Jun 2021 17:24:55 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Patch for bug #17056 fast default on non-plain table" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 6/17/21 1:45 PM, Tom Lane wrote:\n>> OK. One other point is that in HEAD, you only need the hunk that\n>> prevents atthasmissing from becoming incorrectly set.\n\n> Good point. Should I replace the relcache.c changes in HEAD with an\n> Assert? Or just skip them altogether?\n\nI wouldn't bother touching relcache.c.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Jun 2021 17:33:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch for bug #17056 fast default on non-plain table" } ]
[ { "msg_contents": "Hackers,\n\nLogical replication apply workers for a subscription can easily get stuck in an infinite loop of attempting to apply a change, triggering an error (such as a constraint violation), exiting with an error written to the subscription worker log, and restarting.\n\nAs things currently stand, only superusers can create subscriptions. Ongoing work to delegate superuser tasks to non-superusers creates the potential for even more errors to be triggered, specifically, errors where the apply worker does not have permission to make changes to the target table.\n\nThe attached patch makes it possible to create a subscription using a new subscription_parameter, \"disable_on_error\", such that rather than going into an infinite loop, the apply worker will catch errors and automatically disable the subscription, breaking the loop. The new parameter defaults to false. When false, the PG_TRY/PG_CATCH overhead is avoided, so for subscriptions which do not use the feature, there shouldn't be any change. Users can manually clear the error after fixing the underlying issue with an ALTER SUBSCRIPTION .. ENABLE command. \n \nIn addition to helping on production systems, this makes writing TAP tests involving error conditions simpler. I originally ran into the motivation to write this patch when frustrated that TAP tests needed to parse the apply worker log file to determine whether permission failures were occurring and what they were. It was also obnoxiously easy to have a test get stuck waiting for a permanently stuck subscription to catch up. 
This helps with both issues.\n\nI don't think this is quite ready for commit, but I'd like feedback if folks like this idea or want to suggest design changes.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 17 Jun 2021 13:18:38 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Optionally automatically disable logical replication subscriptions on\n error" }, { "msg_contents": "On Fri, Jun 18, 2021 at 1:48 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> Hackers,\n>\n> Logical replication apply workers for a subscription can easily get stuck in an infinite loop of attempting to apply a change, triggering an error (such as a constraint violation), exiting with an error written to the subscription worker log, and restarting.\n>\n> As things currently stand, only superusers can create subscriptions. Ongoing work to delegate superuser tasks to non-superusers creates the potential for even more errors to be triggered, specifically, errors where the apply worker does not have permission to make changes to the target table.\n>\n> The attached patch makes it possible to create a subscription using a new subscription_parameter, \"disable_on_error\", such that rather than going into an infinite loop, the apply worker will catch errors and automatically disable the subscription, breaking the loop. The new parameter defaults to false. When false, the PG_TRY/PG_CATCH overhead is avoided, so for subscriptions which do not use the feature, there shouldn't be any change. Users can manually clear the error after fixing the underlying issue with an ALTER SUBSCRIPTION .. ENABLE command.\n>\n\nI see this idea has merits and it will help users to repair failing\nsubscriptions. 
Few points on a quick look at the patch: (a) The patch\nseem to be assuming that the error can happen only by the apply worker\nbut I think the constraint violation can happen via one of the table\nsync workers as well, (b) What happens if the error happens when you\nare updating the error information in the catalog table. I think\ninstead of seeing the actual apply time error, the user might see some\nother for which it won't be clear what is an appropriate action.\n\nWe are also discussing another action like skipping the apply of the\ntransaction on an error [1]. I think it is better to evaluate both the\nproposals as one seems to be an extension of another. Adding\nSawada-San, as he is working on the other proposal.\n\n[1] - https://www.postgresql.org/message-id/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK%3D30xJfUVihNZDA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 18 Jun 2021 10:17:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Fri, Jun 18, 2021 at 6:18 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> Hackers,\n>\n> Logical replication apply workers for a subscription can easily get stuck in an infinite loop of attempting to apply a change, triggering an error (such as a constraint violation), exiting with an error written to the subscription worker log, and restarting.\n>\n> As things currently stand, only superusers can create subscriptions. 
Ongoing work to delegate superuser tasks to non-superusers creates the potential for even more errors to be triggered, specifically, errors where the apply worker does not have permission to make changes to the target table.\n>\n> The attached patch makes it possible to create a subscription using a new subscription_parameter, \"disable_on_error\", such that rather than going into an infinite loop, the apply worker will catch errors and automatically disable the subscription, breaking the loop. The new parameter defaults to false. When false, the PG_TRY/PG_CATCH overhead is avoided, so for subscriptions which do not use the feature, there shouldn't be any change. Users can manually clear the error after fixing the underlying issue with an ALTER SUBSCRIPTION .. ENABLE command.\n>\n> In addition to helping on production systems, this makes writing TAP tests involving error conditions simpler. I originally ran into the motivation to write this patch when frustrated that TAP tests needed to parse the apply worker log file to determine whether permission failures were occurring and what they were. It was also obnoxiously easy to have a test get stuck waiting for a permanently stuck subscription to catch up. This helps with both issues.\n>\n> I don't think this is quite ready for commit, but I'd like feedback if folks like this idea or want to suggest design changes.\n\nI tried your patch.\n\nIt applied OK (albeit with whitespace warnings).\n\nThe code build and TAP tests are all OK.\n\nBelow are a few comments and observations.\n\nCOMMENTS\n========\n\n(1) PG Docs catalogs.sgml\n\nDocumented new column \"suberrmsg\" but did not document the other new\ncolumns (\"disable_on_error\", \"disabled_by_error\")?\n\n------\n\n(2) New column \"disabled_by_error\".\n\nI wondered if there was actually any need for this column. Isn't the\nsame information conveyed by just having \"subenabled\" = false, at same\ntime as as non-empty \"suberrmsg\"? 
This would remove any confusion for\nhaving 2 booleans which both indicate disabled.\n\n------\n\n(3) New columns \"disabled_by_error\", \"disabled_on_error\".\n\nAll other columns of the pg_subscription have a \"sub\" prefix.\n\n------\n\n(4) errhint member used?\n\n@@ -91,12 +100,16 @@ typedef struct Subscription\n char *name; /* Name of the subscription */\n Oid owner; /* Oid of the subscription owner */\n bool enabled; /* Indicates if the subscription is enabled */\n+ bool disable_on_error; /* Whether errors automatically disable */\n+ bool disabled_by_error; /* Whether an error has disabled */\n bool binary; /* Indicates if the subscription wants data in\n * binary format */\n bool stream; /* Allow streaming in-progress transactions. */\n char *conninfo; /* Connection string to the publisher */\n char *slotname; /* Name of the replication slot */\n char *synccommit; /* Synchronous commit setting for worker */\n+ char *errmsg; /* Message from error which disabled */\n+ char *errhint; /* Hint from error which disabled */\n List *publications; /* List of publication names to subscribe to */\n } Subscription;\n\nI did not find any code using that newly added member \"errhint\".\n\n------\n\n(5) dump.c\n\ni. No mention of new columns \"disabled_on_error\" and\n\"disabled_by_error\". Is that right?\n\nii. Shouldn't the code for the \"suberrmsg\" be qualified with some PG\nversion number checks?\n\n------\n\n(6) Patch only handles errors only from the Apply worker.\n\nTablesync can give similar errors (e.g. PK violation during DATASYNC\nphase) which will trigger re-launch forever regardless of the setting\nof \"disabled_on_error\".\n(confirmed by observations below)\n\n------\n\n(7) TAP test code\n\n+$node_subscriber->init(allows_streaming => 'logical');\n\nAFAIK that \"logical\" configuration is not necessary for the subscriber side. 
So,\n\n$node_subscriber->init();\n\n////////////\n\n\nSome Experiments/Observations\n==============================\n\nIn general, I found this functionality is useful and it works as\nadvertised by your patch comment.\n\n======\n\nTest: Display pg_subscription with the new columns\nObservation: As expected. But some new colnames are not prefixed like\ntheir peers.\n\ntest_sub=# \\pset x\nExpanded display is on.\ntest_sub=# select * from pg_subscription;\n-[ RECORD 1 ]-----+--------------------------------------------------------\noid | 16394\nsubdbid | 16384\nsubname | tap_sub\nsubowner | 10\nsubenabled | t\ndisable_on_error | t\ndisabled_by_error | f\nsubbinary | f\nsubstream | f\nsubconninfo | host=localhost dbname=test_pub application_name=tap_sub\nsubslotname | tap_sub\nsubsynccommit | off\nsuberrmsg |\nsubpublications | {tap_pub}\n\n======\n\nTest: Cause a PK violation during normal Apply replication (when\n\"disabled_on_error=true\")\nObservation: Apply worker stops. Subscription is disabled. 
Error\nmessage is in the catalog.\n\n2021-06-18 15:12:45.905 AEST [25904] LOG: edata is true for\nsubscription 'tap_sub': message = \"duplicate key value violates unique\nconstraint \"test_tab_pkey\"\", hint = \"<NONE>\"\n2021-06-18 15:12:45.905 AEST [25904] LOG: logical replication apply\nworker for subscription \"tap_sub\" will stop because the subscription\nwas disabled due to error\n2021-06-18 15:12:45.905 AEST [25904] ERROR: duplicate key value\nviolates unique constraint \"test_tab_pkey\"\n2021-06-18 15:12:45.905 AEST [25904] DETAIL: Key (a)=(1) already exists.\n2021-06-18 15:12:45.908 AEST [19924] LOG: background worker \"logical\nreplication worker\" (PID 25904) exited with exit code 1\n\ntest_sub=# select * from pg_subscription;\n-[ RECORD 1 ]-----+---------------------------------------------------------------\noid | 16394\nsubdbid | 16384\nsubname | tap_sub\nsubowner | 10\nsubenabled | f\ndisable_on_error | t\ndisabled_by_error | t\nsubbinary | f\nsubstream | f\nsubconninfo | host=localhost dbname=test_pub application_name=tap_sub\nsubslotname | tap_sub\nsubsynccommit | off\nsuberrmsg | duplicate key value violates unique constraint\n\"test_tab_pkey\"\nsubpublications | {tap_pub}\n\n======\n\nTest: Try to enable subscription (without fixing the PK violation problem).\nObservation. OK. 
It just stops again\n\ntest_sub=# alter subscription tap_sub enable;\nALTER SUBSCRIPTION\ntest_sub=# 2021-06-18 15:17:18.067 AEST [10228] LOG: logical\nreplication apply worker for subscription \"tap_sub\" has started\n2021-06-18 15:17:18.078 AEST [10228] LOG: edata is true for\nsubscription 'tap_sub': message = \"duplicate key value violates unique\nconstraint \"test_tab_pkey\"\", hint = \"<NONE>\"\n2021-06-18 15:17:18.078 AEST [10228] LOG: logical replication apply\nworker for subscription \"tap_sub\" will stop because the subscription\nwas disabled due to error\n2021-06-18 15:17:18.078 AEST [10228] ERROR: duplicate key value\nviolates unique constraint \"test_tab_pkey\"\n2021-06-18 15:17:18.078 AEST [10228] DETAIL: Key (a)=(1) already exists.\n2021-06-18 15:17:18.079 AEST [19924] LOG: background worker \"logical\nreplication worker\" (PID 10228) exited with exit code 1\n\n======\n\nTest: Manually disable the subscription (which had previously already\nbeen disabled due to error)\nObservation: OK. The suberrmsg gets reset to an empty string.\n\nalter subscription tap_sub disable;\n\n=====\n\nTest: Turn off the disable_on_error\nObservation: As expected, now the Apply worker goes into re-launch\nforever loop every time it hits PK violation\n\ntest_sub=# alter subscription tap_sub set (disable_on_error=false);\nALTER SUBSCRIPTION\n\n...\n\n======\n\nTest: Cause a PK violation in the Tablesync copy (DATASYNC) phase.\n(when disable_on_error = true)\nObservation: This patch changes nothing for this case. 
The Tablesync\nre-launches in a forever loop the same as current functionality.\n\ntest_sub=# CREATE SUBSCRIPTION tap_sub CONNECTION 'host=localhost\ndbname=test_pub application_name=tap_sub' PUBLICATION tap_pub WITH\n(disable_on_error=false);\nNOTICE: created replication slot \"tap_sub\" on publisher\nCREATE SUBSCRIPTION\ntest_sub=# 2021-06-18 15:38:19.547 AEST [18205] LOG: logical\nreplication apply worker for subscription \"tap_sub\" has started\n2021-06-18 15:38:19.557 AEST [18207] LOG: logical replication table\nsynchronization worker for subscription \"tap_sub\", table \"test_tab\"\nhas started\n2021-06-18 15:38:19.610 AEST [18207] ERROR: duplicate key value\nviolates unique constraint \"test_tab_pkey\"\n2021-06-18 15:38:19.610 AEST [18207] DETAIL: Key (a)=(1) already exists.\n2021-06-18 15:38:19.610 AEST [18207] CONTEXT: COPY test_tab, line 1\n2021-06-18 15:38:19.611 AEST [19924] LOG: background worker \"logical\nreplication worker\" (PID 18207) exited with exit code 1\n2021-06-18 15:38:24.634 AEST [18369] LOG: logical replication table\nsynchronization worker for subscription \"tap_sub\", table \"test_tab\"\nhas started\n2021-06-18 15:38:24.689 AEST [18369] ERROR: duplicate key value\nviolates unique constraint \"test_tab_pkey\"\n2021-06-18 15:38:24.689 AEST [18369] DETAIL: Key (a)=(1) already exists.\n2021-06-18 15:38:24.689 AEST [18369] CONTEXT: COPY test_tab, line 1\n2021-06-18 15:38:24.690 AEST [19924] LOG: background worker \"logical\nreplication worker\" (PID 18369) exited with exit code 1\n2021-06-18 15:38:29.701 AEST [18521] LOG: logical replication table\nsynchronization worker for subscription \"tap_sub\", table \"test_tab\"\nhas started\n2021-06-18 15:38:29.765 AEST [18521] ERROR: duplicate key value\nviolates unique constraint \"test_tab_pkey\"\n2021-06-18 15:38:29.765 AEST [18521] DETAIL: Key (a)=(1) already exists.\n2021-06-18 15:38:29.765 AEST [18521] CONTEXT: COPY test_tab, line 1\n2021-06-18 15:38:29.766 AEST [19924] LOG: background worker 
\"logical\nreplication worker\" (PID 18521) exited with exit code 1\netc...\n\n\n-[ RECORD 1 ]-----+--------------------------------------------------------\noid | 16399\nsubdbid | 16384\nsubname | tap_sub\nsubowner | 10\nsubenabled | t\ndisable_on_error | f\ndisabled_by_error | f\nsubbinary | f\nsubstream | f\nsubconninfo | host=localhost dbname=test_pub application_name=tap_sub\nsubslotname | tap_sub\nsubsynccommit | off\nsuberrmsg |\nsubpublications | {tap_pub}\n\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 18 Jun 2021 16:34:47 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "\n\n> On Jun 17, 2021, at 9:47 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> (a) The patch\n> seem to be assuming that the error can happen only by the apply worker\n> but I think the constraint violation can happen via one of the table\n> sync workers as well\n\nYou are right. Peter mentioned the same thing, and it is clearly so. I am working to repair this fault in v2 of the patch.\n\n> (b) What happens if the error happens when you\n> are updating the error information in the catalog table.\n\nI think that is an entirely different kind of error. The patch attempts to catch errors caused by the user, not by core functionality of the system failing. If there is a fault that prevents the catalogs from being updated, it is unclear what the patch can do about that.\n\n> I think\n> instead of seeing the actual apply time error, the user might see some\n> other for which it won't be clear what is an appropriate action.\n\nGood point.\n\nBefore trying to do much of anything with the caught error, the v2 patch logs the error. If the subsequent efforts to disable the subscription fail, at least the logs should contain the initial failure message. 
The v1 patch emitted a log message much further down, and really just intended for debugging the patch itself, with many opportunities for something else to throw before the log is written.\n\n> We are also discussing another action like skipping the apply of the\n> transaction on an error [1]. I think it is better to evaluate both the\n> proposals as one seems to be an extension of another.\n\nThanks for the link.\n\nI think they are two separate options. For some users and data patterns, subscriber-side skipping of specific problematic commits will be fine. For other usage patterns, skipping earlier commits will results in more and more data integrity problems (foreign key references, etc.) such that the failures will snowball with skipping becoming the norm rather than the exception. Users with those usage patterns would likely prefer the subscription to automatically be disabled until manual intervention can clean up the problem.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 18 Jun 2021 12:36:28 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "> On Jun 17, 2021, at 11:34 PM, Peter Smith <smithpb2250@gmail.com> wrote:\n> \n> I tried your patch.\n\nThanks for the quick and thorough review!\n\n> (2) New column \"disabled_by_error\".\n> \n> I wondered if there was actually any need for this column. Isn't the\n> same information conveyed by just having \"subenabled\" = false, at same\n> time as as non-empty \"suberrmsg\"? This would remove any confusion for\n> having 2 booleans which both indicate disabled.\n\nYeah, I wondered about that before posting v1. 
I removed the disabled_by_error field for v2.\n\n> (3) New columns \"disabled_by_error\", \"disabled_on_error\".\n> \n> All other columns of the pg_subscription have a \"sub\" prefix.\n\nI don't feel strongly about this. How about \"subdisableonerr\"? I used that in v2.\n\n> I did not find any code using that newly added member \"errhint\".\n\nThanks for catching that. I had tried to remove all references to \"errhint\" before posting v1. The original idea was that both the message and hint of the error would be kept, but in testing I found the hint field was typically empty, so I removed it. Sorry that I left one mention of it lying around.\n\n> (5) dump.c\n\nI didn't bother getting pg_dump working before posting v1, and I still have not done so, as I mainly want to solicit feedback on whether the basic direction I am going will work for the community.\n\n> (6) Patch only handles errors only from the Apply worker.\n> \n> Tablesync can give similar errors (e.g. PK violation during DATASYNC\n> phase) which will trigger re-launch forever regardless of the setting\n> of \"disabled_on_error\".\n> (confirmed by observations below)\n\nYes, this is a good point, and also mentioned by Amit. I have fixed it in v2 and adjusted the regression test to trigger an automatic disabling for initial table sync as well as for change replication.\n\n> 2021-06-18 15:12:45.905 AEST [25904] LOG: edata is true for\n> subscription 'tap_sub': message = \"duplicate key value violates unique\n> constraint \"test_tab_pkey\"\", hint = \"<NONE>\"\n\nYou didn't call this out, but FYI, I don't intend to leave this particular log message in the patch. It was for development only. 
I have removed it for v2 and have added a different log message much sooner after catching the error, to avoid squashing the error in case some other action fails.\n\nThe regression test shows this, if you open tmp_check/log/022_disable_on_error_subscriber.log:\n\n2021-06-18 16:25:20.138 PDT [56926] LOG: logical replication subscription \"s1\" will be disabled due to error: duplicate key value violates unique constraint \"s1_tbl_unique\"\n2021-06-18 16:25:20.139 PDT [56926] ERROR: duplicate key value violates unique constraint \"s1_tbl_unique\"\n2021-06-18 16:25:20.139 PDT [56926] DETAIL: Key (i)=(1) already exists.\n2021-06-18 16:25:20.139 PDT [56926] CONTEXT: COPY tbl, line 2\n\nThe first line logs the error prior to attempting to disable the subscription, and the next three lines are due to rethrowing the error after committing the successful disabling of the subscription. If the attempt to disable the subscription itself throws, these additional three lines won't show up, but the first one should. Amit mentioned this upthread. Do you think this will be ok, or would you like to also have a suberrdetail field so that the detail doesn't get lost? I haven't added such an extra field, and am inclined to think it would be excessive, but maybe others feel differently?\n\n\n> ======\n> \n> Test: Cause a PK violation in the Tablesync copy (DATASYNC) phase.\n> (when disable_on_error = true)\n> Observation: This patch changes nothing for this case. The Tablesync\n> re-launches in a forever loop the same as current functionality.\n\nIn v2, tablesync copy errors should also be caught.
The test has been extended to cover this also.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 18 Jun 2021 17:03:45 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Sat, Jun 19, 2021 at 1:06 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > On Jun 17, 2021, at 9:47 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > We are also discussing another action like skipping the apply of the\n> > transaction on an error [1]. I think it is better to evaluate both the\n> > proposals as one seems to be an extension of another.\n>\n> Thanks for the link.\n>\n> I think they are two separate options.\n>\n\nRight, but there are things that could be common from the design\nperspective. For example, why is it mandatory to update this conflict\n(error) information in the system catalog instead of displaying it\nvia some stats view? Also, why not also log the xid of the failed\ntransaction?\n\n-- \nWith Regards,\nAmit Kapila.
This can be committed with or without any solution to the idea in [1]. The original motivation for this patch was that the TAP tests don't have a great way to deal with a subscription getting into a fail-retry infinite loop, which makes it harder for me to make progress on [2]. That doesn't absolve me of the responsibility of making this patch a good one, but it does motivate me to keep it simple.\n\n> For example, why is it mandatory to update this conflict\n> ( error) information in the system catalog instead of displaying it\n> via some stats view?\n\nThe catalog must be updated to disable the subscription, so placing the error information in the same row doesn't require any independent touching of the catalogs. Likewise, the catalog must be updated to re-enable the subscription, so clearing the error from that same row doesn't require any independent touching of the catalogs.\n\nThe error information does not *need* to be included in the catalog, but placing the information in any location that won't survive server restart leaves the user no information about why the subscription got disabled after a restart (or crash + restart) happens.\n\nFurthermore, since v2 removed the \"disabled_by_error\" field in favor of just using subenabled + suberrmsg to determine if the subscription was automatically disabled, not having the information in the catalog would make it ambiguous whether the subscription was manually or automatically disabled.\n\n> Also, why not also log the xid of the failed\n> transaction?\n\nWe could also do that. Reading [1], it seems you are overly focused on user-facing xids. The errdetail in the examples I've been using for testing, and the one mentioned in [1], contain information about the conflicting data. I think users are more likely to understand that a particular primary key value cannot be replicated because it is not unique than to understand that a particular xid cannot be replicated. (Likewise for permissions errors.) 
For example:\n\n2021-06-18 16:25:20.139 PDT [56926] ERROR: duplicate key value violates unique constraint \"s1_tbl_unique\"\n2021-06-18 16:25:20.139 PDT [56926] DETAIL: Key (i)=(1) already exists.\n2021-06-18 16:25:20.139 PDT [56926] CONTEXT: COPY tbl, line 2 \n\nThis tells the user what they need to clean up before they can continue. Telling them which xid tried to apply the change, but not the change itself or the conflict itself, seems rather unhelpful. So at best, the xid is an additional piece of information. I'd rather have both the ERROR and DETAIL fields above and not the xid than have the xid and lack one of those two fields. Even so, I have not yet included the DETAIL field because I didn't want to bloat the catalog.\n\nFor the problem in [1], having the xid is more important than it is in my patch, because the user is expected in [1] to use the xid as a handle. But that seems like an odd interface to me. Imagine that a transaction on the publisher side inserted a batch of data, and only a subset of that data conflicts on the subscriber side. What advantage is there in skipping the entire transaction? Wouldn't the user rather skip just the problematic rows? I understand that on the subscriber side it is difficult to do so, but if you are going to implement this sort of thing, it makes more sense to allow the user to filter out data that is problematic rather than filtering out xids that are problematic, and the filter shouldn't just be an in-or-out filter, but rather a mapping function that can redirect the data someplace else or rewrite it before inserting or change the pre-existing conflicting data prior to applying the problematic data or whatever. 
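For the duplicate-key example above, the recovery sequence on the subscriber might look something like the following sketch (table, key, and subscription names are taken from the earlier test; it assumes, per this patch, that re-enabling the subscription clears the stored error):

```sql
-- Hypothetical cleanup for the duplicate-key conflict shown above:
DELETE FROM tbl WHERE i = 1;    -- remove the conflicting local row
ALTER SUBSCRIPTION s1 ENABLE;   -- resume apply; clears suberrmsg
```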
That's a huge effort, of course, but if the idea in [1] goes in that direction, I don't want my patch to have already added an xid field which ultimately nobody wants.\n\n[1] - https://www.postgresql.org/message-id/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK%3D30xJfUVihNZDA%40mail.gmail.com\n\n[2] - https://www.postgresql.org/message-id/flat/915B995D-1D79-4E0A-BD8D-3B267925FCE9%40enterprisedb.com#dbbce39c9e460183b67ee44b647b1209\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sat, 19 Jun 2021 07:44:11 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "\n\n> On Jun 19, 2021, at 7:44 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Wouldn't the user rather skip just the problematic rows? I understand that on the subscriber side it is difficult to do so, but if you are going to implement this sort of thing, it makes more sense to allow the user to filter out data that is problematic rather than filtering out xids that are problematic, and the filter shouldn't just be an in-or-out filter, but rather a mapping function that can redirect the data someplace else or rewrite it before inserting or change the pre-existing conflicting data prior to applying the problematic data or whatever.\n\nThinking about this some more, it seems my patch already sets the stage for this sort of thing.\n\nWe could extend the concept of triggers to something like ErrorTriggers that could be associated with subscriptions. I already have the code catching errors for subscriptions where disable_on_error is true. We could use that same code path for subscriptions that have one or more BEFORE or AFTER ErrorTriggers defined. 
We could pass the trigger all the error context information along with the row and subscription information, and allow the trigger to either modify the data being replicated or make modifications to the table being changed. I think having support for both BEFORE and AFTER would be important, as a common design pattern might be to move aside the conflicting rows in the BEFORE trigger, then reconcile and merge them back into the table in the AFTER trigger. If the xid still cannot be replicated after one attempt using the triggers, the second attempt could disable the subscription instead.\n\nThere are a lot of details to consider, but to my mind this idea is much more user friendly than the idea that users should muck about with xids for arbitrarily many conflicting transactions.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sat, 19 Jun 2021 09:21:56 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Sat, Jun 19, 2021 at 11:44 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Jun 19, 2021, at 3:17 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Right, but there are things that could be common from the design\n> > perspective.\n>\n> I went to reconcile my patch with that from [1] only to discover there is no patch on that thread. Is there one in progress that I can see?\n\nI will submit the patch.\n\n>\n> I don't mind trying to reconcile this patch with what you're discussing in [1], but I am a bit skeptical about [1] becoming a reality and I don't want to entirely hitch this patch to that effort. This can be committed with or without any solution to the idea in [1].
The original motivation for this patch was that the TAP tests don't have a great way to deal with a subscription getting into a fail-retry infinite loop, which makes it harder for me to make progress on [2]. That doesn't absolve me of the responsibility of making this patch a good one, but it does motivate me to keep it simple.\n\nThere was a discussion that the skipping transaction patch would also\nneed to have a feature that tells users the details of the last\nfailure transaction such as its XID, timestamp, action etc. In that\nsense, those two patches might need the common infrastructure that the\napply workers leave the error details somewhere so that the users can\nsee it.\n\n> > For example, why is it mandatory to update this conflict\n> > ( error) information in the system catalog instead of displaying it\n> > via some stats view?\n>\n> The catalog must be updated to disable the subscription, so placing the error information in the same row doesn't require any independent touching of the catalogs. 
Likewise, the catalog must be updated to re-enable the subscription, so clearing the error from that same row doesn't require any independent touching of the catalogs.\n>\n> The error information does not *need* to be included in the catalog, but placing the information in any location that won't survive server restart leaves the user no information about why the subscription got disabled after a restart (or crash + restart) happens.\n>\n> Furthermore, since v2 removed the \"disabled_by_error\" field in favor of just using subenabled + suberrmsg to determine if the subscription was automatically disabled, not having the information in the catalog would make it ambiguous whether the subscription was manually or automatically disabled.\n\nIs it really useful to write only error message to the system catalog?\nEven if we see the error message like \"duplicate key value violates\nunique constraint “test_tab_pkey”” on the system catalog, we will end\nup needing to check the server log for details to properly resolve the\nconflict. If the user wants to know whether the subscription is\ndisabled manually or automatically, the error message on the system\ncatalog might not necessarily be necessary.\n\n> For the problem in [1], having the xid is more important than it is in my patch, because the user is expected in [1] to use the xid as a handle. But that seems like an odd interface to me. Imagine that a transaction on the publisher side inserted a batch of data, and only a subset of that data conflicts on the subscriber side. What advantage is there in skipping the entire transaction? Wouldn't the user rather skip just the problematic rows? 
I understand that on the subscriber side it is difficult to do so, but if you are going to implement this sort of thing, it makes more sense to allow the user to filter out data that is problematic rather than filtering out xids that are problematic, and the filter shouldn't just be an in-or-out filter, but rather a mapping function that can redirect the data someplace else or rewrite it before inserting or change the pre-existing conflicting data prior to applying the problematic data or whatever. That's a huge effort, of course, but if the idea in [1] goes in that direction, I don't want my patch to have already added an xid field which ultimately nobody wants.\n>\n\nThe feature discussed in that thread is meant to be a repair tool for\nthe subscription in emergency cases when something that should not\nhave happened happened. I guess that resolving row (or column) level\nconflict should be done in another way, for example, by defining\npolicies for each type of conflict.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 21 Jun 2021 11:17:05 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "\n\n> On Jun 20, 2021, at 7:17 PM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> \n> I will submit the patch.\n\nGreat, thanks!\n\n> There was a discussion that the skipping transaction patch would also\n> need to have a feature that tells users the details of the last\n> failure transaction such as its XID, timestamp, action etc. In that\n> sense, those two patches might need the common infrastructure that the\n> apply workers leave the error details somewhere so that the users can\n> see it.\n\nRight. 
Subscription on error triggers would need that, too, if we wrote them.\n\n> Is it really useful to write only error message to the system catalog?\n> Even if we see the error message like \"duplicate key value violates\n> unique constraint “test_tab_pkey”” on the system catalog, we will end\n> up needing to check the server log for details to properly resolve the\n> conflict. If the user wants to know whether the subscription is\n> disabled manually or automatically, the error message on the system\n> catalog might not necessarily be necessary.\n\nWe can put more information in there. I don't feel strongly about it. I'll wait for your patch to see what infrastructure you need.\n\n> The feature discussed in that thread is meant to be a repair tool for\n> the subscription in emergency cases when something that should not\n> have happened happened. I guess that resolving row (or column) level\n> conflict should be done in another way, for example, by defining\n> policies for each type of conflict.\n\nI understand that is the idea, but I'm having trouble believing it will work that way in practice. If somebody has a subscription that has gone awry, what reason do we have to believe there will only be one transaction that will need to be manually purged? It seems just as likely that there would be a million transactions that need to be purged, and creating an interface for users to manually review them and keep or discard on a case by case basis seems unworkable. 
Sure, you might have specific cases where the number of transactions to purge is small, but I don't like designing the feature around that assumption.\n\nAll the same, I'm looking forward to seeing your patch!\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sun, 20 Jun 2021 19:26:23 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Sat, Jun 19, 2021 at 8:14 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> > On Jun 19, 2021, at 3:17 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Also, why not also log the xid of the failed\n> > transaction?\n>\n> We could also do that. Reading [1], it seems you are overly focused on user-facing xids. The errdetail in the examples I've been using for testing, and the one mentioned in [1], contain information about the conflicting data. I think users are more likely to understand that a particular primary key value cannot be replicated because it is not unique than to understand that a particular xid cannot be replicated. (Likewise for permissions errors.) For example:\n>\n> 2021-06-18 16:25:20.139 PDT [56926] ERROR: duplicate key value violates unique constraint \"s1_tbl_unique\"\n> 2021-06-18 16:25:20.139 PDT [56926] DETAIL: Key (i)=(1) already exists.\n> 2021-06-18 16:25:20.139 PDT [56926] CONTEXT: COPY tbl, line 2\n>\n> This tells the user what they need to clean up before they can continue. Telling them which xid tried to apply the change, but not the change itself or the conflict itself, seems rather unhelpful. So at best, the xid is an additional piece of information. I'd rather have both the ERROR and DETAIL fields above and not the xid than have the xid and lack one of those two fields. 
Even so, I have not yet included the DETAIL field because I didn't want to bloat the catalog.\n>\n\nI never said that we don't need the error information. I think we need\nxid along with other things.\n\n> For the problem in [1], having the xid is more important than it is in my patch, because the user is expected in [1] to use the xid as a handle. But that seems like an odd interface to me. Imagine that a transaction on the publisher side inserted a batch of data, and only a subset of that data conflicts on the subscriber side. What advantage is there in skipping the entire transaction? Wouldn't the user rather skip just the problematic rows?\n>\n\nI think skipping some changes but not others can make the final\ntransaction data inconsistent. Say, we have a case where, in a\ntransaction after insert, there is an update or delete on the same\nrow, then we might silently skip such updates/deletes unless the same\nrow is already present in the subscriber. I think skipping the entire\ntransaction based on user instruction would be safer than skipping\nsome changes that lead to an error.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 21 Jun 2021 08:23:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Mon, Jun 21, 2021 at 7:56 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> > On Jun 20, 2021, at 7:17 PM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I will submit the patch.\n>\n> Great, thanks!\n>\n> > There was a discussion that the skipping transaction patch would also\n> > need to have a feature that tells users the details of the last\n> > failure transaction such as its XID, timestamp, action etc. 
In that\n> > sense, those two patches might need the common infrastructure that the\n> > apply workers leave the error details somewhere so that the users can\n> > see it.\n>\n> Right. Subscription on error triggers would need that, too, if we wrote them.\n>\n> > Is it really useful to write only error message to the system catalog?\n> > Even if we see the error message like \"duplicate key value violates\n> > unique constraint “test_tab_pkey”” on the system catalog, we will end\n> > up needing to check the server log for details to properly resolve the\n> > conflict. If the user wants to know whether the subscription is\n> > disabled manually or automatically, the error message on the system\n> > catalog might not necessarily be necessary.\n> >\n\nI think the two key points are (a) to define exactly what all\ninformation is required to be logged on error, (b) where do we want to\nstore the information based on requirements. I see that for (b) Mark\nis inclined to use the existing catalog table. I feel that is worth\nconsidering but not sure if that is the best way to deal with it. For\nexample, if we store that information in the catalog, we might need to\nconsider storing it both in pg_subscription and pg_subscription_rel,\notherwise, we might overwrite the errors as I think what is happening\nin the currently proposed patch. The other possibilities could be to\ndefine a new catalog table to capture the error information or log the\nrequired information via stats collector and then the user can see\nthat info via some stats view.\n\n>\n> We can put more information in there. I don't feel strongly about it. I'll wait for your patch to see what infrastructure you need.\n>\n> > The feature discussed in that thread is meant to be a repair tool for\n> > the subscription in emergency cases when something that should not\n> > have happened happened. 
I guess that resolving row (or column) level\n> > conflict should be done in another way, for example, by defining\n> > policies for each type of conflict.\n>\n> I understand that is the idea, but I'm having trouble believing it will work that way in practice. If somebody has a subscription that has gone awry, what reason do we have to believe there will only be one transaction that will need to be manually purged?\n>\n\nBecause currently, we don't proceed after an error unless it is\nresolved. Why do you think there could be multiple such transactions?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 21 Jun 2021 08:39:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Mon, Jun 21, 2021 at 12:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jun 21, 2021 at 7:56 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> >\n> > > On Jun 20, 2021, at 7:17 PM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > I will submit the patch.\n> >\n> > Great, thanks!\n> >\n> > > There was a discussion that the skipping transaction patch would also\n> > > need to have a feature that tells users the details of the last\n> > > failure transaction such as its XID, timestamp, action etc. In that\n> > > sense, those two patches might need the common infrastructure that the\n> > > apply workers leave the error details somewhere so that the users can\n> > > see it.\n> >\n> > Right. Subscription on error triggers would need that, too, if we wrote them.\n> >\n> > > Is it really useful to write only error message to the system catalog?\n> > > Even if we see the error message like \"duplicate key value violates\n> > > unique constraint “test_tab_pkey”” on the system catalog, we will end\n> > > up needing to check the server log for details to properly resolve the\n> > > conflict. 
If the user wants to know whether the subscription is\n> > > disabled manually or automatically, the error message on the system\n> > > catalog might not necessarily be necessary.\n> > >\n>\n> I think the two key points are (a) to define exactly what all\n> information is required to be logged on error,\n\nWhen it comes to the patch for skipping transactions, it would\nsomewhat depend on how users specify transactions to skip. On the\nother hand, for this patch, the minimal information would be whether\nthe subscription is disabled automatically by the server.\n\n> (b) where do we want to\n> store the information based on requirements. I see that for (b) Mark\n> is inclined to use the existing catalog table. I feel that is worth\n> considering but not sure if that is the best way to deal with it. For\n> example, if we store that information in the catalog, we might need to\n> consider storing it both in pg_subscription and pg_subscription_rel,\n> otherwise, we might overwrite the errors as I think what is happening\n> in the currently proposed patch. The other possibilities could be to\n> define a new catalog table to capture the error information or log the\n> required information via stats collector and then the user can see\n> that info via some stats view.\n\nThis point is also related to the point whether or not that\ninformation needs to last after the server crash (and restart). When\nit comes to the patch for skipping transactions, there was a\ndiscussion that we don’t necessarily need it since the tools will be\nused in rare cases. But for this proposed patch, I guess it would be\nuseful if it does. It might be worth considering doing a different way\nfor each patch. 
For example, we send the details of last failure\ntransaction to the stats collector while updating subenabled to\nsomething like “automatically-disabled” instead of to just “false” (or\nusing another column to show the subscriber is disabled automatically\nby the server).\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 21 Jun 2021 12:50:31 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "\n\n> On Jun 20, 2021, at 8:09 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> Because currently, we don't proceed after an error unless it is\n> resolved. Why do you think there could be multiple such transactions?\n\nJust as one example, if the subscriber has a unique index that the publisher lacks, any number of transactions could add non-unique data that then fails to apply on the subscriber. My patch took the view that the user should figure out how to get the subscriber side consistent with the publisher side, but if you instead take the approach that problematic commits should be skipped, it would seem that arbitrarily many such transactions could be committed on the publisher side.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sun, 20 Jun 2021 21:54:53 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Mon, Jun 21, 2021 at 10:24 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> > On Jun 20, 2021, at 8:09 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Because currently, we don't proceed after an error unless it is\n> > resolved. 
Why do you think there could be multiple such transactions?\n>\n> Just as one example, if the subscriber has a unique index that the publisher lacks, any number of transactions could add non-unique data that then fails to apply on the subscriber.\n>\n\nThen also it will fail on the first such conflict, so even without\nyour patch, the apply worker corresponding to the subscription won't\nbe able to proceed after the first error, it won't lead to multiple\nfailing xids. However, I see a different case where there could be\nmultiple failing xids and that can happen during initial table sync\nwhere multiple workers failed due to some error. I am not sure your\npatch would be able to capture all such failed transactions because\nyou are recording this information in pg_subscription and not in\npg_subscription_rel.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 21 Jun 2021 10:41:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "\n\n> On Jun 20, 2021, at 8:09 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> (a) to define exactly what all\n> information is required to be logged on error, (b) where do we want to\n> store the information based on requirements.\n\nI'm not sure it has to be stored anywhere durable. 
I have a patch in the works to do something like:\n\ncreate function foreign_key_insert_violation_before() returns conflict_trigger as $$\nBEGIN\n RAISE NOTICE 'elevel: %', TG_ELEVEL;\n RAISE NOTICE 'sqlerrcode: %', TG_SQLERRCODE;\n RAISE NOTICE 'message: %', TG_MESSAGE;\n RAISE NOTICE 'detail: %', TG_DETAIL;\n RAISE NOTICE 'detail_log: %', TG_DETAIL_LOG;\n RAISE NOTICE 'hint: %', TG_HINT;\n RAISE NOTICE 'schema: %', TG_SCHEMA_NAME;\n RAISE NOTICE 'table: %', TG_TABLE_NAME;\n RAISE NOTICE 'column: %', TG_COLUMN_NAME;\n RAISE NOTICE 'datatype: %', TG_DATATYPE_NAME;\n RAISE NOTICE 'constraint: %', TG_CONSTRAINT_NAME;\n\n -- do something useful to prepare for retry of transaction\n -- which raised a foreign key violation\nEND\n$$ language plpgsql;\n\ncreate function foreign_key_insert_violation_after() returns conflict_trigger as $$\nBEGIN\n -- do something useful to clean up after retry of transaction\n -- which raised a foreign key violation\nEND\n$$ language plpgsql;\n\ncreate conflict trigger regress_conflict_trigger_insert_before on regress_conflictsub\n before foreign_key_violation\n when tag in ('INSERT')\n execute procedure foreign_key_insert_violation_before();\n\ncreate conflict trigger regress_conflict_trigger_insert_after on regress_conflictsub\n after foreign_key_violation\n when tag in ('INSERT')\n execute procedure foreign_key_insert_violation_after();\n\nThe idea is that, for subscriptions that have conflict triggers defined, the apply will be wrapped in a PG_TRY()/PG_CATCH() block. If it fails, the ErrorData will be copied into the ConflictTriggerContext, and then the transaction will be attempted again, but this time with any BEFORE and AFTER triggers applied. The triggers could then return a special result indicating whether the transaction should be permanently skipped, applied, or whatever.
None of the data needs to be stored anywhere non-transient, as it just gets handed to the triggers.\n\nI think the other patch is a subset of this functionality, as using this system to create triggers which query a table containing transactions to be skipped would be enough to get the functionality you've been discussing. But this system could also do other things, like modify data. Admittedly, this is akin to a statement level trigger and not a row level trigger, so a number of things you might want to do would be hard to do from this. But perhaps the equivalent of row level triggers could also be written?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sun, 20 Jun 2021 22:12:44 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "\n\n> On Jun 20, 2021, at 10:11 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> Then also it will fail on the first such conflict, so even without\n> your patch, the apply worker corresponding to the subscription won't\n> be able to proceed after the first error, it won't lead to multiple\n> failing xids. \n\nI'm not sure we're talking about the same thing. I'm saying that if the user is expected to clear each error manually, there could be many such errors for them to clear. 
It may be true that the second error doesn't occur on the subscriber side until after the first is cleared, but that still leaves the user having to clear one after the next until arbitrarily many of them coming from the publisher side are cleared.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sun, 20 Jun 2021 22:16:21 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "\n\n> On Jun 20, 2021, at 10:11 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> However, I see a different case where there could be\n> multiple failing xids and that can happen during initial table sync\n> where multiple workers failed due to some error. I am not sure your\n> patch would be able to capture all such failed transactions because\n> you are recording this information in pg_subscription and not in\n> pg_subscription_rel.\n\nRight, I wasn't trying to capture everything, just enough to give the user a reasonable indication of what went wrong. My patch was designed around the idea that the user would need to figure out how to fix the subscriber side prior to re-enabling the subscription. As such, I wasn't bothered with trying to store everything, just enough to give the user a clue where to look. I don't mind if you want to store more information, and maybe that needs to be stored somewhere else. Do you believe pg_subscription_rel is a suitable location? 
\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sun, 20 Jun 2021 22:25:48 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Mon, Jun 21, 2021 at 9:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jun 21, 2021 at 12:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Jun 21, 2021 at 7:56 AM Mark Dilger\n> > <mark.dilger@enterprisedb.com> wrote:\n> > >\n> > > > On Jun 20, 2021, at 7:17 PM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > I will submit the patch.\n> > >\n> > > Great, thanks!\n> > >\n> > > > There was a discussion that the skipping transaction patch would also\n> > > > need to have a feature that tells users the details of the last\n> > > > failure transaction such as its XID, timestamp, action etc. In that\n> > > > sense, those two patches might need the common infrastructure that the\n> > > > apply workers leave the error details somewhere so that the users can\n> > > > see it.\n> > >\n> > > Right. Subscription on error triggers would need that, too, if we wrote them.\n> > >\n> > > > Is it really useful to write only error message to the system catalog?\n> > > > Even if we see the error message like \"duplicate key value violates\n> > > > unique constraint “test_tab_pkey”” on the system catalog, we will end\n> > > > up needing to check the server log for details to properly resolve the\n> > > > conflict. 
If the user wants to know whether the subscription is\n> > > > disabled manually or automatically, the error message on the system\n> > > > catalog might not necessarily be necessary.\n> > > >\n> >\n> > I think the two key points are (a) to define exactly what all\n> > information is required to be logged on error,\n>\n> When it comes to the patch for skipping transactions, it would\n> somewhat depend on how users specify transactions to skip. On the\n> other hand, for this patch, the minimal information would be whether\n> the subscription is disabled automatically by the server.\n>\n\nTrue, but still there will be some information related to ERROR which\nwe wanted the user to see unless we ask them to refer to logs for\nthat.\n\n> > (b) where do we want to\n> > store the information based on requirements. I see that for (b) Mark\n> > is inclined to use the existing catalog table. I feel that is worth\n> > considering but not sure if that is the best way to deal with it. For\n> > example, if we store that information in the catalog, we might need to\n> > consider storing it both in pg_subscription and pg_subscription_rel,\n> > otherwise, we might overwrite the errors as I think what is happening\n> > in the currently proposed patch. The other possibilities could be to\n> > define a new catalog table to capture the error information or log the\n> > required information via stats collector and then the user can see\n> > that info via some stats view.\n>\n> This point is also related to the point whether or not that\n> information needs to last after the server crash (and restart). When\n> it comes to the patch for skipping transactions, there was a\n> discussion that we don’t necessarily need it since the tools will be\n> used in rare cases. But for this proposed patch, I guess it would be\n> useful if it does. It might be worth considering doing a different way\n> for each patch. 
For example, we send the details of last failure\n> transaction to the stats collector while updating subenabled to\n> something like “automatically-disabled” instead of to just “false” (or\n> using another column to show the subscriber is disabled automatically\n> by the server).\n>\n\nI agree that it is worth considering to have subenabled to have a\ntri-state (enable, disabled, automatically-disabled) value instead of\njust a boolean. But in this case, if the stats collector missed\nupdating the information, the user may have to manually update the\nsubscription and let the error happen again to see it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 21 Jun 2021 10:56:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Mon, Jun 21, 2021 at 10:55 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> > On Jun 20, 2021, at 10:11 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > However, I see a different case where there could be\n> > multiple failing xids and that can happen during initial table sync\n> > where multiple workers failed due to some error. I am not sure your\n> > patch would be able to capture all such failed transactions because\n> > you are recording this information in pg_subscription and not in\n> > pg_subscription_rel.\n>\n> Right, I wasn't trying to capture everything, just enough to give the user a reasonable indication of what went wrong. My patch was designed around the idea that the user would need to figure out how to fix the subscriber side prior to re-enabling the subscription. 
As such, I wasn't bothered with trying to store everything, just enough to give the user a clue where to look.\n>\n\nOkay, but the clue will be pretty random because you might end up just\nlogging one out of several errors.\n\n> I don't mind if you want to store more information, and maybe that needs to be stored somewhere else. Do you believe pg_subscription_rel is a suitable location?\n>\nIt won't be sufficient to store information in either\npg_subscription_rel or pg_susbscription. I think if we want to store\nthe required information in a catalog then we need to define a new\ncatalog (pg_subscription_conflicts or something like that) with\ninformation corresponding to each rel in subscription (srsubid oid\n(Reference to subscription), srrelid oid (Reference to relation),\n<columns for error_info>). OTOH, we can choose to send the error\ninformation to stats collector which will then be available via stat\nview and update system catalog to disable the subscription but there\nwill be a risk that we might send info of failed transaction to stats\ncollector but then fail to update system catalog to disable the\nsubscription.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 21 Jun 2021 11:19:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Mon, Jun 21, 2021 at 11:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jun 21, 2021 at 10:55 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>\n> > I don't mind if you want to store more information, and maybe that needs to be stored somewhere else. Do you believe pg_subscription_rel is a suitable location?\n> >\n> It won't be sufficient to store information in either\n> pg_subscription_rel or pg_susbscription. 
I think if we want to store\n> the required information in a catalog then we need to define a new\n> catalog (pg_subscription_conflicts or something like that) with\n> information corresponding to each rel in subscription (srsubid oid\n> (Reference to subscription), srrelid oid (Reference to relation),\n> <columns for error_info>). OTOH, we can choose to send the error\n> information to stats collector which will then be available via stat\n> view and update system catalog to disable the subscription but there\n> will be a risk that we might send info of failed transaction to stats\n> collector but then fail to update system catalog to disable the\n> subscription.\n>\n\nI think we should store the input from the user (like disable_on_error\nflag or xid to skip) in the system catalog pg_subscription and send\nthe error information (subscrtion_id, rel_id, xid of failed xact,\nerror_code, error_message, etc.) to the stats collector which can be\nused to display such information via a stat view.\n\nThe disable_on_error flag handling could be that on error it sends the\nrequired error info to stats collector and then updates the subenabled\nin pg_subscription. In rare conditions, where we are able to send the\nmessage but couldn't update the subenabled info in pg_subscription\neither due to some error or server restart, the apply worker would\nagain try to apply the same change and would hit the same error again\nwhich I think should be fine because it will ultimately succeed.\n\nThe skip xid handling will also be somewhat similar where on an error,\nwe will send the error information to stats collector which will be\ndisplayed via stats view. Then the user is expected to ask for skip\nxid (Alter Subscription ... SKIP <xid_value>) based on information\ndisplayed via stat view. 
Now, the apply worker can skip changes from\nsuch a transaction, and then during processing of commit record of the\nskipped transaction, it should update xid to invalid value, so that\nnext time that shouldn't be used. I think it is important to update\nxid to an invalid value as part of the skipped transaction because\notherwise, after the restart, we won't be able to decide whether we\nstill want to skip the xid stored for a subscription.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 21 Jun 2021 16:17:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "Much of the discussion above seems to be related to where to store the\nerror information and how much information is needed to be useful.\n\nAs a summary, the 5 alternatives I have seen mentioned are:\n\n#1. Store some simple message in the pg_subscription (\"I wasn't trying\nto capture everything, just enough to give the user a reasonable\nindication of what went wrong\" [Mark-1]). Storing the error message\nwas also seen as a convenience for writing TAP tests (\"I originally\nran into the motivation to write this patch when frustrated that TAP\ntests needed to parse the apply worker log file\" [Mark-2}). It also\ncan sometimes provide a simple clue for the error (e.g. PK violation\nfor table TBL) but still the user will have to look elsewhere for\ndetails to resolve the error. So while this implementation seems good\nfor simple scenarios, it appears to have been shot down because the\nnon-trivial scenarios either have insufficient or wrong information in\nthe error message. Some DETAILS could have been added to give more\ninformation but that would maybe bloat the catalog (\"I have not yet\nincluded the DETAIL field because I didn't want to bloat the catalog.\"\n[Mark-3])\n\n#2. 
Similarly another idea was to use another existing catalog table\npg_subscription_rel. This could have the same problems (\"It won't be\nsufficient to store information in either pg_subscription_rel or\npg_susbscription.\" [Amit-1])\n\n#3. There is another suggestion to use the Stats Collector to hold the\nerror message [Amit-2]. For me, this felt like blurring too much the\ndistinction between \"stats tracking/metrics\" and \"logs\". ERROR logs\nmust be flushed, whereas for stats (IIUC) there is no guarantee that\neverything you need to see would be present. Indeed Amit wrote \"But in\nthis case, if the stats collector missed updating the information, the\nuser may have to manually update the subscription and let the error\nhappen again to see it.\" [Amit-3]. Requesting the user to cause the\nsame error again just in case it was not captured a first time seems\ntoo strange to me.\n\n#4. The next idea was to have an entirely new catalog for holding the\nsubscription error information. I feel that storing/duplicating lots\nof error information in another table seems like a bridge too far.\nWhat about the risks of storing incorrect or sufficient information?\nWhat is the advantage of duplicating errors over just referring to the\nlog files for ERROR details?\n\n#5. Document to refer to the logs. All ERROR details are already in\nthe logs, and this seems to me the intuitive place to look for them.\nSearching for specific errors becomes difficult programmatically (is\nthis really a problem other than complex TAP tests?). But here there\nis no risk of missing or insufficient information captured in the log\nfiles (\"but still there will be some information related to ERROR\nwhich we wanted the user to see unless we ask them to refer to logs\nfor that.\" [Amit-4}).\n\n---\n\nMy preferred alternative is #5. 
ERRORs are logged in the log file, so\nthere is nothing really for this patch to do in this regard (except\ndocumentation), and there is no risk of missing any information, no\nambiguity of having duplicated errors, and it is the intuitive place\nthe user would look.\n\nSo I felt the current best combination is just this:\na) A tri-state indicating the state of the subscription: e.g.\nsomething like \"enabled\" ('e') / \"disabled\" ('d') / \"auto-disabled\"\n('a') [Amit-5]\nb) For \"auto-disabled\" the PG docs would be updated to tell the user to\ncheck the logs to resolve the problem before re-enabling the\nsubscription\n\n//////////\n\nIMO it is not exactly clear to me what the main goal of this\npatch is. Because of this, I feel that you can't really judge whether this new\noption is actually useful or not, except in hindsight. It seems\nlike whatever you implement can be made to look good or bad, just by\nciting different test scenarios.\n\ne.g.\n\n* Is the goal mainly to help automated (TAP) testing? In that case,\nthen maybe you do want to store the error message somewhere other than\nthe log files. But still I wonder if results would be unpredictable\nanyway - e.g. if there are multiple tables all with errors then it\ndepends on the tablesync order of execution which error you see caused\nthe auto-disable, right? And if it is not predictable maybe it is less\nuseful.\n\n* Is the goal to prevent some *unattended* SUBSCRIPTION from going bad\nat some point in future and then going into a relaunch loop for\ndays/weeks and causing 1000's of errors without the user noticing? 
In\nthat case, this patch seems to be quite useful, but for this goal\nmaybe you don't want to be checking the tablesync workers at all, but\nshould only be checking the apply worker like your original v1 patch\ndid.\n\n* Is the goal just to be a convenient way to disable the subscription\nduring the CREATE SUBSCRIPTION phase so that the user can make\ncorrections in peace without the workers re-launching and making more\nerror logs? Here the patch is helpful, but only for simple scenarios\nlike 1 faulty table. Imagine if there are 10 tables (all with PK\nviolations at DATASYNC copy) then you will encounter them one at a\ntime and have to re-enable the subscription 10 times, after fixing\neach error in turn. So in this scenario the new option might be more\nof a hindrance than a help because it would be easier if the user just\ndid \"ALTER SUBSCRIPTION sub DISABLE\" manually and fixed all the\nproblems in one sitting before re-enabling.\n\n* etc\n\n//////////\n\nFinally, here is one last (crazy?) thought-bubble just for\nconsideration. I might be wrong, but my gut feeling is that the Stats\nCollector is intended more for \"tracking\" and for \"metrics\" rather\nthan for holding duplicates of logged error messages. At the same\ntime, I felt that disabling an entire subscription due to a single\nrogue error might be overkill sometimes. 
But I wonder if there is a\nway to combine those two ideas so that the Stats Collector gets some\nnew counter for tracking the number of worker re-launches that have\noccurred, meanwhile there could be a subscription option which gives a\nthreshold above which you would disable the subscription.\ne.g.\n\"disable_on_error_threshold=0\" default, relaunch forever\n\"disable_on_error_threshold=1\" disable upon first error encountered.\n(This is how your patch behaves now I think.)\n\"disable_on_error_threshold=500\" disable if the re-launch errors go\nunattended and happen 500 times.\n\n------\n[Mark-1] https://www.postgresql.org/message-id/A539C848-670E-454F-B31C-82D3CBE9F5AC%40enterprisedb.com\n[Mark-2] https://www.postgresql.org/message-id/DB35438F-9356-4841-89A0-412709EBD3AB%40enterprisedb.com\n[Mark-3] https://www.postgresql.org/message-id/DE7E13B7-DC76-416A-A98F-3BC3F80E6BE9%40enterprisedb.com\n[Amit-1] https://www.postgresql.org/message-id/CAA4eK1K_JFSFrAkr_fgp3VX6hTSmjK%3DwNs4Tw8rUWHGp0%2BBsaw%40mail.gmail.com\n[Amit-2] https://www.postgresql.org/message-id/CAA4eK1%2BNoRbYSH1J08zi4OJ_EUMcjmxTwnmwVqZ6e_xzS0D6VA%40mail.gmail.com\n[Amit-3] https://www.postgresql.org/message-id/CAA4eK1Kyx6U9yxC7OXoBD7pHC3bJ4LuNGd%3DOiABmiW6%2BqG%2BvEQ%40mail.gmail.com\n[Amit-4] https://www.postgresql.org/message-id/CAA4eK1Kyx6U9yxC7OXoBD7pHC3bJ4LuNGd%3DOiABmiW6%2BqG%2BvEQ%40mail.gmail.com\n[Amit-5] https://www.postgresql.org/message-id/CAA4eK1Kyx6U9yxC7OXoBD7pHC3bJ4LuNGd%3DOiABmiW6%2BqG%2BvEQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 22 Jun 2021 10:57:40 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "\n\n> On Jun 21, 2021, at 5:57 PM, Peter Smith <smithpb2250@gmail.com> wrote:\n> \n> #5. Document to refer to the logs. 
All ERROR details are already in\n> the logs, and this seems to me the intuitive place to look for them.\n\nMy original motivation came from writing TAP tests to check that the permissions systems would properly deny the apply worker when running under a non-superuser role. The idea is that the user with the responsibility for managing subscriptions won't have enough privilege to read the logs. Whatever information that user needs (if any) must be someplace else.\n\n> Searching for specific errors becomes difficult programmatically (is\n> this really a problem other than complex TAP tests?).\n\nI believe there is a problem, because I remain skeptical that these errors will be both existent and rare. Either you've configured your system correctly and you get zero of these, or you've misconfigured it and you get some non-zero number of them. I don't see any reason to assume that number will be small.\n\nThe best way to deal with that is to be able to tell the system what to do with them, like \"if the error has this error code and the error message matches this regular expression, then do this, else do that.\" That's why I think allowing triggers to be created on subscriptions makes the most sense (though is probably the hardest system being proposed so far.)\n\n> But here there\n> is no risk of missing or insufficient information captured in the log\n> files (\"but still there will be some information related to ERROR\n> which we wanted the user to see unless we ask them to refer to logs\n> for that.\" [Amit-4}).\n\nNot only is there a problem if the user doesn't have permission to view the logs, but also, if we automatically disable the subscription until the error is manually cleared, the logs might be rotated out of existence before the user takes any action. In that case, the logs will be entirely missing, and not even the error message will remain. 
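Returning to the rule-based disposition sketched above ("if the error has this error code and the error message matches this regular expression, then do this, else do that"): a minimal, purely hypothetical sketch of such a policy table could look like this — nothing of this shape exists in core, and the SQLSTATEs shown are just the standard constraint-violation codes:

```python
import re

# Hypothetical dispatch table: (SQLSTATE, message regex) -> action.
# SQLSTATE 23505 = unique_violation, 23503 = foreign_key_violation.
RULES = [
    ("23505", re.compile(r"duplicate key value violates unique constraint"),
     "skip"),
    ("23503", re.compile(r"violates foreign key constraint"),
     "disable"),
]

def decide(sqlstate, message, default="retry"):
    """Return the configured action for an apply-worker error."""
    for state, pattern, action in RULES:
        if sqlstate == state and pattern.search(message):
            return action
    return default          # e.g. transient errors: keep retrying
```

A subscription-level trigger (or any other hook) could consult a table like this instead of unconditionally disabling the subscription on the first error.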
At least with the patch I submitted, the error message will remain, though I take Amit's point that there are deficiencies in handling parallel tablesync workers, etc.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 21 Jun 2021 19:29:38 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "\n\n> On Jun 21, 2021, at 5:57 PM, Peter Smith <smithpb2250@gmail.com> wrote:\n> \n> * Is the goal mainly to help automated (TAP) testing?\n\nAbsolutely, that was my original motivation. But I don't think that is the primary reason the patch would be accepted. There is a cost to having the logical replication workers attempt ad infinitum to apply a transaction that will never apply.\n\nAlso, if you are waiting for a subscription to catch up, it is far from obvious that you will wait forever.\n\n> In that case,\n> then maybe you do want to store the error message somewhere other than\n> the log files. But still I wonder if results would be unpredictable\n> anyway - e.g if there are multiple tables all with errors then it\n> depends on the tablesync order of execution which error you see caused\n> the auto-disable, right? And if it is not predictable maybe it is less\n> useful.\n\nBut if you are writing a TAP test, you should be the one controlling whether that is the case. 
I don't think it would be unpredictable from the point of view of the test author.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 21 Jun 2021 19:35:38 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Mon, Jun 21, 2021 at 7:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jun 21, 2021 at 11:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Jun 21, 2021 at 10:55 AM Mark Dilger\n> > <mark.dilger@enterprisedb.com> wrote:\n> >\n> > > I don't mind if you want to store more information, and maybe that needs to be stored somewhere else. Do you believe pg_subscription_rel is a suitable location?\n> > >\n> > It won't be sufficient to store information in either\n> > pg_subscription_rel or pg_susbscription. I think if we want to store\n> > the required information in a catalog then we need to define a new\n> > catalog (pg_subscription_conflicts or something like that) with\n> > information corresponding to each rel in subscription (srsubid oid\n> > (Reference to subscription), srrelid oid (Reference to relation),\n> > <columns for error_info>). OTOH, we can choose to send the error\n> > information to stats collector which will then be available via stat\n> > view and update system catalog to disable the subscription but there\n> > will be a risk that we might send info of failed transaction to stats\n> > collector but then fail to update system catalog to disable the\n> > subscription.\n> >\n>\n> I think we should store the input from the user (like disable_on_error\n> flag or xid to skip) in the system catalog pg_subscription and send\n> the error information (subscrtion_id, rel_id, xid of failed xact,\n> error_code, error_message, etc.) 
to the stats collector which can be\n> used to display such information via a stat view.\n>\n> The disable_on_error flag handling could be that on error it sends the\n> required error info to stats collector and then updates the subenabled\n> in pg_subscription. In rare conditions, where we are able to send the\n> message but couldn't update the subenabled info in pg_subscription\n> either due to some error or server restart, the apply worker would\n> again try to apply the same change and would hit the same error again\n> which I think should be fine because it will ultimately succeed.\n>\n> The skip xid handling will also be somewhat similar where on an error,\n> we will send the error information to stats collector which will be\n> displayed via stats view. Then the user is expected to ask for skip\n> xid (Alter Subscription ... SKIP <xid_value>) based on information\n> displayed via stat view. Now, the apply worker can skip changes from\n> such a transaction, and then during processing of commit record of the\n> skipped transaction, it should update xid to invalid value, so that\n> next time that shouldn't be used. I think it is important to update\n> xid to an invalid value as part of the skipped transaction because\n> otherwise, after the restart, we won't be able to decide whether we\n> still want to skip the xid stored for a subscription.\n\nSounds reasonable.\n\nThe feature that sends the error information to the stats collector is\na common feature for both and itself is also useful. 
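As a toy model of the ordering proposed above — send the error info to the stats collector first, then update subenabled, and rely on the retry hitting the same error if the catalog update is lost — consider the following; it is purely illustrative and all names are made up:

```python
# Illustrative model of the proposed disable_on_error handling.
# The point is the ordering: report first, disable second.  If the
# disable step is lost (error/restart), the worker simply hits the
# same error again and the sequence eventually converges.

stats = []                          # stands in for the stats collector
catalog = {"subenabled": True}      # stands in for pg_subscription

def handle_apply_error(subid, relid, xid, errmsg, disable_succeeds=True):
    # Step 1: always ship the error details to the stats collector.
    stats.append({"subid": subid, "relid": relid, "xid": xid,
                  "message": errmsg})
    # Step 2: try to flip subenabled in the catalog.
    if disable_succeeds:
        catalog["subenabled"] = False
        return "disabled"
    return "will retry"             # catalog untouched; same error recurs

# First attempt: stats message goes out but the catalog update is lost.
handle_apply_error(16394, 16400, 731, "duplicate key value",
                   disable_succeeds=False)
# Retry hits the same error; this time the catalog update succeeds.
handle_apply_error(16394, 16400, 731, "duplicate key value")
```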
As discussed in\nthat skip transaction patch thread, it would also be good if we write\nerror information (relation, action, xid, etc) to the server log too.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 22 Jun 2021 11:42:04 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tue, Jun 22, 2021 at 6:27 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> #3. There is another suggestion to use the Stats Collector to hold the\n> error message [Amit-2]. For me, this felt like blurring too much the\n> distinction between \"stats tracking/metrics\" and \"logs\". ERROR logs\n> must be flushed, whereas for stats (IIUC) there is no guarantee that\n> everything you need to see would be present. Indeed Amit wrote \"But in\n> this case, if the stats collector missed updating the information, the\n> user may have to manually update the subscription and let the error\n> happen again to see it.\" [Amit-3]. Requesting the user to cause the\n> same error again just in case it was not captured a first time seems\n> too strange to me.\n>\n\nI don't think it will often be the case that the stats collector will\nmiss updating the information. I am not feeling comfortable storing\nerror information in system catalogs. We have some other views which\ncapture somewhat similar conflict information\n(pg_stat_database_conflicts) or failed transactions information. 
So, I\nthought here we are extending the similar concept by storing some\nadditional information about errors.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 22 Jun 2021 08:14:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "\n\n> On Jun 21, 2021, at 5:57 PM, Peter Smith <smithpb2250@gmail.com> wrote:\n> \n> * Is the goal to prevent some *unattended* SUBSCRIPTION from going bad\n> at some point in future and then going into a relaunch loop for\n> days/weeks and causing 1000's of errors without the user noticing. In\n> that case, this patch seems to be quite useful, but for this goal\n> maybe you don't want to be checking the tablesync workers at all, but\n> should only be checking the apply worker like your original v1 patch\n> did.\n\nYeah, my motivation was preventing an infinite loop, and providing a clean way for the users to know that replication they are waiting for won't ever complete, rather than having to infer that it will never halt. \n\n> * Is the goal just to be a convenient way to disable the subscription\n> during the CREATE SUBSCRIPTION phase so that the user can make\n> corrections in peace without the workers re-launching and making more\n> error logs?\n\nNo. This is not and never was my motivation. It's an interesting question, but that idea never crossed my mind. I'm not sure what changes somebody would want to make *after* creating the subscription. Certainly, there may be problems with how they have things set up, but they won't know that until the first error happens.\n\n> Here the patch is helpful, but only for simple scenarios\n> like 1 faulty table. 
Imagine if there are 10 tables (all with PK\n> violations at DATASYNC copy) then you will encounter them one at a\n> time and have to re-enable the subscription 10 times, after fixing\n> each error in turn.\n\nYou are assuming disable_on_error=true. It is false by default. But ok, let's accept that assumption for the sake of argument. Now, will you have to manually go through the process 10 times? I'm not sure. The user might figure out their mistake after seeing the first error.\n\n> So in this scenario the new option might be more\n> of a hindrance than a help because it would be easier if the user just\n> did \"ALTER SUBSCRIPTION sub DISABLE\" manually and fixed all the\n> problems in one sitting before re-enabling.\n\nYeah, but since the new option is off by default, I don't see any sensible complaint.\n\n> \n> * etc\n> \n> //////////\n> \n> Finally, here is one last (crazy?) thought-bubble just for\n> consideration. I might be wrong, but my gut feeling is that the Stats\n> Collector is intended more for \"tracking\" and for \"metrics\" rather\n> than for holding duplicates of logged error messages. At the same\n> time, I felt that disabling an entire subscription due to a single\n> rogue error might be overkill sometimes.\n\nI'm happy to entertain criticism of the particulars of how my patch approaches this problem, but it is already making a distinction between transient errors (resources, network, etc.) vs. ones that are non-transient. 
Again, I might not have drawn the line in the right place, but the patch is not intended to disable subscriptions in response to transient errors.\n\n> But I wonder if there is a\n> way to combine those two ideas so that the Stats Collector gets some\n> new counter for tracking the number of worker re-launches that have\n> occurred, meanwhile there could be a subscription option which gives a\n> threshold above which you would disable the subscription.\n> e.g.\n> \"disable_on_error_threshold=0\" default, relaunch forever\n> \"disable_on_error_threshold=1\" disable upon first error encountered.\n> (This is how your patch behaves now I think.)\n> \"disable_on_error_threshold=500\" disable if the re-launch errors go\n> unattended and happen 500 times.\n\nThat sounds like a misfeature to me. You could have a subscription that works fine for a month, surviving numerous short network outages, but then gets autodisabled after a longer network outage. I'm not sure why anybody would want that. You might argue for exponential backoff, where it never gets autodisabled on transient errors, but retries less frequently. 
But I don't want to expand the scope of this patch to include that, at least not without a lot more evidence that it is needed.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 21 Jun 2021 19:49:56 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Mon, Jun 21, 2021 at 4:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jun 21, 2021 at 11:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> I think we should store the input from the user (like disable_on_error\n> flag or xid to skip) in the system catalog pg_subscription and send\n> the error information (subscrtion_id, rel_id, xid of failed xact,\n> error_code, error_message, etc.) to the stats collector which can be\n> used to display such information via a stat view.\n>\n> The disable_on_error flag handling could be that on error it sends the\n> required error info to stats collector and then updates the subenabled\n> in pg_subscription. In rare conditions, where we are able to send the\n> message but couldn't update the subenabled info in pg_subscription\n> either due to some error or server restart, the apply worker would\n> again try to apply the same change and would hit the same error again\n> which I think should be fine because it will ultimately succeed.\n>\n> The skip xid handling will also be somewhat similar where on an error,\n> we will send the error information to stats collector which will be\n> displayed via stats view. Then the user is expected to ask for skip\n> xid (Alter Subscription ... SKIP <xid_value>) based on information\n> displayed via stat view. 
Now, the apply worker can skip changes from\n> such a transaction, and then during processing of commit record of the\n> skipped transaction, it should update xid to invalid value, so that\n> next time that shouldn't be used. I think it is important to update\n> xid to an invalid value as part of the skipped transaction because\n> otherwise, after the restart, we won't be able to decide whether we\n> still want to skip the xid stored for a subscription.\n>\n\nOne minor detail I missed in the above sketch for skipped transaction\nfeature was that actually we only need replication origin state from\nthe commit record of the skipped transaction and then I think we need\nto start a transaction, update the xid value to invalid, set the\nreplication origin state and commit that transaction.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 23 Jun 2021 08:23:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Mon, Jun 21, 2021 at 11:26 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Jun 20, 2021, at 7:17 PM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I will submit the patch.\n>\n> Great, thanks!\n\nI've submitted the patches on that thread[1]. There are three patches:\nskipping the transaction on the subscriber side, reporting error\ndetails in the errcontext, and reporting the error details to the\nstats collector.
Feedback is very welcome.\n\n[1] https://www.postgresql.org/message-id/CAD21AoBU4jGEO6AXcykQ9y7tat0RrB5--8ZoJgfcj%2BLPs7nFZQ%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 28 Jun 2021 13:47:11 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Monday, June 28, 2021 1:47 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Mon, Jun 21, 2021 at 11:26 AM Mark Dilger\r\n> <mark.dilger@enterprisedb.com> wrote:\r\n> > > On Jun 20, 2021, at 7:17 PM, Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > >\r\n> > > I will submit the patch.\r\n> >\r\n> > Great, thanks!\r\n> \r\n> I've submitted the patches on that thread[1]. There are three patches:\r\n> skipping the transaction on the subscriber side, reporting error details in the\r\n> errcontext, and reporting the error details to the stats collector.
Feedback is\r\n> very welcome.\r\n> \r\n> [1]\r\n> https://www.postgresql.org/message-id/CAD21AoBU4jGEO6AXcykQ9y7tat0R\r\n> rB5--8ZoJgfcj%2BLPs7nFZQ%40mail.gmail.com\r\nHi, thanks Sawada-san for keeping the skip xid patch updated in the thread.\r\n\r\nThis thread has stopped since the patch submission.\r\nI've rebased the 'disable_on_error' option\r\nso that it can be applied on top of skip xid shared in [1].\r\nI've written Mark Dilger as the original author in the commit message.\r\n\r\nThis patch is simply rebased to reactivate this thread.\r\nSo there are still pending items to discuss, for example,\r\nhow we should deal with multiple errors of several table sync workers.\r\n\r\nI extracted only 'disable_on_error' option\r\nbecause the skip xid and the latest error message fulfill the motivation\r\nto make it easy to write TAP tests already I felt.\r\n\r\n[1] - https://www.postgresql.org/message-id/CAD21AoDY-9_x819F_m1_wfCVXXFJrGiSmR2MfC9Nw4nW8Om0qA%40mail.gmail.com\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Tue, 2 Nov 2021 10:42:27 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tue, Nov 2, 2021 at 4:12 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, June 28, 2021 1:47 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Mon, Jun 21, 2021 at 11:26 AM Mark Dilger\n> > <mark.dilger@enterprisedb.com> wrote:\n> > > > On Jun 20, 2021, at 7:17 PM, Masahiko Sawada\n> > <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > I will submit the patch.\n> > >\n> > > Great, thanks!\n> >\n> > I've submitted the patches on that thread[1]. There are three patches:\n> > skipping the transaction on the subscriber side, reporting error details in the\n> > errcontext, and reporting the error details to the stats collector.
Feedback is\n> > very welcome.\n> >\n> > [1]\n> > https://www.postgresql.org/message-id/CAD21AoBU4jGEO6AXcykQ9y7tat0R\n> > rB5--8ZoJgfcj%2BLPs7nFZQ%40mail.gmail.com\n> Hi, thanks Sawada-san for keeping the skip xid patch updated in the thread.\n>\n> This thread has stopped since the patch submission.\n> I've rebased the 'disable_on_error' option\n> so that it can be applied on top of skip xid shared in [1].\n> I've written Mark Dilger as the original author in the commit message.\n>\n> This patch is simply rebased to reactivate this thread.\n> So there are still pending items to discuss, for example,\n> how we should deal with multiple errors of several table sync workers.\n>\n> I extracted only 'disable_on_error' option\n> because the skip xid and the latest error message fulfill the motivation\n> to make it easy to write TAP tests already I felt.\n>\n\nThanks for the updated patch. Please create a Commitfest entry for\nthis. It will help to have a look at CFBot results for the patch, also\nif required rebase and post a patch on top of Head.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 8 Nov 2021 18:44:34 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Monday, November 8, 2021 10:15 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> Thanks for the updated patch. Please create a Commitfest entry for this.
It will\r\n> help to have a look at CFBot results for the patch, also if required rebase and\r\n> post a patch on top of Head.\r\nAs requested, created a new entry for this - [1]\r\n\r\nFYI: the skip xid patch has been updated to v20 in [2]\r\nbut the v3 for disable_on_error is not affected by this update\r\nand still applicable with no regression.\r\n\r\n[1] - https://commitfest.postgresql.org/36/3407/\r\n[2] - https://www.postgresql.org/message-id/CAD21AoAT42mhcqeB1jPfRL1%2BEUHbZk8MMY_fBgsyZvJeKNpG%2Bw%40mail.gmail.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Wed, 10 Nov 2021 01:26:27 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wed, Nov 10, 2021 at 12:26 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, November 8, 2021 10:15 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Thanks for the updated patch. Please create a Commitfest entry for this.
It will\n> > help to have a look at CFBot results for the patch, also if required rebase and\n> > post a patch on top of Head.\n> As requested, created a new entry for this - [1]\n>\n> FYI: the skip xid patch has been updated to v20 in [2]\n> but the v3 for disable_on_error is not affected by this update\n> and still applicable with no regression.\n>\n> [1] - https://commitfest.postgresql.org/36/3407/\n> [2] - https://www.postgresql.org/message-id/CAD21AoAT42mhcqeB1jPfRL1%2BEUHbZk8MMY_fBgsyZvJeKNpG%2Bw%40mail.gmail.com\n>\n\nI had a look at this patch and have a couple of initial review\ncomments for some issues I spotted:\n\nsrc/backend/commands/subscriptioncmds.c\n(1) bad array entry assignment\nThe following code block added by the patch assigns\n\"values[Anum_pg_subscription_subdisableonerr - 1]\" twice, resulting in\nit being always set to true, rather than the specified option value:\n\n+ if (IsSet(opts.specified_opts, SUBOPT_DISABLE_ON_ERR))\n+ {\n+ values[Anum_pg_subscription_subdisableonerr - 1]\n+ = BoolGetDatum(opts.disableonerr);\n+ values[Anum_pg_subscription_subdisableonerr - 1]\n+ = true;\n+ }\n\nThe 2nd line is meant to instead be\n\"replaces[Anum_pg_subscription_subdisableonerr - 1] = true\".\n(compare to handling for other similar options)\n\nsrc/backend/replication/logical/worker.c\n(2) unreachable code?\nIn the patch code there seems to be some instances of unreachable code\nafter re-throwing errors:\n\ne.g.\n\n+ /* If we caught an error above, disable the subscription */\n+ if (disable_subscription)\n+ {\n+ ReThrowError(DisableSubscriptionOnError(cctx));\n+ MemoryContextSwitchTo(ecxt);\n+ }\n\n+ else\n+ {\n+ PG_RE_THROW();\n+ MemoryContextSwitchTo(ecxt);\n+ }\n\n\n+ if (disable_subscription)\n+ {\n+ ReThrowError(DisableSubscriptionOnError(cctx));\n+ MemoryContextSwitchTo(ecxt);\n+ }\n\nI'm guessing it was intended to do the \"MemoryContextSwitch(ecxt);\"\nbefore re-throwing (?), but it's not really clear, as in the 1st and\n3rd cases, the 
DisableSubscriptionOnError() calls anyway immediately\nswitch the memory context to cctx.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Wed, 10 Nov 2021 15:22:50 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wed, Nov 10, 2021 at 3:22 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> I had a look at this patch and have a couple of initial review\n> comments for some issues I spotted:\n>\n\nIncidentally, I found that the v3 patch only applies after the skip xid v20\npatch [1] has been applied.\n\n[2] -\nhttps://www.postgresql.org/message-id/CAD21AoAT42mhcqeB1jPfRL1%2BEUHbZk8MMY_fBgsyZvJeKNpG%2Bw%40mail.gmail.com\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Wed, 10 Nov 2021 15:31:19 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wednesday, November 10, 2021 1:23 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\r\n> On Wed, Nov 10, 2021 at 12:26 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Monday, November 8, 2021 10:15 PM vignesh C <vignesh21@gmail.com>\r\n> wrote:\r\n> > > Thanks for the updated patch. Please create a Commitfest entry for\r\n> > > this.
It will help to have a look at CFBot results for the patch,\r\n> > > also if required rebase and post a patch on top of Head.\r\n> > As requested, created a new entry for this - [1]\r\n> >\r\n> > FYI: the skip xid patch has been updated to v20 in [2] but the v3 for\r\n> > disable_on_error is not affected by this update and still applicable\r\n> > with no regression.\r\n> >\r\n> > [1] - https://commitfest.postgresql.org/36/3407/\r\n> > [2] -\r\n> >\r\n> https://www.postgresql.org/message-id/CAD21AoAT42mhcqeB1jPfRL1%2B\r\n> EUHbZ\r\n> > k8MMY_fBgsyZvJeKNpG%2Bw%40mail.gmail.com\r\n> \r\n> I had a look at this patch and have a couple of initial review comments for some\r\n> issues I spotted:\r\nThank you for checking it.\r\n\r\n\r\n> src/backend/commands/subscriptioncmds.c\r\n> (1) bad array entry assignment\r\n> The following code block added by the patch assigns\r\n> \"values[Anum_pg_subscription_subdisableonerr - 1]\" twice, resulting in it\r\n> being always set to true, rather than the specified option value:\r\n> \r\n> + if (IsSet(opts.specified_opts, SUBOPT_DISABLE_ON_ERR)) {\r\n> + values[Anum_pg_subscription_subdisableonerr - 1]\r\n> + = BoolGetDatum(opts.disableonerr);\r\n> + values[Anum_pg_subscription_subdisableonerr - 1]\r\n> + = true;\r\n> + }\r\n> \r\n> The 2nd line is meant to instead be\r\n> \"replaces[Anum_pg_subscription_subdisableonerr - 1] = true\".\r\n> (compare to handling for other similar options)\r\nOops, fixed.\r\n \r\n> src/backend/replication/logical/worker.c\r\n> (2) unreachable code?\r\n> In the patch code there seems to be some instances of unreachable code after\r\n> re-throwing errors:\r\n> \r\n> e.g.\r\n> \r\n> + /* If we caught an error above, disable the subscription */ if\r\n> + (disable_subscription) {\r\n> + ReThrowError(DisableSubscriptionOnError(cctx));\r\n> + MemoryContextSwitchTo(ecxt);\r\n> + }\r\n> \r\n> + else\r\n> + {\r\n> + PG_RE_THROW();\r\n> + MemoryContextSwitchTo(ecxt);\r\n> + }\r\n> \r\n> \r\n> + if 
(disable_subscription)\r\n> + {\r\n> + ReThrowError(DisableSubscriptionOnError(cctx));\r\n> + MemoryContextSwitchTo(ecxt);\r\n> + }\r\n> \r\n> I'm guessing it was intended to do the \"MemoryContextSwitch(ecxt);\"\r\n> before re-throwing (?), but it's not really clear, as in the 1st and 3rd cases, the\r\n> DisableSubscriptionOnError() calls anyway immediately switch the memory\r\n> context to cctx.\r\nYou are right I think.\r\nFixed based on an idea below.\r\n\r\nAfter an error happens, for some additional work\r\n(e.g. to report the stats of table sync/apply worker\r\nby pgstat_report_subworker_error() or\r\nto update the catalog by DisableSubscriptionOnError())\r\nrestore the memory context that is used before the error (cctx)\r\nand save the old memory context of error (ecxt). Then,\r\ndo the additional work and switch the memory context to the ecxt\r\njust before the rethrow. As you described, \r\nin contrast to PG_RE_THROW, DisableSubscriptionOnError() changes\r\nthe memory context immediately at the top of it,\r\nso for this case, I don't call the MemoryContextSwitchTo().\r\n\r\nAnother important thing as my modification\r\nis a case when LogicalRepApplyLoop failed and\r\napply_error_callback_arg.command == 0.
In the original\r\npatch of skip xid, it just calls PG_RE_THROW()\r\nbut my previous v3 codes missed this macro in this case.\r\nTherefore, I've fixed this part as well.\r\n\r\nC codes are checked by pgindent.\r\n\r\nNote that this depends on the v20 skip xid patch in [1]\r\n\r\n[1] - https://www.postgresql.org/message-id/CAD21AoAT42mhcqeB1jPfRL1%2BEUHbZk8MMY_fBgsyZvJeKNpG%2Bw%40mail.gmail.com\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Thu, 11 Nov 2021 09:20:44 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Thu, Nov 11, 2021 at 2:50 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Wednesday, November 10, 2021 1:23 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > On Wed, Nov 10, 2021 at 12:26 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > On Monday, November 8, 2021 10:15 PM vignesh C <vignesh21@gmail.com>\n> > wrote:\n> > > > Thanks for the updated patch. Please create a Commitfest entry for\n> > > > this.
It will help to have a look at CFBot results for the patch,\n> > > > also if required rebase and post a patch on top of Head.\n> > > As requested, created a new entry for this - [1]\n> > >\n> > > FYI: the skip xid patch has been updated to v20 in [2] but the v3 for\n> > > disable_on_error is not affected by this update and still applicable\n> > > with no regression.\n> > >\n> > > [1] - https://commitfest.postgresql.org/36/3407/\n> > > [2] -\n> > >\n> > https://www.postgresql.org/message-id/CAD21AoAT42mhcqeB1jPfRL1%2B\n> > EUHbZ\n> > > k8MMY_fBgsyZvJeKNpG%2Bw%40mail.gmail.com\n> >\n> > I had a look at this patch and have a couple of initial review comments for some\n> > issues I spotted:\n> Thank you for checking it.\n>\n>\n> > src/backend/commands/subscriptioncmds.c\n> > (1) bad array entry assignment\n> > The following code block added by the patch assigns\n> > \"values[Anum_pg_subscription_subdisableonerr - 1]\" twice, resulting in it\n> > being always set to true, rather than the specified option value:\n> >\n> > + if (IsSet(opts.specified_opts, SUBOPT_DISABLE_ON_ERR)) {\n> > + values[Anum_pg_subscription_subdisableonerr - 1]\n> > + = BoolGetDatum(opts.disableonerr);\n> > + values[Anum_pg_subscription_subdisableonerr - 1]\n> > + = true;\n> > + }\n> >\n> > The 2nd line is meant to instead be\n> > \"replaces[Anum_pg_subscription_subdisableonerr - 1] = true\".\n> > (compare to handling for other similar options)\n> Oops, fixed.\n>\n> > src/backend/replication/logical/worker.c\n> > (2) unreachable code?\n> > In the patch code there seems to be some instances of unreachable code after\n> > re-throwing errors:\n> >\n> > e.g.\n> >\n> > + /* If we caught an error above, disable the subscription */ if\n> > + (disable_subscription) {\n> > + ReThrowError(DisableSubscriptionOnError(cctx));\n> > + MemoryContextSwitchTo(ecxt);\n> > + }\n> >\n> > + else\n> > + {\n> > + PG_RE_THROW();\n> > + MemoryContextSwitchTo(ecxt);\n> > + }\n> >\n> >\n> > + if (disable_subscription)\n> > + 
{\n> > + ReThrowError(DisableSubscriptionOnError(cctx));\n> > + MemoryContextSwitchTo(ecxt);\n> > + }\n> >\n> > I'm guessing it was intended to do the \"MemoryContextSwitch(ecxt);\"\n> > before re-throwing (?), but it's not really clear, as in the 1st and 3rd cases, the\n> > DisableSubscriptionOnError() calls anyway immediately switch the memory\n> > context to cctx.\n> You are right I think.\n> Fixed based on an idea below.\n>\n> After an error happens, for some additional work\n> (e.g. to report the stats of table sync/apply worker\n> by pgstat_report_subworker_error() or\n> to update the catalog by DisableSubscriptionOnError())\n> restore the memory context that is used before the error (cctx)\n> and save the old memory context of error (ecxt). Then,\n> do the additional work and switch the memory context to the ecxt\n> just before the rethrow. As you described,\n> in contrast to PG_RE_THROW, DisableSubscriptionOnError() changes\n> the memory context immediatedly at the top of it,\n> so for this case, I don't call the MemoryContextSwitchTo().\n>\n> Another important thing as my modification\n> is a case when LogicalRepApplyLoop failed and\n> apply_error_callback_arg.command == 0. In the original\n> patch of skip xid, it just calls PG_RE_THROW()\n> but my previous v3 codes missed this macro in this case.\n> Therefore, I've fixed this part as well.\n>\n> C codes are checked by pgindent.\n>\n> Note that this depends on the v20 skip xide patch in [1]\n>\n\nThanks for the updated patch, Few comments:\n1) tab completion should be added for disable_on_error:\n/* Complete \"CREATE SUBSCRIPTION <name> ... 
WITH ( <opt>\" */\nelse if (HeadMatches(\"CREATE\", \"SUBSCRIPTION\") && TailMatches(\"WITH\", \"(\"))\nCOMPLETE_WITH(\"binary\", \"connect\", \"copy_data\", \"create_slot\",\n \"enabled\", \"slot_name\", \"streaming\",\n \"synchronous_commit\", \"two_phase\");\n\n2) disable_on_error is supported by alter subscription, the same\nshould be documented:\n@ -871,11 +886,19 @@ AlterSubscription(ParseState *pstate,\nAlterSubscriptionStmt *stmt,\n {\n supported_opts = (SUBOPT_SLOT_NAME |\n\nSUBOPT_SYNCHRONOUS_COMMIT | SUBOPT_BINARY |\n-\nSUBOPT_STREAMING);\n+\nSUBOPT_STREAMING | SUBOPT_DISABLE_ON_ERR);\n\n parse_subscription_options(pstate,\nstmt->options,\n\n supported_opts, &opts);\n\n+ if (IsSet(opts.specified_opts,\nSUBOPT_DISABLE_ON_ERR))\n+ {\n+\nvalues[Anum_pg_subscription_subdisableonerr - 1]\n+ =\nBoolGetDatum(opts.disableonerr);\n+\nreplaces[Anum_pg_subscription_subdisableonerr - 1]\n+ = true;\n+ }\n+\n\n3) Describe subscriptions (dRs+) should include displaying of disableonerr:\n\\dRs+ sub1\n List of subscriptions\n Name | Owner | Enabled | Publication | Binary | Streaming | Two\nphase commit | Synchronous commit | Conninfo\n------+---------+---------+-------------+--------+-----------+------------------+--------------------+---------------------------\n sub1 | vignesh | t | {pub1} | f | f | d\n | off | dbname=postgres port=5432\n(1 row)\n\n4) I felt transicent should be transient, might be a typo:\n+ Specifies whether the subscription should be automatically disabled\n+ if replicating data from the publisher triggers non-transicent errors\n+ such as referential integrity or permissions errors. 
The default is\n+ <literal>false</literal>.\n\n5) The commented use PostgresNode and use TestLib can be removed:\n+# Test of logical replication subscription self-disabling feature\n+use strict;\n+use warnings;\n+# use PostgresNode;\n+# use TestLib;\n+use PostgreSQL::Test::Cluster;\n+use PostgreSQL::Test::Utils;\n+use Test::More tests => 10;\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 12 Nov 2021 09:38:35 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Thu, Nov 11, 2021 at 8:20 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> C codes are checked by pgindent.\n>\n> Note that this depends on the v20 skip xide patch in [1]\n>\n\nSome comments on the v4 patch:\n\n(1) Patch subject\nI think the patch subject should say \"disable\" instead of \"disabling\":\n Optionally disable subscriptions on error\n\ndoc/src/sgml/ref/create_subscription.sgml\n(2) spelling mistake\n+ if replicating data from the publisher triggers non-transicent errors\n\nnon-transicent -> non-transient\n\n(I notice Vignesh also pointed this out)\n\nsrc/backend/replication/logical/worker.c\n(3) calling geterrcode()\nThe new IsSubscriptionDisablingError() function calls geterrcode().\nAccording to the function comment for geterrcode(), it is only\nintended for use in error callbacks.\nInstead of calling geterrcode(), couldn't the ErrorData from PG_CATCH\nblock be passed to IsSubscriptionDisablingError() instead (from which\nit can get the sqlerrcode)?\n\n(4) DisableSubscriptionOnError\nDisableSubscriptionOnError() is again calling MemoryContextSwitch()\nand CopyErrorData().\nI think the ErrorData from the PG_CATCH block could simply be passed\nto DisableSubscriptionOnError() instead of the memory-context, and the\nexisting MemoryContextSwitch() and CopyErrorData() calls could be\nremoved from it.\n\nAFAICS, applying (3) and 
(4) above would make the code a lot cleaner.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Fri, 12 Nov 2021 15:48:59 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Friday, November 12, 2021 1:09 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> On Thu, Nov 11, 2021 at 2:50 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> Thanks for the updated patch, Few comments:\r\n> 1) tab completion should be added for disable_on_error:\r\n> /* Complete \"CREATE SUBSCRIPTION <name> ... WITH ( <opt>\" */ else if\r\n> (HeadMatches(\"CREATE\", \"SUBSCRIPTION\") && TailMatches(\"WITH\", \"(\"))\r\n> COMPLETE_WITH(\"binary\", \"connect\", \"copy_data\", \"create_slot\",\r\n> \"enabled\", \"slot_name\", \"streaming\",\r\n> \"synchronous_commit\", \"two_phase\");\r\nFixed.\r\n\r\n> 2) disable_on_error is supported by alter subscription, the same should be\r\n> documented:\r\n> @ -871,11 +886,19 @@ AlterSubscription(ParseState *pstate,\r\n> AlterSubscriptionStmt *stmt,\r\n> {\r\n> supported_opts = (SUBOPT_SLOT_NAME |\r\n> \r\n> SUBOPT_SYNCHRONOUS_COMMIT | SUBOPT_BINARY |\r\n> -\r\n> SUBOPT_STREAMING);\r\n> +\r\n> SUBOPT_STREAMING | SUBOPT_DISABLE_ON_ERR);\r\n> \r\n> parse_subscription_options(pstate,\r\n> stmt->options,\r\n> \r\n> supported_opts, &opts);\r\n> \r\n> + if (IsSet(opts.specified_opts,\r\n> SUBOPT_DISABLE_ON_ERR))\r\n> + {\r\n> +\r\n> values[Anum_pg_subscription_subdisableonerr - 1]\r\n> + =\r\n> BoolGetDatum(opts.disableonerr);\r\n> +\r\n> replaces[Anum_pg_subscription_subdisableonerr - 1]\r\n> + = true;\r\n> + }\r\n> +\r\nFixed the documentation. 
Also, add one test for alter subscription.\r\n\r\n \r\n> 3) Describe subscriptions (dRs+) should include displaying of disableonerr:\r\n> \\dRs+ sub1\r\n> List of subscriptions\r\n> Name | Owner | Enabled | Publication | Binary | Streaming | Two\r\n> phase commit | Synchronous commit | Conninfo\r\n> ------+---------+---------+-------------+--------+-----------+----------\r\n> --------+--------------------+---------------------------\r\n> sub1 | vignesh | t | {pub1} | f | f | d\r\n> | off | dbname=postgres port=5432\r\n> (1 row)\r\nFixed.\r\n\r\n\r\n> 4) I felt transicent should be transient, might be a typo:\r\n> + Specifies whether the subscription should be automatically\r\n> disabled\r\n> + if replicating data from the publisher triggers non-transicent errors\r\n> + such as referential integrity or permissions errors. The default is\r\n> + <literal>false</literal>.\r\nFixed.\r\n\r\n> 5) The commented use PostgresNode and use TestLib can be removed:\r\n> +# Test of logical replication subscription self-disabling feature use\r\n> +strict; use warnings; # use PostgresNode; # use TestLib; use\r\n> +PostgreSQL::Test::Cluster; use PostgreSQL::Test::Utils; use Test::More\r\n> +tests => 10;\r\nRemoved.\r\n\r\n\r\nAlso, my colleague Greg provided an offlist patch to me and\r\nI've incorporated his suggested modifications into this version.\r\nSo, I noted his name as a coauthor.\r\n\r\nC codes are checked by pgindent again.\r\n\r\nThis v5 depends on v23 skip xid in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/CAD21AoA5jupM6O%3DpYsyfaxQ1aMX-en8%3DQNgpW6KfXsg7_CS0CQ%40mail.gmail.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Tue, 16 Nov 2021 07:53:23 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "Thank you for checking the patch !\r\n\r\nOn Friday, November 12, 2021 
1:49 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\r\n> On Thu, Nov 11, 2021 at 8:20 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> Some comments on the v4 patch:\r\n> \r\n> (1) Patch subject\r\n> I think the patch subject should say \"disable\" instead of \"disabling\":\r\n> Optionally disable subscriptions on error\r\nFixed.\r\n\r\n \r\n> doc/src/sgml/ref/create_subscription.sgml\r\n> (2) spelling mistake\r\n> + if replicating data from the publisher triggers\r\n> + non-transicent errors\r\n> \r\n> non-transicent -> non-transient\r\nFixed.\r\n\r\n \r\n> (I notice Vignesh also pointed this out)\r\n> \r\n> src/backend/replication/logical/worker.c\r\n> (3) calling geterrcode()\r\n> The new IsSubscriptionDisablingError() function calls geterrcode().\r\n> According to the function comment for geterrcode(), it is only intended for use\r\n> in error callbacks.\r\n> Instead of calling geterrcode(), couldn't the ErrorData from PG_CATCH block be\r\n> passed to IsSubscriptionDisablingError() instead (from which it can get the\r\n> sqlerrcode)?\r\n> \r\n> (4) DisableSubscriptionOnError\r\n> DisableSubscriptionOnError() is again calling MemoryContextSwitch() and\r\n> CopyErrorData().\r\n> I think the ErrorData from the PG_CATCH block could simply be passed to\r\n> DisableSubscriptionOnError() instead of the memory-context, and the existing\r\n> MemoryContextSwitch() and CopyErrorData() calls could be removed from it.\r\n> \r\n> AFAICS, applying (3) and (4) above would make the code a lot cleaner.\r\nFixed.\r\n\r\nThe updated patch is shared in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373771371B31E1E6CC74B0AED999%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Tue, 16 Nov 2021 07:59:36 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical 
replication\n subscriptions on error" }, { "msg_contents": "On Tue, Nov 16, 2021 at 6:53 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> This v5 depends on v23 skip xid in [1].\n>\n\nA minor comment:\n\ndoc/src/sgml/ref/alter_subscription.sgml\n(1) disable_on_err?\n\n+ <literal>disable_on_err</literal>.\n\nThis doc update names the new parameter as \"disable_on_err\" instead of\n\"disable_on_error\".\nAlso \"disable_on_err\" appears in a couple of the test case comments.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 18 Nov 2021 16:07:34 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Thursday, November 18, 2021 2:08 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\r\n> A minor comment:\r\nThanks for your comments !\r\n \r\n> doc/src/sgml/ref/alter_subscription.sgml\r\n> (1) disable_on_err?\r\n> \r\n> + <literal>disable_on_err</literal>.\r\n> \r\n> This doc update names the new parameter as \"disable_on_err\" instead of\r\n> \"disable_on_error\".\r\n> Also \"disable_on_err\" appears in a couple of the test case comments.\r\nFixed all 3 places.\r\n\r\nAt the same time, I changed one function name\r\nfrom IsSubscriptionDisablingError() to IsTransientError()\r\nso that it can express what it checks correctly.\r\nOf course, the return value of true or false\r\nbecomes reverse by this name change, but\r\nThis would make the function more general.\r\nAlso, its comments were fixed.\r\n\r\nThis version also depends on the v23 of skip xid [1]\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/CAD21AoA5jupM6O%3DpYsyfaxQ1aMX-en8%3DQNgpW6KfXsg7_CS0CQ%40mail.gmail.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Thu, 18 Nov 2021 07:22:15 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, 
"msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Thu, Nov 18, 2021 at 12:52 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, November 18, 2021 2:08 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > A minor comment:\n> Thanks for your comments !\n>\n> > doc/src/sgml/ref/alter_subscription.sgml\n> > (1) disable_on_err?\n> >\n> > + <literal>disable_on_err</literal>.\n> >\n> > This doc update names the new parameter as \"disable_on_err\" instead of\n> > \"disable_on_error\".\n> > Also \"disable_on_err\" appears in a couple of the test case comments.\n> Fixed all 3 places.\n>\n> At the same time, I changed one function name\n> from IsSubscriptionDisablingError() to IsTransientError()\n> so that it can express what it checks correctly.\n> Of course, the return value of true or false\n> becomes reverse by this name change, but\n> This would make the function more general.\n> Also, its comments were fixed.\n>\n> This version also depends on the v23 of skip xid [1]\n\nFew comments:\n1) Changes to handle pg_dump are missing. It should be done in\ndumpSubscription and getSubscriptions\n\n2) \"And\" is missing\n--- a/doc/src/sgml/ref/alter_subscription.sgml\n+++ b/doc/src/sgml/ref/alter_subscription.sgml\n@@ -201,8 +201,8 @@ ALTER SUBSCRIPTION <replaceable\nclass=\"parameter\">name</replaceable> RENAME TO <\n information. 
The parameters that can be altered\n are <literal>slot_name</literal>,\n <literal>synchronous_commit</literal>,\n- <literal>binary</literal>, and\n- <literal>streaming</literal>.\n+ <literal>binary</literal>,<literal>streaming</literal>\n+ <literal>disable_on_error</literal>.\nShould be:\n- <literal>binary</literal>, and\n- <literal>streaming</literal>.\n+ <literal>binary</literal>,<literal>streaming</literal>, and\n+ <literal>disable_on_error</literal>.\n\n3) Should we change this :\n+ Specifies whether the subscription should be automatically disabled\n+ if replicating data from the publisher triggers non-transient errors\n+ such as referential integrity or permissions errors. The default is\n+ <literal>false</literal>.\nto:\n+ Specifies whether the subscription should be automatically disabled\n+ while replicating data from the publisher triggers\nnon-transient errors\n+ such as referential integrity, permissions errors, etc. The\ndefault is\n+ <literal>false</literal>.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 22 Nov 2021 12:22:56 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Monday, November 22, 2021 3:53 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> Few comments:\r\nThank you so much for your review !\r\n\r\n> 1) Changes to handle pg_dump are missing. It should be done in\r\n> dumpSubscription and getSubscriptions\r\nFixed.\r\n\r\n> 2) \"And\" is missing\r\n> --- a/doc/src/sgml/ref/alter_subscription.sgml\r\n> +++ b/doc/src/sgml/ref/alter_subscription.sgml\r\n> @@ -201,8 +201,8 @@ ALTER SUBSCRIPTION <replaceable\r\n> class=\"parameter\">name</replaceable> RENAME TO <\r\n> information. 
The parameters that can be altered\r\n> are <literal>slot_name</literal>,\r\n> <literal>synchronous_commit</literal>,\r\n> - <literal>binary</literal>, and\r\n> - <literal>streaming</literal>.\r\n> + <literal>binary</literal>,<literal>streaming</literal>\r\n> + <literal>disable_on_error</literal>.\r\n> Should be:\r\n> - <literal>binary</literal>, and\r\n> - <literal>streaming</literal>.\r\n> + <literal>binary</literal>,<literal>streaming</literal>, and\r\n> + <literal>disable_on_error</literal>.\r\nFixed.\r\n\r\n> 3) Should we change this :\r\n> + Specifies whether the subscription should be automatically\r\n> disabled\r\n> + if replicating data from the publisher triggers non-transient errors\r\n> + such as referential integrity or permissions errors. The default is\r\n> + <literal>false</literal>.\r\n> to:\r\n> + Specifies whether the subscription should be automatically\r\n> disabled\r\n> + while replicating data from the publisher triggers\r\n> non-transient errors\r\n> + such as referential integrity, permissions errors, etc. The\r\n> default is\r\n> + <literal>false</literal>.\r\nI preferred the previous description. The option\r\n\"disable_on_error\" works with even one error.\r\nIf we use \"while\", the nuance would be like\r\nwe keep disabling a subscription more than once.\r\nThis situation happens only when user makes\r\nthe subscription enable without resolving the non-transient error,\r\nwhich sounds a bit unnatural. 
So, I wanna keep the previous description.\r\nIf you are not satisfied with this, kindly let me know.\r\n\r\nThis v7 uses v26 of skip xid patch [1]\r\n\r\n[1] - https://www.postgresql.org/message-id/CAD21AoDNe_O%2BCPucd_jQPu3gGGaCLNP%2BJ_kSPNecTdAM8HFPww%40mail.gmail.com\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Fri, 26 Nov 2021 14:36:35 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Fri, Nov 26, 2021 at 8:06 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, November 22, 2021 3:53 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Few comments:\n> Thank you so much for your review !\n>\n> > 1) Changes to handle pg_dump are missing. It should be done in\n> > dumpSubscription and getSubscriptions\n> Fixed.\n>\n> > 2) \"And\" is missing\n> > --- a/doc/src/sgml/ref/alter_subscription.sgml\n> > +++ b/doc/src/sgml/ref/alter_subscription.sgml\n> > @@ -201,8 +201,8 @@ ALTER SUBSCRIPTION <replaceable\n> > class=\"parameter\">name</replaceable> RENAME TO <\n> > information. The parameters that can be altered\n> > are <literal>slot_name</literal>,\n> > <literal>synchronous_commit</literal>,\n> > - <literal>binary</literal>, and\n> > - <literal>streaming</literal>.\n> > + <literal>binary</literal>,<literal>streaming</literal>\n> > + <literal>disable_on_error</literal>.\n> > Should be:\n> > - <literal>binary</literal>, and\n> > - <literal>streaming</literal>.\n> > + <literal>binary</literal>,<literal>streaming</literal>, and\n> > + <literal>disable_on_error</literal>.\n> Fixed.\n>\n> > 3) Should we change this :\n> > + Specifies whether the subscription should be automatically\n> > disabled\n> > + if replicating data from the publisher triggers non-transient errors\n> > + such as referential integrity or permissions errors. 
The default is\n> > + <literal>false</literal>.\n> > to:\n> > + Specifies whether the subscription should be automatically\n> > disabled\n> > + while replicating data from the publisher triggers\n> > non-transient errors\n> > + such as referential integrity, permissions errors, etc. The\n> > default is\n> > + <literal>false</literal>.\n> I preferred the previous description. The option\n> \"disable_on_error\" works with even one error.\n> If we use \"while\", the nuance would be like\n> we keep disabling a subscription more than once.\n> This situation happens only when user makes\n> the subscription enable without resolving the non-transient error,\n> which sounds a bit unnatural. So, I wanna keep the previous description.\n> If you are not satisfied with this, kindly let me know.\n>\n> This v7 uses v26 of skip xid patch [1]\n\nThanks for the updated patch, Few comments:\n1) Since this function is used only from 027_disable_on_error and not\nused by others, this can be moved to 027_disable_on_error:\n+sub wait_for_subscriptions\n+{\n+ my ($self, $dbname, @subscriptions) = @_;\n+\n+ # Unique-ify the subscriptions passed by the caller\n+ my %unique = map { $_ => 1 } @subscriptions;\n+ my @unique = sort keys %unique;\n+ my $unique_count = scalar(@unique);\n+\n+ # Construct a SQL list from the unique subscription names\n+ my @escaped = map { s/'/''/g; s/\\\\/\\\\\\\\/g; $_ } @unique;\n+ my $sublist = join(', ', map { \"'$_'\" } @escaped);\n+\n+ my $polling_sql = qq(\n+ SELECT COUNT(1) = $unique_count FROM\n+ (SELECT s.oid\n+ FROM pg_catalog.pg_subscription s\n+ LEFT JOIN pg_catalog.pg_subscription_rel sr\n+ ON sr.srsubid = s.oid\n+ WHERE (sr IS NULL OR sr.srsubstate IN\n('s', 'r'))\n+ AND s.subname IN ($sublist)\n+ AND s.subenabled IS TRUE\n+ UNION\n+ SELECT s.oid\n+ FROM pg_catalog.pg_subscription s\n+ WHERE s.subname IN ($sublist)\n+ AND s.subenabled IS FALSE\n+ ) AS synced_or_disabled\n+ );\n+ return $self->poll_query_until($dbname, $polling_sql);\n+}\n\n2) The 
empty line after comment is not required, it can be removed\n+# Create non-unique data in both schemas on the publisher.\n+#\n+for $schema (@schemas)\n+{\n\n3) Similarly it can be changed across the file\n+# Wait for the initial subscription synchronizations to finish or fail.\n+#\n+$node_subscriber->wait_for_subscriptions('postgres', @schemas)\n+ or die \"Timed out while waiting for subscriber to synchronize data\";\n\n+# Enter unique data for both schemas on the publisher. This should succeed on\n+# the publisher node, and not cause any additional problems on the subscriber\n+# side either, though disabled subscription \"s1\" should not replicate anything.\n+#\n+for $schema (@schemas)\n\n4) Since subid is used only at one place, no need of subid variable,\nyou could replace subid with subform->oid in LockSharedObject\n+ Datum values[Natts_pg_subscription];\n+ HeapTuple tup;\n+ Oid subid;\n+ Form_pg_subscription subform;\n\n+ subid = subform->oid;\n+ LockSharedObject(SubscriptionRelationId, subid, 0, AccessExclusiveLock);\n\n5) \"permissions errors\" should be \"permission errors\"\n+ Specifies whether the subscription should be automatically disabled\n+ if replicating data from the publisher triggers non-transient errors\n+ such as referential integrity or permissions errors. 
The default is\n+ <literal>false</literal>.\n+ </para>\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 29 Nov 2021 11:07:55 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Sat, Nov 27, 2021 at 1:36 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> This v7 uses v26 of skip xid patch [1]\n>\n\nThis patch no longer applies on the latest source.\nAlso, the patch is missing an update to doc/src/sgml/catalogs.sgml,\nfor the new \"subdisableonerr\" column of pg_subscription.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Tue, 30 Nov 2021 15:09:59 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tuesday, November 30, 2021 1:10 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\r\n> On Sat, Nov 27, 2021 at 1:36 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > This v7 uses v26 of skip xid patch [1]\r\n> This patch no longer applies on the latest source.\r\n> Also, the patch is missing an update to doc/src/sgml/catalogs.sgml, for the\r\n> new \"subdisableonerr\" column of pg_subscription.\r\nThanks for your review !\r\n\r\nFixed the documentation accordingly. 
Further,\r\nthis comment invoked some more refactoring of codes\r\nsince I wrote some internal codes related to\r\n'disable_on_error' in an inconsistent order.\r\nI fixed this by keeping patch's codes\r\nafter that of 'two_phase' subscription option as much as possible.\r\n\r\nI also conducted both pgindent and pgperltidy.\r\n\r\nNow, I'll share the v8 that uses PG\r\nwhose commit id is after 8d74fc9 (pg_stat_subscription_workers).\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Tue, 30 Nov 2021 12:04:14 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Monday, November 29, 2021 2:38 PM vignesh C <vignesh21@gmail.com>\r\n> Thanks for the updated patch, Few comments:\r\nThank you for your review !\r\n\r\n> 1) Since this function is used only from 027_disable_on_error and not used by\r\n> others, this can be moved to 027_disable_on_error:\r\n> +sub wait_for_subscriptions\r\n> +{\r\n> + my ($self, $dbname, @subscriptions) = @_;\r\n> +\r\n> + # Unique-ify the subscriptions passed by the caller\r\n> + my %unique = map { $_ => 1 } @subscriptions;\r\n> + my @unique = sort keys %unique;\r\n> + my $unique_count = scalar(@unique);\r\n> +\r\n> + # Construct a SQL list from the unique subscription names\r\n> + my @escaped = map { s/'/''/g; s/\\\\/\\\\\\\\/g; $_ } @unique;\r\n> + my $sublist = join(', ', map { \"'$_'\" } @escaped);\r\n> +\r\n> + my $polling_sql = qq(\r\n> + SELECT COUNT(1) = $unique_count FROM\r\n> + (SELECT s.oid\r\n> + FROM pg_catalog.pg_subscription s\r\n> + LEFT JOIN pg_catalog.pg_subscription_rel\r\n> sr\r\n> + ON sr.srsubid = s.oid\r\n> + WHERE (sr IS NULL OR sr.srsubstate IN\r\n> ('s', 'r'))\r\n> + AND s.subname IN ($sublist)\r\n> + AND s.subenabled IS TRUE\r\n> + UNION\r\n> + SELECT s.oid\r\n> + FROM pg_catalog.pg_subscription s\r\n> + WHERE s.subname IN 
($sublist)\r\n> + AND s.subenabled IS FALSE\r\n> + ) AS synced_or_disabled\r\n> + );\r\n> + return $self->poll_query_until($dbname, $polling_sql); }\r\nFixed.\r\n\r\n> 2) The empty line after comment is not required, it can be removed\r\n> +# Create non-unique data in both schemas on the publisher.\r\n> +#\r\n> +for $schema (@schemas)\r\n> +{\r\nFixed.\r\n\r\n> 3) Similarly it can be changed across the file\r\n> +# Wait for the initial subscription synchronizations to finish or fail.\r\n> +#\r\n> +$node_subscriber->wait_for_subscriptions('postgres', @schemas)\r\n> + or die \"Timed out while waiting for subscriber to synchronize\r\n> +data\";\r\n> \r\n> +# Enter unique data for both schemas on the publisher. This should\r\n> +succeed on # the publisher node, and not cause any additional problems\r\n> +on the subscriber # side either, though disabled subscription \"s1\" should not\r\n> replicate anything.\r\n> +#\r\n> +for $schema (@schemas)\r\nFixed.\r\n \r\n> 4) Since subid is used only at one place, no need of subid variable, you could\r\n> replace subid with subform->oid in LockSharedObject\r\n> + Datum values[Natts_pg_subscription];\r\n> + HeapTuple tup;\r\n> + Oid subid;\r\n> + Form_pg_subscription subform;\r\n> \r\n> + subid = subform->oid;\r\n> + LockSharedObject(SubscriptionRelationId, subid, 0,\r\n> + AccessExclusiveLock);\r\nFixed.\r\n\r\n> 5) \"permissions errors\" should be \"permission errors\"\r\n> + Specifies whether the subscription should be automatically\r\n> disabled\r\n> + if replicating data from the publisher triggers non-transient errors\r\n> + such as referential integrity or permissions errors. 
The default is\r\n> + <literal>false</literal>.\r\n> + </para>\r\nFixed.\r\n\r\nThe new patch v8 is shared in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB83735AA021E0F614A3AB3221ED679%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Tue, 30 Nov 2021 12:13:29 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tue, Nov 30, 2021 at 5:34 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Tuesday, November 30, 2021 1:10 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > On Sat, Nov 27, 2021 at 1:36 AM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > This v7 uses v26 of skip xid patch [1]\n> > This patch no longer applies on the latest source.\n> > Also, the patch is missing an update to doc/src/sgml/catalogs.sgml, for the\n> > new \"subdisableonerr\" column of pg_subscription.\n> Thanks for your review !\n>\n> Fixed the documentation accordingly. 
Further,\n> this comment invoked some more refactoring of codes\n> since I wrote some internal codes related to\n> 'disable_on_error' in an inconsistent order.\n> I fixed this by keeping patch's codes\n> after that of 'two_phase' subscription option as much as possible.\n>\n> I also conducted both pgindent and pgperltidy.\n>\n> Now, I'll share the v8 that uses PG\n> whose commit id is after 8d74fc9 (pg_stat_subscription_workers).\n\nThanks for the updated patch, few small comments:\n1) This should be changed:\n+ <structfield>subdisableonerr</structfield> <type>bool</type>\n+ </para>\n+ <para>\n+ If true, the subscription will be disabled when subscription\n+ worker detects an error\n+ </para></entry>\n+ </row>\n\nto:\n+ <structfield>subdisableonerr</structfield> <type>bool</type>\n+ </para>\n+ <para>\n+ If true, the subscription will be disabled when subscription\n+ worker detects non-transient errors\n+ </para></entry>\n+ </row>\n\n\n2) \"Disable On Err\" can be changed to \"Disable On Error\"\n+ \",\nsubtwophasestate AS \\\"%s\\\"\\n\"\n+ \",\nsubdisableonerr AS \\\"%s\\\"\\n\",\n+\ngettext_noop(\"Two phase commit\"),\n+\ngettext_noop(\"Disable On Err\"));\n\n3) Can add a line in the commit message saying \"Bump catalog version.\"\nas the patch involves changing the catalog.\n\n4) This prototype is not required, since the function is called after\nthe function definition:\n static inline void set_apply_error_context_xact(TransactionId xid,\nTimestampTz ts);\n static inline void reset_apply_error_context_info(void);\n+static bool IsTransientError(ErrorData *edata);\n\n5) we could use the new style here:\n+ ereport(LOG,\n+ (errmsg(\"logical replication subscription\n\\\"%s\\\" will be disabled due to error: %s\",\n+ MySubscription->name, edata->message)));\n\nchange it to:\n+ ereport(LOG,\n+ errmsg(\"logical replication subscription\n\\\"%s\\\" will be disabled due to error: %s\",\n+ MySubscription->name, edata->message));\n\nSimilarly it can be changed in the 
other ereports added.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 1 Dec 2021 11:31:32 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wednesday, December 1, 2021 3:02 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> On Tue, Nov 30, 2021 at 5:34 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Tuesday, November 30, 2021 1:10 PM Greg Nancarrow\r\n> <gregn4422@gmail.com> wrote:\r\n> > > On Sat, Nov 27, 2021 at 1:36 AM osumi.takamichi@fujitsu.com\r\n> > > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > >\r\n> > > > This v7 uses v26 of skip xid patch [1]\r\n> > > This patch no longer applies on the latest source.\r\n> > > Also, the patch is missing an update to doc/src/sgml/catalogs.sgml,\r\n> > > for the new \"subdisableonerr\" column of pg_subscription.\r\n> > Thanks for your review !\r\n> >\r\n> > Fixed the documentation accordingly. 
Further, this comment invoked\r\n> > some more refactoring of codes since I wrote some internal codes\r\n> > related to 'disable_on_error' in an inconsistent order.\r\n> > I fixed this by keeping patch's codes\r\n> > after that of 'two_phase' subscription option as much as possible.\r\n> >\r\n> > I also conducted both pgindent and pgperltidy.\r\n> >\r\n> > Now, I'll share the v8 that uses PG\r\n> > whose commit id is after 8d74fc9 (pg_stat_subscription_workers).\r\n> \r\n> Thanks for the updated patch, few small comments:\r\nI appreciate your check.\r\n\r\n> 1) This should be changed:\r\n> + <structfield>subdisableonerr</structfield> <type>bool</type>\r\n> + </para>\r\n> + <para>\r\n> + If true, the subscription will be disabled when subscription\r\n> + worker detects an error\r\n> + </para></entry>\r\n> + </row>\r\n> \r\n> to:\r\n> + <structfield>subdisableonerr</structfield> <type>bool</type>\r\n> + </para>\r\n> + <para>\r\n> + If true, the subscription will be disabled when subscription\r\n> + worker detects non-transient errors\r\n> + </para></entry>\r\n> + </row>\r\nFixed. Actually, there's no clear definition what \"non-transient\" means\r\nin the documentation. 
So, I added some words to your suggestion,\r\nwhich would give clearer understanding to users.\r\n\r\n> 2) \"Disable On Err\" can be changed to \"Disable On Error\"\r\n> + \",\r\n> subtwophasestate AS \\\"%s\\\"\\n\"\r\n> + \",\r\n> subdisableonerr AS \\\"%s\\\"\\n\",\r\n> +\r\n> gettext_noop(\"Two phase commit\"),\r\n> +\r\n> gettext_noop(\"Disable On Err\"));\r\nFixed.\r\n\r\n> 3) Can add a line in the commit message saying \"Bump catalog version.\"\r\n> as the patch involves changing the catalog.\r\nHmm, let me postpone this fix till the final version.\r\nThe catalog version gets easily updated by other patch commits\r\nand including it in the middle of development can become\r\ncause of conflicts of my patch when applied to the PG,\r\nwhich is possible to make other reviewers stop reviewing.\r\n\r\n> 4) This prototype is not required, since the function is called after the function\r\n> definition:\r\n> static inline void set_apply_error_context_xact(TransactionId xid,\r\n> TimestampTz ts); static inline void reset_apply_error_context_info(void);\r\n> +static bool IsTransientError(ErrorData *edata);\r\nFixed.\r\n\r\n> 5) we could use the new style here:\r\n> + ereport(LOG,\r\n> + (errmsg(\"logical replication subscription\r\n> \\\"%s\\\" will be disabled due to error: %s\",\r\n> + MySubscription->name,\r\n> + edata->message)));\r\n> \r\n> change it to:\r\n> + ereport(LOG,\r\n> + errmsg(\"logical replication subscription\r\n> \\\"%s\\\" will be disabled due to error: %s\",\r\n> + MySubscription->name,\r\n> + edata->message));\r\n> \r\n> Similarly it can be changed in the other ereports added.\r\nRemoved the unnecessary parentheses.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Wed, 1 Dec 2021 12:25:35 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wed, Dec 1, 2021 at 
5:55 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Wednesday, December 1, 2021 3:02 PM vignesh C <vignesh21@gmail.com> wrote:\n> > On Tue, Nov 30, 2021 at 5:34 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n>\n> > 3) Can add a line in the commit message saying \"Bump catalog version.\"\n> > as the patch involves changing the catalog.\n> Hmm, let me postpone this fix till the final version.\n> The catalog version gets easily updated by other patch commits\n> and including it in the middle of development can become\n> cause of conflicts of my patch when applied to the PG,\n> which is possible to make other reviewers stop reviewing.\n>\n\nVignesh seems to be suggesting just changing the commit message, not\nthe actual code. This is sort of a reminder to the committer to change\nthe catversion before pushing the patch. So that shouldn't cause any\nconflicts while applying your patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 1 Dec 2021 18:46:24 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wednesday, December 1, 2021 10:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Wed, Dec 1, 2021 at 5:55 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Wednesday, December 1, 2021 3:02 PM vignesh C\r\n> <vignesh21@gmail.com> wrote:\r\n> > > On Tue, Nov 30, 2021 at 5:34 PM osumi.takamichi@fujitsu.com\r\n> > > <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > > 3) Can add a line in the commit message saying \"Bump catalog version.\"\r\n> > > as the patch involves changing the catalog.\r\n> > Hmm, let me postpone this fix till the final version.\r\n> > The catalog version gets easily updated by other patch commits and\r\n> > including it in the middle of development can become cause of\r\n> > 
conflicts of my patch when applied to the PG, which is possible to\r\n> > make other reviewers stop reviewing.\r\n> >\r\n> \r\n> Vignesh seems to be suggesting just changing the commit message, not the\r\n> actual code. This is sort of a reminder to the committer to change the catversion\r\n> before pushing the patch. So that shouldn't cause any conflicts while applying\r\n> your patch.\r\nAh, sorry for my misunderstanding.\r\nUpdated the patch to include the notification.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Thu, 2 Dec 2021 01:05:16 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Thu, Dec 2, 2021 at 12:05 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Updated the patch to include the notification.\n>\n\nFor the catalogs.sgml update, I was thinking that the following\nwording might sound a bit better:\n\n+ If true, the subscription will be disabled when a subscription\n+ worker detects non-transient errors (e.g. duplication error)\n+ that require user intervention to resolve.\n\nWhat do you think?\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 2 Dec 2021 13:42:47 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Thu, Dec 2, 2021 at 6:35 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Wednesday, December 1, 2021 10:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Updated the patch to include the notification.\n>\n\nThe patch disables the subscription for non-transient errors. I am not\nsure if we can easily make the call to decide whether any particular\nerror is transient or not. 
For example, DISK_FULL or OUT_OF_MEMORY\nmight not rectify itself. Why not just allow to disable the\nsubscription on any error? And then let the user check the error\neither in view or logs and decide whether it would like to enable the\nsubscription or do something before it (like making space in disk, or\nfixing the network).\n\nThe other problem I see with this transient error stuff is maintaining\nthe list of error codes that we think are transient. I think we need a\ndiscussion for each of the error_codes we are listing now and whatever\nnew error_code we add in the future which doesn't seem like a good\nidea.\n\nI think the code to deal with apply worker errors and then disable the\nsubscription has some flaws. Say, while disabling the subscription if\nit leads to another error then I think the original error won't be\nreported. Can't we simply emit the error via EmitErrorReport and then\ndo AbortOutOfAnyTransaction, FlushErrorState, and any other memory\ncontext clean up if required and then disable the subscription after\ncoming out of catch?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 2 Dec 2021 10:18:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Thursday, December 2, 2021 1:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, Dec 2, 2021 at 6:35 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Wednesday, December 1, 2021 10:16 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > Updated the patch to include the notification.\r\n> >\r\n> The patch disables the subscription for non-transient errors. I am not sure if we\r\n> can easily make the call to decide whether any particular error is transient or\r\n> not. 
For example, DISK_FULL or OUT_OF_MEMORY might not rectify itself.\r\n> Why not just allow to disable the subscription on any error? And then let the\r\n> user check the error either in view or logs and decide whether it would like to\r\n> enable the subscription or do something before it (like making space in disk, or\r\n> fixing the network).\r\nAgreed. I'll treat any errors as the trigger of the feature\r\nin the next version.\r\n\r\n> The other problem I see with this transient error stuff is maintaining the list of\r\n> error codes that we think are transient. I think we need a discussion for each of\r\n> the error_codes we are listing now and whatever new error_code we add in the\r\n> future which doesn't seem like a good idea.\r\nThis is also true. The maintenance cost of my current implementation\r\ndidn't sound cheap.\r\n\r\n> I think the code to deal with apply worker errors and then disable the\r\n> subscription has some flaws. Say, while disabling the subscription if it leads to\r\n> another error then I think the original error won't be reported. Can't we simply\r\n> emit the error via EmitErrorReport and then do AbortOutOfAnyTransaction,\r\n> FlushErrorState, and any other memory context clean up if required and then\r\n> disable the subscription after coming out of catch?\r\nYou are right. 
I'll fix related parts accordingly.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Thu, 2 Dec 2021 07:40:44 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "Thursday, December 2, 2021 4:41 PM I wrote:\r\n> On Thursday, December 2, 2021 1:49 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > On Thu, Dec 2, 2021 at 6:35 AM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com> wrote:\r\n> > >\r\n> > > On Wednesday, December 1, 2021 10:16 PM Amit Kapila\r\n> > <amit.kapila16@gmail.com> wrote:\r\n> > > Updated the patch to include the notification.\r\n> > >\r\n> > The patch disables the subscription for non-transient errors. I am not\r\n> > sure if we can easily make the call to decide whether any particular\r\n> > error is transient or not. For example, DISK_FULL or OUT_OF_MEMORY\r\n> might not rectify itself.\r\n> > Why not just allow to disable the subscription on any error? And then\r\n> > let the user check the error either in view or logs and decide whether\r\n> > it would like to enable the subscription or do something before it\r\n> > (like making space in disk, or fixing the network).\r\n> Agreed. I'll treat any errors as the trigger of the feature in the next version.\r\n> \r\n> > The other problem I see with this transient error stuff is maintaining\r\n> > the list of error codes that we think are transient. I think we need a\r\n> > discussion for each of the error_codes we are listing now and whatever\r\n> > new error_code we add in the future which doesn't seem like a good idea.\r\n> This is also true. The maintenance cost of my current implementation didn't\r\n> sound cheap.\r\n> \r\n> > I think the code to deal with apply worker errors and then disable the\r\n> > subscription has some flaws. 
Say, while disabling the subscription if\r\n> > it leads to another error then I think the original error won't be\r\n> > reported. Can't we simply emit the error via EmitErrorReport and then\r\n> > do AbortOutOfAnyTransaction, FlushErrorState, and any other memory\r\n> > context clean up if required and then disable the subscription after coming\r\n> out of catch?\r\n> You are right. I'll fix related parts accordingly.\r\nHi, I've made a new patch v11 that incorporated suggestions described above.\r\n\r\nThere are several notes to share regarding v11 modifications.\r\n\r\n1. Modified the commit message a bit.\r\n\r\n2. DisableSubscriptionOnError() doesn't return ErrData anymore,\r\nsince now to emit error message is done in the error recovery area\r\nand the function purpose has become purely to run a transaction to disable\r\nthe subscription.\r\n\r\n3. In DisableSubscriptionOnError(), v10 rethrew the error if the disable_on_error\r\nflag became false in the interim, but v11 just closes the transaction and\r\nfinishes the function.\r\n\r\n4. If table sync worker detects an error during synchronization\r\nand needs to disable the subscription, the worker disables it and just exit by proc_exit.\r\nThe processing after disabling the subscription didn't look necessary to me\r\nfor disabled subscription.\r\n\r\n5. Only when we succeed in the table synchronization, it's necessary to\r\nallocate slot name in long-lived context, after the table synchronization in\r\nApplyWorkerMain(). Otherwise, we'll see junk value of syncslotname\r\nbecause it is the return value of LogicalRepSyncTableStart().\r\n\r\n6. There are 3 places for error recovery in ApplyWorkerMain().\r\nAll of those might look similar but I didn't make an united function for them.\r\nThose are slightly different from each other and I felt\r\nreadability is reduced by grouping them into one type of function call.\r\n\r\n7. 
In v11, I covered the case that apply worker failed with\r\napply_error_callback_arg.command == 0, as one path to disable the subscription\r\nin order to cover all errors.\r\n\r\n8. I changed one flag name from 'disable_subscription' to 'did_error'\r\nin ApplyWorkerMain().\r\n\r\n9. All chages in this version are C codes and checked by pgindent.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Fri, 3 Dec 2021 13:20:35 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Sat, Dec 4, 2021 at 12:20 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Hi, I've made a new patch v11 that incorporated suggestions described above.\n>\n\nSome review comments for the v11 patch:\n\ndoc/src/sgml/ref/create_subscription.sgml\n(1) Possible wording improvement?\n\nBEFORE:\n+ Specifies whether the subscription should be automatically disabled\n+ if replicating data from the publisher triggers errors. The default\n+ is <literal>false</literal>.\nAFTER:\n+ Specifies whether the subscription should be automatically disabled\n+ if any errors are detected by subscription workers during data\n+ replication from the publisher. The default is <literal>false</literal>.\n\nsrc/backend/replication/logical/worker.c\n(2) WorkerErrorRecovery comments\nInstead of:\n\n+ * As a preparation for disabling the subscription, emit the error,\n+ * handle the transaction and clean up the memory context of\n+ * error. 
ErrorContext is reset by FlushErrorState.\n\nwhy not just say:\n\n+ Worker error recovery processing, in preparation for disabling the\n+ subscription.\n\nAnd then comment the function's code lines:\n\ne.g.\n\n/* Emit the error */\n...\n/* Abort any active transaction */\n...\n/* Reset the ErrorContext */\n...\n\n(3) DisableSubscriptionOnError return\n\nThe \"if (!subform->subdisableonerr)\" block should probably first:\n heap_freetuple(tup);\n\n(regardless of the fact the only current caller will proc_exit anyway)\n\n(4) did_error flag\n\nI think perhaps the previously-used flag name \"disable_subscription\"\nis better, or maybe \"error_recovery_done\".\nAlso, I think it would look better if it was set AFTER\nWorkerErrorRecovery() was called.\n\n(5) DisableSubscriptionOnError LOG message\n\nThis version of the patch removes the LOG message:\n\n+ ereport(LOG,\n+ errmsg(\"logical replication subscription \\\"%s\\\" will be disabled due\nto error: %s\",\n+ MySubscription->name, edata->message));\n\nPerhaps a similar error message could be logged prior to EmitErrorReport()?\n\ne.g.\n \"logical replication subscription \\\"%s\\\" will be disabled due to an error\"\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Mon, 6 Dec 2021 15:16:18 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "\n\n> On Dec 1, 2021, at 8:48 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> The patch disables the subscription for non-transient errors. I am not\n> sure if we can easily make the call to decide whether any particular\n> error is transient or not. For example, DISK_FULL or OUT_OF_MEMORY\n> might not rectify itself. Why not just allow to disable the\n> subscription on any error? 
And then let the user check the error\n> either in view or logs and decide whether it would like to enable the\n> subscription or do something before it (like making space in disk, or\n> fixing the network).\n\nThe original idea of the patch, back when I first wrote and proposed it, was to remove the *absurdity* of retrying a transaction which, in the absence of human intervention, was guaranteed to simply fail again ad infinitum. Retrying in the face of resource errors is not *absurd* even though it might fail again ad infinitum. The reason is that there is at least a chance that the situation will clear up without human intervention.\n\n> The other problem I see with this transient error stuff is maintaining\n> the list of error codes that we think are transient. I think we need a\n> discussion for each of the error_codes we are listing now and whatever\n> new error_code we add in the future which doesn't seem like a good\n> idea.\n\nA reasonable rule might be: \"the subscription will be disabled if the server can determine that retries cannot possibly succeed without human intervention.\" We shouldn't need to categorize all error codes perfectly, as long as we're conservative. 
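The conservative rule described here could look like the following standalone sketch. The function name and the SQLSTATE list are hypothetical, chosen only to illustrate the policy "classify an error as permanent only when certain; default everything else to maybe-transient" — they are not taken from the patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Conservative classification: return true only for error codes known
 * to be unrecoverable without human intervention.  Unknown codes fall
 * through to false ("maybe transient"), the safe default.
 * The list below is hypothetical, for illustration only.
 */
static bool
is_permanent_apply_error(const char *sqlstate)
{
    static const char *const permanent[] = {
        "23505",                /* unique_violation */
        "42703",                /* undefined_column */
        "22P02",                /* invalid_text_representation */
    };

    for (size_t i = 0; i < sizeof(permanent) / sizeof(permanent[0]); i++)
    {
        if (strcmp(sqlstate, permanent[i]) == 0)
            return true;
    }
    /* deadlocks, out-of-memory, disk-full, network errors, ... */
    return false;
}
```

Under this shape, an unlisted code never disables the subscription, which matches the "we don't have to be complete, just never wrong" leakproof analogy.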
What I propose is similar to how we determine whether to mark a function leakproof; we don't have to mark all leakproof functions as such, we just can't mark one as such if it is not.\n\nIf we're going to debate the error codes, I think we would start with an empty list, and add to the list on sufficient analysis.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sun, 5 Dec 2021 20:37:44 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Monday, December 6, 2021 1:38 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\r\n> > On Dec 1, 2021, at 8:48 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > The patch disables the subscription for non-transient errors. I am not\r\n> > sure if we can easily make the call to decide whether any particular\r\n> > error is transient or not. For example, DISK_FULL or OUT_OF_MEMORY\r\n> > might not rectify itself. Why not just allow to disable the\r\n> > subscription on any error? And then let the user check the error\r\n> > either in view or logs and decide whether it would like to enable the\r\n> > subscription or do something before it (like making space in disk, or\r\n> > fixing the network).\r\n> \r\n> The original idea of the patch, back when I first wrote and proposed it, was to\r\n> remove the *absurdity* of retrying a transaction which, in the absence of\r\n> human intervention, was guaranteed to simply fail again ad infinitum.\r\n> Retrying in the face of resource errors is not *absurd* even though it might fail\r\n> again ad infinitum. 
The reason is that there is at least a chance that the\r\n> situation will clear up without human intervention.\r\nIn my humble opinion, I felt the original purpose of the patch was to partially remedy\r\nthe situation that during the failure of apply, the apply process keeps going\r\ninto the infinite error loop.\r\n\r\nI'd say that in this sense, if we include such resource errors, we fail to achieve\r\nthe purpose in some cases, because of some left possibilities of infinite loop.\r\nDisabling the subscription with even one any error excludes this irregular possibility,\r\nsince there's no room to continue the infinite loop.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Mon, 6 Dec 2021 06:56:16 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Monday, December 6, 2021 1:16 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\r\n> On Sat, Dec 4, 2021 at 12:20 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > Hi, I've made a new patch v11 that incorporated suggestions described\r\n> above.\r\n> >\r\n> \r\n> Some review comments for the v11 patch:\r\nThank you for your reviews !\r\n \r\n> doc/src/sgml/ref/create_subscription.sgml\r\n> (1) Possible wording improvement?\r\n> \r\n> BEFORE:\r\n> + Specifies whether the subscription should be automatically disabled\r\n> + if replicating data from the publisher triggers errors. The default\r\n> + is <literal>false</literal>.\r\n> AFTER:\r\n> + Specifies whether the subscription should be automatically disabled\r\n> + if any errors are detected by subscription workers during data\r\n> + replication from the publisher. 
The default is <literal>false</literal>.\r\nFixed.\r\n\r\n> src/backend/replication/logical/worker.c\r\n> (2) WorkerErrorRecovery comments\r\n> Instead of:\r\n> \r\n> + * As a preparation for disabling the subscription, emit the error,\r\n> + * handle the transaction and clean up the memory context of\r\n> + * error. ErrorContext is reset by FlushErrorState.\r\n> \r\n> why not just say:\r\n> \r\n> + Worker error recovery processing, in preparation for disabling the\r\n> + subscription.\r\n> \r\n> And then comment the function's code lines:\r\n> \r\n> e.g.\r\n> \r\n> /* Emit the error */\r\n> ...\r\n> /* Abort any active transaction */\r\n> ...\r\n> /* Reset the ErrorContext */\r\n> ...\r\nAgreed. Fixed.\r\n \r\n> (3) DisableSubscriptionOnError return\r\n> \r\n> The \"if (!subform->subdisableonerr)\" block should probably first:\r\n> heap_freetuple(tup);\r\n> \r\n> (regardless of the fact the only current caller will proc_exit anyway)\r\nFixed.\r\n \r\n> (4) did_error flag\r\n> \r\n> I think perhaps the previously-used flag name \"disable_subscription\"\r\n> is better, or maybe \"error_recovery_done\".\r\n> Also, I think it would look better if it was set AFTER\r\n> WorkerErrorRecovery() was called.\r\nAdopted error_recovery_done\r\nand changed its places accordingly.\r\n \r\n> (5) DisableSubscriptionOnError LOG message\r\n> \r\n> This version of the patch removes the LOG message:\r\n> \r\n> + ereport(LOG,\r\n> + errmsg(\"logical replication subscription \\\"%s\\\" will be disabled due\r\n> to error: %s\",\r\n> + MySubscription->name, edata->message));\r\n> \r\n> Perhaps a similar error message could be logged prior to EmitErrorReport()?\r\n> \r\n> e.g.\r\n> \"logical replication subscription \\\"%s\\\" will be disabled due to an error\"\r\nAdded.\r\n\r\nI've attached the new version v12.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Mon, 6 Dec 2021 10:52:32 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", 
"msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Mon, Dec 6, 2021 at 10:07 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> > On Dec 1, 2021, at 8:48 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > The patch disables the subscription for non-transient errors. I am not\n> > sure if we can easily make the call to decide whether any particular\n> > error is transient or not. For example, DISK_FULL or OUT_OF_MEMORY\n> > might not rectify itself. Why not just allow to disable the\n> > subscription on any error? And then let the user check the error\n> > either in view or logs and decide whether it would like to enable the\n> > subscription or do something before it (like making space in disk, or\n> > fixing the network).\n>\n> The original idea of the patch, back when I first wrote and proposed it, was to remove the *absurdity* of retrying a transaction which, in the absence of human intervention, was guaranteed to simply fail again ad infinitum. Retrying in the face of resource errors is not *absurd* even though it might fail again ad infinitum. The reason is that there is at least a chance that the situation will clear up without human intervention.\n>\n> > The other problem I see with this transient error stuff is maintaining\n> > the list of error codes that we think are transient. I think we need a\n> > discussion for each of the error_codes we are listing now and whatever\n> > new error_code we add in the future which doesn't seem like a good\n> > idea.\n>\n> A reasonable rule might be: \"the subscription will be disabled if the server can determine that retries cannot possibly succeed without human intervention.\" We shouldn't need to categorize all error codes perfectly, as long as we're conservative. 
What I propose is similar to how we determine whether to mark a function leakproof; we don't have to mark all leakproof functions as such, we just can't mark one as such if it is not.\n>\n> If we're going to debate the error codes, I think we would start with an empty list, and add to the list on sufficient analysis.\n>\n\nYeah, an empty list is a sort of what I thought was a good start\npoint. I feel we should learn from real-world use cases to see if\npeople really want to continue retrying even after using this option.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 6 Dec 2021 16:29:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "\n\n> On Dec 5, 2021, at 10:56 PM, osumi.takamichi@fujitsu.com wrote:\n> \n> In my humble opinion, I felt the original purpose of the patch was to partially remedy\n> the situation that during the failure of apply, the apply process keeps going\n> into the infinite error loop.\n\nI agree.\n\n> I'd say that in this sense, if we include such resource errors, we fail to achieve\n> the purpose in some cases, because of some left possibilities of infinite loop.\n> Disabling the subscription with even one any error excludes this irregular possibility,\n> since there's no room to continue the infinite loop.\n\nI don't think there is any right answer here. It's a question of policy preferences.\n\nMy concern about disabling a subscription in response to *any* error is that people may find the feature does more harm than good. Disabling the subscription in response to an occasional deadlock against other database users, or occasional resource pressure, might annoy people and lead to the feature simply not being used.\n\nI am happy to defer to your policy preference. 
Thanks for your work on the patch!\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 6 Dec 2021 08:06:02 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tue, Dec 7, 2021 at 3:06 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>\n> My concern about disabling a subscription in response to *any* error is that people may find the feature does more harm than good. Disabling the subscription in response to an occasional deadlock against other database users, or occasional resource pressure, might annoy people and lead to the feature simply not being used.\n>\nI can understand this point of view.\nIt kind of suggests to me the possibility of something like a\nconfigurable timeout (e.g. disable the subscription if the same error\nhas occurred for more than X minutes) or, similarly, perhaps if some\nthreshold has been reached (e.g. same error has occurred more than X\ntimes), but I think that this was previously suggested by Peter Smith\nand the idea wasn't looked upon all that favorably?\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Tue, 7 Dec 2021 11:22:08 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tue, Dec 7, 2021 at 5:52 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Tue, Dec 7, 2021 at 3:06 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> >\n> > My concern about disabling a subscription in response to *any* error is that people may find the feature does more harm than good. 
Disabling the subscription in response to an occasional deadlock against other database users, or occasional resource pressure, might annoy people and lead to the feature simply not being used.\n> >\n> I can understand this point of view.\n> It kind of suggests to me the possibility of something like a\n> configurable timeout (e.g. disable the subscription if the same error\n> has occurred for more than X minutes) or, similarly, perhaps if some\n> threshold has been reached (e.g. same error has occurred more than X\n> times), but I think that this was previously suggested by Peter Smith\n> and the idea wasn't looked upon all that favorably?\n>\n\nI think if we are really worried about transient errors then probably\nthe idea \"disable only if the same error has occurred more than X\ntimes\" seems preferable as compared to taking a decision on which\nerror_codes fall in the transient error category.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 8 Dec 2021 18:40:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "\n\n> On Dec 8, 2021, at 5:10 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> I think if we are really worried about transient errors then probably\n> the idea \"disable only if the same error has occurred more than X\n> times\" seems preferable as compared to taking a decision on which\n> error_codes fall in the transient error category.\n\nNo need. 
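For reference, the "disable only if the same error has occurred more than X times" heuristic mentioned above could be tracked along these lines. This is a hypothetical standalone sketch of the idea that was set aside in this discussion; the `ErrorStreak` type and `record_failure` function do not exist in the patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Tracks consecutive failures with the same SQLSTATE; the subscription
 * would be disabled only once the same error repeats past a threshold.
 */
typedef struct ErrorStreak
{
    char        sqlstate[6];    /* 5-char SQLSTATE plus NUL */
    int         count;
} ErrorStreak;

/*
 * Record one apply failure.  Returns true when the same error code has
 * now occurred more than max_retries times in a row, i.e. the caller
 * should disable the subscription instead of retrying again.
 */
static bool
record_failure(ErrorStreak *st, const char *sqlstate, int max_retries)
{
    if (strcmp(st->sqlstate, sqlstate) == 0)
        st->count++;
    else
    {
        /* A different error starts a new streak. */
        strncpy(st->sqlstate, sqlstate, sizeof(st->sqlstate) - 1);
        st->sqlstate[sizeof(st->sqlstate) - 1] = '\0';
        st->count = 1;
    }
    return st->count > max_retries;
}
```

A transient failure (e.g. a deadlock) that clears up before the threshold would never disable the subscription, while a persistently repeating error eventually would.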
We can revisit this design decision in a later release cycle if the current patch's design proves problematic in the field.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 8 Dec 2021 07:52:40 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wed, Dec 8, 2021 at 9:22 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>\n>\n> > On Dec 8, 2021, at 5:10 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I think if we are really worried about transient errors then probably\n> > the idea \"disable only if the same error has occurred more than X\n> > times\" seems preferable as compared to taking a decision on which\n> > error_codes fall in the transient error category.\n>\n> No need. We can revisit this design decision in a later release cycle if the current patch's design proves problematic in the field.\n>\n\nSo, do you agree that we can disable the subscription on any error if\nthis parameter is set?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 9 Dec 2021 09:39:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "\n\n> On Dec 8, 2021, at 8:09 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> So, do you agree that we can disable the subscription on any error if\n> this parameter is set?\n\nYes, I think that is fine. 
We can commit it that way, and revisit the issue for v16 if it becomes a problem in practice.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 9 Dec 2021 08:44:50 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Mon, Dec 6, 2021 at 4:22 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, December 6, 2021 1:16 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > On Sat, Dec 4, 2021 at 12:20 AM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > Hi, I've made a new patch v11 that incorporated suggestions described\n> > above.\n> > >\n> >\n> > Some review comments for the v11 patch:\n> Thank you for your reviews !\n>\n> > doc/src/sgml/ref/create_subscription.sgml\n> > (1) Possible wording improvement?\n> >\n> > BEFORE:\n> > + Specifies whether the subscription should be automatically disabled\n> > + if replicating data from the publisher triggers errors. The default\n> > + is <literal>false</literal>.\n> > AFTER:\n> > + Specifies whether the subscription should be automatically disabled\n> > + if any errors are detected by subscription workers during data\n> > + replication from the publisher. The default is <literal>false</literal>.\n> Fixed.\n>\n> > src/backend/replication/logical/worker.c\n> > (2) WorkerErrorRecovery comments\n> > Instead of:\n> >\n> > + * As a preparation for disabling the subscription, emit the error,\n> > + * handle the transaction and clean up the memory context of\n> > + * error. 
ErrorContext is reset by FlushErrorState.\n> >\n> > why not just say:\n> >\n> > + Worker error recovery processing, in preparation for disabling the\n> > + subscription.\n> >\n> > And then comment the function's code lines:\n> >\n> > e.g.\n> >\n> > /* Emit the error */\n> > ...\n> > /* Abort any active transaction */\n> > ...\n> > /* Reset the ErrorContext */\n> > ...\n> Agreed. Fixed.\n>\n> > (3) DisableSubscriptionOnError return\n> >\n> > The \"if (!subform->subdisableonerr)\" block should probably first:\n> > heap_freetuple(tup);\n> >\n> > (regardless of the fact the only current caller will proc_exit anyway)\n> Fixed.\n>\n> > (4) did_error flag\n> >\n> > I think perhaps the previously-used flag name \"disable_subscription\"\n> > is better, or maybe \"error_recovery_done\".\n> > Also, I think it would look better if it was set AFTER\n> > WorkerErrorRecovery() was called.\n> Adopted error_recovery_done\n> and changed its places accordingly.\n>\n> > (5) DisableSubscriptionOnError LOG message\n> >\n> > This version of the patch removes the LOG message:\n> >\n> > + ereport(LOG,\n> > + errmsg(\"logical replication subscription \\\"%s\\\" will be disabled due\n> > to error: %s\",\n> > + MySubscription->name, edata->message));\n> >\n> > Perhaps a similar error message could be logged prior to EmitErrorReport()?\n> >\n> > e.g.\n> > \"logical replication subscription \\\"%s\\\" will be disabled due to an error\"\n> Added.\n>\n> I've attached the new version v12.\n\nThanks for the updated patch, few comments:\n1) This is not required as it is not used in the caller.\n+++ b/src/backend/replication/logical/launcher.c\n@@ -132,6 +132,7 @@ get_subscription_list(void)\n sub->dbid = subform->subdbid;\n sub->owner = subform->subowner;\n sub->enabled = subform->subenabled;\n+ sub->disableonerr = subform->subdisableonerr;\n sub->name = pstrdup(NameStr(subform->subname));\n /* We don't fill fields we are not interested in. 
*/\n\n2) Should this be changed:\n+ <structfield>subdisableonerr</structfield> <type>bool</type>\n+ </para>\n+ <para>\n+ If true, the subscription will be disabled when subscription\n+ worker detects any errors\n+ </para></entry>\n+ </row>\nTo:\n+ <structfield>subdisableonerr</structfield> <type>bool</type>\n+ </para>\n+ <para>\n+ If true, the subscription will be disabled when subscription's\n+ worker detects any errors\n+ </para></entry>\n+ </row>\n\n3) The last line can be slightly adjusted to keep within 80 chars:\n+ Specifies whether the subscription should be automatically disabled\n+ if any errors are detected by subscription workers during data\n+ replication from the publisher. The default is\n<literal>false</literal>.\n\n4) Similarly this too can be handled:\n--- a/src/backend/catalog/system_views.sql\n+++ b/src/backend/catalog/system_views.sql\n@@ -1259,7 +1259,7 @@ REVOKE ALL ON pg_replication_origin_status FROM public;\n -- All columns of pg_subscription except subconninfo are publicly readable.\n REVOKE ALL ON pg_subscription FROM public;\n GRANT SELECT (oid, subdbid, subname, subowner, subenabled, subbinary,\n- substream, subtwophasestate, subslotname,\nsubsynccommit, subpublications)\n+ substream, subtwophasestate, subdisableonerr,\nsubslotname, subsynccommit, subpublications)\n ON pg_subscription TO public;\n\n5) Since disabling subscription code is common in and else, can we\nmove it below:\n+ if (MySubscription->disableonerr)\n+ {\n+ WorkerErrorRecovery();\n+ error_recovery_done = true;\n+ }\n+ else\n+ {\n+ /*\n+ * Some work in error recovery work is\ndone. 
Switch to the old\n+ * memory context and rethrow.\n+ */\n+ MemoryContextSwitchTo(ecxt);\n+ PG_RE_THROW();\n+ }\n+ }\n+ else\n+ {\n+ /*\n+ * Don't miss any error, even when it's not\nreported to stats\n+ * collector.\n+ */\n+ if (MySubscription->disableonerr)\n+ {\n+ WorkerErrorRecovery();\n+ error_recovery_done = true;\n+ }\n+ else\n+ /* Simply rethrow because of no recovery work */\n+ PG_RE_THROW();\n+ }\n\n6) Can we move LockSharedObject below the if condition.\n+ subform = (Form_pg_subscription) GETSTRUCT(tup);\n+ LockSharedObject(SubscriptionRelationId, subform->oid, 0,\nAccessExclusiveLock);\n+\n+ /*\n+ * We would not be here unless this subscription's\ndisableonerr field was\n+ * true when our worker began applying changes, but check whether that\n+ * field has changed in the interim.\n+ */\n+ if (!subform->subdisableonerr)\n+ {\n+ /*\n+ * Disabling the subscription has been done already. No need of\n+ * additional work.\n+ */\n+ heap_freetuple(tup);\n+ table_close(rel, RowExclusiveLock);\n+ CommitTransactionCommand();\n+ return;\n+ }\n+\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 13 Dec 2021 15:27:17 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Monday, December 13, 2021 6:57 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> On Mon, Dec 6, 2021 at 4:22 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > I've attached the new version v12.\r\nI appreciate your review.\r\n\r\n\r\n> Thanks for the updated patch, few comments:\r\n> 1) This is not required as it is not used in the caller.\r\n> +++ b/src/backend/replication/logical/launcher.c\r\n> @@ -132,6 +132,7 @@ get_subscription_list(void)\r\n> sub->dbid = subform->subdbid;\r\n> sub->owner = subform->subowner;\r\n> sub->enabled = subform->subenabled;\r\n> + sub->disableonerr = subform->subdisableonerr;\r\n> sub->name = 
pstrdup(NameStr(subform->subname));\r\n> /* We don't fill fields we are not interested in. */\r\nOkay.\r\nThe comment of the get_subscription_list() mentions that\r\nwe collect and fill only fields related to worker start/stop.\r\nThen, I didn't need it. Fixed.\r\n\r\n\r\n> 2) Should this be changed:\r\n> + <structfield>subdisableonerr</structfield> <type>bool</type>\r\n> + </para>\r\n> + <para>\r\n> + If true, the subscription will be disabled when subscription\r\n> + worker detects any errors\r\n> + </para></entry>\r\n> + </row>\r\n> To:\r\n> + <structfield>subdisableonerr</structfield> <type>bool</type>\r\n> + </para>\r\n> + <para>\r\n> + If true, the subscription will be disabled when subscription's\r\n> + worker detects any errors\r\n> + </para></entry>\r\n> + </row>\r\nI felt either is fine. So fixed.\r\n\r\n\r\n> 3) The last line can be slightly adjusted to keep within 80 chars:\r\n> + Specifies whether the subscription should be automatically disabled\r\n> + if any errors are detected by subscription workers during data\r\n> + replication from the publisher. 
The default is\r\n> <literal>false</literal>.\r\nFixed.\r\n\r\n> 4) Similarly this too can be handled:\r\n> --- a/src/backend/catalog/system_views.sql\r\n> +++ b/src/backend/catalog/system_views.sql\r\n> @@ -1259,7 +1259,7 @@ REVOKE ALL ON pg_replication_origin_status FROM\r\n> public;\r\n> -- All columns of pg_subscription except subconninfo are publicly readable.\r\n> REVOKE ALL ON pg_subscription FROM public; GRANT SELECT (oid,\r\n> subdbid, subname, subowner, subenabled, subbinary,\r\n> - substream, subtwophasestate, subslotname,\r\n> subsynccommit, subpublications)\r\n> + substream, subtwophasestate, subdisableonerr,\r\n> subslotname, subsynccommit, subpublications)\r\n> ON pg_subscription TO public;\r\nI split the line into two to make each line less than 80 chars.\r\n\r\n> 5) Since disabling subscription code is common in and else, can we move it\r\n> below:\r\n> + if (MySubscription->disableonerr)\r\n> + {\r\n> + WorkerErrorRecovery();\r\n> + error_recovery_done = true;\r\n> + }\r\n> + else\r\n> + {\r\n> + /*\r\n> + * Some work in error recovery work is\r\n> done. 
Switch to the old\r\n> + * memory context and rethrow.\r\n> + */\r\n> + MemoryContextSwitchTo(ecxt);\r\n> + PG_RE_THROW();\r\n> + }\r\n> + }\r\n> + else\r\n> + {\r\n> + /*\r\n> + * Don't miss any error, even when it's not\r\n> reported to stats\r\n> + * collector.\r\n> + */\r\n> + if (MySubscription->disableonerr)\r\n> + {\r\n> + WorkerErrorRecovery();\r\n> + error_recovery_done = true;\r\n> + }\r\n> + else\r\n> + /* Simply rethrow because of no recovery\r\n> work */\r\n> + PG_RE_THROW();\r\n> + }\r\nI moved the common code below those condition branches.\r\n\r\n\r\n> 6) Can we move LockSharedObject below the if condition.\r\n> + subform = (Form_pg_subscription) GETSTRUCT(tup);\r\n> + LockSharedObject(SubscriptionRelationId, subform->oid, 0,\r\n> AccessExclusiveLock);\r\n> +\r\n> + /*\r\n> + * We would not be here unless this subscription's\r\n> disableonerr field was\r\n> + * true when our worker began applying changes, but check whether\r\n> that\r\n> + * field has changed in the interim.\r\n> + */\r\n> + if (!subform->subdisableonerr)\r\n> + {\r\n> + /*\r\n> + * Disabling the subscription has been done already. 
No need\r\n> of\r\n> + * additional work.\r\n> + */\r\n> + heap_freetuple(tup);\r\n> + table_close(rel, RowExclusiveLock);\r\n> + CommitTransactionCommand();\r\n> + return;\r\n> + }\r\n> +\r\nFixed.\r\n\r\nBesides all of those changes, I've removed the obsolete\r\ncomment of DisableSubscriptionOnError in v12.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Tue, 14 Dec 2021 05:34:53 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tue, Dec 14, 2021 at 4:34 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Besides all of those changes, I've removed the obsolete\n> comment of DisableSubscriptionOnError in v12.\n>\n\nI have a few minor comments, otherwise the patch LGTM at this point:\n\ndoc/src/sgml/catalogs.sgml\n(1)\nCurrent comment says:\n\n+ If true, the subscription will be disabled when subscription's\n+ worker detects any errors\n\nHowever, in create_subscription.sgml, it says \"disabled if any errors\nare detected by subscription workers ...\"\n\nFor consistency, I think it should be:\n\n+ If true, the subscription will be disabled when subscription\n+ workers detect any errors\n\nsrc/bin/psql/describe.c\n(2)\nI think that:\n\n+ gettext_noop(\"Disable On Error\"));\n\nshould be:\n\n+ gettext_noop(\"Disable on error\"));\n\nfor consistency with the uppercase/lowercase usage on other similar entries?\n(e.g. 
\"Two phase commit\")\n\n\nsrc/include/catalog/pg_subscription.h\n(3)\n\n+ bool subdisableonerr; /* True if apply errors should disable the\n+ * subscription upon error */\n\nThe comment should just say \"True if occurrence of apply errors should\ndisable the subscription\"\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 16 Dec 2021 16:32:08 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Thursday, December 16, 2021 2:32 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\r\n> On Tue, Dec 14, 2021 at 4:34 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > Besides all of those changes, I've removed the obsolete comment of\r\n> > DisableSubscriptionOnError in v12.\r\n> >\r\n> \r\n> I have a few minor comments, otherwise the patch LGTM at this point:\r\nThank you for your review !\r\n\r\n> doc/src/sgml/catalogs.sgml\r\n> (1)\r\n> Current comment says:\r\n> \r\n> + If true, the subscription will be disabled when subscription's\r\n> + worker detects any errors\r\n> \r\n> However, in create_subscription.sgml, it says \"disabled if any errors are\r\n> detected by subscription workers ...\"\r\n> \r\n> For consistency, I think it should be:\r\n> \r\n> + If true, the subscription will be disabled when subscription\r\n> + workers detect any errors\r\nOkay. Fixed.\r\n \r\n> src/bin/psql/describe.c\r\n> (2)\r\n> I think that:\r\n> \r\n> + gettext_noop(\"Disable On Error\"));\r\n> \r\n> should be:\r\n> \r\n> + gettext_noop(\"Disable on error\"));\r\n> \r\n> for consistency with the uppercase/lowercase usage on other similar entries?\r\n> (e.g. \"Two phase commit\")\r\nAgreed. 
Fixed.\r\n\r\n> src/include/catalog/pg_subscription.h\r\n> (3)\r\n> \r\n> + bool subdisableonerr; /* True if apply errors should disable the\r\n> + * subscription upon error */\r\n> \r\n> The comment should just say \"True if occurrence of apply errors should disable\r\n> the subscription\"\r\nFixed.\r\n\r\nAttached the updated patch v14.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Thu, 16 Dec 2021 12:51:04 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Thursday, December 16, 2021 9:51 PM I wrote:\r\n> Attached the updated patch v14.\r\nFYI, I've conducted a test of disable_on_error flag using\r\npg_upgrade. I prepared PG14 and HEAD applied with disable_on_error patch.\r\nThen, I setup a logical replication pair of the publisher and the subscriber by 14\r\nand executed pg_upgrade for both the publisher and the subscriber individually.\r\n\r\nAfter the updation, on the subscriber, I've confirmed the disable_on_error is false\r\nvia both pg_subscription and \\dRs+, as expected.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n\r\n", "msg_date": "Tue, 21 Dec 2021 14:17:31 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tuesday, December 21, 2021 11:18 PM I wrote:\r\n> On Thursday, December 16, 2021 9:51 PM I wrote:\r\n> > Attached the updated patch v14.\r\n> FYI, I've conducted a test of disable_on_error flag using pg_upgrade. 
I\r\n> prepared PG14 and HEAD applied with disable_on_error patch.\r\n> Then, I setup a logical replication pair of the publisher and the subscriber by 14\r\n> and executed pg_upgrade for both the publisher and the subscriber\r\n> individually.\r\n> \r\n> After the updation, on the subscriber, I've confirmed the disable_on_error is\r\n> false via both pg_subscription and \\dRs+, as expected.\r\nAdditionally, I've tested the new TAP test in a tight loop\r\nthat executed 027_disable_on_error.pl 100 times sequentially.\r\nThere was no failure, which means\r\nany timing issue should not exist in the test.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Wed, 22 Dec 2021 10:24:05 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Thursday, December 16, 2021 8:51 PM osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\r\n> Attached the updated patch v14.\r\n\r\nA comment to the timing of printing a log:\r\nAfter the log[1] was printed, I altered subscription's option\r\n(DISABLE_ON_ERROR) from true to false before invoking DisableSubscriptionOnError\r\nto disable subscription. Subscription was not disabled.\r\n[1] \"LOG: logical replication subscription \"sub1\" will be disabled due to an error\"\r\n\r\nI found this log is printed in function WorkerErrorRecovery:\r\n+\tereport(LOG,\r\n+\t\t\terrmsg(\"logical replication subscription \\\"%s\\\" will be disabled due to an error\",\r\n+\t\t\t\t MySubscription->name));\r\nThis log is printed here, but in DisableSubscriptionOnError, there is a check to\r\nconfirm subscription's disableonerr field. 
If disableonerr is found changed from\r\ntrue to false in DisableSubscriptionOnError, the subscription will not be disabled.\r\n\r\nIn this case, \"disable subscription\" is printed, but the subscription will not\r\nactually be disabled.\r\nI think it is a little confusing to users, so what about moving this message after\r\nthe check mentioned above in DisableSubscriptionOnError?\r\n\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Tue, 28 Dec 2021 02:52:47 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tuesday, December 28, 2021 11:53 AM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:\r\n> On Thursday, December 16, 2021 8:51 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > Attached the updated patch v14.\r\n> \r\n> A comment to the timing of printing a log:\r\nThank you for your review !\r\n\r\n> After the log[1] was printed, I altered subscription's option\r\n> (DISABLE_ON_ERROR) from true to false before invoking\r\n> DisableSubscriptionOnError to disable subscription. Subscription was not\r\n> disabled.\r\n> [1] \"LOG: logical replication subscription \"sub1\" will be disabled due to an\r\n> error\"\r\n> \r\n> I found this log is printed in function WorkerErrorRecovery:\r\n> +\tereport(LOG,\r\n> +\t\t\terrmsg(\"logical replication subscription \\\"%s\\\" will\r\n> be disabled due to an error\",\r\n> +\t\t\t\t MySubscription->name));\r\n> This log is printed here, but in DisableSubscriptionOnError, there is a check to\r\n> confirm subscription's disableonerr field. 
If disableonerr is found changed from\r\n> true to false in DisableSubscriptionOnError, subscription will not be disabled.\r\n> \r\n> In this case, \"disable subscription\" is printed, but subscription will not be\r\n> disabled actually.\r\n> I think it is a little confusing to users, so what about moving this message after\r\n> the check which is mentioned above in DisableSubscriptionOnError?\r\nMakes sense. I moved the log print to after\r\nthe check of whether the subscription needs to be disabled.\r\n\r\nAlso, I've scrutinized and refined the new TAP test as part of refactoring.\r\nAs a result, I fixed wait_for_subscriptions()\r\nso that some extra code that could be simplified,\r\nsuch as an escaped variable and one part of a WHERE clause, is removed.\r\nThe other change I made was to replace two calls of wait_for_subscriptions()\r\nwith polling_query_until() for the subscriber, in order to\r\nmake the tests better and more suitable for the test purposes.\r\nAgain, for this refinement, I've conducted a tight loop test\r\nto check for timing issues and found no problem.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Wed, 5 Jan 2022 12:53:06 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wednesday, January 5, 2022 8:53 PM osumi.takamichi@fujitsu.com \r\n<osumi.takamichi@fujitsu.com> wrote:\r\n> \r\n> On Tuesday, December 28, 2021 11:53 AM Wang, Wei/王 威\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> > On Thursday, December 16, 2021 8:51 PM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > Attached the updated patch v14.\r\n> >\r\n> > A comment to the timing of printing a log:\r\n> Thank you for your review !\r\n> \r\n> > After the log[1] was printed, I altered subscription's option\r\n> > (DISABLE_ON_ERROR) from true to false before invoking\r\n> >
DisableSubscriptionOnError to disable subscription. Subscription was not\r\n> > disabled.\r\n> > [1] \"LOG: logical replication subscription \"sub1\" will be disabled due to an\r\n> > error\"\r\n> >\r\n> > I found this log is printed in function WorkerErrorRecovery:\r\n> > +\tereport(LOG,\r\n> > +\t\t\terrmsg(\"logical replication subscription \\\"%s\\\" will\r\n> > be disabled due to an error\",\r\n> > +\t\t\t\t MySubscription->name));\r\n> > This log is printed here, but in DisableSubscriptionOnError, there is a check to\r\n> > confirm subscription's disableonerr field. If disableonerr is found changed from\r\n> > true to false in DisableSubscriptionOnError, subscription will not be disabled.\r\n> >\r\n> > In this case, \"disable subscription\" is printed, but subscription will not be\r\n> > disabled actually.\r\n> > I think it is a little confused to user, so what about moving this message after\r\n> > the check which is mentioned above in DisableSubscriptionOnError?\r\n> Makes sense. I moved the log print after\r\n> the check of the necessity to disable the subscription.\r\n> \r\n> Also, I've scrutinized and refined the new TAP test as well for refactoring.\r\n> As a result, I fixed wait_for_subscriptions()\r\n> so that some extra codes that can be simplified,\r\n> such as escaped variable and one part of WHERE clause, are removed.\r\n> Other change I did is to replace two calls of wait_for_subscriptions()\r\n> with polling_query_until() for the subscriber, in order to\r\n> make the tests better and more suitable for the test purposes.\r\n> Again, for this refinement, I've conducted a tight loop test\r\n> to check no timing issue and found no problem.\r\n> \r\n\r\nThanks for updating the patch. 
Here are some comments:\r\n\r\n1)\r\n+\t/*\r\n+\t * We would not be here unless this subscription's disableonerr field was\r\n+\t * true when our worker began applying changes, but check whether that\r\n+\t * field has changed in the interim.\r\n+\t */\r\n+\tif (!subform->subdisableonerr)\r\n+\t{\r\n+\t\t/*\r\n+\t\t * Disabling the subscription has been done already. No need of\r\n+\t\t * additional work.\r\n+\t\t */\r\n+\t\theap_freetuple(tup);\r\n+\t\ttable_close(rel, RowExclusiveLock);\r\n+\t\tCommitTransactionCommand();\r\n+\t\treturn;\r\n+\t}\r\n\r\nI don't understand what \"Disabling the subscription has been done already\"\r\nmeans; I think we only run here when subdisableonerr is changed in the interim.\r\nShould we modify this comment? Or remove it because there are already some\r\nexplanations before.\r\n\r\n2)\r\n+\t/* Set the subscription to disabled, and note the reason. */\r\n+\tvalues[Anum_pg_subscription_subenabled - 1] = BoolGetDatum(false);\r\n+\treplaces[Anum_pg_subscription_subenabled - 1] = true;\r\n\r\nI didn't see the code corresponding to \"note the reason\". Should we modify the\r\ncomment?\r\n\r\n3)\r\n+\tbool\t\tdisableonerr;\t/* Whether errors automatically disable */\r\n\r\nThis comment is hard to understand. Maybe it can be changed to:\r\n\r\nIndicates if the subscription should be automatically disabled when subscription\r\nworkers detect any errors.\r\n\r\nRegards,\r\nTang\r\n", "msg_date": "Thu, 6 Jan 2022 03:16:30 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Thursday, January 6, 2022 12:17 PM Tang, Haiying/唐 海英 <tanghy.fnst@fujitsu.com> wrote:\r\n> Thanks for updating the patch. 
Here are some comments:\r\nThank you for your review !\r\n\r\n> 1)\r\n> +\t/*\r\n> +\t * We would not be here unless this subscription's disableonerr field\r\n> was\r\n> +\t * true when our worker began applying changes, but check whether\r\n> that\r\n> +\t * field has changed in the interim.\r\n> +\t */\r\n> +\tif (!subform->subdisableonerr)\r\n> +\t{\r\n> +\t\t/*\r\n> +\t\t * Disabling the subscription has been done already. No need\r\n> of\r\n> +\t\t * additional work.\r\n> +\t\t */\r\n> +\t\theap_freetuple(tup);\r\n> +\t\ttable_close(rel, RowExclusiveLock);\r\n> +\t\tCommitTransactionCommand();\r\n> +\t\treturn;\r\n> +\t}\r\n> \r\n> I don't understand what does \"Disabling the subscription has been done\r\n> already\"\r\n> mean, I think we only run here when subdisableonerr is changed in the interim.\r\n> Should we modify this comment? Or remove it because there are already some\r\n> explanations before.\r\nRemoved. The description you pointed out was redundant.\r\n\r\n> 2)\r\n> +\t/* Set the subscription to disabled, and note the reason. */\r\n> +\tvalues[Anum_pg_subscription_subenabled - 1] =\r\n> BoolGetDatum(false);\r\n> +\treplaces[Anum_pg_subscription_subenabled - 1] = true;\r\n> \r\n> I didn't see the code corresponding to \"note the reason\". Should we modify the\r\n> comment?\r\nFixed the comment by removing that part.\r\nWe come here when an error has occurred and the reason is printed in the log,\r\nso there is no need to note the reason again.\r\n\r\n> 3)\r\n> +\tbool\t\tdisableonerr;\t/* Whether errors automatically\r\n> disable */\r\n> \r\n> This comment is hard to understand. Maybe it can be changed to:\r\n> \r\n> Indicates if the subscription should be automatically disabled when\r\n> subscription workers detect any errors.\r\nAgreed. 
Fixed.\r\n\r\nKindly have a look at the attached v16.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Thu, 6 Jan 2022 05:53:42 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Thu, Jan 6, 2022 at 11:23 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Kindly have a look at the attached v16.\n>\n\nFew comments:\n=============\n1.\n@@ -3594,13 +3698,29 @@ ApplyWorkerMain(Datum main_arg)\n apply_error_callback_arg.command,\n apply_error_callback_arg.remote_xid,\n errdata->message);\n- MemoryContextSwitchTo(ecxt);\n+\n+ if (!MySubscription->disableonerr)\n+ {\n+ /*\n+ * Some work in error recovery work is done. Switch to the old\n+ * memory context and rethrow.\n+ */\n+ MemoryContextSwitchTo(ecxt);\n+ PG_RE_THROW();\n+ }\n }\n+ else if (!MySubscription->disableonerr)\n+ PG_RE_THROW();\n\n- PG_RE_THROW();\n\nCan't we combine these two different checks for\n'MySubscription->disableonerr' if you do it as a separate if check\nafter sending the stats message?\n\n2. Can we move the code related to tablesync worker and its error\nhandling (the code inside if (am_tablesync_worker())) to a separate\nfunction say LogicalRepHandleTableSync() or something like that.\n\n3. Similarly, we can move apply-loop related code (\"Run the main\nloop.\") to a separate function say LogicalRepHandleApplyMessages().\n\nIf we do (2) and (3), I think the code in ApplyWorkerMain will look\nbetter. 
What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 14 Feb 2022 17:28:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Monday, February 14, 2022 8:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, Jan 6, 2022 at 11:23 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > Kindly have a look at the attached v16.\r\n> >\r\n> \r\n> Few comments:\r\nHi, thank you for checking the patch !\r\n\r\n> =============\r\n> 1.\r\n> @@ -3594,13 +3698,29 @@ ApplyWorkerMain(Datum main_arg)\r\n> apply_error_callback_arg.command,\r\n> apply_error_callback_arg.remote_xid,\r\n> errdata->message);\r\n> - MemoryContextSwitchTo(ecxt);\r\n> +\r\n> + if (!MySubscription->disableonerr)\r\n> + {\r\n> + /*\r\n> + * Some work in error recovery work is done. Switch to the old\r\n> + * memory context and rethrow.\r\n> + */\r\n> + MemoryContextSwitchTo(ecxt);\r\n> + PG_RE_THROW();\r\n> + }\r\n> }\r\n> + else if (!MySubscription->disableonerr) PG_RE_THROW();\r\n> \r\n> - PG_RE_THROW();\r\n> \r\n> Can't we combine these two different checks for\r\n> 'MySubscription->disableonerr' if you do it as a separate if check after sending\r\n> the stats message?\r\nNo, we can't. The second check of MySubscription->disableonerr is for the case\r\nwhere apply_error_callback_arg.command equals 0. We disable the subscription\r\non any error. In other words, we need to rethrow the error in that case\r\nif the flag disableonerr is not set to true.\r\n\r\nSo, moving it to after sending\r\nthe stats message can't be done. At the same time, if we move\r\nthe disableonerr flag check outside of the apply_error_callback_arg.command condition\r\nbranch, we need to write another call of pgstat_report_subworker_error, with the\r\nsame arguments that we have now. 
This wouldn't be preferable either.\r\n\r\n> \r\n> 2. Can we move the code related to tablesync worker and its error handling (the\r\n> code inside if (am_tablesync_worker())) to a separate function say\r\n> LogicalRepHandleTableSync() or something like that.\r\n> \r\n> 3. Similarly, we can move apply-loop related code (\"Run the main\r\n> loop.\") to a separate function say LogicalRepHandleApplyMessages().\r\n> \r\n> If we do (2) and (3), I think the code in ApplyWorkerMain will look better. What\r\n> do you think?\r\nI agree with (2) and (3), since those contribute to better readability.\r\n\r\nAttached a new patch v17 that addresses those refactorings.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Tue, 15 Feb 2022 05:19:00 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tuesday, February 15, 2022 2:19 PM I wrote\n> On Monday, February 14, 2022 8:58 PM Amit Kapila\n> > 2. Can we move the code related to tablesync worker and its error\n> > handling (the code inside if (am_tablesync_worker())) to a separate\n> > function say\n> > LogicalRepHandleTableSync() or something like that.\n> >\n> > 3. Similarly, we can move apply-loop related code (\"Run the main\n> > loop.\") to a separate function say LogicalRepHandleApplyMessages().\n> >\n> > If we do (2) and (3), I think the code in ApplyWorkerMain will look\n> > better. 
What do you think?\n> I agree with (2) and (3), since those contribute to better readability.\n> \n> Attached a new patch v17 that addresses those refactorings.\nHi, I noticed that one new TAP test was added in src/test/subscription/\nand I needed to increment the number of my test in this patch.\n\nAlso, I made minor fixes to comments and a function name.\nKindly have a look at the attached v18.\n\nBest Regards,\n\tTakamichi Osumi", "msg_date": "Wed, 16 Feb 2022 11:19:06 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "Hi. Below are my code review comments for v18.\n\n==========\n\n1. Commit Message - wording\n\nBEFORE\nTo partially remedy the situation, adding a new subscription_parameter\nnamed 'disable_on_error'.\n\nAFTER\nTo partially remedy the situation, this patch adds a new\nsubscription_parameter named 'disable_on_error'.\n\n~~~\n\n2. Commit message - wording\n\nBEFORE\nRequire to bump catalog version.\n\nAFTER\nA catalog version bump is required.\n\n~~~\n\n3. doc/src/sgml/ref/alter_subscription.sgml - whitespace\n\n@@ -201,8 +201,8 @@ ALTER SUBSCRIPTION <replaceable\nclass=\"parameter\">name</replaceable> RENAME TO <\n information. The parameters that can be altered\n are <literal>slot_name</literal>,\n <literal>synchronous_commit</literal>,\n- <literal>binary</literal>, and\n- <literal>streaming</literal>.\n+ <literal>binary</literal>,<literal>streaming</literal>, and\n+ <literal>disable_on_error</literal>.\n </para>\n\nThere is a missing space before <literal>streaming</literal>.\n\n~~~\n\n4. 
src/backend/replication/logical/worker.c - WorkerErrorRecovery\n\n@@ -2802,6 +2803,89 @@ LogicalRepApplyLoop(XLogRecPtr last_received)\n }\n\n /*\n+ * Worker error recovery processing, in preparation for disabling the\n+ * subscription.\n+ */\n+static void\n+WorkerErrorRecovery(void)\n\nI was wondering about the need for this to be a separate function? It\nis only called immediately before calling 'DisableSubscriptionOnError'\nso would it maybe be better just to put this code inside\nDisableSubscriptionOnError with the appropriate comments?\n\n~~~\n\n5. src/backend/replication/logical/worker.c - DisableSubscriptionOnError\n\n+ /*\n+ * We would not be here unless this subscription's disableonerr field was\n+ * true when our worker began applying changes, but check whether that\n+ * field has changed in the interim.\n+ */\n\nApparently, this function might just do nothing if it detects some\nsituation where the flag was changed somehow, but I’m not 100% sure\nthat the callers are properly catering for when nothing happens.\n\nIMO it would be better if this function would return true/false to\nmean \"did disable subscription happen or not?\" because that will give\nthe calling code the chance to check the function return and do the\nright thing - e.g. if the caller first thought it should be disabled\nbut then it turned out it did NOT disable...\n\n~~~\n\n6. src/backend/replication/logical/worker.c - LogicalRepHandleTableSync name\n\n+/*\n+ * Execute the initial sync with error handling. Disable the subscription,\n+ * if it's required.\n+ */\n+static void\n+LogicalRepHandleTableSync(XLogRecPtr *origin_startpos,\n+ char **myslotname, MemoryContext cctx)\n\nI felt that it is a bit overkill to put a \"LogicalRep\" prefix here\nbecause it is a static function.\n\nIMO this function should be renamed as 'SyncTableStartWrapper' because\nthat describes better what it is doing.\n\n~~~\n\n7. 
src/backend/replication/logical/worker.c - LogicalRepHandleTableSync Assert\n\nEven though we can know this to be true because of where it is called\nfrom, I think the readability of the function will be improved if you\nadd an assertion at the top:\n\nAssert(am_tablesync_worker());\n\nAnd then, because the function is clearly for Tablesync worker only\nthere is no need to keep mentioning that in the subsequent comments...\n\ne.g.1\n/* This is table synchronization worker, call initial sync. */\nAFTER:\n/* Call initial sync. */\n\ne.g.2\n/*\n * Report the table sync error. There is no corresponding message type\n * for table synchronization.\n */\nAFTER\n/*\n * Report the error. There is no corresponding message type for table\n * synchronization.\n */\n\n~~~\n\n8. src/backend/replication/logical/worker.c -\nLogicalRepHandleTableSync unnecessarily complex\n\n+static void\n+LogicalRepHandleTableSync(XLogRecPtr *origin_startpos,\n+ char **myslotname, MemoryContext cctx)\n+{\n+ char *syncslotname;\n+ bool error_recovery_done = false;\n\nIMO this logic is way more complex than it needed to be. IIUC that\n'error_recovery_done' and various conditions can be removed, and the\nwhole thing be simplified quite a lot.\n\nI re-wrote this function as a POC. Please see the attached file [2].\nAll the tests are still passing OK.\n\n(Perhaps the scenario for my comment #5 above still needs to be addressed?)\n\n~~~\n\n9. src/backend/replication/logical/worker.c - LogicalRepHandleApplyMessages name\n\n+/*\n+ * Run the apply loop with error handling. Disable the subscription,\n+ * if necessary.\n+ */\n+static void\n+LogicalRepHandleApplyMessages(XLogRecPtr origin_startpos,\n+ MemoryContext cctx)\n\nI felt that it is a bit overkill to put a \"LogicalRep\" prefix here\nbecause it is a static function.\n\nIMO this function should be renamed as 'ApplyLoopWrapper' because that\ndescribes better what it is doing.\n\n~~~\n\n10. 
src/backend/replication/logical/worker.c -\nLogicalRepHandleApplyMessages unnecessarily complex\n\n+static void\n+LogicalRepHandleApplyMessages(XLogRecPtr origin_startpos,\n+ MemoryContext cctx)\n+{\n+ bool error_recovery_done = false;\n\nIMO this logic is way more complex than it needed to be. IIUC that\n'error_recovery_done' and various conditions can be removed, and the\nwhole thing be simplified quite a lot.\n\nI re-wrote this function as a POC. Please see the attached file [2].\nAll the tests are still passing OK.\n\n(Perhaps the scenario for my comment #5 above still needs to be addressed?)\n\n~~~\n\n11. src/bin/pg_dump/pg_dump.c - dumpSubscription\n\n@@ -4441,6 +4451,9 @@ dumpSubscription(Archive *fout, const\nSubscriptionInfo *subinfo)\n if (strcmp(subinfo->subtwophasestate, two_phase_disabled) != 0)\n appendPQExpBufferStr(query, \", two_phase = on\");\n\n+ if (strcmp(subinfo->subdisableonerr, \"f\") != 0)\n+ appendPQExpBufferStr(query, \", disable_on_error = on\");\n+\n\nI felt saying disable_on_err is \"true\" would look more natural than\nsaying it is \"on\".\n\n~~~\n\n12. src/bin/psql/describe.c - describeSubscriptions typo\n\n@@ -6096,11 +6096,13 @@ describeSubscriptions(const char *pattern, bool verbose)\n gettext_noop(\"Binary\"),\n gettext_noop(\"Streaming\"));\n\n- /* Two_phase is only supported in v15 and higher */\n+ /* Two_phase and disable_on_error is only supported in v15 and higher */\n\nTypo\n\n\"is only\" --> \"are only\"\n\n~~~\n\n13. src/include/catalog/pg_subscription.h - comments\n\n@@ -103,6 +106,9 @@ typedef struct Subscription\n * binary format */\n bool stream; /* Allow streaming in-progress transactions. */\n char twophasestate; /* Allow streaming two-phase transactions */\n+ bool disableonerr; /* Indicates if the subscription should be\n+ * automatically disabled when subscription\n+ * workers detect any errors. 
*/\n\nIt's not usual to have a full stop here.\nMaybe not needed to repeat the word \"subscription\".\nIMO, generally, it all can be simplified a bit.\n\nBEFORE\nIndicates if the subscription should be automatically disabled when\nsubscription workers detect any errors.\n\nAFTER\nIndicates if the subscription should be automatically disabled if a\nworker error occurs\n\n~~~\n\n14. src/test/regress/sql/subscription.sql - missing test case.\n\nThe \"conflicting options\" error from the below code is not currently\nbeing tested.\n\n@@ -249,6 +253,15 @@ parse_subscription_options(ParseState *pstate,\nList *stmt_options,\n opts->specified_opts |= SUBOPT_TWOPHASE_COMMIT;\n opts->twophase = defGetBoolean(defel);\n }\n+ else if (IsSet(supported_opts, SUBOPT_DISABLE_ON_ERR) &&\n+ strcmp(defel->defname, \"disable_on_error\") == 0)\n+ {\n+ if (IsSet(opts->specified_opts, SUBOPT_DISABLE_ON_ERR))\n+ errorConflictingDefElem(defel, pstate);\n\n~~~\n\n15. src/test/subscription/t/028_disable_on_error.pl - 028 clash\n\nJust a heads-up that this 028 is going to clash with the Row-Filter\npatch 028 which has been announced to be pushed soon, so be prepared\nto change this number again shortly :)\n\n~~~\n\n16. src/test/subscription/t/028_disable_on_error.pl - done_testing\n\nAFAIK there is a new style now for the TAP tests where it uses\n\"done_testing();\" instead of saying up-front how many tests there are.\nSee here [1].\n\n~~~\n\n17. src/test/subscription/t/028_disable_on_error.pl - more comments\n\n+# Create an additional unique index in schema s1 on the subscriber only. 
When\n+# we create subscriptions, below, this should cause subscription \"s1\" on the\n+# subscriber to fail during initial synchronization and to get automatically\n+# disabled.\n\nI felt it could be made a bit more obvious upfront in a comment that 2\npairs of pub/sub will be created, and their names will be the same as the\nschemas:\ne.g.\nPublisher \"s1\" --> Subscriber \"s1\"\nPublisher \"s2\" --> Subscriber \"s2\"\n\n~~~\n\n18. src/test/subscription/t/028_disable_on_error.pl - ALTER tests?\n\nThe tests here are only using the hardwired 'disable_on_error' options\nset at CREATE SUBSCRIPTION time. There are no TAP tests for changing\nthe disable_on_error using ALTER SUBSCRIPTION.\n\nShould there be?\n\n------\n[1] https://github.com/postgres/postgres/commit/549ec201d6132b7c7ee11ee90a4e02119259ba5b\n[2] worker.c.peter.txt is the same as your v18 worker.c but I re-wrote\nfunctions LogicalRepHandleTableSync and LogicalRepHandleApplyMessages\nas POC\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 18 Feb 2022 17:26:48 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Friday, February 18, 2022 3:27 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Hi. Below are my code review comments for v18.\r\nThank you for your review !\r\n\r\n> ==========\r\n> \r\n> 1. Commit Message - wording\r\n> \r\n> BEFORE\r\n> To partially remedy the situation, adding a new subscription_parameter named\r\n> 'disable_on_error'.\r\n> \r\n> AFTER\r\n> To partially remedy the situation, this patch adds a new\r\n> subscription_parameter named 'disable_on_error'.\r\nFixed.\r\n\r\n> ~~~\r\n> \r\n> 2. Commit message - wording\r\n> \r\n> BEFORE\r\n> Require to bump catalog version.\r\n> \r\n> AFTER\r\n> A catalog version bump is required.\r\nFixed.\r\n\r\n> ~~~\r\n> \r\n> 3. 
doc/src/sgml/ref/alter_subscription.sgml - whitespace\r\n> \r\n> @@ -201,8 +201,8 @@ ALTER SUBSCRIPTION <replaceable\r\n> class=\"parameter\">name</replaceable> RENAME TO <\r\n> information. The parameters that can be altered\r\n> are <literal>slot_name</literal>,\r\n> <literal>synchronous_commit</literal>,\r\n> - <literal>binary</literal>, and\r\n> - <literal>streaming</literal>.\r\n> + <literal>binary</literal>,<literal>streaming</literal>, and\r\n> + <literal>disable_on_error</literal>.\r\n> </para>\r\n> \r\n> There is a missing space before <literal>streaming</literal>.\r\nFixed. \r\n\r\n\r\n\r\n> ~~~\r\n> \r\n> 4. src/backend/replication/logical/worker.c - WorkerErrorRecovery\r\n> \r\n> @@ -2802,6 +2803,89 @@ LogicalRepApplyLoop(XLogRecPtr\r\n> last_received) }\r\n> \r\n> /*\r\n> + * Worker error recovery processing, in preparation for disabling the\r\n> + * subscription.\r\n> + */\r\n> +static void\r\n> +WorkerErrorRecovery(void)\r\n> \r\n> I was wondering about the need for this to be a separate function? It is only\r\n> called immediately before calling 'DisableSubscriptionOnError'\r\n> so would it maybe be better just to put this code inside\r\n> DisableSubscriptionOnError with the appropriate comments?\r\nI preferred to have a function specific to error handling,\r\nbecause from the callers' side, when we catch an error, it's apparent\r\nthat error recovery is done. But, the function name \"DisableSubscriptionOnError\"\r\nby itself should have the nuance that we do something on error.\r\nSo, we can think that it's okay to have the error recovery processing\r\nin this function.\r\n\r\nSo, I removed the function and fixed some related comments.\r\n\r\n\r\n> ~~~\r\n> \r\n> 5. 
src/backend/replication/logical/worker.c - DisableSubscriptionOnError\r\n> \r\n> + /*\r\n> + * We would not be here unless this subscription's disableonerr field\r\n> + was\r\n> + * true when our worker began applying changes, but check whether that\r\n> + * field has changed in the interim.\r\n> + */\r\n> \r\n> Apparently, this function might just do nothing if it detects some situation\r\n> where the flag was changed somehow, but I'm not 100% sure that the callers\r\n> are properly catering for when nothing happens.\r\n> \r\n> IMO it would be better if this function would return true/false to mean \"did\r\n> disable subscription happen or not?\" because that will give the calling code the\r\n> chance to check the function return and do the right thing - e.g. if the caller first\r\n> thought it should be disabled but then it turned out it did NOT disable...\r\nI don't think we need to do anything more.\r\nAfter this function, the table sync worker and the apply worker\r\njust exit. IMO, we don't need to do additional work for\r\nan already-disabled subscription on the callers' side.\r\nIt should be sufficient to fulfill the purpose of\r\nDisableSubscriptionOnError or confirm it has been fulfilled.\r\n\r\n\r\n> ~~~\r\n> \r\n> 6. src/backend/replication/logical/worker.c - LogicalRepHandleTableSync\r\n> name\r\n> \r\n> +/*\r\n> + * Execute the initial sync with error handling. Disable the\r\n> +subscription,\r\n> + * if it's required.\r\n> + */\r\n> +static void\r\n> +LogicalRepHandleTableSync(XLogRecPtr *origin_startpos,\r\n> + char **myslotname, MemoryContext cctx)\r\n> \r\n> I felt that it is a bit overkill to put a \"LogicalRep\" prefix here because it is a static\r\n> function.\r\n> \r\n> IMO this function should be renamed as 'SyncTableStartWrapper' because that\r\n> describes better what it is doing.\r\nMakes sense. Fixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 7. 
src/backend/replication/logical/worker.c - LogicalRepHandleTableSync\r\n> Assert\r\n> \r\n> Even though we can know this to be true because of where it is called from, I\r\n> think the readability of the function will be improved if you add an assertion at\r\n> the top:\r\n> \r\n> Assert(am_tablesync_worker());\r\nFixed.\r\n\r\n> And then, because the function is clearly for Tablesync worker only there is no\r\n> need to keep mentioning that in the subsequent comments...\r\n> \r\n> e.g.1\r\n> /* This is table synchronization worker, call initial sync. */\r\n> AFTER:\r\n> /* Call initial sync. */\r\nFixed.\r\n\r\n> e.g.2\r\n> /*\r\n> * Report the table sync error. There is no corresponding message type\r\n> * for table synchronization.\r\n> */\r\n> AFTER\r\n> /*\r\n> * Report the error. There is no corresponding message type for table\r\n> * synchronization.\r\n> */\r\nAgreed. Fixed\r\n\r\n\r\n> ~~~\r\n> \r\n> 8. src/backend/replication/logical/worker.c - LogicalRepHandleTableSync\r\n> unnecessarily complex\r\n> \r\n> +static void\r\n> +LogicalRepHandleTableSync(XLogRecPtr *origin_startpos,\r\n> + char **myslotname, MemoryContext cctx) {\r\n> + char *syncslotname;\r\n> + bool error_recovery_done = false;\r\n> \r\n> IMO this logic is way more complex than it needed to be. IIUC that\r\n> 'error_recovery_done' and various conditions can be removed, and the whole\r\n> thing be simplified quite a lot.\r\n> \r\n> I re-wrote this function as a POC. Please see the attached file [2].\r\n> All the tests are still passing OK.\r\n> \r\n> (Perhaps the scenario for my comment #5 above still needs to be addressed?)\r\nRemoved the 'error_recovery_done' flag and fixed.\r\n\r\n\r\n\r\n \r\n> ~~~\r\n> \r\n> 9. src/backend/replication/logical/worker.c -\r\n> LogicalRepHandleApplyMessages name\r\n> \r\n> +/*\r\n> + * Run the apply loop with error handling. 
Disable the subscription,\r\n> + * if necessary.\r\n> + */\r\n> +static void\r\n> +LogicalRepHandleApplyMessages(XLogRecPtr origin_startpos,\r\n> + MemoryContext cctx)\r\n> \r\n> I felt that it is a bit overkill to put a \"LogicalRep\" prefix here because it is a static\r\n> function.\r\n> \r\n> IMO this function should be renamed as 'ApplyLoopWrapper' because that\r\n> describes better what it is doing.\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 10. src/backend/replication/logical/worker.c -\r\n> LogicalRepHandleApplyMessages unnecessarily complex\r\n> \r\n> +static void\r\n> +LogicalRepHandleApplyMessages(XLogRecPtr origin_startpos,\r\n> + MemoryContext cctx)\r\n> +{\r\n> + bool error_recovery_done = false;\r\n> \r\n> IMO this logic is way more complex than it needed to be. IIUC that\r\n> 'error_recovery_done' and various conditions can be removed, and the whole\r\n> thing be simplified quite a lot.\r\n> \r\n> I re-wrote this function as a POC. Please see the attached file [2].\r\n> All the tests are still passing OK.\r\n> \r\n> (Perhaps the scenario for my comment #5 above still needs to be addressed?)\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 11. src/bin/pg_dump/pg_dump.c - dumpSubscription\r\n> \r\n> @@ -4441,6 +4451,9 @@ dumpSubscription(Archive *fout, const\r\n> SubscriptionInfo *subinfo)\r\n> if (strcmp(subinfo->subtwophasestate, two_phase_disabled) != 0)\r\n> appendPQExpBufferStr(query, \", two_phase = on\");\r\n> \r\n> + if (strcmp(subinfo->subdisableonerr, \"f\") != 0)\r\n> + appendPQExpBufferStr(query, \", disable_on_error = on\");\r\n> +\r\n> \r\n> I felt saying disable_on_err is \"true\" would look more natural than saying it is\r\n> \"on\".\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 12. 
src/bin/psql/describe.c - describeSubscriptions typo\r\n> \r\n> @@ -6096,11 +6096,13 @@ describeSubscriptions(const char *pattern, bool\r\n> verbose)\r\n> gettext_noop(\"Binary\"),\r\n> gettext_noop(\"Streaming\"));\r\n> \r\n> - /* Two_phase is only supported in v15 and higher */\r\n> + /* Two_phase and disable_on_error is only supported in v15 and higher\r\n> + */\r\n> \r\n> Typo\r\n> \r\n> \"is only\" --> \"are only\"\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 13. src/include/catalog/pg_subscription.h - comments\r\n> \r\n> @@ -103,6 +106,9 @@ typedef struct Subscription\r\n> * binary format */\r\n> bool stream; /* Allow streaming in-progress transactions. */\r\n> char twophasestate; /* Allow streaming two-phase transactions */\r\n> + bool disableonerr; /* Indicates if the subscription should be\r\n> + * automatically disabled when subscription\r\n> + * workers detect any errors. */\r\n> \r\n> It's not usual to have a full stop here.\r\n> Maybe not needed to repeat the word \"subscription\".\r\n> IMO, generally, it all can be simplified a bit.\r\n> \r\n> BEFORE\r\n> Indicates if the subscription should be automatically disabled when\r\n> subscription workers detect any errors.\r\n> \r\n> AFTER\r\n> Indicates if the subscription should be automatically disabled if a worker error\r\n> occurs\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 14. 
src/test/regress/sql/subscription.sql - missing test case.\r\n> \r\n> The \"conflicting options\" error from the below code is not currently being\r\n> tested.\r\n> \r\n> @@ -249,6 +253,15 @@ parse_subscription_options(ParseState *pstate, List\r\n> *stmt_options,\r\n> opts->specified_opts |= SUBOPT_TWOPHASE_COMMIT;\r\n> opts->twophase = defGetBoolean(defel);\r\n> }\r\n> + else if (IsSet(supported_opts, SUBOPT_DISABLE_ON_ERR) &&\r\n> + strcmp(defel->defname, \"disable_on_error\") == 0) { if\r\n> + (IsSet(opts->specified_opts, SUBOPT_DISABLE_ON_ERR))\r\n> + errorConflictingDefElem(defel, pstate);\r\nWe don't have this test in other options as well.\r\nSo, this should be aligned.\r\n\r\n\r\n> ~~~\r\n> \r\n> 15. src/test/subscription/t/028_disable_on_error.pl - 028 clash\r\n> \r\n> Just a heads-up that this 028 is going to clash with the Row-Filter patch 028\r\n> which has been announced to be pushed soon, so be prepared to change this\r\n> number again shortly :)\r\nThank you for letting me know.\r\n\r\n\r\n> ~~~\r\n> \r\n> 16. src/test/subscription/t/028_disable_on_error.pl - done_testing\r\n> \r\n> AFAIK is a new style now for the TAP tests where it uses \"done_testing();\"\r\n> instead of saying up-front how many tests there are.\r\n> See here [1].\r\nFixed.\r\n\r\n> ~~~\r\n> \r\n> 17. src/test/subscription/t/028_disable_on_error.pl - more comments\r\n> \r\n> +# Create an additional unique index in schema s1 on the subscriber\r\n> +only. When # we create subscriptions, below, this should cause\r\n> +subscription \"s1\" on the # subscriber to fail during initial\r\n> +synchronization and to get automatically # disabled.\r\n> \r\n> I felt it could be made a bit more obvious upfront in a comment that 2 pairs of\r\n> pub/sub will be created, and their names will same as the\r\n> schemas:\r\n> e.g.\r\n> Publisher \"s1\" --> Subscriber \"s1\"\r\n> Publisher \"s2\" --> Subscriber \"s2\"\r\nComments are fixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 18. 
src/test/subscription/t/028_disable_on_error.pl - ALTER tests?\r\n> \r\n> The tests here are only using the hardwired 'disable_on_error' options set at\r\n> CREATE SUBSCRIPTION time. There are no TAP tests for changing the\r\n> disable_on_error using ALTER SUBSCRIPTION.\r\n> \r\n> Should there be?\r\nI don't think so. Toggling the flag 'disable_on_error' is already tested\r\nin the subscription.sql file. Both new paths for table sync and apply\r\nworker to disable on error are already covered.\r\n\r\n\r\nFYI : I skipped one change of worker.c.peter.txt\r\nabout \"enabled\" flag, which is independent from\r\ndisable_on_error option.\r\n\r\nKindly have a look at the attached v19.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Mon, 21 Feb 2022 00:25:22 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "Thanks for addressing my previous comments. Now I have looked at v19.\n\nOn Mon, Feb 21, 2022 at 11:25 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Friday, February 18, 2022 3:27 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > Hi. Below are my code review comments for v18.\n> Thank you for your review !\n...\n> > 5. 
src/backend/replication/logical/worker.c - DisableSubscriptionOnError\n> >\n> > + /*\n> > + * We would not be here unless this subscription's disableonerr field\n> > + was\n> > + * true when our worker began applying changes, but check whether that\n> > + * field has changed in the interim.\n> > + */\n> >\n> > Apparently, this function might just do nothing if it detects some situation\n> > where the flag was changed somehow, but I'm not 100% sure that the callers\n> > are properly catering for when nothing happens.\n> >\n> > IMO it would be better if this function would return true/false to mean \"did\n> > disable subscription happen or not?\" because that will give the calling code the\n> > chance to check the function return and do the right thing - e.g. if the caller first\n> > thought it should be disabled but then it turned out it did NOT disable...\n> I don't think we need to do something more.\n> After this function, table sync worker and the apply worker\n> just exit. IMO, we don't need to do additional work for\n> already-disabled subscription on the caller sides.\n> It should be sufficient to fulfill the purpose of\n> DisableSubscriptionOnError or confirm it has been fulfilled.\n\nHmmm - Yeah, it may be the workers might just exit soon after anyhow\nas you say so everything comes out in the wash, but still, I felt for\nthis case when DisableSubscriptionOnError turned out to do nothing it\nwould be better to exit via the existing logic. And that is easy to do\nif the function returns true/false.\n\nFor example, changes like below seemed neater code to me. 
YMMV.\n\nBEFORE (SyncTableStartWrapper):\nif (MySubscription->disableonerr)\n{\nDisableSubscriptionOnError();\nproc_exit(0);\n}\nAFTER\nif (MySubscription->disableonerr && DisableSubscriptionOnError())\nproc_exit(0);\n\nBEFORE (ApplyLoopWrapper)\nif (MySubscription->disableonerr)\n{\n/* Disable the subscription */\nDisableSubscriptionOnError();\nreturn;\n}\nAFTER\nif (MySubscription->disableonerr && DisableSubscriptionOnError())\nreturn;\n\n~~~\n\nHere are a couple more comments:\n\n1. src/backend/replication/logical/worker.c -\nDisableSubscriptionOnError, Refactor error handling\n\n(this comment assumes the above gets changed too)\n\n+static void\n+DisableSubscriptionOnError(void)\n+{\n+ Relation rel;\n+ bool nulls[Natts_pg_subscription];\n+ bool replaces[Natts_pg_subscription];\n+ Datum values[Natts_pg_subscription];\n+ HeapTuple tup;\n+ Form_pg_subscription subform;\n+\n+ /* Emit the error */\n+ EmitErrorReport();\n+ /* Abort any active transaction */\n+ AbortOutOfAnyTransaction();\n+ /* Reset the ErrorContext */\n+ FlushErrorState();\n+\n+ /* Disable the subscription in a fresh transaction */\n+ StartTransactionCommand();\n\nIf this DisableSubscriptionOnError function decides later that\nactually the 'disableonerr' flag is false (i.e. it's NOT going to\ndisable the subscription after all) then IMO it make more sense that\nthe error logging for that case should just do whatever it is doing\nnow by the normal error processing mechanism.\n\nIn other words, I thought perhaps the code to\nEmitErrorReport/FlushError state etc be moved to be BELOW the if\n(!subform->subdisableonerr) bail-out code?\n\nPlease see what you think in my attached POC [1]. It seems neater to\nme, and tests are all OK. Maybe I am mistaken...\n\n~~~\n\n2. 
Commit message - wording\n\nLogical replication apply workers for a subscription can easily get\nstuck in an infinite loop of attempting to apply a change,\ntriggering an error (such as a constraint violation), exiting with\nan error written to the subscription worker log, and restarting.\n\nSUGGESTION\n\"exiting with an error written\" --> \"exiting with the error written\"\n\n------\n[1] peter-v19-poc.diff - POC just to try some of my suggestions above\nto make sure all tests still pass ok.\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.", "msg_date": "Mon, 21 Feb 2022 16:55:43 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Monday, February 21, 2022 2:56 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Thanks for addressing my previous comments. Now I have looked at v19.\r\n> \r\n> On Mon, Feb 21, 2022 at 11:25 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Friday, February 18, 2022 3:27 PM Peter Smith\r\n> <smithpb2250@gmail.com> wrote:\r\n> > > Hi. Below are my code review comments for v18.\r\n> > Thank you for your review !\r\n> ...\r\n> > > 5. 
src/backend/replication/logical/worker.c -\r\n> > > DisableSubscriptionOnError\r\n> > >\r\n> > > + /*\r\n> > > + * We would not be here unless this subscription's disableonerr\r\n> > > + field was\r\n> > > + * true when our worker began applying changes, but check whether\r\n> > > + that\r\n> > > + * field has changed in the interim.\r\n> > > + */\r\n> > >\r\n> > > Apparently, this function might just do nothing if it detects some\r\n> > > situation where the flag was changed somehow, but I'm not 100% sure\r\n> > > that the callers are properly catering for when nothing happens.\r\n> > >\r\n> > > IMO it would be better if this function would return true/false to\r\n> > > mean \"did disable subscription happen or not?\" because that will\r\n> > > give the calling code the chance to check the function return and do\r\n> > > the right thing - e.g. if the caller first thought it should be disabled but then\r\n> it turned out it did NOT disable...\r\n> > I don't think we need to do something more.\r\n> > After this function, table sync worker and the apply worker just exit.\r\n> > IMO, we don't need to do additional work for already-disabled\r\n> > subscription on the caller sides.\r\n> > It should be sufficient to fulfill the purpose of\r\n> > DisableSubscriptionOnError or confirm it has been fulfilled.\r\n> \r\n> Hmmm - Yeah, it may be the workers might just exit soon after anyhow as you\r\n> say so everything comes out in the wash, but still, I felt for this case when\r\n> DisableSubscriptionOnError turned out to do nothing it would be better to exit\r\n> via the existing logic. And that is easy to do if the function returns true/false.\r\n> \r\n> For example, changes like below seemed neater code to me. 
YMMV.\r\n> \r\n> BEFORE (SyncTableStartWrapper):\r\n> if (MySubscription->disableonerr)\r\n> {\r\n> DisableSubscriptionOnError();\r\n> proc_exit(0);\r\n> }\r\n> AFTER\r\n> if (MySubscription->disableonerr && DisableSubscriptionOnError())\r\n> proc_exit(0);\r\n> \r\n> BEFORE (ApplyLoopWrapper)\r\n> if (MySubscription->disableonerr)\r\n> {\r\n> /* Disable the subscription */\r\n> DisableSubscriptionOnError();\r\n> return;\r\n> }\r\n> AFTER\r\n> if (MySubscription->disableonerr && DisableSubscriptionOnError()) return;\r\nOkay, so this return value works for better readability.\r\nFixed.\r\n\r\n \r\n> ~~~\r\n> \r\n> Here are a couple more comments:\r\n> \r\n> 1. src/backend/replication/logical/worker.c - DisableSubscriptionOnError,\r\n> Refactor error handling\r\n> \r\n> (this comment assumes the above gets changed too)\r\nI think those are independent.\r\n\r\n\r\n> +static void\r\n> +DisableSubscriptionOnError(void)\r\n> +{\r\n> + Relation rel;\r\n> + bool nulls[Natts_pg_subscription];\r\n> + bool replaces[Natts_pg_subscription];\r\n> + Datum values[Natts_pg_subscription];\r\n> + HeapTuple tup;\r\n> + Form_pg_subscription subform;\r\n> +\r\n> + /* Emit the error */\r\n> + EmitErrorReport();\r\n> + /* Abort any active transaction */\r\n> + AbortOutOfAnyTransaction();\r\n> + /* Reset the ErrorContext */\r\n> + FlushErrorState();\r\n> +\r\n> + /* Disable the subscription in a fresh transaction */\r\n> + StartTransactionCommand();\r\n> \r\n> If this DisableSubscriptionOnError function decides later that actually the\r\n> 'disableonerr' flag is false (i.e. 
it's NOT going to disable the subscription after\r\n> all) then IMO it make more sense that the error logging for that case should just\r\n> do whatever it is doing now by the normal error processing mechanism.\r\n> \r\n> In other words, I thought perhaps the code to EmitErrorReport/FlushError state\r\n> etc be moved to be BELOW the if\r\n> (!subform->subdisableonerr) bail-out code?\r\n> \r\n> Please see what you think in my attached POC [1]. It seems neater to me, and\r\n> tests are all OK. Maybe I am mistaken...\r\nI had a concern that this order change of codes would have a negative\r\nimpact when we have another new error during the call of DisableSubscriptionOnError.\r\n\r\nWith the debugger, I raised an error in this function before emitting the original error.\r\nAs a result, the original error that makes the apply worker go into the path of\r\nDisableSubscriptionOnError (in my test, duplication error) has vanished.\r\nIn this sense, v19 looks safer, and the current order to handle error recovery first\r\nlooks better to me.\r\n\r\nFYI, after the 2nd debugger error,\r\nthe next new apply worker created quickly met the same type of error,\r\nwent into the same path, and disabled the subscription with the log.\r\nBut, it won't be advisable to let the possibility left.\r\n\r\n> ~~~\r\n> \r\n> 2. Commit message - wording\r\n> \r\n> Logical replication apply workers for a subscription can easily get stuck in an\r\n> infinite loop of attempting to apply a change, triggering an error (such as a\r\n> constraint violation), exiting with an error written to the subscription worker log,\r\n> and restarting.\r\n> \r\n> SUGGESTION\r\n> \"exiting with an error written\" --> \"exiting with the error written\"\r\nFixed.\r\n\r\n \r\n> ------\r\n> [1] peter-v19-poc.diff - POC just to try some of my suggestions above to make\r\n> sure all tests still pass ok.\r\nThanks ! 
I included you as co-author, because\r\nyou shared meaningful patches for me.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Mon, 21 Feb 2022 12:44:04 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Mon, Feb 21, 2022 at 11:44 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, February 21, 2022 2:56 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > Thanks for addressing my previous comments. Now I have looked at v19.\n> >\n> > On Mon, Feb 21, 2022 at 11:25 AM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > On Friday, February 18, 2022 3:27 PM Peter Smith\n> > <smithpb2250@gmail.com> wrote:\n> > > > Hi. Below are my code review comments for v18.\n> > > Thank you for your review !\n> > ...\n> > > > 5. src/backend/replication/logical/worker.c -\n> > > > DisableSubscriptionOnError\n> > > >\n> > > > + /*\n> > > > + * We would not be here unless this subscription's disableonerr\n> > > > + field was\n> > > > + * true when our worker began applying changes, but check whether\n> > > > + that\n> > > > + * field has changed in the interim.\n> > > > + */\n> > > >\n> > > > Apparently, this function might just do nothing if it detects some\n> > > > situation where the flag was changed somehow, but I'm not 100% sure\n> > > > that the callers are properly catering for when nothing happens.\n> > > >\n> > > > IMO it would be better if this function would return true/false to\n> > > > mean \"did disable subscription happen or not?\" because that will\n> > > > give the calling code the chance to check the function return and do\n> > > > the right thing - e.g. 
if the caller first thought it should be disabled but then\n> > it turned out it did NOT disable...\n> > > I don't think we need to do something more.\n> > > After this function, table sync worker and the apply worker just exit.\n> > > IMO, we don't need to do additional work for already-disabled\n> > > subscription on the caller sides.\n> > > It should be sufficient to fulfill the purpose of\n> > > DisableSubscriptionOnError or confirm it has been fulfilled.\n> >\n> > Hmmm - Yeah, it may be the workers might just exit soon after anyhow as you\n> > say so everything comes out in the wash, but still, I felt for this case when\n> > DisableSubscriptionOnError turned out to do nothing it would be better to exit\n> > via the existing logic. And that is easy to do if the function returns true/false.\n> >\n> > For example, changes like below seemed neater code to me. YMMV.\n> >\n> > BEFORE (SyncTableStartWrapper):\n> > if (MySubscription->disableonerr)\n> > {\n> > DisableSubscriptionOnError();\n> > proc_exit(0);\n> > }\n> > AFTER\n> > if (MySubscription->disableonerr && DisableSubscriptionOnError())\n> > proc_exit(0);\n> >\n> > BEFORE (ApplyLoopWrapper)\n> > if (MySubscription->disableonerr)\n> > {\n> > /* Disable the subscription */\n> > DisableSubscriptionOnError();\n> > return;\n> > }\n> > AFTER\n> > if (MySubscription->disableonerr && DisableSubscriptionOnError()) return;\n> Okay, so this return value works for better readability.\n> Fixed.\n>\n>\n> > ~~~\n> >\n> > Here are a couple more comments:\n> >\n> > 1. src/backend/replication/logical/worker.c - DisableSubscriptionOnError,\n> > Refactor error handling\n> >\n> > (this comment assumes the above gets changed too)\n> I think those are independent.\n\nOK. 
I was only curious if the change #5 above might cause the error to\nbe logged 2x, if the DisableSubscriptionOnError returns false.\n- firstly, when it logs errors within the function\n- secondly, by normal error mechanism when the caller re-throws it.\n\nBut, if you are sure that won't happen then it is good news.\n\n>\n>\n> > +static void\n> > +DisableSubscriptionOnError(void)\n> > +{\n> > + Relation rel;\n> > + bool nulls[Natts_pg_subscription];\n> > + bool replaces[Natts_pg_subscription];\n> > + Datum values[Natts_pg_subscription];\n> > + HeapTuple tup;\n> > + Form_pg_subscription subform;\n> > +\n> > + /* Emit the error */\n> > + EmitErrorReport();\n> > + /* Abort any active transaction */\n> > + AbortOutOfAnyTransaction();\n> > + /* Reset the ErrorContext */\n> > + FlushErrorState();\n> > +\n> > + /* Disable the subscription in a fresh transaction */\n> > + StartTransactionCommand();\n> >\n> > If this DisableSubscriptionOnError function decides later that actually the\n> > 'disableonerr' flag is false (i.e. it's NOT going to disable the subscription after\n> > all) then IMO it make more sense that the error logging for that case should just\n> > do whatever it is doing now by the normal error processing mechanism.\n> >\n> > In other words, I thought perhaps the code to EmitErrorReport/FlushError state\n> > etc be moved to be BELOW the if\n> > (!subform->subdisableonerr) bail-out code?\n> >\n> > Please see what you think in my attached POC [1]. It seems neater to me, and\n> > tests are all OK. 
Maybe I am mistaken...\n> I had a concern that this order change of codes would have a negative\n> impact when we have another new error during the call of DisableSubscriptionOnError.\n>\n> With the debugger, I raised an error in this function before emitting the original error.\n> As a result, the original error that makes the apply worker go into the path of\n> DisableSubscriptionOnError (in my test, duplication error) has vanished.\n> In this sense, v19 looks safer, and the current order to handle error recovery first\n> looks better to me.\n>\n> FYI, after the 2nd debugger error,\n> the next new apply worker created quickly met the same type of error,\n> went into the same path, and disabled the subscription with the log.\n> But, it won't be advisable to let the possibility left.\n\nOK - thanks for checking it.\n\nWill it be better to put some comments about that? Something like --\n\nBEFORE\n/* Emit the error */\nEmitErrorReport();\n/* Abort any active transaction */\nAbortOutOfAnyTransaction();\n/* Reset the ErrorContext */\nFlushErrorState();\n\n/* Disable the subscription in a fresh transaction */\nStartTransactionCommand();\n\nAFTER\n/* Disable the subscription in a fresh transaction */\nAbortOutOfAnyTransaction();\nStartTransactionCommand();\n\n/*\n* Log the error that caused DisableSubscriptionOnError to be called.\n* We do this immediately so that it won't be lost if some other internal\n* error occurs in the following code,\n*/\nEmitErrorReport();\nFlushErrorState();\n\n...\n>\n> > ------\n> > [1] peter-v19-poc.diff - POC just to try some of my suggestions above to make\n> > sure all tests still pass ok.\n> Thanks ! 
I included you as co-author, because\n> you shared meaningful patches for me.\n>\n\nThanks!\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 22 Feb 2022 09:53:07 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tuesday, February 22, 2022 7:53 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> On Mon, Feb 21, 2022 at 11:44 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Monday, February 21, 2022 2:56 PM Peter Smith\r\n> <smithpb2250@gmail.com> wrote:\r\n> > > Thanks for addressing my previous comments. Now I have looked at v19.\r\n> > >\r\n> > > On Mon, Feb 21, 2022 at 11:25 AM osumi.takamichi@fujitsu.com\r\n> > > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > >\r\n> > > > On Friday, February 18, 2022 3:27 PM Peter Smith\r\n> > > <smithpb2250@gmail.com> wrote:\r\n> > > > > Hi. Below are my code review comments for v18.\r\n> > > > Thank you for your review !\r\n> > > ...\r\n> > > > > 5. 
src/backend/replication/logical/worker.c -\r\n> > > > > DisableSubscriptionOnError\r\n> > > > >\r\n> > > > > + /*\r\n> > > > > + * We would not be here unless this subscription's disableonerr\r\n> > > > > + field was\r\n> > > > > + * true when our worker began applying changes, but check\r\n> > > > > + whether that\r\n> > > > > + * field has changed in the interim.\r\n> > > > > + */\r\n> > > > >\r\n> > > > > Apparently, this function might just do nothing if it detects\r\n> > > > > some situation where the flag was changed somehow, but I'm not\r\n> > > > > 100% sure that the callers are properly catering for when nothing\r\n> happens.\r\n> > > > >\r\n> > > > > IMO it would be better if this function would return true/false\r\n> > > > > to mean \"did disable subscription happen or not?\" because that\r\n> > > > > will give the calling code the chance to check the function\r\n> > > > > return and do the right thing - e.g. if the caller first thought\r\n> > > > > it should be disabled but then\r\n> > > it turned out it did NOT disable...\r\n> > > > I don't think we need to do something more.\r\n> > > > After this function, table sync worker and the apply worker just exit.\r\n> > > > IMO, we don't need to do additional work for already-disabled\r\n> > > > subscription on the caller sides.\r\n> > > > It should be sufficient to fulfill the purpose of\r\n> > > > DisableSubscriptionOnError or confirm it has been fulfilled.\r\n> > >\r\n> > > Hmmm - Yeah, it may be the workers might just exit soon after\r\n> > > anyhow as you say so everything comes out in the wash, but still, I\r\n> > > felt for this case when DisableSubscriptionOnError turned out to do\r\n> > > nothing it would be better to exit via the existing logic. And that is easy to do\r\n> if the function returns true/false.\r\n> > >\r\n> > > For example, changes like below seemed neater code to me. 
YMMV.\r\n> > >\r\n> > > BEFORE (SyncTableStartWrapper):\r\n> > > if (MySubscription->disableonerr)\r\n> > > {\r\n> > > DisableSubscriptionOnError();\r\n> > > proc_exit(0);\r\n> > > }\r\n> > > AFTER\r\n> > > if (MySubscription->disableonerr && DisableSubscriptionOnError())\r\n> > > proc_exit(0);\r\n> > >\r\n> > > BEFORE (ApplyLoopWrapper)\r\n> > > if (MySubscription->disableonerr)\r\n> > > {\r\n> > > /* Disable the subscription */\r\n> > > DisableSubscriptionOnError();\r\n> > > return;\r\n> > > }\r\n> > > AFTER\r\n> > > if (MySubscription->disableonerr && DisableSubscriptionOnError())\r\n> > > return;\r\n> > Okay, so this return value works for better readability.\r\n> > Fixed.\r\n> >\r\n> >\r\n> > > ~~~\r\n> > >\r\n> > > Here are a couple more comments:\r\n> > >\r\n> > > 1. src/backend/replication/logical/worker.c -\r\n> > > DisableSubscriptionOnError, Refactor error handling\r\n> > >\r\n> > > (this comment assumes the above gets changed too)\r\n> > I think those are independent.\r\n> \r\n> OK. 
I was only curious if the change #5 above might cause the error to be logged\r\n> 2x, if the DisableSubscriptionOnError returns false.\r\n> - firstly, when it logs errors within the function\r\n> - secondly, by normal error mechanism when the caller re-throws it.\r\n> \r\n> But, if you are sure that won't happen then it is good news.\r\nI didn't feel this would become a substantial issue.\r\n\r\nWhen we alter subscription with disable_on_error = false\r\nafter we go into the DisableSubscriptionOnError,\r\nwe don't disable the subscription in the same function.\r\nThat means we launch new apply workers repeatedly after that\r\nuntil we solve the error cause or we set the disable_on_error = true again.\r\n\r\nSo, if we confirm that the disable_on_error = false in the DisableSubscriptionOnError,\r\nit's highly possible that we'll get more same original errors from new apply workers.\r\n\r\nThis leads to another question, we should suppress the 2nd error(if there is),\r\neven when it's highly possible that we'll get more same errors by new apply workers\r\ncreated repeatedly or not. 
I wasn't sure if the implementation complexity for this wins the log print.\r\n\r\nSo, kindly let me keep the current code as is.\r\nIf someone wants share his/her opinion on this, please let me know.\r\n \r\n> >\r\n> >\r\n> > > +static void\r\n> > > +DisableSubscriptionOnError(void)\r\n> > > +{\r\n> > > + Relation rel;\r\n> > > + bool nulls[Natts_pg_subscription]; bool\r\n> > > +replaces[Natts_pg_subscription]; Datum\r\n> > > +values[Natts_pg_subscription]; HeapTuple tup;\r\n> > > +Form_pg_subscription subform;\r\n> > > +\r\n> > > + /* Emit the error */\r\n> > > + EmitErrorReport();\r\n> > > + /* Abort any active transaction */ AbortOutOfAnyTransaction();\r\n> > > + /* Reset the ErrorContext */\r\n> > > + FlushErrorState();\r\n> > > +\r\n> > > + /* Disable the subscription in a fresh transaction */\r\n> > > + StartTransactionCommand();\r\n> > >\r\n> > > If this DisableSubscriptionOnError function decides later that\r\n> > > actually the 'disableonerr' flag is false (i.e. it's NOT going to\r\n> > > disable the subscription after\r\n> > > all) then IMO it make more sense that the error logging for that\r\n> > > case should just do whatever it is doing now by the normal error processing\r\n> mechanism.\r\n> > >\r\n> > > In other words, I thought perhaps the code to\r\n> > > EmitErrorReport/FlushError state etc be moved to be BELOW the if\r\n> > > (!subform->subdisableonerr) bail-out code?\r\n> > >\r\n> > > Please see what you think in my attached POC [1]. It seems neater to\r\n> > > me, and tests are all OK. 
Maybe I am mistaken...\r\n> > I had a concern that this order change of codes would have a negative\r\n> > impact when we have another new error during the call of\r\n> DisableSubscriptionOnError.\r\n> >\r\n> > With the debugger, I raised an error in this function before emitting the\r\n> original error.\r\n> > As a result, the original error that makes the apply worker go into\r\n> > the path of DisableSubscriptionOnError (in my test, duplication error) has\r\n> vanished.\r\n> > In this sense, v19 looks safer, and the current order to handle error\r\n> > recovery first looks better to me.\r\n> >\r\n> > FYI, after the 2nd debugger error,\r\n> > the next new apply worker created quickly met the same type of error,\r\n> > went into the same path, and disabled the subscription with the log.\r\n> > But, it won't be advisable to let the possibility left.\r\n> \r\n> OK - thanks for checking it.\r\n> \r\n> Will it be better to put some comments about that? Something like --\r\n> \r\n> BEFORE\r\n> /* Emit the error */\r\n> EmitErrorReport();\r\n> /* Abort any active transaction */\r\n> AbortOutOfAnyTransaction();\r\n> /* Reset the ErrorContext */\r\n> FlushErrorState();\r\n> \r\n> /* Disable the subscription in a fresh transaction */\r\n> StartTransactionCommand();\r\n> \r\n> AFTER\r\n> /* Disable the subscription in a fresh transaction */\r\n> AbortOutOfAnyTransaction(); StartTransactionCommand();\r\n> \r\n> /*\r\n> * Log the error that caused DisableSubscriptionOnError to be called.\r\n> * We do this immediately so that it won't be lost if some other internal\r\n> * error occurs in the following code,\r\n> */\r\n> EmitErrorReport();\r\n> FlushErrorState();\r\nI appreciate your suggestion. Yet, I'd like to keep the current order of my patch.\r\nThe FlushErrorState's comment mentions we are not out of the error subsystem until\r\nwe call this and starting a new transaction before it didn't sound a good idea.\r\nBut, I've fixed the comments around this. 
The indentation for new comments\r\nare checked by pgindent.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Tue, 22 Feb 2022 04:11:32 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tue, Feb 22, 2022 at 3:11 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Tuesday, February 22, 2022 7:53 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > On Mon, Feb 21, 2022 at 11:44 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > On Monday, February 21, 2022 2:56 PM Peter Smith\n> > <smithpb2250@gmail.com> wrote:\n> > > > Thanks for addressing my previous comments. Now I have looked at v19.\n> > > >\n> > > > On Mon, Feb 21, 2022 at 11:25 AM osumi.takamichi@fujitsu.com\n> > > > <osumi.takamichi@fujitsu.com> wrote:\n> > > > >\n> > > > > On Friday, February 18, 2022 3:27 PM Peter Smith\n> > > > <smithpb2250@gmail.com> wrote:\n> > > > > > Hi. Below are my code review comments for v18.\n> > > > > Thank you for your review !\n> > > > ...\n> > > > > > 5. 
src/backend/replication/logical/worker.c -\n> > > > > > DisableSubscriptionOnError\n> > > > > >\n> > > > > > + /*\n> > > > > > + * We would not be here unless this subscription's disableonerr\n> > > > > > + field was\n> > > > > > + * true when our worker began applying changes, but check\n> > > > > > + whether that\n> > > > > > + * field has changed in the interim.\n> > > > > > + */\n> > > > > >\n> > > > > > Apparently, this function might just do nothing if it detects\n> > > > > > some situation where the flag was changed somehow, but I'm not\n> > > > > > 100% sure that the callers are properly catering for when nothing\n> > happens.\n> > > > > >\n> > > > > > IMO it would be better if this function would return true/false\n> > > > > > to mean \"did disable subscription happen or not?\" because that\n> > > > > > will give the calling code the chance to check the function\n> > > > > > return and do the right thing - e.g. if the caller first thought\n> > > > > > it should be disabled but then\n> > > > it turned out it did NOT disable...\n> > > > > I don't think we need to do something more.\n> > > > > After this function, table sync worker and the apply worker just exit.\n> > > > > IMO, we don't need to do additional work for already-disabled\n> > > > > subscription on the caller sides.\n> > > > > It should be sufficient to fulfill the purpose of\n> > > > > DisableSubscriptionOnError or confirm it has been fulfilled.\n> > > >\n> > > > Hmmm - Yeah, it may be the workers might just exit soon after\n> > > > anyhow as you say so everything comes out in the wash, but still, I\n> > > > felt for this case when DisableSubscriptionOnError turned out to do\n> > > > nothing it would be better to exit via the existing logic. And that is easy to do\n> > if the function returns true/false.\n> > > >\n> > > > For example, changes like below seemed neater code to me. 
YMMV.\n> > > >\n> > > > BEFORE (SyncTableStartWrapper):\n> > > > if (MySubscription->disableonerr)\n> > > > {\n> > > > DisableSubscriptionOnError();\n> > > > proc_exit(0);\n> > > > }\n> > > > AFTER\n> > > > if (MySubscription->disableonerr && DisableSubscriptionOnError())\n> > > > proc_exit(0);\n> > > >\n> > > > BEFORE (ApplyLoopWrapper)\n> > > > if (MySubscription->disableonerr)\n> > > > {\n> > > > /* Disable the subscription */\n> > > > DisableSubscriptionOnError();\n> > > > return;\n> > > > }\n> > > > AFTER\n> > > > if (MySubscription->disableonerr && DisableSubscriptionOnError())\n> > > > return;\n> > > Okay, so this return value works for better readability.\n> > > Fixed.\n> > >\n> > >\n> > > > ~~~\n> > > >\n> > > > Here are a couple more comments:\n> > > >\n> > > > 1. src/backend/replication/logical/worker.c -\n> > > > DisableSubscriptionOnError, Refactor error handling\n> > > >\n> > > > (this comment assumes the above gets changed too)\n> > > I think those are independent.\n> >\n> > OK. 
I was only curious if the change #5 above might cause the error to be logged\n> > 2x, if the DisableSubscriptionOnError returns false.\n> > - firstly, when it logs errors within the function\n> > - secondly, by normal error mechanism when the caller re-throws it.\n> >\n> > But, if you are sure that won't happen then it is good news.\n> I didn't feel this would become a substantial issue.\n>\n> When we alter subscription with disable_on_error = false\n> after we go into the DisableSubscriptionOnError,\n> we don't disable the subscription in the same function.\n> That means we launch new apply workers repeatedly after that\n> until we solve the error cause or we set the disable_on_error = true again.\n>\n> So, if we confirm that the disable_on_error = false in the DisableSubscriptionOnError,\n> it's highly possible that we'll get more same original errors from new apply workers.\n>\n> This leads to another question, we should suppress the 2nd error(if there is),\n> even when it's highly possible that we'll get more same errors by new apply workers\n> created repeatedly or not. I wasn't sure if the implementation complexity for this wins the log print.\n>\n> So, kindly let me keep the current code as is.\n> If someone wants share his/her opinion on this, please let me know.\n\nOK, but is it really correct that this scenario can happen \"When we\nalter subscription with disable_on_error = false after we go into the\nDisableSubscriptionOnError\". Actually, I thought this window may be\nmuch bigger than that - e.g. 
maybe we changed the option to false at\n*any* time after the worker was originally started and the original\noption values were got by GetSubscription function (and that might be\nhours/days/weeks ago since it started).\n\n>\n> > >\n> > >\n> > > > +static void\n> > > > +DisableSubscriptionOnError(void)\n> > > > +{\n> > > > + Relation rel;\n> > > > + bool nulls[Natts_pg_subscription]; bool\n> > > > +replaces[Natts_pg_subscription]; Datum\n> > > > +values[Natts_pg_subscription]; HeapTuple tup;\n> > > > +Form_pg_subscription subform;\n> > > > +\n> > > > + /* Emit the error */\n> > > > + EmitErrorReport();\n> > > > + /* Abort any active transaction */ AbortOutOfAnyTransaction();\n> > > > + /* Reset the ErrorContext */\n> > > > + FlushErrorState();\n> > > > +\n> > > > + /* Disable the subscription in a fresh transaction */\n> > > > + StartTransactionCommand();\n> > > >\n> > > > If this DisableSubscriptionOnError function decides later that\n> > > > actually the 'disableonerr' flag is false (i.e. it's NOT going to\n> > > > disable the subscription after\n> > > > all) then IMO it make more sense that the error logging for that\n> > > > case should just do whatever it is doing now by the normal error processing\n> > mechanism.\n> > > >\n> > > > In other words, I thought perhaps the code to\n> > > > EmitErrorReport/FlushError state etc be moved to be BELOW the if\n> > > > (!subform->subdisableonerr) bail-out code?\n> > > >\n> > > > Please see what you think in my attached POC [1]. It seems neater to\n> > > > me, and tests are all OK. 
Maybe I am mistaken...\n> > > I had a concern that this order change of codes would have a negative\n> > > impact when we have another new error during the call of\n> > DisableSubscriptionOnError.\n> > >\n> > > With the debugger, I raised an error in this function before emitting the\n> > original error.\n> > > As a result, the original error that makes the apply worker go into\n> > > the path of DisableSubscriptionOnError (in my test, duplication error) has\n> > vanished.\n> > > In this sense, v19 looks safer, and the current order to handle error\n> > > recovery first looks better to me.\n> > >\n> > > FYI, after the 2nd debugger error,\n> > > the next new apply worker created quickly met the same type of error,\n> > > went into the same path, and disabled the subscription with the log.\n> > > But, it won't be advisable to let the possibility left.\n> >\n> > OK - thanks for checking it.\n> >\n> > Will it be better to put some comments about that? Something like --\n> >\n> > BEFORE\n> > /* Emit the error */\n> > EmitErrorReport();\n> > /* Abort any active transaction */\n> > AbortOutOfAnyTransaction();\n> > /* Reset the ErrorContext */\n> > FlushErrorState();\n> >\n> > /* Disable the subscription in a fresh transaction */\n> > StartTransactionCommand();\n> >\n> > AFTER\n> > /* Disable the subscription in a fresh transaction */\n> > AbortOutOfAnyTransaction(); StartTransactionCommand();\n> >\n> > /*\n> > * Log the error that caused DisableSubscriptionOnError to be called.\n> > * We do this immediately so that it won't be lost if some other internal\n> > * error occurs in the following code,\n> > */\n> > EmitErrorReport();\n> > FlushErrorState();\n> I appreciate your suggestion. Yet, I'd like to keep the current order of my patch.\n> The FlushErrorState's comment mentions we are not out of the error subsystem until\n> we call this and starting a new transaction before it didn't sound a good idea.\n> But, I've fixed the comments around this. 
The indentation for new comments\n> are checked by pgindent.\n\nOK.\n\n======\n\nHere are a couple more review comments for v21.\n\n~~~\n\n1. worker.c - comment\n\n+ subform = (Form_pg_subscription) GETSTRUCT(tup);\n+\n+ /*\n+ * We would not be here unless this subscription's disableonerr field was\n+ * true, but check whether that field has changed in the interim.\n+ */\n+ if (!subform->subdisableonerr)\n+ {\n+ heap_freetuple(tup);\n+ table_close(rel, RowExclusiveLock);\n+ CommitTransactionCommand();\n+ return false;\n+ }\n\nI felt that comment belongs above the subform assignment because that\nis the only reason we are getting the subform again.\n\n~~\n\n2. worker.c - subform->oid\n\n+ /* Notify the subscription will be no longer valid */\n+ ereport(LOG,\n+ errmsg(\"logical replication subscription \\\"%s\\\" will be disabled due\nto an error\",\n+ MySubscription->name));\n+\n+ LockSharedObject(SubscriptionRelationId, subform->oid, 0,\nAccessExclusiveLock);\n\nCan't we just use MySubscription->oid here? We really only needed that\nsubform to get new option values.\n\n~~\n\n3. worker.c - whitespace\n\nYour pg_indent has also changed some whitespace for parts of worker.c\nthat are completely unrelated to this patch. You might want to revert\nthose changes.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 22 Feb 2022 17:02:54 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "Hi Osumi-san,\r\n\r\nI have a comment on v21 patch.\r\n\r\nI wonder if we really need subscription s2 in 028_disable_on_error.pl. I think\r\nfor subscription s2, we only tested some normal cases(which could be tested with s1), \r\nand didn't test any error case, which means it wouldn't be automatically disabled. 
\r\nIs there any reason for creating subscription s2?\r\n\r\nRegards,\r\nTang\r\n", "msg_date": "Wed, 23 Feb 2022 09:52:27 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wednesday, February 23, 2022 6:52 PM Tang, Haiying/唐 海英 <tanghy.fnst@fujitsu.com> wrote:\r\n> I have a comment on v21 patch.\r\n> \r\n> I wonder if we really need subscription s2 in 028_disable_on_error.pl. I think for\r\n> subscription s2, we only tested some normal cases(which could be tested with\r\n> s1), and didn't test any error case, which means it wouldn't be automatically\r\n> disabled.\r\n> Is there any reason for creating subscription s2?\r\nHi, thank you for your review !\r\n\r\nIt's for checking there's no impact/influence when disabling one subscription\r\non the other subscription if any.\r\n\r\n*But*, when I have a look at the past tests to add options (e.g. streaming,\r\ntwo_phase), we don't have this kind of test that I have for disable_on_error patch.\r\nTherefore, I'd like to fix the test as you suggested in my next version.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Thu, 24 Feb 2022 02:44:06 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tue, Feb 22, 2022 at 3:03 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> ~~~\n>\n> 1. 
worker.c - comment
>
> + subform = (Form_pg_subscription) GETSTRUCT(tup);
> +
> + /*
> + * We would not be here unless this subscription's disableonerr field was
> + * true, but check whether that field has changed in the interim.
> + */
> + if (!subform->subdisableonerr)
> + {
> + heap_freetuple(tup);
> + table_close(rel, RowExclusiveLock);
> + CommitTransactionCommand();
> + return false;
> + }
>
> I felt that comment belongs above the subform assignment because that
> is the only reason we are getting the subform again.

IIUC if we return false here, the same error will be emitted twice.
And I'm not sure this check is really necessary. It would work only
when the subdisableonerr is set to false concurrently, but doesn't
work for the opposite cases. I think we can check
MySubscription->disableonerr and then just update the tuple.

Here are some comments:

Why do we need SyncTableStartWrapper() and ApplyLoopWrapper()?

---
+ /*
+ * Log the error that caused DisableSubscriptionOnError to be called. We
+ * do this immediately so that it won't be lost if some other internal
+ * error occurs in the following code.
+ */
+ EmitErrorReport();
+ AbortOutOfAnyTransaction();
+ FlushErrorState();

Do we need to hold interrupts during cleanup here?

Regards,


--
Masahiko Sawada
EDB: https://www.enterprisedb.com/


", "msg_date": "Thu, 24 Feb 2022 16:50:19 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Thu, Feb 24, 2022 at 1:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Here are some comments:\n>\n> Why do we need SyncTableStartWrapper() and ApplyLoopWrapper()?\n>\n\nI have given this comment to move the related code to separate\nfunctions to slightly simplify ApplyWorkerMain() code but if you don't\nlike we can move it back. 
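The short-circuit being discussed in this thread — a boolean-returning disable routine whose caller exits only when a disable actually happened, and otherwise falls through to re-throw — can be sketched standalone as below (hypothetical names and stand-in state only; this is not the actual worker.c code, which also logs the error and aborts the active transaction before touching the catalog):

```c
#include <stdbool.h>
#include <string.h>
#include <assert.h>

/* Stand-in for the catalog value re-read inside the disable routine. */
static bool catalog_disableonerr = true;

/*
 * Returns true only if the subscription was actually disabled; returns
 * false when the option was flipped off concurrently, in which case the
 * caller falls back to the normal error path.
 */
static bool
disable_subscription_on_error(void)
{
    if (!catalog_disableonerr)
        return false;           /* option changed in the interim */
    /* ... update the pg_subscription tuple here ... */
    return true;
}

/*
 * Stand-in for the worker's error path: "exit" when the subscription was
 * disabled, "rethrow" otherwise.
 */
static const char *
worker_error_path(bool disableonerr)
{
    if (disableonerr && disable_subscription_on_error())
        return "exit";          /* proc_exit(0) in the real worker */
    return "rethrow";           /* PG_RE_THROW() in the real worker */
}
```

The sketch only illustrates the control flow: on the "rethrow" branch the original error is reported exactly once by the normal error machinery, which is the trade-off under discussion.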
I am not sure I like the new function names\nin the patch though.\n\n> ---\n> + /*\n> + * Log the error that caused DisableSubscriptionOnError to be called. We\n> + * do this immediately so that it won't be lost if some other internal\n> + * error occurs in the following code.\n> + */\n> + EmitErrorReport();\n> + AbortOutOfAnyTransaction();\n> + FlushErrorState();\n>\n> Do we need to hold interrupts during cleanup here?\n>\n\nI think so. We do prevent interrupts via\nHOLD_INTERRUPTS/RESUME_INTERRUPTS during cleanup.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 24 Feb 2022 16:38:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Thu, Feb 24, 2022 at 8:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Feb 24, 2022 at 1:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Here are some comments:\n> >\n> > Why do we need SyncTableStartWrapper() and ApplyLoopWrapper()?\n> >\n>\n> I have given this comment to move the related code to separate\n> functions to slightly simplify ApplyWorkerMain() code but if you don't\n> like we can move it back. I am not sure I like the new function names\n> in the patch though.\n\nOkay, I'm fine with moving this code but perhaps we can find a better\nfunction name as \"Wrapper\" seems slightly odd to me. 
For example,\nstart_table_sync_start() and start_apply_changes() or something (it\nseems we use the snake case for static functions in worker.c).\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 24 Feb 2022 22:00:16 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Thu, Feb 24, 2022 at 6:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Feb 24, 2022 at 8:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Feb 24, 2022 at 1:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Here are some comments:\n> > >\n> > > Why do we need SyncTableStartWrapper() and ApplyLoopWrapper()?\n> > >\n> >\n> > I have given this comment to move the related code to separate\n> > functions to slightly simplify ApplyWorkerMain() code but if you don't\n> > like we can move it back. 
I am not sure I like the new function names\n> > in the patch though.\n>\n> Okay, I'm fine with moving this code but perhaps we can find a better\n> function name as \"Wrapper\" seems slightly odd to me.\n>\n\nAgreed.\n\n> For example,\n> start_table_sync_start() and start_apply_changes() or something (it\n> seems we use the snake case for static functions in worker.c).\n>\n\nI am fine with something on these lines, how about start_table_sync()\nand start_apply() respectively?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 25 Feb 2022 09:27:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Friday, February 25, 2022 12:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, Feb 24, 2022 at 6:30 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> >\r\n> > On Thu, Feb 24, 2022 at 8:08 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Thu, Feb 24, 2022 at 1:20 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > > >\r\n> > > > Here are some comments:\r\n> > > >\r\n> > > > Why do we need SyncTableStartWrapper() and ApplyLoopWrapper()?\r\n> > > >\r\n> > >\r\n> > > I have given this comment to move the related code to separate\r\n> > > functions to slightly simplify ApplyWorkerMain() code but if you\r\n> > > don't like we can move it back. 
I am not sure I like the new\r\n> > > function names in the patch though.\r\n> >\r\n> > Okay, I'm fine with moving this code but perhaps we can find a better\r\n> > function name as \"Wrapper\" seems slightly odd to me.\r\n> >\r\n> \r\n> Agreed.\r\n> \r\n> > For example,\r\n> > start_table_sync_start() and start_apply_changes() or something (it\r\n> > seems we use the snake case for static functions in worker.c).\r\n> >\r\n> \r\n> I am fine with something on these lines, how about start_table_sync() and\r\n> start_apply() respectively?\r\nAdopted. (If we come up with better names, we can change those then)\r\n\r\nKindly have a look at attached the v22.\r\nIt has incorporated other improvements of TAP test,\r\nrefinement of the DisableSubscriptionOnError function and so on.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Fri, 25 Feb 2022 12:45:04 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Thursday, February 24, 2022 8:09 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> On Thu, Feb 24, 2022 at 1:20 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > + /*\r\n> > + * Log the error that caused DisableSubscriptionOnError to be\r\n> called. We\r\n> > + * do this immediately so that it won't be lost if some other internal\r\n> > + * error occurs in the following code.\r\n> > + */\r\n> > + EmitErrorReport();\r\n> > + AbortOutOfAnyTransaction();\r\n> > + FlushErrorState();\r\n> >\r\n> > Do we need to hold interrupts during cleanup here?\r\n> >\r\n> \r\n> I think so. 
We do prevent interrupts via\r\n> HOLD_INTERRUPTS/RESUME_INTERRUPTS during cleanup.\r\nFixed.\r\n\r\nKindly have a look at v22 shared in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373D9B26F988307B0D3FE20ED3E9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Fri, 25 Feb 2022 12:48:09 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Thursday, February 24, 2022 4:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Tue, Feb 22, 2022 at 3:03 PM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> >\r\n> > ~~~\r\n> >\r\n> > 1. worker.c - comment\r\n> >\r\n> > + subform = (Form_pg_subscription) GETSTRUCT(tup);\r\n> > +\r\n> > + /*\r\n> > + * We would not be here unless this subscription's disableonerr field\r\n> > + was\r\n> > + * true, but check whether that field has changed in the interim.\r\n> > + */\r\n> > + if (!subform->subdisableonerr)\r\n> > + {\r\n> > + heap_freetuple(tup);\r\n> > + table_close(rel, RowExclusiveLock);\r\n> > + CommitTransactionCommand();\r\n> > + return false;\r\n> > + }\r\n> >\r\n> > I felt that comment belongs above the subform assignment because that\r\n> > is the only reason we are getting the subform again.\r\n> \r\n> IIUC if we return false here, the same error will be emitted twice.\r\n> And I'm not sure this check is really necessary. It would work only when the\r\n> subdisableonerr is set to false concurrently, but doesn't work for the opposite\r\n> caces. I think we can check\r\n> MySubscription->disableonerr and then just update the tuple.\r\nAddressed. 
I followed your advice and deleted the check.\r\n\r\n\r\nKindly have a look at v22 shared in [1].\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373D9B26F988307B0D3FE20ED3E9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Fri, 25 Feb 2022 12:50:05 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wednesday, February 23, 2022 6:52 PM Tang, Haiying/唐 海英 <tanghy.fnst@fujitsu.com> wrote:\r\n> I have a comment on v21 patch.\r\n> \r\n> I wonder if we really need subscription s2 in 028_disable_on_error.pl. I think for\r\n> subscription s2, we only tested some normal cases(which could be tested with\r\n> s1), and didn't test any error case, which means it wouldn't be automatically\r\n> disabled.\r\n> Is there any reason for creating subscription s2?\r\nRemoved the subscription s2.\r\n\r\nThis has reduced the code amount of TAP tests.\r\nKindly have a look at the v22 shared in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373D9B26F988307B0D3FE20ED3E9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Fri, 25 Feb 2022 12:52:07 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tuesday, February 22, 2022 3:03 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are a couple more review comments for v21.\r\n> \r\n> ~~~\r\n> \r\n> 1. 
worker.c - comment\r\n> \r\n> + subform = (Form_pg_subscription) GETSTRUCT(tup);\r\n> +\r\n> + /*\r\n> + * We would not be here unless this subscription's disableonerr field\r\n> + was\r\n> + * true, but check whether that field has changed in the interim.\r\n> + */\r\n> + if (!subform->subdisableonerr)\r\n> + {\r\n> + heap_freetuple(tup);\r\n> + table_close(rel, RowExclusiveLock);\r\n> + CommitTransactionCommand();\r\n> + return false;\r\n> + }\r\n> \r\n> I felt that comment belongs above the subform assignment because that is the\r\n> only reason we are getting the subform again.\r\nThis part has been removed along with the modification\r\nthat we just disable the subscription in the main processing\r\nwhen we get an error.\r\n\r\n \r\n> ~~\r\n> \r\n> 2. worker.c - subform->oid\r\n> \r\n> + /* Notify the subscription will be no longer valid */ ereport(LOG,\r\n> + errmsg(\"logical replication subscription \\\"%s\\\" will be disabled due\r\n> to an error\",\r\n> + MySubscription->name));\r\n> +\r\n> + LockSharedObject(SubscriptionRelationId, subform->oid, 0,\r\n> AccessExclusiveLock);\r\n> \r\n> Can't we just use MySubscription->oid here? We really only needed that\r\n> subform to get new option values.\r\nFixed.\r\n\r\n\r\n> ~~\r\n> \r\n> 3. worker.c - whitespace\r\n> \r\n> Your pg_indent has also changed some whitespace for parts of worker.c that\r\n> are completely unrelated to this patch. 
You might want to revert those changes.\r\nFixed.\r\n\r\nKindly have a look at v22 that took in all your comments.\r\nIt's shared in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373D9B26F988307B0D3FE20ED3E9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Fri, 25 Feb 2022 12:54:04 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "Please see below my review comments for v22.\n\n======\n\n1. Commit message\n\n\"table sync worker\" -> \"tablesync worker\"\n\n~~~\n\n2. doc/src/sgml/catalogs.sgml\n\n+ <para>\n+ If true, the subscription will be disabled when subscription\n+ workers detect any errors\n+ </para></entry>\n\nIt felt a bit strange to say \"subscription\" 2x in the sentence, but I\nam not sure how to improve it. Maybe like below?\n\nBEFORE\nIf true, the subscription will be disabled when subscription workers\ndetect any errors\n\nSUGGESTED\nIf true, the subscription will be disabled if one of its workers\ndetects an error\n\n~~~\n\n3. src/backend/replication/logical/worker.c - DisableSubscriptionOnError\n\n@@ -2802,6 +2803,69 @@ LogicalRepApplyLoop(XLogRecPtr last_received)\n }\n\n /*\n+ * Disable the current subscription, after error recovery processing.\n+ */\n+static void\n+DisableSubscriptionOnError(void)\n\nI thought the \"after error recovery processing\" part was a bit generic\nand did not really say what it was doing.\n\nBEFORE\nDisable the current subscription, after error recovery processing.\nSUGGESTED\nDisable the current subscription, after logging the error that caused\nthis function to be called.\n\n~~~\n\n4. 
src/backend/replication/logical/worker.c - start_apply

+ if (MySubscription->disableonerr)
+ {
+ DisableSubscriptionOnError();
+ return;
+ }
+
+ MemoryContextSwitchTo(ecxt);
+ PG_RE_THROW();
+ }
+ PG_END_TRY();

The current code looks correct, but I felt it is a bit tricky to
easily see the execution path after the return.

Since it will effectively just exit anyhow I think it will be simpler
just to do that explicitly right here instead of the 'return'. This
will also make the code consistent with the same 'disableonerr' logic
of the start_table_sync.

SUGGESTION
if (MySubscription->disableonerr)
{
DisableSubscriptionOnError();
proc_exit(0);
}

~~~

5. src/bin/pg_dump/pg_dump.c

@@ -4463,6 +4473,9 @@ dumpSubscription(Archive *fout, const
SubscriptionInfo *subinfo)
 if (strcmp(subinfo->subtwophasestate, two_phase_disabled) != 0)
 appendPQExpBufferStr(query, \", two_phase = on\");

+ if (strcmp(subinfo->subdisableonerr, \"f\") != 0)
+ appendPQExpBufferStr(query, \", disable_on_error = true\");
+

Although the code is correct, I think it would be more natural to set
this option as true when the user wants it true. e.g. check for \"t\"
same as 'subbinary' is doing. This way, even if there was some
unknown/corrupted value the code would do nothing, which is the
behaviour you want...

SUGGESTION
if (strcmp(subinfo->subdisableonerr, \"t\") == 0)

~~~

6. src/include/catalog/pg_subscription.h

@@ -67,6 +67,9 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId)
BKI_SHARED_RELATION BKI_ROW

 char subtwophasestate; /* Stream two-phase transactions */

+ bool subdisableonerr; /* True if occurrence of apply errors
+ * should disable the subscription */

The comment seems not quite right because it's not just about apply
errors. E.g. 
I think any error in tablesync will cause disablement\ntoo.\n\nBEFORE\nTrue if occurrence of apply errors should disable the subscription\nSUGGESTED\nTrue if a worker error should cause the subscription to be disabled\n\n~~~\n\n7. src/test/regress/sql/subscription.sql - whitespace\n\n+-- now it works\n+CREATE SUBSCRIPTION regress_testsub CONNECTION\n'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect =\nfalse, disable_on_error = false);\n+\n+\\dRs+\n+\n+ALTER SUBSCRIPTION regress_testsub SET (disable_on_error = true);\n+\n+\\dRs+\n+ALTER SUBSCRIPTION regress_testsub SET (slot_name = NONE);\n+DROP SUBSCRIPTION regress_testsub;\n+\n\nI think should be a blank line after that last \\dRs+ just like the\nother one, because it belongs logically with the code above it, not\nwith the ALTER slot_name.\n\n~~~\n\n8. src/test/subscription/t/028_disable_on_error.pl - filename\n\nThe 028 number needs to be bumped because there is already a TAP test\ncalled 028 now\n\n~~~\n\n9. src/test/subscription/t/028_disable_on_error.pl - missing test\n\nThere was no test case for the last combination where the user correct\nthe apply worker problem: E.g. After a previous error/disable of the\nsubscriber, remove the index, publish the inserts again, and check\nthey get applied properly.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 1 Mar 2022 11:49:14 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Friday, February 25, 2022 9:45 PM osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\r\n> Kindly have a look at attached the v22.\r\n> It has incorporated other improvements of TAP test, refinement of the\r\n> DisableSubscriptionOnError function and so on.\r\nThe recent commit(7a85073) has changed the subscription workers\r\nerror handling. 
So, I rebased my disable_on_error patch first\r\nfor anyone who are interested in the review.\r\n\r\nI'll incorporate incoming comments for v22 in my next version.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Tue, 1 Mar 2022 02:19:12 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tuesday, March 1, 2022 9:49 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Please see below my review comments for v22.\r\n> \r\n> ======\r\n> \r\n> 1. Commit message\r\n> \r\n> \"table sync worker\" -> \"tablesync worker\"\r\nFixed.\r\n\r\n> ~~~\r\n> \r\n> 2. doc/src/sgml/catalogs.sgml\r\n> \r\n> + <para>\r\n> + If true, the subscription will be disabled when subscription\r\n> + workers detect any errors\r\n> + </para></entry>\r\n> \r\n> It felt a bit strange to say \"subscription\" 2x in the sentence, but I am not sure\r\n> how to improve it. Maybe like below?\r\n> \r\n> BEFORE\r\n> If true, the subscription will be disabled when subscription workers detect any\r\n> errors\r\n> \r\n> SUGGESTED\r\n> If true, the subscription will be disabled if one of its workers detects an error\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 3. 
src/backend/replication/logical/worker.c - DisableSubscriptionOnError\r\n> \r\n> @@ -2802,6 +2803,69 @@ LogicalRepApplyLoop(XLogRecPtr\r\n> last_received) }\r\n> \r\n> /*\r\n> + * Disable the current subscription, after error recovery processing.\r\n> + */\r\n> +static void\r\n> +DisableSubscriptionOnError(void)\r\n> \r\n> I thought the \"after error recovery processing\" part was a bit generic and did not\r\n> really say what it was doing.\r\n> \r\n> BEFORE\r\n> Disable the current subscription, after error recovery processing.\r\n> SUGGESTED\r\n> Disable the current subscription, after logging the error that caused this\r\n> function to be called.\r\nFixed.\r\n\r\n> ~~~\r\n> \r\n> 4. src/backend/replication/logical/worker.c - start_apply\r\n> \r\n> + if (MySubscription->disableonerr)\r\n> + {\r\n> + DisableSubscriptionOnError();\r\n> + return;\r\n> + }\r\n> +\r\n> + MemoryContextSwitchTo(ecxt);\r\n> + PG_RE_THROW();\r\n> + }\r\n> + PG_END_TRY();\r\n> \r\n> The current code looks correct, but I felt it is a bit tricky to easily see the\r\n> execution path after the return.\r\n> \r\n> Since it will effectively just exit anyhow I think it will be simpler just to do that\r\n> explicitly right here instead of the 'return'. This will also make the code\r\n> consistent with the same 'disableonerr' logic of the start_start_sync.\r\n> \r\n> SUGGESTION\r\n> if (MySubscription->disableonerr)\r\n> {\r\n> DisableSubscriptionOnError();\r\n> proc_exit(0);\r\n> }\r\nFixed.\r\n\r\n> ~~~\r\n> \r\n> 5. 
src/bin/pg_dump/pg_dump.c\r\n> \r\n> @@ -4463,6 +4473,9 @@ dumpSubscription(Archive *fout, const\r\n> SubscriptionInfo *subinfo)\r\n> if (strcmp(subinfo->subtwophasestate, two_phase_disabled) != 0)\r\n> appendPQExpBufferStr(query, \", two_phase = on\");\r\n> \r\n> + if (strcmp(subinfo->subdisableonerr, \"f\") != 0)\r\n> + appendPQExpBufferStr(query, \", disable_on_error = true\");\r\n> +\r\n> \r\n> Although the code is correct, I think it would be more natural to set this option\r\n> as true when the user wants it true. e.g. check for \"t\"\r\n> same as 'subbinary' is doing. This way, even if there was some\r\n> unknown/corrupted value the code would do nothing, which is the behaviour\r\n> you want...\r\n> \r\n> SUGGESTION\r\n> if (strcmp(subinfo->subdisableonerr, \"t\") == 0)\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 6. src/include/catalog/pg_subscription.h\r\n> \r\n> @@ -67,6 +67,9 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId)\r\n> BKI_SHARED_RELATION BKI_ROW\r\n> \r\n> char subtwophasestate; /* Stream two-phase transactions */\r\n> \r\n> + bool subdisableonerr; /* True if occurrence of apply errors\r\n> + * should disable the subscription */\r\n> \r\n> The comment seems not quite right because it's not just about apply errors. E.g.\r\n> I think any error in tablesync will cause disablement too.\r\n> \r\n> BEFORE\r\n> True if occurrence of apply errors should disable the subscription SUGGESTED\r\n> True if a worker error should cause the subscription to be disabled\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 7. 
src/test/regress/sql/subscription.sql - whitespace\r\n> \r\n> +-- now it works\r\n> +CREATE SUBSCRIPTION regress_testsub CONNECTION\r\n> 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect = false,\r\n> disable_on_error = false);\r\n> +\r\n> +\\dRs+\r\n> +\r\n> +ALTER SUBSCRIPTION regress_testsub SET (disable_on_error = true);\r\n> +\r\n> +\\dRs+\r\n> +ALTER SUBSCRIPTION regress_testsub SET (slot_name = NONE); DROP\r\n> +SUBSCRIPTION regress_testsub;\r\n> +\r\n> \r\n> I think should be a blank line after that last \\dRs+ just like the other one,\r\n> because it belongs logically with the code above it, not with the ALTER\r\n> slot_name.\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 8. src/test/subscription/t/028_disable_on_error.pl - filename\r\n> \r\n> The 028 number needs to be bumped because there is already a TAP test\r\n> called 028 now\r\nThis is already done in v22, so I've skipped this.\r\n\r\n> ~~~\r\n> \r\n> 9. src/test/subscription/t/028_disable_on_error.pl - missing test\r\n> \r\n> There was no test case for the last combination where the user correct the\r\n> apply worker problem: E.g. After a previous error/disable of the subscriber,\r\n> remove the index, publish the inserts again, and check they get applied\r\n> properly.\r\nFixed.\r\n\r\nAttached the updated version v24.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Tue, 1 Mar 2022 05:40:41 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "Please see below my review comments for v24.\n\n======\n\n1. 
src/backend/replication/logical/worker.c - start_table_sync\n\n+ /* Report the worker failed during table synchronization */\n+ pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());\n\n(This review comment is just FYI in case you did not do this deliberately)\n\nFYI, you didn't really need to call am_tablesync_worker() here because\nit is already asserted for the sync phase that it MUST be a tablesync\nworker.\n\nOTOH, IMO it documents the purpose of the parm so if it was deliberate\nthen that is OK too.\n\n~~~\n\n2. src/backend/replication/logical/worker.c - start_table_sync\n\n+ PG_CATCH();\n+ {\n+ /*\n+ * Abort the current transaction so that we send the stats message in\n+ * an idle state.\n+ */\n+ AbortOutOfAnyTransaction();\n+\n+ /* Report the worker failed during table synchronization */\n+ pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());\n+\n\n[Maybe you will say that this review comment is unrelated to\ndisable_on_err, but since this is a totally new/refactored function\nthen I think maybe there is no problem to make this change at the same\ntime. Anyway there is no function change, it is just rearranging some\ncomments.]\n\nI felt the separation of those 2 statements and comments makes that\ncode less clean than it could/should be. IMO they should be grouped\ntogether.\n\nSUGGESTED\n/*\n* Report the worker failed during table synchronization. Abort the\n* current transaction so that the stats message is sent in an idle\n* state.\n*/\nAbortOutOfAnyTransaction();\npgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());\n\n~~~\n\n3. 
src/backend/replication/logical/worker.c - start_apply\n\n+ /*\n+ * Abort the current transaction so that we send the stats message in\n+ * an idle state.\n+ */\n+ AbortOutOfAnyTransaction();\n+\n+ /* Report the worker failed during the application of the change */\n+ pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());\n\nSame comment as #2 above, but this code fragment is in start_apply function.\n\n~~~\n\n4. src/test/subscription/t/029_disable_on_error.pl - comment\n\n+# Drop the unique index on the sub and re-enabled the subscription.\n+# Then, confirm that we have finished the apply.\n\nSUGGESTED (tweak the comment wording)\n# Drop the unique index on the sub and re-enable the subscription.\n# Then, confirm that the previously failing insert was applied OK.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 2 Mar 2022 11:34:17 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wednesday, March 2, 2022 9:34 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Please see below my review comments for v24.\r\nThank you for checking my patch !\r\n\r\n \r\n> ======\r\n> \r\n> 1. src/backend/replication/logical/worker.c - start_table_sync\r\n> \r\n> + /* Report the worker failed during table synchronization */\r\n> + pgstat_report_subscription_error(MySubscription->oid,\r\n> + !am_tablesync_worker());\r\n> \r\n> (This review comment is just FYI in case you did not do this deliberately)\r\n> \r\n> FYI, you didn't really need to call am_tablesync_worker() here because it is\r\n> already asserted for the sync phase that it MUST be a tablesync worker.\r\n> \r\n> OTOH, IMO it documents the purpose of the parm so if it was deliberate then\r\n> that is OK too.\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 2. 
src/backend/replication/logical/worker.c - start_table_sync\r\n> \r\n> + PG_CATCH();\r\n> + {\r\n> + /*\r\n> + * Abort the current transaction so that we send the stats message in\r\n> + * an idle state.\r\n> + */\r\n> + AbortOutOfAnyTransaction();\r\n> +\r\n> + /* Report the worker failed during table synchronization */\r\n> + pgstat_report_subscription_error(MySubscription->oid,\r\n> + !am_tablesync_worker());\r\n> +\r\n> \r\n> [Maybe you will say that this review comment is unrelated to disable_on_err,\r\n> but since this is a totally new/refactored function then I think maybe there is no\r\n> problem to make this change at the same time. Anyway there is no function\r\n> change, it is just rearranging some comments.]\r\n> \r\n> I felt the separation of those 2 statements and comments makes that code less\r\n> clean than it could/should be. IMO they should be grouped together.\r\n> \r\n> SUGGESTED\r\n> /*\r\n> * Report the worker failed during table synchronization. Abort the\r\n> * current transaction so that the stats message is sent in an idle\r\n> * state.\r\n> */\r\n> AbortOutOfAnyTransaction();\r\n> pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_work\r\n> er());\r\nI think this is OK. Thank you for suggestion. Fixed.\r\n\r\n\r\n\r\n> ~~~\r\n> \r\n> 3. src/backend/replication/logical/worker.c - start_apply\r\n> \r\n> + /*\r\n> + * Abort the current transaction so that we send the stats message in\r\n> + * an idle state.\r\n> + */\r\n> + AbortOutOfAnyTransaction();\r\n> +\r\n> + /* Report the worker failed during the application of the change */\r\n> + pgstat_report_subscription_error(MySubscription->oid,\r\n> + !am_tablesync_worker());\r\n> \r\n> Same comment as #2 above, but this code fragment is in start_apply function.\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 4. 
src/test/subscription/t/029_disable_on_error.pl - comment\r\n> \r\n> +# Drop the unique index on the sub and re-enabled the subscription.\r\n> +# Then, confirm that we have finished the apply.\r\n> \r\n> SUGGESTED (tweak the comment wording)\r\n> # Drop the unique index on the sub and re-enable the subscription.\r\n> # Then, confirm that the previously failing insert was applied OK.\r\nFixed.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Wed, 2 Mar 2022 03:12:54 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wed, Mar 2, 2022 at 9:34 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Please see below my review comments for v24.\n>\n> ======\n>\n> 1. src/backend/replication/logical/worker.c - start_table_sync\n>\n> + /* Report the worker failed during table synchronization */\n> + pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());\n>\n> (This review comment is just FYI in case you did not do this deliberately)\n>\n> FYI, you didn't really need to call am_tablesync_worker() here because\n> it is already asserted for the sync phase that it MUST be a tablesync\n> worker.\n>\n> OTOH, IMO it documents the purpose of the parm so if it was deliberate\n> then that is OK too.\n>\n> ~~~\n>\n> 2. 
src/backend/replication/logical/worker.c - start_table_sync\n>\n> + PG_CATCH();\n> + {\n> + /*\n> + * Abort the current transaction so that we send the stats message in\n> + * an idle state.\n> + */\n> + AbortOutOfAnyTransaction();\n> +\n> + /* Report the worker failed during table synchronization */\n> + pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());\n> +\n>\n> [Maybe you will say that this review comment is unrelated to\n> disable_on_err, but since this is a totally new/refactored function\n> then I think maybe there is no problem to make this change at the same\n> time. Anyway there is no function change, it is just rearranging some\n> comments.]\n>\n> I felt the separation of those 2 statements and comments makes that\n> code less clean than it could/should be. IMO they should be grouped\n> together.\n>\n> SUGGESTED\n> /*\n> * Report the worker failed during table synchronization. Abort the\n> * current transaction so that the stats message is sent in an idle\n> * state.\n> */\n> AbortOutOfAnyTransaction();\n> pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());\n\nAfter more thoughts, should we do both AbortOutOfAnyTransaction() and\nerror message handling while holding interrupts? 
That is,\n\nHOLD_INTERRUPTS();\nEmitErrorReport();\nFlushErrorState();\nAbortOutOfAny Transaction();\nRESUME_INTERRUPTS();\npgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());\n\nI think it's better that we do clean up first and then do other works\nsuch as sending the message to the stats collector and updating the\ncatalog.\n\nHere are some comments on v24 patch:\n\n+ /* Look up our subscription in the catalogs */\n+ tup = SearchSysCacheCopy2(SUBSCRIPTIONNAME, MyDatabaseId,\n+\nCStringGetDatum(MySubscription->name));\n\ns/catalogs/catalog/\n\nWhy don't we use SUBSCRIPTIONOID with MySubscription->oid?\n\n---\n+ if (!HeapTupleIsValid(tup))\n+ ereport(ERROR,\n+ errcode(ERRCODE_UNDEFINED_OBJECT),\n+ errmsg(\"subscription \\\"%s\\\" does not exist\",\n+ MySubscription->name));\n\nI think we should use elog() here rather than ereport() since it's a\nshould-not-happen error.\n\n---\n+ /* Notify the subscription will be no longer valid */\n\nI'd suggest rephrasing it to like \"Notify the subscription will be\ndisabled\". 
(the subscription is still valid actually, but just\ndisabled).\n\n---\n+ /* Notify the subscription will be no longer valid */\n+ ereport(LOG,\n+ errmsg(\"logical replication subscription\n\\\"%s\\\" will be disabled due to an error\",\n+ MySubscription->name));\n+\n\nI think we can report the log at the end of this function rather than\nduring the transaction.\n\n---\n+my $cmd = qq(\n+CREATE TABLE tbl (i INT);\n+ALTER TABLE tbl REPLICA IDENTITY FULL;\n+CREATE INDEX tbl_idx ON tbl(i));\n\nI think we don't need to set REPLICA IDENTITY FULL to this table since\nthere is notupdate/delete.\n\nDo we need tbl_idx?\n\n---\n+$cmd = qq(\n+SELECT COUNT(1) = 1 FROM pg_catalog.pg_subscription_rel sr\n+WHERE sr.srsubstate IN ('s', 'r'));\n+$node_subscriber->poll_query_until('postgres', $cmd);\n\nIt seems better to add a condition of srrelid just in case.\n\n---\n+$cmd = qq(\n+SELECT count(1) = 1 FROM pg_catalog.pg_subscription s\n+WHERE s.subname = 'sub' AND s.subenabled IS FALSE);\n+$node_subscriber->poll_query_until('postgres', $cmd)\n+ or die \"Timed out while waiting for subscriber to be disabled\";\n\nI think that it's more natural to directly check the subscription's\nsubenabled. For example:\n\nSELECT subenabled = false FROM pg_subscription WHERE subname = 'sub';\n\n---\n+$cmd = q(ALTER SUBSCRIPTION sub ENABLE);\n+$node_subscriber->safe_psql('postgres', $cmd);\n+$cmd = q(SELECT COUNT(1) = 3 FROM tbl WHERE i = 3);\n+$node_subscriber->poll_query_until('postgres', $cmd)\n+ or die \"Timed out while waiting for applying\";\n\nI think it's better to wait for the subscriber to catch up and check\nthe query result instead of using poll_query_until() so that we can\ncheck the query result in case where the test fails.\n\n---\n+$cmd = qq(DROP INDEX tbl_unique);\n+$node_subscriber->safe_psql('postgres', $cmd);\n\nIn the newly added tap tests, all queries are consistently assigned to\n$cmd and executed even when the query is used only once. 
It seems a\ndifferent style from the one in other tap tests. Is there any reason\nwhy we do this style for all queries in the tap tests?\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 2 Mar 2022 12:46:43 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wednesday, March 2, 2022 12:47 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> After more thoughts, should we do both AbortOutOfAnyTransaction() and error\r\n> message handling while holding interrupts? That is,\r\n> \r\n> HOLD_INTERRUPTS();\r\n> EmitErrorReport();\r\n> FlushErrorState();\r\n> AbortOutOfAny Transaction();\r\n> RESUME_INTERRUPTS();\r\n> pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_work\r\n> er());\r\n> \r\n> I think it's better that we do clean up first and then do other works such as\r\n> sending the message to the stats collector and updating the catalog.\r\nI agree. Fixed. 
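For context on where that stats message ends up: the counts reported via pgstat_report_subscription_error() surface in the pg_stat_subscription_stats view from the related subscription-statistics work, so the effect of the agreed ordering can be observed from SQL. A sketch (the subscription name 'sub' is the one used in the thread's test script):

```sql
-- How many times the apply worker and tablesync workers have failed
-- for a given subscription (view/columns as in the PG15-era stats work)
SELECT subname, apply_error_count, sync_error_count
FROM pg_stat_subscription_stats
WHERE subname = 'sub';
```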
Along with this change, I corrected the header comment of\r\nDisableSubscriptionOnError, too.\r\n\r\n\r\n> Here are some comments on v24 patch:\r\n> \r\n> + /* Look up our subscription in the catalogs */\r\n> + tup = SearchSysCacheCopy2(SUBSCRIPTIONNAME, MyDatabaseId,\r\n> +\r\n> CStringGetDatum(MySubscription->name));\r\n> \r\n> s/catalogs/catalog/\r\n> \r\n> Why don't we use SUBSCRIPTIONOID with MySubscription->oid?\r\nChanged.\r\n\r\n\r\n> ---\r\n> + if (!HeapTupleIsValid(tup))\r\n> + ereport(ERROR,\r\n> + errcode(ERRCODE_UNDEFINED_OBJECT),\r\n> + errmsg(\"subscription \\\"%s\\\" does not\r\n> exist\",\r\n> + MySubscription->name));\r\n> \r\n> I think we should use elog() here rather than ereport() since it's a\r\n> should-not-happen error.\r\nFixed\r\n\r\n\r\n> ---\r\n> + /* Notify the subscription will be no longer valid */\r\n> \r\n> I'd suggest rephrasing it to like \"Notify the subscription will be disabled\". (the\r\n> subscription is still valid actually, but just disabled).\r\nFixed. Also, I've made this sentence past one, because of the code place\r\nchange below.\r\n\r\n \r\n> ---\r\n> + /* Notify the subscription will be no longer valid */\r\n> + ereport(LOG,\r\n> + errmsg(\"logical replication subscription\r\n> \\\"%s\\\" will be disabled due to an error\",\r\n> + MySubscription->name));\r\n> +\r\n> \r\n> I think we can report the log at the end of this function rather than during the\r\n> transaction.\r\nFixed. 
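Since DisableSubscriptionOnError() ends up as an ordinary pg_subscription catalog update, the result is directly visible from SQL, which is also how the TAP test checks it. An illustrative query (subscription name 'sub' assumed from the test):

```sql
-- After the worker hits an error with disable_on_error set,
-- subenabled flips to false while subdisableonerr stays true
SELECT subname, subenabled, subdisableonerr
FROM pg_subscription
WHERE subname = 'sub';
```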
In this case, I needed to adjust the comment to indicate the processing\r\nto disable the sub has *completed* as well.\r\n\r\n> ---\r\n> +my $cmd = qq(\r\n> +CREATE TABLE tbl (i INT);\r\n> +ALTER TABLE tbl REPLICA IDENTITY FULL;\r\n> +CREATE INDEX tbl_idx ON tbl(i));\r\n> \r\n> I think we don't need to set REPLICA IDENTITY FULL to this table since there is\r\n> notupdate/delete.\r\n> \r\n> Do we need tbl_idx?\r\nRemoved both the replica identity and tbl_idx;\r\n\r\n\r\n> ---\r\n> +$cmd = qq(\r\n> +SELECT COUNT(1) = 1 FROM pg_catalog.pg_subscription_rel sr WHERE\r\n> +sr.srsubstate IN ('s', 'r'));\r\n> +$node_subscriber->poll_query_until('postgres', $cmd);\r\n> \r\n> It seems better to add a condition of srrelid just in case.\r\nMakes sense. Fixed.\r\n\r\n\r\n> ---\r\n> +$cmd = qq(\r\n> +SELECT count(1) = 1 FROM pg_catalog.pg_subscription s WHERE\r\n> s.subname =\r\n> +'sub' AND s.subenabled IS FALSE);\r\n> +$node_subscriber->poll_query_until('postgres', $cmd)\r\n> + or die \"Timed out while waiting for subscriber to be disabled\";\r\n> \r\n> I think that it's more natural to directly check the subscription's subenabled.\r\n> For example:\r\n> \r\n> SELECT subenabled = false FROM pg_subscription WHERE subname = 'sub';\r\nFixed. 
I modified another code similar to this for tablesync error as well.\r\n\r\n\r\n> ---\r\n> +$cmd = q(ALTER SUBSCRIPTION sub ENABLE);\r\n> +$node_subscriber->safe_psql('postgres', $cmd); $cmd = q(SELECT\r\n> COUNT(1)\r\n> += 3 FROM tbl WHERE i = 3);\r\n> +$node_subscriber->poll_query_until('postgres', $cmd)\r\n> + or die \"Timed out while waiting for applying\";\r\n> \r\n> I think it's better to wait for the subscriber to catch up and check the query\r\n> result instead of using poll_query_until() so that we can check the query result\r\n> in case where the test fails.\r\nI modified the code to wait for the subscriber and deleted poll_query_until.\r\nAlso, when I consider the test failure for this test as you mentioned,\r\nexpecting and checking the number of return value that equals 3\r\nwould be better. So, I expressed this point in my test as well,\r\naccording to your advice.\r\n\r\n\r\n> ---\r\n> +$cmd = qq(DROP INDEX tbl_unique);\r\n> +$node_subscriber->safe_psql('postgres', $cmd);\r\n> \r\n> In the newly added tap tests, all queries are consistently assigned to $cmd and\r\n> executed even when the query is used only once. It seems a different style from\r\n> the one in other tap tests. Is there any reason why we do this style for all queries\r\n> in the tap tests?\r\nFixed. 
I removed the 'cmd' variable itself.\r\n\r\n\r\nAttached an updated patch v26.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Wed, 2 Mar 2022 09:38:47 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wed, Mar 2, 2022 at 6:38 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Wednesday, March 2, 2022 12:47 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > After more thoughts, should we do both AbortOutOfAnyTransaction() and error\n> > message handling while holding interrupts? That is,\n> >\n> > HOLD_INTERRUPTS();\n> > EmitErrorReport();\n> > FlushErrorState();\n> > AbortOutOfAny Transaction();\n> > RESUME_INTERRUPTS();\n> > pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_work\n> > er());\n> >\n> > I think it's better that we do clean up first and then do other works such as\n> > sending the message to the stats collector and updating the catalog.\n> I agree. Fixed. 
Along with this change, I corrected the header comment of\n> DisableSubscriptionOnError, too.\n>\n>\n> > Here are some comments on v24 patch:\n> >\n> > + /* Look up our subscription in the catalogs */\n> > + tup = SearchSysCacheCopy2(SUBSCRIPTIONNAME, MyDatabaseId,\n> > +\n> > CStringGetDatum(MySubscription->name));\n> >\n> > s/catalogs/catalog/\n> >\n> > Why don't we use SUBSCRIPTIONOID with MySubscription->oid?\n> Changed.\n>\n>\n> > ---\n> > + if (!HeapTupleIsValid(tup))\n> > + ereport(ERROR,\n> > + errcode(ERRCODE_UNDEFINED_OBJECT),\n> > + errmsg(\"subscription \\\"%s\\\" does not\n> > exist\",\n> > + MySubscription->name));\n> >\n> > I think we should use elog() here rather than ereport() since it's a\n> > should-not-happen error.\n> Fixed\n>\n>\n> > ---\n> > + /* Notify the subscription will be no longer valid */\n> >\n> > I'd suggest rephrasing it to like \"Notify the subscription will be disabled\". (the\n> > subscription is still valid actually, but just disabled).\n> Fixed. Also, I've made this sentence past one, because of the code place\n> change below.\n>\n>\n> > ---\n> > + /* Notify the subscription will be no longer valid */\n> > + ereport(LOG,\n> > + errmsg(\"logical replication subscription\n> > \\\"%s\\\" will be disabled due to an error\",\n> > + MySubscription->name));\n> > +\n> >\n> > I think we can report the log at the end of this function rather than during the\n> > transaction.\n> Fixed. 
In this case, I needed to adjust the comment to indicate the processing\n> to disable the sub has *completed* as well.\n>\n> > ---\n> > +my $cmd = qq(\n> > +CREATE TABLE tbl (i INT);\n> > +ALTER TABLE tbl REPLICA IDENTITY FULL;\n> > +CREATE INDEX tbl_idx ON tbl(i));\n> >\n> > I think we don't need to set REPLICA IDENTITY FULL to this table since there is\n> > notupdate/delete.\n> >\n> > Do we need tbl_idx?\n> Removed both the replica identity and tbl_idx;\n>\n>\n> > ---\n> > +$cmd = qq(\n> > +SELECT COUNT(1) = 1 FROM pg_catalog.pg_subscription_rel sr WHERE\n> > +sr.srsubstate IN ('s', 'r'));\n> > +$node_subscriber->poll_query_until('postgres', $cmd);\n> >\n> > It seems better to add a condition of srrelid just in case.\n> Makes sense. Fixed.\n>\n>\n> > ---\n> > +$cmd = qq(\n> > +SELECT count(1) = 1 FROM pg_catalog.pg_subscription s WHERE\n> > s.subname =\n> > +'sub' AND s.subenabled IS FALSE);\n> > +$node_subscriber->poll_query_until('postgres', $cmd)\n> > + or die \"Timed out while waiting for subscriber to be disabled\";\n> >\n> > I think that it's more natural to directly check the subscription's subenabled.\n> > For example:\n> >\n> > SELECT subenabled = false FROM pg_subscription WHERE subname = 'sub';\n> Fixed. 
I modified another code similar to this for tablesync error as well.\n>\n>\n> > ---\n> > +$cmd = q(ALTER SUBSCRIPTION sub ENABLE);\n> > +$node_subscriber->safe_psql('postgres', $cmd); $cmd = q(SELECT\n> > COUNT(1)\n> > += 3 FROM tbl WHERE i = 3);\n> > +$node_subscriber->poll_query_until('postgres', $cmd)\n> > + or die \"Timed out while waiting for applying\";\n> >\n> > I think it's better to wait for the subscriber to catch up and check the query\n> > result instead of using poll_query_until() so that we can check the query result\n> > in case where the test fails.\n> I modified the code to wait for the subscriber and deleted poll_query_until.\n> Also, when I consider the test failure for this test as you mentioned,\n> expecting and checking the number of return value that equals 3\n> would be better. So, I expressed this point in my test as well,\n> according to your advice.\n>\n>\n> > ---\n> > +$cmd = qq(DROP INDEX tbl_unique);\n> > +$node_subscriber->safe_psql('postgres', $cmd);\n> >\n> > In the newly added tap tests, all queries are consistently assigned to $cmd and\n> > executed even when the query is used only once. It seems a different style from\n> > the one in other tap tests. Is there any reason why we do this style for all queries\n> > in the tap tests?\n> Fixed. I removed the 'cmd' variable itself.\n>\n>\n> Attached an updated patch v26.\n\nThank you for updating the patch.\n\nHere are some comments on v26 patch:\n\n+/*\n+ * Disable the current subscription.\n+ */\n+static void\n+DisableSubscriptionOnError(void)\n\nThis function now just updates the pg_subscription catalog so can we\nmove it to pg_subscritpion.c while having this function accept the\nsubscription OID to disable? 
If you agree, the function comment will\nalso need to be updated.\n\n---\n+ /*\n+ * First, ensure that we log the error message so\nthat it won't be\n+ * lost if some other internal error occurs in the\nfollowing code.\n+ * Then, abort the current transaction and send the\nstats message of\n+ * the table synchronization failure in an idle state.\n+ */\n+ HOLD_INTERRUPTS();\n+ EmitErrorReport();\n+ FlushErrorState();\n+ AbortOutOfAnyTransaction();\n+ RESUME_INTERRUPTS();\n+ pgstat_report_subscription_error(MySubscription->oid, false);\n+\n+ if (MySubscription->disableonerr)\n+ {\n+ DisableSubscriptionOnError();\n+ proc_exit(0);\n+ }\n+\n+ PG_RE_THROW();\n\nIf the disableonerr is false, the same error is reported twice. Also,\nthe code in PG_CATCH() in both start_apply() and start_table_sync()\nare almost the same. Can we create a common function to do post-error\nprocessing?\n\nThe worker should exit with return code 1.\n\nI've attached a fixup patch for changes to worker.c for your\nreference. Feel free to adopt the changes.\n\n---\n+\n+# Confirm that we have finished the table sync.\n+is( $node_subscriber->safe_psql(\n+ 'postgres', qq(SELECT MAX(i), COUNT(*) FROM tbl)),\n+ \"1|3\",\n+ \"subscription sub replicated data\");\n+\n\nCan we store the result to a local variable to check? 
I think it's\nmore consistent with other tap tests.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Fri, 4 Mar 2022 15:55:00 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Fri, Mar 4, 2022 at 5:55 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Mar 2, 2022 at 6:38 PM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > On Wednesday, March 2, 2022 12:47 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > After more thoughts, should we do both AbortOutOfAnyTransaction() and error\n> > > message handling while holding interrupts? That is,\n> > >\n> > > HOLD_INTERRUPTS();\n> > > EmitErrorReport();\n> > > FlushErrorState();\n> > > AbortOutOfAny Transaction();\n> > > RESUME_INTERRUPTS();\n> > > pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_work\n> > > er());\n> > >\n> > > I think it's better that we do clean up first and then do other works such as\n> > > sending the message to the stats collector and updating the catalog.\n> > I agree. Fixed. 
Along with this change, I corrected the header comment of\n> > DisableSubscriptionOnError, too.\n> >\n> >\n> > > Here are some comments on v24 patch:\n> > >\n> > > + /* Look up our subscription in the catalogs */\n> > > + tup = SearchSysCacheCopy2(SUBSCRIPTIONNAME, MyDatabaseId,\n> > > +\n> > > CStringGetDatum(MySubscription->name));\n> > >\n> > > s/catalogs/catalog/\n> > >\n> > > Why don't we use SUBSCRIPTIONOID with MySubscription->oid?\n> > Changed.\n> >\n> >\n> > > ---\n> > > + if (!HeapTupleIsValid(tup))\n> > > + ereport(ERROR,\n> > > + errcode(ERRCODE_UNDEFINED_OBJECT),\n> > > + errmsg(\"subscription \\\"%s\\\" does not\n> > > exist\",\n> > > + MySubscription->name));\n> > >\n> > > I think we should use elog() here rather than ereport() since it's a\n> > > should-not-happen error.\n> > Fixed\n> >\n> >\n> > > ---\n> > > + /* Notify the subscription will be no longer valid */\n> > >\n> > > I'd suggest rephrasing it to like \"Notify the subscription will be disabled\". (the\n> > > subscription is still valid actually, but just disabled).\n> > Fixed. Also, I've made this sentence past one, because of the code place\n> > change below.\n> >\n> >\n> > > ---\n> > > + /* Notify the subscription will be no longer valid */\n> > > + ereport(LOG,\n> > > + errmsg(\"logical replication subscription\n> > > \\\"%s\\\" will be disabled due to an error\",\n> > > + MySubscription->name));\n> > > +\n> > >\n> > > I think we can report the log at the end of this function rather than during the\n> > > transaction.\n> > Fixed. 
In this case, I needed to adjust the comment to indicate the processing\n> > to disable the sub has *completed* as well.\n> >\n> > > ---\n> > > +my $cmd = qq(\n> > > +CREATE TABLE tbl (i INT);\n> > > +ALTER TABLE tbl REPLICA IDENTITY FULL;\n> > > +CREATE INDEX tbl_idx ON tbl(i));\n> > >\n> > > I think we don't need to set REPLICA IDENTITY FULL to this table since there is\n> > > notupdate/delete.\n> > >\n> > > Do we need tbl_idx?\n> > Removed both the replica identity and tbl_idx;\n> >\n> >\n> > > ---\n> > > +$cmd = qq(\n> > > +SELECT COUNT(1) = 1 FROM pg_catalog.pg_subscription_rel sr WHERE\n> > > +sr.srsubstate IN ('s', 'r'));\n> > > +$node_subscriber->poll_query_until('postgres', $cmd);\n> > >\n> > > It seems better to add a condition of srrelid just in case.\n> > Makes sense. Fixed.\n> >\n> >\n> > > ---\n> > > +$cmd = qq(\n> > > +SELECT count(1) = 1 FROM pg_catalog.pg_subscription s WHERE\n> > > s.subname =\n> > > +'sub' AND s.subenabled IS FALSE);\n> > > +$node_subscriber->poll_query_until('postgres', $cmd)\n> > > + or die \"Timed out while waiting for subscriber to be disabled\";\n> > >\n> > > I think that it's more natural to directly check the subscription's subenabled.\n> > > For example:\n> > >\n> > > SELECT subenabled = false FROM pg_subscription WHERE subname = 'sub';\n> > Fixed. 
I modified another code similar to this for tablesync error as well.\n> >\n> >\n> > > ---\n> > > +$cmd = q(ALTER SUBSCRIPTION sub ENABLE);\n> > > +$node_subscriber->safe_psql('postgres', $cmd); $cmd = q(SELECT\n> > > COUNT(1)\n> > > += 3 FROM tbl WHERE i = 3);\n> > > +$node_subscriber->poll_query_until('postgres', $cmd)\n> > > + or die \"Timed out while waiting for applying\";\n> > >\n> > > I think it's better to wait for the subscriber to catch up and check the query\n> > > result instead of using poll_query_until() so that we can check the query result\n> > > in case where the test fails.\n> > I modified the code to wait for the subscriber and deleted poll_query_until.\n> > Also, when I consider the test failure for this test as you mentioned,\n> > expecting and checking the number of return value that equals 3\n> > would be better. So, I expressed this point in my test as well,\n> > according to your advice.\n> >\n> >\n> > > ---\n> > > +$cmd = qq(DROP INDEX tbl_unique);\n> > > +$node_subscriber->safe_psql('postgres', $cmd);\n> > >\n> > > In the newly added tap tests, all queries are consistently assigned to $cmd and\n> > > executed even when the query is used only once. It seems a different style from\n> > > the one in other tap tests. Is there any reason why we do this style for all queries\n> > > in the tap tests?\n> > Fixed. I removed the 'cmd' variable itself.\n> >\n> >\n> > Attached an updated patch v26.\n>\n> Thank you for updating the patch.\n>\n> Here are some comments on v26 patch:\n>\n> +/*\n> + * Disable the current subscription.\n> + */\n> +static void\n> +DisableSubscriptionOnError(void)\n>\n> This function now just updates the pg_subscription catalog so can we\n> move it to pg_subscritpion.c while having this function accept the\n> subscription OID to disable? 
If you agree, the function comment will\n> also need to be updated.\n>\n> ---\n> + /*\n> + * First, ensure that we log the error message so\n> that it won't be\n> + * lost if some other internal error occurs in the\n> following code.\n> + * Then, abort the current transaction and send the\n> stats message of\n> + * the table synchronization failure in an idle state.\n> + */\n> + HOLD_INTERRUPTS();\n> + EmitErrorReport();\n> + FlushErrorState();\n> + AbortOutOfAnyTransaction();\n> + RESUME_INTERRUPTS();\n> + pgstat_report_subscription_error(MySubscription->oid, false);\n> +\n> + if (MySubscription->disableonerr)\n> + {\n> + DisableSubscriptionOnError();\n> + proc_exit(0);\n> + }\n> +\n> + PG_RE_THROW();\n>\n> If the disableonerr is false, the same error is reported twice. Also,\n> the code in PG_CATCH() in both start_apply() and start_table_sync()\n> are almost the same. Can we create a common function to do post-error\n> processing?\n>\n> The worker should exit with return code 1.\n>\n> I've attached a fixup patch for changes to worker.c for your\n> reference. Feel free to adopt the changes.\n\nThe way that common function is implemented required removal of the\nexisting PG_RE_THROW logic, which in turn was only possible using\nspecial knowledge that this just happens to be the last try/catch\nblock for the apply worker. Yes, I believe everything will work ok,\nbut it just seemed like a step too far for me to change the throw\nlogic. 
I feel that once you get to the point of having to write\nspecial comments in the code to explain \"why we can get away with\ndoing this...\" then that is an indication that perhaps it's not really\nthe best way...\n\nIs there some alternative way to share common code, but without having\nto change the existing throw error logic to do so?\n\nOTOH, maybe others think it is ok?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 7 Mar 2022 10:25:21 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wed, Mar 2, 2022 5:39 PM osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\r\n> \r\n> Attached an updated patch v26.\r\n> \r\n\r\nThanks for your patch. A comment on the document.\r\n\r\n@@ -7771,6 +7771,16 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l\r\n \r\n <row>\r\n <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n+ <structfield>subdisableonerr</structfield> <type>bool</type>\r\n+ </para>\r\n+ <para>\r\n+ If true, the subscription will be disabled if one of its workers\r\n+ detects an error\r\n+ </para></entry>\r\n+ </row>\r\n+\r\n+ <row>\r\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n <structfield>subconninfo</structfield> <type>text</type>\r\n </para>\r\n <para>\r\n\r\nThe document for \"subdisableonerr\" option is placed after \"The following\r\nparameters control what happens during subscription creation: \". 
I think it\r\nshould be placed after \"The following parameters control the subscription's\r\nreplication behavior after it has been created: \", right?\r\n\r\nRegards,\r\nShi yu\r\n", "msg_date": "Mon, 7 Mar 2022 03:00:39 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Friday, March 4, 2022 3:55 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> Thank you for updating the patch.\r\n> \r\n> Here are some comments on v26 patch:\r\nThank you for your review !\r\n\r\n\r\n\r\n> +/*\r\n> + * Disable the current subscription.\r\n> + */\r\n> +static void\r\n> +DisableSubscriptionOnError(void)\r\n> \r\n> This function now just updates the pg_subscription catalog so can we move it\r\n> to pg_subscritpion.c while having this function accept the subscription OID to\r\n> disable? If you agree, the function comment will also need to be updated.\r\nAgreed. Fixed.\r\n\r\n\r\n> ---\r\n> + /*\r\n> + * First, ensure that we log the error message so\r\n> that it won't be\r\n> + * lost if some other internal error occurs in the\r\n> following code.\r\n> + * Then, abort the current transaction and send the\r\n> stats message of\r\n> + * the table synchronization failure in an idle state.\r\n> + */\r\n> + HOLD_INTERRUPTS();\r\n> + EmitErrorReport();\r\n> + FlushErrorState();\r\n> + AbortOutOfAnyTransaction();\r\n> + RESUME_INTERRUPTS();\r\n> + pgstat_report_subscription_error(MySubscription->oid,\r\n> + false);\r\n> +\r\n> + if (MySubscription->disableonerr)\r\n> + {\r\n> + DisableSubscriptionOnError();\r\n> + proc_exit(0);\r\n> + }\r\n> +\r\n> + PG_RE_THROW();\r\n> \r\n> If the disableonerr is false, the same error is reported twice. Also, the code in\r\n> PG_CATCH() in both start_apply() and start_table_sync() are almost the same.\r\n> Can we create a common function to do post-error processing?\r\nYes. 
Also, calling PG_RE_THROW wasn't appropriate,\r\nbecause in the previous v26, for the second error you mentioned,\r\nthe patch didn't call errstart when disable_on_error = false.\r\nThis was introduced by recent patch refactoring around this code and the rebase\r\nof this patch, but has been fixed by your suggestion.\r\n\r\n\r\n> The worker should exit with return code 1.\r\n> I've attached a fixup patch for changes to worker.c for your reference. Feel free\r\n> to adopt the changes.\r\nYes. I adopted almost all of your suggestion.\r\nOne thing I fixed was a comment that mentioned table sync\r\nin worker_post_error_processing(), because start_apply()\r\nalso uses the function.\r\n\r\n\r\n> \r\n> ---\r\n> +\r\n> +# Confirm that we have finished the table sync.\r\n> +is( $node_subscriber->safe_psql(\r\n> + 'postgres', qq(SELECT MAX(i), COUNT(*) FROM tbl)),\r\n> + \"1|3\",\r\n> + \"subscription sub replicated data\");\r\n> +\r\n> \r\n> Can we store the result to a local variable to check? I think it's more consistent\r\n> with other tap tests.\r\nAgreed. Fixed.\r\n\r\n\r\nAttached the v27. Kindly review the patch.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Mon, 7 Mar 2022 03:04:14 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Monday, March 7, 2022 12:01 PM Shi, Yu/侍 雨 <shiy.fnst@fujitsu.com> wrote:\r\n> On Wed, Mar 2, 2022 5:39 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > Attached an updated patch v26.\r\n> >\r\n> \r\n> Thanks for your patch. 
A comment on the document.\r\nHi, thank you for checking my patch !\r\n\r\n\r\n> @@ -7771,6 +7771,16 @@ SCRAM-SHA-256$<replaceable>&lt;iteration\r\n> count&gt;</replaceable>:<replaceable>&l\r\n> \r\n> <row>\r\n> <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n> + <structfield>subdisableonerr</structfield> <type>bool</type>\r\n> + </para>\r\n> + <para>\r\n> + If true, the subscription will be disabled if one of its workers\r\n> + detects an error\r\n> + </para></entry>\r\n> + </row>\r\n> +\r\n> + <row>\r\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n> <structfield>subconninfo</structfield> <type>text</type>\r\n> </para>\r\n> <para>\r\n> \r\n> The document for \"subdisableonerr\" option is placed after \"The following\r\n> parameters control what happens during subscription creation: \". I think it\r\n> should be placed after \"The following parameters control the subscription's\r\n> replication behavior after it has been created: \", right?\r\nAddressed your comment for create_subscription.sgml\r\n(not for catalogs.sgml).\r\n\r\nAttached an updated patch v28.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Mon, 7 Mar 2022 05:25:20 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Mon, Mar 7, 2022 at 4:55 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Mar 4, 2022 at 5:55 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > ---\n> > + /*\n> > + * First, ensure that we log the error message so\n> > that it won't be\n> > + * lost if some other internal error occurs in the\n> > following code.\n> > + * Then, abort the current transaction and send the\n> > stats message of\n> > + * the table synchronization failure in an idle state.\n> > + */\n> > + HOLD_INTERRUPTS();\n> > + EmitErrorReport();\n> > + 
FlushErrorState();\n> > + AbortOutOfAnyTransaction();\n> > + RESUME_INTERRUPTS();\n> > + pgstat_report_subscription_error(MySubscription->oid, false);\n> > +\n> > + if (MySubscription->disableonerr)\n> > + {\n> > + DisableSubscriptionOnError();\n> > + proc_exit(0);\n> > + }\n> > +\n> > + PG_RE_THROW();\n> >\n> > If the disableonerr is false, the same error is reported twice. Also,\n> > the code in PG_CATCH() in both start_apply() and start_table_sync()\n> > are almost the same. Can we create a common function to do post-error\n> > processing?\n> >\n> > The worker should exit with return code 1.\n> >\n> > I've attached a fixup patch for changes to worker.c for your\n> > reference. Feel free to adopt the changes.\n>\n> The way that common function is implemented required removal of the\n> existing PG_RE_THROW logic, which in turn was only possible using\n> special knowledge that this just happens to be the last try/catch\n> block for the apply worker.\n>\n\nI think we should re_throw the error in case we have not handled it by\ndisabling the subscription (in which case we can exit with success\ncode (0)).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 7 Mar 2022 14:14:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Monday, March 7, 2022 5:45 PM Amit Kaila <amit.kapila16@gmail.com> wrote:\r\n> On Mon, Mar 7, 2022 at 4:55 AM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Fri, Mar 4, 2022 at 5:55 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > >\r\n> > > ---\r\n> > > + /*\r\n> > > + * First, ensure that we log the error message so\r\n> > > that it won't be\r\n> > > + * lost if some other internal error occurs in the\r\n> > > following code.\r\n> > > + * Then, abort the current transaction and send the\r\n> > > stats message of\r\n> > > + * the table synchronization 
failure in an idle state.\r\n> > > + */\r\n> > > + HOLD_INTERRUPTS();\r\n> > > + EmitErrorReport();\r\n> > > + FlushErrorState();\r\n> > > + AbortOutOfAnyTransaction();\r\n> > > + RESUME_INTERRUPTS();\r\n> > > +\r\n> > > +\r\n> > > pgstat_report_subscription_error(MySubscription->oid, false);\r\n> > > +\r\n> > > + if (MySubscription->disableonerr)\r\n> > > + {\r\n> > > + DisableSubscriptionOnError();\r\n> > > + proc_exit(0);\r\n> > > + }\r\n> > > +\r\n> > > + PG_RE_THROW();\r\n> > >\r\n> > > If the disableonerr is false, the same error is reported twice.\r\n> > > Also, the code in PG_CATCH() in both start_apply() and\r\n> > > start_table_sync() are almost the same. Can we create a common\r\n> > > function to do post-error processing?\r\n> > >\r\n> > > The worker should exit with return code 1.\r\n> > >\r\n> > > I've attached a fixup patch for changes to worker.c for your\r\n> > > reference. Feel free to adopt the changes.\r\n> >\r\n> > The way that common function is implemented required removal of the\r\n> > existing PG_RE_THROW logic, which in turn was only possible using\r\n> > special knowledge that this just happens to be the last try/catch\r\n> > block for the apply worker.\r\n> >\r\n> \r\n> I think we should re_throw the error in case we have not handled it by disabling\r\n> the subscription (in which case we can exit with success code (0)).\r\nAgreed. Fixed the patch so that it uses re_throw.\r\n\r\nAnother point I changed from v28 is the order\r\nin which AbortOutOfAnyTransaction and FlushErrorState are called,\r\nwhich is now more aligned with other places.\r\n\r\nKindly check the attached v29.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Mon, 7 Mar 2022 09:37:09 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "Please find below some review comments for v29.\n\n======\n\n1. 
src/backend/replication/logical/worker.c - worker_post_error_processing\n\n+/*\n+ * Abort and cleanup the current transaction, then do post-error processing.\n+ * This function must be called in a PG_CATCH() block.\n+ */\n+static void\n+worker_post_error_processing(void)\n\nThe function comment and function name are too vague/generic. I guess\nthis is a hang-over from Sawada-san's proposed patch, but now since\nthis is only called when disabling the subscription so both the\ncomment and the function name should say that's what it is doing...\n\ne.g. rename to DisableSubscriptionOnError() or something similar.\n\n~~~\n\n2. src/backend/replication/logical/worker.c - worker_post_error_processing\n\n+ /* Notify the subscription has been disabled */\n+ ereport(LOG,\n+ errmsg(\"logical replication subscription \\\"%s\\\" has been be disabled\ndue to an error\",\n+ MySubscription->name));\n\n proc_exit(0);\n }\n\nI know this is common code, but IMO it would be better to do the\nproc_exit(0); from the caller in the PG_CATCH. Then I think the code\nwill be much easier to read the throw/exit logic, rather than now\nwhere it is just calling some function that never returns...\n\nAlternatively, if you want the code how it is, then the function name\nshould give some hint that it is never going to return - e.g.\nDisableSubscriptionOnErrorAndExit)\n\n~~~\n\n3. src/backend/replication/logical/worker.c - start_table_sync\n\n+ {\n+ /*\n+ * Abort the current transaction so that we send the stats message\n+ * in an idle state.\n+ */\n+ AbortOutOfAnyTransaction();\n+\n+ /* Report the worker failed during table synchronization */\n+ pgstat_report_subscription_error(MySubscription->oid, false);\n+\n+ PG_RE_THROW();\n+ }\n\n(This is a repeat of a previous comment from [1] comment #2)\n\nI felt the separation of those 2 statements and comments makes the\ncode less clean than it could/should be. 
IMO they should be grouped\ntogether.\n\nSUGGESTED\n\n/*\n* Report the worker failed during table synchronization. Abort the\n* current transaction so that the stats message is sent in an idle\n* state.\n*/\nAbortOutOfAnyTransaction();\npgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());\n\n~~~\n\n4. src/backend/replication/logical/worker.c - start_apply\n\n+ {\n+ /*\n+ * Abort the current transaction so that we send the stats message\n+ * in an idle state.\n+ */\n+ AbortOutOfAnyTransaction();\n+\n+ /* Report the worker failed while applying changes */\n+ pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());\n+\n+ PG_RE_THROW();\n+ }\n\n(same as #3 but comment says \"while applying changes\")\n\nSUGGESTED\n\n/*\n* Report the worker failed while applying changing. Abort the current\n* transaction so that the stats message is sent in an idle state.\n*/\nAbortOutOfAnyTransaction();\npgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPucrizJpqhSyD7dKj1yRkNMskqmiekD_RRqYpdDdusMRQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 8 Mar 2022 15:07:29 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tue, Mar 8, 2022 at 9:37 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Please find below some review comments for v29.\n>\n> ======\n>\n> 1. src/backend/replication/logical/worker.c - worker_post_error_processing\n>\n> +/*\n> + * Abort and cleanup the current transaction, then do post-error processing.\n> + * This function must be called in a PG_CATCH() block.\n> + */\n> +static void\n> +worker_post_error_processing(void)\n>\n> The function comment and function name are too vague/generic. 
I guess\n> this is a hang-over from Sawada-san's proposed patch, but now since\n> this is only called when disabling the subscription so both the\n> comment and the function name should say that's what it is doing...\n>\n> e.g. rename to DisableSubscriptionOnError() or something similar.\n>\n> ~~~\n>\n> 2. src/backend/replication/logical/worker.c - worker_post_error_processing\n>\n> + /* Notify the subscription has been disabled */\n> + ereport(LOG,\n> + errmsg(\"logical replication subscription \\\"%s\\\" has been be disabled\n> due to an error\",\n> + MySubscription->name));\n>\n> proc_exit(0);\n> }\n>\n> I know this is common code, but IMO it would be better to do the\n> proc_exit(0); from the caller in the PG_CATCH. Then I think the code\n> will be much easier to read the throw/exit logic, rather than now\n> where it is just calling some function that never returns...\n>\n> Alternatively, if you want the code how it is, then the function name\n> should give some hint that it is never going to return - e.g.\n> DisableSubscriptionOnErrorAndExit)\n>\n\nI think we are already in error so maybe it is better to name it as\nDisableSubscriptionAndExit.\n\nFew other comments:\n=================\n1.\nDisableSubscription()\n{\n..\n+\n+ LockSharedObject(SubscriptionRelationId, subid, 0, AccessExclusiveLock);\n\nWhy do we need AccessExclusiveLock here? The Alter/Drop Subscription\ntakes AccessExclusiveLock, so AccessShareLock should be sufficient\nunless we have a reason to use AccessExclusiveLock lock. The other\nsimilar usages in this file (pg_subscription.c) also take\nAccessShareLock.\n\n2. Shall we mention this feature in conflict handling docs [1]:\nNow:\nTo skip the transaction, the subscription needs to be disabled\ntemporarily by ALTER SUBSCRIPTION ... DISABLE first.\n\nAfter:\nTo skip the transaction, the subscription needs to be disabled\ntemporarily by ALTER SUBSCRIPTION ... 
DISABLE first or alternatively,\nthe subscription can be used with the disable_on_error option.\n\nFeel free to use something on the above lines, if you agree.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 8 Mar 2022 11:22:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tuesday, March 8, 2022 2:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Mar 8, 2022 at 9:37 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> >\r\n> > Please find below some review comments for v29.\r\n> >\r\n> > ======\r\n> >\r\n> > 1. src/backend/replication/logical/worker.c -\r\n> > worker_post_error_processing\r\n> >\r\n> > +/*\r\n> > + * Abort and cleanup the current transaction, then do post-error processing.\r\n> > + * This function must be called in a PG_CATCH() block.\r\n> > + */\r\n> > +static void\r\n> > +worker_post_error_processing(void)\r\n> >\r\n> > The function comment and function name are too vague/generic. I guess\r\n> > this is a hang-over from Sawada-san's proposed patch, but now since\r\n> > this is only called when disabling the subscription so both the\r\n> > comment and the function name should say that's what it is doing...\r\n> >\r\n> > e.g. rename to DisableSubscriptionOnError() or something similar.\r\n> >\r\n> > ~~~\r\n> >\r\n> > 2. src/backend/replication/logical/worker.c -\r\n> > worker_post_error_processing\r\n> >\r\n> > + /* Notify the subscription has been disabled */ ereport(LOG,\r\n> > + errmsg(\"logical replication subscription \\\"%s\\\" has been be disabled\r\n> > due to an error\",\r\n> > + MySubscription->name));\r\n> >\r\n> > proc_exit(0);\r\n> > }\r\n> >\r\n> > I know this is common code, but IMO it would be better to do the\r\n> > proc_exit(0); from the caller in the PG_CATCH. 
Then I think the code\r\n> > will be much easier to read the throw/exit logic, rather than now\r\n> > where it is just calling some function that never returns...\r\n> >\r\n> > Alternatively, if you want the code how it is, then the function name\r\n> > should give some hint that it is never going to return - e.g.\r\n> > DisableSubscriptionOnErrorAndExit)\r\n> >\r\n> \r\n> I think we are already in error so maybe it is better to name it as\r\n> DisableSubscriptionAndExit.\r\nOK. Renamed.\r\n\r\n\r\n \r\n> Few other comments:\r\n> =================\r\n> 1.\r\n> DisableSubscription()\r\n> {\r\n> ..\r\n> +\r\n> + LockSharedObject(SubscriptionRelationId, subid, 0,\r\n> + AccessExclusiveLock);\r\n> \r\n> Why do we need AccessExclusiveLock here? The Alter/Drop Subscription\r\n> takes AccessExclusiveLock, so AccessShareLock should be sufficient unless\r\n> we have a reason to use AccessExclusiveLock lock. The other similar usages in\r\n> this file (pg_subscription.c) also take AccessShareLock.\r\nFixed.\r\n\r\n \r\n> 2. Shall we mention this feature in conflict handling docs [1]:\r\n> Now:\r\n> To skip the transaction, the subscription needs to be disabled temporarily by\r\n> ALTER SUBSCRIPTION ... DISABLE first.\r\n> \r\n> After:\r\n> To skip the transaction, the subscription needs to be disabled temporarily by\r\n> ALTER SUBSCRIPTION ... DISABLE first or alternatively, the subscription can\r\n> be used with the disable_on_error option.\r\n> \r\n> Feel free to use something on the above lines, if you agree.\r\nAgreed. Fixed.\r\n\r\nAt the same time, the attached v30 has incorporated\r\nsome rebase results of recent commit(d3e8368)\r\nso that start_table_sync allocates the origin names\r\nin long-lived context. 
Accordingly, I modified\r\nsome comments on this function.\r\n\r\nI also unified and condensed the comments for sending stats in\r\nstart_table_sync and start_apply, which Peter Smith pointed out\r\nin [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/CAHut%2BPs3b8HjsVyo-Aygtnxbw1PiVOC9nvrN6dKxYtS4C8s%2Bgw%40mail.gmail.com\r\n\r\nKindly have a look at v30.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Tue, 8 Mar 2022 08:07:35 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tuesday, March 8, 2022 1:07 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Please find below some review comments for v29.\r\nThank you for your comments !\r\n\r\n\r\n \r\n> ======\r\n> \r\n> 1. src/backend/replication/logical/worker.c - worker_post_error_processing\r\n> \r\n> +/*\r\n> + * Abort and cleanup the current transaction, then do post-error processing.\r\n> + * This function must be called in a PG_CATCH() block.\r\n> + */\r\n> +static void\r\n> +worker_post_error_processing(void)\r\n> \r\n> The function comment and function name are too vague/generic. I guess this is\r\n> a hang-over from Sawada-san's proposed patch, but now since this is only\r\n> called when disabling the subscription so both the comment and the function\r\n> name should say that's what it is doing...\r\n> \r\n> e.g. rename to DisableSubscriptionOnError() or something similar.\r\nFixed the comments and the function name in v30 shared in [1].\r\n\r\n\r\n\r\n\r\n> ~~~\r\n> \r\n> 2. 
src/backend/replication/logical/worker.c - worker_post_error_processing\r\n> \r\n> + /* Notify the subscription has been disabled */ ereport(LOG,\r\n> + errmsg(\"logical replication subscription \\\"%s\\\" has been be disabled\r\n> due to an error\",\r\n> + MySubscription->name));\r\n> \r\n> proc_exit(0);\r\n> }\r\n> \r\n> I know this is common code, but IMO it would be better to do the proc_exit(0);\r\n> from the caller in the PG_CATCH. Then I think the code will be much easier to\r\n> read the throw/exit logic, rather than now where it is just calling some function\r\n> that never returns...\r\n> \r\n> Alternatively, if you want the code how it is, then the function name should give\r\n> some hint that it is never going to return - e.g.\r\n> DisableSubscriptionOnErrorAndExit)\r\nI renamed it to DisableSubscriptionAndExit in the end\r\naccording to the discussion.\r\n\r\n\r\n\r\n> ~~~\r\n> \r\n> 3. src/backend/replication/logical/worker.c - start_table_sync\r\n> \r\n> + {\r\n> + /*\r\n> + * Abort the current transaction so that we send the stats message\r\n> + * in an idle state.\r\n> + */\r\n> + AbortOutOfAnyTransaction();\r\n> +\r\n> + /* Report the worker failed during table synchronization */\r\n> + pgstat_report_subscription_error(MySubscription->oid, false);\r\n> +\r\n> + PG_RE_THROW();\r\n> + }\r\n> \r\n> (This is a repeat of a previous comment from [1] comment #2)\r\n> \r\n> I felt the separation of those 2 statements and comments makes the code less\r\n> clean than it could/should be. IMO they should be grouped together.\r\n> \r\n> SUGGESTED\r\n> \r\n> /*\r\n> * Report the worker failed during table synchronization. Abort the\r\n> * current transaction so that the stats message is sent in an idle\r\n> * state.\r\n> */\r\n> AbortOutOfAnyTransaction();\r\n> pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_work\r\n> er());\r\nFixed.\r\n\r\n\r\n\r\n> ~~~\r\n> \r\n> 4. 
src/backend/replication/logical/worker.c - start_apply\r\n> \r\n> + {\r\n> + /*\r\n> + * Abort the current transaction so that we send the stats message\r\n> + * in an idle state.\r\n> + */\r\n> + AbortOutOfAnyTransaction();\r\n> +\r\n> + /* Report the worker failed while applying changes */\r\n> + pgstat_report_subscription_error(MySubscription->oid,\r\n> + !am_tablesync_worker());\r\n> +\r\n> + PG_RE_THROW();\r\n> + }\r\n> \r\n> (same as #3 but comment says \"while applying changes\")\r\n> \r\n> SUGGESTED\r\n> \r\n> /*\r\n> * Report the worker failed while applying changing. Abort the current\r\n> * transaction so that the stats message is sent in an idle state.\r\n> */\r\n> AbortOutOfAnyTransaction();\r\n> pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_work\r\n> er());\r\nFixed. I chose the wording \"while applying changes\", which you mentioned first\r\nand sounds more natural.\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373B74627C6BAF2F146D779ED099%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Tue, 8 Mar 2022 08:18:44 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tue, Mar 8, 2022 at 1:37 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Kindly have a look at v30.\n>\n\nReview comments:\n===============\n1.\n+ ereport(LOG,\n+ errmsg(\"logical replication subscription \\\"%s\\\" has been be disabled\ndue to an error\",\n\nTypo.\n/been be/been\n\n2. Is there a reason the patch doesn't allow workers to restart via\nmaybe_reread_subscription() when this new option is changed? If so,\nthen let's add a comment for the same. We currently seem to be\nrestarting the worker on any change via Alter Subscription. 
If we\ndecide to change it for this option as well then I think we need to\naccordingly update the current comment: \"Exit if any parameter that\naffects the remote connection was changed.\" to something like \"Exit if\nany parameter that affects the remote connection or a subscription\noption was changed...\"\n\n3.\n if (fout->remoteVersion >= 150000)\n- appendPQExpBufferStr(query, \" s.subtwophasestate\\n\");\n+ appendPQExpBufferStr(query, \" s.subtwophasestate,\\n\");\n else\n appendPQExpBuffer(query,\n- \" '%c' AS subtwophasestate\\n\",\n+ \" '%c' AS subtwophasestate,\\n\",\n LOGICALREP_TWOPHASE_STATE_DISABLED);\n\n+ if (fout->remoteVersion >= 150000)\n+ appendPQExpBuffer(query, \" s.subdisableonerr\\n\");\n+ else\n+ appendPQExpBuffer(query,\n+ \" false AS subdisableonerr\\n\");\n\nIt is better to combine these parameters. I see there is a similar\ncoding pattern for 14 but I think that is not required.\n\n4.\n+$node_subscriber->safe_psql('postgres', qq(ALTER SUBSCRIPTION sub ENABLE));\n+\n+# Wait for the data to replicate.\n+$node_subscriber->poll_query_until(\n+ 'postgres', qq(\n+SELECT COUNT(1) = 1 FROM pg_catalog.pg_subscription_rel sr\n+WHERE sr.srsubstate IN ('s', 'r') AND sr.srrelid = 'tbl'::regclass));\n\nSee other scripts like t/015_stream.pl and wait for data replication\nin the same way. I think there are two things to change: (a) In the\nabove query, we use NOT IN at other places (b) use\n$node_publisher->wait_for_catchup before this query.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 8 Mar 2022 18:53:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tue, Mar 8, 2022 at 5:07 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Kindly have a look at v30.\n\nThank you for updating the patch. 
Here are some comments:\n\n+ /*\n+ * Allocate the origin name in long-lived context for error context\n+ * message.\n+ */\n+ ReplicationOriginNameForTablesync(MySubscription->oid,\n+ MyLogicalRepWorker->relid,\n+ originname,\n+ sizeof(originname));\n+ apply_error_callback_arg.origin_name = MemoryContextStrdup(ApplyContext,\n+ originname);\n\nI think it's better to set apply_error_callback_arg.origin_name in the\ncaller rather than in start_table_sync(). Apply workers set\napply_error_callback_arg.origin_name there and it's not\nnecessary to do that in this function.\n\nEven if we want to do that, I think it's not necessary to pass\noriginname to start_table_sync(). It's a local variable and used only\nto temporarily store the tablesync worker's origin name.\n\n---\nIt might have already been discussed, but the worker disables the\nsubscription on an ERROR while it doesn't do so on a FATAL. Is that\nexpected or should we handle that too?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 9 Mar 2022 09:58:28 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wed, Mar 9, 2022 at 6:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> ---\n> It might have already been discussed, but the worker disables the\n> subscription on an ERROR while it doesn't do so on a FATAL. Is that\n> expected or should we handle that too?\n>\n\nI am not too sure about handling FATALs with this feature because this\nis mainly to aid in resolving conflicts due to various constraints. It\nmight be okay to retry in case of FATAL which is possibly due to some\nsystem resource error. OTOH, if we see that it will be good to disable\nfor a FATAL error as well then I think we can use\nPG_ENSURE_ERROR_CLEANUP construct. 
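For illustration, the shape would be roughly the following. This is an untested sketch: disable_subscription_on_exit is a made-up callback name (its signature follows before_shmem_exit()), and the callback would still need to start a transaction before it could update pg_subscription:

```c
/* hypothetical callback; runs on FATAL/proc_exit as well as on ERROR */
static void
disable_subscription_on_exit(int code, Datum arg)
{
	if (MySubscription->disableonerr)
		DisableSubscription(DatumGetObjectId(arg));
}

...

PG_ENSURE_ERROR_CLEANUP(disable_subscription_on_exit,
						ObjectIdGetDatum(MySubscription->oid));
{
	start_apply(origin_startpos);
}
PG_END_ENSURE_ERROR_CLEANUP(disable_subscription_on_exit,
							ObjectIdGetDatum(MySubscription->oid));
```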
What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 9 Mar 2022 09:06:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tue, Mar 8, 2022 at 6:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 8, 2022 at 1:37 PM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > Kindly have a look at v30.\n> >\n>\n> Review comments:\n> ===============\n>\nFew comments on test script:\n=======================\n1.\n+# This tests the uniqueness violation will cause the subscription\n+# to fail during initial synchronization and make it disabled.\n\n/This tests the/This tests that the\n\n2.\n+$node_publisher->safe_psql('postgres',\n+ qq(CREATE PUBLICATION pub FOR TABLE tbl));\n+$node_subscriber->safe_psql(\n+ 'postgres', qq(\n+CREATE SUBSCRIPTION sub\n+CONNECTION '$publisher_connstr'\n+PUBLICATION pub WITH (disable_on_error = true)));\n\nPlease check other test scripts like t/015_stream.pl or\nt/028_row_filter.pl and keep the indentation of these commands\nsimilar. It looks odd and inconsistent with other tests. Also, we can\nuse double-quotes instead of qq so as to be consistent with other\nscripts. Please check other similar places and make them consistent\nwith other test script files.\n\n3.\n+# Initial synchronization failure causes the subscription\n+# to be disabled.\n\nHere and in other places in test scripts, the comment lines seem too\nshort to me. Normally, we can keep it at the 80 char limit but this\nappears too short.\n\n4.\n+# Delete the data from the subscriber and recreate the unique index.\n+$node_subscriber->safe_psql(\n+ 'postgres', q(\n+DELETE FROM tbl;\n+CREATE UNIQUE INDEX tbl_unique ON tbl (i)));\n\nIn other tests, we are executing single statements via safe_psql. 
I\ndon't see a problem with this but also don't see a reason to deviate\nfrom the normal pattern.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 9 Mar 2022 09:59:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wed, Mar 9, 2022 at 12:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 9, 2022 at 6:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > ---\n> > It might have already been discussed but the worker disables the\n> > subscription on an error but doesn't work for a fatal. Is that\n> > expected or should we handle that too?\n> >\n>\n> I am not too sure about handling FATALs with this feature because this\n> is mainly to aid in resolving conflicts due to various constraints. It\n> might be okay to retry in case of FATAL which is possibly due to some\n> system resource error. OTOH, if we see that it will be good to disable\n> for a FATAL error as well then I think we can use\n> PG_ENSURE_ERROR_CLEANUP construct. What do you think?\n\nI think that since FATAL raised by logical replication workers (e.g.,\nterminated by DDL or out of memory etc?) 
is normally not a repeatable\nerror, it's reasonable to retry in this case.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 9 Mar 2022 14:52:01 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wed, Mar 9, 2022 at 11:22 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Mar 9, 2022 at 12:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Mar 9, 2022 at 6:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > ---\n> > > It might have already been discussed but the worker disables the\n> > > subscription on an error but doesn't work for a fatal. Is that\n> > > expected or should we handle that too?\n> > >\n> >\n> > I am not too sure about handling FATALs with this feature because this\n> > is mainly to aid in resolving conflicts due to various constraints. It\n> > might be okay to retry in case of FATAL which is possibly due to some\n> > system resource error. OTOH, if we see that it will be good to disable\n> > for a FATAL error as well then I think we can use\n> > PG_ENSURE_ERROR_CLEANUP construct. What do you think?\n>\n> I think that since FATAL raised by logical replication workers (e.g.,\n> terminated by DDL or out of memory etc?) 
is normally not a repeatable\n> error, it's reasonable to retry in this case.\n>\n\nYeah, I think we can add a comment in the code for this so that future\nreaders know that this has been done deliberately.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 9 Mar 2022 11:32:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wednesday, March 9, 2022 3:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Wed, Mar 9, 2022 at 11:22 AM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> >\r\n> > On Wed, Mar 9, 2022 at 12:37 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Wed, Mar 9, 2022 at 6:29 AM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > > >\r\n> > > > ---\r\n> > > > It might have already been discussed but the worker disables the\r\n> > > > subscription on an error but doesn't work for a fatal. Is that\r\n> > > > expected or should we handle that too?\r\n> > > >\r\n> > >\r\n> > > I am not too sure about handling FATALs with this feature because\r\n> > > this is mainly to aid in resolving conflicts due to various\r\n> > > constraints. It might be okay to retry in case of FATAL which is\r\n> > > possibly due to some system resource error. OTOH, if we see that it\r\n> > > will be good to disable for a FATAL error as well then I think we\r\n> > > can use PG_ENSURE_ERROR_CLEANUP construct. What do you think?\r\n> >\r\n> > I think that since FATAL raised by logical replication workers (e.g.,\r\n> > terminated by DDL or out of memory etc?) is normally not a repeatable\r\n> > error, it's reasonable to retry in this case.\r\n> >\r\n> \r\n> Yeah, I think we can add a comment in the code for this so that future readers\r\n> know that this has been done deliberately.\r\nOK. 
I've added some comments in the codes.\r\n\r\nThe v31 addressed other comments on hackers so far.\r\n(a) brush up the TAP test alignment\r\n(b) fix the place of apply_error_callback_arg.origin_name for table sync worker\r\n(c) modify maybe_reread_subscription to exit, when disable_on_error changes\r\n(d) improve getSubscriptions to combine some branches for v15\r\n\r\nKindly check the attached v31.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Wed, 9 Mar 2022 07:16:39 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wednesday, March 9, 2022 1:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Mar 8, 2022 at 6:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Tue, Mar 8, 2022 at 1:37 PM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com> wrote:\r\n> > >\r\n> > > Kindly have a look at v30.\r\n> > >\r\n> >\r\n> > Review comments:\r\n> > ===============\r\nThank you for reviewing !\r\n\r\n\r\n> Few comments on test script:\r\n> =======================\r\n> 1.\r\n> +# This tests the uniqueness violation will cause the subscription # to\r\n> +fail during initial synchronization and make it disabled.\r\n> \r\n> /This tests the/This tests that the\r\nFixed.\r\n\r\n\r\n> 2.\r\n> +$node_publisher->safe_psql('postgres',\r\n> + qq(CREATE PUBLICATION pub FOR TABLE tbl));\r\n> +$node_subscriber->safe_psql( 'postgres', qq( CREATE SUBSCRIPTION\r\n> sub\r\n> +CONNECTION '$publisher_connstr'\r\n> +PUBLICATION pub WITH (disable_on_error = true)));\r\n> \r\n> Please check other test scripts like t/015_stream.pl or t/028_row_filter.pl and\r\n> keep the indentation of these commands similar. It looks odd and inconsistent\r\n> with other tests. Also, we can use double-quotes instead of qq so as to be\r\n> consistent with other scripts. 
Please check other similar places and make\r\n> them consistent with other test script files.\r\nFixed the inconsistent indentations within each commands.\r\nAlso, replace the qq with double-quotes (except for the is()'s\r\n2nd argument, which is the aligned way to write the tests).\r\n\r\n\r\n\r\n> 3.\r\n> +# Initial synchronization failure causes the subscription # to be\r\n> +disabled.\r\n> \r\n> Here and in other places in test scripts, the comment lines seem too short to\r\n> me. Normally, we can keep it at the 80 char limit but this appears too short.\r\nFixed.\r\n\r\n\r\n> 4.\r\n> +# Delete the data from the subscriber and recreate the unique index.\r\n> +$node_subscriber->safe_psql(\r\n> + 'postgres', q(\r\n> +DELETE FROM tbl;\r\n> +CREATE UNIQUE INDEX tbl_unique ON tbl (i)));\r\n> \r\n> In other tests, we are executing single statements via safe_psql. I don't see a\r\n> problem with this but also don't see a reason to deviate from the normal\r\n> pattern.\r\nFixed.\r\n\r\n\r\nAt the same time, I fixed one comment\r\nwhere I should write \"subscriber\", not \"sub\",\r\nsince in the entire test file, I express the subscriber\r\nby using the former.\r\n\r\n\r\nThe new patch v31 is shared in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373824855A6C4D2178027A0ED0A9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Wed, 9 Mar 2022 07:20:20 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wednesday, March 9, 2022 9:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Tue, Mar 8, 2022 at 5:07 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > Kindly have a look at v30.\r\n> \r\n> Thank you for updating the patch. 
Here are some comments:\r\nHi, thank you for your review !\r\n\r\n\r\n> + /*\r\n> + * Allocate the origin name in long-lived context for error context\r\n> + * message.\r\n> + */\r\n> + ReplicationOriginNameForTablesync(MySubscription->oid,\r\n> + MyLogicalRepWorker->relid,\r\n> + originname,\r\n> + sizeof(originname));\r\n> + apply_error_callback_arg.origin_name =\r\n> MemoryContextStrdup(ApplyContext,\r\n> +\r\n> + originname);\r\n> \r\n> I think it's better to set apply_error_callback_arg.origin_name in the caller\r\n> rather than in start_table_sync(). Apply workers set\r\n> apply_error_callback_arg.origin_name there and it's not necessarily necessary\r\n> to do that in this function.\r\nOK. I made this origin_name logic back to the level of ApplyWorkerMain.\r\n\r\n\r\nThe new patch v31 is shared in [1].\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373824855A6C4D2178027A0ED0A9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regardfs,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Wed, 9 Mar 2022 07:24:53 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tuesday, March 8, 2022 10:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Mar 8, 2022 at 1:37 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > Kindly have a look at v30.\r\n> >\r\n> \r\n> Review comments:\r\nThank you for checking !\r\n\r\n\r\n> ===============\r\n> 1.\r\n> + ereport(LOG,\r\n> + errmsg(\"logical replication subscription \\\"%s\\\" has been be disabled\r\n> due to an error\",\r\n> \r\n> Typo.\r\n> /been be/been\r\nFixed.\r\n\r\n \r\n> 2. Is there a reason the patch doesn't allow workers to restart via\r\n> maybe_reread_subscription() when this new option is changed, if so, then let's\r\n> add a comment for the same? 
We currently seem to be restarting the worker on\r\n> any change via Alter Subscription. If we decide to change it for this option as\r\n> well then I think we need to accordingly update the current comment: \"Exit if\r\n> any parameter that affects the remote connection was changed.\" to something\r\n> like \"Exit if any parameter that affects the remote connection or a subscription\r\n> option was changed...\"\r\nI thought it's ok without the change at the beginning, but I was wrong.\r\nTo make this new option aligned with others, I should add one check\r\nfor this feature. Fixed.\r\n\r\n\r\n> 3.\r\n> if (fout->remoteVersion >= 150000)\r\n> - appendPQExpBufferStr(query, \" s.subtwophasestate\\n\");\r\n> + appendPQExpBufferStr(query, \" s.subtwophasestate,\\n\");\r\n> else\r\n> appendPQExpBuffer(query,\r\n> - \" '%c' AS subtwophasestate\\n\",\r\n> + \" '%c' AS subtwophasestate,\\n\",\r\n> LOGICALREP_TWOPHASE_STATE_DISABLED);\r\n> \r\n> + if (fout->remoteVersion >= 150000)\r\n> + appendPQExpBuffer(query, \" s.subdisableonerr\\n\"); else\r\n> + appendPQExpBuffer(query,\r\n> + \" false AS subdisableonerr\\n\");\r\n> \r\n> It is better to combine these parameters. I see there is a similar coding pattern\r\n> for 14 but I think that is not required.\r\nFixed and combined them together.\r\n\r\n \r\n> 4.\r\n> +$node_subscriber->safe_psql('postgres', qq(ALTER SUBSCRIPTION sub\r\n> +ENABLE));\r\n> +\r\n> +# Wait for the data to replicate.\r\n> +$node_subscriber->poll_query_until(\r\n> + 'postgres', qq(\r\n> +SELECT COUNT(1) = 1 FROM pg_catalog.pg_subscription_rel sr WHERE\r\n> +sr.srsubstate IN ('s', 'r') AND sr.srrelid = 'tbl'::regclass));\r\n> \r\n> See other scripts like t/015_stream.pl and wait for data replication in the same\r\n> way. 
I think there are two things to change: (a) In the above query, we use NOT\r\n> IN at other places (b) use $node_publisher->wait_for_catchup before this\r\n> query.\r\nFixed.\r\n\r\nThe new patch is shared in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373824855A6C4D2178027A0ED0A9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Wed, 9 Mar 2022 07:33:22 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wed, Mar 9, 2022 at 4:33 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Tuesday, March 8, 2022 10:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Tue, Mar 8, 2022 at 1:37 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n>\n>\n> > 2. Is there a reason the patch doesn't allow workers to restart via\n> > maybe_reread_subscription() when this new option is changed, if so, then let's\n> > add a comment for the same? We currently seem to be restarting the worker on\n> > any change via Alter Subscription. If we decide to change it for this option as\n> > well then I think we need to accordingly update the current comment: \"Exit if\n> > any parameter that affects the remote connection was changed.\" to something\n> > like \"Exit if any parameter that affects the remote connection or a subscription\n> > option was changed...\"\n> I thought it's ok without the change at the beginning, but I was wrong.\n> To make this new option aligned with others, I should add one check\n> for this feature. Fixed.\n\nWhy do we need to restart the apply worker when disable_on_error is\nchanged? It doesn't affect the remote connection at all. 
I think it\ncan be changed without restarting like synchronous_commit option.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 9 Mar 2022 17:50:49 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wed, Mar 9, 2022 at 2:21 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Mar 9, 2022 at 4:33 PM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > On Tuesday, March 8, 2022 10:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Tue, Mar 8, 2022 at 1:37 PM osumi.takamichi@fujitsu.com\n> > > <osumi.takamichi@fujitsu.com> wrote:\n> > > >\n> >\n> >\n> > > 2. Is there a reason the patch doesn't allow workers to restart via\n> > > maybe_reread_subscription() when this new option is changed, if so, then let's\n> > > add a comment for the same? We currently seem to be restarting the worker on\n> > > any change via Alter Subscription. If we decide to change it for this option as\n> > > well then I think we need to accordingly update the current comment: \"Exit if\n> > > any parameter that affects the remote connection was changed.\" to something\n> > > like \"Exit if any parameter that affects the remote connection or a subscription\n> > > option was changed...\"\n> > I thought it's ok without the change at the beginning, but I was wrong.\n> > To make this new option aligned with others, I should add one check\n> > for this feature. Fixed.\n>\n> Why do we need to restart the apply worker when disable_on_error is\n> changed? It doesn't affect the remote connection at all. 
I think it\n> can be changed without restarting like synchronous_commit option.\n>\n\noh right, I thought that how will we update its value in\nMySubscription after a change but as we re-read the pg_subscription\ntable for the current subscription and update MySubscription, I feel\nwe don't need to restart it. I haven't tested it but it should work\nwithout a restart.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 9 Mar 2022 16:52:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wednesday, March 9, 2022 8:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Wed, Mar 9, 2022 at 2:21 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> >\r\n> > On Wed, Mar 9, 2022 at 4:33 PM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com> wrote:\r\n> > >\r\n> > > On Tuesday, March 8, 2022 10:23 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > > > On Tue, Mar 8, 2022 at 1:37 PM osumi.takamichi@fujitsu.com\r\n> > > > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > > >\r\n> > >\r\n> > >\r\n> > > > 2. Is there a reason the patch doesn't allow workers to restart\r\n> > > > via\r\n> > > > maybe_reread_subscription() when this new option is changed, if\r\n> > > > so, then let's add a comment for the same? We currently seem to be\r\n> > > > restarting the worker on any change via Alter Subscription. 
If we\r\n> > > > decide to change it for this option as well then I think we need\r\n> > > > to accordingly update the current comment: \"Exit if any parameter\r\n> > > > that affects the remote connection was changed.\" to something like\r\n> > > > \"Exit if any parameter that affects the remote connection or a\r\n> subscription option was changed...\"\r\n> > > I thought it's ok without the change at the beginning, but I was wrong.\r\n> > > To make this new option aligned with others, I should add one check\r\n> > > for this feature. Fixed.\r\n> >\r\n> > Why do we need to restart the apply worker when disable_on_error is\r\n> > changed? It doesn't affect the remote connection at all. I think it\r\n> > can be changed without restarting like synchronous_commit option.\r\n> >\r\n> \r\n> oh right, I thought that how will we update its value in MySubscription after a\r\n> change but as we re-read the pg_subscription table for the current\r\n> subscription and update MySubscription, I feel we don't need to restart it. I\r\n> haven't tested it but it should work without a restart.\r\nHi, attached v32 removed my additional code for maybe_reread_subscription.\r\n\r\nAlso, I judged that we don't need to add a comment for this feature in this patch.\r\nIt's because we can interpret this discussion from existing comments and codes.\r\n(1) \"Reread subscription info if needed. 
Most changes will be exit.\"\r\n\tThere are some cases we don't exit.\r\n(2) Like \"Exit if any parameter that affects the remote connection was changed.\",\r\n\treaders can understand no exit case matches the disable_on_error option change.\r\n\r\nKindly review the v32.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Wed, 9 Mar 2022 14:27:50 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Wed, Mar 9, 2022 at 7:57 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Hi, attached v32 removed my additional code for maybe_reread_subscription.\n>\n\nThanks, the patch looks good to me. I have made minor edits in the\nattached. I am planning to commit this early next week (Monday) unless\nthere are any other major comments.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 10 Mar 2022 12:04:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Thu, Mar 10, 2022 at 12:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 9, 2022 at 7:57 PM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > Hi, attached v32 removed my additional code for maybe_reread_subscription.\n> >\n>\n> Thanks, the patch looks good to me. I have made minor edits in the\n> attached. 
I am planning to commit this early next week (Monday) unless\n> there are any other major comments.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 14 Mar 2022 16:19:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Monday, March 14, 2022 7:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, Mar 10, 2022 at 12:04 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Wed, Mar 9, 2022 at 7:57 PM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com> wrote:\r\n> > >\r\n> > > Hi, attached v32 removed my additional code for\r\n> maybe_reread_subscription.\r\n> > >\r\n> >\r\n> > Thanks, the patch looks good to me. I have made minor edits in the\r\n> > attached. I am planning to commit this early next week (Monday) unless\r\n> > there are any other major comments.\r\n> >\r\n> \r\n> Pushed.\r\nThank you so much !\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Mon, 14 Mar 2022 11:53:24 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "My compiler is worried that syncslotname may be used uninitialized in\nstart_table_sync(). 
The attached patch seems to silence this warning.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 14 Mar 2022 16:04:24 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optionally automatically disable logical replication\n subscriptions on error" }, { "msg_contents": "On Tuesday, March 15, 2022 8:04 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> My compiler is worried that syncslotname may be used uninitialized in\n> start_table_sync(). The attached patch seems to silence this warning.\nThank you for your reporting !\n\nYour fix looks good to me.\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n", "msg_date": "Tue, 15 Mar 2022 02:01:14 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Optionally automatically disable logical replication\n subscriptions on error" } ]
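The thread above lands on the semantics that got committed: with `disable_on_error = true`, an ordinary apply ERROR (e.g. a unique-constraint conflict) disables the subscription, while a FATAL raised by a logical replication worker (terminated by DDL, out of memory, etc.) is still retried, since such errors are normally not repeatable. A minimal runnable model of that decision follows — the names and structure are illustrative stand-ins, not the committed C code in worker.c:

```python
# Illustrative model (not the committed C implementation) of the
# retry-vs-disable decision discussed in the thread: with
# disable_on_error = true an ordinary apply ERROR disables the
# subscription, while a FATAL is retried, since worker FATALs are
# normally not repeatable errors.

def on_apply_failure(severity: str, disable_on_error: bool) -> str:
    """Return the action taken after a logical replication apply failure."""
    if severity == "ERROR" and disable_on_error:
        # Conflicts such as unique violations will not resolve on their
        # own, so stop the endless restart/error loop.
        return "disable-subscription"
    # FATAL (or ERROR without the option): the launcher restarts the
    # worker and the failed transaction is applied again.
    return "retry"


for case in [("ERROR", True), ("ERROR", False), ("FATAL", True)]:
    print(case, "->", on_apply_failure(*case))
```

As the TAP test quoted in the thread then does, the operator resolves the conflict on the subscriber and resumes replication with `ALTER SUBSCRIPTION sub ENABLE`.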
[ { "msg_contents": "Hi,\n\nWhile another long thread discusses the situation of old_snapshot_threshold,\nI believe we can improve procarray.c by avoiding calling\nMaintainOldSnapshotTimeMapping (src/backend/utils/time/snapmgr.c).\n\nThere's a very explicit comment there, which says (line 1866):\n\"Never call this function when old snapshot checking is disabled.\"\n\nWell, assert should never be used to validate a condition that certainly\noccurs at runtime.\n\nSince old_snapshot_threshold is -1, it is disabled, so\nMaintainOldSnapshotTimeMapping doesn't need to be run, right?\n\nregards,\nRanier Vilela", "msg_date": "Thu, 17 Jun 2021 21:27:15 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Avoid call MaintainOldSnapshotTimeMapping, if old_snapshot_threshold\n is disabled." }, { "msg_contents": "Hi,\n\nOn 2021-06-17 21:27:15 -0300, Ranier Vilela wrote:\n> While another long thread discusses the situation of old_snapshot_threshold,\n> I believe we can improve procarray.c by avoiding calling\n> MaintainOldSnapshotTimeMapping (src/backend/utils/time/snapmgr.c).\n> \n> There's a very explicit comment there, which says (line 1866):\n> \"Never call this function when old snapshot checking is disabled.\"\n> \n> Well, assert should never be used to validate a condition that certainly\n> occurs at runtime.\n\nI don't see how it can happen at runtime currently?\n\n> Since old_snapshot_threshold is -1, it is disabled, so\n> MaintainOldSnapshotTimeMapping doesn't need to be run, right?\n\nIt *isn't* run, the caller checks OldSnapshotThresholdActive() first.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Jun 2021 18:08:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Avoid call MaintainOldSnapshotTimeMapping, if\n old_snapshot_threshold is disabled." }, { "msg_contents": "Em qui., 17 de jun. 
de 2021 às 22:08, Andres Freund <andres@anarazel.de>\nescreveu:\n\n> Hi,\n>\n> On 2021-06-17 21:27:15 -0300, Ranier Vilela wrote:\n> > While another long thread discusses the situation of\n> old_snapshot_threshold,\n> > I believe we can improve procarray.c by avoiding calling\n> > MaintainOldSnapshotTimeMapping (src/backend/utils/time/snapmgr.c).\n> >\n> > There's a very explicit comment there, which says (line 1866):\n> > \"Never call this function when old snapshot checking is disabled.\"\n> >\n> > Well, assert should never be used to validate a condition that certainly\n> > occurs at runtime.\n>\n> I don't see how it can happen at runtime currently?\n>\n> > Since old_snapshot_threshold is -1, it is disabled, so\n> > MaintainOldSnapshotTimeMapping doesn't need to be run, right?\n>\n> It *isn't* run, the caller checks OldSnapshotThresholdActive() first.\n>\nTrue. My mistake.\nI didn't check GetSnapshotDataInitOldSnapshot correctly.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 17 Jun 2021 22:14:45 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid call MaintainOldSnapshotTimeMapping,\n if old_snapshot_threshold\n is disabled." } ]
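Andres's reply pins down the pattern involved: the Assert inside `MaintainOldSnapshotTimeMapping()` documents a caller contract, and the cheap `OldSnapshotThresholdActive()` check at the call site is what keeps the function from ever running while the feature is disabled. A small runnable sketch of that guard-plus-assert pattern — Python stand-ins named after the C functions, not the actual snapmgr.c/procarray.c code:

```python
# Sketch of the guard-plus-assert pattern: the assertion inside the
# callee states the caller contract, and the cheap guard at the call
# site is why the callee is never reached while the feature is disabled.

old_snapshot_threshold = -1  # -1 means the feature is disabled
maintain_calls = []

def old_snapshot_threshold_active() -> bool:
    return old_snapshot_threshold >= 0

def maintain_old_snapshot_time_mapping() -> None:
    # "Never call this function when old snapshot checking is disabled."
    # The assertion enforces that contract on callers; it is not a
    # runtime feature check.
    assert old_snapshot_threshold_active()
    maintain_calls.append(1)

def get_snapshot_data() -> None:
    # The caller checks the guard first, as GetSnapshotData() does via
    # OldSnapshotThresholdActive() before the maintenance call.
    if old_snapshot_threshold_active():
        maintain_old_snapshot_time_mapping()

get_snapshot_data()
print("maintenance calls while disabled:", len(maintain_calls))  # -> 0
```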
[ { "msg_contents": "Hi hackers!\n\nStarting from v13 pg_rewind can use restore_command if it lacks necessary WAL segments. And this is awesome for HA clusters with many nodes! Thanks to everyone who worked on the feature!\n\nHere's some feedback on how to make things even better.\n\nIf we run 'pg_rewind --restore-target-wal' there must be restore_command in config of target installation. But if the config is not within $PGDATA\\postgresql.conf pg_rewind cannot use it.\nIf we run postmaster with `-c config_file=/etc/postgresql/10/data/postgresql.conf`, we simply cannot use the feature. We solved the problem by putting config into PGDATA only during pg_rewind, but this does not seem like a very robust solution.\n\nMaybe we could add \"-C, --target-restore-command=COMMAND target WAL restore_command\\n\" as was proposed within earlier versions of the patch[0]? Or instruct pg_rewind to pass config to 'postgres -C restore_command' run?\n\nFrom my POV adding --target-restore-command is simplest way, I can extract corresponding portions from original patch.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/CAPpHfduUqKLr2CRpcpHcv1qjaz%2B-%2Bi9bOL2AOvdWSr954ti8Xw%40mail.gmail.com#1d4b372b5aa26f93af9ed1d5dd0693cd\n\n", "msg_date": "Fri, 18 Jun 2021 17:02:15 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Supply restore_command to pg_rewind via CLI argument" }, { "msg_contents": "Hi,\n\nOn Fri, Jun 18, 2021 at 5:42 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> If we run 'pg_rewind --restore-target-wal' there must be restore_command in config of target installation. But if the config is not within $PGDATA\\postgresql.conf pg_rewind cannot use it.\n> If we run postmaster with `-c config_file=/etc/postgresql/10/data/postgresql.conf`, we simply cannot use the feature. 
We solved the problem by putting config into PGDATA only during pg_rewind, but this does not seem like a very robust solution.\n>\n\nYeah, Michael was against it, while we had no good arguments, so\nAlexander removed it, IIRC. This example sounds reasonable to me. I\nalso recall some complaints from PostgresPro support folks, that it is\nsad to not have a cli option to pass restore_command. However, I just\nthought about another recent feature --- ensure clean shutdown, which\nis turned on by default. So you usually run Postgres with one config,\nbut pg_rewind may start it with another, although in single-user mode.\nIs it fine for you?\n\n>\n> Maybe we could add \"-C, --target-restore-command=COMMAND target WAL restore_command\\n\" as was proposed within earlier versions of the patch[0]? Or instruct pg_rewind to pass config to 'postgres -C restore_command' run?\n\nHm, adding --target-restore-command is the simplest way, sure, but\nforwarding something like '-c config_file=...' to postgres sounds\ninteresting too. Could it have any use case beside providing a\nrestore_command? I cannot imagine anything right now, but if any\nexist, then it could be a more universal approach.\n\n>\n> From my POV adding --target-restore-command is simplest way, I can extract corresponding portions from original patch.\n>\n\nI will have a look, maybe I even already have this patch separately. 
I\nremember that we were considering adding this option to PostgresPro,\nwhen we did a backport of this feature.\n\n\n--\nAlexey Kondratov\n\n\n", "msg_date": "Fri, 18 Jun 2021 22:06:53 +0300", "msg_from": "Alexey Kondratov <kondratov.aleksey@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Supply restore_command to pg_rewind via CLI argument" }, { "msg_contents": "On Fri, Jun 18, 2021 at 10:06 PM Alexey Kondratov\n<kondratov.aleksey@gmail.com> wrote:\n> On Fri, Jun 18, 2021 at 5:42 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> >\n> > If we run 'pg_rewind --restore-target-wal' there must be restore_command in config of target installation. But if the config is not within $PGDATA\\postgresql.conf pg_rewind cannot use it.\n> > If we run postmaster with `-c config_file=/etc/postgresql/10/data/postgresql.conf`, we simply cannot use the feature. We solved the problem by putting config into PGDATA only during pg_rewind, but this does not seem like a very robust solution.\n> >\n>\n> Yeah, Michael was against it, while we had no good arguments, so\n> Alexander removed it, IIRC. This example sounds reasonable to me. I\n> also recall some complaints from PostgresPro support folks, that it is\n> sad to not have a cli option to pass restore_command. However, I just\n> thought about another recent feature --- ensure clean shutdown, which\n> is turned on by default. So you usually run Postgres with one config,\n> but pg_rewind may start it with another, although in single-user mode.\n> Is it fine for you?\n>\n> >\n> > Maybe we could add \"-C, --target-restore-command=COMMAND target WAL restore_command\\n\" as was proposed within earlier versions of the patch[0]? Or instruct pg_rewind to pass config to 'postgres -C restore_command' run?\n>\n> Hm, adding --target-restore-command is the simplest way, sure, but\n> forwarding something like '-c config_file=...' to postgres sounds\n> interesting too. Could it have any use case beside providing a\n> restore_command? 
I cannot imagine anything right now, but if any\n> exist, then it could be a more universal approach.\n>\n> >\n> > From my POV adding --target-restore-command is simplest way, I can extract corresponding portions from original patch.\n> >\n>\n> I will have a look, maybe I even already have this patch separately. I\n> remember that we were considering adding this option to PostgresPro,\n> when we did a backport of this feature.\n>\n\nHere it is. I have slightly adapted the previous patch to the recent\npg_rewind changes. In this version -C does not conflict with -c, it\njust overrides it.\n\n\n-- \nAlexey Kondratov", "msg_date": "Tue, 29 Jun 2021 17:34:49 +0300", "msg_from": "Alexey Kondratov <kondratov.aleksey@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Supply restore_command to pg_rewind via CLI argument" }, { "msg_contents": "\n\n> On 29 Jun 2021, at 19:34, Alexey Kondratov <kondratov.aleksey@gmail.com> wrote:\n> \n> On Fri, Jun 18, 2021 at 10:06 PM Alexey Kondratov\n> <kondratov.aleksey@gmail.com> wrote:\n>> On Fri, Jun 18, 2021 at 5:42 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>>> \n>>> If we run 'pg_rewind --restore-target-wal' there must be restore_command in config of target installation. But if the config is not within $PGDATA\\postgresql.conf pg_rewind cannot use it.\n>>> If we run postmaster with `-c config_file=/etc/postgresql/10/data/postgresql.conf`, we simply cannot use the feature. We solved the problem by putting config into PGDATA only during pg_rewind, but this does not seem like a very robust solution.\n>>> \n>> \n>> Yeah, Michael was against it, while we had no good arguments, so\n>> Alexander removed it, IIRC. This example sounds reasonable to me. I\n>> also recall some complaints from PostgresPro support folks, that it is\n>> sad to not have a cli option to pass restore_command. However, I just\n>> thought about another recent feature --- ensure clean shutdown, which\n>> is turned on by default. 
So you usually run Postgres with one config,\n>> but pg_rewind may start it with another, although in single-user mode.\n>> Is it fine for you?\nWe rewind failovered node, so clean shutdown was not performed. But I do not see how it could help anyway.\nTo pass restore command we had to setup new config in PGDATA configured as standby, because either way we cannot set restore_command there.\n\n>>> Maybe we could add \"-C, --target-restore-command=COMMAND target WAL restore_command\\n\" as was proposed within earlier versions of the patch[0]? Or instruct pg_rewind to pass config to 'postgres -C restore_command' run?\n>> \n>> Hm, adding --target-restore-command is the simplest way, sure, but\n>> forwarding something like '-c config_file=...' to postgres sounds\n>> interesting too. Could it have any use case beside providing a\n>> restore_command? I cannot imagine anything right now, but if any\n>> exist, then it could be a more universal approach.\nI think --target-restore-command is the best solution right now.\n\n>>> From my POV adding --target-restore-command is simplest way, I can extract corresponding portions from original patch.\n>>> \n>> \n>> I will have a look, maybe I even already have this patch separately. I\n>> remember that we were considering adding this option to PostgresPro,\n>> when we did a backport of this feature.\n>> \n> \n> Here it is. I have slightly adapted the previous patch to the recent\n> pg_rewind changes. 
In this version -C does not conflict with -c, it\n> just overrides it.\n\nGreat, thanks!\n\nThere is a small bug\n+\t/*\n+\t * Take restore_command from the postgresql.conf only if it is not already\n+\t * provided as a command line option.\n+\t */\n+\tif (!restore_wal && restore_command == NULL)\n \t\treturn;\n\nI think we should use condition (!restore_wal || restore_command != NULL).\n\nBesides this patch looks good and is ready for committer IMV.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 27 Aug 2021 12:05:50 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: Supply restore_command to pg_rewind via CLI argument" }, { "msg_contents": "On Fri, Aug 27, 2021 at 10:05 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> There is a small bug\n> + /*\n> + * Take restore_command from the postgresql.conf only if it is not already\n> + * provided as a command line option.\n> + */\n> + if (!restore_wal && restore_command == NULL)\n> return;\n>\n> I think we should use condition (!restore_wal || restore_command != NULL).\n>\n\nYes, you are right, thanks. Attached is a fixed version. 
Tests were\npassing since PostgresNode->enable_restoring is adding restore_command\nto the postgresql.conf anyway.\n\n>\n> Besides this patch looks good and is ready for committer IMV.\n>\n\n\n-- \nAlexey Kondratov", "msg_date": "Fri, 27 Aug 2021 16:32:02 +0300", "msg_from": "Alexey Kondratov <kondratov.aleksey@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Supply restore_command to pg_rewind via CLI argument" }, { "msg_contents": ">> Besides this patch looks good and is ready for committer IMV.\n\nA variant of this patch was originally objected against by Michael, and as this\nversion is marked Ready for Committer I would like to hear his opinions on\nwhether the new evidence changes anything.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 14 Sep 2021 15:41:00 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Supply restore_command to pg_rewind via CLI argument" }, { "msg_contents": "\n\n> On 14 Sep 2021, at 18:41, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>>> Besides this patch looks good and is ready for committer IMV.\n> \n> A variant of this patch was originally objected against by Michael, and as this\n> version is marked Ready for Committer I would like to hear his opinions on\n> whether the new evidence changes anything.\n\nI skimmed the thread for reasoning. --target-restore-command was rejected on the following grounds:\n\n> Do we actually need --target-restore-command at all? 
It seems to me\n> that we have all we need with --restore-target-wal, and that's not\n> really instinctive to pass down a command via another command..\n\nCurrently we know that --restore-target-wal is not enough if postgresql.conf does not reside within PGDATA.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 14 Sep 2021 19:05:02 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: Supply restore_command to pg_rewind via CLI argument" }, { "msg_contents": "> On 14 Sep 2021, at 16:05, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n>> Do we actually need --target-restore-command at all? It seems to me\n>> that we have all we need with --restore-target-wal, and that's not\n>> really instinctive to pass down a command via another command..\n> \n> Currently we know that --restore-target-wal is not enough if postgresql.conf does not reside within PGDATA.\n\nThat's a useful reason which wasn't brought up in the earlier thread, and may\ntip the scales in favor.\n\nThe patch no longer applies, can you submit a rebased version please?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 4 Nov 2021 13:55:43 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Supply restore_command to pg_rewind via CLI argument" }, { "msg_contents": "> On 4 Nov 2021, at 17:55, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n> The patch no longer applies, can you submit a rebased version please?\n\nThanks, Daniel! PFA rebase.\n\nBest regards, Andrey Borodin.", "msg_date": "Fri, 5 Nov 2021 15:10:29 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: Supply restore_command to pg_rewind via CLI argument" }, { "msg_contents": "Hi,\n\nOn 2021-11-05 15:10:29 +0500, Andrey Borodin wrote:\n> > On 4 Nov 2021, at 17:55, Daniel Gustafsson <daniel@yesql.se> wrote:\n> > \n> > The patch no longer applies, can you submit a rebased version please?\n\nDoesn't apply once more: http://cfbot.cputube.org/patch_37_3213.log\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Mar 2022 17:32:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Supply restore_command to pg_rewind via CLI argument" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 22, 2022 at 3:32 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Doesn't apply once more: http://cfbot.cputube.org/patch_37_3213.log\n>\n\nThanks for the reminder, a rebased version is attached.\n\n\nRegards\n-- \nAlexey Kondratov", "msg_date": "Tue, 22 Mar 2022 12:23:35 +0300", "msg_from": "Alexey Kondratov <kondratov.aleksey@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Supply restore_command to pg_rewind via CLI argument" }, { "msg_contents": "On Thu, Nov 04, 2021 at 01:55:43PM +0100, Daniel Gustafsson wrote:\n>> On 14 Sep 2021, at 16:05, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>>> Do we actually need --target-restore-command at all? It seems to me\n>>> that we have all we need with --restore-target-wal, and that's not\n>>> really instinctive to pass down a command via another command..\n>> \n>> Currently we know that --restore-target-wal is not enough if postgresql.conf does not reside within PGDATA.\n> \n> That's a useful reason which wasn't brought up in the earlier thread, and may\n> tip the scales in favor.\n\nIt does now, as of 0d5c3875. FWIW, I am not much a fan of the design\nwhere we pass down a command line as an option value of a different\ncommand line (more games with quoting come to mind first), and\n--config-file should give enough room for the case of this thread. 
I\nhave switched the status of the patch to reflect that.\n--\nMichael", "msg_date": "Thu, 7 Apr 2022 14:53:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Supply restore_command to pg_rewind via CLI argument" } ]
[ { "msg_contents": "Hi,\n\nStatistics for range types are not currently exposed in pg_stats view \n(i.e. STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM and \nSTATISTIC_KIND_BOUNDS_HISTOGRAM).\n\nShouldn't they? If so, here is a patch for adding them.\n\nThe following is a simple example of what it looks like:\n\nCREATE TABLE test(r int4range);\nINSERT INTO test\n     SELECT int4range((random()*10)::integer,(10+random()*10)::integer)\n     FROM generate_series(1,10000);\nSET default_statistics_target = 10;\nANALYZE test;\n\nSELECT range_length_histogram, range_length_empty_frac, \nrange_bounds_histogram\nFROM pg_stats\nWHERE tablename = 'test' \\gx\n\n-[ RECORD 1 \n]-----------+------------------------------------------------------------------------------------------------------\nrange_length_histogram  | {1,4,6,8,9,10,11,12,14,16,20}\nrange_length_empty_frac | {0.0036666666}\nrange_bounds_histogram  | \n{\"[0,10)\",\"[1,11)\",\"[2,12)\",\"[3,13)\",\"[4,14)\",\"[5,15)\",\"[6,16)\",\"[7,17)\",\"[8,18)\",\"[9,19)\",\"[10,20)\"}\n\n\nRegards,\nEgor Rogov.", "msg_date": "Fri, 18 Jun 2021 19:22:51 +0300", "msg_from": "Egor Rogov <e.rogov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "pg_stats and range statistics" }, { "msg_contents": "On 6/18/21 6:22 PM, Egor Rogov wrote:\n> Hi,\n> \n> Statistics for range types are not currently exposed in pg_stats view \n> (i.e. STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM and \n> STATISTIC_KIND_BOUNDS_HISTOGRAM).\n> \n> Shouldn't they? If so, here is a patch for adding them.\n> \n\nI think they should be exposed - I don't see why not to do that. 
I \nnoticed this when working on the count-min sketch experiment too, so \nthanks for this patch.\n\nFWIW I've added the patch to the next CF:\n\nhttps://commitfest.postgresql.org/33/3184/\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 18 Jun 2021 22:31:43 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "Hello,\n\nThis should have been added with [1].\n\nExcerpt from the documentation:\n\"pg_stats is also designed to present the information in a more readable\nformat than the underlying catalog — at the cost that its schema must\nbe extended whenever new slot types are defined for pg_statistic.\" [2]\n\nSo, I added a reminder in pg_statistic.h.\n\nAttached is v2 of this patch with some cosmetic changes. Renamed the columns a\nbit and updated the docs to be a bit more descriptive.\n(range_length_empty_frac -> empty_range_frac, range_bounds_histogram ->\nrange_bounds_histograms)\n\nOne question:\n\nWe do have the option of representing the histogram of lower bounds separately\nfrom the histogram of upper bounds, as two separate view columns. Don't know if\nthere is much utility though and there is a fair bit of added complexity: see\nbelow. Thoughts?\n\nMy attempts via SQL (unnest -> lower|upper -> array_agg) were futile given\nunnest does not play nice with anyarray. 
For instance:\n\nselect unnest(stavalues1) from pg_statistic;\nERROR: cannot determine element type of \"anyarray\" argument\n\nMaybe the only option is to write a UDF pg_get_{lower|upper}_bounds_histogram\nwhich can do something similar to what calc_hist_selectivity does:\n\n/*\n * Convert histogram of ranges into histograms of its lower and upper\n * bounds.\n */\nnhist = hslot.nvalues;\nhist_lower = (RangeBound *) palloc(sizeof(RangeBound) * nhist);\nhist_upper = (RangeBound *) palloc(sizeof(RangeBound) * nhist);\nfor (i = 0; i < nhist; i++)\n{\nbool empty;\n\nrange_deserialize(rng_typcache, DatumGetRangeTypeP(hslot.values[i]),\n &hist_lower[i], &hist_upper[i], &empty);\n/* The histogram should not contain any empty ranges */\nif (empty)\nelog(ERROR, \"bounds histogram contains an empty range\");\n}\n\nThis is looking good and ready.\n\n[1] https://github.com/postgres/postgres/commit/918eee0c497c88260a2e107318843c9b1947bc6f\n[2] https://www.postgresql.org/docs/devel/view-pg-stats.html\n\nRegards,\nSoumyadeep (VMware)", "msg_date": "Sun, 11 Jul 2021 11:54:23 -0700", "msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "Hi,\n\nthanks for the review and corrections.\n\nOn 11.07.2021 21:54, Soumyadeep Chakraborty wrote:\n> Hello,\n>\n> This should have been added with [1].\n>\n> Excerpt from the documentation:\n> \"pg_stats is also designed to present the information in a more readable\n> format than the underlying catalog — at the cost that its schema must\n> be extended whenever new slot types are defined for pg_statistic.\" [2]\n>\n> So, I added a reminder in pg_statistic.h.\n\nGood point.\n\n\n> Attached is v2 of this patch with some cosmetic changes.\n\nI wonder why \"TODO: catalog version bump\"? 
This patch doesn't change \ncatalog structure, or I miss something?\n\n\n> Renamed the columns a\n> bit and updated the docs to be a bit more descriptive.\n> (range_length_empty_frac -> empty_range_frac, range_bounds_histogram ->\n> range_bounds_histograms)\n\nI intended to make the same prefix (\"range_\") for all columns concerned \nwith range types, although I'm fine with the proposed naming.\n\n\n> One question:\n>\n> We do have the option of representing the histogram of lower bounds separately\n> from the histogram of upper bounds, as two separate view columns. Don't know if\n> there is much utility though and there is a fair bit of added complexity: see\n> below. Thoughts?\n\nI thought about it too, and decided not to transform the underlying data \nstructure. As far as I can see, pg_stats never employed such \ntransformations. For example, STATISTIC_KIND_DECHIST is an array \ncontaining the histogram followed by the average in its last element. It \nis shown in pg_stats.elem_count_histogram as is, although it arguably \nmay be splitted into two fields. All in all, I believe pg_stats's job is \nto \"unpack\" stavalues and stanumbers into meaningful fields, and not to \ntry to go deeper than that.\n\n\n>\n> My attempts via SQL (unnest -> lower|upper -> array_agg) were futile given\n> unnest does not play nice with anyarray. 
For instance:\n>\n> select unnest(stavalues1) from pg_statistic;\n> ERROR: cannot determine element type of \"anyarray\" argument\n>\n> Maybe the only option is to write a UDF pg_get_{lower|upper}_bounds_histogram\n> which can do something similar to what calc_hist_selectivity does:\n>\n> /*\n> * Convert histogram of ranges into histograms of its lower and upper\n> * bounds.\n> */\n> nhist = hslot.nvalues;\n> hist_lower = (RangeBound *) palloc(sizeof(RangeBound) * nhist);\n> hist_upper = (RangeBound *) palloc(sizeof(RangeBound) * nhist);\n> for (i = 0; i < nhist; i++)\n> {\n> bool empty;\n>\n> range_deserialize(rng_typcache, DatumGetRangeTypeP(hslot.values[i]),\n> &hist_lower[i], &hist_upper[i], &empty);\n> /* The histogram should not contain any empty ranges */\n> if (empty)\n> elog(ERROR, \"bounds histogram contains an empty range\");\n> }\n>\n> This is looking good and ready.\n>\n> [1] https://github.com/postgres/postgres/commit/918eee0c497c88260a2e107318843c9b1947bc6f\n> [2] https://www.postgresql.org/docs/devel/view-pg-stats.html\n>\n> Regards,\n> Soumyadeep (VMware)\n\n\n", "msg_date": "Mon, 12 Jul 2021 14:10:53 +0300", "msg_from": "Egor Rogov <e.rogov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "On 7/12/21 1:10 PM, Egor Rogov wrote:\n> Hi,\n> \n> thanks for the review and corrections.\n> \n> On 11.07.2021 21:54, Soumyadeep Chakraborty wrote:\n>> Hello,\n>>\n>> This should have been added with [1].\n>>\n>> Excerpt from the documentation:\n>> \"pg_stats is also designed to present the information in a more readable\n>> format than the underlying catalog — at the cost that its schema must\n>> be extended whenever new slot types are defined for pg_statistic.\" [2]\n>>\n>> So, I added a reminder in pg_statistic.h.\n> \n> Good point.\n> \n> \n>> Attached is v2 of this patch with some cosmetic changes.\n> \n> I wonder why \"TODO: catalog version bump\"? 
This patch doesn't change\n> catalog structure, or I miss something?\n> \n\nIt changes system_views.sql, which is catalog change, as it redefines\nthe pg_stats system view (it adds 3 more columns). So it changes what\nyou get after initdb, hence catversion has to be bumped.\n\n> \n>> Renamed the columns a\n>> bit and updated the docs to be a bit more descriptive.\n>> (range_length_empty_frac -> empty_range_frac, range_bounds_histogram ->\n>> range_bounds_histograms)\n> \n> I intended to make the same prefix (\"range_\") for all columns concerned\n> with range types, although I'm fine with the proposed naming.\n> \n\nYeah, I'd vote to change empty_range_frac -> range_empty_frac.\n\n> \n>> One question:\n>>\n>> We do have the option of representing the histogram of lower bounds\n>> separately\n>> from the histogram of upper bounds, as two separate view columns.\n>> Don't know if\n>> there is much utility though and there is a fair bit of added\n>> complexity: see\n>> below. Thoughts?\n> \n> I thought about it too, and decided not to transform the underlying data\n> structure. As far as I can see, pg_stats never employed such\n> transformations. For example, STATISTIC_KIND_DECHIST is an array\n> containing the histogram followed by the average in its last element. It\n> is shown in pg_stats.elem_count_histogram as is, although it arguably\n> may be splitted into two fields. All in all, I believe pg_stats's job is\n> to \"unpack\" stavalues and stanumbers into meaningful fields, and not to\n> try to go deeper than that.\n> \n\nNot firm opinion, but the pg_stats is meant to be easier to\nread/understand for humans. 
So far the transformation were simple\nbecause all the data was fairly simple, but the range stuff may need\nmore complex transformation.\n\nFor example we do quite a bit more in pg_stats_ext views, because it\ndeals with multi-column stats.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 12 Jul 2021 15:04:08 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "Hi Tomas,\n\nOn 12.07.2021 16:04, Tomas Vondra wrote:\n> On 7/12/21 1:10 PM, Egor Rogov wrote:\n>> Hi,\n>>\n>> thanks for the review and corrections.\n>>\n>> On 11.07.2021 21:54, Soumyadeep Chakraborty wrote:\n>>> Hello,\n>>>\n>>> This should have been added with [1].\n>>>\n>>> Excerpt from the documentation:\n>>> \"pg_stats is also designed to present the information in a more readable\n>>> format than the underlying catalog — at the cost that its schema must\n>>> be extended whenever new slot types are defined for pg_statistic.\" [2]\n>>>\n>>> So, I added a reminder in pg_statistic.h.\n>> Good point.\n>>\n>>\n>>> Attached is v2 of this patch with some cosmetic changes.\n>> I wonder why \"TODO: catalog version bump\"? This patch doesn't change\n>> catalog structure, or I miss something?\n>>\n> It changes system_views.sql, which is catalog change, as it redefines\n> the pg_stats system view (it adds 3 more columns). 
So it changes what\n> you get after initdb, hence catversion has to be bumped.\n>\n>>> Renamed the columns a\n>>> bit and updated the docs to be a bit more descriptive.\n>>> (range_length_empty_frac -> empty_range_frac, range_bounds_histogram ->\n>>> range_bounds_histograms)\n>> I intended to make the same prefix (\"range_\") for all columns concerned\n>> with range types, although I'm fine with the proposed naming.\n>>\n> Yeah, I'd vote to change empty_range_frac -> range_empty_frac.\n>\n>>> One question:\n>>>\n>>> We do have the option of representing the histogram of lower bounds\n>>> separately\n>>> from the histogram of upper bounds, as two separate view columns.\n>>> Don't know if\n>>> there is much utility though and there is a fair bit of added\n>>> complexity: see\n>>> below. Thoughts?\n>> I thought about it too, and decided not to transform the underlying data\n>> structure. As far as I can see, pg_stats never employed such\n>> transformations. For example, STATISTIC_KIND_DECHIST is an array\n>> containing the histogram followed by the average in its last element. It\n>> is shown in pg_stats.elem_count_histogram as is, although it arguably\n>> may be splitted into two fields. All in all, I believe pg_stats's job is\n>> to \"unpack\" stavalues and stanumbers into meaningful fields, and not to\n>> try to go deeper than that.\n>>\n> Not firm opinion, but the pg_stats is meant to be easier to\n> read/understand for humans. So far the transformation were simple\n> because all the data was fairly simple, but the range stuff may need\n> more complex transformation.\n>\n> For example we do quite a bit more in pg_stats_ext views, because it\n> deals with multi-column stats.\n\n\nIn pg_stats_ext, yes, but not in pg_stats (at least until now).\n\nSince no one has expressed a strong desire for a more complex \ntransformation, should we proceed with the proposed approach (with \nfurther renaming empty_range_frac -> range_empty_frac as you suggested)? 
\nOr should we wait more for someone to weigh in?\n\n\n>\n>\n> regards\n>\n\n\n", "msg_date": "Fri, 23 Jul 2021 21:05:50 +0300", "msg_from": "Egor Rogov <e.rogov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "Hi Egor,\n\nWhile reviewing a patch improving join estimates for ranges [1] I\nrealized we don't show stats for ranges in pg_stats, and I recalled we\nhad this patch.\n\nI rebased the v2, and I decided to took a stab at showing separate\nhistograms for lower/upper histogram bounds. I believe it makes it way\nmore readable, which is what pg_stats is about IMHO.\n\nThis simply adds two functions, accepting/producing anyarray - one for\nlower bounds, one for upper bounds. I don't think it can be done with a\nplain subquery (or at least I don't know how).\n\nFinally, it renames the empty_range_frac to start with range_, per the\nearlier discussion. I wonder if the new column names for lower/upper\nbounds (range_lower_bounds_histograms/range_upper_bounds_histograms) are\ntoo long ...\n\nregards\n\n[1] https://commitfest.postgresql.org/41/3821/\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 20 Jan 2023 22:50:56 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "Hi Tomas,\n\nOn 21.01.2023 00:50, Tomas Vondra wrote:\n> Hi Egor,\n>\n> While reviewing a patch improving join estimates for ranges [1] I\n> realized we don't show stats for ranges in pg_stats, and I recalled we\n> had this patch.\n>\n> I rebased the v2, and I decided to took a stab at showing separate\n> histograms for lower/upper histogram bounds. 
I believe it makes it way\n> more readable, which is what pg_stats is about IMHO.\n\n\nThanks for looking into this.\n\nI have to admit it looks much better this way, so +1.\n\n\n> This simply adds two functions, accepting/producing anyarray - one for\n> lower bounds, one for upper bounds. I don't think it can be done with a\n> plain subquery (or at least I don't know how).\n\n\nAnyarray is an alien to SQL, so functions are well justified here. What \nmakes me a bit uneasy is two almost identical functions. Should we \nconsider other options like a function with an additional parameter or a \nfunction returning an array of bounds arrays (which is somewhat \nwasteful, but probably it doesn't matter much here)?\n\n\n> Finally, it renames the empty_range_frac to start with range_, per the\n> earlier discussion. I wonder if the new column names for lower/upper\n> bounds (range_lower_bounds_histograms/range_upper_bounds_histograms) are\n> too long ...\n\n\nIt seems so. The ending -s should be left out since it's a single \nhistogram now. 
And I think that \nrange_lower_histogram/range_upper_histogram are descriptive enough.\n\nI'm adding one more patch to shorten the column names, refresh the docs, \nand make 'make check' happy (unfortunately, we have to edit \nsrc/regress/expected/rules.out every time pg_stats definition changes).\n\n\n>\n> regards\n>\n> [1] https://commitfest.postgresql.org/41/3821/\n>", "msg_date": "Sat, 21 Jan 2023 21:53:20 +0300", "msg_from": "Egor Rogov <e.rogov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "On 1/21/23 19:53, Egor Rogov wrote:\n> Hi Tomas,\n> \n> On 21.01.2023 00:50, Tomas Vondra wrote:\n>> Hi Egor,\n>>\n>> While reviewing a patch improving join estimates for ranges [1] I\n>> realized we don't show stats for ranges in pg_stats, and I recalled we\n>> had this patch.\n>>\n>> I rebased the v2, and I decided to took a stab at showing separate\n>> histograms for lower/upper histogram bounds. I believe it makes it way\n>> more readable, which is what pg_stats is about IMHO.\n> \n> \n> Thanks for looking into this.\n> \n> I have to admit it looks much better this way, so +1.\n> \n\nOK, good to hear.\n\n> \n>> This simply adds two functions, accepting/producing anyarray - one for\n>> lower bounds, one for upper bounds. I don't think it can be done with a\n>> plain subquery (or at least I don't know how).\n> \n> \n> Anyarray is an alien to SQL, so functions are well justified here. What\n> makes me a bit uneasy is two almost identical functions. Should we\n> consider other options like a function with an additional parameter or a\n> function returning an array of bounds arrays (which is somewhat\n> wasteful, but probably it doesn't matter much here)?\n> \n\nI thought about that, but I think the alternatives (e.g. a single\nfunction with a parameter determining which boundary to return). 
But I\ndon't think it's better.\n\nMoreover, I think this is pretty similar to lower/upper, which already\nwork on range values. So if we have separate functions for that, we\nshould do the same thing here.\n\nI renamed the functions to ranges_lower/ranges_upper, but maybe why not\nto even call the functions lower/upper too?\n\nThe main trouble with the function I can think of is that we only have\nanyarray type, not anyrangearray. So the functions will get called for\narbitrary array, and the check that it's array of ranges happens inside.\nI'm not sure if that's a good or bad idea, or what would it take to add\na new polymorphic type ...\n\nFor now I at least kept \"ranges_\" to make it less likely.\n\n> \n>> Finally, it renames the empty_range_frac to start with range_, per the\n>> earlier discussion. I wonder if the new column names for lower/upper\n>> bounds (range_lower_bounds_histograms/range_upper_bounds_histograms) are\n>> too long ...\n> \n> \n> It seems so. The ending -s should be left out since it's a single\n> histogram now. And I think that\n> range_lower_histogram/range_upper_histogram are descriptive enough.\n> \n> I'm adding one more patch to shorten the column names, refresh the docs,\n> and make 'make check' happy (unfortunately, we have to edit\n> src/regress/expected/rules.out every time pg_stats definition changes).\n> \n\nThanks. I noticed the docs were added to pg_user_mapping by mistake, not\nto pg_stats. 
So I fixed that, and I also added the new functions.\n\nFinally, I reordered the fields a bit - moved range_empty_frac to keep\nthe histogram fields together.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 22 Jan 2023 19:19:41 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "On Sun, Jan 22, 2023 at 07:19:41PM +0100, Tomas Vondra wrote:\n> On 1/21/23 19:53, Egor Rogov wrote:\n> > Hi Tomas,\n> > On 21.01.2023 00:50, Tomas Vondra wrote:\n> >> This simply adds two functions, accepting/producing anyarray - one for\n> >> lower bounds, one for upper bounds. I don't think it can be done with a\n> >> plain subquery (or at least I don't know how).\n> > \n> > Anyarray is an alien to SQL, so functions are well justified here. What\n> > makes me a bit uneasy is two almost identical functions. Should we\n> > consider other options like a function with an additional parameter or a\n> > function returning an array of bounds arrays (which is somewhat\n> > wasteful, but probably it doesn't matter much here)?\n> > \n> \n> I thought about that, but I think the alternatives (e.g. a single\n> function with a parameter determining which boundary to return). 
But I\n> don't think it's better.\n\nWhat about a common function, maybe called like:\n\nranges_upper_bounds(PG_FUNCTION_ARGS)\n{\n AnyArrayType *array = PG_GETARG_ANY_ARRAY_P(0);\n Oid element_type = AARR_ELEMTYPE(array);\n TypeCacheEntry *typentry;\n\n /* Get information about range type; note column might be a domain */\n typentry = range_get_typcache(fcinfo, getBaseType(element_type));\n\n return ranges_bounds_common(typentry, array, false);\n}\n\nThat saves 40 LOC.\n\nShouldn't this add some sql tests ?\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 22 Jan 2023 15:33:11 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "\n\nOn 1/22/23 22:33, Justin Pryzby wrote:\n> On Sun, Jan 22, 2023 at 07:19:41PM +0100, Tomas Vondra wrote:\n>> On 1/21/23 19:53, Egor Rogov wrote:\n>>> Hi Tomas,\n>>> On 21.01.2023 00:50, Tomas Vondra wrote:\n>>>> This simply adds two functions, accepting/producing anyarray - one for\n>>>> lower bounds, one for upper bounds. I don't think it can be done with a\n>>>> plain subquery (or at least I don't know how).\n>>>\n>>> Anyarray is an alien to SQL, so functions are well justified here. What\n>>> makes me a bit uneasy is two almost identical functions. Should we\n>>> consider other options like a function with an additional parameter or a\n>>> function returning an array of bounds arrays (which is somewhat\n>>> wasteful, but probably it doesn't matter much here)?\n>>>\n>>\n>> I thought about that, but I think the alternatives (e.g. a single\n>> function with a parameter determining which boundary to return). 
But I\n>> don't think it's better.\n> \n> What about a common function, maybe called like:\n> \n> ranges_upper_bounds(PG_FUNCTION_ARGS)\n> {\n> AnyArrayType *array = PG_GETARG_ANY_ARRAY_P(0);\n> Oid element_type = AARR_ELEMTYPE(array);\n> TypeCacheEntry *typentry;\n> \n> /* Get information about range type; note column might be a domain */\n> typentry = range_get_typcache(fcinfo, getBaseType(element_type));\n> \n> return ranges_bounds_common(typentry, array, false);\n> }\n> \n> That saves 40 LOC.\n> \n\nThanks, that's better. But I'm still not sure it's a good idea to add\nfunction with anyarray argument, when we need it to be an array of\nranges ...\n\nI wonder if we have other functions doing something similar, i.e.\naccepting a polymorphic type and then imposing additional restrictions\non it.\n\n> Shouldn't this add some sql tests ?\n> \n\nYeah, I guess we should have a couple tests calling these functions on\ndifferent range arrays.\n\nThis reminds me lower()/upper() have some extra rules about handling\nempty ranges / infinite boundaries etc. These functions should behave\nconsistently (as if we called lower() in a loop) and I'm pretty sure\nthat's not the current state.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 23 Jan 2023 00:21:21 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "Hi,\n\nOn 23.01.2023 02:21, Tomas Vondra wrote:\n>\n> On 1/22/23 22:33, Justin Pryzby wrote:\n>> On Sun, Jan 22, 2023 at 07:19:41PM +0100, Tomas Vondra wrote:\n>>> On 1/21/23 19:53, Egor Rogov wrote:\n>>>> Hi Tomas,\n>>>> On 21.01.2023 00:50, Tomas Vondra wrote:\n>>>>> This simply adds two functions, accepting/producing anyarray - one for\n>>>>> lower bounds, one for upper bounds. 
I don't think it can be done with a\n>>>>> plain subquery (or at least I don't know how).\n>>>> Anyarray is an alien to SQL, so functions are well justified here. What\n>>>> makes me a bit uneasy is two almost identical functions. Should we\n>>>> consider other options like a function with an additional parameter or a\n>>>> function returning an array of bounds arrays (which is somewhat\n>>>> wasteful, but probably it doesn't matter much here)?\n>>>>\n>>> I thought about that, but I think the alternatives (e.g. a single\n>>> function with a parameter determining which boundary to return). But I\n>>> don't think it's better.\n>> What about a common function, maybe called like:\n>>\n>> ranges_upper_bounds(PG_FUNCTION_ARGS)\n>> {\n>> AnyArrayType *array = PG_GETARG_ANY_ARRAY_P(0);\n>> Oid element_type = AARR_ELEMTYPE(array);\n>> TypeCacheEntry *typentry;\n>>\n>> /* Get information about range type; note column might be a domain */\n>> typentry = range_get_typcache(fcinfo, getBaseType(element_type));\n>>\n>> return ranges_bounds_common(typentry, array, false);\n>> }\n>>\n>> That saves 40 LOC.\n>>\n> Thanks, that's better. But I'm still not sure it's a good idea to add\n> function with anyarray argument, when we need it to be an array of\n> ranges ...\n>\n> I wonder if we have other functions doing something similar, i.e.\n> accepting a polymorphic type and then imposing additional restrictions\n> on it.\n\n\nI couldn't find such examples, but adding an adhoc polymorphic type just \ndoesn't look right for me. Besides, you'll end up adding not just \nanyrangearray type, but also anymultirangearray, \nanycompatiblerangearray, anycompatiblemultirangearray, and maybe their \n\"non\"-counterparts like anynonrangearray, and all of these are not of \nmuch use. And one day you may need an array of arrays or something...\n\nI wonder if it's possible to teach SQL to work with anyarray type - at \nruntime the actual type of anyarray elements is known, right? 
In fact, \nunnest() alone is enough to eliminate the need of C functions altogether.\n\n\n>> Shouldn't this add some sql tests ?\n>>\n> Yeah, I guess we should have a couple tests calling these functions on\n> different range arrays.\n>\n> This reminds me lower()/upper() have some extra rules about handling\n> empty ranges / infinite boundaries etc. These functions should behave\n> consistently (as if we called lower() in a loop) and I'm pretty sure\n> that's not the current state.\n\n\nI can try to tidy things up, but first we need to decide on the general \napproach.\n\n\n>\n>\n> regards\n>\n\n\n", "msg_date": "Mon, 23 Jan 2023 13:01:46 +0300", "msg_from": "Egor Rogov <e.rogov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "On 23.01.2023 13:01, Egor Rogov wrote:\n\n> On 23.01.2023 02:21, Tomas Vondra wrote:\n>> On 1/22/23 22:33, Justin Pryzby wrote:\n>>> On Sun, Jan 22, 2023 at 07:19:41PM +0100, Tomas Vondra wrote:\n>>>> On 1/21/23 19:53, Egor Rogov wrote:\n>>>>> Hi Tomas,\n>>>>> On 21.01.2023 00:50, Tomas Vondra wrote:\n>>>>>> This simply adds two functions, accepting/producing anyarray - \n>>>>>> one for\n>>>>>> lower bounds, one for upper bounds. I don't think it can be done \n>>>>>> with a\n>>>>>> plain subquery (or at least I don't know how).\n>>>>> Anyarray is an alien to SQL, so functions are well justified here. \n>>>>> What\n>>>>> makes me a bit uneasy is two almost identical functions. Should we\n>>>>> consider other options like a function with an additional \n>>>>> parameter or a\n>>>>> function returning an array of bounds arrays (which is somewhat\n>>>>> wasteful, but probably it doesn't matter much here)?\n>>>>>\n>>>> I thought about that, but I think the alternatives (e.g. a single\n>>>> function with a parameter determining which boundary to return). 
But I\n>>>> don't think it's better.\n>>> What about a common function, maybe called like:\n>>>\n>>> ranges_upper_bounds(PG_FUNCTION_ARGS)\n>>> {\n>>>      AnyArrayType *array = PG_GETARG_ANY_ARRAY_P(0);\n>>>      Oid         element_type = AARR_ELEMTYPE(array);\n>>>      TypeCacheEntry *typentry;\n>>>\n>>>      /* Get information about range type; note column might be a \n>>> domain */\n>>>      typentry = range_get_typcache(fcinfo, getBaseType(element_type));\n>>>\n>>>      return ranges_bounds_common(typentry, array, false);\n>>> }\n>>>\n>>> That saves 40 LOC.\n>>>\n>> Thanks, that's better. But I'm still not sure it's a good idea to add\n>> function with anyarray argument, when we need it to be an array of\n>> ranges ...\n>>\n>> I wonder if we have other functions doing something similar, i.e.\n>> accepting a polymorphic type and then imposing additional restrictions\n>> on it.\n>\n>\n> I couldn't find such examples, but adding an adhoc polymorphic type \n> just doesn't look right for me. Besides, you'll end up adding not just \n> anyrangearray type, but also anymultirangearray, \n> anycompatiblerangearray, anycompatiblemultirangearray, and maybe their \n> \"non\"-counterparts like anynonrangearray, and all of these are not of \n> much use. And one day you may need an array of arrays or something...\n>\n> I wonder if it's possible to teach SQL to work with anyarray type - at \n> runtime the actual type of anyarray elements is known, right? 
In fact, \n> unnest() alone is enough to eliminate the need of C functions altogether.\n\n\nWhen started to look at how we deal with anyarray columns, I came across \nthe following comment in parse_coerce.c for \nenforce_generic_type_consistency():\n\n* A special case is that we could see ANYARRAY as an actual_arg_type even\n  * when allow_poly is false (this is possible only because pg_statistic has\n  * columns shown as anyarray in the catalogs).\n\nIt makes me realize how anyarray as-a-real-type is specific to \npg_statistic. Even if it's possible to somehow postpone type inference \nfor this case from parse time to execute time, it clearly doesn't worth \nthe effort.\n\nSo, I am for the simplest possible approach, that is, the two proposed \nfunctions ranges_upper(anyarray) and ranges_lower(anyarray). I am not \neven sure if it's worth documenting them, as they are very \npg_statistic-specific and likely won't be useful for end users.\n\n\n>\n>\n>>> Shouldn't this add some sql tests ?\n>>>\n>> Yeah, I guess we should have a couple tests calling these functions on\n>> different range arrays.\n>>\n>> This reminds me lower()/upper() have some extra rules about handling\n>> empty ranges / infinite boundaries etc. 
These functions should behave\n>> consistently (as if we called lower() in a loop) and I'm pretty sure\n>> that's not the current state.\n>\n>\n> I can try to tidy things up, but first we need to decide on the \n> general approach.\n>\n>\n\n\n", "msg_date": "Tue, 24 Jan 2023 10:35:59 +0300", "msg_from": "Egor Rogov <e.rogov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "On Sun, 22 Jan 2023 at 18:22, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> I wonder if we have other functions doing something similar, i.e.\n> accepting a polymorphic type and then imposing additional restrictions\n> on it.\n\nMeh, there's things like array comparison functions that require both\narguments to be the same kind of arrays. And array_agg that requires\nthe elements to be the same type as the state array (ie, same type as\nthe first element). Not sure there are any taking just one specific\ntype though.\n\n> > Shouldn't this add some sql tests ?\n>\n> Yeah, I guess we should have a couple tests calling these functions on\n> different range arrays.\n>\n> This reminds me lower()/upper() have some extra rules about handling\n> empty ranges / infinite boundaries etc. These functions should behave\n> consistently (as if we called lower() in a loop) and I'm pretty sure\n> that's not the current state.\n\nAre we still waiting on these two items? 
Egor, do you think you'll\nhave a chance to work it for this month?\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Mon, 20 Mar 2023 15:27:37 -0400", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "On 20.03.2023 22:27, Gregory Stark (as CFM) wrote:\n> On Sun, 22 Jan 2023 at 18:22, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> I wonder if we have other functions doing something similar, i.e.\n>> accepting a polymorphic type and then imposing additional restrictions\n>> on it.\n> Meh, there's things like array comparison functions that require both\n> arguments to be the same kind of arrays. And array_agg that requires\n> the elements to be the same type as the state array (ie, same type as\n> the first element). Not sure there are any taking just one specific\n> type though.\n>\n>>> Shouldn't this add some sql tests ?\n>> Yeah, I guess we should have a couple tests calling these functions on\n>> different range arrays.\n>>\n>> This reminds me lower()/upper() have some extra rules about handling\n>> empty ranges / infinite boundaries etc. These functions should behave\n>> consistently (as if we called lower() in a loop) and I'm pretty sure\n>> that's not the current state.\n> Are we still waiting on these two items? Egor, do you think you'll\n> have a chance to work it for this month?\n\n\nI can try to tidy things up, but I'm not sure if we reached a consensus.\n\nDo we stick with the ranges_upper(anyarray) and ranges_lower(anyarray) \nfunctions? This approach is okay with me. Tomas, have you made up your mind?\n\nDo we want to document these functions? 
They are very \npg_statistic-specific and won't be useful for end users imo.\n\n\n\n\n", "msg_date": "Mon, 20 Mar 2023 22:54:24 +0300", "msg_from": "Egor Rogov <e.rogov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "\n\nOn 3/20/23 20:54, Egor Rogov wrote:\n> On 20.03.2023 22:27, Gregory Stark (as CFM) wrote:\n>> On Sun, 22 Jan 2023 at 18:22, Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>> I wonder if we have other functions doing something similar, i.e.\n>>> accepting a polymorphic type and then imposing additional restrictions\n>>> on it.\n>> Meh, there's things like array comparison functions that require both\n>> arguments to be the same kind of arrays. And array_agg that requires\n>> the elements to be the same type as the state array (ie, same type as\n>> the first element). Not sure there are any taking just one specific\n>> type though.\n>>\n>>>> Shouldn't this add some sql tests ?\n>>> Yeah, I guess we should have a couple tests calling these functions on\n>>> different range arrays.\n>>>\n>>> This reminds me lower()/upper() have some extra rules about handling\n>>> empty ranges / infinite boundaries etc. These functions should behave\n>>> consistently (as if we called lower() in a loop) and I'm pretty sure\n>>> that's not the current state.\n>> Are we still waiting on these two items? Egor, do you think you'll\n>> have a chance to work it for this month?\n> \n> \n> I can try to tidy things up, but I'm not sure if we reached a consensus.\n> \n\nWe don't have any objections, and that's probably the best consensus we\ncan get here, I guess ...\n\nSo if you could clean it up a bit, and do something about the two open\nitems I mentioned (a bunch of tests on different array, and behavior\nconsistent with lower/upper), that'd be great.\n\n> Do we stick with the ranges_upper(anyarray) and ranges_lower(anyarray)\n> functions? This approach is okay with me. 
Tomas, have you made up your\n> mind?\n> \n\nI think the function approach is fine, but in my January 22 message I\nwas wondering why we're not actually naming them simply lower/upper.\n\n> Do we want to document these functions? They are very\n> pg_statistic-specific and won't be useful for end users imo.\n> \n\nI don't see why not to document them. Sure, we're using them in a fairly\nspecific context, but I don't see why not to let people use them too\n(which would be hard without docs).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 23 Mar 2023 23:46:14 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "On 24.03.2023 01:46, Tomas Vondra wrote:\n\n>\n> So if you could clean it up a bit, and do something about the two open\n> items I mentioned (a bunch of tests on different array,\n\n\nI've added some tests to regress/sql/rangetypes.sql, based on the same \ndataset that is used to test lower() and upper().\n\n\n> and behavior\n> consistent with lower/upper),\n\n\nDone. This required switching from construct_array(), which doesn't \nsupport NULLs, to construct_md_array(), which does. A nice side effect \nis that now we also support multidimensional arrays.\n\nI've moved a common part of ranges_lower_bounds() and \nranges_upper_bounds() to ranges_bounds_common(), following Justin's advice.\n\n\nThere is one thing I'm not sure what to do about. This check:\n\n    if (typentry->typtype != TYPTYPE_RANGE)\n        ereport(ERROR,\n                (errcode(ERRCODE_DATATYPE_MISMATCH),\n                 errmsg(\"expected array of ranges\")));\n\ndoesn't work, because the range_get_typcache() call errors out first \n(\"type %u is not a range type\"). The message doesn't look friendly \nenough for a user-facing SQL function. 
Should we duplicate \nrange_get_typcache's logic and replace the error message?\n\n\n> that'd be great.\n>\n>> Do we stick with the ranges_upper(anyarray) and ranges_lower(anyarray)\n>> functions? This approach is okay with me. Tomas, have you made up your\n>> mind?\n>>\n> I think the function approach is fine, but in my January 22 message I\n> was wondering why we're not actually naming them simply lower/upper.\n\n\nI'd expect from lower(anyarray) function to return the lowest element in \nthe array. This name doesn't hint that the function takes an array of \nranges. So, ranges_ prefix seems justified to me.\n\n\n>\n>> Do we want to document these functions? They are very\n>> pg_statistic-specific and won't be useful for end users imo.\n>>\n> I don't see why not to document them. Sure, we're using them in a fairly\n> specific context, but I don't see why not to let people use them too\n> (which would be hard without docs).\n\n\nOkay. I've corrected the examples a bit.\n\nThe patch is attached.\n\n\nThanks,\nEgor", "msg_date": "Fri, 24 Mar 2023 21:48:09 +0300", "msg_from": "Egor Rogov <e.rogov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "On Fri, 24 Mar 2023 at 14:48, Egor Rogov <e.rogov@postgrespro.ru> wrote:\n>\n> Done.\n\n> There is one thing I'm not sure what to do about. This check:\n>\n> if (typentry->typtype != TYPTYPE_RANGE)\n> ereport(ERROR,\n> (errcode(ERRCODE_DATATYPE_MISMATCH),\n> errmsg(\"expected array of ranges\")));\n>\n> doesn't work, because the range_get_typcache() call errors out first\n> (\"type %u is not a range type\"). The message doesn't look friendly\n> enough for user-faced SQL function. Should we duplicate\n> range_get_typcache's logic and replace the error message?\n\n> Okay. 
I've corrected the examples a bit.\n\nIt sounds like you've addressed Tomas's feedback and still have one\nopen question.\n\nFwiw I rebased it, it seemed to merge fine automatically.\n\nI've updated the CF entry to Needs Review. But at this late date it\nmay have to wait until the next release.\n\n\n\n\n-- \nGregory Stark\nAs Commitfest Manager", "msg_date": "Mon, 3 Apr 2023 17:10:00 -0400", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "hi. I played around with the 2023-Apr 4 latest patch.\n\n+ <literal>lower(ARRAY[numrange(1.1,2.2),numrange(3.3,4.4)])</literal>\nshould be\n+ <literal>ranges_lower(ARRAY[numrange(1.1,2.2),numrange(3.3,4.4)])</literal>\n\n+ <literal>upper(ARRAY[numrange(1.1,2.2),numrange(3.3,4.4)])</literal>\nshould be\n+ <literal>ranges_upper(ARRAY[numrange(1.1,2.2),numrange(3.3,4.4)])</literal>\n\nhttps://www.postgresql.org/docs/current/catalog-pg-type.html\nthere is no association between numrange and their base type numeric.\nso for template: anyarray ranges_lower(anyarray). 
I don't think we can\ninput numrange array and return a numeric array.\n\nhttps://www.postgresql.org/docs/current/extend-type-system.html#EXTEND-TYPES-POLYMORPHIC\n>> When the return value of a function is declared as a polymorphic type, there must be at least one argument position that is also >> polymorphic, and the actual data type(s) supplied for the polymorphic arguments determine the actual result type for that call.\n\n\nregression=# select\nranges_lower(ARRAY[numrange(1.1,2.2),numrange(3.3,4.4),\nnumrange(5.5,6.6)]);\n ranges_lower\n---------------\n {1.1,3.3,5.5}\n(1 row)\nregression=# \\gdesc\n Column | Type\n--------------+------------\n ranges_lower | numrange[]\n(1 row)\n\nI don't think you can cast literal ' {1.1,3.3,5.5}' to numrange[].\n\n\n", "msg_date": "Wed, 6 Sep 2023 17:56:55 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "Hi!\n\nOn Wed, Sep 6, 2023 at 6:18 PM jian he <jian.universality@gmail.com> wrote:\n> + <literal>lower(ARRAY[numrange(1.1,2.2),numrange(3.3,4.4)])</literal>\n> should be\n> + <literal>ranges_lower(ARRAY[numrange(1.1,2.2),numrange(3.3,4.4)])</literal>\n>\n> + <literal>upper(ARRAY[numrange(1.1,2.2),numrange(3.3,4.4)])</literal>\n> should be\n> + <literal>ranges_upper(ARRAY[numrange(1.1,2.2),numrange(3.3,4.4)])</literal>\n>\n> https://www.postgresql.org/docs/current/catalog-pg-type.html\n> there is no association between numrange and their base type numeric.\n> so for template: anyarray ranges_lower(anyarray). 
I don't think we can\n> input numrange array and return a numeric array.\n>\n> https://www.postgresql.org/docs/current/extend-type-system.html#EXTEND-TYPES-POLYMORPHIC\n> >> When the return value of a function is declared as a polymorphic type, there must be at least one argument position that is also >> polymorphic, and the actual data type(s) supplied for the polymorphic arguments determine the actual result type for that call.\n>\n>\n> regression=# select\n> ranges_lower(ARRAY[numrange(1.1,2.2),numrange(3.3,4.4),\n> numrange(5.5,6.6)]);\n> ranges_lower\n> ---------------\n> {1.1,3.3,5.5}\n> (1 row)\n> regression=# \\gdesc\n> Column | Type\n> --------------+------------\n> ranges_lower | numrange[]\n> (1 row)\n>\n> I don't think you can cast literal ' {1.1,3.3,5.5}' to numrange[].\n\nThank you for noticing this. Indeed, our polymorphic type system\ndoesn't support this case. In order to support this, we need\nsomething like \"anyrangearray\" pseudo-type. However, it seems\noverkill to introduce a new pseudo-type just to update pg_stats.\n\nAdditionally, I found that the current patch can't handle infinite\nrange bounds and discards information about inclusiveness of range\nbounds. The infinite bounds could be represented as NULL (while I'm\nnot sure how good this representation is). Regarding inclusiveness, I\ndon't see the possibility to represent them in a reasonable way within\nan array of base types. I also don't feel good about discarding the\naccuracy in the pg_stats view.\n\nIn conclusion of all of the above, I decided to revise the patch and\nshow the bounds histogram as it's stored in pg_statistic. I revised\nthe docs correspondingly.\n\nAlso for some reason, the patch added description of new columns to\nthe documentation of pg_user_mapping table. 
I've fixed that by moving\nthem to the documentation of pg_stats view.\n\nAlso, I've extracted the new comment in pg_statistic.h into a separate patch.\n\nI'm going to push this if there are no objections.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sat, 25 Nov 2023 01:06:24 +0200", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "On Sat, Nov 25, 2023 at 7:06 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> Hi!\n>\n> I'm going to push this if there are no objections.\n>\n> ------\n> Regards,\n> Alexander Korotkov\n\nsrc/include/catalog/pg_statistic.h\n268: * range type's subdiff function. Only non-null rows are considered.\n\nshould it be: * range type's subdiff function. Only non-null,\nnon-empty rows are considered.\n\nOther than that, it looks fine to me.\n\n\n", "msg_date": "Sat, 25 Nov 2023 16:28:32 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "On Sat, Nov 25, 2023 at 7:06 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> Hi!\n> Additionally, I found that the current patch can't handle infinite\n> range bounds and discards information about inclusiveness of range\n> bounds. The infinite bounds could be represented as NULL (while I'm\n> not sure how good this representation is). Regarding inclusiveness, I\n> don't see the possibility to represent them in a reasonable way within\n> an array of base types. 
I also don't feel good about discarding the\n> accuracy in the pg_stats view.\n>\n\nin range_length_histogram, maybe we can document that when calculating\nthe length of a range, inclusiveness will be true.\n\n\n", "msg_date": "Sat, 25 Nov 2023 16:57:49 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "Hi Alexander,\n\nOn 25.11.2023 02:06, Alexander Korotkov wrote:\n>\n> In conclusion of all of the above, I decided to revise the patch and\n> show the bounds histogram as it's stored in pg_statistic. I revised\n> the docs correspondingly.\n\n\nSo basically we returned to what it all started from? I guess it's \nbetter than nothing, although I have to admit that the two-array \nrepresentation is much more readable. Unfortunately it brings in a \nsurprising amount of complexity.\n\nAnyway, thanks for looking into this!\n\n\n\n\n", "msg_date": "Sat, 25 Nov 2023 12:14:24 +0300", "msg_from": "Egor Rogov <e.rogov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "On Sat, Nov 25, 2023 at 10:58 AM jian he <jian.universality@gmail.com> wrote:\n> On Sat, Nov 25, 2023 at 7:06 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >\n> > Hi!\n> > Additionally, I found that the current patch can't handle infinite\n> > range bounds and discards information about inclusiveness of range\n> > bounds. The infinite bounds could be represented as NULL (while I'm\n> > not sure how good this representation is). Regarding inclusiveness, I\n> > don't see the possibility to represent them in a reasonable way within\n> > an array of base types. I also don't feel good about discarding the\n> > accuracy in the pg_stats view.\n> >\n>\n> in range_length_histogram, maybe we can document that when calculating\n> the length of a range, inclusiveness will be true.\n\nI've revised the patchset. 
Edited comment in pg_statistic.h as you\nproposed. And I've added to the documentation a short note on how the\nrange length histogram is calculated.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sat, 25 Nov 2023 18:55:11 +0200", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stats and range statistics" }, { "msg_contents": "On Sat, Nov 25, 2023 at 11:14 AM Egor Rogov <e.rogov@postgrespro.ru> wrote:\n>\n> Hi Alexander,\n>\n> On 25.11.2023 02:06, Alexander Korotkov wrote:\n> >\n> > In conclusion of all of the above, I decided to revise the patch and\n> > show the bounds histogram as it's stored in pg_statistic. I revised\n> > the docs correspondingly.\n>\n>\n> So basically we returned to what it all has started from? I guess it's\n> better than nothing, although I have to admit that two-array\n> representation is much more readable. Unfortunately it brings in a\n> surprising amount of complexity.\n\nYep, it is.\n\n> Anyway, thanks for looking into this!\n\nAnd thank you for the feedback!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sat, 25 Nov 2023 18:57:19 +0200", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stats and range statistics" } ]
[ { "msg_contents": "I see that commit 547f04e73 caused pgbench to start printing its\nversion number. I think that's a good idea in general, but it\nappears to me that next to no thought went into the details\n(as perhaps evidenced by the fact that the commit message doesn't\neven mention it). I've got two beefs with how it was done:\n\n* The output only mentions pgbench's own version, which would be\nhighly misleading if the server being used is of a different\nversion. I should think that in most cases the server's version\nis more important than pgbench's.\n\n* We have a convention for how client programs should print their\nversions, and this ain't it. (Specifically, you should print the\nPG_VERSION string not make up your own.)\n\nWhat I think should have been done instead is to steal psql's\nbattle-tested logic for printing its startup version banner,\nmore or less as attached.\n\nOne point here is that printing the server version requires\naccess to a connection, which printResults() hasn't got\nbecause we already closed all the connections by that point.\nI solved that by printing the banner during the initial\nconnection that gets the scale factor, does vacuuming, etc.\nIf you're dead set on not printing the version till the end,\nthat could be made to happen; but it's not clear to me that\nthis way is any worse, and it's certainly easier.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 18 Jun 2021 13:20:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Version reporting in pgbench" }, { "msg_contents": "\nHello Tom,\n\n> One point here is that printing the server version requires\n> access to a connection, which printResults() hasn't got\n> because we already closed all the connections by that point.\n> I solved that by printing the banner during the initial\n> connection that gets the scale factor, does vacuuming, etc.\n\nOk.\n\n> If you're dead set on not printing the version till the end,\n> that 
could be made to happen; but it's not clear to me that\n> this way is any worse, and it's certainly easier.\n\npgbench (14beta1 dev 2021-06-12 08:10:44, server 13.3 (Ubuntu 13.3-1.pgdg20.04+1))\n\nWhy not move the printVersion call right after the connection is created, \nat line 6374?\n\nOtherwise it works for me.\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 18 Jun 2021 20:27:16 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Version reporting in pgbench" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> Why not move the printVersion call right after the connection is created, \n> at line 6374?\n\nI started with that, and one of the 001_pgbench_with_server.pl\ntests fell over --- it was expecting no stdout output before a\n\"Perhaps you need to do initialization\" failure. If you don't\nmind changing that, I agree that printing immediately after\nthe connection is made is a bit less astonishing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Jun 2021 14:40:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Version reporting in pgbench" }, { "msg_contents": "Hello Tom,\n\n>> Why not move the printVersion call right after the connection is \n>> created, at line 6374?\n>\n> I started with that, and one of the 001_pgbench_with_server.pl\n> tests fell over --- it was expecting no stdout output before a\n> \"Perhaps you need to do initialization\" failure. If you don't\n> mind changing that,\n\nWhy would I mind?\n\n> I agree that printing immediately after the connection is made is a bit \n> less astonishing.\n\nOk, so let's just update the test? Attached a proposal with the version \nmoved.\n\nNote that if no connections are available, then you do not get the \nversion, which may be a little bit strange. Attached v3 prints out the \nlocal version in that case. 
Not sure whether it is worth the effort.\n\n-- \nFabien.", "msg_date": "Fri, 18 Jun 2021 23:18:45 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Version reporting in pgbench" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> Note that if no connections are available, then you do not get the \n> version, which may be a little bit strange. Attached v3 prints out the \n> local version in that case. Not sure whether it is worth the effort.\n\nI'm inclined to think that the purpose of that output is mostly\nto report the server version, so not printing it if we fail to\nconnect isn't very surprising. Certainly that's how psql has\nacted for decades.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Jun 2021 17:28:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Version reporting in pgbench" }, { "msg_contents": "\n>> Note that if no connections are available, then you do not get the\n>> version, which may be a little bit strange. Attached v3 prints out the\n>> local version in that case. Not sure whether it is worth the effort.\n>\n> I'm inclined to think that the purpose of that output is mostly\n> to report the server version, so not printing it if we fail to\n> connect isn't very surprising. Certainly that's how psql has\n> acted for decades.\n\nI'm fine with having a uniform behavior over pg commands.\n\nThanks for the improvement!\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 18 Jun 2021 23:37:44 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Version reporting in pgbench" } ]
[ { "msg_contents": "A few questions about this comment in walsender.c, originating in\ncommit abfd192b1b5b:\n\n/*\n * Found the requested timeline in the history. Check that\n * requested startpoint is on that timeline in our history.\n *\n * This is quite loose on purpose. We only check that we didn't\n * fork off the requested timeline before the switchpoint. We\n * don't check that we switched *to* it before the requested\n * starting point. This is because the client can legitimately\n * request to start replication from the beginning of the WAL\n * segment that contains switchpoint, but on the new timeline, so\n * that it doesn't end up with a partial segment. If you ask for\n * too old a starting point, you'll get an error later when we\n * fail to find the requested WAL segment in pg_wal.\n *\n * XXX: we could be more strict here and only allow a startpoint\n * that's older than the switchpoint, if it's still in the same\n * WAL segment. \n */\n\n1. I think there's a typo: it should be \"fork off the requested\ntimeline before the startpoint\", right?\n\n2. It seems to imply that requesting an old start point is wrong, but I\ndon't see why. As long as the WAL is there (or at least the slot\nboundaries), what's the problem? Can we either just change the comment\nto say that it's fine to start on an ancestor of the requested timeline\n(and maybe update the docs, too)?\n\n3. I noticed when looking at this that the terminology in the docs is a\nbit inconsistent between START_REPLICATION and\nrecovery_target_timeline.\n a. In recovery_target_timeline:\n i. a numeric value means \"stop when this timeline forks\"\n ii. \"latest\" means \"follow forks along the newest timeline\"\n iii. \"current\" is an alias for the backup's numerical timeline\n b. In the start START_REPLICATION docs:\n i. \"current\" means \"follow forks along the newest timeline\"\n ii. a numeric value that is equal to the current timeline is the\nsame as \"current\"\n iii. 
a numeric value that is less than the current timeline means\n\"stop when this timeline forks\"\n\nOn point #3, it looks like START_REPLICATION could be improved:\n\n * Should we change the docs to say \"latest\" rather than \"current\"?\n * Should we change the behavior so that specifying the current\ntimeline as a number still means a historic timeline (e.g. it will stop\nreplicating there and emit a tuple)?\n * Should we add some keywords like \"latest\" or \"current\" to the\nSTART_REPLICATION command?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 18 Jun 2021 10:27:57 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "A few nuances about specifying the timeline with START_REPLICATION" }, { "msg_contents": "On 18/06/2021 20:27, Jeff Davis wrote:\n> A few questions about this comment in walsender.c, originating in\n> commit abfd192b1b5b:\n> \n> /*\n> * Found the requested timeline in the history. Check that\n> * requested startpoint is on that timeline in our history.\n> *\n> * This is quite loose on purpose. We only check that we didn't\n> * fork off the requested timeline before the switchpoint. We\n> * don't check that we switched *to* it before the requested\n> * starting point. This is because the client can legitimately\n> * request to start replication from the beginning of the WAL\n> * segment that contains switchpoint, but on the new timeline, so\n> * that it doesn't end up with a partial segment. If you ask for\n> * too old a starting point, you'll get an error later when we\n> * fail to find the requested WAL segment in pg_wal.\n> *\n> * XXX: we could be more strict here and only allow a startpoint\n> * that's older than the switchpoint, if it's still in the same\n> * WAL segment.\n> */\n> \n> 1. I think there's a typo: it should be \"fork off the requested\n> timeline before the startpoint\", right?\n\nYes, I think you're right.\n\n> 2. 
It seems to imply that requesting an old start point is wrong, but I\n> don't see why. As long as the WAL is there (or at least the slot\n> boundaries), what's the problem? Can we either just change the comment\n> to say that it's fine to start on an ancestor of the requested timeline\n> (and maybe update the docs, too)?\n\nHmm, I'm not sure if the logic in WalSndSegmentOpen() would work if you \ndid that. For example, if you had the following WAL segments, because a \ntimeline switch happened somewhere in the middle of segment 13:\n\n000000040000000000000012\n000000040000000000000013\n000000050000000000000013\n000000050000000000000014\n\nand you requested to start streaming from timeline 5, 0/12000000, I \nthink WalSndSegmentOpen() would try to open file \n\"000000050000000000000012\" and not find it.\n\nWe could teach it to look into the timeline history to find the correct \nfile, though. Come to think of it, we could remove the START_REPLICATION \nTIMELINE option altogether (or rather, make it optional for backwards \ncompatibility). The server doesn't need it for anything, it knows the \ntimeline history so the LSN is enough to uniquely identify the starting \npoint.\n\nIf the client asks for a historic timeline, the replication will stop \nwhen it reaches the end of that timeline. In hindsight, I think it would \nmake more sense to send a message to the client to say that it's \nswitching to a new timeline, and continue streaming from the new timeline.\n\n> 3. I noticed when looking at this that the terminology in the docs is a\n> bit inconsistent between START_REPLICATION and\n> recovery_target_timeline.\n> a. In recovery_target_timeline:\n> i. a numeric value means \"stop when this timeline forks\"\n> ii. \"latest\" means \"follow forks along the newest timeline\"\n> iii. \"current\" is an alias for the backup's numerical timeline\n> b. In the start START_REPLICATION docs:\n> i. \"current\" means \"follow forks along the newest timeline\"\n> ii. 
a numeric value that is equal to the current timeline is the\n> same as \"current\"\n> iii. a numeric value that is less than the current timeline means\n> \"stop when this timeline forks\"\n> \n> On point #3, it looks like START_REPLICATION could be improved:\n> \n> * Should we change the docs to say \"latest\" rather than \"current\"?\n> * Should we change the behavior so that specifying the current\n> timeline as a number still means a historic timeline (e.g. it will stop\n> replicating there and emit a tuple)?\n> * Should we add some keywords like \"latest\" or \"current\" to the\n> START_REPLICATION command?\nHmm, the timeline in the START_REPLICATION command is not specifying a \nrecovery target timeline, so I don't think \"latest\" or \"current\" make \nmuch sense there. Per above, it just tells the server which timeline the \nrequested starting point belongs to, so it's actually redundant.\n\n- Heikki\n\n\n", "msg_date": "Fri, 18 Jun 2021 21:48:47 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: A few nuances about specifying the timeline with\n START_REPLICATION" }, { "msg_contents": "On Fri, 2021-06-18 at 21:48 +0300, Heikki Linnakangas wrote:\n> On 18/06/2021 20:27, Jeff Davis wrote:\n> We could teach it to look into the timeline history to find the\n> correct \n> file, though.\n\nThat's how recovery_target_timeline behaves, and it would match my\nintuition better if START_REPLICATION behaved that way.\n\n> If the client asks for a historic timeline, the replication will\n> stop \n> when it reaches the end of that timeline. In hindsight, I think it\n> would \n> make more sense to send a message to the client to say that it's \n> switching to a new timeline, and continue streaming from the new\n> timeline.\n\nWhy is it important for the standby to be told explicitly in the\nprotocol about timeline switches? 
If it is important, why only for\nhistorical timelines?\n\n> Hmm, the timeline in the START_REPLICATION command is not specifying\n> a \n> recovery target timeline, so I don't think \"latest\" or \"current\"\n> make \n> much sense there. Per above, it just tells the server which timeline\n> the \n> requested starting point belongs to, so it's actually redundant.\n\nThat's not very clear from the docs: \"if TIMELINE option is specified,\nstreaming starts on timeline tli...\".\n\nPart of the confusion is that there's not a good distinction in\nterminology between:\n 1. a timeline ID, which is a specific segment of a timeline\n 2. a timeline made up of the given timeline ID and all its\nancestors, terminating at the given ID\n 3. the timeline made up of the current ID, all ancestor IDs, and all\ndescendent IDs that the current active primary switches to\n 4. the set of all timelines that contain a given ID\n\nIt seems you are saying that replication only concerns itself with #3,\nwhich does not require a timeline ID at all. That seems basically\ncorrect for now, but since we already document the protocol to take a\ntimeline, it makes sense to me to just have the primary serve it if\npossible.\n\nIf we (continue to?) allow timelines for replication, it will start to\ntreat the primary like an archive. That might not be quite what was\nintended, but could be powerful. 
You could imagine a special archive\nthat implements the replication protocol, and have replicas directly\noff the archive, or maybe doing PITR off the archive.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 18 Jun 2021 12:55:17 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: A few nuances about specifying the timeline with\n START_REPLICATION" }, { "msg_contents": "On 18/06/2021 22:55, Jeff Davis wrote:\n> On Fri, 2021-06-18 at 21:48 +0300, Heikki Linnakangas wrote:\n>> On 18/06/2021 20:27, Jeff Davis wrote:\n>> We could teach it to look into the timeline history to find the\n>> correct\n>> file, though.\n> \n> That's how recovery_target_timeline behaves, and it would match my\n> intuition better if START_REPLICATION behaved that way.\n> \n>> If the client asks for a historic timeline, the replication will\n>> stop\n>> when it reaches the end of that timeline. In hindsight, I think it\n>> would\n>> make more sense to send a message to the client to say that it's\n>> switching to a new timeline, and continue streaming from the new\n>> timeline.\n> \n> Why is it important for the standby to be told explicitly in the\n> protocol about timeline switches?\n\nSo that it knows to write the WAL to the correctly named WAL segment. \nYou could do it differently, looking at the 'xlp_tli' field in the WAL \npage headers, or watching out for checkpoint records that change the \ntimeline. But currently the standby (and pg_receivewal) depends on the \nprotocol for that.\n\n> If it is important, why only for historical timelines?\n\nWell, the latest timeline doesn't have any timeline switches, by \ndefinition. If you're connected to a standby server, IOW you're doing \ncascading replication, then the current timeline can become historic, if \nthe standby follows a timeline switch. 
In that case, the replication is \nstopped when you reach the timeline switch, just like when you request a \nhistoric timeline.\n\n>> Hmm, the timeline in the START_REPLICATION command is not specifying\n>> a\n>> recovery target timeline, so I don't think \"latest\" or \"current\"\n>> make\n>> much sense there. Per above, it just tells the server which timeline\n>> the\n>> requested starting point belongs to, so it's actually redundant.\n> \n> That's not very clear from the docs: \"if TIMELINE option is specified,\n> streaming starts on timeline tli...\".\n> \n> Part of the confusion is that there's not a good distinction in\n> terminology between:\n> 1. a timeline ID, which is a specific segment of a timeline\n> 2. a timeline made up of the given timeline ID and all its\n> ancestors, terminating at the given ID\n> 3. the timeline made up of the current ID, all ancestor IDs, and all\n> descendent IDs that the current active primary switches to\n> 4. the set of all timelines that contain a given ID\n\nAgreed, that's a bit confusing.\n\n> It seems you are saying that replication only concerns itself with #3,\n> which does not require a timeline ID at all. That seems basically\n> correct for now, but since we already document the protocol to take a\n> timeline, it makes sense to me to just have the primary serve it if\n> possible.\n> \n> If we (continue to?) allow timelines for replication, it will start to\n> treat the primary like an archive. That might not be quite what was\n> intended, but could be powerful. You could imagine a special archive\n> that implements the replication protocol, and have replicas directly\n> off the archive, or maybe doing PITR off the archive.\n\nTrue.\n\n- Heikki\n\n\n", "msg_date": "Sat, 19 Jun 2021 00:16:32 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: A few nuances about specifying the timeline with\n START_REPLICATION" } ]
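The point the thread keeps returning to is that the server's timeline history alone is enough to resolve which timeline any given LSN belongs to, which is why the TIMELINE option is redundant for identifying the starting point. A minimal sketch of that lookup (an editor's illustration, not PostgreSQL's code; the history is modeled as (tli, switchpoint) pairs in the order a .history file lists them, with None marking the current timeline's open end):

```python
# Sketch of the "which timeline owns this LSN" lookup discussed above.
# A timeline history is a list of (tli, end_lsn) pairs, oldest first;
# end_lsn is the switchpoint at which the next timeline forked off.
# The final (current) timeline has no switchpoint, modeled as None.

def tli_of_point_in_history(lsn, history):
    """Return the timeline ID that contains lsn in this history."""
    for tli, end_lsn in history:
        if end_lsn is None or lsn < end_lsn:
            return tli
    raise ValueError("LSN beyond end of history")

# Example mirroring Heikki's scenario: timeline 4 ran until a switch
# in the middle of segment 13, after which timeline 5 took over.
history = [(4, 0x13500000), (5, None)]
assert tli_of_point_in_history(0x12000000, history) == 4
assert tli_of_point_in_history(0x14000000, history) == 5
```

With such a lookup, a start point of 0/12000000 resolves to timeline 4's portion of the history, which is what would let WalSndSegmentOpen() pick the 000000040000000000000012 segment name instead of failing on a nonexistent timeline-5 file.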
[ { "msg_contents": "The recipe for running TAP tests in src/Makefile.global doesn't work for\nthe PGXS case. If you try it you get something like this:\n\n\nandrew@emma:tests $ make PG_CONFIG=../inst.head.5701/bin/pg_config installcheck\nrm -rf '/home/andrew/pgl/tests'/tmp_check\n/usr/bin/mkdir -p '/home/andrew/pgl/tests'/tmp_check\ncd ./ && TESTDIR='/home/andrew/pgl/tests' PATH=\"/home/andrew/pgl/inst.head.5701/bin:$PATH\" PGPORT='65701' \\\n top_builddir='/home/andrew/pgl/tests//home/andrew/pgl/inst.head.5701/lib/postgresql/pgxs/src/makefiles/../..' \\\n PG_REGRESS='/home/andrew/pgl/tests//home/andrew/pgl/inst.head.5701/lib/postgresql/pgxs/src/makefiles/../../src/test/regress/pg_regress' \\\n REGRESS_SHLIB='/src/test/regress/regress.so' \\\n /usr/bin/prove -I /home/andrew/pgl/inst.head.5701/lib/postgresql/pgxs/src/makefiles/../../src/test/perl/ -I ./ t/*.pl\n\n\nNotice those bogus settings for top_builddir, PG_REGRESS and\nREGRESS_SHLIB. The attached patch fixes this bug. With it you can get by\nwith a Makefile as simple as this for running TAP tests under PGXS:\n\n TAP_TESTS = 1\n\n PG_CONFIG = pg_config\n PGXS := $(shell $(PG_CONFIG) --pgxs)\n include $(PGXS)\n\n\nI removed the REGRESS_SHLIB setting altogether for the PGXS case - it's\nnot clear to me why we need it in a TAP test recipe at all. Certainly\nit's not installed anywhere in a standard install so it seems entirely\nbogus for the PGXS case.\n\nThis seems like a bug fix that should be patched all the way back,\nalthough I haven't yet investigated the back branches.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sun, 20 Jun 2021 09:44:30 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "PXGS vs TAP tests" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> The recipe for running TAP tests in src/Makefile.global doesn't work for\n> the PGXS case. 
If you try it you get something like this:\n> ...\n> Notice those bogus settings for top_builddir, PG_REGRESS and\n> REGRESS_SHLIB. The attached patch fixes this bug.\n\nOK, but does the 'prove_check' macro need similar adjustments?\n\n> I removed the REGRESS_SHLIB setting altogether for the PGXS case - it's\n> not clear to me why we need it in a TAP test recipe at all.\n\nAfter some digging in the git history, it looks like it's there because\nof Noah's c09850992, which makes me wonder whether 017_shm.pl requires\nit. If so, it'd make more sense perhaps for that one test script\nto set up the environment variable than to have it cluttering every TAP\nrun.\n\n(In any case, please don't push this till after beta2 is tagged.\nWe don't need possible test instability right now.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 20 Jun 2021 10:45:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PXGS vs TAP tests" }, { "msg_contents": "\nOn 6/20/21 10:45 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> The recipe for running TAP tests in src/Makefile.global doesn't work for\n>> the PGXS case. If you try it you get something like this:\n>> ...\n>> Notice those bogus settings for top_builddir, PG_REGRESS and\n>> REGRESS_SHLIB. The attached patch fixes this bug.\n> OK, but does the 'prove_check' macro need similar adjustments?\n\n\nNo, PGXS doesn't support 'make check'. In the case of TAP tests it\nreally doesn't matter - you're not going to be running against a started\nserver anyway.\n\n\n>\n>> I removed the REGRESS_SHLIB setting altogether for the PGXS case - it's\n>> not clear to me why we need it in a TAP test recipe at all.\n> After some digging in the git history, it looks like it's there because\n> of Noah's c09850992, which makes me wonder whether 017_shm.pl requires\n> it. 
If so, it'd make more sense perhaps for that one test script\n> to set up the environment variable than to have it cluttering every TAP\n> run.\n\n\nYeah, I'll do some testing.\n\n\n\n>\n> (In any case, please don't push this till after beta2 is tagged.\n> We don't need possible test instability right now.)\n>\n> \t\t\t\n\n\n\nYes, of course.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 20 Jun 2021 10:56:40 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: PXGS vs TAP tests" }, { "msg_contents": "On 6/20/21 10:56 AM, Andrew Dunstan wrote:\n> On 6/20/21 10:45 AM, Tom Lane wrote:\n>\n>>> I removed the REGRESS_SHLIB setting altogether for the PGXS case - it's\n>>> not clear to me why we need it in a TAP test recipe at all.\n>> After some digging in the git history, it looks like it's there because\n>> of Noah's c09850992, which makes me wonder whether 017_shm.pl requires\n>> it. If so, it'd make more sense perhaps for that one test script\n>> to set up the environment variable than to have it cluttering every TAP\n>> run.\n>\n> Yeah, I'll do some testing.\n>\n>\n>\n\nTests pass with the attached patch, which puts the setting in the\nMakefile for the recovery tests. 
The script itself doesn't need any\n> changing.\n\n+REGRESS_SHLIB=$(abs_top_builddir)/src/test/regress/regress$(DLSUFFIX)\n+export REGRESS_SHLIB\nIt may be better to add a comment here explaning why REGRESS_SHLIB is\nrequired in this Makefile then?\n\nWhile on it, could we split those commands into multiple lines and\nreduce the noise of future diffs? Something as simple as that would\nmake those prove commands easier to follow:\n+cd $(srcdir) && TESTDIR='$(CURDIR)' \\\n+ $(with_temp_install) \\\n+ PGPORT='6$(DEF_PGPORT)' \\\n+ PG_REGRESS='$(CURDIR)/$(top_builddir)/src/test/regress/pg_regress' \\\n+ REGRESS_SHLIB= '$(abs_top_builddir)/src/test/regress/regress$(DLSUFFIX)' \\\n+ $(PROVE) $(PG_PROVE_FLAGS) $(PROVE_FLAGS) $(if $(PROVE_TESTS),$(PROVE_TESTS),t/*.pl)\n\nThere are other places where this could happen, but the TAP commands\nare particularly long.\n--\nMichael", "msg_date": "Tue, 22 Jun 2021 09:23:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PXGS vs TAP tests" }, { "msg_contents": "\nOn 6/21/21 8:23 PM, Michael Paquier wrote:\n> On Sun, Jun 20, 2021 at 01:24:04PM -0400, Andrew Dunstan wrote:\n>> Tests pass with the attached patch, which puts the setting in the\n>> Makefile for the recovery tests. The script itself doesn't need any\n>> changing.\n> +REGRESS_SHLIB=$(abs_top_builddir)/src/test/regress/regress$(DLSUFFIX)\n> +export REGRESS_SHLIB\n> It may be better to add a comment here explaning why REGRESS_SHLIB is\n> required in this Makefile then?\n>\n> While on it, could we split those commands into multiple lines and\n> reduce the noise of future diffs? 
Something as simple as that would\n> make those prove commands easier to follow:\n> +cd $(srcdir) && TESTDIR='$(CURDIR)' \\\n> + $(with_temp_install) \\\n> + PGPORT='6$(DEF_PGPORT)' \\\n> + PG_REGRESS='$(CURDIR)/$(top_builddir)/src/test/regress/pg_regress' \\\n> + REGRESS_SHLIB= '$(abs_top_builddir)/src/test/regress/regress$(DLSUFFIX)' \\\n> + $(PROVE) $(PG_PROVE_FLAGS) $(PROVE_FLAGS) $(if $(PROVE_TESTS),$(PROVE_TESTS),t/*.pl)\n>\n> There are other places where this could happen, but the TAP commands\n> are particularly long.\n\n\nOK, done.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 1 Jul 2021 09:13:25 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: PXGS vs TAP tests" }, { "msg_contents": "On Thu, Jul 01, 2021 at 09:13:25AM -0400, Andrew Dunstan wrote:\n> OK, done.\n\nThanks!\n--\nMichael", "msg_date": "Fri, 2 Jul 2021 09:10:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PXGS vs TAP tests" } ]
[ { "msg_contents": "I had an idea for another way to attack $SUBJECT, now that we have\nthe ability to adjust debug_invalidate_system_caches_always at\nruntime. Namely, that a lot of time goes into repeated initdb\nruns (especially if the TAP tests are enabled); but surely we\nlearn little from CCA initdb runs after the first one. We could\ntrim this fat by:\n\n(1) Instead of applying CLOBBER_CACHE_ALWAYS as a compile option,\nadd \"debug_invalidate_system_caches_always = 1\" to the buildfarm's\n\"extra_config\" options, which are added to postgresql.conf after\ninitdb. Thus, initdb will run without that option but all the\nactual test cases will have it.\n\n(2) To close the testing gap that now we have *no* CCA coverage\nof initdb runs, adjust either the buildfarm's initdb-only steps\nor initdb's 001_initdb.pl TAP script to set\n\"debug_invalidate_system_caches_always = 1\" in one of the runs.\nI think we can make that happen so far as the bootstrap backend is\nconcerned by including \"-c debug_invalidate_system_caches_always=1\"\non its command line; but of course initdb itself has no way to be\ntold to do that. I think we could invent a \"-c NAME=VALUE\" switch\nfor initdb to tell it to pass down that switch to its child\nbackends. Then there'd have to be some way to tell the calling\ntests whether to do that.\n\n(3) Since this only works in v14 and up, older branches would\nhave to fall back to -DCLOBBER_CACHE_ALWAYS. Perhaps we could\nimprove the buildfarm client script so that buildfarm owners\njust configure \"clobber_cache_testing => 1\" and then the script\nwould do the right branch-dependent thing.\n\nOf course, we could eliminate the need for branch-dependent\nlogic if we cared to back-patch the addition of the\ndebug_invalidate_system_caches_always GUC, but that's probably\na bridge too far.\n\nIt looks to me like this would cut around an hour off of the\nroughly-a-day cycle times of the existing CCA animals. 
None\nof them run any TAP tests, presumably because that would make\ntheir cycle time astronomical, but maybe this change will help\nmake that practical.\n\nThoughts?\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 20 Jun 2021 18:10:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Reducing the cycle time for CLOBBER_CACHE_ALWAYS buildfarm members" }, { "msg_contents": "\nOn 6/20/21 6:10 PM, Tom Lane wrote:\n> (3) Since this only works in v14 and up, older branches would\n> have to fall back to -DCLOBBER_CACHE_ALWAYS. Perhaps we could\n> improve the buildfarm client script so that buildfarm owners\n> just configure \"clobber_cache_testing => 1\" and then the script\n> would do the right branch-dependent thing.\n\n\nMaybe. Let's see what you come up with.\n\n\n>\n> Of course, we could eliminate the need for branch-dependent\n> logic if we cared to back-patch the addition of the\n> debug_invalidate_system_caches_always GUC, but that's probably\n> a bridge too far.\n\n\nYeah, I think so.\n\n\n>\n> It looks to me like this would cut around an hour off of the\n> roughly-a-day cycle times of the existing CCA animals. None\n> of them run any TAP tests, presumably because that would make\n> their cycle time astronomical, but maybe this change will help\n> make that practical.\n>\n\nIt might. I'm fairly sure there are a lot of repetitive cycles wasted in\nthe TAP tests, quite apart from initdb. 
We've become rather profligate\nin our use of time and resources.\n\n\ncheers\n\n\nadrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 20 Jun 2021 19:28:29 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Reducing the cycle time for CLOBBER_CACHE_ALWAYS buildfarm\n members" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 6/20/21 6:10 PM, Tom Lane wrote:\n>> (3) Since this only works in v14 and up, older branches would\n>> have to fall back to -DCLOBBER_CACHE_ALWAYS. Perhaps we could\n>> improve the buildfarm client script so that buildfarm owners\n>> just configure \"clobber_cache_testing => 1\" and then the script\n>> would do the right branch-dependent thing.\n\n> Maybe. Let's see what you come up with.\n\nHere's a couple of draft-quality patches --- one for initdb, one\nfor the buildfarm --- to implement this idea. These are just\nlightly tested; in particular I've not had the patience to run\nfull BF cycles to see how much is actually saved.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 22 Jun 2021 17:11:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Reducing the cycle time for CLOBBER_CACHE_ALWAYS buildfarm\n members" }, { "msg_contents": "\nOn 6/22/21 5:11 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 6/20/21 6:10 PM, Tom Lane wrote:\n>>> (3) Since this only works in v14 and up, older branches would\n>>> have to fall back to -DCLOBBER_CACHE_ALWAYS. Perhaps we could\n>>> improve the buildfarm client script so that buildfarm owners\n>>> just configure \"clobber_cache_testing => 1\" and then the script\n>>> would do the right branch-dependent thing.\n>> Maybe. Let's see what you come up with.\n> Here's a couple of draft-quality patches --- one for initdb, one\n> for the buildfarm --- to implement this idea. 
These are just\n> lightly tested; in particular I've not had the patience to run\n> full BF cycles to see how much is actually saved.\n>\n> \t\t\t\n\n\n\nLooks OK for the buildfarm patch. I wonder if we just want to run initdb\nonce with --clobber-cache instead of once per locale?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 23 Jun 2021 09:21:02 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Reducing the cycle time for CLOBBER_CACHE_ALWAYS buildfarm\n members" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Looks OK for the buildfarm patch. I wonder if we just want to run initdb\n> once with --clobber-cache instead of once per locale?\n\nI thought about that, but I'm not sure it's appropriate for the buildfarm\nclient to be making that decision. I do not think any of the CCA animals\nrun more than one locale anyway, so it's likely moot.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Jun 2021 09:32:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Reducing the cycle time for CLOBBER_CACHE_ALWAYS buildfarm\n members" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 6/22/21 5:11 PM, Tom Lane wrote:\n>> Here's a couple of draft-quality patches --- one for initdb, one\n>> for the buildfarm --- to implement this idea. These are just\n>> lightly tested; in particular I've not had the patience to run\n>> full BF cycles to see how much is actually saved.\n\n> Looks OK for the buildfarm patch. 
I wonder if we just want to run initdb\n> once with --clobber-cache instead of once per locale?\n\nSo, where do we want to go with these?\n\nI'm inclined to argue that it's okay to sneak the initdb change into\nv14, on the grounds that it's needed to fully exploit the change\nfrom CLOBBER_CACHE_ALWAYS to debug_invalidate_system_caches_always.\nWithout it, there is no way to do CCA testing on the bootstrap process\nexcept by reverting to the old hard-wired way of doing things.\n\nHaving pushed that, we could try out the buildfarm side of the\nchange and verify it's okay.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 01 Jul 2021 11:01:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Reducing the cycle time for CLOBBER_CACHE_ALWAYS buildfarm\n members" }, { "msg_contents": "\nOn 7/1/21 11:01 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 6/22/21 5:11 PM, Tom Lane wrote:\n>>> Here's a couple of draft-quality patches --- one for initdb, one\n>>> for the buildfarm --- to implement this idea. These are just\n>>> lightly tested; in particular I've not had the patience to run\n>>> full BF cycles to see how much is actually saved.\n>> Looks OK for the buildfarm patch. I wonder if we just want to run initdb\n>> once with --clobber-cache instead of once per locale?\n> So, where do we want to go with these?\n>\n> I'm inclined to argue that it's okay to sneak the initdb change into\n> v14, on the grounds that it's needed to fully exploit the change\n> from CLOBBER_CACHE_ALWAYS to debug_invalidate_system_caches_always.\n> Without it, there is no way to do CCA testing on the bootstrap process\n> except by reverting to the old hard-wired way of doing things.\n>\n> Having pushed that, we could try out the buildfarm side of the\n> change and verify it's okay.\n>\n> \t\t\t\n\n\n\nSeems reasonable. 
I don't have a CCA animal any more, but I guess I\ncould set up a test.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 1 Jul 2021 13:03:00 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Reducing the cycle time for CLOBBER_CACHE_ALWAYS buildfarm\n members" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 7/1/21 11:01 AM, Tom Lane wrote:\n>> I'm inclined to argue that it's okay to sneak the initdb change into\n>> v14, on the grounds that it's needed to fully exploit the change\n>> from CLOBBER_CACHE_ALWAYS to debug_invalidate_system_caches_always.\n>> Without it, there is no way to do CCA testing on the bootstrap process\n>> except by reverting to the old hard-wired way of doing things.\n>> \n>> Having pushed that, we could try out the buildfarm side of the\n>> change and verify it's okay.\n\n> Seems reasonable. I don't have a CCA animal any more, but I guess I\n> could set up a test.\n\nI can run a test here --- I'll commandeer sifaka for awhile,\nsince that's the fastest animal I have.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 01 Jul 2021 13:17:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Reducing the cycle time for CLOBBER_CACHE_ALWAYS buildfarm\n members" }, { "msg_contents": "I wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Seems reasonable. I don't have a CCA animal any more, but I guess I\n>> could set up a test.\n\n> I can run a test here --- I'll commandeer sifaka for awhile,\n> since that's the fastest animal I have.\n\nDone, and here's the results:\n\nTraditional way (-DCLOBBER_CACHE_ALWAYS): 32:53:24\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sifaka&dt=2021-07-01%2018%3A06%3A09\n\nNew way: 16:15:43\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sifaka&dt=2021-07-03%2004%3A02%3A16\n\nThat's running sifaka's full test schedule including TAP tests,\nwhich ordinarily takes it a shade under 10 minutes. The savings\non a non-TAP run would be a lot less, of course, thanks to\nfewer initdb invocations.\n\nAlthough I lacked the patience to run a full back-branch cycle\nwith -DCLOBBER_CACHE_ALWAYS, I did verify that the patch\ncorrectly injects that #define when running an old branch.\nSo I think it's ready to go into the buildfarm, modulo any\ncosmetic work you might want to do.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 03 Jul 2021 18:59:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Reducing the cycle time for CLOBBER_CACHE_ALWAYS buildfarm\n members" }, { "msg_contents": "\nOn 7/3/21 6:59 PM, Tom Lane wrote:\n> I wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> Seems reasonable. I don't have a CCA animal any more, but I guess I\n>>> could set up a test.\n>> I can run a test here --- I'll commandeer sifaka for awhile,\n>> since that's the fastest animal I have.\n> Done, and here's the results:\n>\n> Traditional way (-DCLOBBER_CACHE_ALWAYS): 32:53:24\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sifaka&dt=2021-07-01%2018%3A06%3A09\n>\n> New way: 16:15:43\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sifaka&dt=2021-07-03%2004%3A02%3A16\n>\n> That's running sifaka's full test schedule including TAP tests,\n> which ordinarily takes it a shade under 10 minutes. The savings\n> on a non-TAP run would be a lot less, of course, thanks to\n> fewer initdb invocations.\n>\n> Although I lacked the patience to run a full back-branch cycle\n> with -DCLOBBER_CACHE_ALWAYS, I did verify that the patch\n> correctly injects that #define when running an old branch.\n> So I think it's ready to go into the buildfarm, modulo any\n> cosmetic work you might want to do.\n>\n> \t\t\t\n\n\n\n\nYeah, I'm looking at it now. A couple of things: I think we should\nprobably call the setting 'use_clobber_cache_always' since that's what\nit does. And I think we should probably put in a sanity check to make it\nincompatible with any -DCLOBBER_CACHE_* define in CPPFLAGS.\n\n\nThoughts?\n\n\nThere is one item I want to complete before putting out a new client\nrelease - making provision for a change in the name of the default git\nbranch - the aim is that with the new release in place that will be\ncompletely seamless whenever it happens and whatever name is chosen. I\nhope to have that done in a week or so., so the new release would be out\nin about two weeks, if all goes well.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 4 Jul 2021 06:49:13 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Reducing the cycle time for CLOBBER_CACHE_ALWAYS buildfarm\n members" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 7/3/21 6:59 PM, Tom Lane wrote:\n>> So I think it's ready to go into the buildfarm, modulo any\n>> cosmetic work you might want to do.\n\n> Yeah, I'm looking at it now. A couple of things: I think we should\n> probably call the setting 'use_clobber_cache_always' since that's what\n> it does. And I think we should probably put in a sanity check to make it\n> incompatible with any -DCLOBBER_CACHE_* define in CPPFLAGS.\n\n> Thoughts?\n\nNo objections here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 04 Jul 2021 10:54:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Reducing the cycle time for CLOBBER_CACHE_ALWAYS buildfarm\n members" } ]
[ { "msg_contents": "Hi all\r\n\r\nPQtraceSetFlags has been renamed PQsetTraceFlags, but the <indexterm> has not been modified,\r\nso PQtraceSetFlags is still displayed in bookindex.html.\r\n\r\n -<varlistentry id=\"libpq-PQtraceSetFlags\">\r\n - <term><function>PQtraceSetFlags</function><indexterm><primary>PQtraceSetFlags</primary></indexterm></term>\r\n +<varlistentry id=\"libpq-PQsetTraceFlags\">\r\n + <term><function>PQsetTraceFlags</function><indexterm><primary>PQtraceSetFlags</primary></indexterm></term>\r\n\r\nhttps://github.com/postgres/postgres/commit/d0e750c0acaf31f60667b1635311bcef5ab38bbe\r\n\r\nHere is a patch.\r\n\r\nBest Regards!", "msg_date": "Mon, 21 Jun 2021 02:36:19 +0000", "msg_from": "\"zhangjie2@fujitsu.com\" <zhangjie2@fujitsu.com>", "msg_from_op": true, "msg_subject": "[Patch] Rename PQtraceSetFlags to PQsetTraceFlags for bookindex.html" }, { "msg_contents": "On Mon, Jun 21, 2021 at 02:36:19AM +0000, zhangjie2@fujitsu.com wrote:\n> Here is a patch.\n\nPushed.\n\n\n", "msg_date": "Mon, 21 Jun 2021 02:52:17 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Rename PQtraceSetFlags to PQsetTraceFlags for\n bookindex.html" } ]
[ { "msg_contents": "Hi,\n\nI noticed a striking similarity between the collation versions\nreported by Windows and ICU, and found my way to this new system copy\nof ICU (C APIs only) that you can use on recent enough Windows[1].\nNot planning to do anything with that observation myself but it seemed\ninteresting enough to share...\n\n[1] https://docs.microsoft.com/en-us/windows/win32/intl/international-components-for-unicode--icu-\n\n\n", "msg_date": "Mon, 21 Jun 2021 14:43:16 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Windows' copy of ICU" } ]
[ { "msg_contents": "Hello Hackers,\nwhile amending Npgsql to account for the Logical Streaming Replication\nProtocol changes in PostgreSQL 14 I stumbled upon two documentation\ninaccuracies in the Logical Replication Message Formats documentation\n(https://www.postgresql.org/docs/devel/protocol-logicalrep-message-formats.html)\nthat have been introduced (or rather omitted) with the recent changes to\nallow pgoutput to send logical decoding messages\n(https://github.com/postgres/postgres/commit/ac4645c0157fc5fcef0af8ff571512aa284a2cec)\nand to allow logical replication to transfer data in binary format\n(https://github.com/postgres/postgres/commit/9de77b5453130242654ff0b30a551c9c862ed661).\n\n\n 1. The content of the logical decoding message in the 'Message' message\n   is prefixed with a length field (Int32) which isn't documented yet.\n   See\n   https://github.com/postgres/postgres/blob/69a58bfe4ab05567a8fab8bdce7f3095ed06b99c/src/backend/replication/logical/proto.c#L388\n 2. The TupleData may now contain the byte 'b' as indicator for binary\n   data which isn't documented yet. See\n   https://github.com/postgres/postgres/blob/69a58bfe4ab05567a8fab8bdce7f3095ed06b99c/src/include/replication/logicalproto.h#L83\n   and\n   https://github.com/postgres/postgres/blob/69a58bfe4ab05567a8fab8bdce7f3095ed06b99c/src/backend/replication/logical/proto.c#L558.\n\nThe attached documentation patch fixes both.\n\nBest regards,\n\nBrar", "msg_date": "Mon, 21 Jun 2021 08:56:13 +0200", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": true, "msg_subject": "Doc patch for Logical Replication Message Formats (PG14)" }, { "msg_contents": "On Mon, Jun 21, 2021 at 12:26 PM Brar Piening <brar@gmx.de> wrote:\n>\n> Hello Hackers,\n> while amending Npgsql to account for the Logical Streaming Replication\n> Protocol changes in PostgreSQL 14 I stumbled upon two documentation\n> inaccuracies in the Logical Replication Message Formats documentation\n> (https://www.postgresql.org/docs/devel/protocol-logicalrep-message-formats.html)\n> that have been introduced (or rather omitted) with the recent changes to\n> allow pgoutput to send logical decoding messages\n> (https://github.com/postgres/postgres/commit/ac4645c0157fc5fcef0af8ff571512aa284a2cec)\n> and to allow logical replication to transfer data in binary format\n> (https://github.com/postgres/postgres/commit/9de77b5453130242654ff0b30a551c9c862ed661).\n>\n>\n> 1. The content of the logical decoding message in the 'Message' message\n> is prefixed with a length field (Int32) which isn't documented yet.\n> See\n> https://github.com/postgres/postgres/blob/69a58bfe4ab05567a8fab8bdce7f3095ed06b99c/src/backend/replication/logical/proto.c#L388\n> 2. The TupleData may now contain the byte 'b' as indicator for binary\n> data which isn't documented yet. See\n> https://github.com/postgres/postgres/blob/69a58bfe4ab05567a8fab8bdce7f3095ed06b99c/src/include/replication/logicalproto.h#L83\n> and\n> https://github.com/postgres/postgres/blob/69a58bfe4ab05567a8fab8bdce7f3095ed06b99c/src/backend/replication/logical/proto.c#L558.\n>\n> The attached documentation patch fixes both.\n>\n\nYeah, I think these should be fixed and your patch looks good to me in\nthat regard.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 21 Jun 2021 14:48:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Doc patch for Logical Replication Message Formats (PG14)" }, { "msg_contents": "Amit Kapila wrote:\n> On Mon, Jun 21, 2021 at 12:26 PM Brar Piening <brar@gmx.de> wrote:\n>> Hello Hackers,\n>> while amending Npgsql to account for the Logical Streaming Replication\n>> Protocol changes in PostgreSQL 14 I stumbled upon two documentation\n>> inaccuracies in the Logical Replication Message Formats documentation\n>> (https://www.postgresql.org/docs/devel/protocol-logicalrep-message-formats.html)\n>> that have been introduced (or rather omitted) with the recent changes to\n>> allow pgoutput to send logical decoding messages\n>> (https://github.com/postgres/postgres/commit/ac4645c0157fc5fcef0af8ff571512aa284a2cec)\n>> and to allow logical replication to transfer data in binary format\n>> (https://github.com/postgres/postgres/commit/9de77b5453130242654ff0b30a551c9c862ed661).\n>>\n>>\n>> 1. The content of the logical decoding message in the 'Message' message\n>> is prefixed with a length field (Int32) which isn't documented yet.\n>> See\n>> https://github.com/postgres/postgres/blob/69a58bfe4ab05567a8fab8bdce7f3095ed06b99c/src/backend/replication/logical/proto.c#L388\n>> 2. The TupleData may now contain the byte 'b' as indicator for binary\n>> data which isn't documented yet. See\n>> https://github.com/postgres/postgres/blob/69a58bfe4ab05567a8fab8bdce7f3095ed06b99c/src/include/replication/logicalproto.h#L83\n>> and\n>> https://github.com/postgres/postgres/blob/69a58bfe4ab05567a8fab8bdce7f3095ed06b99c/src/backend/replication/logical/proto.c#L558.\n>>\n>> The attached documentation patch fixes both.\n>>\n> Yeah, I think these should be fixed and your patch looks good to me in\n> that regard.\n>\nAfter looking at the docs once again I have another minor amendment (new\npatch attached).", "msg_date": "Mon, 21 Jun 2021 12:43:26 +0200", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Doc patch for Logical Replication Message Formats (PG14)" }, { "msg_contents": "On Mon, Jun 21, 2021 at 4:13 PM Brar Piening <brar@gmx.de> wrote:\n>\n> Amit Kapila wrote:\n> >\n> After looking at the docs once again I have another minor amendment (new\n> patch attached).\n>\n\n+ The value of the column, eiter in binary or in text format.\n\nTypo. /eiter/either\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 21 Jun 2021 16:38:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Doc patch for Logical Replication Message Formats (PG14)" }, { "msg_contents": "Amit Kapila schrieb:\n> On Mon, Jun 21, 2021 at 4:13 PM Brar Piening <brar@gmx.de> wrote:\n>> Amit Kapila wrote:\n>> After looking at the docs once again I have another minor amendment (new\n>> patch attached).\n>>\n> + The value of the column, eiter in binary or in text format.\n>\n> Typo. /eiter/either\n>\nFixed - thanks!", "msg_date": "Mon, 21 Jun 2021 13:11:27 +0200", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Doc patch for Logical Replication Message Formats (PG14)" }, { "msg_contents": "On Mon, Jun 21, 2021 at 4:41 PM Brar Piening <brar@gmx.de> wrote:\n>\n> Amit Kapila schrieb:\n> > On Mon, Jun 21, 2021 at 4:13 PM Brar Piening <brar@gmx.de> wrote:\n> >> Amit Kapila wrote:\n> >> After looking at the docs once again I have another minor amendment (new\n> >> patch attached).\n> >>\n> > + The value of the column, eiter in binary or in text format.\n> >\n> > Typo. /eiter/either\n> >\n> Fixed - thanks!\n>\n\nThanks for the report and patch. Pushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 24 Jun 2021 15:04:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Doc patch for Logical Replication Message Formats (PG14)" } ]
[ { "msg_contents": "This patch changes places like this\n\nDECLARE_UNIQUE_INDEX_PKEY(pg_aggregate_fnoid_index, 2650, on \npg_aggregate using btree(aggfnoid oid_ops));\n#define AggregateFnoidIndexId 2650\n\nto this\n\nDECLARE_UNIQUE_INDEX_PKEY(pg_aggregate_fnoid_index, 2650, \nAggregateFnoidIndexId, on pg_aggregate using btree(aggfnoid oid_ops));\n\nand makes genbki.pl generate the #define's. This makes the handling of \ncatalog index OIDs consistent with the handling of catalog tables. \nCompare with:\n\nCATALOG(pg_aggregate,2600,AggregateRelationId)", "msg_date": "Mon, 21 Jun 2021 09:23:09 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Add index OID macro argument to DECLARE_INDEX" }, { "msg_contents": "On Mon, Jun 21, 2021 at 3:23 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n>\n>\n> This patch changes places like this\n>\n> DECLARE_UNIQUE_INDEX_PKEY(pg_aggregate_fnoid_index, 2650, on\n> pg_aggregate using btree(aggfnoid oid_ops));\n> #define AggregateFnoidIndexId 2650\n>\n> to this\n>\n> DECLARE_UNIQUE_INDEX_PKEY(pg_aggregate_fnoid_index, 2650,\n> AggregateFnoidIndexId, on pg_aggregate using btree(aggfnoid oid_ops));\n\n+1, and the patch looks good to me.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n", "msg_date": "Mon, 21 Jun 2021 07:53:58 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Add index OID macro argument to DECLARE_INDEX" }, { "msg_contents": "On 21.06.21 13:53, John Naylor wrote:\n> > This patch changes places like this\n> >\n> > DECLARE_UNIQUE_INDEX_PKEY(pg_aggregate_fnoid_index, 2650, on\n> > pg_aggregate using btree(aggfnoid oid_ops));\n> > #define AggregateFnoidIndexId 2650\n> >\n> > to this\n> >\n> > DECLARE_UNIQUE_INDEX_PKEY(pg_aggregate_fnoid_index, 2650,\n> > AggregateFnoidIndexId, on pg_aggregate using btree(aggfnoid oid_ops));\n> \n> +1, and the patch looks good to me.\n\ncommitted, thanks\n\n\n", "msg_date": "Tue, 29 Jun 2021 08:17:11 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Add index OID macro argument to DECLARE_INDEX" } ]
[ { "msg_contents": "Hello,\n\nWhile discussing auto analyze on partitioned tables, we recognized that\nauto analyze should run on partitioned tables when ATTACH, DETACH\nand DROP commands are executed [1]. Partitioned tables are checked\nwhether they need auto analyze according to their\nchanges_since_analyze (total number of inserts/updates/deletes on\npartitions), but above DDL operations are not counted for now.\n\nTo support ATTACH, DETACH and DROP commands, I proposed\nthe idea as follows:\n* I made new configuration parameters,\n autovacuum_analyze_attach_partition,\n autovacuum_analyze_detach_partition and\n autovacuum_analyze_drop_partition to enable/disable this feature.\n* When a partition is attached/detached/dropped, pgstat_report_anl_ancestors()\n is called and checks the above configurations. If ture, the number of\n livetuples of the partition is counted in its ancestor's changed tuples\n in pgstat_recv_anl_ancestors.\n\nAttach the v1 patch. What do you think?\n\n[1] https://www.postgresql.org/message-id/ce5c3f04-fc17-7139-fffc-037f2c981bec%40enterprisedb.com\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center", "msg_date": "Mon, 21 Jun 2021 17:21:25 +0900", "msg_from": "yuzuko <yuzukohosoya@gmail.com>", "msg_from_op": true, "msg_subject": "Autovacuum (analyze) on partitioned tables for ATTACH/DETACH/DROP\n commands" }, { "msg_contents": "> On 21 Jun 2021, at 10:21, yuzuko <yuzukohosoya@gmail.com> wrote:\n\n> Attach the v1 patch. What do you think?\n\nThis patch no longer applies to HEAD, can you please submit a rebased version\nfor the commitfest?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 1 Sep 2021 11:11:08 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Autovacuum (analyze) on partitioned tables for ATTACH/DETACH/DROP\n commands" }, { "msg_contents": "On Wed, Sep 01, 2021 at 11:11:08AM +0200, Daniel Gustafsson wrote:\n> This patch no longer applies to HEAD, can you please submit a rebased version\n> for the commitfest?\n\nFour weeks later, nothing has happened. So I have marked the patch as\nRwF.\n--\nMichael", "msg_date": "Fri, 1 Oct 2021 15:50:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Autovacuum (analyze) on partitioned tables for\n ATTACH/DETACH/DROP commands" } ]
[ { "msg_contents": "Hi,\n\nSequence MINVALUE/MAXVALUE values are read into \"int64\" variables and\nthen range-checked according to the sequence data-type.\nHowever, for a BIGINT sequence, checking whether these are <\nPG_INT64_MIN or > PG_INT64_MAX always evaluates to false, as an int64\ncan't hold such values.\nI've attached a patch to remove those useless checks.\nThe MINVALUE/MAXVALUE values are anyway int64 range-checked prior to\nthis, as part of conversion to int64.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Mon, 21 Jun 2021 20:10:02 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Remove useless int64 range checks on BIGINT sequence\n MINVALUE/MAXVALUE values" }, { "msg_contents": "On Mon, 21 Jun 2021 at 22:10, Greg Nancarrow <gregn4422@gmail.com> wrote:\n> Sequence MINVALUE/MAXVALUE values are read into \"int64\" variables and\n> then range-checked according to the sequence data-type.\n> However, for a BIGINT sequence, checking whether these are <\n> PG_INT64_MIN or > PG_INT64_MAX always evaluates to false, as an int64\n> can't hold such values.\n\nIt might be worth putting in a comment to mention that the check is\nnot needed. Just in case someone looks again one day and thinks the\nchecks are missing.\n\nProbably best to put this in the July commitfest so it does not get missed.\n\nDavid\n\n\n", "msg_date": "Mon, 21 Jun 2021 22:32:10 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove useless int64 range checks on BIGINT sequence\n MINVALUE/MAXVALUE values" }, { "msg_contents": "On Mon, Jun 21, 2021 at 8:32 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> It might be worth putting in a comment to mention that the check is\n> not needed. Just in case someone looks again one day and thinks the\n> checks are missing.\n>\n> Probably best to put this in the July commitfest so it does not get missed.\n\nUpdated the patch, and will add it to the Commitfest, thanks.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Mon, 21 Jun 2021 21:32:24 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove useless int64 range checks on BIGINT sequence\n MINVALUE/MAXVALUE values" }, { "msg_contents": "Those code comments look good.", "msg_date": "Tue, 22 Jun 2021 19:26:47 +0000", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove useless int64 range checks on BIGINT sequence\n MINVALUE/MAXVALUE values" }, { "msg_contents": "On 21.06.21 13:32, Greg Nancarrow wrote:\n> On Mon, Jun 21, 2021 at 8:32 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>>\n>> It might be worth putting in a comment to mention that the check is\n>> not needed. Just in case someone looks again one day and thinks the\n>> checks are missing.\n>>\n>> Probably best to put this in the July commitfest so it does not get missed.\n> \n> Updated the patch, and will add it to the Commitfest, thanks.\n\nI don't think this is a good change. It replaces one perfectly solid, \nharmless, and readable line of code with six lines of comment explaining \nwhy the code isn't necessary (times two). And the code is now less \nrobust against changes elsewhere. To maintain this robustness, you'd \nhave to add assertions that prove that what the comment is saying is \nactually true, thus adding even more code.\n\nI think we should leave it as is.\n\n\n", "msg_date": "Sat, 3 Jul 2021 12:44:15 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Remove useless int64 range checks on BIGINT sequence\n MINVALUE/MAXVALUE values" }, { "msg_contents": "On Sat, 3 Jul 2021 at 22:44, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> I don't think this is a good change.\n\n> I think we should leave it as is.\n\nI'm inclined to agree.\n\nWhen I mentioned adding a comment I'd not imagined it would be quite\nso verbose. Plus, I struggle to imagine there's any compiler out there\nthat someone would use that wouldn't just remove the check anyway. I\nhad a quick click around on https://godbolt.org/z/PnKeq5bsT and didn't\nmanage to find any compilers that didn't remove the check.\n\nDavid\n\n\n", "msg_date": "Sun, 4 Jul 2021 20:53:06 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove useless int64 range checks on BIGINT sequence\n MINVALUE/MAXVALUE values" }, { "msg_contents": "On Sun, 4 Jul 2021 at 20:53, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sat, 3 Jul 2021 at 22:44, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> > I don't think this is a good change.\n>\n> > I think we should leave it as is.\n>\n> I'm inclined to agree.\n\nDoes anyone object to marking this patch as rejected in the CF app?\n\nDavid\n\n\n", "msg_date": "Tue, 6 Jul 2021 22:43:09 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove useless int64 range checks on BIGINT sequence\n MINVALUE/MAXVALUE values" }, { "msg_contents": "On Tue, Jul 6, 2021 at 8:43 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sun, 4 Jul 2021 at 20:53, David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > On Sat, 3 Jul 2021 at 22:44, Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> > > I don't think this is a good change.\n> >\n> > > I think we should leave it as is.\n> >\n> > I'm inclined to agree.\n>\n> Does anyone object to marking this patch as rejected in the CF app?\n>\n\nI think if you're going to reject this patch, a brief comment should\nbe added to that code to justify why that existing superfluous check\nis worthwhile.\n(After all, similar checks are not being done elsewhere in the\nPostgres code, AFAICS. e.g. \"int\" variables are not being checked to\nsee whether they hold values greater than MAXINT).\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Tue, 6 Jul 2021 22:06:04 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove useless int64 range checks on BIGINT sequence\n MINVALUE/MAXVALUE values" }, { "msg_contents": "On Wed, 7 Jul 2021 at 00:06, Greg Nancarrow <gregn4422@gmail.com> wrote:\n> I think if you're going to reject this patch, a brief comment should\n> be added to that code to justify why that existing superfluous check\n> is worthwhile.\n\nIt seems strange to add a comment to explain why it's there. If we're\ngoing to the trouble of doing that, then we should just remove it and\nadd a very small comment to mention why INT8 sequences don't need to\nbe checked.\n\nPatch attached\n\nDavid", "msg_date": "Wed, 7 Jul 2021 20:37:42 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove useless int64 range checks on BIGINT sequence\n MINVALUE/MAXVALUE values" }, { "msg_contents": "On Wed, 7 Jul 2021 at 20:37, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Wed, 7 Jul 2021 at 00:06, Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > I think if you're going to reject this patch, a brief comment should\n> > be added to that code to justify why that existing superfluous check\n> > is worthwhile.\n>\n> It seems strange to add a comment to explain why it's there. If we're\n> going to the trouble of doing that, then we should just remove it and\n> add a very small comment to mention why INT8 sequences don't need to\n> be checked.\n\nAny thoughts on this, Greg?\n\nDavid\n\n\n", "msg_date": "Mon, 12 Jul 2021 16:26:16 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove useless int64 range checks on BIGINT sequence\n MINVALUE/MAXVALUE values" }, { "msg_contents": "On Mon, Jul 12, 2021 at 2:26 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> > It seems strange to add a comment to explain why it's there. If we're\n> > going to the trouble of doing that, then we should just remove it and\n> > add a very small comment to mention why INT8 sequences don't need to\n> > be checked.\n>\n> Any thoughts on this, Greg?\n>\n\nThe patch LGTM (it's the same as my original patch but with short comments).\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Mon, 12 Jul 2021 14:48:06 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove useless int64 range checks on BIGINT sequence\n MINVALUE/MAXVALUE values" }, { "msg_contents": "On Mon, 12 Jul 2021 at 16:48, Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Mon, Jul 12, 2021 at 2:26 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > > It seems strange to add a comment to explain why it's there. If we're\n> > > going to the trouble of doing that, then we should just remove it and\n> > > add a very small comment to mention why INT8 sequences don't need to\n> > > be checked.\n> >\n> > Any thoughts on this, Greg?\n> >\n>\n> The patch LGTM (it's the same as my original patch but with short comments).\n\nYeah, it's your patch with the comment reduced down to 2 lines. This\nwas to try and address Peter's concern that the comment is too large.\nThis seemed to put him off the patch. I also disagreed that it made\nsense to remove 2 fairly harmless lines of code to replace them with\n12 lines of comments.\n\nWhat I was trying to get to here was something that was more\nreasonable that might make sense to commit. I'm just not certain\nwhere Peter stands on this now that the latest patch is a net zero\nwhen it comes to adding lines. Peter?\n\nDavid\n\n\n", "msg_date": "Mon, 12 Jul 2021 20:44:10 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove useless int64 range checks on BIGINT sequence\n MINVALUE/MAXVALUE values" }, { "msg_contents": "On 12.07.21 10:44, David Rowley wrote:\n> What I was trying to get to here was something that was more\n> reasonable that might make sense to commit. I'm just not certain\n> where Peter stands on this now that the latest patch is a net zero\n> when it comes to adding lines. Peter?\n\nYour version looks better to me than the original version, but I'm still \n-0.05 on changing this at all.\n\n\n\n", "msg_date": "Mon, 12 Jul 2021 20:50:23 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Remove useless int64 range checks on BIGINT sequence\n MINVALUE/MAXVALUE values" }, { "msg_contents": "On Tue, 13 Jul 2021 at 06:50, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> Your version looks better to me than the original version, but I'm still\n> -0.05 on changing this at all.\n\nI was more +0.4. It does not seem worth the trouble of too much\ndiscussion so, just to try and bring this to a close, instead of\nadding a comment to explain why we needlessly check the range of the\nINT8 sequence, I just pushed the patch that removes it and adds the 1\nline comment to mention why it's not needed.\n\nDavid\n\n\n", "msg_date": "Tue, 13 Jul 2021 14:00:49 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove useless int64 range checks on BIGINT sequence\n MINVALUE/MAXVALUE values" } ]
[ { "msg_contents": "In [1], Yaoguang reported an Assert failure in expand_grouping_sets.\nSince beta2 deadline is looming, I pushed a quick fix for that.\n\nAs mentioned over on bugs, only 1 test triggers that code and because\nthe List of IntLists always had an empty list as the first element due\nto the code just above sorting the top-level List by the number of\nelements each of the contained IntLists, the NIL was always at the\nstart of the top-level List.\n\nIt wasn't too hard to modify the test to change that.\n\nI wonder if the testing for the feature is just a bit too light.\n\nWould it maybe be worth adding a GROUP BY DISTINCT with GROUPING SETS test?\n\nAny thoughts?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/17067-665d50fa321f79e0@postgresql.org\n\n\n", "msg_date": "Mon, 21 Jun 2021 23:19:45 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Is the testing a bit too light on GROUP BY DISTINCT?" } ]
[ { "msg_contents": "Seems like we can skip the uniqueness check if indexUnchanged, which\nwill speed up non-HOT UPDATEs on tables with B-Trees.\n\nPasses tests.\n\nComments?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Mon, 21 Jun 2021 13:31:07 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Using indexUnchanged with nbtree" }, { "msg_contents": "On Mon, Jun 21, 2021 at 5:31 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> Seems like we can skip the uniqueness check if indexUnchanged, which\n> will speed up non-HOT UPDATEs on tables with B-Trees.\n\nI thought about doing this myself. Never got as far as thinking about\nthe correctness implications in detail.\n\nOne thing that I'm concerned about is LP_DEAD bit setting inside\n_bt_check_unique(), which isn't going to take place when the\noptimization from the patch is applied. That definitely used to be way\nmore important than kill_prior_tuple-based LP_DEAD bit setting, which\nhas real problems with non-HOT updates [1]. _bt_check_unique() clearly\nmakes up for that in the case of unique indexes, at least for many\nyears.\n\nOn the other hand my thinking here might well be outdated, because of\ncourse bottom-up index deletion exists now. You're using\nindexUnchanged here, which is used to trigger bottom-up index deletion\npasses. Maybe that's enough for it to not matter now, meaning that the\nLP_DEAD bit stuff is not a real blocker here. Offhand I'm quite\nunsure.\n\n[1] https://www.postgresql.org/message-id/flat/CAH2-Wz%3DSfAKVMv1x9Jh19EJ8am8TZn9f-yECipS9HrrRqSswnA%40mail.gmail.com#b20ead9675225f12b6a80e53e19eed9d\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 23 Jun 2021 09:17:30 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Using indexUnchanged with nbtree" }, { "msg_contents": "On Wed, Jun 23, 2021 at 5:17 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Jun 21, 2021 at 5:31 AM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> > Seems like we can skip the uniqueness check if indexUnchanged, which\n> > will speed up non-HOT UPDATEs on tables with B-Trees.\n>\n> I thought about doing this myself. Never got as far as thinking about\n> the correctness implications in detail.\n>\n> One thing that I'm concerned about is LP_DEAD bit setting inside\n> _bt_check_unique(), which isn't going to take place when the\n> optimization from the patch is applied. That definitely used to be way\n> more important than kill_prior_tuple-based LP_DEAD bit setting, which\n> has real problems with non-HOT updates [1]. _bt_check_unique() clearly\n> makes up for that in the case of unique indexes, at least for many\n> years.\n>\n> On the other hand my thinking here might well be outdated, because of\n> course bottom-up index deletion exists now. You're using\n> indexUnchanged here, which is used to trigger bottom-up index deletion\n> passes. Maybe that's enough for it to not matter now, meaning that the\n> LP_DEAD bit stuff is not a real blocker here. Offhand I'm quite\n> unsure.\n\nYou're right that skipping the check might also skip killing a prior\nrow version, but it doesn't prevent later scans from killing them, so\nthere is no correctness aspect to that.\n\nIn the case of a non-HOT UPDATE the backend will see the index entry\nfor the old row version and then check it, pointlessly. Since that has\njust been modified, that won't ever be killed, so skipping the check\nmakes sense in those cases.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 23 Jun 2021 17:31:02 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Using indexUnchanged with nbtree" }, { "msg_contents": "On Wed, Jun 23, 2021 at 9:31 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> You're right that skipping the check might also skip killing a prior\n> row version, but it doesn't prevent later scans from killing them, so\n> there is no correctness aspect to that.\n\nProbably not, no. I'll assume for now that there is no correctness issue.\n\n> In the case of a non-HOT UPDATE the backend will see the index entry\n> for the old row version and then check it, pointlessly. Since that has\n> just been modified, that won't ever be killed, so skipping the check\n> makes sense in those cases.\n\nI agree that the check itself is pointless here. But that in itself\ndoesn't make the call to _bt_check_unique() useless. It might still\nmanage to set LP_DEAD bits when nothing else will.\n\nI realize that the original reason for setting LP_DEAD bits in\n_bt_check_unique() was something like \"well, might as well do this\nhere too\". But I believe that LP_DEAD bit setting inside\n_bt_check_unique() is nevertheless often more valuable than the better\nknown kill_prior_tuple mechanism. I have seen clear and convincing\nexamples of this in the past. Might not really be true anymore.\n\nAnother thing is _bt_findinsertloc() and\n_bt_delete_or_dedup_one_page(), which themselves use the\ncheckingunique flag that you're changing the value of. There could\nalso be unintended side-effects there. 
OTOH they also use\nindexUnchanged too, so even if there is a problem it might be fixable.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 23 Jun 2021 09:42:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Using indexUnchanged with nbtree" }, { "msg_contents": "On Wed, Jun 23, 2021 at 5:42 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Jun 23, 2021 at 9:31 AM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> > You're right that skipping the check might also skip killing a prior\n> > row version, but it doesn't prevent later scans from killing them, so\n> > there is no correctness aspect to that.\n>\n> Probably not, no. I'll assume for now that there is no correctness issue.\n>\n> > In the case of a non-HOT UPDATE the backend will see the index entry\n> > for the old row version and then check it, pointlessly. Since that has\n> > just been modified, that won't ever be killed, so skipping the check\n> > makes sense in those cases.\n>\n> I agree that the check itself is pointless here. But that in itself\n> doesn't make the call to _bt_check_unique() useless. It might still\n> manage to set LP_DEAD bits when nothing else will.\n\nThis case occurs when we are doing non-HOT UPDATEs. That command is\nsearched, so the scan will already have touched the heap and almost\ncertainly the index also, setting any LP_DEAD bits already in the most\nfrequent case.\n\nSo the check isn't going to do anything useful in the vast majority of\ncases, which is why its OK to remove it.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 24 Jun 2021 13:39:46 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Using indexUnchanged with nbtree" }, { "msg_contents": "On Thu, Jun 24, 2021 at 5:39 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> This case occurs when we are doing non-HOT UPDATEs. 
That command is\n> searched, so the scan will already have touched the heap and almost\n> certainly the index also, setting any LP_DEAD bits already in the most\n> frequent case.\n\nBut it won't, because the restriction that I described with non-HOT\nupdates in kill_prior_tuple in that old thread I linked to. This has\nbeen the case since commit 2ed5b87f96d from Postgres 9.5. This\nprobably should probably be fixed, somehow, but for now I don't think\nyou can assume anything about LP_DEAD bits being set -- they're\nclearly not set with a non-HOT update when the UPDATE's ModifyTable\nnode is fed by a scan of the same index (unless we reach\n_bt_check_unique() because it's a unique index).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 24 Jun 2021 18:33:59 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Using indexUnchanged with nbtree" }, { "msg_contents": "On Fri, Jun 25, 2021 at 2:34 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Jun 24, 2021 at 5:39 AM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> > This case occurs when we are doing non-HOT UPDATEs. That command is\n> > searched, so the scan will already have touched the heap and almost\n> > certainly the index also, setting any LP_DEAD bits already in the most\n> > frequent case.\n>\n> But it won't, because the restriction that I described with non-HOT\n> updates in kill_prior_tuple in that old thread I linked to. This has\n> been the case since commit 2ed5b87f96d from Postgres 9.5. 
This\n> probably should probably be fixed, somehow, but for now I don't think\n> you can assume anything about LP_DEAD bits being set -- they're\n> clearly not set with a non-HOT update when the UPDATE's ModifyTable\n> node is fed by a scan of the same index (unless we reach\n> _bt_check_unique() because it's a unique index).\n\nSeems a little bizarre to have _bt_check_unique() call back into the\nheap block we literally just unpinned.\nThis is another case of the UPDATE scan and later heap/index\ninsertions not working together very well.\nThis makes this case even harder to solve:\nhttps://www.postgresql.org/message-id/CA%2BU5nMKzsjwcpSoqLsfqYQRwW6udPtgBdqXz34fUwaVfgXKWhA%40mail.gmail.com\n\nIf an UPDATE interferes with its own ability to kill_prior_tuple(),\nthen we should fix it, not allow pointless work to be performed\nsomewhere else instead just because it has some beneficial side\neffect.\n\nIf an UPDATE scans via a index and remembers the block in\nso->currPos.currPage then we could use that to optimize the\nre-insertion by starting the insertion scan at that block (since we\nknow the live unique key is either there or somewhere to the right).\nBy connecting those together, we would then be able to know that the\nchange in LSN was caused by ourself and allow the items to be killed\ncorrectly at that time.\n\nDo you think there is benefit in having PK UPDATEs as a special plan\nthat links these things more closely together?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 25 Jun 2021 09:42:56 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Using indexUnchanged with nbtree" }, { "msg_contents": "On Fri, Jun 25, 2021 at 1:43 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> Seems a little bizarre to have _bt_check_unique() call back into the\n> heap block we literally just unpinned.\n\nI suppose it is a little bizarre.\n\n> This is another case of the UPDATE scan and 
later heap/index\n> insertions not working together very well.\n> This makes this case even harder to solve:\n> https://www.postgresql.org/message-id/CA%2BU5nMKzsjwcpSoqLsfqYQRwW6udPtgBdqXz34fUwaVfgXKWhA%40mail.gmail.com\n\nI wasn't aware of that thread, but I suspected that something like\nthat was going on in some cases myself.\n\n> If an UPDATE interferes with its own ability to kill_prior_tuple(),\n> then we should fix it, not allow pointless work to be performed\n> somewhere else instead just because it has some beneficial side\n> effect.\n\nDefinitely true. But the fact is that this is where we are today, and\nthat complicates this business with bypassing _bt_check_unique().\n\n> If an UPDATE scans via a index and remembers the block in\n> so->currPos.currPage then we could use that to optimize the\n> re-insertion by starting the insertion scan at that block (since we\n> know the live unique key is either there or somewhere to the right).\n> By connecting those together, we would then be able to know that the\n> change in LSN was caused by ourself and allow the items to be killed\n> correctly at that time.\n>\n> Do you think there is benefit in having PK UPDATEs as a special plan\n> that links these things more closely together?\n\nI think that it might be worth hinting to the index scan that it is\nfeeding a ModifyTable node, and that it should not drop its pin per\nthe optimization added to avoid blocking VACUUM (in commit\n2ed5b87f96d). We can just not do that if for whatever reason we don't\nthink it's worth it - the really important cases for that optimization\ninvolve cursors, things like that.\n\nIt's not like the code that deals with this (that notices LSN change)\ncannot just recheck by going to the heap. 
The chances of it really\nbeing VACUUM are generally extremely low.\n\nOTOH I wonder if the whole idea of holding a pin on a leaf page to\nblock VACUUM is one that should be removed entirely.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 25 Jun 2021 08:43:55 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Using indexUnchanged with nbtree" }, { "msg_contents": "On Fri, Jun 25, 2021 at 4:44 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Jun 25, 2021 at 1:43 AM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> > Seems a little bizarre to have _bt_check_unique() call back into the\n> > heap block we literally just unpinned.\n>\n> I suppose it is a little bizarre.\n>\n> > This is another case of the UPDATE scan and later heap/index\n> > insertions not working together very well.\n> > This makes this case even harder to solve:\n> > https://www.postgresql.org/message-id/CA%2BU5nMKzsjwcpSoqLsfqYQRwW6udPtgBdqXz34fUwaVfgXKWhA%40mail.gmail.com\n>\n> I wasn't aware of that thread, but I suspected that something like\n> that was going on in some cases myself.\n>\n> > If an UPDATE interferes with its own ability to kill_prior_tuple(),\n> > then we should fix it, not allow pointless work to be performed\n> > somewhere else instead just because it has some beneficial side\n> > effect.\n>\n> Definitely true. 
But the fact is that this is where we are today, and\n> that complicates this business with bypassing _bt_check_unique().\n>\n> > If an UPDATE scans via a index and remembers the block in\n> > so->currPos.currPage then we could use that to optimize the\n> > re-insertion by starting the insertion scan at that block (since we\n> > know the live unique key is either there or somewhere to the right).\n> > By connecting those together, we would then be able to know that the\n> > change in LSN was caused by ourself and allow the items to be killed\n> > correctly at that time.\n> >\n> > Do you think there is benefit in having PK UPDATEs as a special plan\n> > that links these things more closely together?\n>\n> I think that it might be worth hinting to the index scan that it is\n> feeding a ModifyTable node, and that it should not drop its pin per\n> the optimization added to avoid blocking VACUUM (in commit\n> 2ed5b87f96d). We can just not do that if for whatever reason we don't\n> think it's worth it - the really important cases for that optimization\n> involve cursors, things like that.\n>\n> It's not like the code that deals with this (that notices LSN change)\n> cannot just recheck by going to the heap. The chances of it really\n> being VACUUM are generally extremely low.\n>\n> OTOH I wonder if the whole idea of holding a pin on a leaf page to\n> block VACUUM is one that should be removed entirely.\n\nDefinitely some good ideas here.\n\nI'm out of time to do anything for this CF, so I've moved this back to later CF.\n\nI'm planning to work on this more, but I won't try to fold in all of\nyour ideas above. 
Not cos they are bad ones, just there is enough room\nfor 2-4 related patches here.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 1 Jul 2021 16:23:02 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Using indexUnchanged with nbtree" }, { "msg_contents": "On Thu, Jul 1, 2021 at 8:23 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> Definitely some good ideas here.\n\nI have been meaning to come up with some kind of solution to the\nproblem of \"self-blocking\" LP_DEAD bit setting within the\nkill_prior_tuple mechanism. It's hard to argue against that.\n\n> I'm out of time to do anything for this CF, so I've moved this back to later CF.\n>\n> I'm planning to work on this more, but I won't try to fold in all of\n> your ideas above. Not cos they are bad ones, just there is enough room\n> for 2-4 related patches here.\n\nI'm a little concerned about relying on the indexUnchanged flag like\nthis. It is currently just supposed to be a hint, but your proposal\nmakes it truly critical. Currently the consequences are no worse than\nthe risk that we'll maybe waste some cycles on the occasional useless\nbottom-up index deletion pass. With your patch it absolutely cannot be\nfalsely set (though it should still be okay if it is falsely unset).\n\nOf course it should be correct (with or without this new\noptimization), but the difference still matters. And so I think that\nthere ought to be a clear benefit to users from the new optimization,\nthat justifies accepting the new risk. Some kind of benchmark showing\nan improvement in latency and/or throughput. 
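Just to sketch the shape of what I mean (Python, all numbers and names
invented here -- the real thing would be pgbench latency/throughput against
a patched server, not a toy cost model):

```python
# Toy cost model: probes per insert with and without the uniqueness check.
# Invented for illustration only; it measures nothing about real nbtree.

def insert_cost(dups_on_page, skip_unique_check):
    descent = 1                       # walk down to the leaf page
    check = 0 if skip_unique_check else dups_on_page
    return descent + check            # index/heap probes per insert

workload = [3] * 1000                 # 1000 non-HOT updates, 3 versions each
with_check = sum(insert_cost(d, False) for d in workload)
without = sum(insert_cost(d, True) for d in workload)
print(with_check, without)            # 4000 vs 1000 probes
```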
Something like that.\nDoesn't have to be a huge improvement.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 1 Jul 2021 09:22:38 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Using indexUnchanged with nbtree" }, { "msg_contents": "On Thu, Jul 01, 2021 at 09:22:38AM -0700, Peter Geoghegan wrote:\n> On Thu, Jul 1, 2021 at 8:23 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> > Definitely some good ideas here.\n> \n> I have been meaning to come up with some kind of solution to the\n> problem of \"self-blocking\" LP_DEAD bit setting within the\n> kill_prior_tuple mechanism. It's hard to argue against that.\n> \n> > I'm out of time to do anything for this CF, so I've moved this back to later CF.\n> >\n> > I'm planning to work on this more, but I won't try to fold in all of\n> > your ideas above. Not cos they are bad ones, just there is enough room\n> > for 2-4 related patches here.\n> \n> I'm a little concerned about relying on the indexUnchanged flag like\n> this. It is currently just supposed to be a hint, but your proposal\n> makes it truly critical. Currently the consequences are no worse than\n> the risk that we'll maybe waste some cycles on the occasional useless\n> bottom-up index deletion pass. With your patch it absolutely cannot be\n> falsely set (though it should still be okay if it is falsely unset).\n> \n> Of course it should be correct (with or without this new\n> optimization), but the difference still matters. And so I think that\n> there ought to be a clear benefit to users from the new optimization,\n> that justifies accepting the new risk. Some kind of benchmark showing\n> an improvement in latency and/or throughput. Something like that.\n> Doesn't have to be a huge improvement.\n> \n\nHi Simon,\n\nThis has been stalled since July, and based on Peter's comment i feel we\nshould mark this as RwF. 
Which i'm doing now.\n\nPlease feel free to resubmit for Next Commitfest.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Fri, 1 Oct 2021 09:20:01 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: Using indexUnchanged with nbtree" }, { "msg_contents": "On Fri, 1 Oct 2021 at 15:20, Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n\n> This has been stalled since July, and based on Peter's comment i feel we\n> should mark this as RwF. Which i'm doing now.\n>\n> Please feel free to resubmit for Next Commitfest.\n\nAgreed, thank you Jaime.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 1 Oct 2021 15:21:40 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Using indexUnchanged with nbtree" } ]
[ { "msg_contents": "New chapter for Hash Indexes, designed to help users understand how\nthey work and when to use them.\n\nMostly newly written, but a few paras lifted from README when they were helpful.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Mon, 21 Jun 2021 14:08:12 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Doc chapter for Hash Indexes" }, { "msg_contents": "On Mon, Jun 21, 2021 at 02:08:12PM +0100, Simon Riggs wrote:\n> New chapter for Hash Indexes, designed to help users understand how\n> they work and when to use them.\n> \n> Mostly newly written, but a few paras lifted from README when they were helpful.\n\n+ <para>\n+ PostgreSQL includes an implementation of persistent on-disk hash indexes,\n+ which are now fully crash recoverable. Any data type can be indexed by a\n\nI don't see any need to mention that they're \"now\" crash safe.\n\n+ Each hash index tuple stores just the 4-byte hash value, not the actual\n+ column value. As a result, hash indexes may be much smaller than B-trees\n+ when indexing longer data items such as UUIDs, URLs etc.. The absence of\n\ncomma:\nURLs, etc.\n\n+ the column value also makes all hash index scans lossy. Hash indexes may\n+ take part in bitmap index scans and backward scans.\n\nIsn't it more correct to say that it must use a bitmap scan?\n\n+ through the tree until the leaf page is found. In tables with millions\n+ of rows this descent can increase access time to data. The equivalent\n\nrows comma\n\n+ that hash value. When scanning a hash bucket during queries we need to\n\nqueries comma\n\n+ <para>\n+ As a result of the overflow cases, we can say that hash indexes are\n+ most suitable for unique, nearly unique data or data with a low number\n+ of rows per hash bucket will be suitable for hash indexes. 
One\n\nThe beginning and end of the sentence duplicate \"suitable\".\n\n+ Each row in the table indexed is represented by a single index tuple in\n+ the hash index. Hash index tuples are stored in the bucket pages, and if\n+ they exist, the overflow pages. \n\n\"the overflow pages\" didn't sound right, but I was confused by the comma. \nI think it should say \".. in bucket pages and overflow pages, if any.\"\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 21 Jun 2021 17:54:51 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Doc chapter for Hash Indexes" }, { "msg_contents": "On Tue, Jun 22, 2021 at 4:25 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Jun 21, 2021 at 02:08:12PM +0100, Simon Riggs wrote:\n> > New chapter for Hash Indexes, designed to help users understand how\n> > they work and when to use them.\n> >\n> > Mostly newly written, but a few paras lifted from README when they were helpful.\n>\n>\n..\n> + the column value also makes all hash index scans lossy. Hash indexes may\n> + take part in bitmap index scans and backward scans.\n>\n> Isn't it more correct to say that it must use a bitmap scan?\n>\n\nWhy? 
Hash indexes do support regular index scan.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 22 Jun 2021 10:36:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Doc chapter for Hash Indexes" }, { "msg_contents": "On Mon, Jun 21, 2021 at 6:38 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> New chapter for Hash Indexes, designed to help users understand how\n> they work and when to use them.\n>\n> Mostly newly written, but a few paras lifted from README when they were helpful.\n>\n\nFew comments\n==============\n1.\n+ Hash indexes are best optimized for SELECTs and UPDATEs using equality\n+ scans on larger tables.\n\nIs there a reason to mention Selects and Updates but not Deletes?\n\n2.\n+ Like B-Trees, hash indexes perform simple index tuple deletion. This\n+ is a deferred maintenance operation that deletes index tuples that are\n+ known to be safe to delete (those whose item identifier's LP_DEAD bit\n+ is already set). This is performed speculatively upon each insert,\n+ though may not succeed if the page is pinned by another backend.\n\nIt is not very clear to me when we perform the simple index tuple\ndeletion from the above sentence. We perform it when there is no space\nto accommodate a new tuple on the bucket page and as a result, we\nmight need to create an overflow page. Basically, I am not sure\nsaying: \"This is performed speculatively upon each insert ..\" is\nhelpful.\n\n3.\n+ incrementally expanded. When a new bucket is to be added to the index,\n+ exactly one existing bucket will need to be \"split\", with some of its\n+ tuples being transferred to the new bucket according to the updated\n+ key-to-bucket-number mapping. 
This is essentially the same hash table\n\nIn most places, the patch has used a single space after the full stop\nbut at some places like above, it has used two spaces after full stop.\nI think it is better to be consistent.\n\n4.\n This is essentially the same hash table\n+ management technique embodied in src/backend/utils/hash/dynahash.c for\n+ in-memory hash tables used within PostgreSQL internals.\n\nI am not sure if there is a need to mention this in the user-facing\ndoc. I think README is a better place for this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 22 Jun 2021 11:45:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Doc chapter for Hash Indexes" }, { "msg_contents": "On Tue, Jun 22, 2021 at 7:15 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jun 21, 2021 at 6:38 PM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> >\n> > New chapter for Hash Indexes, designed to help users understand how\n> > they work and when to use them.\n> >\n> > Mostly newly written, but a few paras lifted from README when they were helpful.\n> >\n>\n> Few comments\n> ==============\n> 1.\n> + Hash indexes are best optimized for SELECTs and UPDATEs using equality\n> + scans on larger tables.\n>\n> Is there a reason to mention Selects and Updates but not Deletes?\n\nDeletes decrease the number of rows, so must eventually be matched with inserts.\nSo deletes imply inserts.\n\nPerhaps it should say \"update-heavy\"\n\n> 2.\n> + Like B-Trees, hash indexes perform simple index tuple deletion. This\n> + is a deferred maintenance operation that deletes index tuples that are\n> + known to be safe to delete (those whose item identifier's LP_DEAD bit\n> + is already set). This is performed speculatively upon each insert,\n> + though may not succeed if the page is pinned by another backend.\n>\n> It is not very clear to me when we perform the simple index tuple\n> deletion from the above sentence. 
We perform it when there is no space\n> to accommodate a new tuple on the bucket page and as a result, we\n> might need to create an overflow page. Basically, I am not sure\n> saying: \"This is performed speculatively upon each insert ..\" is\n> helpful.\n\nOK, thanks, will reword.\n\n> 3.\n> + incrementally expanded. When a new bucket is to be added to the index,\n> + exactly one existing bucket will need to be \"split\", with some of its\n> + tuples being transferred to the new bucket according to the updated\n> + key-to-bucket-number mapping. This is essentially the same hash table\n>\n> In most places, the patch has used a single space after the full stop\n> but at some places like above, it has used two spaces after full stop.\n> I think it is better to be consistent.\n\nOK\n\n> 4.\n> This is essentially the same hash table\n> + management technique embodied in src/backend/utils/hash/dynahash.c for\n> + in-memory hash tables used within PostgreSQL internals.\n>\n> I am not sure if there is a need to mention this in the user-facing\n> doc. I think README is a better place for this.\n\nOK, will remove. Thanks\n\n\nI've reworded most things from both Amit and Justin; thanks for your reviews.\n\nI attach both clean and compare versions.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Tue, 22 Jun 2021 10:00:51 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Doc chapter for Hash Indexes" }, { "msg_contents": "On Tue, Jun 22, 2021 at 2:31 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> I attach both clean and compare versions.\n>\n\nDo we want to hold this work for PG15 or commit in PG14 and backpatch\nit till v10 where we have made hash indexes crash-safe? 
I would vote\nfor committing in PG14 and backpatch it till v10, however, I am fine\nif we want to commit just to PG14 or PG15.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 23 Jun 2021 09:42:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Doc chapter for Hash Indexes" }, { "msg_contents": "On Wed, Jun 23, 2021 at 5:12 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 22, 2021 at 2:31 PM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> >\n> > I attach both clean and compare versions.\n> >\n>\n> Do we want to hold this work for PG15 or commit in PG14 and backpatch\n> it till v10 where we have made hash indexes crash-safe? I would vote\n> for committing in PG14 and backpatch it till v10, however, I am fine\n> if we want to commit just to PG14 or PG15.\n\nBackpatch makes sense to me, but since not everyone will be reading\nthis thread, I would look towards PG15 only first. We may yet pick up\nadditional corrections or additions before a backpatch, if that is\nagreed.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 23 Jun 2021 12:56:51 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Doc chapter for Hash Indexes" }, { "msg_contents": "aOn Wed, Jun 23, 2021 at 12:56:51PM +0100, Simon Riggs wrote:\n> On Wed, Jun 23, 2021 at 5:12 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jun 22, 2021 at 2:31 PM Simon Riggs\n> > <simon.riggs@enterprisedb.com> wrote:\n> > >\n> > > I attach both clean and compare versions.\n> > >\n> >\n> > Do we want to hold this work for PG15 or commit in PG14 and backpatch\n> > it till v10 where we have made hash indexes crash-safe? 
I would vote\n> > for committing in PG14 and backpatch it till v10, however, I am fine\n> > if we want to commit just to PG14 or PG15.\n> \n> Backpatch makes sense to me, but since not everyone will be reading\n> this thread, I would look towards PG15 only first. We may yet pick up\n> additional corrections or additions before a backpatch, if that is\n> agreed.\n\nYeah, I think backpatching makes sense.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 24 Jun 2021 15:59:35 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Doc chapter for Hash Indexes" }, { "msg_contents": "On Fri, Jun 25, 2021 at 1:29 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> aOn Wed, Jun 23, 2021 at 12:56:51PM +0100, Simon Riggs wrote:\n> > On Wed, Jun 23, 2021 at 5:12 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Jun 22, 2021 at 2:31 PM Simon Riggs\n> > > <simon.riggs@enterprisedb.com> wrote:\n> > > >\n> > > > I attach both clean and compare versions.\n> > > >\n> > >\n> > > Do we want to hold this work for PG15 or commit in PG14 and backpatch\n> > > it till v10 where we have made hash indexes crash-safe? I would vote\n> > > for committing in PG14 and backpatch it till v10, however, I am fine\n> > > if we want to commit just to PG14 or PG15.\n> >\n> > Backpatch makes sense to me, but since not everyone will be reading\n> > this thread, I would look towards PG15 only first. We may yet pick up\n> > additional corrections or additions before a backpatch, if that is\n> > agreed.\n>\n> Yeah, I think backpatching makes sense.\n>\n\nI checked and found that there are two commits (7c75ef5715 and\n22c5e73562) in the hash index code in PG-11 which might have impacted\nwhat we write in the documentation. However, AFAICS, nothing proposed\nin the patch would change due to those commits. 
Even, if we don't want\nto back patch, is there any harm in committing this to PG-14?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 25 Jun 2021 08:47:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Doc chapter for Hash Indexes" }, { "msg_contents": "On Fri, Jun 25, 2021 at 4:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jun 25, 2021 at 1:29 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > aOn Wed, Jun 23, 2021 at 12:56:51PM +0100, Simon Riggs wrote:\n> > > On Wed, Jun 23, 2021 at 5:12 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Tue, Jun 22, 2021 at 2:31 PM Simon Riggs\n> > > > <simon.riggs@enterprisedb.com> wrote:\n> > > > >\n> > > > > I attach both clean and compare versions.\n> > > > >\n> > > >\n> > > > Do we want to hold this work for PG15 or commit in PG14 and backpatch\n> > > > it till v10 where we have made hash indexes crash-safe? I would vote\n> > > > for committing in PG14 and backpatch it till v10, however, I am fine\n> > > > if we want to commit just to PG14 or PG15.\n> > >\n> > > Backpatch makes sense to me, but since not everyone will be reading\n> > > this thread, I would look towards PG15 only first. We may yet pick up\n> > > additional corrections or additions before a backpatch, if that is\n> > > agreed.\n> >\n> > Yeah, I think backpatching makes sense.\n> >\n>\n> I checked and found that there are two commits (7c75ef5715 and\n> 22c5e73562) in the hash index code in PG-11 which might have impacted\n> what we write in the documentation. However, AFAICS, nothing proposed\n> in the patch would change due to those commits. 
Even, if we don't want\n> to back patch, is there any harm in committing this to PG-14?\n\nI've reviewed those commits and the related code, so I agree.\n\nAs a result, I've tweaked the wording around VACUUM slightly.\n\nClean and compare patches attached.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Fri, 25 Jun 2021 10:41:07 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Doc chapter for Hash Indexes" }, { "msg_contents": "On Fri, Jun 25, 2021 at 3:11 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> On Fri, Jun 25, 2021 at 4:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jun 25, 2021 at 1:29 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > >\n> > > aOn Wed, Jun 23, 2021 at 12:56:51PM +0100, Simon Riggs wrote:\n> > > > On Wed, Jun 23, 2021 at 5:12 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Tue, Jun 22, 2021 at 2:31 PM Simon Riggs\n> > > > > <simon.riggs@enterprisedb.com> wrote:\n> > > > > >\n> > > > > > I attach both clean and compare versions.\n> > > > > >\n> > > > >\n> > > > > Do we want to hold this work for PG15 or commit in PG14 and backpatch\n> > > > > it till v10 where we have made hash indexes crash-safe? I would vote\n> > > > > for committing in PG14 and backpatch it till v10, however, I am fine\n> > > > > if we want to commit just to PG14 or PG15.\n> > > >\n> > > > Backpatch makes sense to me, but since not everyone will be reading\n> > > > this thread, I would look towards PG15 only first. We may yet pick up\n> > > > additional corrections or additions before a backpatch, if that is\n> > > > agreed.\n> > >\n> > > Yeah, I think backpatching makes sense.\n> > >\n> >\n> > I checked and found that there are two commits (7c75ef5715 and\n> > 22c5e73562) in the hash index code in PG-11 which might have impacted\n> > what we write in the documentation. 
However, AFAICS, nothing proposed\n> > in the patch would change due to those commits. Even, if we don't want\n> > to back patch, is there any harm in committing this to PG-14?\n>\n> I've reviewed those commits and the related code, so I agree.\n>\n\nDo you agree to just commit this to PG-14 or to commit in PG-14 and\nbackpatch till PG-10?\n\n> As a result, I've tweaked the wording around VACUUM slightly.\n>\n\nThanks, the changes look good to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 26 Jun 2021 15:43:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Doc chapter for Hash Indexes" }, { "msg_contents": "On Sat, Jun 26, 2021 at 3:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jun 25, 2021 at 3:11 PM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> >\n> > On Fri, Jun 25, 2021 at 4:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Jun 25, 2021 at 1:29 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > > >\n> > > > aOn Wed, Jun 23, 2021 at 12:56:51PM +0100, Simon Riggs wrote:\n> > > > > On Wed, Jun 23, 2021 at 5:12 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > > On Tue, Jun 22, 2021 at 2:31 PM Simon Riggs\n> > > > > > <simon.riggs@enterprisedb.com> wrote:\n> > > > > > >\n> > > > > > > I attach both clean and compare versions.\n> > > > > > >\n> > > > > >\n> > > > > > Do we want to hold this work for PG15 or commit in PG14 and backpatch\n> > > > > > it till v10 where we have made hash indexes crash-safe? I would vote\n> > > > > > for committing in PG14 and backpatch it till v10, however, I am fine\n> > > > > > if we want to commit just to PG14 or PG15.\n> > > > >\n> > > > > Backpatch makes sense to me, but since not everyone will be reading\n> > > > > this thread, I would look towards PG15 only first. 
We may yet pick up\n> > > > > additional corrections or additions before a backpatch, if that is\n> > > > > agreed.\n> > > >\n> > > > Yeah, I think backpatching makes sense.\n> > > >\n> > >\n> > > I checked and found that there are two commits (7c75ef5715 and\n> > > 22c5e73562) in the hash index code in PG-11 which might have impacted\n> > > what we write in the documentation. However, AFAICS, nothing proposed\n> > > in the patch would change due to those commits. Even, if we don't want\n> > > to back patch, is there any harm in committing this to PG-14?\n> >\n> > I've reviewed those commits and the related code, so I agree.\n> >\n>\n> Do you agree to just commit this to PG-14 or to commit in PG-14 and\n> backpatch till PG-10?\n>\n\nI am planning to go through the patch once again and would like to\ncommit and backpatch till v10 in a day to two unless someone thinks\notherwise.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 29 Jun 2021 14:21:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Doc chapter for Hash Indexes" }, { "msg_contents": "On Tue, Jun 29, 2021 at 2:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Jun 26, 2021 at 3:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> I am planning to go through the patch once again and would like to\n> commit and backpatch till v10 in a day to two unless someone thinks\n> otherwise.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 5 Jul 2021 15:21:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Doc chapter for Hash Indexes" } ]
[ { "msg_contents": "Since https://github.com/postgres/postgres/commit/ea53100d5 (or Postgres \n12.0) the shipped pkg-config file is broken for statically linking libpq \nbecause libpgcommon and libpgport are missing. This patch adds those two \nmissing private dependencies.", "msg_date": "Mon, 21 Jun 2021 13:47:38 +0000", "msg_from": "Filip Gospodinov <f@gospodinov.ch>", "msg_from_op": true, "msg_subject": "Fix pkg-config file for static linking" }, { "msg_contents": "On 21.06.21 15:47, Filip Gospodinov wrote:\n> -PKG_CONFIG_REQUIRES_PRIVATE = libssl libcrypto\n> +PKG_CONFIG_REQUIRES_PRIVATE = libpgcommon libpgport libssl libcrypto\n\nThis doesn't work.\n\nThis patch adds libpgcommon and libpgport to Requires.private. But they \nare not pkg-config names but library names, so they should be added to \nLibs.private.\n\n\n", "msg_date": "Tue, 6 Jul 2021 15:13:42 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Fix pkg-config file for static linking" }, { "msg_contents": "On 06.07.21 15:13, Peter Eisentraut wrote:\n> This doesn't work.\n> \n> This patch adds libpgcommon and libpgport to Requires.private.  But they \n> are not pkg-config names but library names, so they should be added to \n> Libs.private.\n\nThen, I must admit that I have misunderstood this patch at first\nhttps://github.com/postgres/postgres/commit/beff361bc1edc24ee5f8b2073a1e5e4c92ea66eb \n.\n\nMy impression was that PKG_CONFIG_REQUIRES_PRIVATE ends up in \nLibs.private because of this line\nhttps://github.com/postgres/postgres/blob/d9809bf8694c17e05537c5dd96cde3e67c02d52a/src/Makefile.shlib#L403 \n.\n\nAfter taking a second look, I've noticed that \nPKG_CONFIG_REQUIRES_PRIVATE is *filtered out*. 
But unfortunately, this \ncurrently doesn't work as intended because PKG_CONFIG_REQUIRES_PRIVATE \nseems to be unset in Makefile.shlib which leaves Requires.private empty \nand doesn't filter out -lcrypto and -lssl for Libs.private.\nThat must be also the reason why I first believed that \nPKG_CONFIG_REQUIRES_PRIVATE is used to populate Libs.private.\n\nAnyway, this issue is orthogonal to my original patch. I'm proposing to \nhardcode -lpgcommon and -lpgport in Libs.private instead. Patch is attached.", "msg_date": "Tue, 20 Jul 2021 20:04:02 +0000", "msg_from": "Filip Gospodinov <f@gospodinov.ch>", "msg_from_op": true, "msg_subject": "Re: Fix pkg-config file for static linking" }, { "msg_contents": "On 20.07.21 22:04, Filip Gospodinov wrote:\n> Anyway, this issue is orthogonal to my original patch. I'm proposing to \n> hardcode -lpgcommon and -lpgport in Libs.private instead. Patch is \n> attached.\n\nMakes sense. I think we could do it without hardcoding those library \nnames, as in the attached patch. But it comes out to the same result \nAFAICT.", "msg_date": "Thu, 2 Sep 2021 13:07:10 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Fix pkg-config file for static linking" }, { "msg_contents": "On 02.09.21 13:07, Peter Eisentraut wrote:\n> On 20.07.21 22:04, Filip Gospodinov wrote:\n>> Anyway, this issue is orthogonal to my original patch. I'm proposing \n>> to hardcode -lpgcommon and -lpgport in Libs.private instead. Patch is \n>> attached.\n> \n> Makes sense.  I think we could do it without hardcoding those library \n> names, as in the attached patch.  But it comes out to the same result \n> AFAICT.\n\nThat is my impression too. Enumerating them in a variable would just \nlead to an indirection. 
Please let me know if you'd still prefer a \nsolution without hardcoding.\n\n\n", "msg_date": "Thu, 2 Sep 2021 13:08:34 +0000", "msg_from": "Filip Gospodinov <f@gospodinov.ch>", "msg_from_op": true, "msg_subject": "Re: Fix pkg-config file for static linking" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Makes sense. I think we could do it without hardcoding those library \n> names, as in the attached patch. But it comes out to the same result \n> AFAICT.\n\nThis has been pushed, but the CF entry is still open, which is\nmaking the cfbot unhappy. Were you leaving it open pending\npushing to back branches as well? I'm not sure what the point\nof waiting is --- the buildfarm isn't going to exercise the\ntroublesome scenario.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Sep 2021 15:57:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix pkg-config file for static linking" }, { "msg_contents": "On 05.09.21 21:57, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> Makes sense. I think we could do it without hardcoding those library\n>> names, as in the attached patch. But it comes out to the same result\n>> AFAICT.\n> \n> This has been pushed, but the CF entry is still open, which is\n> making the cfbot unhappy. Were you leaving it open pending\n> pushing to back branches as well? I'm not sure what the point\n> of waiting is --- the buildfarm isn't going to exercise the\n> troublesome scenario.\n\nI noticed another fix that was required and didn't get to it until now. \nIt's all done and backpatched now. CF entry is closed.\n\n\n", "msg_date": "Mon, 6 Sep 2021 10:30:06 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Fix pkg-config file for static linking" } ]